Hey.
Yeah, you. The one trying to create that new EC2 instance.
I see you’re trying to use a launch template with that. Did you remember to specify the correct version number?
If you don’t, EC2 will use the launch template’s default version, which may not be what you want. So make sure you’ve got the version number properly set, just in case you’re not interested in wasting 30 minutes of your day.
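For illustration, pinning the version explicitly with the AWS CLI might look like this (the template name and version number here are placeholders):

```shell
aws ec2 run-instances \
  --count 1 \
  --launch-template 'LaunchTemplateName=my-template,Version=3'
```

The Version field also accepts the special values $Latest and $Default, but being explicit is the whole point here.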
If you log into AWS and try to delete a Secrets Manager secret, you’ll be told that the secret won’t be removed straight away. Instead, you’ll be asked to specify a “recovery window” in which the deleted secret can be recovered. The shortest recovery window you can specify is 7 days. Until that time, you cannot use the secret at all: you cannot read it, and you cannot modify it.
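With the AWS CLI, the deletion call looks something like this (the secret name is a placeholder):

```shell
aws secretsmanager delete-secret \
  --secret-id my-app/db-password \
  --recovery-window-in-days 7
```

Until the window elapses, the deletion can be undone with the restore-secret subcommand.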
Small one today. I’ve got a Gitlab CI/CD file that imports a much larger one defining a standard set of jobs for a build. But I needed to disable one of those jobs as it didn’t apply to the project I’m working on.
So after trying a few things, the approach I found that worked was to “override” the job in my Gitlab CI/CD file and configure its rules so that it never executes.
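A sketch of what that override might look like (the include target and job name are invented for this example):

```yaml
include:
  - project: my-group/ci-templates
    file: /templates/standard-build.yml

# Redeclaring the job here overrides the included definition;
# a single `when: never` rule stops it from ever being scheduled.
unwanted-job:
  rules:
    - when: never
```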
This is going to be a bit of a rambling post, as I try to find the best way to configure a long-running service that’s managed with Systemd. The motivation here is to install the service on a Linux machine, then configure it to run using Systemd such that:
- It launches on startup
- It restarts when it fails with a non-zero exit code
- It runs as a non-root user
- It can be configured via environment variables loaded from an external file

We’ll go through each step piece by piece, as each one builds on the last, and a typical service may not need all of these properties.
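As a sketch of where we’ll end up, a unit file with all four properties might look something like this (the service name, binary path, and user are placeholders):

```ini
# /etc/systemd/system/myservice.service
[Unit]
Description=My long-running service
After=network.target

[Service]
ExecStart=/usr/local/bin/myservice
Restart=on-failure                   # restart on non-zero exit codes
User=myservice                       # run as a non-root user
EnvironmentFile=/etc/myservice/env   # KEY=value pairs loaded into the environment

[Install]
WantedBy=multi-user.target           # start on boot, once enabled via systemctl enable
```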
Spent around two hours trying to diagnose why one of our containers couldn’t read secrets from AWS Secrets Manager. I was seeing errors of the following form when I was trying to call GetSecretValue:
operation error Secrets Manager: GetSecretValue, get identity: get credentials: failed to refresh cached credentials, not found, Signing

Turned out to be a bug in version v1.29.2 of the Secrets Manager client: github.com/aws/aws-sdk-go-v2/service/secretsmanager. Downgrading the client version to v1.
Go has run tests for separate packages in parallel by default for a while now. But if you can’t run your tests in parallel, what should you do?
Most people probably know of the -p 1 option to disable parallel execution of tests:
go test -p 1 .

But this only works if you have access to the command itself. If you’ve got the go test command in a Makefile running in a CI/CD environment that’s difficult to change, this option may not be available to you (it wasn’t available to me when I faced this problem yesterday).
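One workaround I’m aware of (which may or may not be where this post lands): the GOFLAGS environment variable applies flags to every go invocation, and an environment variable is often easier to set in a CI/CD pipeline than the Makefile itself:

```shell
GOFLAGS='-p=1' make test
```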
Subtitle: if you know what these four words mean, then this post is for you
This is a quick one as I’m not really in a blogging mood. But I couldn’t for the life of me find a way to decode a PostgreSQL bytea value (which is what PostgreSQL uses for BLOB values) using pgx and sqlc so that it would match what I actually stored.
The documentation of pgx and sqlc, along with various web searches, yielded nothing.
I’ve been writing a bunch of technical documents in Obsidian recently: complete with code-blocks and Mermaid diagrams. And I must say, it’s been a pretty good writing experience. Certainly much nicer than writing in Confluence.
But when I made PDF exports of these documents, I found a few things that could be improved. Namely:
- Content being split across pages, when moving it to the next page would have avoided introducing a page-break.
- Mermaid.JS images being too wide, and running off the right side of the page.
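Both can usually be tackled with print-media CSS. Here’s a sketch of an Obsidian CSS snippet, with the caveat that the selectors are assumptions on my part and may need adjusting for your Obsidian version:

```css
@media print {
  pre, .mermaid {
    break-inside: avoid;          /* keep blocks together on one page */
  }
  .mermaid svg {
    max-width: 100% !important;   /* stop wide diagrams running off the page */
    height: auto;
  }
}
```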
The same person that taught me about LATERAL SQL queries also showed me the BIGSERIAL type, which is an automatically incrementing integer column.
I don’t know why I didn’t see this before. The AUTO_INCREMENT option in MySQL was one of the things I missed when I started using PostgreSQL. I guess I just assumed that one had to explicitly create a sequence and include calls to nextval() when inserting things into a PostgreSQL table, and that was “just how it was done.”
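A quick sketch of how this looks in practice (table and column names invented):

```sql
CREATE TABLE items (
    id   BIGSERIAL PRIMARY KEY,  -- creates and wires up a sequence for you
    name TEXT NOT NULL
);

-- No explicit nextval() needed; the id is assigned automatically.
INSERT INTO items (name) VALUES ('first thing');
```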
You’ve got a bunch of TypeScript files, with both .ts and .tsx extensions. You need to format them using prettier. Your code is managed with source control, so you can back out of the changes whenever you need to.
Here’s a quick command line invocation to do it:
npx prettier -w '**/*.ts' '**/*.tsx'

If there are any directories you’d like to ignore, list them in the arguments prefixed with a !:
npx prettier -w '**/*.
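For illustration, a complete invocation that skips a build-output directory might look like this (the dist directory is just an example):

```shell
npx prettier -w '**/*.ts' '**/*.tsx' '!dist/**'
```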
Does getting a count in DynamoDB return the total across the entire table, or just the total for the current page?
While a brief search online didn’t give any conclusive result, it did show that it’s possible to get a count for a scan or a query without fetching the items themselves, which is a good thing (honestly, I was not expecting this to be possible). This is done by setting the Select parameter to COUNT:
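With the AWS CLI, that looks something like this (the table name is a placeholder); the response contains Count and ScannedCount, but no Items:

```shell
aws dynamodb scan --table-name my-table --select COUNT
```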
One of the craziest ideas I’ve had recently was to move all my code from Github to a self-hosted SCM system. The impetus for this was to have import paths with a custom domain name for all my Go packages, rather than have them all start with github.com/lmika/(something). Fortunately, this proved to be unnecessary, as Go does allow one to customise the import path of packages hosted elsewhere.
This area of the docs has all the details, but here’s the process in short.
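The core mechanism is a go-import meta tag served from the custom domain, pointing the Go toolchain at the real repository. A sketch (the domain and repository here are placeholders):

```html
<!-- Served at https://example.com/mypkg?go-get=1 -->
<meta name="go-import" content="example.com/mypkg git https://github.com/lmika/mypkg">
```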
Someone shared with me the LATERAL join type supported by PostgreSQL. He described it as a “for each” built into SQL:
When a FROM item contains LATERAL cross-references, evaluation proceeds as follows: for each row of the FROM item providing the cross-referenced column(s), or set of rows of multiple FROM items providing the columns, the LATERAL item is evaluated using that row or row set’s values of the columns. The resulting row(s) are joined as usual with the rows they were computed from.
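As a concrete sketch of that “for each” behaviour (schema invented for illustration): for each customer, grab their three most recent orders:

```sql
SELECT c.name, recent.created_at, recent.total
FROM customers c
CROSS JOIN LATERAL (
    SELECT o.created_at, o.total
    FROM orders o
    WHERE o.customer_id = c.id   -- references the outer row: this is the "for each"
    ORDER BY o.created_at DESC
    LIMIT 3
) AS recent;
```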
What changes can you make to a gRPC schema that’s already in use?
Honestly, I always forget these rules, or get paranoid when I’m touching messages that are already deployed, so it was time to find out what these rules are. I came across this Medium post by Toan Hoang which listed them, and I thought I’d make a note of them here.
Here they are in brief.
Non-Breaking Changes

The following changes are completely safe, and will not break existing clients or servers using an earlier version of the schema:
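I won’t reproduce the full list here, but one change I’m confident belongs in it is adding a new field with a previously unused tag number, which older readers simply ignore:

```proto
message User {
  string name  = 1;
  string email = 2;
  string phone = 3;  // added later; non-breaking, as tag 3 was never used before
}
```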
You’re working on a Go project (yay! 🎉) and you need to write a unit test. You decide to go with a table-driven approach. What names should you use for your variables?
For a long while, I was writing tests using names like these:
func TestSomething(t *testing.T) {
	scenarios := []struct {
		description string
		arg         string
		expected    string
	}{
		{description: "Thing 1", arg: "this", expected: "that"},
		// more cases go here
	}
	for _, scenario := range scenarios {
		t.Run(scenario.description, func(t *testing.T) {
			// test body goes here
		})
	}
}
This will list all Docker containers, and delete each one regardless of whether it’s running or not. Good if you use Docker for dev containers and need to reset your state.
docker ps -a --format '{{.ID}}' | xargs -I{} docker rm -v {}

Another way to do this (thanks to @sonicrocketman):
docker ps -aq | xargs -I{} docker rm -vf {}
If you’ve got a slow internet connection and are trying to pull a large collection of images, you may encounter retries causing the entire pull to fail, not to mention multiple concurrent downloads sucking up all your bandwidth.
I found that the following Docker daemon configuration changes helped:
- Set max-concurrent-downloads to 1
- Bump max-download-attempts to something relatively high, like 20

The resulting JSON would look a little like this:
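Assuming the standard config location (/etc/docker/daemon.json on Linux), those two settings together would be:

```json
{
  "max-concurrent-downloads": 1,
  "max-download-attempts": 20
}
```

Restart the Docker daemon after changing it for the settings to take effect.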
Even though I’ve been using MacOS for a while, there are certain things that I still remember doing only in Linux. It probably got burned into my mind when I was under pressure to debug something at work. Whatever the reason, I can never remember how to do the equivalent in MacOS.
So I’m noting them down here.
Listing Open TCP Ports

Equivalent to netstat -nap:

lsof -nP -i4TCP:$PORT | grep LISTEN

Managing Daemons

The tool to manage running services and daemons is launchctl.
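A few launchctl invocations I tend to reach for (the service label and plist path here are placeholders):

```shell
launchctl list | grep com.example.myservice   # check whether it's loaded
launchctl load -w ~/Library/LaunchAgents/com.example.myservice.plist
launchctl unload ~/Library/LaunchAgents/com.example.myservice.plist
```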
There’s a small tool called csvtk which can be used to do various things with CSV files on the terminal. It’s… fine. If I had my way, I would’ve made different decisions. But one thing going for it is that it exists, and my fantasy tool does not, so it’ll do for now.
Anyway, here’s a collection of common operations that this tool supports.
Grep

The grep subcommand requires both the field, using -f, and the pattern, using -p:
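For example, to keep only the rows whose name field matches a pattern (file and field names invented):

```shell
csvtk grep -f name -p 'Alice' people.csv
```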
Here are some random notes about working with Buffalo (may it rest in peace).
Cleaning Up Assets In Buffalo

Buffalo doesn’t clean up old versions of bundled JavaScript files. This means that the public/assets directory can grow to gigabytes in size, eventually reaching the point where Go will simply refuse to embed that much data.
The tell-tale sign is this error message when you try to run the application:
too much data in section SDWARFSECT (over 2e+09 bytes)

If you see that, deleting public/assets should solve your problem.
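Since the bundles should be regenerated on the next build, one way to do that is to clear the whole directory (run from the project root, and assuming nothing else of yours lives in there):

```shell
rm -rf public/assets
```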