How To REALLY Use Cron To Run Scheduled Jobs
Some techniques for using cron in the modern era.
Using class_name and is to detect if a body entering an Area2D is of a particular scene.
The sort command allows one to specify which field to sort by. These fields need to be separated by a character, such as a colon or comma, which is specified using the -t option. To select the actual field to sort by, specify the field position using the -k option, which is one-based:

$ cat eg.csv
111, plank, hello
222, apple, world
333, kilo, example
444, delta, values
$ sort -t, -k2 eg.csv
222, apple, world
444, delta, values
333, kilo, example
111, plank, hello
One other JQ thing: there’s an operator which returns the filename of the file currently being processed by JQ, called input_filename. This can be used alongside the concat operator to reproduce something similar to grep -H:

$ jq -r 'input_filename + ": " + .payload.data' *.json
message-0000.json: id_111
message-0001.json: id_234
message-0002.json: id_512
Yesterday, I discovered that JQ has a recursive descent operator — .. — which allows one to go through each field of a JSON structure. When used, it will emit every value of a JSON structure, doing so in pre-order (i.e. any nested objects or arrays will be written to the output before their contents are):

$ cat eg.json
{ "name": "hello", "age": 123, "phone": {"home": 1, "mobile": 2} }
$ jq '..' eg.json
{
  "name": "hello",
  "age": 123,
  "phone": {
    "home": 1,
    "mobile": 2
  }
}
"hello"
123
{
  "home": 1,
  "mobile": 2
}
1
2
For anyone working with gRPC or Protobuf who needs to decode the binary message format, but doesn’t have or doesn’t want to find the actual schema, this invocation of protoc works great:

protoc --decode_raw

It takes a binary Protobuf message from STDIN and produces a text-based output showing the message structure in a human-readable format. You could say it almost looks like JSON, but since there’s no type information, there are no field names, and all you have are field numbers and values (so don’t throw away your schemas just yet):
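For illustration, a sketch of what that looks like (message.bin is a hypothetical file holding a serialised message, and the fields shown are made up):

$ protoc --decode_raw < message.bin
1: "id_111"
2: 123
3 {
  1: "hello"
}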
If you’re scanning or querying DynamoDB with a projection, you may want to know that if you plan to get the LastEvaluatedKey for paging, you will want to make sure the primary key attributes are included in the projection. Leaving them out may produce an error much like the following:
dynamo: failed to infer LastEvaluatedKey in scan: dynamo: can't determine LastEvaluatedKey: primary key attribute is missing from result: pk; add it to your projection or use SearchLimit instead of Limit
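Since that error comes from the guregu/dynamo client, here’s a minimal sketch of a scan that pages correctly. I’m assuming the v2 API here, and the table name, attribute names, and page size are made up; "pk" stands in for the table’s partition key:

// Project the partition key along with what you actually need;
// without it, the client can't infer the LastEvaluatedKey from
// the last item of the page.
var page []Job
lek, err := db.Table("jobs").
    Scan().
    Project("pk", "job_status").
    Limit(25).
    AllWithLastEvaluatedKey(ctx, &page)
if err != nil {
    return err
}

// Feed lek back via StartFrom() on the next scan to fetch the
// following page.
_ = lek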
For much of my use of Fiber, whenever I needed to tack on a value that’s scoped to the request, I used the standard approach in Go whenever a Context is available: define a new key type and call context.WithValue():

type userKeyType struct{}

var userKey = userKeyType{}

func setUser(c *fiber.Ctx) error {
    usr, err := fetchUserFromSession()
    if err != nil {
        return err
    }
    newCtx := context.WithValue(c.UserContext(), userKey, usr)
    c.SetUserContext(newCtx)
    return c.Next()
}
Learnt a very important thing about Stimulus outlets this evening: the outlet name must match the controller name of the outlet target. From the docs:

The outlet identifier in the host controller must be the same as the target controller’s identifier.

That is, if you have a controller with the name playfield:

<div data-controller="playfield" id="myElement">...</div>

Then any outlet that wants to use this controller also needs to be called playfield:
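Something like this sketch, where the host controller name game is made up, and the attribute follows Stimulus’s data-[host]-[outlet]-outlet convention:

<div data-controller="game" data-game-playfield-outlet="#myElement">...</div>

with the game controller declaring static outlets = [ "playfield" ].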
Fair warning: this is probably an approach that’ll only work for me, since the services I’m working on upsert NATS Jetstream consumers on startup. We’re using NATS Jetstream to dispatch jobs to a pool of workers. Jobs come in via the job-inbox stream, and the workers publish results to a job-results stream. These streams are created with the WorkQueue policy and do not have a dead-letter queue configured. The job-results stream has a durable consumer, configured with a deliver policy of all.
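Since the approach leans on upserting consumers at startup, here’s a minimal sketch of what that can look like with the github.com/nats-io/nats.go/jetstream package. The stream name follows the post; the subject and durable name are made up:

package main

import (
    "context"

    "github.com/nats-io/nats.go"
    "github.com/nats-io/nats.go/jetstream"
)

func main() {
    ctx := context.Background()

    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        panic(err)
    }
    js, err := jetstream.New(nc)
    if err != nil {
        panic(err)
    }

    // Idempotently create the work-queue stream.
    stream, err := js.CreateOrUpdateStream(ctx, jetstream.StreamConfig{
        Name:      "job-results",
        Subjects:  []string{"job.results.>"},
        Retention: jetstream.WorkQueuePolicy,
    })
    if err != nil {
        panic(err)
    }

    // "Upsert" the durable consumer, delivering from the start
    // of the stream.
    _, err = stream.CreateOrUpdateConsumer(ctx, jetstream.ConsumerConfig{
        Durable:       "job-results-worker",
        DeliverPolicy: jetstream.DeliverAllPolicy,
    })
    if err != nil {
        panic(err)
    }
}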
🔗 Understanding the difference between INSERT and INSERT..ON CONFLICT

Okay, I admit I was considering this. My Go-like treatment of errors as “just another value” had me wondering if I could avoid writing an INSERT with an ON CONFLICT DO UPDATE clause, and instead do this in code:

// This is bad
err := db.doInsert(data)
if errors.Is(err, sql.ErrConflict) {
    db.doUpdate(data)
}

But it felt wrong, and now I know that it would’ve been inefficient too.
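For the record, the single-statement version looks something like this (table and column names are made up):

_, err := db.ExecContext(ctx, `
    INSERT INTO data (id, value)
    VALUES ($1, $2)
    ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value`,
    id, value)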
Note to self: when choosing an instance type for a production PostgreSQL database in Amazon RDS or Amazon Aurora, don’t choose any of the ‘T’ classes. The reason is that the ‘T’ classes work on CPU credits, and if you have sustained high load for a long period of time, those credits will be consumed and your database CPU will be throttled down to 10%. Instead, consider one of the other instance classes.
Imagine, if you will, a service responsible for dispatching jobs to a pool of workers. The worker reports when a job has started, sends updates while the job is in progress, and reports when it’s finished. The dispatcher tracks this in a PostgreSQL database and forwards the completion message upstream. This message should only be sent if the job exists in the database, and should only ever be sent once (i.e. exactly once).
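One way to get that only-sent-once behaviour out of PostgreSQL is an atomic flag flip: only the caller whose UPDATE actually changes the row forwards the message. A sketch, with made-up table and column names and a hypothetical forwardCompletion helper:

res, err := db.ExecContext(ctx, `
    UPDATE jobs
    SET completion_sent = TRUE
    WHERE id = $1 AND NOT completion_sent`,
    jobID)
if err != nil {
    return err
}
if n, _ := res.RowsAffected(); n == 1 {
    // We flipped the flag, so we're the only caller that gets to
    // forward the completion message upstream.
    forwardCompletion(jobID)
}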
Heads-up for anyone using the Golang NATS client: setting up a subscription using Subscribe or QueueSubscribe will not set up a worker pool. As far as I can tell, using either Subscribe or QueueSubscribe will only set up a handler backed by a single goroutine.

What’s Your Evidence?

I set up a small experiment using a single sender and a single subscriber. The sender publishes an incrementing integer to a subject once every 100 milliseconds.
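If you do want concurrent processing, one option (a sketch, not anything built into the client; the subject name, pool size, and handle function are made up) is to have the single subscription goroutine feed a channel that a pool of workers drains:

msgs := make(chan *nats.Msg, 64)

// A fixed pool of workers draining the channel.
for i := 0; i < 4; i++ {
    go func() {
        for m := range msgs {
            handle(m)
        }
    }()
}

// The handler itself still runs on one goroutine; it only hands
// messages off to the pool.
if _, err := nc.Subscribe("jobs", func(m *nats.Msg) {
    msgs <- m
}); err != nil {
    panic(err)
}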
Some techniques with using psql.

Supplying Password Non-interactively

To supply a password in a non-interactive way, use the PGPASSWORD environment variable:

PGPASSWORD=xxx psql -U username database

Source: Stack Overflow

Describing Custom Types

These are the enum types created using CREATE TYPE my_enum AS ENUM constructs. To get the list of those types, use the \dT meta-command:

postgres=> \dT
         List of data types
 Schema |      Name       | Description
--------+-----------------+-------------
 public | my_enum         |
 public | another_enum    |
 public | even_more_enums |
(3 rows)

To actually list the enum elements, use \dT+:
I tend to be stuck in my old ways when writing POST handlers. When I accept a POST request from an HTML page, I send a 303 See Other redirect to force the browser to fetch the resulting page with a GET. This keeps the path routing relatively clean, and saves me from having multiple handlers return the same bit of HTML. This technique broke down when I started using HTMX.
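In Go’s net/http, that pattern (Post/Redirect/Get) looks something like this (the handler name and paths are made up):

func createItem(w http.ResponseWriter, r *http.Request) {
    // ... validate the form and store the new item ...

    // 303 See Other makes the browser re-request with a GET.
    http.Redirect(w, r, "/items", http.StatusSeeOther)
}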
A collection of useful operations for working with JSON fields I wish to remember. Others will be added here as I encounter them (maybe).

Selecting JSON Fields

Imagine a table with the following schema:

CREATE TABLE data (
    id INT PRIMARY KEY,
    json_props jsonb
);

INSERT INTO data (id, json_props) VALUES (1, '{"foo":"baz"}');

To select based on the value of foo in the JSON data, use the following query:

SELECT * FROM data WHERE json_props->>'foo' = 'baz';

Sources:
One operation I find myself occasionally doing with anything involving a database is “get or create”: return an object with a particular ID or unique field and, if it doesn’t exist, create it. Yesterday, in the project I’m working on now, I saw a need for this. Being backed by a PostgreSQL database, I guess I could’ve just opened a transaction, run a SELECT, and if it was empty, run an INSERT.
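There’s also the single-statement idiom, sketched below with made-up table and column names: an upsert with a no-op update, so that RETURNING hands back the row whether it was just created or already existed (a plain DO NOTHING would return zero rows on conflict).

var u User
err := db.QueryRowContext(ctx, `
    INSERT INTO users (email, name)
    VALUES ($1, $2)
    ON CONFLICT (email) DO UPDATE SET email = EXCLUDED.email
    RETURNING id, email, name`,
    email, name).Scan(&u.ID, &u.Email, &u.Name)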
I’m sure I’m not alone in thinking that OpenSSL is a bit of a dark art, what with all the terminology and strange CLI invocations and such. One may think it’d be better to document the process properly, but there’s already a lot out there that just needs to be surfaced. So, here’s a collection of helpful links for working with OpenSSL to create certificates and CSRs, work with private keys, etc.
Hey. Yeah, you. The one trying to create that new EC2 instance. I see you’re trying to use a launch template with it. Did you remember to specify the correct version number? If you don’t, EC2 will use the launch template’s default version, which may not be what you want. So make sure you’ve got the version number properly set, just in case you’re not interested in wasting 30 minutes of your day.
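With the AWS CLI, that means spelling Version out explicitly (the template ID and version number below are placeholders):

$ aws ec2 run-instances \
    --launch-template LaunchTemplateId=lt-0123456789abcdef0,Version=3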