A list of techniques for diagnosing and fixing a slowdown caused by PostgreSQL. Hallmarks that these techniques could be useful include:

  1. The database CPU is maxing out: this could be an indication that the database is overloaded, either by a large volume of requests or by a few unoptimised queries. Finding any slow-running queries is a good first step.

Diagnosis

Finding Slow Queries

The pg_stat_activity view lists the current database connections and what each one is currently doing. Details of this view can be found here. It can be used to find slow queries by becoming the postgres user and running the following query:

select * from pg_stat_activity
where (now() - pg_stat_activity.query_start) > interval '30 seconds'
and state = 'active';

Replace 30 seconds with whatever interval you consider slow.

You can then run EXPLAIN on the query itself to see how it is being executed. If the plan shows a scan over a large number of rows, it may be necessary to create an index.
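As a sketch (the orders table and customer_id column below are hypothetical), EXPLAIN prints the planner's chosen strategy without running the query:

```sql
-- Hypothetical example: check how a filter on an unindexed column runs.
EXPLAIN
SELECT * FROM orders WHERE customer_id = 42;

-- A plan line such as "Seq Scan on orders" over a large table suggests
-- an index on customer_id could help. EXPLAIN ANALYZE additionally
-- executes the query and reports actual row counts and timings.
```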

Responding

Creating An Index

This is probably necessary for slow queries that scan a large number of rows. Just beware that creating an index takes out locks, which may slow things down even further while you wait for the build to finish. Some timing details of how long this takes are provided below:

DB Engine                          | Index Type                | Table Size (rows) | Duration
AWS Aurora 15.10 (db.r7g.2xlarge)  | New index on foreign key  | 1,147,000         | ~1 minute
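As a sketch (again using a hypothetical orders table and customer_id column), PostgreSQL's CONCURRENTLY option builds the index without taking a lock that blocks writes, at the cost of a slower build:

```sql
-- CONCURRENTLY avoids blocking concurrent INSERT/UPDATE/DELETE on the
-- table while the index builds. Note: it cannot run inside a
-- transaction block, and a failed build leaves behind an INVALID index
-- that must be dropped and retried.
CREATE INDEX CONCURRENTLY idx_orders_customer_id
    ON orders (customer_id);
```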