PostgreSQL (10)
- SaaS for Developers with Gwen Shapira — Postgres, Performance and Rails with Andrew Atkinson 🎙️
- Podcast: Code and the Coding Coders who Code it! Episode 27 Andrew Atkinson 🎙️
- Presenting 'Partitioning Billions of Rows' at SFPUG August 2023
- PGSQL Phriday #011 — Sharding and Partitioning
- Slow & Steady — Database Constraints with Andrew Atkinson 🎙️
- PostgreSQL Table Partitioning Primary Keys — The Reckoning — Part 2 of 2
- Code With Jason 190 — PostgreSQL and Sin City Ruby 🎙️
- PostgreSQL Table Partitioning — Growing the Practice — Part 1 of 2
- PGSQL Phriday #009 — Database Change Management
- PGDay Chicago 2023 Conference
- PGSQL Phriday #008 — pg_stat_statements, PgHero, Query ID
- Ruby For All Podcast: My Guest Experience 🎙️
- Upgrading to PostgreSQL 15 on Mac OS
- PostgreSQL, Ruby on Rails, Rails Guides
- AWS re:Invent Day 1
- PGSQL Phriday #001 — Query Stats, Log Tags, and N+1s
- PgHero 3 Released
- RailsConf 2022 Conference
- PGConf NYC 2021 Conference
- Find Duplicate Records Using ROW_NUMBER Window Function
- Bulk Inserts, RETURNING, and UNLOGGED Tables
- Using pg_repack to Rebuild Indexes
- Manually Fixing a Rails Migration
- PostgreSQL pgbench Workload Simulation
- PostgreSQL Indexes: Prune and Tune
- Views, Stored Procedures, and Check Constraints
- Building a Web App with Boring Technologies (2017 Edition)
- A Look at PostgreSQL Foreign Key Constraints
- Intro to PostgreSQL 'generate_series'
- Automated Daily Backup Solution for PostgreSQL
- PostgreSQL for the Busy MySQL Developer
Learning Resources
- https://postgres.fm
- https://www.pgcasts.com
- https://www.youtube.com/c/ScalingPostgres
- https://sqlfordevs.io
- The EXPLAIN Glossary from PgMustard: https://www.pgmustard.com/docs/explain
My Typical Workloads
My tips for operating high-scale PostgreSQL databases, primarily backing Ruby on Rails web applications.
OLTP workload: a high volume of short-lived transactions.
OLAP workload: using application databases as the data source for a data warehouse or ETL process.
Queries
I keep queries in a GitHub repository here: pg_scripts.
Query: Approximate Count
A count(*) query on a large table may be too slow. If an approximate count is acceptable, use this:
SELECT reltuples::numeric AS estimate
FROM pg_class WHERE relname = 'table_name';
Query: Get Table Stats
SELECT attname, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'table';
Look for columns with few distinct values, and indexes on those columns with low selectivity. Meaning, most values in the table are the same value. An index on that column would not be very selective, and given enough memory, PG would likely not use that index, preferring a sequential scan.
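A rough way to eyeball selectivity (a sketch; the table name is a placeholder, and joining pg_class on relname assumes no name collisions):
-- Compare distinct values to total rows per column.
-- n_distinct is negative when it is stored as a fraction of the row count.
SELECT s.tablename,
       s.attname,
       s.n_distinct,
       c.reltuples,
       CASE WHEN s.n_distinct >= 0
            THEN s.n_distinct / NULLIF(c.reltuples, 0)
            ELSE -s.n_distinct
       END AS approx_selectivity
FROM pg_stats s
JOIN pg_class c ON c.relname = s.tablename
WHERE s.tablename = 'table';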
Cancel or Kill by Process ID
Get a PID with select * from pg_stat_activity;
Try to cancel the PID first (more graceful), or terminate it:
select pg_cancel_backend(pid);
select pg_terminate_backend(pid);
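To find candidate PIDs, a filtered look at pg_stat_activity helps (a sketch; the 5 minute cutoff is arbitrary):
-- Long-running, non-idle queries, longest first
SELECT pid,
       now() - query_start AS duration,
       state,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND query_start < now() - interval '5 minutes'
ORDER BY duration DESC;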
Tuning Autovacuum
PostgreSQL runs an autovacuum process in the background to remove dead tuples. Dead tuples are also called dead rows or “bloat”. Bloat can also exist for indexes.
Two parameters may be used to trigger the AV process: “scale factor” and “threshold”. These can be configured DB-wide or per-table.
In routine vacuuming, the two options are listed:
- scale factor (a percentage): autovacuum_vacuum_scale_factor
- threshold (a specific number): autovacuum_vacuum_threshold
The scale factor defaults to 20% (0.20). To optimize for our largest tables we set it lower, at 1% (0.01).
To opt out of scale factor, set the value to 0 and set the threshold, e.g. 1000, 10000 etc.
ALTER TABLE bigtable SET (autovacuum_vacuum_scale_factor = 0);
ALTER TABLE bigtable SET (autovacuum_vacuum_threshold = 1000);
If after experimentation you’d like to reset, use the RESET option.
ALTER TABLE bigtable RESET (autovacuum_vacuum_threshold);
ALTER TABLE bigtable RESET (autovacuum_vacuum_scale_factor);
https://www.postgresql.org/docs/current/sql-altertable.html
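Autovacuum kicks in for a table roughly when dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick way to see which tables are accumulating dead tuples (a sketch):
-- Dead tuple counts and last autovacuum time per table
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       autovacuum_count
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;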
AV Tuning
- Set log_autovacuum_min_duration to 0 to log all autovacuum runs. A logged AV run includes a lot of information.
- pganalyze: Visualizing & Tuning Postgres Autovacuum
AV Parameters
- autovacuum_max_workers
- autovacuum_freeze_max_age
- maintenance_work_mem
Specialized Index Types
The most common type is B-Tree. Specialized Index types are:
- Multicolumn
- Covering (multicolumn style, and the newer INCLUDE style)
- Partial
- GIN
- GiST
- BRIN
- Expression
- Unique
- Multicolumn: an index on (a, b) works for a alone, or a and b together, but not for b alone
- Indexes for sorting
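A few hypothetical examples of the types listed above, on a made-up users table (INCLUDE requires PG 11+):
CREATE INDEX idx_users_account_created ON users (account_id, created_at);   -- multicolumn
CREATE INDEX idx_users_account_incl ON users (account_id) INCLUDE (email);  -- covering (INCLUDE style)
CREATE INDEX idx_users_active ON users (account_id) WHERE active;           -- partial
CREATE INDEX idx_users_lower_email ON users (lower(email));                 -- expression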
Removing Unused Indexes
Ensure these are set to on:
SHOW track_activities;
SHOW track_counts;
Now we can take advantage of tracking whether indexes have been used. We can look for zero scans, and also very infrequent scans.
Cybertec blog post with SQL query to discover unused indexes: Get Rid of Your Unused Indexes!
On our very large production database, where this process had never been done, we had dozens of indexes that could be eliminated, freeing up hundreds of gigabytes of space.
SELECT s.schemaname,
s.relname AS tablename,
s.indexrelname AS indexname,
pg_relation_size(s.indexrelid) AS index_size
FROM pg_catalog.pg_stat_user_indexes s
JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0 -- has never been scanned
AND 0 <> ALL (i.indkey) -- no index column is an expression
AND NOT i.indisunique -- is not a UNIQUE index
AND NOT EXISTS -- does not enforce a constraint
(SELECT 1 FROM pg_catalog.pg_constraint c
WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC;
Remove Duplicate and Overlapping Indexes
https://wiki.postgresql.org/wiki/Index_Maintenance
A query that finds duplicate indexes, meaning indexes defined on the same columns. It recommends that it is usually safe to delete one of the two.
Remove Seldom Used Indexes on High Write Tables
New Finding Unused Indexes Query
This is a great guideline.
As a general rule, if you’re not using an index twice as often as it’s written to, you should probably drop it.
In our system on our highest write table we had 10 total indexes defined and 6 were classified as Low Scans, High Writes. These indexes may not be worth keeping.
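A sketch of a query in that spirit, comparing index scans to table write activity (the 2:1 cutoff mirrors the guideline above; adjust to taste):
-- Indexes scanned less than half as often as the table is written to
SELECT t.relname,
       i.indexrelname,
       i.idx_scan,
       t.n_tup_ins + t.n_tup_upd + t.n_tup_del AS writes
FROM pg_stat_user_indexes i
JOIN pg_stat_user_tables t USING (relid)
WHERE i.idx_scan < (t.n_tup_ins + t.n_tup_upd + t.n_tup_del) / 2
ORDER BY writes DESC;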
Partial Indexes
How Partial Indexes Affect UPDATE Performance in PostgreSQL
Partial indexes are significantly smaller. This article uses pgbench to show how they may benefit SELECT TPS but negatively impact UPDATE TPS.
Timeout Tuning
statement_timeout: the maximum time a statement can execute before it is terminated
Connections Management
Each connection forks a new OS process (a backend) and is thus expensive.
Using a connection pool reduces the amount of connection establishment overhead and thus reduces the latency involved with connections, which can increase TPS at a certain scale.
- PgBouncer. Running PgBouncer on ECS
- RDS Proxy. AWS RDS Proxy
- Managing Connections with RDS Proxy
Connection issues could benefit from changing:
- connect_timeout
- read_timeout
- checkout_timeout (Rails, default 5s): maximum time Rails will spend trying to check out a connection from the pool before raising an error. checkout_timeout API documentation
- statement_timeout. In Rails/Active Record, set in config/database.yml under a variables section with a value in milliseconds. This becomes a session variable which is set like this: SET statement_timeout = 5000 (in milliseconds) and displayed like this: SHOW statement_timeout
production:
  variables:
    statement_timeout: 5000
When serving Rails apps with Puma and using Sidekiq, carefully manage the connection pool size and total connections for the database.
Pool size is configured via the Ruby on Rails database connection pool. We also use a proxy in between the application and PG.
This allows the application to allocate many more client connections (for example doubling during a zero downtime deploy) without exceeding the max supported connections/resource usage on the DB server.
PgBouncer
Install pgbouncer on OS X with brew install pgbouncer. Create the .ini config file as the article mentions, point it to a database, accept connections, and track the connection count.
H.O.T. Updates
HOT (“heap only tuple”) updates are updates to tuples that are not referenced from outside the table block.
HOT updates in PostgreSQL for better performance
Two requirements:
- there must be enough space in the block containing the updated row
- no index is defined on any column whose value is modified (the big one)
fillfactor
What is fillfactor and how does it affect PostgreSQL performance?
- Percentage between 10 and 100, default is 100 (“fully packed”)
- Reducing it leaves room for “HOT” updates when they’re possible. Set to 90 to leave 10% space available for HOT updates.
- “good starting value for it is 70 or 80” Deep Dive
- For tables with heavy updates a smaller fillfactor may yield better write performance
- Set per table or per index (b-tree is default 90 fillfactor)
- Trade-off: “Faster UPDATE vs Slower Sequential Scan and wasted space (partially filled blocks)” from Fillfactor Deep Dive
- No index defined on any column whose value is modified
Limitations: requires a VACUUM FULL after modifying (or pg_repack)
ALTER TABLE foo SET ( fillfactor = 90 );
VACUUM FULL foo;
Or use pg_repack
pg_repack --no-order --table foo
Installing pg_repack on EC2 for RDS
Note: use -k, --no-superuser-check
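To check whether updates are actually taking the HOT path (and whether a lower fillfactor is paying off), pg_stat_user_tables tracks HOT update counts. A sketch:
-- Ratio of HOT updates to total updates per table
SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
WHERE n_tup_upd > 0
ORDER BY n_tup_upd DESC;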
Locks Management
- log_lock_waits
- deadlock_timeout
“Then slow lock acquisition will appear in the database logs for later analysis.”
Lock Types
Exclusive locks and shared locks. Prefer shared locks.
- AccessExclusiveLock: locks the table; queries are not allowed.
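A sketch for seeing who is blocking whom (pg_blocking_pids() requires PG 9.6+):
-- Blocked sessions and the sessions blocking them
SELECT blocked.pid   AS blocked_pid,
       blocked.query AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_stat_activity blocking
  ON blocking.pid = ANY(pg_blocking_pids(blocked.pid))
WHERE blocked.wait_event_type = 'Lock';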
Tools
Tools: Query Planning
EXPLAIN (ANALYZE, BUFFERS)
This article 5 Things I wish my grandfather told me about ActiveRecord and PostgreSQL has a nice translation of EXPLAIN ANALYZE output written more in plain English.
pgMustard
Nice tool, and I learned a couple of tips: format EXPLAIN output with JSON, and specify some additional options. A handy SQL comment to keep on top of the query being studied:
Verbose invocation:
EXPLAIN (analyze, buffers, verbose, format text) <sql-query>
Using pgbench
Repeatable method of determining a transactions per second (TPS) rate.
Useful for determining the impact of tuning parameters like shared_buffers with a before/after benchmark. Configurable with custom SQL queries.
Could also be used to test the impact of ramping up connections.
- Initialize database example with scaling option of 50 times the default size:
pgbench -i -s 50 example
- Benchmark with 10 clients, 2 worker threads, and 10,000 transactions per client:
pgbench -c 10 -j 2 -t 10000 example
I created PR #5388 adding pgbench to tldr!
pgtune
PGTune is a website that tries to suggest values for PG parameters that can be tuned and may improve performance for a given workload.
https://pgtune.leopard.in.ua/#/
PgHero
PgHero brings a bunch of operational concerns into a dashboard format. It is built as a Rails engine and provides a nice interface on top of queries related to the PG catalog tables.
We are running it in production and some immediate value has been helping clarify unused and duplicate indexes we can remove.
https://github.com/ankane/pghero
pgmonitor
https://github.com/CrunchyData/pgmonitor
Have not yet tried this out but it looks helpful.
postgresqltuner
Perl script to analyze a database. I don’t have experience with this. It offers insights like the shared buffer hit rate, index analysis, configuration advice, and extension recommendations.
https://github.com/jfcoz/postgresqltuner
pg_test_fsync
pgmetrics
pgcli
brew install pgcli
An alternative to psql with syntax highlighting, autocomplete and more.
Write Ahead Log (WAL) Tuning
Checkpoints can cause a significant I/O load.
- checkpoint_timeout: in seconds; by default a checkpoint runs every 5 minutes
- max_wal_size: a checkpoint also runs if the max WAL size is about to be exceeded; default 1 GB
Reducing these values causes checkpoints to run more frequently.
checkpoint_warning parameter
checkpoint_completion_target
General recommendation (not mine): “On a system that’s very close to maximum I/O throughput during normal operation, you might want to increase checkpoint_completion_target to reduce the I/O load from checkpoints.”
Parameters
- commit_delay (0 by default)
- wal_sync_method
- wal_debug
Extensions and Modules
Foreign Data Wrapper (FDW)
Native Foreign data wrapper functionality in PostgreSQL allows connecting to a remote table and treating it like a local table.
The table structure may be specified when establishing the foreign table or it may be imported as well.
A big benefit of this for us at work was a recent backfill: we were able to avoid the need for any intermediary data dump files.
We used a temp schema to isolate any temporary tables away from the main schema (public).
Essentially the process is:
- Create a server
- Create a user mapping
- Create a foreign table (optionally importing the schema)
Let’s say we had 2 services, one for managing inventory items for sale, and one for managing authentication.
We wanted to connect to the authentication database from the inventory database.
In the case below, the inventory database is connected to with the root user, so there are privileges to create temporary tables, foreign tables, etc.
CREATE EXTENSION postgres_fdw;
CREATE SCHEMA temp;
CREATE SERVER temp_authentication
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host 'authentication-db-host', dbname 'authentication-db-name', port '5432'); -- set the host, name and port
CREATE USER MAPPING FOR root
SERVER temp_authentication
OPTIONS (user 'authentication-db-user', password 'authentication-db-password'); -- map the local root user to a user on the remote DB
IMPORT FOREIGN SCHEMA public LIMIT TO (customers)
FROM SERVER temp_authentication INTO temp; -- this will make a table called temp.customers
Once this is established, we can issue queries as if the foreign table was a local table:
select * from temp.customers limit 1;
On Amazon RDS type SHOW rds.extensions to view available extensions.
uuid-ossp
Generate universally unique identifiers (UUIDs) in PostgreSQL. Documentation link
pg_stat_statements
Tracks execution statistics for all statements, made available via a view. Requires a reboot (static parameter) on RDS on PG 10, although pg_stat_statements is in shared_preload_libraries by default in PG 12.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
https://www.virtual-dba.com/blog/postgresql-performance-enabling-pg-stat-statements/
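A sketch of a typical query against the view, finding the statements with the highest total time (on PG 13+ the column is total_exec_time rather than total_time):
-- Top statements by total execution time
SELECT queryid,
       calls,
       round(total_time::numeric, 1) AS total_ms,
       round((total_time / calls)::numeric, 2) AS avg_ms,
       left(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;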
pgstattuple
The pgstattuple module provides various functions to obtain tuple-level statistics.
https://www.postgresql.org/docs/9.5/pgstattuple.html
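Usage sketch (table and index names are placeholders):
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('mytable');      -- tuple_count, dead_tuple_count, free_space, ...
SELECT * FROM pgstatindex('mytable_pkey'); -- B-tree stats, including avg_leaf_density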
citext
Case insensitive column type
pg_cron
Available on PG 12.5+ on RDS, pg_cron is an extension that can be useful to schedule maintenance tasks, like manual vacuum jobs.
See: Scheduling maintenance with the PostgreSQL pg_cron extension
pg_timetable
pg_timetable: Advanced scheduling for PostgreSQL
pg_squeeze
Replacement for pg_repack, automated, without needing to run a CLI tool.
auto_explain
Adds explain plans to the query logs. Maybe start by setting the threshold very high so plans are only logged for extremely slow queries, then lower it if the output is actionable.
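A sketch of enabling it for a single session (server-wide it goes through shared_preload_libraries or the RDS parameter group; LOAD generally requires superuser):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '10s'; -- start high, lower it if the output is useful
SET auto_explain.log_analyze = on;         -- include actual run times (adds overhead)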
Percona pg_stat_monitor
pg_stat_monitor: A cool extension for better monitoring using PMM - Percona Live Online 2020
pganalyze Index Advisor
This is not an extension but looks like a useful tool. A better way to index your PostgreSQL database: pganalyze Index Advisor
pgbadger
brew install pgbadger
Bloat
Overview
How does bloat (table bloat, index bloat) affect performance?
- “When a table is bloated, PostgreSQL’s ANALYZE tool calculates poor/inaccurate information that the query planner uses.” Example of a 7:1 bloated/active tuples ratio causing the query planner to skip.
- Queries on tables with high bloat will require additional IO, navigating through more pages of data.
- Bloated indexes, such as indexes that reference tuples that have been vacuumed, require unnecessary seek time. Rebuild the index with REINDEX ... CONCURRENTLY.
- Index only scans slow down with outdated statistics. Autovacuum updates table statistics. Minimize table bloat to improve performance of index only scans. PG routine vacuuming docs.
- Cybertec: Detecting Table Bloat
- Dealing with significant PostgreSQL database bloat — what are your options?
Upgrades
We are currently running PG 10, so I had a look at some upgrades in 11 and 12.
This is also a really cool Version Upgrade Comparison Tool: 10 to 12
PG 11
Release announcement October 2018
- Improves parallel query performance and parallelism of B-tree index creation. Source: Release announcement
- Adds partitioning by hash key
- Significant partitioning improvements
- Adds “covering” indexes via INCLUDE to add more data to the index. Docs: Index only scans and Covering indexes
PG 12
Release announcement. Released October 2019.
- Partitioning performance improvements
- Re-index concurrently
PG 13
Released September 2020
- Parallel vacuum
PG 14
- Wider use of query_id
- Multirange types
PG 15
- SQL MERGE
RDS
Amazon RDS is hosted PostgreSQL. RDS is regular single-writer primary PostgreSQL, and AWS has a variation called Aurora with a different storage model.
Aurora PG
AWS RDS Parameter Groups
Working with RDS Parameter Groups
- Try out parameter changes on a test database prior to making the change. Potentially create a backup before making the change as well.
- Parameter groups can be restored to their defaults (or they can be copied to create an experimental group). Groups can be compared with each other to determine differences.
- Parameter values can process a formula. RDS provides some formulas that utilize the instance class CPU or memory available to calculate a value.
Database Constraints
Blog: A Look at PostgreSQL Foreign Key Constraints
- CHECK
- NOT NULL
- UNIQUE
- PRIMARY KEY
- FOREIGN KEY
- EXCLUSION
Native Replication
PostgreSQL Logical Replication
Crunchydata Logical Replication in PostgreSQL
- Create a PUBLICATION, with a counterpart SUBSCRIPTION.
- All operations like INSERT and UPDATE are enabled by default; fewer can be configured.
- Logical replication available since PG 10.
- max_replication_slots should be set higher than the number of replicas.
- A role must exist for replication.
- Replication slot is a replication object that keeps track of where the subscriber is in the WAL stream
- Unlike normal replication, writes are still possible to the subscriber. Conflicts can occur if data is written that would conflict with logical replication.
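A sketch of the two sides (connection info and table names are placeholders):
-- On the publisher:
CREATE PUBLICATION inventory_pub FOR TABLE customers;

-- On the subscriber:
CREATE SUBSCRIPTION inventory_sub
  CONNECTION 'host=publisher-host dbname=inventory user=replicator password=secret'
  PUBLICATION inventory_pub;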
Declarative Partitioning
- RANGE (time-based)
- LIST
- HASH
- Crunchydata Native Partitioning Tutorial
- pgslice
- pg_partman
- pg_party
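A sketch of time-based RANGE partitioning (names and ranges are made up; a primary key on a partitioned table requires PG 11+ and must include the partition key):
CREATE TABLE events (
  id         bigserial,
  created_at timestamptz NOT NULL,
  payload    jsonb,
  PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2023_08 PARTITION OF events
  FOR VALUES FROM ('2023-08-01') TO ('2023-09-01');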
Partition Pruning
Default is on. Use SET enable_partition_pruning = off; to turn it off.
https://www.postgresql.org/docs/13/ddl-partitioning.html#DDL-PARTITION-PRUNING
Uncategorized Content
- Use NULLs instead of default values when possible; they are cheaper to store and query. Source: Timescale DB blog
Stored Procedures
Stored Procedures are User Defined Functions (UDF).
Using PL/pgSQL, functions can be added to the database directly. Procedures and functions can be written in other languages as well.
To manage these functions in a Ruby app, use the fx gem (versioned database functions)!
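A minimal PL/pgSQL sketch (the function and the transactions table are made up); the fx gem can version a definition like this alongside Rails migrations:
-- Sum a customer's transactions into a balance
CREATE OR REPLACE FUNCTION account_balance(p_account_id bigint)
RETURNS numeric
LANGUAGE plpgsql
STABLE
AS $$
DECLARE
  v_balance numeric;
BEGIN
  SELECT COALESCE(SUM(amount), 0)
    INTO v_balance
    FROM transactions
   WHERE account_id = p_account_id;
  RETURN v_balance;
END;
$$;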
PostgreSQL Monitoring
pg_top
On Mac OS: brew install pg_top, then run it with pg_top.
Uncategorized Resources
This is an amazing article full of nuggets.
- The idea of an “Application DBA”
- Things I liked: usage of an intermediate table for de-duplication. The column structure is elegant, clearly breaking out the destination ID and nested duplicate IDs.
- Working with arrays:
  - ANY() for an array of items to compare against
  - array_remove(anyarray, anyelement) to build an array but remove an element
  - array_agg(expression) to build up a list of IDs and unnest(anyarray) to expand it
- Avoidance of indexes for low selectivity, and value of partial indexes in those cases (activated 90% v. unactivated users 10%)
- Tip on confirming index usage by removing an index in a transaction with BEGIN and rolling it back with ROLLBACK (see the sketch after this list).
- Generalists/specialists: Application DBA and Performance Analyst
- PostgreSQL Connection Pooling: Part 1 – Pros & Cons
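A sketch of the drop-the-index-in-a-transaction check mentioned above (index and query are placeholders). DROP INDEX takes an ACCESS EXCLUSIVE lock on the table, so keep the transaction very short and avoid this on a busy production table:
BEGIN;
DROP INDEX idx_users_active;
EXPLAIN ANALYZE SELECT * FROM users WHERE active;
ROLLBACK; -- the index comes back untouched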
PostgreSQL Presentations
PostgreSQL Tuning
shared_buffers: the RDS default is around 25% of system memory. Recommendations say up to 40% of system memory could be allocated, with diminishing returns beyond that.
The unit is 8kb chunks, so changing the value requires some math. Here is a formula:
https://stackoverflow.com/a/42483002/126688
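For example, assuming a 16 GB instance and the 25% guideline: 16 GB * 0.25 = 4 GB, and 4 GB / 8 kB per buffer = 524288 buffers. The current setting can be checked with:
SHOW shared_buffers;                       -- displayed in memory units, e.g. 4GB
SELECT current_setting('shared_buffers');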
| Parameter | Unit | Default RDS | Tuned | Link |
|---|---|---|---|---|
| shared_buffers | 8kb | 25% mem | | |
| autovacuum_cost_delay | ms | 20 | 2 | |
| autovacuum_vacuum_cost_limit | | 200 | 2000 | Docs |
| effective_cache_size | 8kb | | | |
| work_mem | MB | 4 | 250 | |
| maintenance_work_mem | | | | |
| checkpoint_timeout | | | | |
| min_wal_size | MB | 80 | 4000 | High write log blog |
| max_wal_size | MB | 4000 | 16000 | |
| max_worker_processes | | 8 | 1x/cpu | |
| max_parallel_workers | | 8 | 1x/cpu | |
| max_parallel_workers_per_gather | | 2 | 4 | |
PostgreSQL Backups
Sequences
- TRUNCATE and reset: TRUNCATE <table name> RESTART IDENTITY (https://brianchildress.co/reset-auto-increment-in-postgres/)
- ALTER SEQUENCE <seq-name> RESTART WITH 1; (e.g. users_id_seq)