PostgreSQL is a versatile database system, capable of running efficiently in low-resource environments and in environments shared with a variety of other applications. To ensure it runs properly across many different environments, the default configuration is very conservative, which makes it a poor fit for a high-performance production database. Add the fact that geospatial databases have different usage patterns, and that their data tend to consist of fewer but much larger records than non-geospatial databases, and it becomes clear that the default configuration will not be appropriate for our purposes.
I think we can agree that everyone wants a fast database. The question is: fast in what respect?
In terms of databases, there are at least two different directions of “fast”:
- The number of transactions per second
- Throughput or amount of data processing
These are interrelated but definitely not the same, and they place quite different demands on the I/O subsystem. In general, you want to avoid I/O at all costs, because I/O is always slow compared to accessing data in memory, in the CPU caches at their various levels, or in CPU registers. As a rough rule of thumb, each layer further from the CPU is about a thousand times slower than the one before it.
For a system that must handle a large number of transactions per second, you need as many concurrent I/O operations as you can get. For a system with high throughput, you need an I/O subsystem that can deliver as many bytes per second as possible.
That leads to the requirement of keeping as much data as possible close to the CPU, for example in RAM. At least the working set, the set of data needed to produce answers in an acceptable amount of time, should fit.
Each database engine has a specific memory layout and handles different memory areas for different purposes.
To recap: we need to avoid I/O, and we need to size the memory layout so that the database can work efficiently (assuming, of course, that other groundwork such as proper schema design has already been done).
All of these configuration parameters can be edited in the postgresql.conf database configuration file. This is a regular text file and can be edited with any text editor. Changes do not take effect until the configuration is reloaded, and some parameters (such as shared_buffers, max_connections, and max_worker_processes) require a full server restart.
Note: These values are recommendations only. Each environment will differ, and testing is required to determine the optimal configuration. But this section should get you off to a good start.
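As a minimal sketch of how to apply and verify such a change without editing the file by hand (the parameter and value here are only placeholders):

edb=# ALTER SYSTEM SET work_mem TO '64MB';  -- written to postgresql.auto.conf
edb=# SELECT pg_reload_conf();              -- re-read the configuration without a restart
edb=# SHOW work_mem;                        -- confirm the value now in effect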
Database Parameters
Here are some parameters that can be tuned for optimal performance depending on your system and workload.
shared_buffers
The PostgreSQL buffer cache is controlled by shared_buffers, which is usually the most effective parameter to tune. It sets how much dedicated memory PostgreSQL uses for caching data.
The default value of shared_buffers is set very low, and you will not get much benefit from it. It is set low because some machines and operating systems do not support higher values, but on most modern machines you need to increase it for optimal performance.
If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the system's memory. You should also try some lower and higher values, because in some cases optimal performance requires a setting above 25%. On Windows systems the useful range is generally between 64MB and 512MB. The right configuration depends on your machine, the working data set, and the workload.
In production environments, a larger shared_buffers value is often observed to give very good performance, though you should always benchmark to find the right balance.
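For example, on a dedicated server with 8GB of RAM, a starting point in postgresql.conf might look like the following (an illustration only, to be validated by benchmarking):

# postgresql.conf -- illustrative starting point for a dedicated 8GB server
shared_buffers = 2GB    # roughly 25% of system RAM; changing this requires a restart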
Check the current shared_buffers value:
edb=# SHOW shared_buffers;
 shared_buffers
----------------
 128MB
(1 row)
wal_buffers
PostgreSQL first writes its WAL (write-ahead log) records into these buffers, and then the buffers are flushed to disk. The buffer size is defined by wal_buffers; the default of -1 sizes it automatically at 1/32 of shared_buffers, up to a maximum of 16MB. If you have many concurrent connections writing at the same time, a higher value can give better performance.
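If you want to pin the value explicitly rather than rely on the automatic sizing, a postgresql.conf sketch might look like this (the value is an assumption for a write-heavy server, not a general recommendation):

# postgresql.conf -- illustrative; -1 (the default) sizes wal_buffers automatically
wal_buffers = 16MB      # changing this requires a restart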
effective_cache_size
The effective_cache_size parameter provides an estimate of the memory available for disk caching. It is just a guideline, not an exact allocation or cache size: it does not reserve any memory, but tells the optimizer how much cache it can expect from the kernel. This estimate is factored into the cost of using an index; a higher value makes index scans more likely, while a lower value makes sequential scans more likely. If the value is set too low, the query planner can decide not to use some indexes even when they would be helpful. You should also account for the expected number of concurrent queries on different tables, since they share the available cache. Within those limits, a relatively generous value is usually beneficial.
Let us take a look at some practical implications of effective_cache_size.
Example
Set effective_cache_size = 1MB
edb=# SET effective_cache_size TO '1 MB';
edb=# explain SELECT * FROM bar ORDER BY id LIMIT 10;
QUERY PLAN
--------------------------------------------------------------------------
Limit (cost=0.00..39.97 rows=10 width=4)
-> Index Only Scan using idx_in on bar (cost=0.00..9992553.14 rows=2500000 width=4)
(2 rows)
As you can see, the total cost of this query is estimated at 39.97 cost units.
What happens if we change effective_cache_size to an insanely high value?
Set effective_cache_size = 10000MB
edb=# SET effective_cache_size TO '10000 MB';
edb=# explain SELECT * FROM bar ORDER BY id LIMIT 10;
QUERY PLAN
--------------------------------------------------------------------------
Limit (cost=0.00..0.44 rows=10 width=4)
-> Index Only Scan using idx_in on bar (cost=0.00..109180.31 rows=2500000 width=4)
(2 rows)
As you can see, the estimated costs drop dramatically. This makes sense: with only 1MB the planner does not expect the kernel to cache any data, whereas with a large value it assumes a very high cache hit rate on the operating system side. Random I/O is the most expensive kind of access, and changing this cost parameter has a serious impact on what the planner believes. Just imagine a more complex query: different cost estimates can lead to totally different plans.
work_mem
This parameter controls the memory available to each sort operation. If you have complex sorts to do, increasing work_mem helps, because in-memory sorts are much faster than sorts that spill to disk. However, a very high value can cause a memory bottleneck, because the memory is allocated per operation, not per server: a single query can run several sorts, and every connection can run such queries at the same time, so the system may end up allocating roughly work_mem multiplied by the total number of concurrent sort operations. Setting a large value globally can therefore cause very high memory usage, so it is highly recommended to raise it at the session level for the queries that need it.
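As a rough, illustrative calculation (the numbers are assumptions, not measurements): with 100 concurrent queries, each performing 2 sorts, and work_mem set to 64MB, the worst case is about 100 × 2 × 64MB = 12.8GB of memory consumed by sorting alone.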
Example
Set work_mem = 2MB
edb=# SET work_mem TO '2MB';
edb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
QUERY PLAN
-----------------------------------------------------------------------------------
Gather Merge (cost=509181.84..1706542.14 rows=10000116 width=24)
Workers Planned: 4
-> Sort (cost=508181.79..514431.86 rows=2500029 width=24)
Sort Key: b
-> Parallel Seq Scan on bar (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)
The initial query’s sort node has an estimated cost of 514431.86. A cost is an arbitrary unit of computation. For the above query, work_mem was only 2MB. For testing purposes, let’s increase it to 256MB and see whether there is any impact on cost.
Set work_mem = 256MB
edb=# SET work_mem TO '256MB';
edb=# EXPLAIN SELECT * FROM bar ORDER BY bar.b;
QUERY PLAN
-----------------------------------------------------------------------------------
Gather Merge (cost=355367.34..1552727.64 rows=10000116 width=24)
Workers Planned: 4
-> Sort (cost=354367.29..360617.36 rows=2500029 width=24)
Sort Key: b
-> Parallel Seq Scan on bar (cost=0.00..88695.29 rows=2500029 width=24)
(5 rows)
The sort cost drops from 514431.86 to 360617.36, roughly a 30% reduction.
maintenance_work_mem
The maintenance_work_mem parameter sets the memory used for maintenance tasks. The default value is 64MB. Setting a larger value helps in tasks like VACUUM, CREATE INDEX, ALTER TABLE ... ADD FOREIGN KEY, and restoring dumps with pg_restore.
Example
Set maintenance_work_mem = 10MB
edb=# CHECKPOINT;
edb=# SET maintenance_work_mem to '10MB';
edb=# CREATE INDEX foo_idx ON foo (c);
CREATE INDEX
Time: 170091.371 ms (02:50.091)
Set maintenance_work_mem = 256MB
edb=# CHECKPOINT;
edb=# SET maintenance_work_mem to '256MB';
edb=# CREATE INDEX foo_idx ON foo (c);
CREATE INDEX
Time: 111274.903 ms (01:51.275)
The index creation time is 170091.371 ms when maintenance_work_mem is set to only 10MB, but it drops to 111274.903 ms when maintenance_work_mem is increased to 256MB.
synchronous_commit
This parameter enforces that a commit waits for the WAL to be written to disk before a success status is returned to the client. It is a trade-off between performance and reliability: if your application is designed so that performance matters more than reliability, you can turn synchronous_commit off. With synchronous_commit off, there is a time gap between the success status and the guaranteed write to disk, so in the case of a server crash the most recent transactions might be lost even though the client received a success message on commit. Transactions commit very quickly because they do not wait for the WAL to be flushed, but durability is compromised.
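If you accept that trade-off, the setting can be relaxed for a single session instead of globally; a minimal sketch (the surrounding workload is up to you):

edb=# SET synchronous_commit TO off;   -- applies to this session only
edb=# -- run the latency-sensitive or bulk write workload here
edb=# SET synchronous_commit TO on;    -- restore the safe default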
max_connections
This parameter does just what you think: it sets the maximum number of concurrent connections. Once the limit is reached, new connections are rejected. Every connection occupies resources, so the number should not be set too high. If you have long-running sessions, you probably need a higher number than if sessions are mostly short-lived. Keep this value aligned with your connection pooler configuration.
max_prepared_transactions
When you use prepared transactions, set this parameter to at least the value of max_connections, so that every connection can have at least one prepared transaction pending. Consult the documentation of your preferred ORM (object-relational mapper) to see whether it gives specific guidance here.
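A postgresql.conf sketch showing the two settings kept in line with each other (the numbers are placeholders, not recommendations):

# postgresql.conf -- illustrative values only; both require a restart
max_connections = 200
max_prepared_transactions = 200   # at least max_connections when prepared transactions are used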
max_worker_processes
Set this to the number of CPUs you want to dedicate to PostgreSQL. It determines the number of background processes the database engine can use. Changing this parameter requires a server restart. The default is 8.
When running a standby server, you must set this parameter to the same value as, or higher than, on the master server; otherwise, queries will not be allowed on the standby.
max_parallel_workers_per_gather
This sets the maximum number of workers that a Gather or Gather Merge node can use. When a Gather node is reached during query execution, the process implementing the user's session requests the number of background workers chosen by the planner, which is capped at max_parallel_workers_per_gather. The total number of background workers that can exist at any one time is limited by both max_worker_processes and max_parallel_workers, so a value higher than those has no additional effect. It is therefore possible for a parallel query to run with fewer workers than planned, or even with no workers at all. Since the optimal plan may depend on the number of workers that are available, this can result in poor query performance.
If this happens frequently, consider increasing max_worker_processes and max_parallel_workers so that more workers can run simultaneously, or reduce max_parallel_workers_per_gather so that the planner requests fewer workers.
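As a hedged sketch of how the three settings relate on a machine where, say, 16 cores are dedicated to PostgreSQL (placeholder numbers chosen to show the relationship, not recommendations):

# postgresql.conf -- illustrative sizing only
max_worker_processes = 16             # requires a restart
max_parallel_workers = 16             # parallel workers are drawn from the pool above
max_parallel_workers_per_gather = 4   # cap per Gather / Gather Merge node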
max_parallel_workers
This sets the maximum number of parallel worker processes available for parallel queries. The same sizing considerations as for max_worker_processes apply, and the default value is 8.
Note that a value higher than max_worker_processes will have no effect, since parallel workers are taken from the pool of worker processes established by that setting.
effective_io_concurrency
This tells PostgreSQL how many concurrent I/O operations the I/O subsystem can handle. As a starting point: with plain HDDs try 2, with SSDs go for 200, and with a capable SAN you can start at 300.
random_page_cost
This factor tells the PostgreSQL query planner how much more (or less) expensive it is to access a random page compared to sequential access. With SSDs or capable SANs this matters less than it did with traditional hard disk drives. For SSDs or SANs, start with 1.1; for plain spinning disks, keep it at 4.
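Putting the last two parameters together, an SSD-oriented starting point in postgresql.conf could look like this (using the starting values suggested above; benchmark before adopting them):

# postgresql.conf -- illustrative SSD-oriented starting point
effective_io_concurrency = 200
random_page_cost = 1.1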
min_ and max_wal_size
These settings put size boundaries on the PostgreSQL transaction log (WAL). max_wal_size is essentially the amount of WAL that can be written before a checkpoint is triggered, which in turn syncs the in-memory data with the on-disk data; min_wal_size is the amount below which old WAL files are recycled rather than removed.
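A postgresql.conf sketch for raising the boundaries on a write-heavy server (placeholder values, not recommendations):

# postgresql.conf -- illustrative values only
min_wal_size = 1GB
max_wal_size = 4GB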
max_fsm_pages
This option helps control the free space map. When something is deleted from a table, it is not removed from disk immediately; it is simply marked as "free" in the free space map, and the space can then be reused by new INSERTs on the table. If your setup has a high rate of DELETEs and INSERTs, it may be necessary to increase this value to avoid table bloat. Note that this parameter only exists in PostgreSQL 8.3 and earlier; from 8.4 onward the free space map is managed automatically and max_fsm_pages has been removed.
Tuning PostgreSQL database parameters is essential for optimizing performance based on specific workload requirements. By adjusting configurations like memory allocation and I/O settings, you can enhance transaction speed and data processing efficiency. Proper tuning can lead to:
- Improved query response times
- Reduced resource consumption
- Enhanced overall system stability
Frequently Asked Questions
What is the recommended value for shared_buffers?
The optimal value for shared_buffers generally starts at 25% of your system's RAM if you have 1GB or more. However, testing different values is crucial, as the best setting can vary depending on your specific workload and hardware configuration.
How does effective_cache_size affect query performance?
The effective_cache_size parameter helps the query planner estimate the memory available for disk caching. A higher value encourages the use of indexes, potentially speeding up query execution, while setting it too low may lead to suboptimal query plans, slower performance, and increased I/O.
- It is often set to about 50-75% of total system memory.
- Regularly review and adjust based on workload changes.
Do parameter changes require a server restart?
Most PostgreSQL parameters are set in the postgresql.conf file and take effect after a configuration reload, while a few (such as shared_buffers) require a full server restart. Some parameters, such as work_mem, can also be modified at the session level without touching the configuration file at all.
What are the risks of turning off synchronous_commit?
Turning off synchronous_commit improves performance by allowing transactions to commit without waiting for the WAL (write-ahead log) to be flushed to disk. However, this increases the risk of losing recently committed transactions if the server crashes before the data is written to disk.
How can I monitor the effect of tuning changes?
You can monitor performance changes by using PostgreSQL's built-in statistics views, such as pg_stat_statements, and by comparing query execution times before and after adjustments; a sample query follows the list below. Benchmarking under load conditions similar to your production environment is also recommended, and third-party monitoring tools can provide deeper insights.
- Track metrics like CPU usage, memory consumption, and disk I/O.
- Regularly review logs for anomalies or unexpected behavior.
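As a hedged illustration of the first point (this assumes the pg_stat_statements extension is installed and preloaded via shared_preload_libraries; the column names total_exec_time and mean_exec_time apply to PostgreSQL 13 and later):

edb=# CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
edb=# SELECT query, calls,
edb-#        round(total_exec_time::numeric, 2) AS total_ms,
edb-#        round(mean_exec_time::numeric, 2)  AS mean_ms
edb-#   FROM pg_stat_statements
edb-#  ORDER BY total_exec_time DESC
edb-#  LIMIT 10;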
How does EDB support PostgreSQL optimization?
EDB provides comprehensive support for PostgreSQL optimization through its managed services, including performance tuning consultations and access to expert resources that help tailor configurations to your specific needs.
Can EDB help with migrations?
Yes. EDB offers migration services that ensure a smooth transition to optimized PostgreSQL configurations, with its team assisting in tuning parameters during the migration process for strong performance from day one.
What learning resources does EDB provide?
EDB offers a variety of resources, including documentation, webinars, and community forums where users can learn best practices for tuning PostgreSQL databases and share experiences with other professionals.
How do I get started?
You can get started with EDB's PostgreSQL services by exploring EDB Postgres AI or by contacting the EDB sales team for personalized assistance tailored to your organization's needs.