How Do We Test PGD

July 15, 2025

This blog was co-authored by Bharat Telange and Amruta Deolasee.

EDB Postgres Distributed (PGD) is a distributed database that provides high availability and scalability. The PGD team performs rigorous and extensive testing to deliver a robust, high-performance product.

This blog provides an overview of how we test PGD.

Testing Methodologies and Strategies

To guarantee high performance and quality across supported platforms and versions, the PGD team uses a variety of modern testing methodologies. Continuous testing is at the core of our efforts, enabling us to catch issues early and maintain product integrity as code evolves.

Supported platforms include:

  • Community PostgreSQL
  • EDB Postgres Advanced Server
  • EDB Postgres Extended Server

Each PGD release supports multiple PostgreSQL major versions:

  • PGD 4: PostgreSQL 12–14
  • PGD 5: PostgreSQL 12–17
  • PGD 6: PostgreSQL 14–17

This comprehensive support ensures PGD delivers high performance and reliability, no matter which Postgres variant or version you use.

At a high level, PGD testing is organized into several categories:

  • Functional testing
  • Integration testing
  • Smoke tests on packages
  • Performance testing
  • Exploratory testing

Functional Testing

Functional tests are structured white-box tests that cover all internal components and features in depth. They verify code coverage, expected routine outcomes, conditional feature behavior, and combinations of inputs and features. Depending on the complexity of the test case, these tests run under one of two frameworks.

TAP Testing

The TAP (Test Anything Protocol) framework provides an isolated, independent test environment for each test. This allows us to safely simulate complex scenarios, including server restarts, crash simulations, and high concurrency.

Examples: 

  • Verifying the expected behavior of various DMLs and DDLs in a degraded cluster
  • Triggering panics at specific code paths
  • Testing concurrent joins, and more
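For context, the Test Anything Protocol itself is a simple line-oriented format: a plan line followed by numbered `ok`/`not ok` results. A minimal sketch of that format in Python (the check names below are hypothetical; PGD's actual TAP tests build on PostgreSQL's Perl-based test framework):

```python
def tap_report(checks):
    """Emit Test Anything Protocol output for (name, passed) pairs."""
    lines = [f"1..{len(checks)}"]  # plan line: how many tests to expect
    for i, (name, passed) in enumerate(checks, start=1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {i} - {name}")
    return "\n".join(lines)

# Hypothetical checks standing in for real cluster assertions
print(tap_report([("node restarts cleanly", True),
                  ("replication catches up after crash", True)]))
# 1..2
# ok 1 - node restarts cleanly
# ok 2 - replication catches up after crash
```

A TAP harness only needs to parse these lines, which is what makes each test self-describing and easy to run in isolation.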

Regression Testing

Regression tests use multiple PostgreSQL databases on a single PostgreSQL instance to simulate different nodes. They cover scenarios whose output can be compared directly against an exact expected result. Every test runs on the same fixed cluster: the cluster type cannot be altered, the service cannot be restarted, and the cluster consists only of two or more simple data nodes. These tests typically exercise the behavior of an individual function or operation, much as unit tests do.

Examples:

  • Create a node, join, and part
  • Valid/invalid inputs to the BDR routines
  • Valid and invalid DMLs
  • Valid and invalid DDLs
  • Verification of the expected sequence kind
  • Expected outputs of BDR views and catalogs
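The pass/fail mechanism behind such tests is essentially a textual diff of captured output against a stored expected file, in the style of PostgreSQL's pg_regress. A minimal sketch of the idea in Python (file names and sample output are illustrative):

```python
import difflib

def compare_with_expected(actual: str, expected: str) -> list[str]:
    """pg_regress-style check: diff captured output against the stored
    expected file; an empty diff means the test passes."""
    return list(difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="expected/test.out", tofile="results/test.out",
        lineterm=""))

expected = "node_name | status\nnode_a    | ACTIVE\n"
assert compare_with_expected(expected, expected) == []           # pass: no diff
assert compare_with_expected("node_a | DOWN\n", expected) != []  # fail: diff reported
```

Because the comparison is exact, these tests are restricted to scenarios with fully deterministic output, which is why restarts and topology changes are left to the TAP framework.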

Integration Testing

Integration testing exercises the entire PGD stack and verifies cluster sanity. We call these tests AVs (Architecture Verifications). They use EDB's tpaexec tool for provisioning and deployment, and its built-in testing feature for test development. Clusters are deployed on Docker or in the cloud (AWS EC2), and test scripts are Ansible-based YAML. These are mostly stable, package-based automated runs, but they can also be run against a specific source branch when required. Coverage includes high availability (where node crashes are simulated via network partitioning with iptables), package-based upgrade tests, performance capturing, and OS distribution-specific package sanity testing.
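As an illustration of the network-partitioning technique, a harness might inject iptables rules like the following to isolate one node's Postgres traffic (a hypothetical sketch, not the actual Ansible-based AV scripts; applying such rules requires root):

```python
def partition_commands(node_ip: str, pg_port: int = 5432) -> list[str]:
    """Build iptables rules that drop Postgres traffic to and from one
    node, simulating a network partition (hypothetical helper)."""
    return [
        # Drop inbound packets from the isolated node on the Postgres port
        f"iptables -A INPUT -p tcp --dport {pg_port} -s {node_ip} -j DROP",
        # Drop outbound packets to the isolated node on the Postgres port
        f"iptables -A OUTPUT -p tcp --dport {pg_port} -d {node_ip} -j DROP",
    ]

for cmd in partition_commands("10.0.0.7"):
    print(cmd)
```

Dropping packets (rather than stopping the service) is what makes the failure look like a real network partition to the surviving nodes: connections hang and time out instead of being cleanly refused.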

High Availability Scenario Example:

  • Bring the cluster into varied degraded states, with or without a Raft majority.
  • The cluster may be a basic default configuration or have CAMO/commit scopes applied.
  • Some tests apply a continuous load, while a few test only specific transaction statuses.
  • PGD handles transactions differently depending on the cluster's degraded state; sanity and consistency are verified for each state.

Examples:

  • Testing a basic cluster with local/global routing enabled
  • Testing CAMO Split Brain
  • Testing with various group commit scopes applied
  • Proxy/Connection Manager and CLI testing (connectivity, CLI commands, raft and write leader election, failover, switchover)
  • Testing various scenarios while the cluster is under continuous load
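The Raft-majority condition at the heart of these degraded-state scenarios is simple to state: a group retains quorum only while a strict majority of its members can communicate. A sketch of the condition (a hypothetical helper, not PGD's implementation):

```python
def has_raft_majority(total_nodes: int, reachable_nodes: int) -> bool:
    """A Raft group retains quorum only while a strict majority of its
    members can communicate with each other."""
    return reachable_nodes > total_nodes // 2

# In a 3-node group, losing one node keeps quorum; losing two does not.
assert has_raft_majority(3, 2) is True
assert has_raft_majority(3, 1) is False
```

The AV scenarios walk the cluster through both sides of this boundary and assert that transaction handling and routing behave as specified in each state.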

Package-based Upgrade Testing:

  • Covers minor and major version PGD upgrades.
  • PGD and PG objects are created on older-version nodes, which are then upgraded either by adding a new node or by using pg_upgrade.

Package verification on supported OS distributions:
Packages are installed on each supported OS distribution, followed by tests that verify the health of the cluster.

Smoke Tests on Packages

During the package build step, smoke tests verify package sanity on every supported version, platform, and architecture before packages are uploaded to the production package repository.

Performance Testing

EDB's Performance Regression Framework is leveraged to run performance benchmark and regression tests on PGD. The primary goal of these tests is to verify that modifications to the codebase do not introduce performance regressions.

Performance regression tests focus on key metrics such as TPS and latency, using a fixed workload on a standard PGD 6 Essential setup. They evaluate the performance of PGD 6, including the Connection Manager, to ensure that expected performance standards are met.

The tests run weekly, and the results are thoroughly documented. By establishing a baseline for these metrics, each run's performance is compared against it to identify any deviations.
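Conceptually, the baseline comparison reduces to a thresholded check like the following (the 5% tolerance is an illustrative assumption, not EDB's actual criterion):

```python
def check_regression(baseline_tps: float, current_tps: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a performance regression when current throughput falls more
    than `tolerance` (fractional) below the recorded baseline."""
    return current_tps < baseline_tps * (1 - tolerance)

assert check_regression(10_000, 9_800) is False  # within 5% tolerance
assert check_regression(10_000, 9_000) is True   # 10% drop: regression
```

A tolerance band is needed because benchmark runs are noisy; the same check applied symmetrically can also surface unexpected improvements worth investigating.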

Exploratory Testing

The PGD team also conducts exploratory tests during the release testing. The aim of this testing is to uncover any potential issues beyond the regular release testing suite. It is also a part of the internal product quality improvement initiative. Often, these exploratory tests result in creation of automated tests.

Continuous Testing

  • PR check (every PR): Each PR triggers a test check that runs upgrade tests, regression tests, and critical TAP tests, ensuring new commits do not cause breakage. PRs are blocked until all tests pass and the change is peer-reviewed.
  • Nightly/weekly runs (daily/weekly): Regression tests and the entire suite of TAP tests run nightly on all supported PGD-PG version combinations, with multiple runs across different configuration matrices. AV scenario testing is split roughly 70% daily and 30% weekly; performance tests run weekly.
  • Pre-release runs (daily/pre-release): Rigorous upgrade tests and AVs run on all supported distributions before each release.

Conclusion

The rigorous and extensive testing ensures that PGD is reliable and high-performance, meeting the needs of mission-critical customer applications.

 

About the authors:

Bharat Telange, Amruta Deolasee, and Abhijit Save are part of the PGD testing team. The team has extensive experience with PGD testing and has performed various types of tests for multiple PGD product releases.
