- How to Get the Best Out of PostgreSQL Logs | EDB (enterprisedb.com)
- PostgreSQL Logs: How to Enable Logging [Config Tutorial] (sematext.com)
- Mastering PostgreSQL Log Management - Mydbops | Blog
- PostgreSQL Logging: Everything You Need to Know | Better Stack Community
- Syslog logging with journald in PostgreSQL (cybertec-postgresql.com)
Tuesday, October 01, 2024
PostgreSQL - Logging
Thursday, September 19, 2024
PostgreSQL - Setting Parameters for Production
- shared_buffers
- Memory the database server uses for shared memory buffers
- 25% of physical RAM if physical RAM > 1GB
- Larger settings for shared_buffers usually require a corresponding increase in max_wal_size; enabling huge_pages is also recommended for large settings
- wal_buffers
- The amount of shared memory used for WAL data that has not yet been written to disk
- Default value: -1, which auto-tunes to 1/32 (about 3%) of shared_buffers, but not less than 64kB nor more than the size of one WAL segment (typically 16MB)
- The contents of the WAL buffers are written out to disk at every transaction commit, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases
- work_mem
- Used for sorting tuples by ORDER BY and DISTINCT operations, and for joining tables by merge-join and hash-join operations.
- Be careful with high values: work_mem is allocated per sort or hash operation, not per query, so a single complex query and many concurrent sessions can each consume several multiples of it, potentially using a large amount of RAM during heavy workloads
- A common starting formula: Total RAM * 0.25 / max_connections
- temp_buffers
- Used for temporary buffers within each database session and for storing temporary tables.
- This setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session
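Putting the sizing guidance above together, a quick arithmetic sketch for a hypothetical server with 16 GB of RAM and max_connections = 200 (the numbers are assumptions, not recommendations):

```shell
# Hypothetical host: 16 GB RAM, max_connections = 200
total_ram_mb=$((16 * 1024))
max_connections=200

# shared_buffers: 25% of physical RAM (when RAM > 1 GB)
shared_buffers_mb=$((total_ram_mb / 4))

# work_mem heuristic from the notes: Total RAM * 0.25 / max_connections
work_mem_mb=$((total_ram_mb / 4 / max_connections))

echo "shared_buffers = ${shared_buffers_mb}MB"   # 4096MB
echo "work_mem = ${work_mem_mb}MB"               # 20MB
```

wal_buffers and temp_buffers can usually stay at their defaults; the -1 auto-tuning for wal_buffers already scales with shared_buffers.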
Thursday, September 12, 2024
PostgreSQL - Streaming Replication (Hot Standby)
- Disable SELinux: SELINUX=disabled (in /etc/selinux/config)
- Stop Firewall
- Config /etc/hosts
- 192.168.56.111 postgres-master.taolaoxibup.com postgres-master
- 192.168.56.222 postgres-slave.taolaoxibup.com postgres-slave
- Setup SSH KEY-PAIR
- ssh-keygen -t rsa
- ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@192.168.56.[111/222]
- Replication slots provide an automated way to ensure that the primary does not remove WAL segments until they have been received by all standbys, and that the primary does not remove rows which could cause a recovery conflict even when the standby is disconnected.
- In lieu of using replication slots, it is possible to prevent the removal of old WAL segments using wal_keep_size, or by storing the segments in an archive using archive_command or archive_library
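The two WAL-retention approaches above translate into a few lines of configuration; a sketch, where the slot name standby1 and the archive path are assumptions:

```shell
# On the primary, create a physical replication slot (run in psql):
#   SELECT pg_create_physical_replication_slot('standby1');

# On the standby, name that slot in postgresql.conf so the primary
# retains WAL for it even while the standby is disconnected:
#   primary_slot_name = 'standby1'

# Alternative without slots: keep a fixed amount of WAL on the primary:
#   wal_keep_size = '1GB'

# ...or archive segments instead (postgresql.conf on the primary):
#   archive_mode = on
#   archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
```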
Wednesday, September 11, 2024
PostgreSQL - Backup and Restore
- pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers)
- pg_dump only dumps a single database.
- p - plain: Output a plain-text SQL script file (the default). Use psql to restore this script.
- c - custom: Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
- d - directory: Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and large object being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed by default using gzip and also supports parallel dumps.
- t - tar : Output a tar-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore
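The four format letters map onto pg_dump's -F flag; a sketch with a hypothetical database mydb (the commands are assembled as strings rather than executed, since running them needs a live server):

```shell
db=mydb   # hypothetical database name

# -Fp plain (default): SQL script, restore with psql -f
plain="pg_dump -Fp -f ${db}.sql ${db}"

# -Fc custom: compressed single-file archive, restore with pg_restore
custom="pg_dump -Fc -f ${db}.dump ${db}"

# -Fd directory: one file per table, parallel dump/restore with -j
directory="pg_dump -Fd -j 4 -f ${db}_dir ${db}"

# -Ft tar: uncompressed; extracting it yields a directory-format archive
tar_format="pg_dump -Ft -f ${db}.tar ${db}"

printf '%s\n' "$plain" "$custom" "$directory" "$tar_format"
```

Note that parallel dumps (-j) are only supported by the directory format.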
Monday, September 09, 2024
PostgreSQL - pglogical 2 extension
- The pglogical 2 extension provides logical streaming replication for PostgreSQL, using a publish/subscribe model.
- To use pglogical, both the provider and subscriber must be running PostgreSQL 9.4 or newer.
- Use cases supported are:
- Upgrades between major versions (given the above restrictions)
- Full database replication
- Selective replication of sets of tables using replication sets
- Selective replication of table rows at either publisher or subscriber side (row_filter)
- Selective replication of table columns at publisher side
- Data gather/merge from multiple upstream servers
- Architectural details:
- pglogical works on a per-database level, not whole server level like physical streaming replication
- One Provider may feed multiple Subscribers without incurring additional disk write overhead
- One Subscriber can merge changes from several origins and detect conflict between changes with automatic and configurable conflict resolution (some, but not all aspects required for multi-master).
- Cascading replication is implemented in the form of changeset forwarding.
- Limitations and restrictions
- To replicate multiple databases you must set up individual provider/subscriber relationships for each. There is no way to configure replication for all databases in a PostgreSQL install at once
- UPDATEs and DELETEs cannot be replicated for tables that lack a PRIMARY KEY or other valid replica identity such as a UNIQUE constraint. Replication has no way to find the tuple that should be updated/deleted since there is no unique identifier.
- Automatic DDL replication is not supported. Managing DDL so that the provider and subscriber database(s) remain compatible is the responsibility of the user.
- The state of sequences added to replication sets is replicated periodically and not in real-time.
- PGLogical can replicate across PostgreSQL major versions. Despite that, long term cross-version replication is not considered a design target, though it may often work. Issues where changes are valid on the provider but not on the subscriber are more likely to arise when replicating across versions.
- It is safer to replicate from an old version to a newer version since PostgreSQL maintains solid backward compatibility but only limited forward compatibility. Initial schema synchronization is only supported when replicating between same version of PostgreSQL or from lower version to higher version
- ...
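The provider/subscriber wiring above looks roughly like this (a sketch: host names, DSNs, database and node names are assumptions, and both instances need pglogical in shared_preload_libraries):

```shell
# Provider side:
psql -d appdb <<'SQL'
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'provider1',
    dsn := 'host=provider-host dbname=appdb');
-- add every table in schema public to the default replication set
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);
SQL

# Subscriber side:
psql -d appdb <<'SQL'
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'subscriber1',
    dsn := 'host=subscriber-host dbname=appdb');
SELECT pglogical.create_subscription(
    subscription_name := 'sub1',
    provider_dsn := 'host=provider-host dbname=appdb');
SQL
```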
Wednesday, September 04, 2024
PostgreSQL - Installation
Thursday, August 29, 2024
PostgreSQL - Upgrade Methods
- Upgrade with downtime
- Using pg_dump and pg_restore
- Using pg_upgrade
- pg_upgrade (formerly called pg_migrator) allows data stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/restore typically required for major version upgrades, e.g., from 12.14 to 13.10 or from 14.9 to 15.5. It is not required for minor version upgrades, e.g., from 12.7 to 12.8 or from 14.1 to 14.5
- pg_upgrade supports upgrades from 9.2.X and later to the current major release of PostgreSQL, including snapshot and beta releases
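A typical pg_upgrade invocation, sketched for an upgrade from version 14 to 16 (all paths are assumptions; run the dry run first against the stopped clusters):

```shell
# Dry run: verify the two clusters are compatible before touching data
/usr/pgsql-16/bin/pg_upgrade \
    --old-bindir=/usr/pgsql-14/bin \
    --new-bindir=/usr/pgsql-16/bin \
    --old-datadir=/var/lib/pgsql/14/data \
    --new-datadir=/var/lib/pgsql/16/data \
    --check

# The same command without --check performs the actual upgrade;
# add --link to use hard links instead of copying data files.
```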
- Using pg_dumpall
- Upgrading with no/near zero downtime using logical/trigger based replication
- PostgreSQL logical replication (v10 and above)
- Also known as transactional replication: the subscriber initially receives a copy of the replicated database object from the publisher and then pulls any subsequent changes to the same object as they occur in real time.
- The typical use-cases for logical replication are:
- Sending incremental changes in a single database or a subset of a database to subscribers as they occur.
- Firing triggers for individual changes as they arrive on the subscriber.
- Consolidating multiple databases into a single one (for example for analytical purposes).
- Replicating between different major versions of PostgreSQL.
- Replicating between PostgreSQL instances on different platforms (for example Linux to Windows)
- Giving access to replicated data to different groups of users.
- Sharing a subset of the database between multiple databases.
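In native logical replication the publish/subscribe model is expressed with a publication and a subscription; a minimal sketch (database, names and connection string are assumptions, and wal_level must be 'logical' on the publisher):

```shell
# On the publisher (run in psql):
psql -d appdb -c "CREATE PUBLICATION app_pub FOR ALL TABLES;"

# On the subscriber; the tables must already exist there,
# since DDL is not replicated:
psql -d appdb -c "CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=publisher-host dbname=appdb user=repuser'
    PUBLICATION app_pub;"
```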
Thursday, July 25, 2024
PostgreSQL - Extensions
- EDB - Postgres extensions available by deployment
- Appendix F. Additional Supplied Modules and Extensions
- Postgres extensions
- Top 9 PostgreSQL Extensions 2024 (bytebase.com)
- Key PostgreSQL Extensions Every Developer Must Know When Migrating from Oracle. | Database and Migration Insights (wordpress.com)
- Using Foreign Data Wrappers to access remote PostgreSQL and Oracle databases | EDB (enterprisedb.com)
- PostgreSQL Insider - How to link to Oracle databases using oracle_fdw (part 1) (fastware.com)
- PostgreSQL Insider - How to link to Oracle databases using oracle_fdw (part 2) (fastware.com)
Wednesday, July 24, 2024
PostgreSQL - Vacuum Processing
Vacuum processing is a maintenance process that facilitates the persistent operation of PostgreSQL. Its two main tasks are removing dead tuples and freezing transaction IDs.
- MVCC
- VACUUM
- HOT
- PostgreSQL VACUUM Guide and Best Practices | EDB (enterprisedb.com)
- Deep dive into PostgreSQL VACUUM garbage collector | Google Cloud Blog
- PostgreSQL Vacuuming Command to Optimize Database Performance (percona.com)
- Monitoring PostgreSQL VACUUM Processes | Datadog (datadoghq.com)
- VACUUM Processing :: Hironobu SUZUKI @ InterDB
Wednesday, July 10, 2024
EDB - Replication Server (xDB) - Important Concepts
Deleting all rows in a source table executed by the SQL TRUNCATE command results in replication to the target tables only if the log-based method of synchronization replication is used. If the trigger-based method of synchronization replication is used, executing the TRUNCATE command on a source table doesn't replicate the effect to the target tables. You must perform a snapshot from the source table to the target tables if you use the trigger-based method.
Tuesday, July 02, 2024
EDB - Replication Server (xDB) - Installation
Introduction
- From Oracle to PostgreSQL
- From Oracle to EDB Postgres Advanced Server
- From SQL Server to PostgreSQL
- From SQL Server to EDB Postgres Advanced Server
- From EDB Postgres Advanced Server to Oracle
- From PostgreSQL to SQL Server
- From EDB Postgres Advanced Server to SQL Server
- Between PostgreSQL and EDB Postgres Advanced Server
- From PostgreSQL to Oracle (WAL mode)
- From PostgreSQL to Oracle (trigger mode)
EDB - PEM - Installation
- You can install PEM on a single server, or you can install the web application server and the backend database on two separate servers. In this post, we will install PEM on a single server.
- Because we are installing without internet access, we must configure a local yum repository. Please check via Create Local HTTP Yum/DNF Repository
Please specify PEM web server (openssl) certificate subject, template: /C=CountryCode/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname/emailAddress=EmailAddress
/C=VN/ST=HoChiMinh/L=HCM/O=CHERRY/OU=VHUD/CN=edb-pem/emailAddress=khochiutronglong@gmail.com
Monday, June 24, 2024
EDB - pgBackRest - Part III: RESTORE
- The restore command automatically defaults to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf (e.g. repo1 will be checked before repo2). To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
- Restoration Scenarios:
- Restoring the backup on a different host
- Restoring a single database from the Postgres cluster
- Restoring the backup to a specific point in time
- Restoring only the delta changes
- Restoring the backup on a different host to start as a streaming standby
- Setup Passwordless SSH Connection between Backup Server (192.168.56.79) and Database Server (192.168.56.77)
- Backup Server
- ssh-keygen -t rsa
- ssh-copy-id enterprisedb@192.168.56.77
- Database Server
- ssh-keygen -t rsa
- ssh-copy-id enterprisedb@192.168.56.79
- Database Server - Setup pgBackRest Repository
EDB - pgBackRest - Part II: Configuration & Backup
- A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc. Most db servers will only have one PostgreSQL database cluster and therefore one stanza, whereas backup servers will have a stanza for every database cluster that needs to be backed up.
- The stanza-delete command removes data in the repository associated with a stanza. Use this command with caution — it will permanently remove all backups and archives from the pgBackRest repository for the specified stanza.
- Enable archive mode ON
- Set archive_command = 'pgbackrest --stanza=employees archive-push %p'
- Get data_directory = '/u01/edb/as16/data'
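Putting the pieces together, a minimal pgbackrest.conf on the database server might look like this (the stanza name and data directory follow this post; repository path and retention are assumptions):

```shell
# /etc/pgbackrest/pgbackrest.conf (database server, local repository):
#   [global]
#   repo1-path=/var/lib/pgbackrest
#   repo1-retention-full=2
#
#   [employees]
#   pg1-path=/u01/edb/as16/data

# Then create and verify the stanza:
pgbackrest --stanza=employees stanza-create
pgbackrest --stanza=employees check
```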
Saturday, June 15, 2024
EDB - pgBackRest - Part I: Installation
- User is enterprisedb, not postgres
- Grant the necessary privileges to the enterprisedb user
Friday, June 14, 2024
EDB - Barman - Part IV: Remote Recovery Database
Note:
- The recover command is used to recover a whole server.
- Remote recovery: add the --remote-ssh-command option to the invocation of the recover command. Doing this allows Barman to execute the copy on a remote server, using the provided command to connect to the remote host. The SSH connection between Barman and the remote host must use the public key exchange authentication method.
- Barman wraps PostgreSQL's Point-in-Time Recovery (PITR), allowing you to specify a recovery target, either as a timestamp, as a restore label, or as a transaction ID.
- Barman 2.4 also adds the --standby-mode option for the recover command which, if specified, properly configures the recovered instance as a standby by creating a standby.signal file (for PostgreSQL versions 12 and above), or by adding standby_mode = on to the generated recovery configuration (for versions lower than 12).
- Barman allows you to specify a target timeline for recovery using the --target-tli option. This can be set to a numeric timeline ID or one of the special values latest (to recover to the most recent timeline in the WAL archive) and current (to recover to the timeline which was current when the backup was taken). If this option is omitted then PostgreSQL versions 12 and above will recover to the latest timeline and PostgreSQL versions below 12 will recover to the current timeline.
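Combining the notes above, a remote point-in-time recovery might be invoked like this (the SSH target, recovery time and destination path are assumptions; the server name follows this post):

```shell
barman recover \
    --remote-ssh-command "ssh postgres@192.168.56.90" \
    --target-time "2024-06-14 10:00:00" \
    employees4streaming latest /u01/edb/as16/data

# Add --standby-mode to configure the recovered instance as a standby,
# or --target-tli latest to follow the most recent timeline.
```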
employees4streaming 20240614T102115 - Fri Jun 14 10:21:22 2024 - Size: 979.7 MiB - WAL Size: 0 B (tablespaces: tbs_ihrp_data:/u01/ihrp/data, tbs_ihrp_index:/u01/ihrp/index)
employees4streaming 20240613T161509 - Thu Jun 13 16:15:17 2024 - Size: 960.1 MiB - WAL Size: 9.8 MiB (tablespaces: tbs_ihrp_data:/u01/ihrp/data, tbs_ihrp_index:/u01/ihrp/index)
employees4streaming 20240613T153802 - Thu Jun 13 15:38:09 2024 - Size: 921.1 MiB - WAL Size: 19.2 MiB (tablespaces: tbs_ihrp_data:/u01/ihrp/data, tbs_ihrp_index:/u01/ihrp/index)
[barman@barman-server ~]$ ls -la /barman/backup/employees4streaming/streaming/
[barman@barman-server ~]$ ls -la /barman/backup/employees4streaming/wals/0000000100000001/
total 29768
drwxr-xr-x. 2 barman barman 4096 Jun 14 10:22 .
drwxr-xr-x. 3 barman barman 45 Jun 14 10:22 ..
-rw-------. 1 barman barman 16467 Jun 13 15:38 000000010000000100000004
-rw-------. 1 barman barman 5838855 Jun 13 15:50 000000010000000100000005
-rw-------. 1 barman barman 5812523 Jun 13 16:14 000000010000000100000006
-rw-------. 1 barman barman 5846212 Jun 13 16:14 000000010000000100000007
-rw-------. 1 barman barman 2635336 Jun 13 16:15 000000010000000100000008
-rw-------. 1 barman barman 16469 Jun 13 16:15 000000010000000100000009
-rw-------. 1 barman barman 16393 Jun 13 16:15 00000001000000010000000A
-rw-------. 1 barman barman 16570 Jun 13 16:48 00000001000000010000000B
-rw-------. 1 barman barman 5838885 Jun 14 09:58 00000001000000010000000C
-rw-------. 1 barman barman 4225404 Jun 14 10:16 00000001000000010000000D
-rw-------. 1 barman barman 16490 Jun 14 10:16 00000001000000010000000E
-rw-------. 1 barman barman 16486 Jun 14 10:17 00000001000000010000000F
-rw-------. 1 barman barman 16489 Jun 14 10:17 000000010000000100000010
-rw-------. 1 barman barman 16485 Jun 14 10:19 000000010000000100000011
-rw-------. 1 barman barman 16488 Jun 14 10:19 000000010000000100000012
-rw-------. 1 barman barman 16488 Jun 14 10:20 000000010000000100000013
-rw-------. 1 barman barman 16489 Jun 14 10:20 000000010000000100000014
-rw-------. 1 barman barman 16484 Jun 14 10:21 000000010000000100000015
-rw-------. 1 barman barman 16465 Jun 14 10:21 000000010000000100000016
Thursday, June 13, 2024
EDB - Barman - Part III: Backup via Streaming protocol
In Part I, we already configured backup via the Streaming protocol. In this section, we will back up the database using this method.
Note:
- The reuse_backup option can't be used with the postgres backup method at this time
- If a backup has been compressed using the backup_compression option, then barman recover is able to uncompress the backup on recovery. Use the recovery_staging_path option to choose a suitable location for the staging directory used during decompression. Set recovery_staging_path in the global/server config or use the --recovery-staging-path option with the barman recover command
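For example, the staging location can be set globally or per invocation (the staging path and destination are assumptions):

```shell
# In barman.conf (global section or per server):
#   recovery_staging_path = /var/tmp/barman-staging

# Or override it for a single recovery:
barman recover --recovery-staging-path /var/tmp/barman-staging \
    employees4streaming latest /u01/edb/as16/data
```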
1. Check information
Database server
slot_name | barman
plugin |
slot_type | physical
datoid |
database |
temporary | f
active | t
active_pid | 11580
xmin |
catalog_xmin |
restart_lsn | 0/C9000000
confirmed_flush_lsn |
wal_status | reserved
safe_wal_size |
two_phase | f
conflicting |
archive_command
---------------------------------------------------------
barman-wal-archive barman-server employees4streaming %p
(1 row)
drwx------. 2 enterprisedb enterprisedb 8192 Jun 13 11:01 archive_status
-rw-------. 1 enterprisedb enterprisedb 16777216 Jun 13 11:01 0000000100000000000000CA
Wednesday, June 12, 2024
EDB - Barman - Part II: Backup via rsync/SSH
In Part I, we already configured backup via rsync/SSH. In this section, we will back up the database using this method.
Note: Barman implements incremental backup through a global/server option called reuse_backup, that transparently manages the barman backup command. It accepts three values:
- off: standard full backup (default)
- link: incremental backup, by reusing the last backup for a server and creating a hard link of the unchanged files (for backup space and time reduction)
- copy: incremental backup, by reusing the last backup for a server and creating a copy of the unchanged files (just for backup time reduction)
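reuse_backup can be set per server in the Barman configuration or passed on the command line; a sketch for the employees4ssh server defined in this post:

```shell
# In the [employees4ssh] section of barman.conf (or a file in /etc/barman.d):
#   reuse_backup = link

# Or override it for a single run:
barman backup --reuse-backup=link employees4ssh
```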
1. Validate configuration
employees4ssh - Backup employees database via rsync/SSH
[barman@barman-server ~]$ barman status employees4ssh
Server employees4ssh:
PostgreSQL: OK
superuser or standard user with backup privileges: OK
wal_level: OK
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
backup minimum size: OK (0 B)
wal maximum age: OK (no last_wal_maximum_age provided)
wal size: OK (0 B)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 0 backups, expected at least 0)
ssh: OK (PostgreSQL server)
systemid coherence: OK
archive_mode: OK
archive_command: OK
continuous archiving: OK
archiver errors: OK
Server employees4ssh:
active: True
archive_command: barman-wal-archive barman-server employees4ssh %p
archive_mode: on
backup_directory: /barman/backup/employees4ssh
backup_method: rsync
basebackups_directory: /barman/backup/employees4ssh/base
conninfo: host=edb-saigon.taolaoxibup.com user=barman dbname=employees
current_lsn: 0/8DBA0208
current_size: 756601559
current_xlog: 00000001000000000000008D
data_directory: /u01/edb/as16/data
description: Backup employees database via rsync/SSH
errors_directory: /barman/backup/employees4ssh/errors
last_archived_time: 2024-06-12 14:18:51.796703+07:00
last_archived_wal: 00000001000000000000008C
name: employees4ssh
parallel_jobs: 4
ssh_command: ssh enterprisedb@edb-saigon.taolaoxibup.com
streaming_backup_name: barman_streaming_backup
streaming_conninfo: host=edb-saigon.taolaoxibup.com user=barman dbname=employees
streaming_wals_directory: /barman/backup/employees4ssh/streaming
synchronous_standby_names: ['']
wal_level: replica
wal_retention_policy: main
wal_streaming_conninfo: None
wals_directory: /barman/backup/employees4ssh/wals
xlog_segment_size: 16777216
Monday, June 10, 2024
EDB - Barman - Part I: Installation and Configuration
1. System information
Barman server
- IP : 192.168.56.101
- Barman version: 3.10.0 Barman by EnterpriseDB (www.enterprisedb.com)
- The pg_basebackup, pg_receivewal/pg_receivexlog, and pg_verifybackup binaries are installed with the PostgreSQL client/server packages
- [root@barman-server ~]# dnf install barman barman-cli edb-as16-server
Database server
- IP: 192.168.56.102
- Database version: PostgreSQL 16.3 (EnterpriseDB Advanced Server 16.3.0) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3), 64-bit
- Package barman-cli:
- Using barman-wal-archive instead of rsync/SSH reduces the risk of data corruption of the shipped WAL file on the Barman server. When using rsync/SSH as archive_command to ship a WAL file, there is no mechanism that guarantees that the content of the file is flushed and fsynced to disk at the destination.
- [root@edb-saigon ~]# dnf install barman-cli
- After configuring "employees4ssh" on barman-server, we can check:
- [enterprisedb@edb-saigon ~]$ barman-wal-archive --test barman-server employees4ssh /barman/backup/employees4ssh/incoming
Ready to accept WAL files for the server employees4ssh
- Database parameters:
- archive_mode = on
- archive_command = ' test ! -f /u01/edb/as16/data/archivelog/%f && cp %p /u01/edb/as16/data/archivelog/%f && rsync -a %p barman@192.168.56.101:/barman/backup/employees4ssh/incoming/%f'
- Or archive_command = 'barman-wal-archive barman-server employees4ssh %p'
- wal_level = replica (optional)
- max_replication_slots = 4
- max_wal_senders = 4
- Database users for backup by barman:
- barman user
- [enterprisedb@edb-saigon ~]$ createuser -s -P barman
- streaming_barman user
- [enterprisedb@edb-saigon ~]$ createuser -P --replication streaming_barman
- pg_hba.conf
- host all all 0.0.0.0/0 trust
- host replication all 0.0.0.0/0 trust
Friday, June 07, 2024
EDB - Installing EDB Postgres Advanced Server 16
[root@edb-saigon ~]# curl -1sLf 'https://downloads.enterprisedb.com/XYZ.../enterprise/setup.rpm.sh' | sudo -E bash
... ... ...
... ... ...
OK: Updating the dnf cache to fetch the new repository metadata ...
OK: The repository has been installed successfully - You're ready to rock!
To install EPAS v16, we need the lz4 package, so we must configure the Oracle Linux yum repository.
[ol9_baseos_latest]
name=Oracle Linux 9 BaseOS Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL9/baseos/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol9_appstream]
name=Oracle Linux 9 AppStream ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL9/appstream/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol9_developer_EPEL]
name=Oracle Linux 9 Developer ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL9/developer/EPEL/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
[root@edb-nhatrang ~]# dnf repolist
2. Install the software packages
uid=986(enterprisedb) gid=983(enterprisedb) groups=983(enterprisedb)