Thursday, September 19, 2024

PostgreSQL - Setting Parameters for Production

There are several key parameters that should be adjusted for optimal performance, security, and stability. The default settings are usually designed for small systems and need tuning based on your hardware, workload, and specific use cases. Here's a breakdown of important parameters to adjust:

1. Memory Settings
  • shared_buffers
    • Memory the database server uses for shared memory buffers
    • 25% of physical RAM if physical RAM > 1GB
    • Larger settings for shared_buffers usually require a corresponding increase in max_wal_size and setting huge_pages
  • wal_buffers
    • The amount of shared memory used for WAL data that has not yet been written to disk
    • Default value: -1, which selects 1/32 (about 3%) of shared_buffers, but not less than 64kB nor more than the size of one WAL segment (typically 16MB)
    • The contents of the WAL buffers are written out to disk at every transaction commit, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases
  • work_mem
    • Used for sorting tuples by ORDER BY and DISTINCT operations, and for joining tables by merge-join and hash-join operations.
    • Be careful with high values, as this is allocated per query, potentially consuming a large amount of RAM during heavy workloads
    • A common formula to start from: work_mem = (Total RAM * 0.25) / max_connections
  • temp_buffers
    • Used for temporary buffers within each database session and for storing temporary tables.
    • This setting can be changed within individual sessions, but only before the first use of temporary tables within the session; subsequent attempts to change the value will have no effect on that session
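
For illustration only (assuming a 32GB-RAM server with max_connections = 200; adjust to your own hardware and workload), these parameters could be set like this:

postgres=# ALTER SYSTEM SET shared_buffers = '8GB';    -- ~25% of 32GB RAM
postgres=# ALTER SYSTEM SET wal_buffers = '16MB';      -- or leave at -1 for auto-tuning
postgres=# ALTER SYSTEM SET work_mem = '40MB';         -- (32GB * 0.25) / 200 connections
postgres=# ALTER SYSTEM SET temp_buffers = '16MB';
postgres=# SELECT pg_reload_conf();                    -- shared_buffers and wal_buffers still require a restart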

Thursday, September 12, 2024

PostgreSQL - Streaming Replication (Hot Standby)

1. General information

Master Node
IP: 192.168.56.111
Hostname: postgres-master
Version: postgres (PostgreSQL) 16.4
$PGDATA=/var/lib/pgsql/16/data

postgres=# \db
                    List of tablespaces
    Name    |  Owner   |              Location              
------------+----------+------------------------------------
 dvd_data   | tucdt    | /var/lib/pgsql/dvdrental_tbs/data
 dvd_index  | tucdt    | /var/lib/pgsql/dvdrental_tbs/index
 pg_default | postgres | 
 pg_global  | postgres | 
(4 rows)

Slave Node
IP: 192.168.56.222
Hostname: postgres-slave
Version: postgres (PostgreSQL) 16.4
$PGDATA=/var/lib/pgsql/16/data    // empty

2. Preparation on Master/Slave
  • Disable SELinux: set SELINUX=disabled in /etc/selinux/config
  • Stop Firewall
  • Config /etc/hosts
    • 192.168.56.111  postgres-master.taolaoxibup.com postgres-master
    • 192.168.56.222  postgres-slave.taolaoxibup.com postgres-slave
  • Setup SSH KEY-PAIR
    • ssh-keygen -t rsa
    • ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@192.168.56.[111/222]
3. Master - Configuration
3.1 Create user for replication
postgres=# CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'P@ssw0rd';

3.2 Create replication slot
  • Replication slots provide an automated way to ensure that the primary does not remove WAL segments until they have been received by all standbys, and that the primary does not remove rows which could cause a recovery conflict even when the standby is disconnected.
  • In lieu of using replication slots, it is possible to prevent the removal of old WAL segments using wal_keep_size, or by storing the segments in an archive using archive_command or archive_library
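
For example (the slot name is illustrative), a physical slot could be created on the master and verified with:

postgres=# SELECT pg_create_physical_replication_slot('standby1_slot');
postgres=# SELECT slot_name, slot_type, active FROM pg_replication_slots;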

Wednesday, September 11, 2024

PostgreSQL - Backup and Restore

  • pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers)
  • pg_dump only dumps a single database.
[postgres@postgres-master ~]$ pg_dump --help
pg_dump dumps a database as a text file or to other formats.

Usage:
  pg_dump [OPTION]... [DBNAME]

General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format (custom, directory, tar, plain text (default))
  • p - plain: Output a plain-text SQL script file (the default). Use psql to restore this script.
  • c - custom: Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.
  • d - directory: Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and large object being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip, lz4, or zstd tools. This format is compressed by default using gzip and also supports parallel dumps.
  • t - tar : Output a tar-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore
  -j, --jobs=NUM               use this many parallel jobs to dump    // only supported by the directory format -Fd

Options controlling the output content:
  -a, --data-only              dump only the data, not the schema    // data only, no schema (DDL)
  -c, --clean                  clean (drop) database objects before recreating
  -n, --schema=PATTERN         dump the specified schema(s) only
  -N, --exclude-schema=PATTERN do NOT dump the specified schema(s)
  -s, --schema-only            dump only the schema, no data
  -t, --table=PATTERN          dump only the specified table(s)
  -T, --exclude-table=PATTERN  do NOT dump the specified table(s)

Connection options:
  -d, --dbname=DBNAME      database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)

Example:
[postgres@postgres-master ~]$ pg_dump dvdrental -j2 -Fd -f /source/dumpfile
[postgres@postgres-master ~]$ pg_dump dvdrental -f /source/dvdrental.sql
[postgres@postgres-master ~]$ pg_dump dvdrental -t cherry -t actor > cherry_actor.sql
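
For completeness, the matching restores might look like this (paths and database names follow the examples above; the target database is assumed to already exist):

[postgres@postgres-master ~]$ pg_restore -d dvdrental -j2 /source/dumpfile      # custom/directory format
[postgres@postgres-master ~]$ psql -d dvdrental -f /source/dvdrental.sql        # plain-text format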

Monday, September 09, 2024

PostgreSQL - pglogical 2 extension

1. Overview 
  • The pglogical 2 extension provides logical streaming replication for PostgreSQL, using a publish/subscribe model.
  • To use pglogical, both the provider and the subscriber must be running PostgreSQL 9.4 or newer.
  • Use cases supported are:
    • Upgrades between major versions (given the above restrictions)
    • Full database replication
    • Selective replication of sets of tables using replication sets
    • Selective replication of table rows at either publisher or subscriber side (row_filter)
    • Selective replication of table columns at publisher side
    • Data gather/merge from multiple upstream servers
  • Architectural details:
    • pglogical works on a per-database level, not whole server level like physical streaming replication
    • One Provider may feed multiple Subscribers without incurring additional disk write overhead
    • One Subscriber can merge changes from several origins and detect conflict between changes with automatic and configurable conflict resolution (some, but not all aspects required for multi-master).
    • Cascading replication is implemented in the form of changeset forwarding.
  • Limitations and restrictions
    • To replicate multiple databases you must set up individual provider/subscriber relationships for each. There is no way to configure replication for all databases in a PostgreSQL install at once
    • UPDATEs and DELETEs cannot be replicated for tables that lack a PRIMARY KEY or other valid replica identity such as a UNIQUE constraint. Replication has no way to find the tuple that should be updated/deleted since there is no unique identifier.
    • Automatic DDL replication is not supported. Managing DDL so that the provider and subscriber database(s) remain compatible is the responsibility of the user. 
    • The state of sequences added to replication sets is replicated periodically and not in real-time.
    • PGLogical can replicate across PostgreSQL major versions. Despite that, long term cross-version replication is not considered a design target, though it may often work. Issues where changes are valid on the provider but not on the subscriber are more likely to arise when replicating across versions. 
    • It is safer to replicate from an old version to a newer version since PostgreSQL maintains solid backward compatibility but only limited forward compatibility. Initial schema synchronization is only supported when replicating between same version of PostgreSQL or from lower version to higher version
    • ...
2. General information

Master Node
IP: 192.168.56.111
Hostname: postgres-master.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)

postgres=# select datname from pg_database ;
  datname  
-----------
 postgres
 dvdrental        // we will replicate this database
 template1
 template0
(4 rows)

Slave Node
IP: 192.168.56.222
Hostname: postgres-slave.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)

3. Installation on Master/Slave nodes
# dnf install -y postgresql16-server postgresql16-contrib
# dnf install pglogical_16
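
These notes stop before the provider/subscriber setup. As a hedged sketch, pglogical typically needs settings like the following in postgresql.conf on both nodes (values are illustrative), plus the extension created in the replicated database:

wal_level = 'logical'
max_worker_processes = 10
max_replication_slots = 10
max_wal_senders = 10
shared_preload_libraries = 'pglogical'

dvdrental=# CREATE EXTENSION pglogical;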

Wednesday, September 04, 2024

PostgreSQL - Installation

A. Online - Installation from Binaries

1. Install the repository RPM
sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-9-x86_64/pgdg-redhat-repo-latest.noarch.rpm

2. Disable the built-in PostgreSQL module
sudo dnf -qy module disable postgresql

3. Install PostgreSQL
sudo dnf install -y postgresql16-server

4. Optionally initialize the database and enable automatic start
sudo /usr/pgsql-16/bin/postgresql-16-setup initdb
sudo systemctl enable postgresql-16
sudo systemctl start postgresql-16

5. Install extension packages
# dnf install postgresql16-contrib.x86_64
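
A quick sanity check that the instance is up (peer authentication for the postgres OS user is the default):

sudo -u postgres psql -c "SELECT version();"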

Thursday, August 29, 2024

MongoDB - Performance Tuning in Linux for DBAs

There are some important elements that must be considered carefully when tuning database performance.

1. File Systems

  • Use XFS, as it generally performs better with MongoDB.
  • With the WiredTiger storage engine, using XFS is strongly recommended for data bearing nodes to avoid performance issues that may occur when using EXT4 with WiredTiger.
  • Set 'noatime' on MongoDB data volumes in /etc/fstab
2. Swappiness and Memory
  • “Swappiness” is a Linux kernel setting that influences the behavior of the Virtual Memory manager. The vm.swappiness setting ranges from 0 to 100: the higher the value, the more strongly it prefers swapping memory pages to disk over dropping pages from RAM
  • Set vm.swappiness to 1, not 0
  • Allocate as much RAM as possible to the WiredTiger cache (storage.wiredTiger.engineConfig.cacheSizeGB)
3. Transparent Huge Pages (THP)
  • Transparent Huge Pages (THP) is a Linux memory management feature that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages.
  • However, database workloads often perform poorly with THP enabled, because they tend to have sparse rather than contiguous memory access patterns. When running MongoDB on Linux, THP should be disabled for best performance
4. Connections
  • Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case.
  • Specify connection pool settings in these locations:
    • The MongoDB URI
    • Your application's MongoClient instance
    • Your application framework's configuration files
  • Some parameters control connections in the /etc/mongod.conf file:
    • net.maxIncomingConnections
    • parameters set under the setParameter: option
5. Storage Layout
  • For improved performance, consider separating your database's data, journal, and logs onto different storage devices, based on your application's access and write patterns.
  • Mount the components as separate filesystems and use symbolic links to map each component's path to the device storing it.
  • For the WiredTiger storage engine, you can also store the indexes on a different storage device by configuring the storage.wiredTiger.engineConfig.directoryForIndexes parameter.
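
A hedged sketch of the OS-level settings mentioned above (device names, mount points, and file names are illustrative):

# /etc/fstab: mount the MongoDB data volume with noatime
/dev/sdb1  /u01/mongoDB  xfs  defaults,noatime  0 0

# Set swappiness to 1 (persist it, then apply immediately)
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-mongodb.conf
sysctl -w vm.swappiness=1

# Disable THP for the current boot (persist via a systemd unit or tuned profile)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag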

PostgreSQL - Upgrade Methods

There are several database upgrade methods, such as:
  • Upgrade with downtime
    • Using pg_dump and pg_restore
    • Using pg_upgrade (a minimal invocation is sketched after this list)
      • pg_upgrade (formerly called pg_migrator) allows data stored in PostgreSQL data files to be upgraded to a later PostgreSQL major version without the data dump/restore typically required for major version upgrades, e.g., from 12.14 to 13.10 or from 14.9 to 15.5. It is not required for minor version upgrades, e.g., from 12.7 to 12.8 or from 14.1 to 14.5
      • pg_upgrade supports upgrades from 9.2.X and later to the current major release of PostgreSQL, including snapshot and beta releases
    • Using pg_dumpall
  • Upgrading with no/near zero downtime using logical/trigger based replication
    • PostgreSQL logical replication (v10 and above)
      • Also known as transactional replication: the subscriber initially receives a copy of the replicated database objects from the publisher and pulls any subsequent changes on those objects as they occur, in real time.
      • The typical use-cases for logical replication are:
        • Sending incremental changes in a single database or a subset of a database to subscribers as they occur.
        • Firing triggers for individual changes as they arrive on the subscriber.
        • Consolidating multiple databases into a single one (for example for analytical purposes).
        • Replicating between different major versions of PostgreSQL.
        • Replicating between PostgreSQL instances on different platforms (for example Linux to Windows)
        • Giving access to replicated data to different groups of users.
        • Sharing a subset of the database between multiple databases.
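
A minimal pg_upgrade invocation, assuming an upgrade from 12 to 16 with default package paths (all paths are illustrative), might be:

/usr/pgsql-16/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-12/bin --new-bindir=/usr/pgsql-16/bin \
  --old-datadir=/var/lib/pgsql/12/data --new-datadir=/var/lib/pgsql/16/data \
  --check        # dry run; rerun without --check to perform the upgrade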

Wednesday, August 28, 2024

MongoDB - /etc/mongod.conf sample file

[root@mongodb-01 ~]# cat /etc/mongod.conf 
# mongod.conf

# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /u01/mongoDB/helloWorld
  journal:
    enabled: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      journalCompressor: snappy
      cacheSizeGB: 1.5            # Adjust based on physical resources
    collectionConfig:
      blockCompressor: snappy

# how the process runs - Controls how MongoDB is managed as a system process.
processManagement:
  fork: true                              # Enable MongoDB to run as a background daemon.
  pidFilePath: /var/run/mongodb/mongod.pid     # Location of the PID file when running as a service.
  timeZoneInfo: /usr/share/zoneinfo

Saturday, August 24, 2024

13.5.0 Enterprise Manager Cloud Control - Patching RU22 from Base version

Overview of the Enterprise Manager Proactive Patch Program (Doc ID 822485.1)

 

List of patches:

  • OMS – RU 22
    • Patch 36335368: 13.5.0 Enterprise Manager Cloud Control Base Platform Monthly Release Update (RU) 22
    • Patch 19999993: OMSPatcher patch of version 13.9.5.20.0 for Enterprise Manager Cloud Control 13.5.0.0.0
    • Patch 28186730: OPATCH 13.9.4.2.15 FOR EM 13.5 AND FMW/WLS 12.2.1.4.0 and 14.1.1.0.0
    • Patch 35430934: MERGE REQUEST ON TOP OF 12.2.1.4.0 FOR BUGS 32720458 33607709
    • Patch 34153238: HTTPS PROXY CONFIGURATION IS NOT USED WHEN PROTOCOL IS CONFIGURED TO TCP
    • Patch 31657681: THREADS CONTEND FOR LOCK IN LOADFILEBASEDKEYSTORE WHEN OPENING TLS/SSL ENABLED JDBC CONNECTIONS
  • Agent – RU 22
    • Patch 36335371: Oracle Enterprise Manager 13c Release 5 Update 22 ( 13.5.0.22 ) for Oracle Management Agent
    • Patch 33355570: AgentPatcher release of version 13.9.5.8.0 for Enterprise Manager Cloud Control Agent 13.5.0.0.0

Notes:

  • Backup Database and Application before Patching
  • Apply OMS RU 22 before applying Agent RU 22
  • Apply OMSPatcher and OPatch the latest version before applying OMS RU 22
  • Apply Oracle Database to the latest version of Oracle Database 19c before applying OMS RU 22

Main steps:

  • Apply Oracle Database to the latest version of Oracle Database 19c
  • Apply OMSPatcher and Opatch to the latest version
  • Apply OMS RU 22
  • Apply Agent RU 22
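
The exact apply procedure comes from each patch README; as a rough, hedged sketch, the OMS RU is applied with OMSPatcher roughly like this (staging path and zip name are illustrative):

$ . oms_env
$ unzip p36335368_135000_Generic.zip -d /u01/stage && cd /u01/stage/36335368
$ omspatcher apply -analyze     # prerequisite check only
$ omspatcher apply              # applies the RU; prompts for WebLogic Admin Server credentials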

 

Friday, August 23, 2024

13.5.0 Enterprise Manager Cloud Control - Holistic Patch for Oracle Management Service - July 2024

Holistic Patching in Enterprise Manager is a method of applying the full Stack Patch Bundle (SPB) for a given CPU released by Oracle for WebLogic and its components. Applying SPB/CPU patches on the OMS is a downtime patching activity and requires a maintenance window.

Note: There is no dependency between the OMS RU patch and the Holistic patch.

A. Current System Information

1. Database version

[oracle@monitor-db ~]$ opatch lspatches
36233263;Database Release Update : 19.23.0.0.240416 (36233263)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)

OPatch succeeded.

2. OMS version

[oracle@monitor-app ~]$ . oms_env
[oracle@monitor-app ~]$ omspatcher lspatches
oracle.sysman.db.oms.plugin/13.5.1.0.0            Plugin              36335368            36329033            Oracle Enterprise Manager for Oracle Database 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service
oracle.sysman.cfw.oms.plugin/13.5.1.0.0           Plugin              36335368            36329231            Oracle Enterprise Manager for Cloud Framework 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service
oracle.sysman.xa.oms.plugin/13.5.1.0.0            Plugin              36335368            36329152            Oracle Enterprise Manager for Exadata 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service
oracle.sysman.top.oms/13.5.0.0.0                  Core                36335368            36329009            Oracle Enterprise Manager 13c Release 5 Platform Update 22 (13.5.0.22) for Oracle Management Service
oracle.sysman.si.oms.plugin/13.5.1.0.0            Plugin              36335368            36329196            Oracle Enterprise Manager for Systems Infrastructure 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service
oracle.sysman.emas.oms.plugin/13.5.1.0.0          Plugin              36335368            36329020            Oracle Enterprise Manager for Fusion Middleware 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

3. Agent version

[oracle@monitor-app ~]$ . agent_env
[oracle@monitor-app ~]$ agentpatcher lspatches
oracle.sysman.si.agent.plugin/13.5.1.0.0          Plugin              36335371            36335417            Oracle Enterprise Manager for Systems Infrastructure 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Agent
oracle.sysman.top.agent/13.5.0.0.0                Core                36335371            36335374            Oracle Enterprise Manager 13c Release 5 Platform Update 22 (13.5.0.22) for Oracle Management Agent

4. OMSPatcher version

[oracle@monitor-app ~]$ . oms_env
[oracle@monitor-app ~]$ omspatcher version
OMSPatcher Version: 13.9.5.20.0
OPlan Version: 12.2.0.1.16
OsysModel build: Tue Apr 28 18:16:31 PDT 2020

OMSPatcher succeeded.

13.5.0 Enterprise Manager Cloud Control Base Platform Monthly Release Update (RU) 23 - July 2024

Ref: 13.5.0 Enterprise Manager Cloud Control Base Platform Monthly Release Update (RU) 23 (Doc ID 3040236.1)

A. System Information

👉Database version
[oracle@monitor-db ~]$ opatch lspatches
36233263;Database Release Update : 19.23.0.0.240416 (36233263)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)

OPatch succeeded.

👉OMS version
[oracle@monitor-app ~]$ . oms_env
[oracle@monitor-app ~]$ omspatcher lspatches


oracle.sysman.db.oms.plugin/13.5.1.0.0            Plugin              36335368            36329033            Oracle Enterprise Manager for Oracle Database 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

oracle.sysman.cfw.oms.plugin/13.5.1.0.0           Plugin              36335368            36329231            Oracle Enterprise Manager for Cloud Framework 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

oracle.sysman.xa.oms.plugin/13.5.1.0.0            Plugin              36335368            36329152            Oracle Enterprise Manager for Exadata 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

oracle.sysman.top.oms/13.5.0.0.0                  Core                36335368            36329009            Oracle Enterprise Manager 13c Release 5 Platform Update 22 (13.5.0.22) for Oracle Management Service

oracle.sysman.si.oms.plugin/13.5.1.0.0            Plugin              36335368            36329196            Oracle Enterprise Manager for Systems Infrastructure 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

oracle.sysman.emas.oms.plugin/13.5.1.0.0          Plugin              36335368            36329020            Oracle Enterprise Manager for Fusion Middleware 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Service

👉Agent version
[oracle@monitor-app ~]$ . agent_env 
[oracle@monitor-app ~]$ agentpatcher lspatches


oracle.sysman.si.agent.plugin/13.5.1.0.0          Plugin              36335371            36335417            Oracle Enterprise Manager for Systems Infrastructure 13c Release 5 Plug-in Update 22 (13.5.1.22) for Oracle Management Agent

oracle.sysman.top.agent/13.5.0.0.0                Core                36335371            36335374            Oracle Enterprise Manager 13c Release 5 Platform Update 22 (13.5.0.22) for Oracle Management Agent

👉OMSPatcher & OPatch version
[oracle@monitor-app ~]$ . oms_env 
[oracle@monitor-app ~]$ omspatcher version

OMSPatcher Version: 13.9.5.21.0
OPlan Version: 12.2.0.1.16
OsysModel build: Tue Apr 28 18:16:31 PDT 2020

OMSPatcher succeeded.

[oracle@monitor-app ~]$ opatch version
OPatch Version: 13.9.4.2.16

OPatch succeeded.

Ref
  • 13.5: How To Upgrade Enterprise Manager 13.5 Cloud Control OMSPatcher Utility to the Latest Version (Doc ID 2809842.1)
  • EM 13c: How To Upgrade Enterprise Manager 13.4 Cloud Control OMSPatcher Utility to the Latest Version (Doc ID 2646080.1)

Wednesday, August 14, 2024

MongoDB - Percona Backup for MongoDB

Percona Backup for MongoDB (PBM) is an open source and distributed solution for consistent backups and restores of MongoDB sharded clusters and replica sets to a specific point in time.

Architecture
Percona Backup for MongoDB consists of the following components:
  • pbm-agent is a process running on every mongod node within the cluster or within a replica set that performs backup and restore operations.
  • pbm CLI is a command-line utility that instructs pbm-agents to perform an operation.
  • PBM Control collections are special collections in MongoDB that store the configuration data and backup states. Both pbm CLI and pbm-agent use PBM Control collections to check backup status in MongoDB and communicate with each other.
  • Remote backup storage is where Percona Backup for MongoDB saves backups. It can be either an S3 compatible storage or a filesystem-type storage.
The following diagram illustrates how Percona Backup for MongoDB components communicate with MongoDB.




1. Installation
  • Install percona-release
    • $ sudo yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
  • Enable the repository
    • $ sudo percona-release enable pbm release
  • Install Percona Backup for MongoDB packages
    • $ sudo yum install percona-backup-mongodb
[root@mongodb-01 ~]# ls -l /etc | grep pbm
-rw-r-----   1 root         root           3626 May 15 17:10 pbm-conf-reference.yml
-rw-r-----   1 mongod       mongod           91 Aug 14 10:56 pbm-storage.conf

2. Configuration

👀Create user for backup on Primary Node

use admin

db.getSiblingDB("admin").createRole({ "role": "pbmAnyAction",
      "privileges": [
         { "resource": { "anyResource": true },
           "actions": [ "anyAction" ]
         }
      ],
      "roles": []
   });
   
db.getSiblingDB("admin").createUser({user: "p4backup",
       "pwd": "Password789",
       "roles" : [
          { "db" : "admin", "role" : "readWrite", "collection": "" },
          { "db" : "admin", "role" : "backup" },
          { "db" : "admin", "role" : "clusterMonitor" },
          { "db" : "admin", "role" : "restore" },
          { "db" : "admin", "role" : "pbmAnyAction" }
       ]
    });  
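
The notes stop before pointing PBM at the backup storage and taking a backup. A hedged sketch of the remaining steps (the filesystem path and connection URI are assumptions to adapt):

# /etc/pbm-storage.conf might contain a filesystem-type storage definition such as:
storage:
  type: filesystem
  filesystem:
    path: /backup/pbm

# Point pbm-agent and the pbm CLI at MongoDB, apply the config, then back up
echo 'PBM_MONGODB_URI="mongodb://p4backup:Password789@localhost:27017/?authSource=admin"' >> /etc/sysconfig/pbm-agent
export PBM_MONGODB_URI="mongodb://p4backup:Password789@localhost:27017/?authSource=admin"
pbm config --file /etc/pbm-storage.conf
systemctl enable --now pbm-agent
pbm backup
pbm status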

Tuesday, August 06, 2024

MongoDB - Replication

Preparation
1. Install mongodb on all nodes
2. Configuring /etc/hosts on nodes
[root@mongodb-xxx ~]# cat /etc/hosts 
192.168.56.87  mongodb-01.taolaoxibup.com               mongodb-01
192.168.56.88  mongodb-02.taolaoxibup.com               mongodb-02
192.168.56.89  mongodb-slave.taolaoxibup.com           mongodb-slave
192.168.56.90  mongodb-arbiter.taolaoxibup.com         mongodb-arbiter    // an arbiter
 
3. Configuring /etc/mongod.conf on all nodes
[root@mongodb-xxx ~]# vi /etc/mongod.conf 
...

# network interfaces
net:
  port: 27017
  bindIp: mongodb-01  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting. Set the correct value on each mongod instance.

replication:
  replSetName: "rs0"
...

[root@mongodb-xxx ~]# systemctl restart mongod.service 

Configuration

1. On Primary Node
- Initiate the replica set
  • rs.initiate()
  • rs.status()
  • rs.conf()
- Add Members to a Replica Set
  • rs.add({host: "mongodb-02:27017"})
  • rs.add({host: "mongodb-slave:27017"})
- Verify configuration
  • rs.status()
  • rs.conf()
  • rs.hello()
  • db.isMaster()
2. On Secondary node - Verify configuration
  • rs.status()
  • rs.conf()
  • rs.hello()
  • db.isMaster()
3. Add Members to a Replica Set
  • Start new mongod instance
    • mongod --port 27018 --dbpath /u01/mongoDB/saigon --replSet rs0 --bind_ip 0.0.0.0
    • OR nohup mongod --port 27018 --dbpath /u01/mongoDB/saigon --replSet rs0 --bind_ip_all > 27018_$HOSTNAME.log 2>&1 &
  • On Primary node, add new member to replica set
    • rs.add("mongodb-01:27018")
  • Verify configuration
    • rs.status()
    • rs.conf()
    • db.isMaster()
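
The /etc/hosts above also lists an arbiter. If you want it to vote without holding data, it could be added from the primary like this (assuming mongod is already running on that host; on MongoDB 5.0+ you may first need to set a cluster-wide default write concern):

rs.addArb("mongodb-arbiter:27017")
rs.status()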

Thursday, July 25, 2024

PostgreSQL - Extensions

1. Check for available extensions
edb=# select * from pg_available_extensions;

2. Check for installed extensions
edb=# select * from pg_available_extensions where installed_version is not null;
edb=# select * from pg_extension;
edb=# \dx

3. Some important extensions
pg_stat_statements provides a means for tracking planning and execution statistics of all SQL statements executed by a server. When pg_stat_statements is active, it tracks statistics across all databases of the server

dblink is used to connect to other PostgreSQL databases from within a database

postgres_fdw can be used to access data stored in external PostgreSQL servers. postgres_fdw is the successor of the old dblink extension

oracle_fdw extension is a foreign data wrapper that allows you to access Oracle table and views (including materialized views) via foreign tables.

pgcrypto is a PostgreSQL extension that provides cryptographic functions and capabilities directly within the database. It enhances data security by allowing various cryptographic operations to be performed within SQL queries.

pg_cron is a simple cron-based job scheduler that runs inside the database as an extension. It uses the same syntax as regular cron, but it allows you to schedule PostgreSQL commands directly from the database

pgAudit is an extension for PostgreSQL that provides detailed session and/or object audit logging via the standard logging facility provided by PostgreSQL. It is designed to help database administrators and developers meet security and compliance requirements by providing detailed information on database activities.

pg_partman is an extension that simplifies creating and maintaining partitions of your PostgreSQL tables.

pglogical extension provides logical streaming replication for PostgreSQL, using a publish/subscribe model.

pg_buffercache module provides a means for examining what's happening in the shared buffer cache in real time.

pg_freespacemap module provides a means for examining the free space map (FSM). It provides a function called pg_freespace, or two overloaded functions, to be precise. The functions show the value recorded in the free space map for a given page, or for all pages in the relation

Pgpool-II is a middleware that works between PostgreSQL servers and a PostgreSQL database client.

pgBouncer is a Lightweight connection pooler for PostgreSQL

system_stats extension is a library of stored procedures that allow users to access system-level statistics for monitoring Postgres activity. These procedures reveal a variety of system metrics to the database server.
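
As a concrete example with pg_stat_statements from the list above (unlike most extensions, it must be preloaded, which requires a restart):

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

edb=# CREATE EXTENSION pg_stat_statements;
edb=# SELECT query, calls, total_exec_time FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;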


Wednesday, July 24, 2024

PostgreSQL - Vacuum Processing

Vacuum processing is a maintenance process that facilitates the persistent operation of PostgreSQL. Its two main tasks are removing dead tuples and freezing transaction IDs.

Vacuum parameters:

[enterprisedb@edb-saigon ~]$ less /u01/edb/as16/data/postgresql.conf
...
#------------------------------------------------------------------------------
# AUTOVACUUM
#------------------------------------------------------------------------------

#autovacuum = on                              # Enable autovacuum subprocess? 'on' requires track_counts to also be on
#autovacuum_max_workers = 3                   # max number of autovacuum subprocesses (change requires restart)
#autovacuum_naptime = 1min                    # time between autovacuum runs
#autovacuum_vacuum_threshold = 50             # min number of row updates before vacuum
#autovacuum_vacuum_insert_threshold = 1000    # min number of row inserts before vacuum; -1 disables insert vacuums
#autovacuum_analyze_threshold = 50            # min number of row updates before analyze
#autovacuum_vacuum_scale_factor = 0.2         # fraction of table size before vacuum
#autovacuum_vacuum_insert_scale_factor = 0.2  # fraction of inserts over table size before insert vacuum
#autovacuum_analyze_scale_factor = 0.1        # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000        # maximum XID age before forced vacuum (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000    # maximum multixact age before forced vacuum (change requires restart)
#autovacuum_vacuum_cost_delay = 2ms           # default vacuum cost delay for autovacuum, in milliseconds; -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1            # default vacuum cost limit for autovacuum; -1 means use vacuum_cost_limit
...
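
To complement the global settings above, a manual vacuum, a per-table override, and a quick dead-tuple check look like this (the table name is illustrative):

edb=# VACUUM (VERBOSE, ANALYZE) employees;
edb=# ALTER TABLE employees SET (autovacuum_vacuum_scale_factor = 0.05, autovacuum_vacuum_threshold = 100);
edb=# SELECT relname, n_dead_tup, last_autovacuum FROM pg_stat_user_tables ORDER BY n_dead_tup DESC LIMIT 5;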

Wednesday, July 10, 2024

EDB - Replication Server (xDB) - Important Concepts

Snapshot and synchronization overview

Replication Server performs two different types of replications. These two main types are called snapshot replication and synchronization replication.

In either method, the source tables refer to the tables from which the replication data is originating. In a single-master replication system, it's the publication. In a multi-master replication system it's the primary node whose changes are being replicated to another primary node.

The target tables are the tables that are receiving the replication data from the source tables. In a single-master replication system, it's the subscription tables. In a multi-master replication system, it's the primary node receiving changes from another primary node.

In snapshot replication, all existing rows in the target tables are deleted using the database system’s TRUNCATE command. The tables are then completely reloaded from the source tables of the publication.

In synchronization replication, only the changes (inserts, updates, and deletions) to the rows in the source tables since the last replication are applied to the target tables.

Note
Deleting all rows in a source table executed by the SQL TRUNCATE command results in replication to the target tables only if the log-based method of synchronization replication is used. If the trigger-based method of synchronization replication is used, executing the TRUNCATE command on a source table doesn't replicate the effect to the target tables. You must perform a snapshot from the source table to the target tables if you use the trigger-based method.

Synchronization replication is implemented using two different methods: the trigger-based method and the log-based method.

In the trigger-based method, changes to rows in the source tables result in row-based triggers executing. These triggers record the changes in shadow tables. The changes recorded in the shadow tables are then periodically extracted from the shadow tables, converted to an in-memory data structure, and applied to the target tables by means of SQL statements executed using JDBC. See Synchronization replication with the trigger-based method for information on the trigger-based method.

In the log-based method, changes to rows in the source tables are extracted from the write-ahead log segments (WAL files) using asynchronous streaming replication implemented by the logical decoding feature available in Postgres database servers. The extracted changes are converted to an in-memory data structure and applied to the target tables by means of SQL statements executed using JDBC. See Synchronization replication with the log-based method for information on the log-based method.

Tuesday, July 02, 2024

EDB - Replication Server (xDB) - Installation

Introduction

Replication Server is an asynchronous replication system available for PostgreSQL and for EDB Postgres Advanced Server.

You can use Replication Server to implement replication systems based on either of two different replication models: single-master (primary-to-secondary) replication or multi-master replication. Regardless of the replication model, Replication Server is flexible and easy to use.

For single-master replication, PostgreSQL, EDB Postgres Advanced Server, Oracle, and Microsoft SQL Server are supported in an assortment of configurations (including cascading replication), allowing organizations to use it in multiple use cases with a variety of benefits. The following are some combinations of cross database replications that Replication Server supports for single-master replication:
  • From Oracle to PostgreSQL
  • From Oracle to EDB Postgres Advanced Server
  • From SQL Server to PostgreSQL
  • From SQL Server to EDB Postgres Advanced Server
  • From EDB Postgres Advanced Server to Oracle
  • From PostgreSQL to SQL Server
  • From EDB Postgres Advanced Server to SQL Server
  • Between PostgreSQL and EDB Postgres Advanced Server
  • From PostgreSQL to Oracle (WAL mode)
  • From PostgreSQL to Oracle (trigger mode)

EDB - PEM - Installation

Theory
  • You can install PEM on a single server, or you can install the web application server and the backend database on two separate servers. In this post, we will install PEM on a single server.
  • Because we are installing without internet access, we must configure a LOCAL yum repository. See Create Local HTTP Yum/DNF Repository.
Install the package
[root@edb-pem ~]# dnf install edb-as16-server
[root@edb-pem ~]# dnf install edb-as16-server-sslutils   // sslutils is required on the PEM backend database (EDB Postgres Advanced Server here)
[root@edb-pem ~]# dnf install edb-pem

Change password of enterprisedb user
[enterprisedb@edb-pem ~]$ psql edb
edb=# \conninfo 
You are connected to database "edb" as user "enterprisedb" via socket in "/tmp" at port "5444".

edb=# \password 
Enter new password for user "enterprisedb": 

Configure authentication on the backend database
Modify the pg_hba.conf file to allow PEM agents and clients to connect to the backend database.
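An illustrative entry (the network range and auth method are assumptions to adapt), followed by a reload:

# TYPE   DATABASE   USER           ADDRESS             METHOD
host     all        enterprisedb   192.168.56.0/24     scram-sha-256

[root@edb-pem ~]# systemctl reload edb-as-16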
    
Initial configuration
[root@edb-pem ~]# /usr/edb/pem/bin/configure-pem-server.sh
...
 -----------------------------------------------------
 EDB Postgres Enterprise Manager
 -----------------------------------------------------
 Install type: 1:Web Services and Database, 2:Web Services 3: Database  1
 Enter local database server installation path (i.e. /usr/edb/as12 , or /usr/pgsql-12, etc.)  /usr/edb/as16
 Enter database super user name  enterprisedb
 Enter database server port number  5444
...
Please enter CIDR formatted network address range that agents will connect to the server from, to be added to the server's pg_hba.conf file. For example, 192.168.1.0/24  0.0.0.0/0
Enter database systemd unit file or init script name (i.e. edb-as-12 or postgresql-12, etc.)  edb-as-16
Please specify agent certificate path (Script will attempt to create this directory, if it does not exists)  /root/.pem/
Please specify PEM web server (openssl) certificate subject, template: /C=CountryCode/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname/emailAddress=EmailAddress  /C=VN/ST=HoChiMinh/L=HCM/O=CHERRY/OU=VHUD/CN=edb-pem/emailAddress=khochiutronglong@gmail.com
...
...
...
-->  [Info] -->  [Info] Executing systemctl start httpd
-->  [Info] -->  [Info] Configured the webservice for EDB Postgres Enterprise Manager (PEM) Server on port '8443'.
-->  [Info] -->  [Info] PEM server can be accessed at https://127.0.0.1:8443/pem at your browser




Monday, June 24, 2024

EDB - pgBackRest - Part III: RESTORE

 Theory
  • The restore command automatically defaults to selecting the latest backup from the first repository where backups exist. The order in which the repositories are checked is dictated by the pgbackrest.conf (e.g. repo1 will be checked before repo2). To select from a specific repository, the --repo option can be passed (e.g. --repo=1). The --set option can be passed if a backup other than the latest is desired.
  • Restoration Scenarios:
    • Restoring the backup on a different host
    • Restoring a single database from the Postgres cluster
    • Restoring the backup to a specific point in time
    • Restoring only the delta changes
    • Restoring the backup on a different host to start as a streaming standby
1. Verify backup information
[enterprisedb@edb-nhatrang ~]$ pgbackrest info --stanza=employees

2. Restoring the backup on a different host (IP : 192.168.56.77)
  • Setup Passwordless SSH Connection between Backup Server (192.168.56.79) and Database Server (192.168.56.77)
    • Backup Server 
      • ssh-keygen -t rsa
      • ssh-copy-id enterprisedb@192.168.56.77
    • Database Server
      • ssh-keygen -t rsa
      • ssh-copy-id enterprisedb@192.168.56.79
  • Database Server - Setup pgBackRest Repository
        [root@edb-quynhon ~]# vi /etc/pgbackrest.conf 
        [global]
        repo1-host=192.168.56.79
        repo1-host-user=enterprisedb
        log-level-console=info
        log-level-file=debug

        [employees]
        pg1-path=/u01/edb/as16/data
        pg1-database=edb
        pg1-port=5444
        pg-version-force=16

2.1 Database Server - Restore the Latest Backup

[enterprisedb@edb-quynhon ~]$ pgbackrest --log-level-console=info --stanza=employees --process-max=2 restore
2024-06-25 15:47:07.509 P00   INFO: restore command begin 2.52: --exec-id=9675-76a8160f --log-level-console=info --log-level-file=debug --pg1-path=/u01/edb/as16/data --pg-version-force=16 --process-max=2 --repo1-host=192.168.56.79 --repo1-host-user=enterprisedb --stanza=employees
2024-06-25 15:47:08.481 P00   INFO: repo1: restore backup set 20240624-221733F_20240624-221842I, recovery will start at 2024-06-24 22:18:42
2024-06-25 15:47:17.598 P00   INFO: write updated /u01/edb/as16/data/postgresql.auto.conf
2024-06-25 15:47:17.625 P00   INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2024-06-25 15:47:17.628 P00   INFO: restore size = 740.5MB, file total = 2201
2024-06-25 15:47:17.628 P00   INFO: restore command end: completed successfully (10124ms)

==> The latest backup from the first repository will be restored
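
The other scenarios listed in the Theory section are not captured in these notes; hedged sketches (the target timestamp is illustrative, and the cluster must be stopped before restoring) would look like:

# Point-in-time restore
[enterprisedb@edb-quynhon ~]$ pgbackrest --stanza=employees --type=time --target="2024-06-25 12:00:00" --target-action=promote restore

# Delta restore: keep existing files and copy only what changed
[enterprisedb@edb-quynhon ~]$ pgbackrest --stanza=employees --delta restore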

EDB - pgBackRest - Part II: Configuration & Backup

Theory
  • A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc. Most db servers will only have one PostgreSQL database cluster and therefore one stanza, whereas backup servers will have a stanza for every database cluster that needs to be backed up.
  • The stanza-delete command removes data in the repository associated with a stanza. Use this command with caution — it will permanently remove all backups and archives from the pgBackRest repository for the specified stanza.

A. CONFIGURATION

1. Create backup location

[root@edb-nhatrang /]# mkdir -p /backup/pgbackrest
[root@edb-nhatrang /]# chown -R enterprisedb: /backup/

2. Database Cluster Configuration
  • Enable archive mode ON
  • Set archive_command = 'pgbackrest --stanza=employees archive-push %p'
  • Get data_directory = '/u01/edb/as16/data'
edb=# show archive_mode ;
 archive_mode 
--------------
 on
(1 row)

edb=# show archive_command ;
                archive_command                
-----------------------------------------------
 pgbackrest --stanza=employees archive-push %p
(1 row)

edb=# show data_directory ;
   data_directory   
--------------------
 /u01/edb/as16/data
(1 row)

edb=# show wal_level ;
 wal_level 
-----------
 replica
(1 row)

edb=# show max_wal_senders ;
 max_wal_senders 
-----------------
 10
(1 row)

edb=# show log_filename ;
           log_filename           
----------------------------------
 enterprisedb-%Y-%m-%d_%H%M%S.log
(1 row)

3. Pgbackrest Repository Configuration

[root@edb-nhatrang ~]# vi /etc/pgbackrest.conf 
[global]
repo1-path=/backup/pgbackrest
repo1-retention-full=2
archive-async=y
log-level-console=info
log-level-file=debug
start-fast=y

[employees]
pg1-path=/u01/edb/as16/data
pg1-database=edb
pg1-port=5444
pg-version-force=16

Tuesday, June 18, 2024

MySQL 8 - Part III: Master - Master Replication with GTID

Theory:
  • The method based on global transaction identifiers (GTIDs) is transactional and therefore does not require working with log files or positions within these files, which greatly simplifies many common replication tasks. 
  • Replication using GTIDs guarantees consistency between source and replica as long as all transactions committed on the source have also been applied on the replica.
  • Enabling log-slave-updates on a replica means the replica writes updates that are received from a master to its own binary log. So a replica can become an intermediate master of another replica. This option is a stepping stone for various simple-to-complex replication setups and newer technologies like Group Replication.

1. General information

Master Node 01
IP: 192.168.56.59
Hostname: edb-saigon.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)
MySQL version: Ver 8.0.37-commercial for Linux on x86_64 (MySQL Enterprise Server - Commercial)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| classicmodels      |
| information_schema |
| mysql              |
| performance_schema |
| sg2hn              |
| sys                |
+--------------------+
6 rows in set (0.00 sec)

Master Node 02
IP: 192.168.56.29
Hostname: edb-hanoi.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)
MySQL version: Ver 8.0.37-commercial for Linux on x86_64 (MySQL Enterprise Server - Commercial)

2. Preparation on all Master Nodes
  • Turn off Firewall
  • Disable SELinux
  • Configuring /etc/hosts
    • 192.168.56.59   edb-saigon.taolaoxibup.com     edb-saigon
    • 192.168.56.29   edb-hanoi.taolaoxibup.com       edb-hanoi

3. Update Configuration file on all Master Nodes

Master Node 01
[root@mysql-saigon ~]# vi /etc/my.cnf
server-id=1
bind-address=192.168.56.59
log-bin=saigon-bin
gtid-mode=ON
enforce-gtid-consistency
log-slave-updates

[root@mysql-saigon ~]# systemctl restart mysqld.service 
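
The notes end with Master Node 01. As a hedged sketch of the remaining steps, Master Node 02 would get a mirrored /etc/my.cnf (server-id=2, bind-address=192.168.56.29, log-bin=hanoi-bin, the same GTID options), and then each node points at the other with auto-positioning (the repl user and password are assumptions):

-- on both nodes
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'P@ssw0rd';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- on edb-hanoi, replicate from edb-saigon (and the mirror image on edb-saigon)
mysql> CHANGE REPLICATION SOURCE TO SOURCE_HOST='edb-saigon', SOURCE_USER='repl',
       SOURCE_PASSWORD='P@ssw0rd', SOURCE_AUTO_POSITION=1, GET_SOURCE_PUBLIC_KEY=1;
mysql> START REPLICA;
mysql> SHOW REPLICA STATUS\G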

MySQL 8 - Part II: Master - Slave Replication with Binary log

Theory:
The traditional method is based on replicating events from the source's binary log, and requires the log files and positions in them to be synchronized between source and replica.

1. General information

Master Node
IP: 192.168.56.59
Hostname: edb-saigon.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)
MySQL version: Ver 8.0.37-commercial for Linux on x86_64 (MySQL Enterprise Server - Commercial)

Slave Node
IP: 192.168.56.29
Hostname: edb-hanoi.taolaoxibup.com
OS: Red Hat Enterprise Linux release 9.4 (Plow)
MySQL version: Ver 8.0.37-commercial for Linux on x86_64 (MySQL Enterprise Server - Commercial)

2. Preparation on Master/Slave Node
  • Turn off Firewall
  • Disable SELinux
  • Configuring /etc/hosts
    • 192.168.56.59   edb-saigon.taolaoxibup.com     edb-saigon
    • 192.168.56.29   edb-hanoi.taolaoxibup.com       edb-hanoi
3. Update Master/Slave Server config file

Master Node
[root@mysql-saigon ~]# vi /etc/my.cnf
server-id=1
bind-address=192.168.56.59
log-bin=saigon-bin

[root@mysql-saigon ~]# systemctl restart mysqld.service 

Slave Node
[root@mysql-hanoi ~]# vi /etc/my.cnf
server-id=2
bind-address=192.168.56.29
log-bin=hanoi-bin
read_only=1    # prevents normal users from executing DDL/DML on the slave server

[root@mysql-hanoi ~]# systemctl restart mysqld.service 
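
The notes stop at the config files. A hedged sketch of the classic binary-log setup (user, password, and coordinates are illustrative):

-- on the master
mysql> CREATE USER 'repl'@'192.168.56.29' IDENTIFIED BY 'P@ssw0rd';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.56.29';
mysql> SHOW MASTER STATUS;    -- note File (e.g. saigon-bin.000002) and Position

-- on the slave, using the coordinates noted above
mysql> CHANGE REPLICATION SOURCE TO SOURCE_HOST='edb-saigon', SOURCE_USER='repl',
       SOURCE_PASSWORD='P@ssw0rd', SOURCE_LOG_FILE='saigon-bin.000002', SOURCE_LOG_POS=157,
       GET_SOURCE_PUBLIC_KEY=1;
mysql> START REPLICA;
mysql> SHOW REPLICA STATUS\G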

Saturday, June 15, 2024

MySQL 8 - Part I: Installation

1. Download MySQL packages from Oracle eDelivery (Server, Shell, ...)

[root@mysql-saigon mysql8]# ls -l
total 961092
-rw-r--r--. 1 root root   4010566 Mar 31 16:46 mysql-commercial-backup-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  20388471 Mar 31 16:46 mysql-commercial-backup-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   4105053 Mar 31 16:46 mysql-commercial-client-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  27305094 Mar 31 16:46 mysql-commercial-client-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   1457867 Mar 31 16:46 mysql-commercial-client-plugins-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   2052547 Mar 31 16:46 mysql-commercial-client-plugins-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root    570629 Mar 31 16:46 mysql-commercial-common-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   7680309 Mar 31 16:46 mysql-commercial-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   7311390 Mar 31 16:46 mysql-commercial-devel-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   2383993 Mar 31 16:46 mysql-commercial-icu-data-files-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   1588746 Mar 31 16:46 mysql-commercial-libs-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   2412596 Mar 31 16:47 mysql-commercial-libs-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  57277741 Mar 31 16:48 mysql-commercial-server-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  27151180 Mar 31 16:48 mysql-commercial-server-debug-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root 170157585 Mar 31 16:48 mysql-commercial-server-debug-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root 220976584 Mar 31 16:48 mysql-commercial-server-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root 366474178 Mar 31 16:50 mysql-commercial-test-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  20801351 Mar 31 16:50 mysql-commercial-test-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root   4425344 Mar 31 16:57 mysql-router-commercial-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root  35571693 Mar 31 16:57 mysql-router-commercial-debuginfo-8.0.37-1.1.el9.x86_64.rpm
-rw-r--r--. 1 root root      1472 Apr 27 20:53 README.txt
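
The post ends with the downloaded packages. A hedged sketch of the remaining installation steps on RHEL 9 (the usual flow for the commercial RPMs):

[root@mysql-saigon mysql8]# dnf install -y mysql-commercial-{common,client-plugins,libs,client,icu-data-files,server}-8.0.37-1.1.el9.x86_64.rpm
[root@mysql-saigon mysql8]# systemctl enable --now mysqld
[root@mysql-saigon mysql8]# grep 'temporary password' /var/log/mysqld.log
[root@mysql-saigon mysql8]# mysql_secure_installation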