Digital Asset Management persistence
This topic describes the components of the Digital Asset Management (DAM) persistence. The updated DAM persistence feature is available with HCL Digital Experience (DX) 9.5 Container Update CF198 and later.
persistence-node
The persistence-node provides the database functions for HCL Digital Asset Management. The persistence-node is a DX Red Hat Universal Base Image (UBI) container image installed with PostgreSQL and the Replication Manager (repmgr) service. repmgr is an open-source tool suite for managing replication and failover in a cluster of PostgreSQL servers. It enhances the built-in hot-standby capabilities of PostgreSQL with tools to set up standby servers, monitor replication, and perform administrative tasks, such as failover or manual switchover operations.
If the PostgreSQL primary server fails, the repmgr service promotes a standby server to the primary role.
The persistence-node configurations are available in the Helm chart values.yaml file as persistenceNode.
The administrator can configure the number of persistence-node replicas under the scaling configuration.
# Scaling settings for deployed applications
scaling:
# The default amount of replicas per application
replicas:
persistenceConnectionPool: 1
persistenceNode: 3
Note
Scaling affects only the read requests and ensures failover capabilities. Write requests are always directed only to the primary pod.
The persistence-node is a stateful application and requires a volume. The configuration must have a dynamic volume class to start the container. You must update the storageClassName and storage values according to the cloud service provider and project requirements.
volumes:
# Persistent Volumes for Persistence Node
persistenceNode:
# Database PVC, one per Persistence Node
database:
storageClassName: "manual"
requests:
storage: "2Gi"
# Optional volume name to specifically map to.
volumeName:
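After deployment, you can confirm that each persistence-node replica received a bound volume. As a quick check (the dxns namespace is an assumption taken from the commands later in this topic):

```shell
# List the PersistentVolumeClaims in the deployment namespace.
# One claim per persistence-node replica should show STATUS "Bound".
kubectl -n dxns get pvc
```

Note that a storageClassName of "manual", as in the example above, requires PersistentVolumes to be created by hand; for dynamic provisioning, the class must be backed by a provisioner offered by your cloud service provider.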
persistence-connection-pool
The persistence-connection-pool container runs the Pg-pool service. Pg-pool is middleware that works between the persistence-node and HCL Digital Asset Management.
The service provides the following functions:
- Connection pooling
- Load balancing
For better performance, the administrator can scale the persistence-connection-pool to more than one pod. The persistence-connection-pool configurations are available in the Helm chart values.yaml file as persistenceConnectionPool.
# Scaling settings for deployed applications
scaling:
# The default amount of replicas per application
replicas:
persistenceConnectionPool: 1
persistenceNode: 3
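The replica counts can also be changed on an existing installation without editing values.yaml by overriding them at upgrade time. A sketch, assuming the chart was installed as release dx-deployment into namespace dxns (the release name, chart path, and namespace are illustrative assumptions):

```shell
# Override the connection-pool replica count on an existing Helm release.
# Release name, chart reference, and namespace are illustrative assumptions.
helm upgrade dx-deployment ./hcl-dx-deployment \
  -n dxns \
  --reuse-values \
  --set scaling.replicas.persistenceConnectionPool=2
```

The --reuse-values flag keeps all other configured values intact, so only the overridden replica count changes.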
The following example shows a persistence cluster in a successful deployment.
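One way to inspect the cluster state is with kubectl (the dxns namespace is an assumption taken from the commands later in this topic):

```shell
# Show the persistence pods. With the default scaling above, expect three
# persistence-node pods and one persistence-connection-pool pod, all Running.
kubectl -n dxns get pods | grep persistence
```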
PostgreSQL version upgrade
PostgreSQL version 11 (PG 11) is no longer supported. Starting with CF220, the persistence node containing the PostgreSQL database is upgraded from version 11 to version 16. The upgrade happens automatically the first time CF220 or a later version is deployed. Along with PostgreSQL, repmgr is also updated to version 5.2.1. Before upgrading from a pre-CF220 release to CF220 or later, make sure to back up the database dump. Refer to Backup Persistence for instructions on exporting the dump.
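As a rough sketch of what exporting such a dump can look like (the pod name follows the pattern used later in this topic; the namespace and the database user are assumptions, so check your deployment's credentials and the Backup Persistence instructions):

```shell
# Export a SQL dump of all databases from the primary persistence pod.
# Pod name, namespace, and the postgres user are illustrative assumptions.
kubectl -n dxns exec dx-deployment-persistence-node-0 -- \
  pg_dumpall -U postgres > dam-backup.sql
```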
During the upgrade, the PG 11 data directory is not removed. To remove the data directory after upgrading to the latest CF, follow the instructions below. Make sure to change the pod name accordingly and run the commands on all persistence pods.
- Navigate to the persistence pod using the following command:
  kubectl -n dxns exec -it dx-deployment-persistence-node-0 -- bash
- Remove the PG 11 data directory by running the following command:
  rm -rf '/var/lib/pgsql/11/data/dx'
Note
Be cautious when deleting the PG 11 data directory. The mount path for PostgreSQL is /var/lib/pgsql/11/data; the PG 16 data folder resides under the same mount path, in parallel to the dx directory. Deleting the wrong directory removes all of the data.
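After the cleanup, replication health can be verified with repmgr's cluster view. A sketch, assuming the configuration-file path inside the container (the path may differ in your image):

```shell
# Show the role (primary/standby) and status of every node in the cluster.
# The repmgr.conf path is an assumption; adjust it to your image layout.
kubectl -n dxns exec dx-deployment-persistence-node-0 -- \
  repmgr -f /etc/repmgr.conf cluster show
```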
If the upgrade fails, roll back to the previous CF version and contact HCL Software Customer Support for assistance.