S3 Integration - A reference command line integration
S3 integration for backup is a reference command-line configuration. The S3 interface uses the command-line integration based on formulas that directly call AWS CLI and Minio client commands.
You can use a similar configuration approach for your own command-line integrations. S3 is a simple-to-use configuration that does not require deploying scripts, because S3 commands are invoked directly. However, the integration requires the binaries for the AWS CLI, the Minio client, or another S3 client.
This integration uses the @formula approach. There is a similar configuration type where the command line is invoked directly with a standard parameter list. The formula configuration defines the exact command line that is invoked.
Open file support required for NSF files
An important requirement for any NSF file copy operation is open file backup support. Databases in backup mode are still in use and are modified during backup. Changes are still written to the database and are also recorded into the delta buffer. The backup operation does not need to take those changes into account. The delta file created will contain all the changes, which are applied on restore to ensure the NSF file is consistent and complete.
The databases are still in use and are updated!
For example, the rclone client cannot be used for this type of integration, because it does not support copying open files.
S3 backup performance
S3 performance depends on the interface and the back-end used, and it has some inherent, technology-based performance limitations.
Leveraging S3 integration might not be the best choice for larger Domino servers, but it could be a good option for delta files, translog extents, or small servers.
Depending on the selected back-end, different storage tiers with varying performance might be available.
When implementing an integration, you should always test the performance before deploying it in a production environment.
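For example, a quick way to get a baseline is to time a copy of a representative file with the client you plan to use. A minimal sketch, assuming the s3-backup alias and domino-backup bucket configured later in this document and a hypothetical test file:
time ./mc cp test-backup.dat s3-backup/domino-backup/test-backup.dat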
S3 backup configuration for back-ends
S3 integrations require an external S3 client. We have tested the AWS S3 CLI and the Minio command-line client mc. Both clients can also be configured for different S3 back-ends. Refer to their documentation for configuration options.
Important requirement for command-line operations
Domino backup hands over control to the invoked application.
Therefore, a very important requirement is that the invoked command-line operation (shell script, batch file, etc.) always runs to completion.
In any case, you have to ensure all operations are designed to operate in batch mode and never require any interactive input!
For example, a file operation that requires confirmation to create a directory or file would cause a backup/restore operation to hang.
Most command-line tools provide a silent/batch operation mode. Ensure that even simple commands like mkdir are invoked in silent mode!
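For example, on Linux the -p and -f switches keep typical file operations non-interactive. A minimal sketch with hypothetical paths:
# Creates parent directories as needed and never prompts
mkdir -p /backup/staging/daily
# Overwrites an existing target without asking for confirmation
cp -f /backup/staging/daily/backup.dat /backup/archive/backup.dat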
DXL configurations
The following configuration examples are provided in the GitHub repository:
- S3 Minio Linux configuration
- S3 Minio Windows configuration
- S3 AWS CLI Windows
The Basic tab needs to be configured for the S3 back-end.
Minio is a great option for a local S3 setup. It can be installed natively or by using a Docker container.
Minio client installation and configuration
Installing the Minio client
Installation of the mc client requires just a single binary, downloaded and made executable with the following steps:
curl -L https://dl.min.io/client/mc/release/linux-amd64/mc -o /usr/bin/mc
chmod +x /usr/bin/mc
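As an optional check, the installed binary can be verified by printing its version:
mc --version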
Minio client configuration for AWS S3
Because S3 is a standard, you can configure different back-ends.
For example, you can also use the Minio client to access AWS S3.
However, in the case of AWS, the AWS S3 CLI is recommended!
See the Minio client configuration guide for details.
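A minimal sketch of pointing the Minio client at AWS S3, using hypothetical placeholder credentials:
# Register AWS S3 as an alias for the Minio client
mc alias set aws https://s3.amazonaws.com YOUR_ACCESS_KEY_ID YOUR_SECRET_ACCESS_KEY
# List buckets to verify the connection
mc ls aws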
Minio Docker installation
A very easy-to-use installation option is a Minio Docker container.
Check the Minio Docker image and the Minio GitHub Repository.
Create directories
For a Docker installation, you first need two directories.
- One directory will contain the data
- Another directory will contain the TLS keys
mkdir -p /local/minio-root
mkdir -p /local/minio
chown -R 1000:1000 /local
Run Minio server
The following example can be used to create a Minio container:
docker run -d --name minio \
-p 9000:9000 -p 9001:9001 \
-v /local/minio-root:/root -v /local/minio:/data \
-e MINIO_SERVER_URL="https://s3.acme.com:9000" \
-e MINIO_BROWSER_REDIRECT_URL="https://s3.acme.com:9001" \
-e "MINIO_ROOT_USER=s3-user" -e "MINIO_ROOT_PASSWORD=s3-password" \
quay.io/minio/minio server /data --console-address ":9001"
Ensure you replace the login information with a secure user and password.
Refer to the Minio Quickstart reference for an easy-to-use configuration.
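To check that the container came up correctly, the standard Docker commands can be used (assuming the container name minio from the example above):
# Show the container status
docker ps --filter name=minio
# Review the server log for startup errors
docker logs minio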
TLS configuration
Copy the private key PEM file and the certificate PEM file into the newly created directory, so that they are available at the following locations:
/root/.minio/certs/private.key
/root/.minio/certs/public.crt
Check the Minio TLS configuration page for details.
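With the volume mapping from the docker run example above, the container's /root directory corresponds to /local/minio-root on the host. A minimal sketch of staging the files, assuming hypothetical source file names key.pem and cert.pem:
mkdir -p /local/minio-root/.minio/certs
cp key.pem /local/minio-root/.minio/certs/private.key
cp cert.pem /local/minio-root/.minio/certs/public.crt
chown -R 1000:1000 /local/minio-root/.minio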
Configure back-end
The following commands register the Minio server under an alias and create a bucket for Domino backup:
./mc alias set s3-backup https://s3.acme.com:9000 s3-user s3-password
./mc mb s3-backup/domino-backup
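To verify the alias and the new bucket, run a quick listing:
./mc ls s3-backup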
Important S3 back-end configuration information
Minio does not use a prefix for connections like the AWS s3:// prefix.
If the back-end does not exist due to a misconfiguration, the Minio client assumes the parameter is a directory.
This can lead to undesired file copy operations to the local file system.
According to the Minio project, this is the desired behavior, which cannot be changed.
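For illustration, the following sketch copies a hypothetical test file with the alias configured as above:
# With the s3-backup alias configured, this copies the file to the S3 bucket
./mc cp backup.test s3-backup/domino-backup/
# Without the alias, mc would treat s3-backup/domino-backup/ as a local directory
# and copy the file to the local file system instead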
AWS CLI configuration
For AWS configurations, check the following links:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/index.html
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
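A minimal, non-interactive sketch of configuring the AWS CLI and running a test copy, assuming placeholder credentials and a hypothetical bucket name domino-backup:
# Store credentials and default region without interactive prompts
aws configure set aws_access_key_id YOUR_ACCESS_KEY_ID
aws configure set aws_secret_access_key YOUR_SECRET_ACCESS_KEY
aws configure set default.region us-east-1
# Copy a test file into the bucket
aws s3 cp backup.test s3://domino-backup/backup.test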