Migrating data from Elasticsearch 7 to OpenSearch
To preserve the Metrics, Orient Me, and Recent History data stored in Elasticsearch 7 for your Connections 7 deployment, you must migrate that data to the OpenSearch service provided with Component Pack for Connections 8.
Before you begin
Before you begin the migration, verify that the Elasticsearch 7 server is running and that your user account has the access rights required to perform the steps in this task.
To avoid losing new data while migrating the existing data, run this task during a maintenance window.
Important:
- The following persistent volume (PV) storage component is needed to restore Component Pack data to OpenSearch:
  - Component Pack service: OpenSearch
  - Required storage component: OpenSearch
  - Directories: /pv-connections/opensearchbackup
- Ensure that the OpenSearch cluster, with master, data, and client nodes, is already set up, and check that the OpenSearch cluster health is green.
For information on setting up persistent volumes, see Set up persistent volumes and persistent volume claims on NFS.
Migrate data
1. Get the list of indexes that will be migrated to the new OpenSearch instance.
   a. Run the following command:
      kubectl exec -ti -n connections $(kubectl get pods -n connections | grep es-client | awk '{print $1}') -- bash /opt/elasticsearch-7.10.1/probe/sendRequest.sh GET /_cat/indices
      Note: A green or yellow index status is fine. If any index shows a red status, fix the Elasticsearch 7 cluster before continuing.
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
   b. Record the index names for the applications that you want to migrate. These names are used in the rest of the migration process.
      Application                  Index name                                                       Number of indices
      Metrics                      icmetrics_a_YYYY_{1h | 2h} (for example, icmetrics_a_2019_2h)    Two per calendar year of data collection
      Type-ahead recent history    quickresults                                                     One
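   For example, you can narrow the listing to just the application indexes in the table above; the grep pattern below is illustrative and assumes the default index naming:
      kubectl exec -ti -n connections $(kubectl get pods -n connections | grep es-client | awk '{print $1}') -- bash /opt/elasticsearch-7.10.1/probe/sendRequest.sh GET /_cat/indices | grep -E 'icmetrics_a_|quickresults'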
2. Register the snapshot repository in the new OpenSearch cluster.
   a. Run the following commands:
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/${REPONAME} -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"compress": true, "location": "${BACKUPPATH}"}}'
      This command returns the following output:
      {"acknowledged":true}
      Where:
      ${REPONAME} is the name of the snapshot repository, which is used to register and manage the OpenSearch snapshot. The first time that you perform these steps, give the repository an appropriate name, for example, connectionsbackup.
      ${BACKUPPATH} is the mount path of the shared OpenSearch backup storage. By default, this path is /backup.
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
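   For reference, with the example values above (${REPONAME} set to connectionsbackup and ${BACKUPPATH} left at the default /backup), the registration call inside the pod would look like this:
      /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/connectionsbackup -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"compress": true, "location": "/backup"}}'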
   b. Verify that the repository is registered:
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      /usr/share/opensearch/probe/sendRequest.sh GET /_snapshot/_all?pretty
      This command returns the following output:
      { "${REPONAME}" : { "type" : "fs", "settings" : { "compress" : "true", "location" : "${BACKUPPATH}" } } }
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
3. Copy the snapshot from the old Elasticsearch 7 cluster to the new OpenSearch cluster.
   Note: The following steps do not include specific commands, because file systems vary between deployments (NFS, EFS, and the like).
   a. Ensure that you have the necessary permissions to complete this step.
   b. Go to the location that was configured as the backup storage of your old Elasticsearch 7 cluster (for example, /mnt/pv-connections/esbackup-7, or the NFS master location /pv-connections/example/esbackup-7).
   c. Package all of its content by running the following command:
      tar -cvf backup.tar *
   d. Copy the package to the location that was configured as the backup storage for the new OpenSearch cluster (for example, /mnt/pv-connections/opensearchbackup, or the NFS master location /pv-connections/example/opensearchbackup).
   e. Extract the package by running the following command:
      tar -xvf backup.tar
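   As a minimal sketch, assuming both backup locations are mounted on the same administration host at the example paths above, the whole copy could look like this:
      # Package the Elasticsearch 7 backup content (paths are examples; use your own mount points)
      cd /mnt/pv-connections/esbackup-7
      tar -cvf /tmp/backup.tar *
      # Move the archive to the OpenSearch backup location and unpack it there
      cp /tmp/backup.tar /mnt/pv-connections/opensearchbackup/
      cd /mnt/pv-connections/opensearchbackup
      tar -xvf backup.tar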
4. Verify whether any existing OpenSearch indexes are present in the new OpenSearch cluster.
   a. Run the following command:
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      /usr/share/opensearch/probe/sendRequest.sh GET /_cat/indices
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
      Note: If you do not want the default or automatically created indexes, or indexes that already exist here because you plan to migrate them from Elasticsearch 7, it is best to delete them now (either delete all, or choose the ones you want to delete):
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      To delete all indexes:
      /usr/share/opensearch/probe/sendRequest.sh DELETE /_all
      To delete specific indexes:
      /usr/share/opensearch/probe/sendRequest.sh DELETE /<index_name>
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
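   For example, to delete only the automatically created quickresults index because you plan to migrate it from Elasticsearch 7, run the following inside the client pod:
      /usr/share/opensearch/probe/sendRequest.sh DELETE /quickresults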
   b. Do one of the following:
      - If the output shows no existing indexes with the same names that you recorded in step 1 (ignoring any default index, such as opendistro_security):
        Restore the snapshot directly by running the following commands:
        kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
        /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/${REPONAME}/snapshot_migration/_restore
        This command returns the following output:
        {"accepted":true}
        Where ${REPONAME} is the name of the snapshot repository, for example, connectionsbackup.
        Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
        Then skip to the Verify migrated data in the user interface section.
      - If the output shows existing indexes with the same names that you recorded in step 1, continue with the next steps to resolve the conflicts.
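   In either case, you can watch restore progress from inside the client pod before continuing; the _cat/recovery API, shown here as one option, lists only the recoveries that are still running:
      /usr/share/opensearch/probe/sendRequest.sh GET /_cat/recovery?active_only=true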
5. Restore the index names that do not have any conflict:
   kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
   /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/${REPONAME}/snapshot_migration/_restore \
     -H 'Content-Type: application/json' \
     -d '{
       "indices": "comma_separated_list_of_index_names_with_no_conflict",
       "ignore_unavailable": true
     }'
   Where ${REPONAME} is the name of the snapshot repository, for example, connectionsbackup.
   Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
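   For example, if quickresults is the only index without a conflict, the restore request inside the pod might look like this (repository name as registered earlier):
      /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/connectionsbackup/snapshot_migration/_restore -H 'Content-Type: application/json' -d '{"indices": "quickresults", "ignore_unavailable": true}'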
6. Restore the index names that have conflicts, using temporary names.
   Run the following commands to restore the snapshot using a temporary name for each index that has a conflict. The Metrics application is used as an example in the following commands and steps:
   kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
   /usr/share/opensearch/probe/sendRequest.sh POST /_snapshot/${REPONAME}/snapshot_migration/_restore \
     -H 'Content-Type: application/json' \
     -d '{
       "indices": "comma_separated_list_of_index_names_WITH_conflict",
       "ignore_unavailable": true,
       "rename_pattern": "icmetrics_a_(.+)",
       "rename_replacement": "migrated_icmetrics_a_$1"
     }'
   Where ${REPONAME} is the name of the snapshot repository, for example, connectionsbackup.
   Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
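   With this rename pattern, an index such as icmetrics_a_2019_2h is restored under the temporary name migrated_icmetrics_a_2019_2h. You can confirm that the temporary indexes exist by listing the indexes again:
      /usr/share/opensearch/probe/sendRequest.sh GET /_cat/indices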
7. Re-index data from the restored temporary indexes to the target indexes.
   a. For every temporary index that you restored and renamed in the previous step, run the following commands to re-index the migrated data into its target index (run these commands for one index at a time):
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      /usr/share/opensearch/probe/sendRequest.sh POST /_reindex \
        -H 'Content-Type: application/json' \
        -d '{
          "source": { "index": "migrated_ONE_index_name_with_conflict" },
          "dest": { "index": "Corresponding_EXISTING_index_name" }
        }'
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
   b. Delete each temporary index that you re-indexed in the previous substep by running the following commands:
      kubectl exec -ti -n connections $(kubectl get pods -n connections -o wide | grep opensearch-cluster-client | awk '{print $1}' | head -n 1) -- bash
      /usr/share/opensearch/probe/sendRequest.sh DELETE /migrated_ONE_index_name_with_conflict
      Disconnect from the pod (press Ctrl+D, or type exit and press Enter).
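Continuing the Metrics example, one complete re-index and cleanup cycle for a single conflicting index might look like this inside the client pod (index names are illustrative):
   # Copy the migrated documents into the existing target index
   /usr/share/opensearch/probe/sendRequest.sh POST /_reindex -H 'Content-Type: application/json' -d '{"source": {"index": "migrated_icmetrics_a_2019_2h"}, "dest": {"index": "icmetrics_a_2019_2h"}}'
   # Remove the temporary index after the re-index completes
   /usr/share/opensearch/probe/sendRequest.sh DELETE /migrated_icmetrics_a_2019_2h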
-
Verify migrated data in the user interface
Use the UI to verify that the Metrics and type-ahead search data was migrated successfully.
1. Verify Metrics data:
   a. Log in to Metrics as the organization admin, using the following address:
      https://your_Connections_server_host_name/metrics/orgapp#/
   b. View a global report and verify that the migrated data displays.
   c. Verify that you can view reports by Group by in either Community Metrics or Global Metrics.
2. Verify type-ahead search data:
   a. Log in to Connections.
   b. Navigate to some content, such as a forum, wiki, or community.
   c. Open the type-ahead search sidebar by clicking the Search icon. Verify that the content you viewed appears at the top of the list.
Parent topic: Steps to install or upgrade to Component Pack 8 CR2