Upgrading a core element of your infrastructure without downtime is usually quite tricky. In this article I’ll walk you through the approach our team took to migrate our Elasticsearch cluster.
But first, a bit of context. We’re running a containerized Laravel application in an AWS-hosted Kubernetes cluster (AWS EKS). Our application uses MySQL as its primary data source, plus Elasticsearch (ES) for improved searching and aggregation of data. Before this migration we ran ES 2.4 on AWS, using their Elasticsearch Service. That version was outdated, to say the least. We’re now running a 6.8 cluster provisioned through Elastic’s Cloud solution, which gives us more control and is still hosted in the same AWS region, so latency stays low.
The approach we took within the team was to run an extra container with our Laravel application in the production cluster. This container ran the code of a separate branch, including an updated elasticsearch-php Composer package and a separate Kubernetes ConfigMap referencing the new ES cluster in Elastic Cloud.
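As a rough sketch, such a ConfigMap could hold the new cluster’s connection details. The resource name, keys, and endpoint below are hypothetical placeholders, not our actual configuration:

```yaml
# Hypothetical ConfigMap pointing the migration container at the new
# Elastic Cloud cluster. Names and values are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-es68
data:
  ELASTICSEARCH_HOST: "https://my-cluster.es.eu-west-1.aws.found.io:9243"
  ELASTICSEARCH_VERSION: "6.8"
```

The running application keeps its existing ConfigMap untouched; only the extra container consumes this one.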
In essence our approach boils down to the following:
- Set up an ES 6.8 cluster;
- Create a new branch of your production code with all the changes needed for ES 6.8, including the Kubernetes configuration;
- Build the container image with the code of that specific branch;
- Create a separate Kubernetes Deployment for the extra container, so that your running application will not be affected;
- Deploy the container to your Kubernetes cluster;
- Start indexing your source data into the new cluster;
- After your indexing is done, and you know that it is up to date, merge the branch, build the container image and deploy;
- Your application should now point to the new ES cluster, and you can index the objects that may have changed or have been added during the deployment;
- You can now remove the extra container and its configuration from your Kubernetes cluster, as you don’t need it anymore.
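To make the steps above concrete, the extra Deployment could look roughly like the following. The image name, labels, and ConfigMap reference are placeholders, not our real setup:

```yaml
# Hypothetical Deployment for the migration container. It runs alongside
# the existing application without receiving any traffic, because no
# Service selector matches its labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-es68-migration
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-es68-migration
  template:
    metadata:
      labels:
        app: app-es68-migration
    spec:
      containers:
        - name: app
          image: registry.example.com/app:es68-migration  # image built from the ES 6.8 branch
          envFrom:
            - configMapRef:
                name: app-config-es68  # the separate ConfigMap with the new ES endpoint
```

Because it is a separate Deployment with its own labels, deleting it afterwards is a single kubectl delete deployment command that leaves the production workload untouched.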
If your data doesn’t change frequently, you could also choose to define a Kubernetes Job using the new container image and the correct configuration. As we wanted the ability to run multiple commands and access the container’s shell, we decided against relying on that route alone. In the end, we did create several Jobs using the new container image to index data into our new cluster in parallel.
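One such indexing Job might be sketched as follows. The command is a hypothetical Laravel artisan command for reindexing a subset of the data, and the image and names are placeholders:

```yaml
# Hypothetical Job that indexes one portion of the source data into the
# new cluster. Several of these can run in parallel, one per data set.
apiVersion: batch/v1
kind: Job
metadata:
  name: es68-index-products
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: indexer
          image: registry.example.com/app:es68-migration  # image built from the ES 6.8 branch
          command: ["php", "artisan", "es:index", "products"]  # illustrative artisan command
          envFrom:
            - configMapRef:
                name: app-config-es68  # the separate ConfigMap with the new ES endpoint
```

Running one Job per index (or per data set) is a simple way to parallelize the initial indexing without touching the live application.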
There are a lot of ways to migrate data and/or update structural components of your application without downtime. This approach of adding an extra container to our Kubernetes cluster worked well for us. So far I’m quite happy with the results, and our users barely noticed that we migrated, apart from the odd bug caused by incompatible ES queries.
P.S. If you’ve enjoyed this article or found it helpful, please share it, or check out my other articles. I’m on Instagram and Twitter too if you’d like to follow along on my adventures and other writings, or comment on the article.