Upgrading Upsource cluster
Follow this guide to upgrade your existing Upsource cluster installation to a newer version. The upgrade procedure depends on how far apart the versions are: a minor upgrade (e.g. 3.5.1 to 3.5.2) and a major upgrade (e.g. 3.5.X to 4.0.X) follow different steps, as outlined below.
Minor upgrade
Use these instructions when upgrading to a version that differs only by build number, for example, from 3.5.111 to 3.5.222.
- Stop the Upsource cluster and remove its containers:
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> stop
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> rm
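For example, if the Swarm manager listened on swarm-master.local:2376 (hypothetical host and port; substitute your own), the two calls would look like this:
./cluster.sh -H tcp://swarm-master.local:2376 stop
./cluster.sh -H tcp://swarm-master.local:2376 rm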
- Download cluster-config-<major.minor.NewBuildNumber>.zip
Important! If you've ever changed the cluster.sh and docker-compose.yml files (for example, added new analyzer properties), you need to reapply those changes to the newly downloaded files.
- Check that the correct UPSOURCE_VERSION is set inside cluster.sh
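A quick way to verify this, assuming UPSOURCE_VERSION is declared as a plain shell variable inside the script:
grep UPSOURCE_VERSION cluster.sh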
- Start the Upsource cluster:
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up
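To confirm that the containers came back up on the new build, you can list them through the Swarm manager (this reuses the docker CLI pattern from the steps above):
docker -H tcp://<swarm.master.host>:<swarm.master.port> ps --format "{{.Names}}: {{.Image}} {{.Status}}"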
Major upgrade
Use these instructions when upgrading to a new version (anything beyond a build-number change), for example, from 3.5.111 to 4.0.111.
- Create a backup of your existing installation.
- Delete the data from your Cassandra database instance. Indexed data stored in Cassandra cannot be migrated during a major upgrade; all user-generated data will be restored from the backup.
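A minimal sketch of one way to do this with cqlsh, assuming you can reach the Cassandra node directly; the keyspace name depends on your installation, so list the keyspaces first and then drop the one Upsource uses:
cqlsh <cassandra.host> -e "DESCRIBE KEYSPACES"
cqlsh <cassandra.host> -e "DROP KEYSPACE <upsource.keyspace>"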
- Stop the Upsource cluster and remove its containers:
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> stop
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> rm
- Copy your backup to a temporary folder on the host where cluster-init will be started (let's assume the folder containing the backup is /tmp/upsource/backup/2016 Oct 11 12-18-26).
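For example, if the backup lives on a separate backup server (hypothetical host and source path), it could be staged over SSH; note the extra escaping classic scp needs because the folder name contains spaces:
scp -r 'backup.host:/var/backups/upsource/2016\ Oct\ 11\ 12-18-26' /tmp/upsource/backup/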
- Run the following command so that the backup is owned by the user Upsource runs as inside its containers (UID/GID 13001):
chown -R 13001:13001 "/tmp/upsource/backup/2016 Oct 11 12-18-26"
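You can verify the result with ls -ln, which prints numeric owner and group IDs:
ls -ln /tmp/upsource/backup/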
- Run the new version's cluster-init process on the host where the backup was copied to (the backup location and the Hub certificate, if any, are provided as volumes):
docker -H <docker host where backup was copied to> run \
    -v /var/log/upsource/cluster-init:/opt/upsource-cluster-init/logs \
    -v /opt/hub/cert:/opt/upsource-cluster-init/conf/cert \
    -v "/tmp/upsource/backup/2016 Oct 11 12-18-26/data":/opt/upsource-cluster-init/data \
    --env-file=upsource.env \
    jetbrains/upsource-cluster-init:<major.minor.NewBuildNumber>
- Check that it completed successfully. The cluster-init execution logs are located in the /var/log/upsource/cluster-init directory of the node on which the container was started. The following command shows which node cluster-init was executed on:
docker -H tcp://<swarm.master.host>:<swarm.master.port> ps -a --format "{{.ID}}: {{.Names}} {{.Image}}"
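Once you know the container ID from that listing, you can also read the container's output directly instead of the log files on disk:
docker -H tcp://<swarm.master.host>:<swarm.master.port> logs <container.id>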
- Download cluster-config-<major.minor.NewBuildNumber>.zip
Important! If you've ever changed the cluster.sh and docker-compose.yml files (for example, added new analyzer properties), you need to reapply those changes to the newly downloaded files.
- Check that the correct UPSOURCE_VERSION is set inside cluster.sh
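If you carry local modifications, diffing your previous copies against the freshly downloaded files makes them easy to reapply (the .old names are just an assumption about how you keep the previous versions around):
diff -u cluster.sh.old cluster.sh
diff -u docker-compose.yml.old docker-compose.yml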
- Start the Upsource cluster:
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up
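As a final check, you can poll the Upsource frontend until it responds; the base URL here is an assumption taken from your own configuration:
curl -fsS http://<upsource.base.url>/ > /dev/null && echo "Upsource is up"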