Setting up Upsource cluster
This is a step-by-step guide to installing Upsource in a distributed multi-node cluster. The installation procedure only needs to be performed once. After it's completed, the services are managed by the standard docker-compose tool (or, more precisely, by cluster.sh — a docker-compose wrapper provided by JetBrains that has the same command-line format).
What's included
Upsource cluster consists of the following services (the names match the deployment and logging steps later in this guide): frontend, psi, analyzer, opscenter, and haproxy, plus a one-time cluster-init service that initializes the database.
The cluster structure is defined by the docker-compose.yml file and is parameterized by the properties defined in the upsource.env file. Both files are included in the cluster-config artifact (you can download the latest version from here).
Upsource cluster doesn't include:
- JetBrains Hub
- Cassandra database
Prerequisites
- Install JetBrains Hub or use an existing standalone instance.
- Install the Cassandra database (see the Cassandra installation guide).
- Install Lucene libraries: download and unpack the Lucene libs archive, then place its contents (all the .jar files) into the /lib folder of your Cassandra installation directory.
- Set up a Docker Swarm cluster. Refer to this instruction to set up a key-value based swarm cluster.
Note on the instruction above:
Make sure the time on all cluster nodes, as well as on the Hub and Cassandra nodes, is synchronized (systemd timers are one possible way to synchronize time).
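For example, on systemd-based distributions you can enable the built-in NTP synchronization (a minimal sketch; your environment may rely on chrony or another mechanism instead):
# Enable NTP-based time synchronization via systemd-timesyncd
timedatectl set-ntp true
# Verify that the system clock is reported as synchronized
timedatectl status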
Configure Upsource Cluster
Unpack cluster-config.zip on a host from which you're going to manage the Upsource cluster — we'll be referring to it as the cluster admin host.
You can use any server, not necessarily a swarm manager or a swarm node. Just make sure that once you've selected it, the cluster is managed from that admin host only.
- Make sure all of these files are located in the same directory:
- cluster.sh: a wrapper for the standard docker-compose tool. cluster.sh defines some variables that are substituted into docker-compose.yml.
- docker-compose.yml: defines the Upsource cluster structure.
- upsource.env: defines properties passed to the Upsource services running inside Docker containers.
- docker-compose-params.env: defines parameters used in docker-compose.yml (cluster.sh defines default values for these parameters, and docker-compose-params.env overrides the defaults when needed, depending on the environment).
The port the Upsource cluster listens on is defined by the UPSOURCE_EXPOSED_PROXY_PORT property in cluster.sh and defaults to 8080. This property can be overridden in docker-compose-params.env:
UPSOURCE_EXPOSED_PROXY_PORT=<The port number Upsource should listen to>
Define a swarm node on which the opscenter service should be deployed by specifying a value for the UPSOURCE_OPSCENTER_NODE variable in docker-compose-params.env:
UPSOURCE_OPSCENTER_NODE=<opscenter_nodeId>
where opscenter_nodeId is the name of the swarm worker node you're defining.
On the node you specified in the previous step (opscenter_nodeId), create a folder for backups and give read-write permissions to the user with ID 13001 (inside a container, the Upsource service runs as the user jetbrains with ID 13001 and would otherwise have no access to the mapped volume on the host machine).
Although the backups volume is mapped to the /opt/upsource/backups folder on the host machine by default, this can be customized in docker-compose-params.env by editing the UPSOURCE_BACKUPS_PATH_ON_HOST_SYSTEM property. Run the following commands on the swarm node (opscenter_nodeId), assuming the UPSOURCE_BACKUPS_PATH_ON_HOST_SYSTEM property was not changed (otherwise, run them against the overridden backups directory):
mkdir -p -m 750 /opt/upsource/backups
chown 13001:13001 /opt/upsource/backups
- Define a swarm node on which the haproxy service should be deployed by specifying a value for the UPSOURCE_PROXY_NODE variable in docker-compose-params.env:
UPSOURCE_PROXY_NODE=<haproxy_nodeId>
where haproxy_nodeId is the name of the swarm worker node you're defining.
Note: it is recommended to deploy the haproxy and opscenter services on the same node. Otherwise, in some environments there might be connectivity problems between the haproxy and opscenter containers (when routing a request to the opscenter service, haproxy addresses it by its network alias, not by the overlay network's raw IP address).
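Putting the parameters from the previous steps together, a docker-compose-params.env might look like this (a sketch only; the node name and port are illustrative placeholders, with both services placed on the same node per the note above):
# Illustrative values; substitute your own node names and port
UPSOURCE_EXPOSED_PROXY_PORT=8080
UPSOURCE_OPSCENTER_NODE=swarm-worker-1
UPSOURCE_PROXY_NODE=swarm-worker-1
UPSOURCE_BACKUPS_PATH_ON_HOST_SYSTEM=/opt/upsource/backups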
- On all the swarm nodes, pre-create the service log directories and give the user with ID 13001 read-write access to them (the Upsource service runs under the user jetbrains with ID 13001 inside a container, and will have no access to the mapped volumes on the host machine otherwise):
mkdir -p -m 750 /var/log/upsource/psi
chown 13001:13001 /var/log/upsource/psi
mkdir -p -m 750 /var/log/upsource/analyzer
chown 13001:13001 /var/log/upsource/analyzer
mkdir -p -m 750 /var/log/upsource/frontend
chown 13001:13001 /var/log/upsource/frontend
mkdir -p -m 750 /var/log/upsource/opscenter
chown 13001:13001 /var/log/upsource/opscenter
mkdir -p -m 750 /var/log/upsource/cluster-init
chown 13001:13001 /var/log/upsource/cluster-init
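Equivalently, assuming a Bash shell, the same directories can be created in a loop:
# Create and chown a log directory for each Upsource service
for svc in psi analyzer frontend opscenter cluster-init; do
    mkdir -p -m 750 "/var/log/upsource/$svc"
    chown 13001:13001 "/var/log/upsource/$svc"
done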
- In upsource.env, set the CASSANDRA_HOSTS and CASSANDRA_PORT properties:
CASSANDRA_HOSTS=<comma-separated list of cassandra cluster hosts>
CASSANDRA_PORT=<node port number in cassandra cluster>
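For example (the host names are placeholders; 9042 is Cassandra's default native transport port):
CASSANDRA_HOSTS=cassandra-1.example.com,cassandra-2.example.com,cassandra-3.example.com
CASSANDRA_PORT=9042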
- Define Hub-related properties:
HUB_URL=<URL of external Hub>
- Import a Hub certificate to Upsource. Skip this step unless Hub is available over HTTPS with a self-signed certificate or a certificate signed by a private CA; one possible approach is sketched below.
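The cluster-init command below mounts /opt/hub/cert into the container, so one way to make the Hub certificate available is to export it into that directory. This is only a sketch, not the official procedure; <hub.host> and <hub.port> are placeholders:
# Fetch Hub's certificate and store it in PEM format
mkdir -p /opt/hub/cert
openssl s_client -connect <hub.host>:<hub.port> -showcerts </dev/null 2>/dev/null \
    | openssl x509 -outform PEM > /opt/hub/cert/hub.crt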
- Set the UPSOURCE_URL property to the URL the end users will use to access Upsource.
UPSOURCE_URL=<URL for end users to access Upsource>
By default, Upsource is available at: http://<haproxy_nodeId.address>:${UPSOURCE_EXPOSED_PROXY_PORT}/
where haproxy_nodeId is the node to which the haproxy service is deployed.
Your production environment will most likely be set up behind an SSL-terminating proxy (see how to configure a proxy for Upsource; the only difference in a cluster is that there is no need to run the Upsource configure command, as the Upsource cluster base URL is instead defined as the value of the UPSOURCE_URL property). In this case, set the proxy address as the value of the UPSOURCE_URL variable.
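For instance, if the proxy serves Upsource at the hypothetical address https://upsource.example.com:
UPSOURCE_URL=https://upsource.example.com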
Let's assume the value of the UPSOURCE_URL variable was set to some <upsource.url>. In this case you should:
- Create a trusted service in Hub (on the page <hub.url>/hub/services) for the Upsource cluster. Note the service ID and secret, as you will need them later to set UPSOURCE_SERVICE_ID and UPSOURCE_SERVICE_SECRET in the upsource.env file.
This service will be used by all Upsource services to communicate with Hub.
Add redirect URLs for the created service:
- <upsource.url>
- <upsource.url>/~download
- <upsource.url>/~generatedTree
- <upsource.url>/~oauth/github
- <upsource.url>/~unsubscribe
- <upsource.url>/~uploads
- <upsource.url>/monitoring
Set the Upsource service home URL to <upsource.url>.
- Set UPSOURCE_SERVICE_ID and UPSOURCE_SERVICE_SECRET to the values defined in the Hub trusted service you've created:
UPSOURCE_SERVICE_ID=<Key of the trusted service pre-configured in Hub>
UPSOURCE_SERVICE_SECRET=<Secret of the trusted service pre-configured in Hub>
- Start the Upsource cluster-init service that will initialize the Upsource data structures in the Cassandra instance:
docker -H tcp://<swarm.master.host>:<swarm.master.port> run \
    -v /var/log/upsource/cluster-init:/opt/upsource-cluster-init/logs \
    -v /opt/hub/cert:/opt/upsource-cluster-init/conf/cert \
    --env-file=upsource.env \
    jetbrains/upsource-cluster-init:<major.minor.buildNumber>
- Check that upsource-cluster-init has completed successfully. The logs of the cluster-init execution are located in the /var/log/upsource/cluster-init directory of the node where the container was started. The following command shows on which node the cluster-init was executed:
docker -H tcp://<swarm.master.host>:<swarm.master.port> ps -a --format "{{.ID}}: {{.Names}} {{.Image}}"
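Once you know the node, you can inspect the init logs on it (a sketch; the exact log file names may vary):
# On the node that ran cluster-init: list the log files, then inspect them
ls /var/log/upsource/cluster-init/
tail -n 100 /var/log/upsource/cluster-init/*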
Start Upsource cluster
- Make sure your external Hub is started and available prior to Upsource cluster startup.
- Make sure docker-compose (version 1.8.1 or higher) is installed on the node from which you'd like to manage the cluster:
docker-compose -v
- Launch the Upsource cluster:
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> up
cluster.sh has the same command-line format as docker-compose, since cluster.sh is simply a wrapper around docker-compose; see the examples below.
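Because cluster.sh mirrors the docker-compose command line, the usual docker-compose subcommands work the same way. For example (ps and stop are standard docker-compose subcommands; the swarm master address is the same placeholder as above):
# List the running Upsource services
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> ps
# Stop the cluster
./cluster.sh -H tcp://<swarm.master.host>:<swarm.master.port> stop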