Run Environment
The Space Automation run environment is based on the concept of workers. A worker is a lightweight agent that connects to Space Automation, gets jobs and source code, runs the jobs, and reports results back to Space. A worker can run in virtual machines in the Space Automation Cloud, on your own self-hosted machines, and in Docker containers. The following table summarizes the possible run environments:
Step type | Environment | Description | OS and resources |
---|---|---|---|
`host` | Space Cloud workers | Virtual machines hosted in the Space cloud infrastructure. Learn more | OS: Linux. Planned: macOS, Windows. Default: 2 vCPU, 7800 MB. Large: 4 vCPU, 15600 MB. Extra large: 8 vCPU, 31200 MB (not available on the Free plan). |
`container` | Containers in Space Cloud | Docker containers running on the Space cloud workers. Learn more | OS: Linux only. Default: 2 vCPU, 7800 MB. Max: 8 vCPU, 31200 MB. Max on the Free plan: 4 vCPU, 15600 MB. |
`host` | Self-hosted workers | Self-hosted hardware or virtual machines. Learn more | OS: Linux, macOS, Windows. All available resources of the host machine. |
`container` | Containers in self-hosted workers | Docker containers running on self-hosted hardware or virtual machines. Learn more | OS: Linux only. All resources allocated to the container on the host machine. |
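The two step types map directly to the project's `.space.kts` DSL: a `host` step runs a script directly on the worker, while a `container` step runs it inside a Docker image. A minimal sketch (the job name, image, and script contents are illustrative):

```kotlin
// .space.kts — illustrative sketch of the two step types
job("Step types example") {
    // Runs inside a Docker container on the worker (Linux only)
    container(displayName = "In container", image = "ubuntu:22.04") {
        shellScript {
            content = "echo Running in a container"
        }
    }
    // Runs directly on the worker's OS
    host("On host") {
        shellScript {
            content = "echo Running on the host"
        }
    }
}
```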
Choose run environment for a job
The environment where a job eventually runs depends on:
- The default worker pool selected for the organization or a project: Space Automation Cloud (default) or Self-Hosted Workers.
- The requirements of a job: `job.requirements`.
- A step type: `job.container` or `job.host`.
- The requirements of a step: `job.container.requirements` or `job.host.requirements`.
To better understand how to run a job in a particular environment, see Examples.
Default worker pool
The default pool for running jobs is defined by the Default worker pool parameter on the organization and project levels. The project-level parameter has priority over the organization-level one.
To change the default worker pool for the organization
On the main menu, click Administration and choose Automation.
Set the Default worker pool parameter:
Space Automation Cloud – to use cloud workers.
Self-Hosted Workers – to use self-hosted workers.
To change the default worker pool for a project
Open the project and then open the Jobs page.
Click Settings.
Set the Default worker pool parameter: Space Automation Cloud or Self-Hosted Workers.
Job requirements
The `job.requirements` block makes requirements to the run environment more specific based on the following parameters:
- Worker pool
`job.requirements.workerPool`: if specified, overrides the Default worker pool value. Possible values:
  - `WorkerPools.SPACE_CLOUD` or `"space-cloud"` for Space Automation Cloud.
  - `WorkerPools.SELF_HOSTED` or `"self-hosted.default"` for self-hosted workers.
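For example, a job can be pinned to the self-hosted pool regardless of the Default worker pool setting. A sketch (the job name and script are illustrative):

```kotlin
// .space.kts — pin this job to self-hosted workers,
// overriding the Default worker pool setting
job("Build on self-hosted") {
    requirements {
        workerPool = WorkerPools.SELF_HOSTED
    }
    host("Build") {
        shellScript {
            content = "echo Building on a self-hosted worker"
        }
    }
}
```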
- Worker type
`job.requirements.workerType`: specifies a worker instance type. Currently, this parameter is applicable only to Space Automation Cloud:
  - `WorkerTypes.SPACE_CLOUD_UBUNTU_LTS_REGULAR` or `"space-cloud.ubuntu-lts.regular"` (default)
  - `WorkerTypes.SPACE_CLOUD_UBUNTU_LTS_LARGE` or `"space-cloud.ubuntu-lts.large"`
  - `WorkerTypes.SPACE_CLOUD_UBUNTU_LTS_XLARGE` or `"space-cloud.ubuntu-lts.xlarge"`
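For instance, a resource-heavy job can request a large cloud worker instance. A sketch (the job name, image, and script are illustrative):

```kotlin
// .space.kts — request a large cloud worker instance (4 vCPU, 15600 MB)
job("Heavy build") {
    requirements {
        workerType = WorkerTypes.SPACE_CLOUD_UBUNTU_LTS_LARGE
    }
    container(displayName = "Build", image = "ubuntu:22.04") {
        shellScript {
            content = "echo Building on a large worker"
        }
    }
}
```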
- Resources
`job.requirements.resources`: depending on the worker pool, Space will choose either a cloud worker or a self-hosted worker instance that meets the specified resource requirements: `minCpu` and `minMemory`. Note that if you also specify `resources` on the `host` level, Space will look for a suitable worker based on the higher resource requirements. For example:

```kotlin
// This job will run on a worker
// that has at least 2.cpu and 4000.mb
job("Example") {
    requirements {
        resources {
            minCpu = 1.cpu
            minMemory = 2000.mb
        }
    }
    // The container will be limited to 1.cpu and 2000.mb
    container(displayName = "Say Hello", image = "hello-world")
    host("Say Hello 2") {
        shellScript {
            content = "echo Hello World!"
        }
        // These requirements override the job's requirements
        // as they are higher
        requirements {
            resources {
                minCpu = 2.cpu
                minMemory = 4000.mb
            }
        }
    }
}
```
Run environments in Space On-Premises
Support for different run environments depends on the Space On-Premises installation type:
Docker Compose installation:
- Self-hosted workers – full support.
- Cloud workers – not supported.
Kubernetes installation:
- Self-hosted workers – full support.
- Cloud workers – as Kubernetes implies running workloads in containers only, the behavior differs if Automation is configured to run a job in the cloud (for example, Space Automation Cloud is selected in the Jobs settings, or `job.requirements` has `workerPool = WorkerPools.SPACE_CLOUD`). In this case, even if a job uses a `host` block, Automation will run it in a Docker container:
  - Regardless of what is specified in `job.requirements.workerType`, the container has 2 vCPU and 7800 MB of memory.
  - The container image is based on Alpine Linux and provides support for Docker and Docker Compose.
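To illustrate, on a Kubernetes installation a job like the following sketch would run in the Alpine-based container with 2 vCPU and 7800 MB, despite the `host` block and the `workerType` request (the job name and script are illustrative):

```kotlin
// .space.kts — on a Kubernetes-based Space On-Premises installation,
// this host step still runs in a Docker container with fixed resources
job("Host step on Kubernetes") {
    requirements {
        workerPool = WorkerPools.SPACE_CLOUD
        workerType = WorkerTypes.SPACE_CLOUD_UBUNTU_LTS_LARGE // ignored here
    }
    host("Run") {
        shellScript {
            content = "echo Runs in an Alpine-based container"
        }
    }
}
```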
Examples
Below you will find examples of how to run jobs in various environments.