Manage AI Enterprise
AI Enterprise lets you use different providers of AI services across your organization: the JetBrains AI service, custom OpenAI models, Google Vertex AI, or Amazon Bedrock. You can enable several providers and choose a preferred one for specific user profiles.
note
Before activating the product for your organization, make sure that JetBrains AI is enabled in your JetBrains Account.
Enabling AI Enterprise for your organization involves the following steps:
In the Web UI, open the Configuration page and navigate to the License & Activation tab.
Scroll down to the AI Enterprise section and click Enable.
In the Enable AI Enterprise dialog, choose one of the AI providers:
JetBrains AI service (Learn more)
Custom OpenAI models (Learn more)
Google Vertex AI with Gemini (Learn more)
AWS Bedrock (Learn more)
If you'd like to use different AI providers for specific profiles, you can add and enable an additional provider at any time.
After enabling AI Enterprise in your organization, you need to select and enable an AI provider for relevant profiles. Until then, developers won't have access to AI features and the AI Assistant plugin.
By default, the AI features in JetBrains products are powered by the JetBrains AI service. This service transparently connects you to different large language models (LLMs) and enables specific AI-powered features within JetBrains products. It is backed by OpenAI and Google as the primary third-party providers, along with several proprietary JetBrains models. JetBrains AI is deployed as a cloud solution on the JetBrains side and does not require any additional configuration on your side.
In the AI Enterprise section, click Enable.
In the Enable AI Enterprise dialog, select the JetBrains AI provider.
note
This step is only required if you use the Pay as you go billing model.
Set the usage limit for AI Enterprise.
Unlimited usage: enable the Unlimited option.
Limited number of users: disable the Unlimited option and specify the limit on the number of AI Enterprise users.
Click Apply.
AI Enterprise works with Google Vertex AI, Amazon Bedrock, and selected presets powered by OpenAI.
warning
The OpenAI GPT-4o mini model is now required when configuring the OpenAI provider.
If you previously configured the OpenAI provider with the OpenAI Platform or Azure OpenAI preset, you need to manually update the configuration.
Before starting, make sure to set up your OpenAI Platform account and get an API key for authentication. For more information, refer to the OpenAI documentation.
In the AI Enterprise section, click Enable.
In the Enable AI Enterprise dialog, specify the following details:
Select the OpenAI provider.
Select OpenAI Platform from the Preset list.
Provide an endpoint for communicating with the OpenAI service. For example, https://api.openai.com/v1.
Provide your API key to authenticate to the OpenAI API. For more details, refer to the OpenAI documentation.
(Optional) AI Enterprise uses the GPT-3.5-Turbo, GPT-4, and GPT-4o mini models for AI-powered features within JetBrains products. However, if you have the GPT-4o model available on your account, we recommend adding it to the list by clicking Add optional model.
note
This step is only required if you use the Pay as you go billing model.
Set the usage limit for AI Enterprise.
Unlimited usage: enable the Unlimited option.
Limited number of users: disable the Unlimited option and specify the limit on the number of AI Enterprise users.
Click Apply.
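Before pasting the endpoint and key into the dialog, you can sanity-check them locally. A minimal sketch, assuming an OpenAI-compatible chat-completions route; the API key below is a placeholder:

```python
# Sketch: assemble the URL and auth headers for an OpenAI-style
# chat-completions call. The API key value here is a placeholder.

def build_chat_request(endpoint: str, api_key: str) -> tuple[str, dict]:
    """Return the chat-completions URL and headers for an OpenAI-style API."""
    url = endpoint.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # OpenAI Platform uses Bearer auth
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_chat_request("https://api.openai.com/v1", "sk-PLACEHOLDER")
```

Sending a small test request to this URL with your real key (for example, with curl) confirms the key is valid before you commit the configuration.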
Before starting, make sure to create an Azure OpenAI resource and deploy the required models: GPT-3.5-Turbo, GPT-4, and GPT-4o mini. For more details, refer to the Azure OpenAI Service documentation.
In the AI Enterprise section, click Enable.
In the Enable AI Enterprise dialog, specify the following details:
Select the OpenAI provider.
Select Azure OpenAI from the Preset list.
Provide an endpoint for communicating with the Azure OpenAI service. For example, https://YOUR_RESOURCE_NAME.openai.azure.com.
Provide your API key to authenticate to the Azure OpenAI API.
Specify the deployment names of your models. For more details, refer to the Azure OpenAI Service documentation.
(Optional) AI Enterprise uses the GPT-3.5-Turbo, GPT-4, and GPT-4o mini models for AI-powered features within JetBrains products. However, if you have the GPT-4o model available on your account, we recommend adding it to the list by clicking Add optional model.
note
This step is only required if you use the Pay as you go billing model.
Set the usage limit for AI Enterprise.
Unlimited usage: enable the Unlimited option.
Limited number of users: disable the Unlimited option and specify the limit on the number of AI Enterprise users.
Click Apply.
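Unlike the OpenAI Platform preset, Azure routes requests to per-deployment URLs and authenticates with an api-key header rather than a Bearer token. A minimal sketch of how the endpoint and a deployment name combine; the resource name, deployment name, and API version below are placeholders:

```python
# Sketch: combine the Azure OpenAI endpoint with a deployment name.
# Azure addresses models by deployment name (step "Specify the deployment
# names of your models" above), not by model name.

def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

url = azure_chat_url("https://YOUR_RESOURCE_NAME.openai.azure.com",
                     "gpt-4", "2024-02-01")
headers = {"api-key": "AZURE_KEY_PLACEHOLDER"}  # Azure expects "api-key", not "Authorization"
```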
Before starting, make sure to set up your Google Cloud account, create a service account key in the JSON format, and enable the gemini-1.5-pro and gemini-1.5-flash models.
In the AI Enterprise section, click Enable.
In the Enable AI Enterprise dialog, specify the following details:
Select the Google Vertex AI provider.
In the Project field, specify the name of the Google Cloud project.
In the Region field, specify the Google Vertex AI region.
In the Token field, specify the service account key in the JSON format.
note
This step is only required if you use the Pay as you go billing model.
Set the usage limit for AI Enterprise.
Unlimited usage: enable the Unlimited option.
Limited number of users: disable the Unlimited option and specify the limit on the number of AI Enterprise users.
Click Apply.
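The Token field expects a service account key in the JSON format. Before pasting it, you can check that it has the fields a standard Google Cloud service account key contains; a minimal sketch with placeholder values:

```python
import json

# Sketch: verify that a pasted token looks like a Google Cloud service
# account key before entering it in the Token field. Field names follow
# the standard service account key JSON format.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_service_account_key(raw: str) -> str:
    key = json.loads(raw)
    missing = REQUIRED_FIELDS - key.keys()
    if missing or key.get("type") != "service_account":
        raise ValueError(f"not a service account key; missing {sorted(missing)}")
    return key["project_id"]  # reuse this value in the Project field

sample = json.dumps({
    "type": "service_account",
    "project_id": "my-gcp-project",  # placeholder
    "private_key": "-----BEGIN PRIVATE KEY-----...",
    "client_email": "svc@my-gcp-project.iam.gserviceaccount.com",
})
project = check_service_account_key(sample)
```

The project_id inside the key must match the name you enter in the Project field.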
AI Enterprise provides an integration with Amazon Bedrock, a fully managed service that provides access to a variety of high-performing foundation models. In the current version, AI Enterprise supports the Claude 3.5 Sonnet V2 and Claude 3.5 Haiku LLMs for use in AI Assistant.
tip
In this implementation, AI Enterprise doesn't support cross-region inference models, as they require a region prefix in their names. Currently, model names can't be edited.
Before configuring Amazon Bedrock as an AI provider in IDE Services, you need to set up your access rights and get an access key.
Follow the Getting Started instructions to:
Create an AWS account (if you don't already have one).
Create an AWS Identity and Access Management role with the necessary permissions for Amazon Bedrock.
Request access to the foundation models (FM) that you want to use.
Access AWS IAM Identity Center, find your user, and review the Permissions policies section.
In addition to the default permission policy AmazonBedrockReadOnly, add a new inline policy for the Bedrock service.
Configure the new inline policy to have the Read access level for the InvokeModel and InvokeModelWithResponseStream actions.
Generate an access key for your user.
When creating an access key, specify Third-party service as a use case.
The access key ID and secret are necessary for configuring Amazon Bedrock in IDE Services. Make sure to save these values.
Request access to the Claude 3.5 Sonnet V2 and Claude 3.5 Haiku models.
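The inline policy described above can be written as a standard IAM policy document. A minimal sketch, rendered from Python for illustration; the open Resource is for brevity only:

```python
import json

# Sketch: inline IAM policy granting the two read-level Bedrock actions
# mentioned above. Resource "*" is for brevity -- where possible, restrict
# it to the ARNs of the models you requested access to.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "bedrock:InvokeModel",
            "bedrock:InvokeModelWithResponseStream",
        ],
        "Resource": "*",
    }],
}
policy_json = json.dumps(policy, indent=2)
```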
In the AI Enterprise section, click Enable.
In the Enable AI Enterprise dialog, specify the following details:
Select the Amazon Bedrock provider.
In the Region field, specify the AWS region that supports Amazon Bedrock.
In the Access key field, specify the access key ID.
In the Secret key field, specify the access key secret.
note
This step is only required if you use the Pay as you go billing model.
Set the usage limit for AI Enterprise.
Unlimited usage: enable the Unlimited option.
Limited number of users: disable the Unlimited option and specify the limit on the number of AI Enterprise users.
Click Apply.
When enabling AI Enterprise for your organization, you can choose only one AI provider. To enable an additional provider:
Navigate to Configuration | License & Activation.
Scroll down to the AI Enterprise section and click Settings.
Click Add provider and choose one from the menu.
If you're adding a Google Vertex AI, OpenAI, or AWS Bedrock provider, refer to the specific configuration instructions for further steps.
If you have more than one AI provider enabled for your organization, the default provider is preselected when you enable AI Enterprise in profiles. This also lets you centrally switch providers for all profiles that currently have the Default provider option selected. To choose the default provider:
Navigate to Configuration | License & Activation.
Scroll down to the AI Enterprise section and click Settings.
Select one of the listed AI providers as Default, then confirm and save your selection.
Navigate to Configuration | License & Activation.
Scroll down to the AI Enterprise section and click Settings.
On the AI Enterprise Settings page, configure the usage limit for AI Enterprise:
Enable the Unlimited number of users option to let all users with AI Enterprise enabled on the profile level gain access to the AI features.
Disable the Unlimited number of users option and specify the limit on the number of AI Enterprise users. Users above this limit will have restricted access to the product features.
Click Save.
Navigate to Configuration | License & Activation.
Scroll down to the AI Enterprise section and click Settings.
On the AI Enterprise Settings page, use the Allow detailed data collection option to enable or disable detailed data collection in your organization. Detailed data collection includes full data about the interactions with large language models.
tip
You can configure this option granularly for each profile.
Click Save.
In the Web UI, open the Configuration page and navigate to the License & Activation tab.
In the AI Enterprise section, click Disable.
In the Disable AI Enterprise? dialog, click Disable.