Prowler Studio is an AI assistant that helps you create checks for Prowler. It can be used as a CLI tool or as a web application.
Caution
The code generated by the AI system may not be perfect and should be reviewed by a human before being used.
Prowler Studio is model agnostic, so you can use any LLM model you want. Two kinds of models are used in Prowler Studio:
The main model used in Prowler Studio is the reasoner/writing model. This model is responsible for generating the checks based on the input question. The officially supported and tested models are:
For the Gemini provider, the supported models are:
models/gemini-1.5-flash
To use the Gemini model, you will need an API key. You can get one from the Gemini API Key page and set it as an environment variable:
export GOOGLE_API_KEY="XXXXXXXX"
For the OpenAI provider, the supported models are:
gpt-4o
gpt-4o-mini
To use the OpenAI models, you will need an API key. You can get one from the OpenAI Platform page and set it as an environment variable:
export OPENAI_API_KEY="XXXXXXXX"
The embedding model is used to calculate the similarity between the generated checks and the existing checks in the Prowler repository. Currently, the only supported embedding model is the Google text-embedding-004 model. To use it you will need an API key.
You can get one from the Google Cloud Console page and set it as an environment variable. It is the same as the GOOGLE_API_KEY environment variable used for the Gemini model, so you can set it as:
export GOOGLE_API_KEY="XXXXXXXX"
export EMBEDDING_MODEL_API_KEY="XXXXXXXX"
Google has a free tier for the text-embedding-004 model, so you can use it without any cost.
The CLI is a command-line tool that allows you to ask questions to the AI model and get the answers in a programmatic way.
- Create new checks
- RAG dataset based on your Prowler local installation
- Multiple LLM providers supported
- Save checks in your Prowler local installation
- Update compliance requirements
# AWS checks
prowler-studio create-check "Create a new AWS check to ensure EC2 security groups with inbound rules allowing unrestricted ICMP access are not present."
prowler-studio create-check "Create a new AWS check to ensure ACM certificates for specific domain names are used over wildcard certificates to adhere to best security practices, providing unique private keys for each domain/subdomain."
prowler-studio create-check "Create a new AWS check to ensure that each Amazon SQS queue is configured to use a Dead-Letter Queue (DLQ) in order to help maintain the queue flow and avoid losing data by detecting and mitigating failures and service disruptions on time."
prowler-studio create-check "Create a new AWS check to detect EC2 instances with TCP port 9000 open to the Internet."
# Azure checks
prowler-studio create-check "Create a new Azure check to ensure that all my clusters from AKS (Azure Kubernetes Service) has the latest Kubernetes API Version."
prowler-studio create-check "Create a new Azure check to ensure that all my Azure Web Apps has a backup retention policy configured."
prowler-studio create-check "Please, could create an Azure check for storage service to ensure that lifecycle management is enabled for blob storage accounts?"
prowler-studio create-check "Create a new Azure check to ensure that all my Azure VNets have DDoS protection enabled."
# GCP checks
prowler-studio create-check "Create a check fot GCP to ensure that my Dataproc cluster instances are not accessible from the Internet."
prowler-studio create-check "Ensure for all backups for Google Kubernetes Engine (GKE) clusters have a backup configured."
prowler-studio create-check "To improve reliability, ensure that Google Cloud Compute Engine service restarts automatically your virtual machine instances when they are terminated due to non-user initiated reasons such as maintenance events, hardware, and software failures."
# K8s checks
prowler-studio create-check "Create a new Kubernetes check to ensure that default service accounts are not actively used."
prowler-studio create-check "Create a new Kubernetes check to ensure that all my pods are running with a non-root user."
💡 Did you know? You can use Prowler Studio to easily update your compliance requirements with the latest checks available in Prowler.
prowler-studio update-compliance --max-check-number-per-requirement 5 --confidence-threshold 0.6 compliance_test.json
Requirements:
git
docker
git clone [email protected]:prowler-cloud/prowler-studio.git
cd prowler-studio
docker build -f ./cli/Dockerfile -t prowler-studio-cli:latest .
To use it just run the Docker container:
docker run --rm -it --env-file .env prowler-studio-cli
If you want to save the generated checks on your local machine, you can mount a Docker volume:
docker run --rm -it --env-file .env -v ./generated_checks:/home/prowler_studio/prowler_studio/cli/generated_checks prowler-studio-cli
Warning
If you run into permission problems with the generated checks folder, grant write permission on the folder to other users.
You can do it with the following command: chmod o+w ./generated_checks
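If the folder does not exist yet, you can create it with the right permissions before starting the container:
# create the folder and let the container user write to it
mkdir -p ./generated_checks
chmod o+w ./generated_checks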
Requirements:
git
uv (Installation tutorial can be found here)
- At least Python 3.12
git clone [email protected]:prowler-cloud/prowler-studio.git
cd prowler-studio
uv sync
uv tool install -e ./cli/
cp .env.template .env
Then fill the .env file with the needed values. The minimum required values are:
- OPENAI_API_KEY or GOOGLE_API_KEY: the API key for the LLM provider.
- EMBEDDING_MODEL_API_KEY: the API key for the embedding model provider. It must be the same as the GOOGLE_API_KEY for now.
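For example, a minimal .env for the Gemini provider could look like this (the values are placeholders, not real keys):
GOOGLE_API_KEY="XXXXXXXX"
EMBEDDING_MODEL_API_KEY="XXXXXXXX"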
Type the following command to set the environment variables:
set -a
source .env
set +a
Important
Some environment variables are needed for Prowler Studio to work. Use the .env.template file as a template to create a .env file with the needed variables.
For now only the Google embedding model is supported, so GOOGLE_API_KEY must always be set.
To get one, go to Gemini's documentation and follow the instructions.
The CLI can be configured using the cli/prowler_studio/_cli/config.yml file. The file is already created in the repository and you can change the values to fit your needs.
The supported values for the configuration are:
- llm_provider: The LLM provider to use. The supported values are: gemini, openai
- llm_reference: How the model is named in the chosen provider. The supported values depend on the provider:
  - For the gemini provider: models/gemini-1.5-flash
  - For the openai provider: gpt-4o, gpt-4o-mini
- embedding_model_provider: The embedding model provider to use; it only affects the build-check-rag command. The supported value is: gemini
- embedding_model_reference: How the model is named in the chosen provider; it only affects the build-check-rag command. The supported values depend on the provider:
  - For the gemini provider: text-embedding-004
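As an illustration, a config.yml that selects the Gemini models could look like the sketch below; the flat key layout is an assumption based on the list above, so check the file shipped in the repository for the exact structure:
llm_provider: gemini
llm_reference: models/gemini-1.5-flash
embedding_model_provider: gemini
embedding_model_reference: text-embedding-004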
The CLI comes with a help command to show the available commands and their usage:
prowler-studio --help
If you installed it as a uv tool and the command is not working, you can try running uv tool update-shell to update the $PATH correctly.
Remember that the API keys have to be passed to the CLI through environment variables (recommended) or arguments. The recommended way is to set the environment variables in the .env file, but if you are going to use the Prowler Studio CLI outside the repository it may be a good idea to set them in your shell configuration (~/.bashrc, ~/.zshrc, etc.).
- create-check: Create a new check based on the input prompt.
- build-check-rag: Update the knowledge base with new checks (you need to have the Prowler repository cloned on your machine).
- update-compliance: Update a compliance file at the given path, using the Prowler Compliance Framework format.
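Each of these subcommands should also accept the --help flag shown above if you want to list its options, for example:
prowler-studio build-check-rag --help
prowler-studio update-compliance --help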
The Prowler Studio Chatbot is a web application that allows you to generate checks for Prowler in a more user-friendly way.
- Get the answer in a more user-friendly way
- API powered by FastAPI
Requirements:
git
docker
The first step is to download the repository:
git clone [email protected]:prowler-cloud/prowler-studio.git
Then you can build the Docker image:
docker compose build
Now you can run the Docker containers using docker compose from the root of the repository:
Important
Some environment variables are needed for Prowler Studio to work. Use the .env.template file as a template to create a .env file with the needed variables.
For now only the Google embedding model is supported, so GOOGLE_API_KEY must always be set.
To get one, go to Gemini's documentation and follow the instructions.
docker compose up -d
Once the containers are running, you can access the UI from your browser at http://localhost:80.
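If the UI does not come up, you can inspect the containers with the standard Docker Compose commands (nothing Prowler-specific here):
docker compose ps
docker compose logs -f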
Requirements:
git
uv
- At least Python 3.12
git clone [email protected]:prowler-cloud/prowler-studio.git
cd prowler-studio
uv sync --no-dev --extra api
To start the API server you have multiple options:
- Run the main module directly:
uv run python -m uvicorn api.prowler_studio._api.main:app --host 0.0.0.0 --port 8000
- Use the uv runner:
uv run --no-dev prowler-studio-api
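Once the server is running you can do a quick sanity check from another terminal; /docs is FastAPI's default interactive documentation route, assuming it has not been disabled in this project:
curl http://localhost:8000/docs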
Requirements:
npm
cd ui
npm install
To start the UI server run:
npm run start
Now you can access the UI from your browser at http://localhost:3000.
Just type your check creation request in the input field and press "Enter"!
The Prowler Studio MCP Server is a server implementation that allows you to integrate Prowler Studio's capabilities directly into your development environment through the Model Context Protocol (MCP).
- Direct integration with development environments.
- Help your favorite IDE to generate checks in the correct way.
- Seamless workflow integration.
Requirements:
git
docker
First, clone the repository and build the Docker image:
git clone [email protected]:prowler-cloud/prowler-studio.git
cd prowler-studio
docker build -f ./mcp_server/Dockerfile -t prowler-studio-mcp-server:latest .
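You can optionally verify that the image starts; the server communicates over stdio, so it will simply wait for MCP messages on standard input until you stop it (the API key values are placeholders):
docker run --rm -i -e OPENAI_API_KEY=your_openai_api_key -e GOOGLE_API_KEY=your_google_api_key prowler-studio-mcp-server:latest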
Requirements:
git
uv
- At least Python 3.12
git clone [email protected]:prowler-cloud/prowler-studio.git
cd prowler-studio
uv sync --no-dev --extra mcp_server
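To check that the server starts outside an editor, you can launch it the same way the configurations below do, pointing uvx at the mcp_server directory of your clone (the path and key values are examples):
OPENAI_API_KEY=your_openai_api_key GOOGLE_API_KEY=your_google_api_key uvx ./mcp_server/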
To use the MCP Server, you need to configure your MCP-compatible development environment. Add the following configuration to your MCP settings:
Important
The MCP Server OPENAI_API_KEY is optional; if you don't want to use OpenAI models you can leave it empty.
The MCP Server GOOGLE_API_KEY is required, as it is used for the embedding model.
In Cursor, you can use the MCP Server by adding the following configuration to your MCP settings:
{
"mcpServers": {
"prowler-studio": {
"command": "docker",
"args": ["run", "--rm", "-e", "OPENAI_API_KEY=your_openai_api_key", "-e", "GOOGLE_API_KEY=your_google_api_key", "-i", "prowler-studio-mcp-server:latest"]
}
}
}
{
"mcpServers": {
"prowler-studio": {
"command": "uvx",
"args": ["/path/to/prowler_studio/mcp_server/"],
"env": {
"OPENAI_API_KEY": "your_openai_api_key",
"GOOGLE_API_KEY": "your_google_api_key"
}
}
}
}
For automatic installation in VS Code, you can use one of the following installation buttons:
Note
Remember to set your OPENAI_API_KEY and GOOGLE_API_KEY environment variables after installation.
For manual installation, add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open Settings (JSON).
Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others.
Note that the mcp key is not needed in the .vscode/mcp.json file.
{
"mcp": {
"servers": {
"prowler-studio": {
"type": "stdio",
"command": "docker",
"args": ["run", "--rm", "-e", "OPENAI_API_KEY=your_openai_api_key", "-e", "GOOGLE_API_KEY=your_google_api_key", "-i", "prowler-studio-mcp-server:latest"],
}
}
}
}
{
"mcp": {
"servers": {
"prowler-studio": {
"type": "stdio",
"command": "uvx",
"args": ["/path/to/prowler_studio/mcp_server/"],
"env": {
"OPENAI_API_KEY": "your_openai_api_key",
"GOOGLE_API_KEY": "your_google_api_key"
}
}
}
}
}