This guide shows how to enable the AI assistant in Baserow, configure the required environment variables, and (optionally) turn on knowledge-base lookups via an embeddings server.
Set BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL with the provider and model of your choosing. The examples in this guide use the gpt-5 and gpt-oss-120b families, but other models can work as well. Set the model you want, restart Baserow, and let the migrations run.
# Required
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-5-mini
OPENAI_API_KEY=your_api_key
# Optional - adjust LLM temperature (default: 0)
BASEROW_ENTERPRISE_ASSISTANT_LLM_TEMPERATURE=0
About temperature: it controls how deterministic the model's responses are. The default of 0 keeps answers as reproducible as possible; raise it only if you want more varied output.
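For example, with the all-in-one Docker image the same variables can be passed when (re)creating the container, and migrations run on startup. The following is only a sketch: the container name, ports, volume, image tag, and BASEROW_PUBLIC_URL are placeholders to adapt to your own deployment (for multi-service compose or Kubernetes setups, the same variables would typically go on the backend containers' environment instead).

# Example with the all-in-one image (name, ports, volume, and tag are placeholders)
docker run -d --name baserow \
  -e BASEROW_PUBLIC_URL=https://baserow.example.com \
  -e BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-5-mini \
  -e OPENAI_API_KEY=your_api_key \
  -e BASEROW_ENTERPRISE_ASSISTANT_LLM_TEMPERATURE=0 \
  -v baserow_data:/baserow/data \
  -p 80:80 -p 443:443 \
  baserow/baserow:latest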
Choose one provider block and set its variables.
# OpenAI
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-5-mini
OPENAI_API_KEY=your_api_key
# Optional alternative endpoints (OpenAI EU or Azure OpenAI, etc.)
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=https://eu.api.openai.com/v1
# or
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=https://<your-resource-name>.openai.azure.com
# or any other OpenAI-compatible endpoint
# AWS Bedrock
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=bedrock/openai.gpt-oss-120b-1:0
AWS_BEARER_TOKEN_BEDROCK=your_bedrock_token
AWS_REGION_NAME=eu-central-1
# Groq
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=groq/openai/gpt-oss-120b
GROQ_API_KEY=your_api_key
# Ollama
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=ollama/gpt-oss:120b
OLLAMA_API_KEY=your_api_key
# Optionally, an alternative endpoint
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=http://localhost:11434/v1
Under the hood, UDSPy auto-detects the provider from the model prefix and builds an OpenAI-compatible client accordingly.
Knowledge-base lookups require an embeddings server. If your deployment method doesn't auto-provision one, run the Baserow embeddings service and point Baserow at it.
docker run -d --name baserow-embeddings -p 80:80 baserow/embeddings:latest
BASEROW_EMBEDDINGS_API_URL=http://your-embedder-service
# e.g., http://localhost if you mapped -p 80:80 locally
# Then restart Baserow and allow migrations to run.
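If Baserow itself runs in Docker, a convenient way to wire the two containers together is a shared Docker network, so Baserow can reach the embedder by container name instead of a published host port. The network and container names below are assumptions, not required values.

# Sketch: put both containers on one user-defined network (names are placeholders)
docker network create baserow-net
docker run -d --name baserow-embeddings --network baserow-net baserow/embeddings:latest
# Recreate/start the Baserow container with:
#   --network baserow-net
#   -e BASEROW_EMBEDDINGS_API_URL=http://baserow-embeddings
# then restart it so the migrations run.

To sanity-check that the embedder is reachable before restarting Baserow, a throwaway curl container on the same network is enough; this only verifies connectivity, not the embeddings API itself.

# Prints the HTTP status code returned by the embedder's root path
docker run --rm --network baserow-net curlimages/curl -sS -o /dev/null -w '%{http_code}\n' http://baserow-embeddings/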
After the restart and migrations complete, knowledge-base lookups will be available in the assistant.