This guide shows how to enable the AI assistant in Baserow, configure the required environment variables, and (optionally) turn on knowledge-base lookups via an embeddings server.
Set BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL to the provider and model of your choosing, for example a model from the gpt-oss-120b family. Other models can work as well. Set the model you want, restart Baserow, and let migrations run.
Important: When running Baserow with Docker Compose or multiple services, BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL must be set in all services (both backend and frontend) for the assistant to work properly.
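For a Docker Compose deployment, this means the variable appears under both services. A sketch of what that could look like (the service names `backend` and `web-frontend` are assumptions; match them to your actual compose file):

```yaml
# Sketch of a docker-compose override; adapt service names to your setup.
services:
  backend:
    environment:
      BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL: openai/gpt-4o
      OPENAI_API_KEY: ${OPENAI_API_KEY}
  web-frontend:
    environment:
      # Must match the backend value exactly.
      BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL: openai/gpt-4o
```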
# Required
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-4o
OPENAI_API_KEY=your_api_key
# Optional - adjust LLM temperature (default: 0)
BASEROW_ENTERPRISE_ASSISTANT_LLM_TEMPERATURE=0
About temperature: the default of 0 keeps responses as deterministic as possible; higher values make output more varied. A low value is usually appropriate for the assistant.
Choose one provider block and set its variables.
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-4o
OPENAI_API_KEY=your_api_key
# Optional alternative endpoints (OpenAI EU or Azure OpenAI, etc.)
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=https://eu.api.openai.com/v1
# or
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=https://<your-resource-name>.openai.azure.com
# or any OpenAI compatible endpoint
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=bedrock/openai.gpt-oss-120b-1:0
AWS_BEARER_TOKEN_BEDROCK=your_bedrock_token
AWS_REGION_NAME=eu-central-1
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=groq/openai/gpt-oss-120b
GROQ_API_KEY=your_api_key
BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=ollama/gpt-oss:120b
OLLAMA_API_KEY=your_api_key
# Optional alternative endpoint
UDSPY_LM_OPENAI_COMPATIBLE_BASE_URL=http://localhost:11434/v1
Under the hood, UDSPy auto-detects provider from the model prefix and builds an OpenAI-compatible client accordingly.
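For illustration only (this is not UDSPy's actual implementation), the provider prefix is the part of the model string before the first slash, so detection can be as simple as:

```shell
# Illustrative sketch: extract the provider prefix from a model string,
# the way an auto-detecting client might. Not UDSPy's real code.
MODEL="bedrock/openai.gpt-oss-120b-1:0"
PROVIDER="${MODEL%%/*}"   # everything before the first "/"
echo "$PROVIDER"          # prints: bedrock
```

The remainder of the string (`openai.gpt-oss-120b-1:0` here) is what gets passed to the provider-specific client.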
If your deployment method doesn’t auto-provision embeddings, run the Baserow embeddings service and point Baserow at it.
For developers using Docker Compose: See embeddings-server.md for setup instructions.
docker run -d --name baserow-embeddings -p 80:80 baserow/embeddings:latest
BASEROW_EMBEDDINGS_API_URL=http://your-embedder-service
# e.g., http://localhost if you mapped -p 80:80 locally
# Then restart Baserow and allow migrations to run.
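In a Compose deployment, the same setup can be expressed as a service next to Baserow. A sketch, using the image from the `docker run` example above (the service name `embeddings` and the `backend` service name are assumptions):

```yaml
# Sketch: running the embeddings service alongside Baserow in Compose.
services:
  embeddings:
    image: baserow/embeddings:latest
  backend:
    environment:
      # Compose service names resolve on the internal network.
      BASEROW_EMBEDDINGS_API_URL: http://embeddings
```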
After restart and migrations, knowledge-base lookup will be available.
If the assistant is not visible in the sidebar or doesn’t work, verify that:
- BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL is set correctly in both the backend and frontend services.
- The API key for your provider is set (OPENAI_API_KEY, GROQ_API_KEY, etc.).

To check if the variables are set correctly in development, from the host run:
# Check backend
just dcd run --rm backend bash -c env | grep LLM_MODEL
just dcd run --rm backend bash -c env | grep API_KEY
# Check frontend
just dcd run --rm web-frontend bash -c env | grep LLM_MODEL
Both commands must return the same value for BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL. If either is missing or they differ, update your environment configuration and restart the services.
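The comparison can also be scripted. A sketch, with the `just dcd run` commands above stubbed out by `echo` so the shape is clear (replace each `echo` with the real command for your setup):

```shell
# Sketch: compare the model value reported by backend and frontend.
# The echo commands stand in for the `just dcd run ...` checks above.
backend_model=$(echo "BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-4o" | cut -d= -f2)
frontend_model=$(echo "BASEROW_ENTERPRISE_ASSISTANT_LLM_MODEL=openai/gpt-4o" | cut -d= -f2)

if [ "$backend_model" = "$frontend_model" ]; then
  echo "match: $backend_model"
else
  echo "MISMATCH: backend=$backend_model frontend=$frontend_model"
fi
```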