Configuring Baserow

Any questions, problems or suggestions with this guide? Ask a question in our community or contribute the change yourself at https://gitlab.com/baserow/baserow/-/tree/develop/docs .

The tables below show all available environment variables supported by Baserow. Some environment variables have different defaults, are not supported, or are optional depending on how you installed Baserow. See the specific installation guides for how to set these environment variables.

Environment Variables

The installation methods referred to in the variable descriptions are:

  • The all-in-one `baserow/baserow` image.
  • The standalone `baserow/backend` and `baserow/web-frontend` images.
  • The `docker-compose.yml` based installs.
  • The Heroku and Cloudron installs.

Variables marked with Internal should only be changed if you know what you are doing.

Access Configuration

Name Description Defaults
BASEROW_PUBLIC_URL The public URL or IP that will be used to access Baserow. Should always start with http:// or https://, even if accessing via an IP address. If you are accessing Baserow over a non-standard HTTP port (i.e. not 80) then make sure you append :YOUR_PORT to this variable.

Setting this will override PUBLIC_BACKEND_URL and PUBLIC_WEB_FRONTEND_URL with BASEROW_PUBLIC_URL’s value.
Set to empty to disable the default of http://localhost in the compose files if you instead want to set PUBLIC_BACKEND_URL and PUBLIC_WEB_FRONTEND_URL individually.
http://localhost
BASEROW_CADDY_ADDRESSES Not supported by standalone images. A comma separated list of supported Caddy addresses (https://caddyserver.com/docs/caddyfile/concepts#addresses). If an https:// URL is provided, the Caddy reverse proxy will attempt to automatically set up HTTPS with Let's Encrypt for you. If you wish your Baserow to still be accessible on localhost and you change this value away from the default of :80, ensure you append “,http://localhost” to it. :80
PUBLIC_BACKEND_URL Please use BASEROW_PUBLIC_URL unless you are using the standalone baserow/backend or baserow/web-frontend images. The publicly accessible URL of the backend. Should include the port if non-standard. Ensure BASEROW_PUBLIC_URL is set to an empty value to use this variable in the compose setup. $BASEROW_PUBLIC_URL, http://localhost:8000/ in the standalone images.
PUBLIC_WEB_FRONTEND_URL Please use BASEROW_PUBLIC_URL unless you are using the standalone baserow/backend or baserow/web-frontend images. The publicly accessible URL of the web-frontend. Should include the port if non-standard. Ensure BASEROW_PUBLIC_URL is set to an empty value to use this variable in the compose setup. $BASEROW_PUBLIC_URL, http://localhost:3000/ in the standalone images.
BASEROW_EMBEDDED_SHARE_URL Optional URL for public sharing and email links that can be used if Baserow is used inside an iframe on a different URL. $PUBLIC_WEB_FRONTEND_URL
WEB_FRONTEND_PORT The HTTP port used to access Baserow. Only used by the docker-compose.yml files. 80 (prior to 1.9 it defaulted to 3000)
BASEROW_EXTRA_ALLOWED_HOSTS An optional comma separated list of hostnames which will be added to Baserow’s Django backend ALLOWED_HOSTS setting. In most situations you will not need to set this as the hostnames from BASEROW_PUBLIC_URL or PUBLIC_BACKEND_URL will be added to ALLOWED_HOSTS automatically. This is only needed if you need to allow additional hosts to access your Baserow.
PRIVATE_BACKEND_URL Only change this with standalone images. This is the URL used when the web-frontend server directly queries the backend itself when doing server side rendering. As such, not only the browser but also the web-frontend server should be able to make HTTP requests to the backend. The web-frontend Nuxt server might not have access to the `PUBLIC_BACKEND_URL`, or there could be a more direct route (e.g. from container to container instead of via the internet). For example, if the web-frontend and backend were containers on the same docker network this could be set to http://backend:8000.
BASEROW_CADDY_GLOBAL_CONF Not supported by standalone images. Will be substituted into the Caddyfile's global config section. Set to “debug” to enable Caddy's debug logging.
BASEROW_MAX_IMPORT_FILE_SIZE_MB The maximum file size in MB that you can import when creating a new table. 512
BASEROW_PERSONAL_VIEW_LOWEST_ROLE_ALLOWED When RBAC is enabled via an enterprise or advanced license, this controls the lowest user role that is allowed to create personal views on a table. The options are: VIEWER (the default), COMMENTER, EDITOR, BUILDER, ADMIN. VIEWER
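
As an illustration only, the access variables above might be combined like this when running the all-in-one `baserow/baserow` image (the domain, volume name and image tag are placeholders; adapt the command to your own install):

```bash
# Minimal sketch: expose Baserow on your own domain and let the embedded
# Caddy obtain HTTPS certificates automatically. All values are examples.
docker run -d --name baserow \
  -e BASEROW_PUBLIC_URL=https://baserow.example.com \
  -e BASEROW_CADDY_ADDRESSES=https://baserow.example.com \
  -v baserow_data:/baserow/data \
  -p 80:80 -p 443:443 \
  baserow/baserow:latest
```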

Backend Configuration

Name Description Defaults
SECRET_KEY The secret key used by Django for cryptographic signing such as generating secure password reset links and managing sessions. See https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-SECRET_KEY for more details. Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.secret.
SECRET_KEY_FILE Only supported by the baserow/baserow image. If set, Baserow will attempt to read the above SECRET_KEY from this file location instead.
BASEROW_JWT_SIGNING_KEY The signing key that is used to sign the content of generated tokens. For HMAC signing, this should be a random string with at least as many bits of data as is required by the signing protocol. See https://django-rest-framework-simplejwt.readthedocs.io/en/latest/settings.html#signing-key for more details. Recommended to be set by you in the docker-compose and standalone installs (defaults to the SECRET_KEY). Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.jwt_signing_key.
BASEROW_ACCESS_TOKEN_LIFETIME_MINUTES The number of minutes for which access tokens are valid. This will be converted into a timedelta value and added to the current UTC time during token generation to obtain the token’s default “exp” claim value. 10 minutes.
BASEROW_REFRESH_TOKEN_LIFETIME_HOURS The number of hours for which refresh tokens are valid. This will be converted into a timedelta value and added to the current UTC time during token generation to obtain the token’s default “exp” claim value. 168 hours (7 days).
BASEROW_BACKEND_LOG_LEVEL The default log level used by the backend. Supports ERROR, WARNING, INFO, DEBUG, TRACE. INFO
BASEROW_BACKEND_DATABASE_LOG_LEVEL The default log level used for database related logs in the backend. Supports the same values as the normal log level. If you also enable BASEROW_BACKEND_DEBUG and set this to DEBUG you will be able to see all SQL queries in the backend logs. ERROR
BASEROW_BACKEND_DEBUG If set to “on” this will enable the non-production-safe debug mode for the Baserow Django backend. Defaults to “off”
BASEROW_AMOUNT_OF_GUNICORN_WORKERS The number of concurrent worker processes used by the Baserow backend gunicorn server to process incoming requests.
BASEROW_AIRTABLE_IMPORT_SOFT_TIME_LIMIT The maximum amount of seconds an Airtable migration import job can run. 1800 seconds - 30 minutes
INITIAL_TABLE_DATA_LIMIT The amount of rows that can be imported when creating a table. Defaults to empty which means unlimited rows.
BASEROW_ROW_PAGE_SIZE_LIMIT The maximum number of rows that can be requested at once. 200
BASEROW_FILE_UPLOAD_SIZE_LIMIT_MB The max file size in MB allowed to be uploaded by users into a Baserow File Field. 1048576 (1 TB or 1024*1024)
BASEROW_OPENAI_UPLOADED_FILE_SIZE_LIMIT_MB The max file size in MB allowed to be loaded in RAM and uploaded to OpenAI servers. See also OpenAI docs. 512
BATCH_ROWS_SIZE_LIMIT Controls how many rows can be created, deleted or updated at once using the batch endpoints. 200
BASEROW_MAX_SNAPSHOTS_PER_GROUP Controls how many application snapshots can be created per group. -1 (unlimited)
BASEROW_SNAPSHOT_EXPIRATION_TIME_DAYS Controls when snapshots expire, set in number of days. Expired snapshots will be automatically deleted. 360
BASEROW_CELERY_SEARCH_UPDATE_HARD_TIME_LIMIT How long the Postgres full-text search Celery tasks can run before being killed. 1800
BASEROW_USE_PG_FULLTEXT_SEARCH By default, Baserow will use Postgres full-text as its search backend. If the product is installed on a system with limited disk space, and less accurate results / degraded search performance is acceptable, then switch this setting off by setting it to false. true
BASEROW_AUTO_VACUUM Whether Baserow should perform a VACUUM on a table in a background task after one or more fields changed in the table when full text search is enabled. true
BASEROW_ASGI_HTTP_MAX_CONCURRENCY Specifies a limit for concurrent requests handled by a single gunicorn worker. The default is: no limit.
BASEROW_IMPORT_EXPORT_RESOURCE_REMOVAL_AFTER_DAYS Specifies the number of days after which an import/export resource will be automatically deleted. 5
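
A minimal sketch of how the key backend variables above could be set in a `.env` file for the docker-compose install; the values are placeholders and should be replaced with your own randomly generated secrets:

```bash
# Generate each secret separately, e.g. with: openssl rand -base64 64
SECRET_KEY=replace-with-a-long-random-string
BASEROW_JWT_SIGNING_KEY=replace-with-a-different-long-random-string
BASEROW_ACCESS_TOKEN_LIFETIME_MINUTES=10
BASEROW_REFRESH_TOKEN_LIFETIME_HOURS=168
BASEROW_BACKEND_LOG_LEVEL=INFO
```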

Backend Database Configuration

Name Description Defaults
DATABASE_HOST The hostname of the postgres database Baserow will use to store its data in. Defaults to db in the standalone and compose installs. If not provided in the `baserow/baserow` install then the embedded Postgres will be set up and used.
DATABASE_USER The username of the database user Baserow will use to connect to the database at DATABASE_HOST baserow
DATABASE_PORT The port Baserow will use when trying to connect to the postgres database at DATABASE_HOST 5432
DATABASE_NAME The database name Baserow will use to store data in. baserow
DATABASE_PASSWORD The password of DATABASE_USER on the postgres server at DATABASE_HOST Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.pgpass.
DATABASE_PASSWORD_FILE Only supported by the baserow/baserow image. If set, Baserow will attempt to read the above DATABASE_PASSWORD from this file location instead.
DATABASE_OPTIONS Optional extra options as a JSON formatted string to use when connecting to the database, see Django's documentation on the DATABASES OPTIONS setting for more details.
DATABASE_URL Instead of setting the individual DATABASE_ parameters above, you can provide one standard postgres connection string in the format: postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&…]. Please note this will completely override all other DATABASE_* settings and ignore them.
MIGRATE_ON_STARTUP If set to “true” when the Baserow backend service starts up it will automatically apply database migrations. Set to any other value to disable. If you disable this then you must remember to manually apply the database migrations when upgrading Baserow to a new version. true
BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION If set to “true” then after a migration Baserow will automatically sync all built-in Baserow templates in the background. If you are using a postgres database which is constrained to fewer than 10000 rows then we recommend you disable this, as the Baserow templates will go over that row limit. To disable this, set it to any value other than “true”. true
BASEROW_SYNC_TEMPLATES_TIME_LIMIT The number of seconds before the background sync templates job will timeout if not yet completed. 1800
SYNC_TEMPLATES_ON_STARTUP Deprecated, please use BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION. If provided this has the same effect as BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION for backwards compatibility reasons. If BASEROW_TRIGGER_SYNC_TEMPLATES_AFTER_MIGRATION is set it will override this value. true
DONT_UPDATE_FORMULAS_AFTER_MIGRATION Baserow’s formulas have an internal version number. When upgrading Baserow, if the formula language has also changed then after the database migration has run Baserow will also automatically recalculate all formulas if they have a different version. Set this to any non empty value to disable this automatic update if you would prefer to run the update_formulas management command manually yourself. Formulas might break after an upgrade of Baserow if you forget to do so, so it is recommended to leave this empty.
POSTGRES_STARTUP_CHECK_ATTEMPTS When Baserow’s Backend service starts up it first checks to see if the postgres database is available. It checks 5 times by default, after which if it still has not connected it will crash. 5
BASEROW_PREVENT_POSTGRESQL_DATA_SYNC_CONNECTION_TO_DATABASE If true, then it’s impossible to connect to the Baserow PostgreSQL database using the PostgreSQL data sync. true
BASEROW_POSTGRESQL_DATA_SYNC_BLACKLIST Optionally provide a comma separated list of hostnames that the Baserow PostgreSQL data sync can’t connect to. (e.g. “localhost,baserow.io”)
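
For example, assuming an external PostgreSQL server (the hostname and credentials below are placeholders), you can either set the individual variables or a single connection string:

```bash
# Option 1: individual DATABASE_* variables.
DATABASE_HOST=db.internal.example.com
DATABASE_PORT=5432
DATABASE_NAME=baserow
DATABASE_USER=baserow
DATABASE_PASSWORD=replace-me

# Option 2: a single connection string, which overrides all DATABASE_* settings above.
DATABASE_URL=postgresql://baserow:replace-me@db.internal.example.com:5432/baserow
```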

Redis Configuration

Name Description Defaults
REDIS_HOST The hostname of the redis database Baserow will use for caching and real time collaboration. Defaults to redis in the standalone and compose installs. If not provided in the `baserow/baserow` install then the embedded Redis will be set up and used.
REDIS_PORT The port Baserow will use when trying to connect to the redis database at REDIS_HOST 6379
REDIS_USER The username of the redis user Baserow will use to connect to the redis at REDIS_HOST
REDIS_PASSWORD The password of REDIS_USER on the redis server at REDIS_HOST Required to be set by you in the docker-compose and standalone installs. Automatically generated by the baserow/baserow image if not provided and stored in /baserow/data/.redispass.
REDIS_PASSWORD_FILE Only supported by the baserow/baserow image. If set, Baserow will attempt to read the above REDIS_PASSWORD from this file location instead.
REDIS_PROTOCOL The redis protocol used when connecting to the redis server at REDIS_HOST. Can either be ‘redis’ or ‘rediss’. redis
REDIS_URL Instead of setting the individual REDIS_ parameters above, you can provide one standard redis connection string in the format: redis://:[password]@[redishost]:[redisport]
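
As a sketch, connecting to an external Redis server over TLS might look like this (the hostname and password are placeholders):

```bash
# Individual variables...
REDIS_HOST=redis.internal.example.com
REDIS_PORT=6379
REDIS_PASSWORD=replace-me
REDIS_PROTOCOL=rediss

# ...or a single connection string instead.
REDIS_URL=rediss://:replace-me@redis.internal.example.com:6379
```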

Celery Configuration

Name Description Defaults
BASEROW_CELERY_BEAT_STARTUP_DELAY The number of seconds the celery beat worker sleeps before starting up. 15
BASEROW_CELERY_BEAT_DEBUG_LEVEL The logging level for the celery beat service. INFO
BASEROW_AMOUNT_OF_WORKERS The number of concurrent celery worker processes used to process asynchronous tasks. If not set will default to the number of available cores. Each celery process uses memory, to reduce Baserow’s memory footprint consider setting and reducing this variable. 1 for the All-in-one, Heroku and Cloudron images. Defaults to empty and hence the number of available cores in the standalone images.
BASEROW_RUN_MINIMAL When BASEROW_AMOUNT_OF_WORKERS is 1 and this is set to a non empty value, Baserow will not run the export-worker but instead run both the celery export and normal tasks on the normal celery worker. Set this to lower the memory usage of Baserow at the expense of performance.
BASEROW_MAX_HEALTHY_CELERY_QUEUE_SIZE Used as maximum values in the /api/_health/celery-queue/?queue=celery&queue=export endpoint. If the number of tasks in the queue exceeds this value, then the health check fails. 10
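
For example, a low-memory configuration that trades throughput for a smaller footprint could look like the following sketch:

```bash
# Run a single celery worker and fold the export queue into it.
BASEROW_AMOUNT_OF_WORKERS=1
BASEROW_RUN_MINIMAL=yes
```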

Webhook Configuration

Name Description Defaults
BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS If set to any non empty value allows webhooks to access all addresses. Enabling this flag is a security risk as it will allow users to send webhook requests to internal addresses on your network. Instead consider using the three variables below first to allow access to only some internal network hostnames or IPs.
BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated regexes used to validate user configured webhook URLs; if any regex matches their webhook URL the user will be shown an error and the webhook will be prevented from running. Applied before, and so supersedes, BASEROW_WEBHOOKS_IP_WHITELIST and BASEROW_WEBHOOKS_IP_BLACKLIST. Do not include any scheme like http:// or https:// as regexes will only be run against the hostname/IP of the user configured URL. For example set this to ^(?!(www\.)?allowedhost\.com).* to block all hostnames and IPs other than allowedhost.com or www.allowedhost.com.
BASEROW_WEBHOOKS_IP_WHITELIST Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated IP addresses or ranges that webhooks will be allowed to use after the webhook URL has been resolved to an IP using DNS. Only checked if the URL passes the BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST. Takes precedence over BASEROW_WEBHOOKS_IP_BLACKLIST, meaning that a whitelisted IP will always be let through regardless of the ranges in BASEROW_WEBHOOKS_IP_BLACKLIST. So use BASEROW_WEBHOOKS_IP_WHITELIST to punch holes in the ranges specified by BASEROW_WEBHOOKS_IP_BLACKLIST, and not the other way around. Accepts a string in the format: “127.0.0.1/32,192.168.1.1/32”
BASEROW_WEBHOOKS_IP_BLACKLIST Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. List of comma separated IP addresses or ranges that webhooks will be denied from using after the URL has been resolved to an IP using DNS. Only checked if the URL passes the BASEROW_WEBHOOKS_URL_REGEX_BLACKLIST. BASEROW_WEBHOOKS_IP_WHITELIST supersedes any ranges specified in this variable. Accepts a string in the format: “127.0.0.1/32,192.168.1.1/32”
BASEROW_WEBHOOKS_URL_CHECK_TIMEOUT_SECS Disabled if BASEROW_WEBHOOKS_ALLOW_PRIVATE_ADDRESS is set. How long to wait before timing out and returning an error when checking if a URL can be accessed for a webhook. 10 seconds
BASEROW_WEBHOOKS_MAX_CONSECUTIVE_TRIGGER_FAILURES The number of consecutive trigger failures that can occur before a webhook is disabled. 8
BASEROW_WEBHOOKS_MAX_RETRIES_PER_CALL The max number of retries per webhook call. 8
BASEROW_WEBHOOKS_MAX_PER_TABLE The max number of webhooks per Baserow table. 20
BASEROW_WEBHOOKS_MAX_CALL_LOG_ENTRIES The maximum number of call log entries stored per webhook. 10
BASEROW_WEBHOOKS_REQUEST_TIMEOUT_SECONDS How long to wait on making the webhook request before timing out. 5
BASEROW_MAX_WEBHOOK_CALLS_IN_QUEUE_PER_WEBHOOK Maximum number of calls that can be in the webhook’s queue. Can be useful to limit when massive numbers of webhooks are triggered due to an automation loop. If not set or set to 0, then there is no limit. 0
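
As an illustration of how the IP list variables above interact (all addresses are examples), the following blocks webhooks from reaching an entire private range while still allowing a single internal host:

```bash
# Deny the whole 10.0.0.0/8 range...
BASEROW_WEBHOOKS_IP_BLACKLIST=10.0.0.0/8
# ...but punch a hole for one internal service, since the whitelist
# takes precedence over the blacklist.
BASEROW_WEBHOOKS_IP_WHITELIST=10.0.0.5/32
```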

Generative AI configuration

Name Description Defaults
BASEROW_OPENAI_API_KEY Provide an OpenAI API key to allow using OpenAI for the generative AI features like the AI field. (https://platform.openai.com/api-keys)
BASEROW_OPENAI_ORGANIZATION Optionally provide an OpenAI organization name that will be used when making an API connection.
BASEROW_OPENAI_MODELS Provide a comma separated list of OpenAI models (https://platform.openai.com/docs/models/overview) that you would like to enable in the instance (e.g. gpt-3.5-turbo,gpt-4-turbo-preview). Note that this only works if an OpenAI API key is set. If this variable is not provided, the user won’t be able to choose a model.
BASEROW_OPENROUTER_API_KEY Provide an Open Router API key to allow using Open Router for the generative AI features like the AI field. (https://openrouter.ai/settings/keys)
BASEROW_OPENROUTER_ORGANIZATION Optionally provide an Open Router organization name that will be used when making an API connection.
BASEROW_OPENROUTER_MODELS Provide a comma separated list of Open Router models (https://openrouter.ai/models) that you would like to enable in the instance (e.g. openai/gpt-4o,anthropic/claude-3-haiku). Note that this only works if an Open Router API key is set. If this variable is not provided, the user won’t be able to choose a model.
BASEROW_ANTHROPIC_API_KEY Provide an Anthropic API key to allow using Anthropic for the generative AI features like the AI field. (https://docs.anthropic.com/en/api/getting-started)
BASEROW_ANTHROPIC_MODELS Provide a comma separated list of Anthropic models (https://docs.anthropic.com/en/docs/about-claude/models) that you would like to enable in the instance (e.g. claude-3-5-sonnet-20241022,claude-3-opus-20240229). Note that this only works if an Anthropic API key is set. If this variable is not provided, the user won’t be able to choose a model.
BASEROW_MISTRAL_API_KEY Provide a Mistral API key to allow using Mistral for the generative AI features like the AI field. (https://docs.mistral.ai/getting-started/quickstart/)
BASEROW_MISTRAL_MODELS Provide a comma separated list of Mistral models (https://docs.mistral.ai/getting-started/models/models_overview/) that you would like to enable in the instance (e.g. mistral-large-latest,mistral-small-latest). Note that this only works if a Mistral API key is set. If this variable is not provided, the user won’t be able to choose a model.
BASEROW_OLLAMA_HOST Provide an Ollama host to allow using Ollama for generative AI features like the AI field.
BASEROW_OLLAMA_MODELS Provide a comma separated list of Ollama models (https://ollama.com/library) that you would like to enable in the instance (e.g. llama2). Note that this only works if an Ollama host is set. If this variable is not provided, the user won’t be able to choose a model.
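
A sketch of enabling two of the providers above; the API key is a placeholder, the model names are examples and the Ollama host assumes Ollama's default port:

```bash
BASEROW_OPENAI_API_KEY=sk-replace-me
BASEROW_OPENAI_MODELS=gpt-3.5-turbo,gpt-4-turbo-preview
BASEROW_OLLAMA_HOST=http://ollama:11434
BASEROW_OLLAMA_MODELS=llama2
```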

Backend Misc Configuration

Name Description Defaults
BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER Set to any non-empty value to ensure Baserow generates https:// next links provided by paginated API endpoints. Baserow will still work correctly if not enabled; this is purely for giving the correct https URL to clients of the API. If you have set up Baserow to use Caddy’s auto HTTPS, or you have put Baserow behind
a reverse proxy which:
* Handles HTTPS
* Strips the X-Forwarded-Proto header from all incoming requests.
* Sets the X-Forwarded-Proto header and sends it to Baserow.
then you can safely set BASEROW_ENABLE_SECURE_PROXY_SSL_HEADER=yes to ensure Baserow
generates https links for pagination correctly.
ADDITIONAL_APPS A comma separated list of additional django applications to add to the INSTALLED_APPS django setting
HOURS_UNTIL_TRASH_PERMANENTLY_DELETED Items from the trash will be permanently deleted after this number of hours.
DISABLE_ANONYMOUS_PUBLIC_VIEW_WS_CONNECTIONS When sharing views publicly a websocket connection is opened to provide realtime updates to viewers of the public link. To disable this set any non empty value. When disabled publicly shared links will need to be refreshed to see any updates to the view.
BASEROW_WAIT_INSTEAD_OF_409_CONFLICT_ERROR When updating or creating various resources in Baserow, if another concurrent operation is ongoing (like a snapshot, duplication, import etc) which would be affected by your modification, a 409 HTTP error will be returned. If you would instead prefer Baserow to not return a 409 but to block waiting until the operation finishes, and then perform the requested operation, set this flag to any non-empty value.
BASEROW_JOB_CLEANUP_INTERVAL_MINUTES How often the job cleanup task will run. 5
BASEROW_JOB_EXPIRATION_TIME_LIMIT How long a Baserow job will be kept before being cleaned up. 30 * 24 * 60 (30 days)
BASEROW_JOB_SOFT_TIME_LIMIT The number of seconds a Baserow job can run before being terminated. 1800
BASEROW_MAX_FILE_IMPORT_ERROR_COUNT The max number of per-row errors that can occur in a file import before an overall failure is declared. 30
MINUTES_UNTIL_ACTION_CLEANED_UP How long before actions are cleaned up. Actions are used to let you undo/redo, so this is effectively the maximum length of time for which you can undo/redo an action. 120
BASEROW_DISABLE_MODEL_CACHE When set to any non empty value the model cache used to speed up Baserow will be disabled. Useful to enable when debugging Baserow errors if they are possibly caused by the model cache itself.
BASEROW_STORAGE_USAGE_JOB_CRONTAB The crontab controlling when the file usage job runs when enabled in the settings page 0 0 * * *
BASEROW_ROW_COUNT_JOB_CRONTAB The crontab controlling when the row counting job runs when enabled in the settings page 0 3 * * *
DJANGO_SETTINGS_MODULE INTERNAL The settings python module to load when starting up the Backend django server. You shouldn’t need to set this yourself unless you are customizing the settings manually.
BASEROW_BACKEND_BIND_ADDRESS INTERNAL The address that Baserow’s backend service will bind to.
BASEROW_BACKEND_PORT INTERNAL Controls which port the Baserow backend service binds to.
BASEROW_WEBFRONTEND_BIND_ADDRESS INTERNAL The address that Baserow’s web-frontend service will bind to.
BASEROW_INITIAL_CREATE_SYNC_TABLE_DATA_LIMIT The maximum number of rows you can import in a synchronous way 5000
BASEROW_MAX_ROW_REPORT_ERROR_COUNT The maximum row error count tolerated before a file import fails. Before this max error count the import will continue and the non failing rows will be imported and after it, no rows are imported at all. 30
BASEROW_ROW_HISTORY_CLEANUP_INTERVAL_MINUTES Sets the interval for periodic clean up check of the row edit history in minutes. 30
BASEROW_ROW_HISTORY_RETENTION_DAYS The number of days that the row edit history will be kept. 180
BASEROW_ICAL_VIEW_MAX_EVENTS The maximum number of events returned from the ical feed endpoint. An empty value means no limit.
BASEROW_ENTERPRISE_AUDIT_LOG_CLEANUP_INTERVAL_MINUTES Sets the interval for periodic clean up check of the enterprise audit log in minutes. 30
BASEROW_ENTERPRISE_AUDIT_LOG_RETENTION_DAYS The number of days that the enterprise audit log will be kept. 365
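
For example, the crontab and retention variables above could be tuned as in the following sketch (the schedules and retention period are arbitrary examples):

```bash
# Run the file usage job at 02:00 and the row count job at 04:00 every day.
BASEROW_STORAGE_USAGE_JOB_CRONTAB="0 2 * * *"
BASEROW_ROW_COUNT_JOB_CRONTAB="0 4 * * *"
# Keep row edit history for 90 days instead of the default 180.
BASEROW_ROW_HISTORY_RETENTION_DAYS=90
```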

Backend Application Builder Configuration

Name Description Defaults
BASEROW_BUILDER_DOMAINS A comma separated list of domain names that can be used as the domains to create sub domains in the application builder.

Misc Configuration

Name Description Defaults
SENTRY_DSN If provided, will instantiate Sentry SDK for error monitoring for both Frontend and Backend. “” (empty string)
SENTRY_BACKEND_DSN If provided, will instantiate Sentry SDK for the backend with this DSN. It will override SENTRY_DSN “” (empty string)

User file upload Configuration

Baserow needs somewhere to store the following types of files:

  • Files uploaded by users into Baserow tables using the File field type
  • User triggered exports of tables and views

Filesystem Default

By default, Baserow will store these files on the file system at the location set by the MEDIA_ROOT environment variable, which is /baserow/media in our baserow/backend image or $DATA_DIR/media for the baserow/baserow all-in-one image.

Warning: If Baserow is storing user files on the file system and MEDIA_ROOT is not part of a persistent volume, then all uploaded user files will be lost if you remove the container. Our default docker commands and docker compose files ensure this location will be mounted to a persistent docker volume for you if you are using them.

Serving User Files when stored on the file system

Any uploaded files need to be served back to the user when they visit Baserow in the browser. Our baserow/baserow image and some of our docker-compose.yml templates come with a Caddy reverse proxy which is configured to serve back any user uploaded files which are stored in the data/media volume.

If you do not wish to use our default Caddy, see our guides on configuring Apache, Nginx or Traefik to serve user files for you.

File Service alternatives

You can alternatively configure Baserow to store files by uploading them to an external service. This is highly recommended if you wish to horizontally scale Baserow or deploy it in a distributed manner.

We support S3 compatible, Google Cloud Storage and Azure Storage file storage services. The table below lists the UPPER_CASE settings that you can set as environment variables to set up the desired provider.

To reduce their size, our default docker compose files do not include all available user file environment variables. When using these docker compose files make sure that any AWS_, GS_ or AZURE_ env vars you wish to use have been added to the x-backend-variables section.

CORS Issues

When using a 3rd party file storage service which is serving files from another domain than your Baserow, you need to make sure CORS is configured correctly.

User File Variables Table

Name Description Defaults
MEDIA_URL INTERNAL The URL at which user uploaded media files will be made available $PUBLIC_BACKEND_URL/media/
MEDIA_ROOT INTERNAL The folder in which the backend will store user uploaded files /baserow/media or $DATA_DIR/media for the baserow/baserow all-in-one image

AWS_ACCESS_KEY_ID The access key for your AWS account. When set to anything other than empty this will switch Baserow to use an S3 compatible bucket for storing user file uploads.
AWS_SECRET_ACCESS_KEY The access secret key for your AWS account.
AWS_SECRET_ACCESS_KEY_FILE_PATH Optional The path to a file containing access secret key for your AWS account.
AWS_STORAGE_BUCKET_NAME Your Amazon Web Services storage bucket name.
AWS_S3_REGION_NAME Optional Name of the AWS S3 region to use (e.g. eu-west-1)
AWS_S3_ENDPOINT_URL Optional Custom S3 URL to use when connecting to S3, including scheme.
AWS_S3_CUSTOM_DOMAIN Optional Your custom domain where the files can be downloaded from.
AWS_* Optional All other AWS_ prefixed settings mentioned here are also supported.

GS_* All GS_ prefixed settings mentioned here are also supported.
GS_CREDENTIALS_FILE_PATH Optional The path to a Google service account file for credentials.

AZURE_* All AZURE_ prefixed settings mentioned here are also supported.
AZURE_ACCOUNT_KEY_FILE_PATH Optional The path to a file containing your Azure account key.
BASEROW_SERVE_FILES_THROUGH_BACKEND Set this value to true to have the backend serve files. This feature is disabled by default. This setting does not automatically secure your storage server; if it was previously public, additional measures should be taken to ensure it is no longer accessible to unauthorized users. Note that it only works if the instance is on the enterprise plan.
BASEROW_SERVE_FILES_THROUGH_BACKEND_PERMISSION If this variable is not set or is left empty, the default behavior is equivalent to setting it to DISABLED, meaning no checks will be performed on users attempting to download files. To restrict file downloads to authenticated users, set this variable to SIGNED_IN. For an even stricter control, where only authenticated users with access to the workspace containing the file can download it, set the variable to WORKSPACE_ACCESS.
BASEROW_SERVE_FILES_THROUGH_BACKEND_EXPIRE_SECONDS When this variable is unset, file links are permanent and always accessible, provided the necessary permissions are met. If assigned a positive integer, this value specifies the link’s validity period in seconds. After this duration expires, the link becomes invalid, preventing further file downloads.
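
As a sketch, switching user file storage to an S3 compatible bucket might involve the following variables (the bucket name, region, credentials and endpoint are all placeholders):

```bash
AWS_ACCESS_KEY_ID=replace-me
AWS_SECRET_ACCESS_KEY=replace-me
AWS_STORAGE_BUCKET_NAME=my-baserow-uploads
AWS_S3_REGION_NAME=eu-west-1
# Only needed for non-AWS S3 compatible services such as MinIO.
AWS_S3_ENDPOINT_URL=https://s3.internal.example.com
```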

Email Configuration

Name Description Defaults
FROM_EMAIL The email address Baserow will send emails from.
EMAIL_SMTP If set to any non empty value then Baserow will start sending emails using the configuration options below. If not set then Baserow will not send emails and just log them to the Celery worker logs instead.
EMAIL_SMTP_USE_TLS If set to any non empty value then Baserow will use a TLS (secure) connection when talking to the SMTP server. This is used for explicit TLS connections, generally on port 587. If you are experiencing hanging connections, see the implicit TLS setting EMAIL_SMTP_USE_SSL.
EMAIL_SMTP_USE_SSL If set to any non empty value then an implicit TLS (secure) connection will be used when talking to the SMTP server. In most email documentation this type of TLS connection is referred to as SSL. It is generally used on port 465. If you are experiencing problems, see the explicit TLS setting EMAIL_SMTP_USE_TLS. Note that EMAIL_SMTP_USE_TLS/EMAIL_SMTP_USE_SSL are mutually exclusive, so only set one of those settings to True.
EMAIL_SMTP_HOST The host of the external SMTP server that Baserow should use to send emails.
EMAIL_SMTP_PORT The port used to connect to $EMAIL_SMTP_HOST on.
EMAIL_SMTP_USER The username to authenticate with $EMAIL_SMTP_HOST when sending emails.
EMAIL_SMTP_PASSWORD The password to authenticate with $EMAIL_SMTP_HOST when sending emails.
EMAIL_SMTP_SSL_CERTFILE_PATH If EMAIL_SMTP_USE_SSL or EMAIL_SMTP_USE_TLS is set, you can optionally specify the path to a PEM-formatted certificate chain file to use for the SSL connection. If using docker then you will need to mount this file into all the Baserow backend containers.
EMAIL_SMTP_SSL_KEYFILE_PATH If EMAIL_SMTP_USE_SSL or EMAIL_SMTP_USE_TLS is set, you can optionally specify the path to a PEM-formatted private key file to use for the SSL connection. If using docker then you will need to mount this file into all the Baserow backend containers. Note that setting EMAIL_SMTP_SSL_CERTFILE_PATH and EMAIL_SMTP_SSL_KEYFILE_PATH doesn’t result in any certificate checking. They’re passed to the underlying SSL connection. Please refer to the documentation of Python’s ssl.wrap_socket() function for details on how the certificate chain file and private key file are handled.
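
A sketch of a typical explicit-TLS SMTP setup on port 587 (the host and credentials are placeholders):

```bash
FROM_EMAIL=no-reply@example.com
EMAIL_SMTP=yes
EMAIL_SMTP_HOST=smtp.example.com
EMAIL_SMTP_PORT=587
EMAIL_SMTP_USE_TLS=yes
EMAIL_SMTP_USER=smtp-user
EMAIL_SMTP_PASSWORD=replace-me
```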

Web-frontend Configuration

Name Description Defaults
DOWNLOAD_FILE_VIA_XHR Set to `1` to force download links to download files via XHR query to bypass `Content-Disposition: inline` that can’t be overridden in another way. If your files are stored under another origin, you also must add CORS headers to your server. 0
BASEROW_DISABLE_PUBLIC_URL_CHECK When opening the Baserow login page a check is run to ensure the PUBLIC_BACKEND_URL/BASEROW_PUBLIC_URL variables are set correctly and your browser can correctly connect to the backend. If misconfigured an error is shown. If you wish to disable this check and warning set this to any non empty value.
ADDITIONAL_MODULES Internal A list of file paths to Nuxt module.js files to load as additional Nuxt modules into Baserow on startup.
BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW Set to `true` or `1` to disable Google docs file preview.
BASEROW_MAX_SNAPSHOTS_PER_GROUP Controls how many application snapshots can be created per workspace. 50
BASEROW_UNIQUE_ROW_VALUES_SIZE_LIMIT Sets the limit for the automatic detection of multiselect options when converting a text field to a multiselect field. Increase the value to detect more options automatically, but consider performance implications. 100
BASEROW_BUILDER_DOMAINS A comma separated list of domain names that can be used as the domains to create sub domains in the application builder.
BASEROW_FRONTEND_SAME_SITE_COOKIE String value indicating what the sameSite value of the created cookies should be. lax
BASEROW_DISABLE_SUPPORT Set to any value to disable the support features in Baserow.
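
For example, a web-frontend that forces downloads through XHR and disables the Google Docs preview could be configured as in this sketch:

```bash
DOWNLOAD_FILE_VIA_XHR=1
BASEROW_DISABLE_GOOGLE_DOCS_FILE_PREVIEW=true
```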

SSO Configuration

Name Description Defaults
BASEROW_ALLOW_MULTIPLE_SSO_PROVIDERS_FOR_SAME_ACCOUNT By default Baserow will show a “please use the provider that you originally signed up with” error if you attempt to log in to an email address which has already been registered in your Baserow server with a different SSO provider/authentication method. This is to increase the security of your Baserow server. However, if you wish to allow a user who, for example, signed up initially using email and password to now log in using a new SSO provider, and are comfortable with the increased risk of allowing this, then set this environment variable to any non empty value to disable this check. When turned on, this environment variable will allow a Baserow account to be logged into by any available authentication method and not just the first one that particular user signed up with. If you later turn off this environment variable by removing it, users who have previously logged into their account with multiple different providers will be able to continue to use all of those providers to log in, however any new users will be forced to use the first provider they signed up with.

baserow/baserow Image only Configuration

Name Description Defaults
NO_COLOR Set this to any non empty value to disable colored logging in the all-in-one baserow image.
DATA_DIR For the all-in-one image only, this controls where Baserow will store all data that needs to be persisted. Inside this folder Baserow will store:
- Its postgres database
- Its redis database
- Any autogenerated secrets like the django SECRET_KEY, the postgresql database user password and the redis user password
- Caddy's state, plus any certificates and keys it uses during auto HTTPS
DISABLE_VOLUME_CHECK For the all-in-one image only. Setting this to any non empty value will disable the check run on startup that the “/baserow/data/” directory is mounted to a volume.
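
As an illustration, the all-in-one image could be pointed at a host directory instead of a named volume like this (the host path and domain are placeholders; keep the volume check enabled unless you know the data is persisted elsewhere):

```bash
# Bind-mount a host directory at the default data location.
docker run -d --name baserow \
  -v /srv/baserow/data:/baserow/data \
  -e BASEROW_PUBLIC_URL=https://baserow.example.com \
  -p 80:80 -p 443:443 \
  baserow/baserow:latest
```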

Plugin Configuration

Name Description Defaults
BASEROW_PLUGIN_GIT_REPOS A comma separated list of plugin git repos to install on startup.
BASEROW_PLUGIN_URLS A comma separated list of plugin urls to install on startup.
BASEROW_DISABLE_PLUGIN_INSTALL_ON_STARTUP When set to any non-empty value, no automatic startup check and/or install of plugins will be run. Disables the two variables above.
BASEROW_PLUGIN_DIR INTERNAL Sets the folder where the Baserow plugin scripts look for plugins. In the all-in-one image /baserow/data/plugins, otherwise /baserow/plugins
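
A sketch of installing plugins on startup (both URLs are placeholders):

```bash
BASEROW_PLUGIN_GIT_REPOS=https://gitlab.com/example/my-baserow-plugin.git
BASEROW_PLUGIN_URLS=https://example.com/plugins/my-other-plugin.tar.gz
```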

Posthog configuration

Name Description Defaults
POSTHOG_PROJECT_API_KEY Set this to your Posthog project API key for product analytics.
POSTHOG_HOST Set this to your Posthog host for product analytics.
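
A sketch of enabling product analytics (the key and host are placeholders for your own PostHog project values):

```bash
POSTHOG_PROJECT_API_KEY=phc_replace_me
POSTHOG_HOST=https://your-posthog.example.com
```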