In this tutorial, we’ll go over how to create a plugin for the open-source online database Baserow.
There is existing documentation on how to create and install Baserow plugins, but in this tutorial, we’ll go into more detail. I will also share my thoughts as a newcomer to Baserow.
We’ll introduce two new field types: a translation field, which translates the contents of another field, and a ChatGPT field, which lets you enter a prompt that references other fields.
The full example code is here: https://github.com/Language-Tools/baserow-translate-plugin
To complete this tutorial, you’ll need the following:
Find more info about Baserow installation here: https://baserow.io/docs/installation/install-on-ubuntu
At a super high level, Baserow is a Python Django app, and by writing Python code, you can extend it in various ways. This tutorial focuses on introducing new field types (think Text, Date, Number, Formula, etc.) which take input from another field, and produce a result.
As always with programming, start with the simplest solution that works. Baserow has various import methods, a REST API, and webhooks. In most cases, you can use Baserow with no or very little programming. In this case, we are introducing some new logic and GUI changes, so the plugin framework is suitable for us.
Technically, you can do lots of things with plugins, and pretty much customize every aspect of Baserow. Currently, plugins need to be installed offline and deployed in a standalone Baserow instance. There is no such thing as an online marketplace where anyone can install a plugin with a click. If you are familiar with self-hosting apps, this is not a concern.
You need the Python cookiecutter module, a tool that generates a project directory from a template. The recommended way to install Python modules these days is inside a virtual environment, so let’s create one.
# create the virtual env
python3.9 -m venv translate-plugin
# activate it
source translate-plugin/bin/activate
# update pip
pip install --upgrade pip
# finally, install cookiecutter pip module
pip install cookiecutter
Now, go to the directory where you want to create your plugin code. I put all my Python projects in ~/python
cd ~/python
cookiecutter gl:baserow/baserow --directory plugin-boilerplate
# now, enter the project name, it's up to you what you choose
[1/3] project_name (My Baserow Plugin): Translate Plugin
[2/3] project_slug (translate-plugin):
[3/3] project_module (translate_plugin):
Okay, now we have a directory here: ~/python/translate-plugin
luc@vocabai$ ls -ltr
total 56
-rw-r--r--. 1 luc luc 467 Aug 23 06:08 Caddyfile
-rw-r--r--. 1 luc luc 250 Aug 23 06:08 Caddyfile.dev
-rw-r--r--. 1 luc luc 179 Aug 23 06:08 Dockerfile
-rw-r--r--. 1 luc luc 2249 Aug 23 06:08 README.md
-rw-r--r--. 1 luc luc 1211 Aug 23 06:08 backend-dev.Dockerfile
-rw-r--r--. 1 luc luc 212 Aug 23 06:08 backend.Dockerfile
-rw-r--r--. 1 luc luc 1169 Aug 23 06:08 dev.Dockerfile
-rw-r--r--. 1 luc luc 1371 Aug 23 06:08 docker-compose.dev.yml
-rw-r--r--. 1 luc luc 6769 Aug 23 06:08 docker-compose.multi-service.dev.yml
-rw-r--r--. 1 luc luc 3599 Aug 23 06:08 docker-compose.multi-service.yml
-rw-r--r--. 1 luc luc 344 Aug 23 06:08 docker-compose.yml
-rw-r--r--. 1 luc luc 211 Aug 23 06:08 web-frontend.Dockerfile
-rw-r--r--. 1 luc luc 884 Aug 23 06:08 web-frontend-dev.Dockerfile
drwxr-xr-x. 1 luc luc 32 Aug 23 06:08 plugins
First, we’re going to start up Baserow. We haven’t added any custom code yet, so when things start up, you’ll just have a self-hosted Baserow instance, but we want to make sure everything is working. Because I am overriding $BASEROW_PUBLIC_URL, and because I’ve already got a reverse proxy running on port 443, I first need to make a small change to docker-compose.dev.yml. I also want to be able to set the OpenAI API key.
Change this:
ports:
  - "80:80"
  - "443:443"
environment:
  BASEROW_PUBLIC_URL: http://localhost:8000
to this:
ports:
  - "8000:80"
  - "8443:443"
environment:
  BASEROW_PUBLIC_URL: ${BASEROW_PUBLIC_URL:-http://localhost:8000}
  OPENAI_API_KEY: ${OPENAI_API_KEY}
Now we can pretty much follow the official instructions from Baserow:
# Enable Docker buildkit
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_BUILDKIT=1
# Set these variables so the images are built and run with the same uid/gid as your
# user. This prevents permission issues when mounting your local source into
# the images.
export PLUGIN_BUILD_UID=$(id -u)
export PLUGIN_BUILD_GID=$(id -g)
# this is specific to my machine. I have a special firewall setup, so I need to use port 8000 and a particular hostname
export BASEROW_PUBLIC_URL=http://`hostname -s`.webdev.ipv6n.net:8000
# and now, this command builds the Docker containers and starts everything up
docker compose -f docker-compose.dev.yml up --build
Note that the last command for me is docker compose. You may also see docker-compose: docker-compose is the older standalone Compose V1 binary, while docker compose is the newer Compose V2 plugin bundled with recent versions of Docker. docker compose works for me. After running this, the Docker containers will get built, which takes a few minutes. Even after Baserow is up, it needs to do things like run migrations and download templates, so you’ll need to be patient. You should see output like this:
luc@vocabai$ docker compose -f docker-compose.dev.yml up --build
[+] Building 23.6s (10/11)
=> [translate-plugin internal] load build definition from dev.Dockerfile 0.0s
=> => transferring dockerfile: 1.27kB 0.0s
=> [translate-plugin internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [translate-plugin internal] load metadata for docker.io/baserow/baserow:1.19.1 0.4s
=> [translate-plugin base 1/1] FROM docker.io/baserow/baserow:1.19.1@sha256:74a9f6a34af69488d82280a918231cbe10ae9ac68ab5b0582ae9b30844 0.0s
=> [translate-plugin internal] load build context 0.0s
=> => transferring context: 16.64kB 0.0s
=> CACHED [translate-plugin stage-1 2/7] COPY --from=base --chown=1000:1000 /baserow /baserow 0.0s
=> CACHED [translate-plugin stage-1 3/7] RUN groupmod -g 1000 baserow_docker_group && usermod -u 1000 baserow_docker_user 0.0s
=> CACHED [translate-plugin stage-1 4/7] COPY --chown=1000:1000 ./plugins/translate_plugin/backend/requirements/dev.txt /tmp/plugin-de 0.0s
=> CACHED [translate-plugin stage-1 5/7] RUN . /baserow/venv/bin/activate && pip3 install -r /tmp/plugin-dev-requirements.txt && chown 0.0s
=> [translate-plugin stage-1 6/7] COPY --chown=1000:1000 ./plugins/translate_plugin/ /baserow/data/plugins/translate_plugin/ 0.0s
=> [translate-plugin stage-1 7/7] RUN /baserow/plugins/install_plugin.sh --folder /baserow/data/plugins/translate_plugin --dev 23.2s
=> => # warning " > eslint-loader@4.0.2" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0".
=> => # warning "eslint-plugin-jest > @typescript-eslint/utils > @typescript-eslint/typescript-estree > tsutils@3.21.0" has unmet peer depen
=> => # dency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7
=> => # .0-dev || >= 3.7.0-beta".
[...]
Eventually, you should get to this:
translate-plugin | [WEBFRONTEND][2023-08-22 22:28:14] ℹ Compiling Server
translate-plugin | [WEBFRONTEND][2023-08-22 22:28:16] ✔ Server: Compiled successfully in 32.66s
translate-plugin | [WEBFRONTEND][2023-08-22 22:28:17] ✔ Client: Compiled successfully in 34.91s
translate-plugin | [WEBFRONTEND][2023-08-22 22:28:17] ℹ Waiting for file changes
translate-plugin | [WEBFRONTEND][2023-08-22 22:28:17] ℹ Memory usage: 905 MB (RSS: 1.77 GB)
translate-plugin | [BACKEND][2023-08-22 22:28:17] INFO 2023-08-22 22:27:46,801 daphne.server.listen_success:159- Listening on TCP address 127.0.0.1:8000
translate-plugin | [BACKEND][2023-08-22 22:28:17] INFO 2023-08-22 22:28:17,126 django.channels.server.log_action:168- HTTP GET /api/_health/ 200 [0.02, 127.0.0.1:34740]
translate-plugin | [BASEROW-WATCHER][2023-08-22 22:28:17] Waiting for Baserow to become available, this might take 30+ seconds...
translate-plugin | [BASEROW-WATCHER][2023-08-22 22:28:17] =======================================================================
translate-plugin | [BASEROW-WATCHER][2023-08-22 22:28:17] Baserow is now available at http://vocabai.webdev.ipv6n.net:8000
And that’s when you know you can open the web interface. In my case, I go to http://vocabai.webdev.ipv6n.net:8000. If you see the Baserow login page, you know that everything is up and running. You’ll need to create a user so you can access Baserow and enter some data for later.
Open backend/requirements/base.txt. You can add additional Python modules there, and we need the two following modules, so just append them at the end of the file:
argostranslate
openai
argostranslate is the open-source machine translation module (https://github.com/argosopentech/argos-translate), and openai is the module you need to make OpenAI ChatGPT API calls.
Now, open plugins/translate_plugin/backend/src/translate_plugin/apps.py. We want to add some initialization code that runs when the plugin first starts up. Add a new function:
def install_argos_translate_package(from_code, to_code):
    import argostranslate.package
    argostranslate.package.update_package_index()
    available_packages = argostranslate.package.get_available_packages()
    package_to_install = next(
        filter(lambda x: x.from_code == from_code and x.to_code == to_code, available_packages)
    )
    argostranslate.package.install_from_path(package_to_install.download())
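One caveat worth knowing (my own note, not from the Baserow docs): next() raises StopIteration if no package matches the requested language pair, for example when a language code has a typo. A minimal sketch of a more defensive lookup, using a stand-in FakePackage class purely for illustration (the real objects come from argostranslate.package.get_available_packages()):

```python
from dataclasses import dataclass


@dataclass
class FakePackage:
    """Stand-in for argostranslate's package objects, illustration only."""
    from_code: str
    to_code: str


def find_package(available_packages, from_code, to_code):
    """Return the matching language-pair package, or None if unavailable."""
    return next(
        (p for p in available_packages
         if p.from_code == from_code and p.to_code == to_code),
        None,  # the default avoids StopIteration for unsupported pairs
    )


packages = [FakePackage('en', 'fr'), FakePackage('fr', 'en')]
print(find_package(packages, 'en', 'fr'))  # the en->fr package
print(find_package(packages, 'en', 'xx'))  # None
```

With a None result you could log a clear error instead of crashing plugin startup.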
Add this at the beginning of the ready(self) function:
# install argostranslate language packs. they need to be installed by the user id running baserow,
# as their data will be stored in $HOME/.local/share/argos-translate/
install_argos_translate_package('en', 'fr')
install_argos_translate_package('fr', 'en')

# configure OpenAI
openai_api_key = os.environ.get('OPENAI_API_KEY', '')
if openai_api_key:
    import openai
    openai.api_key = openai_api_key
Add import os at the top of the file.
What does this do? The ArgosTranslate library needs language packs installed before it can translate, and we’re installing just French and English to keep things simple. We’re also configuring the OpenAI API key (you’ll need one to try the ChatGPT field).
Now rebuild and start up again. You should see Docker install the argostranslate and openai Python modules, and then Baserow will start up.
docker compose -f docker-compose.dev.yml up --build
This plugin introduces new field types. When you add those fields to a table in your Baserow instance, the configuration of these fields needs to be stored somewhere, and in a particular format. We use a field model for that. Create the following file: plugins/translate_plugin/backend/src/translate_plugin/models.py
The content should be the following:
from django.db import models

from baserow.contrib.database.fields.models import Field


class TranslationField(Field):
    source_field = models.ForeignKey(
        Field,
        on_delete=models.CASCADE,
        help_text="The field to translate.",
        null=True,
        blank=True,
        related_name='+'
    )
    source_language = models.CharField(
        max_length=255,
        blank=True,
        default="",
        help_text="Source Language",
    )
    target_language = models.CharField(
        max_length=255,
        blank=True,
        default="",
        help_text="Target Language",
    )


class ChatGPTField(Field):
    prompt = models.CharField(
        max_length=4096,
        blank=True,
        default="",
        help_text="Prompt for chatgpt",
    )
The TranslationField model is more complicated. It references another field (the one containing the text to translate) through a ForeignKey (a link to another table row, in database terminology), and it also stores the source and target languages as strings.
The ChatGPTField model is very simple: it just contains a string, long enough to hold a ChatGPT prompt.
After creating the field models, we need to tell our Baserow plugin how these new fields will behave. Let’s open a new file: plugins/translate_plugin/backend/src/translate_plugin/field_types.py
The code for the field types is fairly involved, so I won’t copy it all here; you can look at the sample code (https://github.com/Language-Tools/baserow-translate-plugin) and copy from there. But I’ll cover a few points.

First, we need to declare the field type properly: type uniquely identifies the field, and model_class indicates the field model we’ll use to store the field’s attributes.
class TranslationFieldType(FieldType):
    type = 'translation'
    model_class = TranslationField
Here’s an important method: get_field_dependencies. It tells Baserow which fields we depend on: if we change the source field, we want the translation field’s content to change too.
def get_field_dependencies(self, field_instance: Field,
                           field_lookup_cache: FieldCache):
    if field_instance.source_field is not None:
        return [
            FieldDependency(
                dependency=field_instance.source_field,
                dependant=field_instance
            )
        ]
    return []
Let’s also look at the code for row_of_dependency_updated. This is the method that gets called when the source field’s content changes. Say we’re translating from French to English: if you modify the text in the French column, row_of_dependency_updated gets called, and it runs the translation code. Notice that it can be called with a single row or with multiple rows. Ultimately it calls the translation.translate method, which we’ll define later.
def row_of_dependency_updated(
    self,
    field,
    starting_row,
    update_collector,
    field_cache: "FieldCache",
    via_path_to_starting_table,
):
    # db_column gives us the internal/database column name of a field
    source_internal_field_name = field.source_field.db_column
    target_internal_field_name = field.db_column
    source_language = field.source_language
    target_language = field.target_language

    # starting_row can come in different shapes, so normalize it to a list
    if isinstance(starting_row, TableModelQuerySet):
        # a TableModelQuerySet (when creating multiple rows in a batch):
        # we want to iterate over its TableModel objects
        row_list = starting_row
    elif isinstance(starting_row, list):
        # a list of TableModels, iterate over them
        row_list = starting_row
    else:
        # a single TableModel, wrap it in a one-element list
        row_list = [starting_row]

    rows_to_bulk_update = []
    for row in row_list:
        source_value = getattr(row, source_internal_field_name)
        translated_value = translation.translate(source_value, source_language,
                                                 target_language)
        setattr(row, target_internal_field_name, translated_value)
        rows_to_bulk_update.append(row)

    model = field.table.get_model()
    model.objects.bulk_update(rows_to_bulk_update,
                              fields=[target_internal_field_name])
    ViewHandler().field_value_updated(field)

    super().row_of_dependency_updated(
        field,
        starting_row,
        update_collector,
        field_cache,
        via_path_to_starting_table,
    )
So we saw above that row_of_dependency_updated gets called when rows are edited. What if we add the translation field to an existing table that is already full of rows? That’s where the after_create and after_update methods come in. If your table already contains the French column, and you add a field translating it to German, then after the field is created, after_create will be called, and the code is expected to populate all the translations. If you later change your mind and decide you want to translate to Italian instead, you’ll edit the field properties, and the after_update method will be called, re-populating the translation for every row.
def after_create(self, field, model, user, connection, before, field_kwargs):
    self.update_all_rows(field)

def after_update(
    self,
    from_field,
    to_field,
    from_model,
    to_model,
    user,
    connection,
    altered_column,
    before,
    to_field_kwargs
):
    self.update_all_rows(to_field)

def update_all_rows(self, field):
    source_internal_field_name = field.source_field.db_column
    target_internal_field_name = field.db_column
    source_language = field.source_language
    target_language = field.target_language
    table_id = field.table.id
    translation.translate_all_rows(table_id, source_internal_field_name,
                                   target_internal_field_name,
                                   source_language,
                                   target_language)
This field type is simpler: it only stores a single piece of text, the prompt. To make it useful, though, that prompt can reference other fields in your Baserow table. The goal is to be able to write something like this:

Translate from German to French: {German Field}

where German Field is a text field in Baserow. For each row, Baserow will expand this to, for example, Translate from German to French: Guten Tag before sending it to the OpenAI API.
So we need some logic to expand the variables in the prompt, and here it is. The get_field_dependencies method examines the prompt and declares which fields we depend on.
def get_fields_in_prompt(self, prompt):
    fields_to_expand = re.findall(r'{(.*?)}', prompt)
    return fields_to_expand

def get_field_dependencies(self, field_instance: Field,
                           field_lookup_cache: FieldCache):
    """getting field dependencies is more complex here, because the user can
    add new field variables, which creates a new dependency"""
    if field_instance.prompt is not None:
        # need to parse the prompt to find the fields it depends on
        fields_to_expand = self.get_fields_in_prompt(field_instance.prompt)
        result = []
        for field_name in fields_to_expand:
            # for each field that we found in the prompt, add a dependency
            result.append(
                FieldDependency(
                    dependency=field_lookup_cache.lookup_by_name(field_instance.table, field_name),
                    dependant=field_instance
                )
            )
        return result
    return []
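The extraction and expansion logic can be exercised in isolation, outside Baserow. Here is a minimal sketch in plain Python: get_fields_in_prompt is the same regex as above, while expand_prompt is a hypothetical helper of mine that stands in for the field_cache/getattr lookups done in the real field type.

```python
import re


def get_fields_in_prompt(prompt):
    # non-greedy match of anything between { and }
    return re.findall(r'{(.*?)}', prompt)


def expand_prompt(prompt, row_values):
    # row_values maps field names to cell values; in the real plugin these
    # come from the row object via field_cache.lookup_by_name / getattr
    expanded = prompt
    for field_name in get_fields_in_prompt(prompt):
        expanded = expanded.replace('{' + field_name + '}', row_values[field_name])
    return expanded


prompt = 'Translate from German to French: {German Field}'
print(get_fields_in_prompt(prompt))  # ['German Field']
print(expand_prompt(prompt, {'German Field': 'Guten Tag'}))
# Translate from German to French: Guten Tag
```

Note that a field name containing { or } would confuse this simple regex; the sample plugin doesn’t guard against that either.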
row_of_dependency_updated also has some special logic to fully expand the variables inside the prompt:
for row in row_list:
    # fully expand the prompt
    expanded_prompt = prompt_template
    for field_name in fields_to_expand:
        internal_field_name = field_cache.lookup_by_name(field.table, field_name).db_column
        field_value = getattr(row, internal_field_name)
        # now, replace inside the prompt
        expanded_prompt = expanded_prompt.replace('{' + field_name + '}', field_value)
    # call the chatgpt API
    translated_value = translation.chatgpt(expanded_prompt)
    setattr(row, target_internal_field_name, translated_value)
    rows_to_bulk_update.append(row)
Besides that, the ChatGPTFieldType shares a lot of similarities with TranslationFieldType.
We need to call the translation APIs somewhere. Let’s create the file plugins/translate_plugin/backend/src/translate_plugin/translation.py. Refer to the sample code for the full contents, but I’ll comment on some methods here.
This method translates a single field. It calls ArgosTranslate, a free, open-source machine translation library.
def translate(text, source_language, target_language):
    # call argos translate
    logger.info(f'translating [{text}] from {source_language} to {target_language}')
    return argostranslate.translate.translate(text, source_language, target_language)
If we add a translation field to an existing table, we need to populate a translation for every row of the table. We use the following method, which will iterate over all the rows, identify the source and target fields, call the translation method, and save the new record.
def translate_all_rows(table_id, source_field_id, target_field_id, source_language, target_language):
    table = Table.objects.get(id=table_id)
    # https://docs.djangoproject.com/en/4.0/ref/models/querysets/
    table_model = table.get_model()
    for row in table_model.objects.all():
        text = getattr(row, source_field_id)
        translated_text = translate(text, source_language, target_language)
        setattr(row, target_field_id, translated_text)
        row.save()
    # notify the front-end that rows have been updated
    table_updated.send(None, table=table, user=None, force_table_refresh=True)
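Calling row.save() inside the loop issues one UPDATE per row, which is fine for small tables but slow for large ones. One option would be to collect rows into batches and pass each batch to Django's bulk_update, as row_of_dependency_updated does. The batching itself is plain Python; here is a sketch, where chunked is my own helper, not part of Baserow or Django:

```python
def chunked(iterable, size):
    """Yield successive lists of up to `size` items from iterable."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly short, batch
        yield batch


print(list(chunked(range(5), 2)))  # [[0, 1], [2, 3], [4]]
```

In translate_all_rows you could then translate each chunk and call table_model.objects.bulk_update(chunk, fields=[target_field_id]) once per chunk instead of saving row by row.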
The chatgpt method is simple: it takes a single prompt and returns the output.
def chatgpt(prompt):
    # call OpenAI chatgpt
    logger.info(f'calling chatgpt with prompt [{prompt}]')
    chat_completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}])
    return chat_completion['choices'][0]['message']['content']
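One practical note: the OpenAI API is rate limited and can fail transiently, which starts to matter once chatgpt_all_rows calls it once per row. The sample plugin does not handle this, but a generic retry-with-backoff wrapper is easy to add. A sketch, with with_retries, retries, and base_delay being my own names:

```python
import time


def with_retries(call, retries=3, base_delay=1.0):
    """Run call(); on exception, retry with exponential backoff."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts, surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))


# demo with a stub that fails once, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 2:
        raise RuntimeError('transient failure')
    return 'ok'

print(with_retries(flaky, base_delay=0.01))  # 'ok', after one retry
```

In the plugin you would wrap the API call as with_retries(lambda: chatgpt(expanded_prompt)). A real implementation should probably catch only the OpenAI rate-limit/timeout exceptions rather than bare Exception.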
However, populating a ChatGPT field on all rows means doing the prompt template expansion logic again:
def chatgpt_all_rows(table_id, target_field_id, prompt, prompt_field_names):
    table = Table.objects.get(id=table_id)
    table_model = table.get_model()
    # we'll build this map on the first row
    field_name_to_field_id_map = {}
    for row in table_model.objects.all():
        # do we need to build the map?
        if len(field_name_to_field_id_map) == 0:
            for field in row.get_fields():
                field_name_to_field_id_map[field.name] = field.db_column
        # fully expand the prompt
        expanded_prompt = prompt
        for field_name in prompt_field_names:
            internal_field_name = field_name_to_field_id_map[field_name]
            field_value = getattr(row, internal_field_name)
            if field_value is None:
                field_value = ''
            expanded_prompt = expanded_prompt.replace('{' + field_name + '}', field_value)
        # call the chatgpt api, and save the row
        chatgpt_result = chatgpt(expanded_prompt)
        setattr(row, target_field_id, chatgpt_result)
        row.save()
    # notify the front-end that rows have been updated
    table_updated.send(None, table=table, user=None, force_table_refresh=True)
We’re almost done with the backend code. One more thing: we need to register our two new field types. Open plugins/translate_plugin/backend/src/translate_plugin/apps.py and add the following at the end of ready(self):
# register our new field type
from baserow.contrib.database.fields.registries import field_type_registry
from .field_types import TranslationFieldType, ChatGPTFieldType
field_type_registry.register(TranslationFieldType())
field_type_registry.register(ChatGPTFieldType())
After that’s done, let’s restart our instance again:
docker compose -f docker-compose.dev.yml up --build
We need to do one more thing: apply a Django migration, because we added new models. In database terminology, a migration just means you’re updating your table schema in some automated way. This is documented here: /docs/plugins/field-type
# Set these env vars to make sure mounting your source code into the container uses
# the correct user and permissions.
export PLUGIN_BUILD_UID=$(id -u)
export PLUGIN_BUILD_GID=$(id -g)
docker container exec translate-plugin /baserow.sh backend-cmd manage makemigrations translate_plugin
docker container exec translate-plugin /baserow.sh backend-cmd manage migrate translate_plugin
You should see the following output:
~/python/translate-plugin
luc@vocabai$ docker container exec translate-plugin /baserow.sh backend-cmd manage makemigrations translate_plugin
[STARTUP][2023-08-30 15:04:20] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
[STARTUP][2023-08-30 15:04:20] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
[STARTUP][2023-08-30 15:04:20] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
[STARTUP][2023-08-30 15:04:20] Importing SECRET_KEY secret from /baserow/data/.secret
[STARTUP][2023-08-30 15:04:20] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
[STARTUP][2023-08-30 15:04:20] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.19.1,deployment.environment=unknown
Loaded backend plugins: translate_plugin
Migrations for 'translate_plugin':
/baserow/data/plugins/translate_plugin/backend/src/translate_plugin/migrations/0001_initial.py
- Create model ChatGPTField
- Create model TranslationField
~/python/translate-plugin
luc@vocabai$ docker container exec translate-plugin /baserow.sh backend-cmd manage migrate translate_plugin
[STARTUP][2023-08-30 15:04:33] No DATABASE_HOST or DATABASE_URL provided, using embedded postgres.
[STARTUP][2023-08-30 15:04:33] Using embedded baserow redis as no REDIS_HOST or REDIS_URL provided.
[STARTUP][2023-08-30 15:04:33] Importing REDIS_PASSWORD secret from /baserow/data/.redispass
[STARTUP][2023-08-30 15:04:33] Importing SECRET_KEY secret from /baserow/data/.secret
[STARTUP][2023-08-30 15:04:33] Importing BASEROW_JWT_SIGNING_KEY secret from /baserow/data/.jwt_signing_key
[STARTUP][2023-08-30 15:04:33] Importing DATABASE_PASSWORD secret from /baserow/data/.pgpass
OTEL_RESOURCE_ATTRIBUTES=service.namespace=Baserow,service.version=1.19.1,deployment.environment=unknown
Loaded backend plugins: translate_plugin
Operations to perform:
Apply all migrations: translate_plugin
Clearing Baserow's internal generated model cache...
Done clearing cache.
Running migrations:
Applying translate_plugin.0001_initial... OK
Submitting the sync templates task to run asynchronously in celery after the migration...
Created 133 operations...
Deleted 27 un-registered operations...
Checking to see if formulas need updating...
2023-08-30 15:04:37.304 | INFO | baserow.contrib.database.formula.migrations.handler:migrate_formulas:167 - Found 0 batches of formulas to migrate from version None to 5.
Finished migrating formulas: : 0it [00:00, ?it/s]
Syncing default roles: 100%|██████████| 7/7 [00:00<00:00, 64.91it/s]
And we are now completely done with the backend changes. However, we can’t yet use our two new fields, because we need to make some GUI changes first.
We need to create some GUI components for the two new field types. Those are VueJS / Nuxt components. I’m not an expert in frontend code, I just pretty much randomly try stuff until it works. Open this file:
plugins/translate_plugin/web-frontend/modules/translate-plugin/components/TranslationSubForm.vue
I won’t paste the full source here (refer to the sample code), but I’ll comment on a few snippets. Here, we add a dropdown to select the source field, the field that contains the text to translate. The available fields need to be in a member variable called tableFields, which is a computed property in the VueJS component.
<div class="control">
  <label class="control__label control__label--small">
    Select Source Field
  </label>
  <Dropdown
    v-model="values.source_field_id"
  >
    <DropdownItem
      v-for="field in tableFields"
      :key="field.id"
      :name="field.name"
      :value="field.id"
      :icon="field.icon"
    ></DropdownItem>
  </Dropdown>
</div>
These two are very simple. We just ask the user to type in the source and target language codes (like fr and en):
<div class="control">
  <label class="control__label control__label--small">
    Type in language to translate from
  </label>
  <div class="control__elements">
    <input
      v-model="values.source_language"
      class="input"
      type="text"
    />
  </div>
</div>
<div class="control">
  <label class="control__label control__label--small">
    Type in language to translate to
  </label>
  <div class="control__elements">
    <input
      v-model="values.target_language"
      class="input"
      type="text"
    />
  </div>
</div>
Now, here’s the code and state for the component. It specifies which values we’ll be storing, and adds a computed property that tells the component which fields are available in the table (so that the user can select a source field).
export default {
  name: 'TranslationSubForm',
  mixins: [form, fieldSubForm],
  data() {
    return {
      allowedValues: ['source_field_id', 'source_language', 'target_language'],
      values: {
        source_field_id: '',
        source_language: '',
        target_language: ''
      }
    }
  },
  methods: {
    isFormValid() {
      return true
    },
  },
  computed: {
    tableFields() {
      // collect all fields, including the primary field in this table
      const primaryField = this.$store.getters['field/getPrimary']
      const fields = this.$store.getters['field/getAll']
      let allFields = [primaryField].concat(fields)
      // remove any undefined field
      allFields = allFields.filter((f) => f !== undefined)
      return allFields
    },
  }
}
The ChatGPT GUI component is much simpler. Its code lives in plugins/translate_plugin/web-frontend/modules/translate-plugin/components/ChatGPTSubForm.vue. It just contains the ChatGPT prompt, so it’s easy to implement:
<template>
  <div>
    <div class="control">
      <label class="control__label control__label--small">
        Type in prompt for ChatGPT, you may reference other fields such as {Field 1}
      </label>
      <div class="control__elements">
        <input
          v-model="values.prompt"
          class="input"
          type="text"
        />
      </div>
    </div>
  </div>
</template>

<script>
import form from '@baserow/modules/core/mixins/form'
import fieldSubForm from '@baserow/modules/database/mixins/fieldSubForm'

export default {
  name: 'ChatGPTSubForm',
  mixins: [form, fieldSubForm],
  data() {
    return {
      allowedValues: ['prompt'],
      values: {
        prompt: ''
      }
    }
  },
  methods: {
    isFormValid() {
      return true
    },
  }
}
</script>
We need to create a field type object on the frontend as well. Open plugins/translate_plugin/web-frontend/modules/translate-plugin/fieldtypes.js; I’ll comment on some of the code.

The getType method returns the same field type string we used in the Python code on the backend.
export class TranslationFieldType extends FieldType {
  static getType() {
    return 'translation'
  }
Make sure this points to the VueJS component we created earlier:
  getFormComponent() {
    return TranslationSubForm
  }
For the rest of the functions, we use pretty much the same implementations as a regular Baserow text field, so I won’t comment on those; that code is straightforward.
In the frontend as well, we need to register those field types. Open plugins/translate_plugin/web-frontend/modules/translate-plugin/plugin.js. It should look like this:
import { PluginNamePlugin } from '@translate-plugin/plugins'
import { TranslationFieldType, ChatGPTFieldType } from '@baserow-translate-plugin/fieldtypes'

export default (context) => {
  const { app } = context
  app.$registry.register('plugin', new PluginNamePlugin(context))
  app.$registry.register('field', new TranslationFieldType(context))
  app.$registry.register('field', new ChatGPTFieldType(context))
}
We are done with all the changes; let’s run the Baserow plugin. If you have an OpenAI API key, set the corresponding environment variable:
export OPENAI_API_KEY=<your OpenAI API key>
docker compose -f docker-compose.dev.yml up --build
Eventually, you should see this:
translate-plugin | [WEBFRONTEND][2023-08-31 01:32:39] ℹ Compiling Server
translate-plugin | [WEBFRONTEND][2023-08-31 01:32:39] ✔ Server: Compiled successfully in 17.76s
translate-plugin | [BASEROW-WATCHER][2023-08-31 01:32:40] Waiting for Baserow to become available, this might take 30+ seconds...
translate-plugin | [WEBFRONTEND][2023-08-31 01:32:40] ✔ Client: Compiled successfully in 17.96s
translate-plugin | [WEBFRONTEND][2023-08-31 01:32:40] ℹ Waiting for file changes
translate-plugin | [WEBFRONTEND][2023-08-31 01:32:40] ℹ Memory usage: 1.12 GB (RSS: 1.59 GB)
translate-plugin | [BACKEND][2023-08-31 01:32:42] INFO 2023-08-31 01:32:32,085 daphne.server.listen_success:159- Listening on TCP address 127.0.0.1:8000
translate-plugin | [BACKEND][2023-08-31 01:32:42] INFO 2023-08-31 01:32:42,225 django.channels.server.log_action:168- HTTP GET /api/settings/ 200 [0.05, 127.0.0.1:52476]
translate-plugin | [BACKEND][2023-08-31 01:32:42] INFO 2023-08-31 01:32:42,225 django.channels.server.log_action:168- HTTP GET /api/settings/ 200 [0.05, 127.0.0.1:52476]
translate-plugin | [BACKEND][2023-08-31 01:32:42] INFO 2023-08-31 01:32:42,269 django.channels.server.log_action:168- HTTP GET /api/auth-provider/login-options/ 200 [0.04, 127.0.0.1:52480]
translate-plugin | [CADDY][2023-08-31 01:32:42] {"level":"info","ts":1693445540.1934621,"msg":"serving initial configuration"}
translate-plugin | [BASEROW-WATCHER][2023-08-31 01:32:50] Waiting for Baserow to become available, this might take 30+ seconds...
translate-plugin | [BASEROW-WATCHER][2023-08-31 01:32:50] =======================================================================
translate-plugin | [BASEROW-WATCHER][2023-08-31 01:32:50] Baserow is now available at http://vocabai.webdev.ipv6n.net:8000
translate-plugin | [BACKEND][2023-08-31 01:32:50] INFO 2023-08-31 01:32:42,269 django.channels.server.log_action:168- HTTP GET /api/auth-provider/login-options/ 200 [0.04, 127.0.0.1:52480]
translate-plugin | [BACKEND][2023-08-31 01:32:50] INFO 2023-08-31 01:32:50,229 django.channels.server.log_action:168- HTTP GET /api/_health/ 200 [0.01, 127.0.0.1:53788]
translate-plugin | 2023-08-31 01:32:50,231 INFO success: caddy entered RUNNING state, process has stayed up for > than 30 seconds (startsecs)
Note that the URL will be different in your case.
Log in to your Baserow instance running the plugin. In my case, I go to http://vocabai.webdev.ipv6n.net:8000. Create a user to get access to the dashboard, then create a new table:
Next, we’ll modify the Name field, renaming it to English:
Then, we’ll create a French translation field:
You should see the automatic translation take place when you edit the text in the English field.
Now, let’s add a ChatGPT field. We’ll ask a question about the English text, using the right prompt. You could also ask for a translation instead.
You should see the result of the ChatGPT queries:
We’ve explored the process of building a plugin that seamlessly integrates Baserow with Argos Translate. We covered the steps of designing a unique field type to make this integration possible. So, whether it’s a translation or another Python project, you’re all set to supercharge your Baserow experience.
In case you’ve run into an issue while following the tutorial, feel free to reach out to ask for help in the Baserow community or check the Baserow docs.
For more insights into Luc’s projects, check out - https://www.vocab.ai/.