Baserow’s import feature converts your existing data into organized tables, bringing your spreadsheets to life with database capabilities.
Create Baserow tables by importing CSV, Excel, JSON, or XML files, or paste data directly from spreadsheets. Convert existing data into structured tables with automatic field detection.
This guide covers creating new tables through import. To add data to existing tables, see Import data into an existing table.
Importing lets you create fully populated tables from existing data files or spreadsheets. Instead of manually entering data, upload your file and Baserow automatically creates a table with appropriate fields and rows. This is the fastest way to migrate data from spreadsheets, exports, or other platforms into Baserow.
| Format | Best for | File extension |
|---|---|---|
| Paste data | Quick transfers from spreadsheets | N/A |
| CSV | Simple tabular data from any spreadsheet app | .csv |
| Excel | Spreadsheet files exported from Microsoft Excel | .xlsx |
| JSON | Structured data exports from APIs or apps | .json |
| XML | Hierarchical data from technical systems | .xml |
All import methods are limited to 5,000 rows per table. For larger datasets, split your file and import in batches, or contact support for enterprise solutions.
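If you split a file yourself, a short script saves manual work. Below is a minimal sketch in Python that breaks a CSV into pieces under the 5,000-row limit; the input file name and output prefix are illustrative, not part of Baserow.

```python
import csv

CHUNK_SIZE = 5000  # Baserow's per-table import limit

def split_csv(path, prefix="part"):
    """Split a CSV into files of at most CHUNK_SIZE data rows,
    repeating the header row in every output file."""
    with open(path, newline="", encoding="utf-8") as src:
        reader = csv.reader(src)
        header = next(reader)
        out, writer, count = None, None, 0
        for i, row in enumerate(reader):
            if i % CHUNK_SIZE == 0:
                if out:
                    out.close()
                count += 1
                out = open(f"{prefix}_{count}.csv", "w", newline="", encoding="utf-8")
                writer = csv.writer(out)
                writer.writerow(header)  # every chunk keeps the header row
            writer.writerow(row)
        if out:
            out.close()

split_csv("large_export.csv")
```

Each output file keeps the original header row, so every chunk imports with the same field names.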
Before importing, clean your data and review the format considerations for your chosen file type.
All imported data comes in as text fields initially. After import, you can convert fields to their appropriate types (numbers, dates, select options, etc.). Alternatively, import into an existing table to use pre-configured field types.

Pasting is the fastest option for copying data directly from Excel, Google Sheets, or other spreadsheet applications. It works well for quick data transfers without saving intermediate files.
Step-by-step:
CSV (Comma-Separated Values) is the universal export format supported by all spreadsheet applications.
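For reference, a minimal CSV file looks like this; the first row becomes your field names, and the values are illustrative:

```csv
to,from,heading,body
Tove,Jani,Reminder,Don't forget me this weekend!
Bram,Nigel,Reminder,Don't forget the export feature
```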
Step-by-step:
Troubleshooting CSV imports:
JSON files store structured data and are common exports from APIs, databases, and web applications.
Step-by-step:
Supported JSON format:
```json
[
  {
    "to": "Tove",
    "from": "Jani",
    "heading": "Reminder",
    "body": "Don't forget me this weekend!"
  },
  {
    "to": "Bram",
    "from": "Nigel",
    "heading": "Reminder",
    "body": "Don't forget the export feature"
  }
]
```
JSON requirements: File must be an array of objects with consistent keys across all objects. Nested objects are flattened during import.
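As an illustration of what flattening means, consider an object that contains a nested object (a hypothetical input; the exact column names Baserow produces for nested keys may differ):

```json
[
  {
    "heading": "Reminder",
    "author": { "name": "Jani", "email": "jani@example.com" }
  }
]
```

On import, the nested author object is flattened rather than preserved as a sub-object, so keeping your source data flat, as in the earlier example, gives the most predictable results.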
XML files contain hierarchical data often exported from technical systems or enterprise applications.
Step-by-step:
Complex nested structures are flattened. For best results, use XML files with simple, tabular structures.
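As a sketch of a simple, tabular structure that imports cleanly, here is an XML equivalent of the JSON example above. The element names are illustrative; the idea is that each repeated child element (here, each note) corresponds to one row:

```xml
<notes>
  <note>
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
  </note>
  <note>
    <to>Bram</to>
    <from>Nigel</from>
    <heading>Reminder</heading>
    <body>Don't forget the export feature</body>
  </note>
</notes>
```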
All imported data starts as text fields. Convert them to appropriate types for better functionality.
If your imported data references other tables, set up relationships.
Customize how you work with imported data.
Start from scratch: Create blank tables when building custom structures without existing data.
Use templates: Browse templates for pre-built structures that match your use case.
Duplicate existing: Duplicate tables to reuse proven structures with or without data.
Data sync: Set up data sync for automatic updates from external sources.
If an import shows “pending” or “running” for an extended period, the cause could be a system timeout or a file processing issue. Jobs automatically fail and clear after 90-95 minutes. Wait for the timeout, then try again with a smaller file or a different format.
If information doesn’t align with headers correctly, it could be because of incorrect separator detection (CSV) or a malformed file structure. For CSV, manually select the correct column separator. For other formats, check that your source file is properly formatted before export.
If accented letters, symbols, or emojis show as gibberish, it could be because of an encoding mismatch between the source file and import settings. Try different encoding options (UTF-8 usually works best). If problems persist, save your source file with UTF-8 encoding before importing.
If you see an error message about exceeding 5,000 rows, your file contains more than the maximum allowed rows. Split it into multiple smaller files of under 5,000 rows each. Import them separately and merge if needed, or contact support for enterprise options.
If dates, numbers, or other formatted data import as plain text, it could be because all imports default to text fields for safety. This is expected behavior. After import, manually convert fields to appropriate types using the field edit menu. For automatic type detection, import into existing tables with pre-configured fields.
Creating via import generates a new table from your file with automatic structure detection; all fields start as text type. Importing into an existing table adds data to pre-configured fields with specific types, giving you better control over data validation and formatting. Use creation for new datasets, and existing-table imports for structured data entry.
No, import one file at a time. Each import creates a separate table. If you need data from multiple files in one table, either combine them before importing or import separately and use Link to table fields to connect them.
All imported data comes in as text fields to prevent data loss from incorrect type conversions. After import, convert fields to appropriate types (Number, Date, etc.) using the field edit menu. This gives you control over formatting and validation.
Split large files into multiple smaller files under 5,000 rows each. Import them as separate tables, then consolidate if needed. For regular large imports, consider enterprise plans with higher limits or use the Baserow API for programmatic data insertion.
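As a rough sketch of programmatic insertion with the Baserow API, the Python snippet below creates a single row using the requests library. The token, table ID, and field names are placeholders; verify the exact endpoint and authentication against the API documentation exposed by your Baserow instance.

```python
import requests

API_URL = "https://api.baserow.io/api/database/rows/table/{table_id}/"
TOKEN = "YOUR_DATABASE_TOKEN"  # placeholder: a database token from your Baserow settings
TABLE_ID = 1234                # placeholder: the target table's ID

def create_row(fields: dict) -> dict:
    """Insert one row; with user_field_names=true you can use your
    visible field names as keys instead of internal field IDs."""
    response = requests.post(
        API_URL.format(table_id=TABLE_ID),
        params={"user_field_names": "true"},
        headers={"Authorization": f"Token {TOKEN}"},
        json=fields,
    )
    response.raise_for_status()
    return response.json()

create_row({"Name": "Tove", "Notes": "Inserted via the API"})
```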
Yes, all import methods show a preview before creating the table. This lets you verify column separation, encoding, and data structure. If the preview looks wrong, adjust settings (separator, encoding) or cancel and fix your source file.
Now that you’ve imported your data, explore features that let you optimize your table, work with your data, and expand its functionality.
Still need help? If you’re looking for something else, please feel free to make recommendations or ask us questions; we’re ready to assist you.