How to Convert DBF to CSV (dBase, FoxPro, Visual FoxPro)

Step-by-step guide to converting DBF files to CSV. Covers dBase, FoxPro, and Visual FoxPro with solutions for encoding issues, memo fields, and large file handling.

You have a .dbf file and you need the data in CSV or Excel format. Maybe you inherited a FoxPro application from the 1990s, or you are migrating data out of a legacy accounting system, or you just received shapefiles from a GIS department and need the attribute table in a spreadsheet.

DBF files are one of the oldest database formats still in active use. They were created for dBase in 1983 and adopted by FoxPro, Visual FoxPro, Clipper, and dozens of other applications over the following decades. Despite their age, DBF files remain common in government agencies, GIS workflows, insurance systems, and any organization that never fully migrated away from DOS-era or early Windows software.

Converting DBF to CSV sounds straightforward, but there are pitfalls that can silently corrupt your data: character encoding mismatches that mangle accented characters, memo fields that disappear entirely, date formats that shift, and row limits that truncate large files without warning.

This guide covers four methods for converting DBF to CSV, from quick-and-easy to robust, and explains the common problems you need to watch for.

Why DBF-to-CSV Conversion Is Still Necessary

If DBF is such an old format, why are people still converting these files in 2026? Several reasons:

Legacy System Migrations: Companies running FoxPro or Clipper applications from the 1990s and 2000s are migrating to modern platforms. The data lives in .dbf files that need to be extracted into CSV, Excel, or SQL for import into the new system.

GIS and Mapping Data: The ESRI Shapefile format, still the most widely exchanged GIS data format, uses .dbf files to store attribute data. Anyone working with shapefiles who needs the tabular data in a spreadsheet is doing a DBF-to-CSV conversion.

Government and Regulatory Data: Many government agencies publish data in DBF format. Census data, voter registration files, property records, and environmental datasets are frequently distributed as .dbf files.

Accounting and ERP Archives: Legacy accounting software (Sage, Peachtree, MYOB older versions) stored data in dBase-format files. When these systems are decommissioned, the historical data still needs to be accessible.

Insurance and Healthcare Legacy Systems: Claim processing systems built on FoxPro or dBase are still running in some organizations, and their data needs to feed into modern analytics and reporting tools.

Method 1: Using Microsoft Excel

Excel can open DBF files directly and save them as CSV. This is the quickest approach for small, simple files.

Step 1: Open Microsoft Excel.

Step 2: Go to File > Open and change the file type filter to “All Files” or “dBase Files.”

Step 3: Navigate to your .dbf file and open it. Excel will load the data into a worksheet.

Step 4: Review the data for obvious problems (garbled characters, missing columns, incorrect dates).

Step 5: Go to File > Save As, choose “CSV (Comma delimited)” as the format, and save.

Pros:

  • No additional software to install
  • Familiar interface for most business users
  • Quick for small files

Cons:

  • Row limit: Excel supports a maximum of 1,048,576 rows. If your DBF file has more records, Excel silently truncates the data without any warning. You will not know records are missing unless you compare row counts.
  • Encoding problems: Excel often misinterprets the character encoding of DBF files, especially those created on DOS systems (CP437 or CP850). Accented characters, umlauts, and currency symbols may display as garbage characters.
  • No memo field support: Excel cannot read .fpt or .dbt memo files. Any memo fields in your DBF will appear as empty cells or show meaningless reference numbers.
  • Date format assumptions: Excel may reformat dates based on your system locale rather than the format stored in the DBF file. A date stored as 2024-03-15 might become 3/15/2024 or 15/03/2024 depending on your regional settings, and this reformatting can introduce ambiguity.
  • Newer Excel versions may drop support: Microsoft has been gradually reducing DBF support in recent Excel versions. Some Office 365 builds cannot open .dbf files at all.

Best for: Quick, one-off conversions of small DBF files (under 100,000 rows) with ASCII-only data and no memo fields.

Method 2: Using LibreOffice Calc

LibreOffice Calc handles DBF files better than Excel in several important ways, particularly around character encoding.

Step 1: Download and install LibreOffice (free, open-source) from libreoffice.org if you don’t already have it.

Step 2: Open LibreOffice Calc.

Step 3: Go to File > Open, and select your .dbf file. LibreOffice will display an import dialog.

Step 4: In the import dialog, you can select the character encoding. This is a significant advantage over Excel. If your DBF file was created on a DOS system, try CP850 or CP437. For Windows-based FoxPro or dBase applications, try CP1252. For more modern files, try UTF-8.

Step 5: Review the data preview in the import dialog. If characters look wrong, change the encoding and preview again.

Step 6: Click OK to import. Review the full dataset for accuracy.

Step 7: Save as CSV via File > Save As > select “Text CSV (.csv)” format. In the export dialog, choose UTF-8 as the output encoding to ensure maximum compatibility.

Pros:

  • Free and open-source
  • Encoding selection dialog lets you try different encodings before committing
  • Cross-platform (Windows, Mac, Linux)
  • Better DBF format support than recent Excel versions
  • Can output CSV in UTF-8 regardless of source encoding

Cons:

  • Still no memo field support: Like Excel, LibreOffice does not read .fpt or .dbt memo files. Memo fields will be empty.
  • Row limit for editing: LibreOffice Calc shares Excel's default limit of 1,048,576 rows (recent versions offer an experimental 16-million-row mode), and performance degrades significantly well before that. Files over 500,000 rows may cause freezing or crashes.
  • Date handling quirks: LibreOffice may interpret date fields differently depending on the DBF version. dBase III dates (stored as YYYYMMDD strings) are usually fine, but FoxPro DateTime fields can lose time components.
  • No batch conversion: You must open and save each file individually. If you have dozens of .dbf files, this becomes tedious.

Best for: Small to medium DBF files where character encoding is a concern but memo fields are not needed.

Method 3: Command-Line Tools (Python, ogr2ogr)

For developers, data engineers, or anyone comfortable with the command line, scripting tools offer the most control over the conversion process.

Option 3A: Python with dbfread

The dbfread library is the most popular Python tool for reading DBF files. It supports dBase III, dBase IV, FoxPro, and Visual FoxPro formats.

import csv
from dbfread import DBF

# Open DBF file with explicit encoding
table = DBF('customers.dbf', encoding='cp1252')

# Write to CSV
with open('customers.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(table.field_names)  # Header row
    for record in table:
        writer.writerow(record.values())

To handle memo fields, ensure the .fpt or .dbt file is in the same directory as the .dbf file. The library reads it automatically.

To handle encoding detection, you can try multiple encodings programmatically:

from dbfread import DBF

encodings = ['utf-8', 'cp1252', 'cp850', 'cp437', 'iso-8859-1']

for enc in encodings:
    try:
        table = DBF('data.dbf', encoding=enc)
        # Force decoding of the first record; dbfread raises
        # UnicodeDecodeError lazily, during iteration
        next(iter(table), None)
        print(f"Working encoding: {enc}")
        break
    except UnicodeDecodeError:
        continue

Note that single-byte encodings such as cp850 can decode almost any byte without raising an error even when they are the wrong choice, so a successful read proves little on its own; always inspect the decoded text visually.

Pros:

  • Full control over encoding, output format, and data transformation
  • Handles memo fields if .fpt/.dbt files are present
  • No row limits (processes records as a stream)
  • Scriptable for batch conversion of many files
  • Free and open-source

Cons:

  • Requires Python installation and basic programming knowledge
  • You need to handle edge cases (null values, binary data, deleted records) in your code
  • No GUI, which makes it inaccessible for non-technical users

Option 3B: ogr2ogr (for GIS/Shapefile data)

If your DBF file is part of a shapefile or comes from a GIS workflow, ogr2ogr from the GDAL/OGR library is purpose-built for this conversion.

ogr2ogr -f "CSV" output.csv input.dbf

For encoding control:

ogr2ogr -f "CSV" output.csv input.dbf -lco SEPARATOR=COMMA -oo ENCODING=CP1252

Pros:

  • Designed specifically for GIS data formats
  • Handles spatial reference information if present
  • Widely available on GIS workstations
  • Can convert between many formats (not just CSV)

Cons:

  • Requires GDAL installation, which can be complex on Windows
  • Overkill if your DBF file is not GIS-related
  • Less intuitive syntax than Python for non-GIS users

Best for: Developers and data engineers who need automation, batch processing, or full control over the conversion pipeline.

Method 4: Online Converters (DBRescue)

Online conversion services let you upload a .dbf file and download the CSV output without installing anything. This is the most practical option when you need reliable handling of encoding and memo fields without writing code.

Step 1: Navigate to dbrescue.xyz in your web browser.

Step 2: Upload your .dbf file. If your file has memo fields, upload the .fpt or .dbt file as well (upload both files together).

Step 3: DBRescue automatically detects the DBF version (dBase III, dBase IV, FoxPro, Visual FoxPro), identifies the character encoding, and locates associated memo files.

Step 4: Review the extraction preview: table structure, field types, record count, and detected encoding.

Step 5: Download your data as CSV (one file per table, UTF-8 encoded for maximum compatibility).

Pros:

  • No software installation required
  • Works on any operating system (Windows, Mac, Linux, even mobile)
  • Automatic encoding detection and conversion to UTF-8
  • Reads memo fields from .fpt and .dbt files
  • Handles all DBF variants (dBase III/IV, FoxPro, Visual FoxPro)
  • No row limits (server-side processing)
  • Handles multi-file FoxPro databases
  • Free for files under 50MB

Cons:

  • Requires internet connection
  • Upload time depends on file size and connection speed
  • Files containing highly sensitive data may require evaluation against your security policies before uploading

Success Rate: 95%+ for standard DBF files across all variants.

Cost: Free for files under 50MB. Larger files: $9.99 for standard processing.

Best for: Anyone who needs a reliable conversion without installing software or writing code, especially when encoding or memo fields are involved.

Common Pitfalls When Converting DBF to CSV

Converting DBF to CSV is not just a format change. Several technical issues can silently corrupt your data during conversion. Understanding these pitfalls will help you verify your output and choose the right tool.

Pitfall 1: Character Encoding Mismatch

This is the single most common problem with DBF conversions, and it is often invisible until someone notices garbled text weeks later.

DBF files were created in an era before Unicode became standard. Different systems used different code pages to represent characters beyond basic ASCII:

  • CP437 (DOS United States): Used by early dBase and Clipper applications on DOS. Maps box-drawing characters and some accented letters.
  • CP850 (DOS Western European): Common in European DOS installations. Covers most Western European accented characters but maps them to different byte values than CP1252.
  • CP1252 (Windows Western European): Used by FoxPro and dBase applications running on Windows. Similar to ISO-8859-1 but with extra characters in the 0x80-0x9F range.
  • CP866 (DOS Cyrillic): Used by Russian and Eastern European DOS applications.
  • UTF-8: Only used by the most modern applications. Rare in legacy DBF files.

The problem occurs when your conversion tool assumes one encoding but the file uses another. For example, the German character ü ("u with umlaut") is byte 0x81 in CP850 but byte 0xFC in CP1252: the same letter, stored as different byte values. A tool that reads a CP850 file as CP1252 turns every ü into a different character entirely, and a tool that reads it as UTF-8 hits an invalid byte sequence, which may be replaced with a question mark or a replacement character, or dropped entirely.

How to detect encoding problems: After conversion, search the CSV for unexpected characters: question marks where accented letters should be, diamond symbols with question marks inside them (the Unicode replacement character), or two-character sequences like “Ã©” or “Ã¼” in place of single accented letters (which indicate UTF-8 bytes being read as CP1252).

How to fix encoding problems: Determine the original encoding (check the DBF header byte at offset 29, which stores a code page identifier, or ask the application vendor), then re-convert with the correct encoding specified.
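You can read that header byte yourself with a few lines of standard-library Python. The mapping below is a deliberately partial sketch covering only common language driver IDs; many more exist, and a zero or unlisted byte should be treated as unknown rather than guessed.

```python
# Partial map of the DBF "language driver ID" (header offset 29) to code pages.
# Assumption: these are the commonly seen IDs; treat anything else as unknown.
LDID_TO_CODEPAGE = {
    0x01: 'cp437',   # DOS USA
    0x02: 'cp850',   # DOS Western Europe
    0x03: 'cp1252',  # Windows ANSI (Western Europe)
    0x64: 'cp852',   # DOS Central Europe
    0x65: 'cp866',   # DOS Cyrillic
    0xC8: 'cp1250',  # Windows Central Europe
    0xC9: 'cp1251',  # Windows Cyrillic
}


def detect_dbf_encoding(path):
    """Return the code page hinted by the DBF header, or None if unknown."""
    with open(path, 'rb') as f:
        header = f.read(32)
    if len(header) < 32:
        return None  # File too short to be a valid DBF
    return LDID_TO_CODEPAGE.get(header[29])
```

If this returns None, fall back to trying candidate encodings and inspecting the output, as shown in the Python method above.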

Pitfall 2: Missing Memo Field Data

Memo fields are one of the most valuable parts of many DBF databases, and they are the most commonly lost during conversion.

In the DBF format, character fields are limited to 254 bytes. To store longer text (descriptions, notes, comments, addresses, document content), applications use memo fields. The memo data is stored in a separate companion file:

  • .fpt files for FoxPro and Visual FoxPro memo fields
  • .dbt files for dBase III and dBase IV memo fields

The .dbf file itself only stores a pointer (a block number) referencing the location in the memo file. If you convert the .dbf file without the .fpt or .dbt file present, memo fields will appear as empty values or meaningless integers.

How this data gets lost: Users copy or email only the .dbf file, leaving the .fpt/.dbt file behind. Or they zip only the .dbf file. Or the conversion tool doesn’t support memo files at all (Excel, LibreOffice).

How to prevent it: Always keep .dbf, .fpt (or .dbt), and .cdx (index) files together. When sharing or transferring DBF files, zip the entire directory. Before converting, check your directory listing for companion files with the same base name.
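A quick way to run that companion-file check before converting is to scan for known sibling extensions. This standard-library sketch assumes the common dBase/FoxPro extension set; extend the list if your application uses others.

```python
from pathlib import Path

# Common companion extensions: memo files first, then index files
COMPANION_EXTS = ['.fpt', '.dbt', '.cdx', '.mdx', '.idx']


def find_companions(dbf_path):
    """Return companion files (memo, index) sharing the DBF's base name."""
    dbf_path = Path(dbf_path)
    companions = []
    for ext in COMPANION_EXTS:
        # Check both lower- and upper-case variants (common on DOS-era files);
        # break after the first hit so case-insensitive filesystems don't
        # report the same file twice
        for candidate in (dbf_path.with_suffix(ext), dbf_path.with_suffix(ext.upper())):
            if candidate.exists():
                companions.append(candidate)
                break
    return companions
```

If a table has memo fields but `find_companions` returns no .fpt or .dbt file, stop and locate the memo file before converting; the memo content cannot be recovered from the .dbf alone.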

Pitfall 3: Date Format Inconsistencies

DBF files store dates in a specific internal format (YYYYMMDD as an 8-character string for dBase, binary timestamps for FoxPro DateTime fields). During conversion, tools must interpret this format and write it to CSV in some human-readable form.

Problems arise when:

  • The conversion tool applies locale-specific formatting: A date stored as 20240315 might be written as “3/15/2024” (US format) or “15/03/2024” (European format) depending on your system locale. If the CSV is then opened on a system with a different locale, dates become ambiguous. Is “01/02/2024” January 2nd or February 1st?
  • FoxPro DateTime fields lose time components: Some tools convert DateTime fields to date-only format, discarding the time portion. If your application recorded timestamps (order times, log entries), this data loss may be significant.
  • Null dates become placeholder values: Empty date fields in DBF may convert to “00/00/0000”, “12/30/1899” (the OLE Automation epoch), or other placeholder dates rather than empty strings.

Best practice: When converting dates, use ISO 8601 format (YYYY-MM-DD) in the CSV output. This format is unambiguous regardless of locale.
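With dbfread, date fields arrive as Python `datetime.date` and `datetime.datetime` objects, so emitting ISO 8601 is mostly a matter of formatting consistently and handling empty values. A minimal sketch of such a normalizer:

```python
import datetime


def to_iso(value):
    """Normalize a DBF date/datetime value for CSV output.

    Empty (None) dates become empty strings instead of placeholder
    dates; datetimes keep their time component."""
    if value is None:
        return ''
    # Check datetime before date: datetime is a subclass of date
    if isinstance(value, datetime.datetime):
        return value.isoformat(sep=' ')
    if isinstance(value, datetime.date):
        return value.isoformat()
    return value  # Non-date values pass through unchanged
```

Apply it to each record's values before writing the CSV row, and the output stays unambiguous regardless of the locale it is later opened in.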

Pitfall 4: Large File Truncation

DBF files can be very large. A FoxPro table with 10 million records and 50 fields is not unusual in production systems. GUI tools often cannot handle these files.

Excel: Hard limit of 1,048,576 rows. Files with more rows are silently truncated. No error, no warning. You will only discover the problem if you compare the row count in the CSV to the record count in the original DBF.

LibreOffice: Shares the same default 1,048,576-row limit and becomes unusably slow or crashes with files over 500,000-1,000,000 rows.

Memory-based tools: Some converters load the entire file into memory before writing CSV. A 2GB DBF file will require several gigabytes of RAM and may cause out-of-memory errors.

Solution: For large files, use streaming tools (Python dbfread processes records one at a time without loading the entire file) or server-side services (DBRescue processes files on dedicated infrastructure without client-side limitations).
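You can verify that no records were dropped without any third-party library: the DBF header stores the total record count as a little-endian 32-bit integer at bytes 4-7. The sketch below compares that count to the data rows in the output CSV; note the header count includes soft-deleted records, so a small mismatch may be deletion flags rather than truncation.

```python
import csv
import struct


def dbf_record_count(dbf_path):
    """Read the record count from bytes 4-7 of the DBF header."""
    with open(dbf_path, 'rb') as f:
        header = f.read(8)
    return struct.unpack('<I', header[4:8])[0]


def csv_data_rows(csv_path):
    """Count data rows in a CSV file, excluding the header row."""
    with open(csv_path, newline='', encoding='utf-8') as f:
        return sum(1 for _ in csv.reader(f)) - 1
```

If `csv_data_rows` comes back far below `dbf_record_count`, the conversion truncated the file and you should rerun it with a streaming tool.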

Pitfall 5: Multi-File FoxPro Databases (.dbc Containers)

Visual FoxPro introduced the database container (.dbc), which groups multiple .dbf tables into a single logical database with shared relationships, stored procedures, and metadata. A FoxPro .dbc database typically consists of:

  • database.dbc: The container file (itself a DBF file) storing metadata
  • database.dct: Memo file for the container
  • database.dcx: Index file for the container
  • table1.dbf, table2.dbf, …: The actual data tables
  • table1.fpt, table2.fpt, …: Memo files for each table
  • table1.cdx, table2.cdx, …: Index files for each table

If you only convert individual .dbf files, you lose the relationships between tables and any stored procedures defined at the database level.

Solution: Identify all files belonging to the database (look for the .dbc file and all .dbf files in the same directory), convert each table to CSV, and document the relationships separately. Tools like DBRescue can process multi-file uploads and preserve the table relationship metadata.
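Before converting a Visual FoxPro database, it helps to inventory which files in the directory belong together. This standard-library sketch groups files by base name so you can spot tables missing their memo or index companions; the extension set is an assumption based on the file types listed above.

```python
from collections import defaultdict
from pathlib import Path


def group_database_files(directory):
    """Group DBF-related files by base name, e.g. {'table1': ['.dbf', '.fpt']}."""
    relevant = {'.dbf', '.fpt', '.dbt', '.cdx', '.dbc', '.dct', '.dcx'}
    groups = defaultdict(set)
    for path in Path(directory).iterdir():
        ext = path.suffix.lower()
        if ext in relevant:
            groups[path.stem.lower()].add(ext)
    return {name: sorted(exts) for name, exts in groups.items()}
```

A table whose group contains '.dbf' but no '.fpt' or '.dbt' either has no memo fields or is missing its memo file; check before converting.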

Pitfall 6: Deleted Records Still Present

DBF files have a deletion flag mechanism. When a record is “deleted” in a dBase or FoxPro application, it is not actually removed from the file. Instead, a deletion flag byte is set on that record. The record remains in the file until the database is “packed” (compacted).

Some conversion tools include these deleted records in the CSV output. Others skip them. The behavior varies and is not always documented.

How to check: Compare the record count in the converted CSV to the expected active record count. If the CSV has significantly more rows than expected, it likely includes deleted records.

Best practice: Use a tool that explicitly handles deleted records and either filters them out or marks them with a flag column so you can decide what to keep.
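You can count the soft-deleted records in a DBF file with the standard library alone: the header stores the record count at bytes 4-7, the header length at bytes 8-9, and the record length at bytes 10-11, and the first byte of each record is `*` (0x2A) when the record is flagged deleted, a space (0x20) when it is active. A sketch:

```python
import struct


def count_records(dbf_path):
    """Return (active, deleted) record counts by scanning deletion flags."""
    with open(dbf_path, 'rb') as f:
        header = f.read(32)
        num_records, header_len, record_len = struct.unpack('<LHH', header[4:12])
        f.seek(header_len)  # Records start immediately after the header
        active = deleted = 0
        for _ in range(num_records):
            record = f.read(record_len)
            if len(record) < record_len:
                break  # Truncated file; stop scanning
            if record[0:1] == b'*':
                deleted += 1
            else:
                active += 1
    return active, deleted
```

If you need the contents of deleted records rather than just a count, dbfread exposes them separately through the table's `deleted` iterator.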

Comparison: Which Method Should You Use?

| Feature | Excel | LibreOffice | Python/CLI | DBRescue |
| --- | --- | --- | --- | --- |
| Encoding selection | No | Yes | Yes | Auto-detect |
| Memo field support | No | No | Yes (with file) | Yes (with file) |
| Max rows | ~1M | ~1M practical | Unlimited | Unlimited |
| Batch conversion | No | No | Yes | Yes |
| Installation required | Yes (Office) | Yes (free) | Yes (Python) | No |
| Technical skill needed | Low | Low | High | Low |
| Cost | Office license | Free | Free | Free under 50MB |

Frequently Asked Questions

What is a DBF file?

A DBF file is a database table format originally created for dBase in the 1980s. It was adopted by FoxPro, Visual FoxPro, Clipper, and many other applications. DBF files are still widely used in GIS software (shapefiles use DBF for attribute data), legacy accounting systems, and government databases. Each .dbf file contains one table of structured data.

Why do special characters look wrong after converting DBF to CSV?

DBF files use legacy character encodings like CP437, CP850, or CP1252 rather than modern UTF-8. If your conversion tool assumes the wrong encoding, characters like accented letters, currency symbols, and umlauts will display as garbled text. The fix is to specify the correct encoding during conversion or use a tool that auto-detects the encoding from the DBF file header.

What are .fpt and .dbt files, and do I need them?

.fpt (FoxPro) and .dbt (dBase) files store memo field data – long text content that exceeds the 254-character limit of standard DBF character fields. If your DBF file has memo fields and you convert without the .fpt or .dbt file present, those fields will appear empty or show only a placeholder. Always keep memo files in the same directory as the .dbf file during conversion.

Can I convert a DBF file with millions of rows to CSV?

Yes, but not all tools handle large files well. Excel is limited to about 1,048,576 rows and will silently truncate larger files. LibreOffice has similar practical limits. For large DBF files, use command-line tools like Python’s dbfread library or an online service like DBRescue that processes files server-side without row limits.

How do I convert multiple DBF files from a FoxPro database at once?

FoxPro databases (.dbc containers) consist of multiple .dbf files plus a .dbc catalog file that tracks relationships and metadata. To convert them all, you need to process each .dbf file individually. Tools like DBRescue can handle multi-file uploads and will extract all tables from a FoxPro database in a single operation.

Key Takeaways

  • DBF-to-CSV conversion is still common due to legacy system migrations, GIS data workflows, and government data distribution.
  • Excel is the quickest method but has serious limitations: no encoding control, no memo fields, and a hard row limit of about 1 million rows.
  • LibreOffice is better for encoding issues (it lets you choose the code page) but still cannot read memo fields.
  • Python and command-line tools give you full control and are best for batch processing or automation, but require technical skill.
  • Character encoding is the most common source of data corruption during conversion. Always verify that accented characters and special symbols display correctly after conversion.
  • Memo fields (.fpt and .dbt files) are frequently lost because users transfer only the .dbf file. Always keep companion files together.
  • For large files, avoid GUI tools that load everything into memory. Use streaming tools or server-side processing.

Convert Your DBF Files Without the Headaches

Need to convert DBF files to CSV but worried about encoding issues, missing memo fields, or large file handling? DBRescue handles it all automatically.

Upload your .dbf file (and any .fpt or .dbt memo files), and we will detect the encoding, extract memo field content, and deliver clean UTF-8 CSV files. Works with dBase III, dBase IV, FoxPro, and Visual FoxPro files of any size.

Try Free DBF Extraction

Free for files under 50MB. Larger files get upfront, transparent pricing. Most conversions complete within minutes.