bin/invoiceplane-db-import.sh is a staged database import, recovery, and reconciliation tool for InvoicePlane.
This project intentionally does not support direct database imports.
Instead, it uses a reconcile-only model.
Direct SQL imports assume:
- the source schema matches the destination schema
- the application version is identical
- all required columns and tables exist
In practice, these assumptions are often false.
This leads to:
- failed imports
- missing data
- corrupted state
- silent inconsistencies
This script avoids those problems by:
- importing the dump into a temporary database
- using the InvoicePlane installer to create a correct, current schema
- comparing temp data against the live schema
- applying controlled merge strategies
- skipping unsafe or runtime-managed tables
The key idea: the destination schema is always created fresh by the installer.
This means:
- the schema always matches the running application
- old data is adapted into the new structure
- schema differences are handled safely
Reconcile mode:
- works across InvoicePlane versions
- tolerates schema changes
- avoids destructive overwrites
- produces consistent results
This script is reconcile-only by design.
Direct import mode is intentionally not supported, because it is less reliable and more dangerous.
This tool exists to safely recover older InvoicePlane data into a current Docker deployment without assuming that every table can be blindly copied.
It is designed to:
- optionally back up the current live database (live mode only)
- import an old dump into a temporary database
- analyze shared tables and schema differences
- assign a strategy per table
- execute only supported strategies
- verify results based on strategy
- produce clear, auditable reports
It prompts before applying any live changes.
Usage:

```bash
bin/invoiceplane-db-import.sh --yes --dump /path/to/file.sql
bin/invoiceplane-db-import.sh --dry-run --dump /path/to/file.sql
```

Dry run will:
- recreate the temp DB
- import the dump into temp
- analyze schemas and classify strategies
- generate full reports
- NOT modify the live database
- NOT create a live DB backup
Expected output includes:
```
Mode : DRY RUN
DRY RUN: skipping live DB backup
DRY RUN: no live changes applied
```
Flags:
- `--dump <file>` - path to SQL dump
- `--yes` - non-interactive execution
- `--dry-run` - analysis only, no live mutation
- `--help` - show usage
Checks for:
- repo root
- `.env`
- docker / docker compose
- python3
- database container availability
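These checks amount to ordinary shell probes. A minimal sketch of what they reduce to (the `db` service name is illustrative, not necessarily the script's actual identifier):

```bash
# Minimal preflight sketch; "db" is an illustrative Compose service name.
[ -f .env ] || { echo "missing .env" >&2; exit 1; }
command -v python3 >/dev/null 2>&1 || { echo "python3 not found" >&2; exit 1; }
docker compose version >/dev/null 2>&1 || { echo "docker compose not available" >&2; exit 1; }
docker compose ps --status running --services | grep -qx db \
  || { echo "database container is not running" >&2; exit 1; }
```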
In live execution mode:
- creates a full backup of the current database
- stored under `.backup/` (ignored by Git)
In dry run:
- backup is intentionally skipped
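Conceptually, the backup step is a plain logical dump. A sketch assuming a Compose service named `db`, `MYSQL_ROOT_PASSWORD` exported from `.env` on the host, and a live database called `invoiceplane` (all names illustrative):

```bash
# Illustrative: dump the live DB before any mutation.
mkdir -p .backup
docker compose exec -T db \
  mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" invoiceplane \
  > ".backup/live-$(date +%Y%m%d-%H%M%S).sql"
```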
Temp import:
- drops and recreates the temp database
- imports the dump without modification
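The temp side reduces to two commands. A sketch under the same illustrative names, with `invoiceplane_tmp` standing in for the temp database:

```bash
# Illustrative: reset the temp database, then stream the dump in unmodified.
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" \
  -e "DROP DATABASE IF EXISTS invoiceplane_tmp; CREATE DATABASE invoiceplane_tmp;"
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" invoiceplane_tmp \
  < /path/to/file.sql
```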
Analysis:
- discovers shared tables
- compares schema structure
- classifies compatibility
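Shared-table discovery is essentially an information_schema lookup. A minimal sketch (database names are illustrative):

```bash
# Illustrative: list tables present in both the temp and the live schema.
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
  SELECT t.TABLE_NAME
  FROM information_schema.TABLES t
  JOIN information_schema.TABLES l
    ON l.TABLE_NAME = t.TABLE_NAME
   AND l.TABLE_SCHEMA = 'invoiceplane'
  WHERE t.TABLE_SCHEMA = 'invoiceplane_tmp';"
```

Column-level comparison works the same way against `information_schema.COLUMNS`.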
Each table is assigned a declared strategy.
Typical strategies:
- `REPLACE_EXACT`
- `REPLACE_MAPPED`
- `MERGE_BY_PK`
- `SPECIAL_SETTINGS`
- `SPECIAL_CUSTOM_VALUES`
- `SKIP`
- `MANUAL_REVIEW`
Execution rules:
- only supported strategies are executed
- unsafe tables are not forced
- critical failures are flagged explicitly
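For intuition only: a `MERGE_BY_PK` step can be thought of as an insert that leaves existing live rows untouched on primary-key conflict. This sketch assumes identical column sets (which the real analysis verifies first) and the illustrative database names above:

```bash
# Illustrative: merge by primary key, keeping existing live rows on conflict.
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
  INSERT IGNORE INTO invoiceplane.ip_clients
  SELECT * FROM invoiceplane_tmp.ip_clients;"
```

Which side wins on conflict is a policy decision; `INSERT IGNORE` as shown keeps the live row.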
Verification depends on strategy:
- replace → row count match
- merge → primary key coverage
- settings → key/value validation
- custom → owner/field coverage
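For the replace strategies, verification reduces to a direct row-count comparison. A sketch:

```bash
# Illustrative: compare row counts between temp and live after a replace.
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
  SELECT
    (SELECT COUNT(*) FROM invoiceplane_tmp.ip_invoices) AS temp_rows,
    (SELECT COUNT(*) FROM invoiceplane.ip_invoices)     AS live_rows;"
```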
Reports include:
- planned strategy
- attempted methods
- winning method
- verification mode
- final status
- row counts before/after
- notes
These tables are treated as a core recovery cluster:
- `ip_clients`
- `ip_products`
- `ip_invoices`
- `ip_invoice_items`
- `ip_invoice_amounts`
- `ip_invoice_item_amounts`
- `ip_payments`
- `ip_payment_methods`
These represent the business-critical invoice and payment model.
Settings (`SPECIAL_SETTINGS`):
- merged by `setting_key`
- not blindly replaced

Custom values (`SPECIAL_CUSTOM_VALUES`):
- require definition-aware handling
- orphan/null values are filtered or reported
- merged by `(owner_id, field_id)`
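A sketch of what a `setting_key` merge can look like, assuming a unique index on `setting_key`, an assumed `setting_value` column, and a policy where imported values win on conflict (all of these are illustrative choices, not a statement of the script's exact behavior):

```bash
# Illustrative: upsert settings by key; imported values win on conflict here.
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "
  INSERT INTO invoiceplane.ip_settings (setting_key, setting_value)
  SELECT setting_key, setting_value FROM invoiceplane_tmp.ip_settings
  ON DUPLICATE KEY UPDATE setting_value = VALUES(setting_value);"
```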
Tables such as:
- users
- sessions
- logs
- import/version tracking
are intentionally skipped or handled separately.
In live execution mode:
- a full database backup is created before any changes
- rollback = restore that backup
Dry run:
- no backup is created
- no rollback is required
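Restoring is the inverse of the backup step. A sketch with a hypothetical backup file name:

```bash
# Illustrative: restore the pre-change backup (hypothetical file name).
docker compose exec -T db mysql -uroot -p"$MYSQL_ROOT_PASSWORD" invoiceplane \
  < .backup/live-20240101-120000.sql
```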
This tool is intentionally conservative.
It prioritizes:
- safety
- transparency
- auditability
It does not:
- blindly force schema compatibility
- silently drop ambiguous data
- pretend success without verification
If a table cannot be handled safely, it will be reported for manual review.