Managing Data

Atlas stores all tracking data in a local SQLite database. This page covers tools for maintaining, exporting, and cleaning up that data.

Forgetting Files

The forget command permanently deletes all history for an entity. This is a true deletion — not a soft delete.

Forget a Single Entity

Without --confirm, Atlas shows what would be deleted:

atlas forget a1b2c3d4-e5f6-47a8-9f0b-1c2d3e4f5a6b

Confirm the deletion:

atlas forget a1b2c3d4-e5f6-47a8-9f0b-1c2d3e4f5a6b --confirm

Forget by Pattern

Delete all entities whose current path matches a glob pattern:

atlas forget --pattern "*.log"

Without --confirm, Atlas shows the blast radius:

Would permanently forget 47 entities matching "*.log":

  23 active, 18 deleted, 6 disconnected

This action is irreversible.
To confirm: atlas forget --pattern "*.log" --confirm

Confirm the mass deletion:

atlas forget --pattern "*.log" --confirm

Pattern matching uses the same engine as ignore patterns, so the glob syntax and matching behavior are identical. This pairs well with the ignore system: add a pattern to stop tracking new files, then forget existing matches.
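The matching behavior can be approximated with Python's fnmatch module. This is a sketch of glob semantics in general, not a statement about Atlas's engine, which may differ in details such as case sensitivity:

```python
from fnmatch import fnmatch

# Hypothetical current paths; "*.log" is matched against each name.
paths = ["app.log", "debug.log", "notes.txt"]
matches = [p for p in paths if fnmatch(p, "*.log")]
print(matches)  # ['app.log', 'debug.log']
```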

What Gets Deleted

For each forgotten entity, Atlas removes:

  • All hash history entries
  • All path history entries
  • All name history entries
  • All edges (in both directions)
  • All file traits
  • The current state record

A minimal audit entry is kept in the forget_log (entity ID and timestamp only — no content or metadata).

This is irreversible. There is no undo.
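Conceptually, a forget is a single transaction across all of these tables. The sketch below uses Python's sqlite3 with a minimal toy schema; the table names come from this page, but every column name (entity_id, src_id, dst_id, and so on) is an assumption, not Atlas's real schema:

```python
import sqlite3

# Toy schema for illustration; real Atlas tables have more columns.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hash_history  (entity_id TEXT, hash TEXT);
CREATE TABLE path_history  (entity_id TEXT, path TEXT);
CREATE TABLE name_history  (entity_id TEXT, name TEXT);
CREATE TABLE edges         (src_id TEXT, dst_id TEXT);
CREATE TABLE file_traits   (entity_id TEXT, trait TEXT);
CREATE TABLE current_state (entity_id TEXT PRIMARY KEY, path TEXT);
CREATE TABLE forget_log    (entity_id TEXT, forgotten_at TEXT);
""")

def forget(conn, entity_id):
    """Delete all history for one entity, leaving only an audit stub."""
    with conn:  # one transaction: all-or-nothing
        for table in ("hash_history", "path_history",
                      "name_history", "file_traits"):
            conn.execute(f"DELETE FROM {table} WHERE entity_id = ?",
                         (entity_id,))
        # Edges are removed in both directions.
        conn.execute("DELETE FROM edges WHERE src_id = ? OR dst_id = ?",
                     (entity_id, entity_id))
        conn.execute("DELETE FROM current_state WHERE entity_id = ?",
                     (entity_id,))
        # Audit stub: entity ID and timestamp only.
        conn.execute("INSERT INTO forget_log VALUES (?, datetime('now'))",
                     (entity_id,))

db.execute("INSERT INTO current_state VALUES ('e1', '/tmp/app.log')")
db.execute("INSERT INTO hash_history VALUES ('e1', 'abc')")
forget(db, "e1")
```

Running everything inside one transaction means a crash mid-forget leaves either all rows or none, never a half-deleted entity.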

Scanning for Edges

Re-scan tracked files to discover or update structural references (edges):

atlas scan

Limit to a specific path:

atlas scan ~/projects/myapp
atlas scan ~/projects/myapp/index.html

See Edges & Lineage for more on what scanning discovers.
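What counts as a structural reference depends on file type. For the index.html example above, href and src attributes are the obvious candidates; a hedged sketch of that idea using Python's html.parser (Atlas's actual scanner rules are not documented here):

```python
from html.parser import HTMLParser

class RefExtractor(HTMLParser):
    """Collect href/src attribute values as candidate edges."""
    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.refs.append(value)

html = '<a href="style.css"></a><img src="logo.png">'
parser = RefExtractor()
parser.feed(html)
print(parser.refs)  # ['style.css', 'logo.png']
```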

Rebuilding State

The current_state table is a denormalized cache for fast queries. If it ever gets out of sync, rebuild it from the raw history:

atlas rebuild

This is a safe, idempotent operation — it reads the append-only history tables and reconstructs the current state of every entity.
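The replay idea can be sketched in a few lines: walk the append-only history in order and let the latest entry per entity win. This is the general technique, with an invented in-memory history, not Atlas's schema:

```python
# Append-only history rows: (entity_id, path, sequence_number).
path_history = [
    ("e1", "/old/app.py", 1),
    ("e1", "/new/app.py", 2),
    ("e2", "/notes.txt", 1),
]

current = {}
for entity_id, path, seq in sorted(path_history, key=lambda r: r[2]):
    current[entity_id] = path  # later entries overwrite earlier ones

print(current)  # {'e1': '/new/app.py', 'e2': '/notes.txt'}
```

Replaying the same history always produces the same result, which is what makes the rebuild idempotent.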

Verifying Integrity

Check the database for inconsistencies:

atlas verify

This looks for:

  • Orphaned history records (entries without a parent entity)
  • Inconsistencies between current_state and raw history
  • Foreign key violations
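The orphan check, at least, is a standard LEFT JOIN. A toy sqlite3 example with an assumed two-table schema, where one history row references a missing entity:

```python
import sqlite3

# Toy schema: 'ghost' has history but no parent entity row.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entities (id TEXT PRIMARY KEY);
CREATE TABLE hash_history (entity_id TEXT, hash TEXT);
INSERT INTO entities VALUES ('e1');
INSERT INTO hash_history VALUES ('e1', 'aaa'), ('ghost', 'bbb');
""")

# History rows whose entity_id matches no entity are orphans.
orphans = db.execute("""
    SELECT h.entity_id FROM hash_history h
    LEFT JOIN entities e ON e.id = h.entity_id
    WHERE e.id IS NULL
""").fetchall()
print(orphans)  # [('ghost',)]
```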

Exporting Data

Export all history as JSON Lines (one JSON object per line):

atlas export

Export a specific table:

atlas export --table entities
atlas export --table hash_history
atlas export --table edges

Available tables: entities, hash_history, path_history, name_history, current_state, edges, file_traits.

The first line is a version header for forward compatibility:

{"atlas_export_version":1,"schema_version":14}

Subsequent lines include a _table field indicating their source:

{"id":"abc123...","created_at":"2025-03-10T14:30:00+00:00","_table":"entities"}
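A consumer of this format would check the version header, then bucket records by their _table field. A minimal sketch with the two lines shown above (the id value is illustrative):

```python
import json

dump = """{"atlas_export_version":1,"schema_version":14}
{"id":"abc123","created_at":"2025-03-10T14:30:00+00:00","_table":"entities"}"""

lines = dump.splitlines()
header = json.loads(lines[0])
assert header["atlas_export_version"] == 1  # refuse versions you don't know

by_table = {}
for line in lines[1:]:
    record = json.loads(line)
    by_table.setdefault(record.pop("_table"), []).append(record)

print(sorted(by_table))  # ['entities']
```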

See JSON Output for more on working with Atlas’s data programmatically.

Database Location

All Atlas data lives in ~/.atlas/:

File            Purpose
atlas.db        SQLite database (primary data store)
atlas.db-wal    Write-ahead log (auto-managed by SQLite)
atlas.db-shm    Shared memory file (auto-managed by SQLite)
atlas.pid       Daemon process ID (while running)
atlas.log       Daemon output log
config.toml     User configuration (optional)
.consent        First-run consent marker

Backups

The simplest backup is to copy the database file while the daemon is stopped:

atlas stop
cp ~/.atlas/atlas.db ~/backups/atlas-backup.db
atlas start

Alternatively, use atlas export to create a portable JSON Lines dump that doesn’t depend on SQLite.
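If stopping the daemon is inconvenient, SQLite's online backup API can snapshot a live database consistently; Python exposes it as sqlite3.Connection.backup. Whether Atlas's locking behaves well under this is an assumption worth testing before relying on it:

```python
import sqlite3

# In-memory stand-ins for ~/.atlas/atlas.db and the backup target.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE entities (id TEXT)")
src.execute("INSERT INTO entities VALUES ('e1')")
src.commit()

dst = sqlite3.connect(":memory:")
src.backup(dst)  # page-by-page copy, consistent even with concurrent writers

print(dst.execute("SELECT COUNT(*) FROM entities").fetchone()[0])  # 1
```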