Introduction

Picture a university campus. The architects laid careful concrete sidewalks connecting every building. But students cut across the grass, wearing paths between the library and the dorms, from the quad to the parking lot. Those worn trails across the lawn show where the sidewalks should have been built.

These are desire paths: physical traces of actual human behavior, revealing gaps between designed infrastructure and real needs.

dp brings this concept to AI coding assistants. When Claude Code, Cursor, or any AI tool hallucinates a function that doesn’t exist, when it invokes a tool that isn’t available, when it fails trying to use capabilities it wishes it had—those failures are signals. They’re desire paths in your workflow, pointing to features that should exist.

dp captures these failed tool calls, aggregates them into patterns, and surfaces the most common ones. Instead of watching the same errors scroll past day after day, you can see what your AI really needs, find the closest matches in its actual toolset, and wire up aliases to fix the gap.

Quick Demo

Here’s the workflow:

# Install
go install github.com/scbrown/desire-path/cmd/dp@latest

# Connect to Claude Code
dp init --source claude-code

# Work normally in Claude Code; desires accumulate automatically
# ...some time passes...

# See the failures
dp list

# View aggregated patterns ranked by frequency
dp paths

# Inspect a specific pattern
dp inspect read_file

# Find close matches among known tools
dp similar read_file

# Wire up the fix
dp alias read_file Read

Done. dp now knows that read_file means Read, and dp pave can turn that mapping into an active rewrite so calls to read_file are routed to Read. The desire path becomes a real sidewalk.

What It Does

  • Captures failures: Hook into AI tool output streams to record every failed tool invocation
  • Finds patterns: Aggregate similar failures into paths ranked by frequency
  • Suggests fixes: Use Levenshtein-based similarity to match hallucinated tools to real ones
  • Creates aliases: Map the hallucinated names to actual tools, fixing the gap
  • Tracks everything: Optional full invocation logging for deeper analysis (success + failure)

What It Doesn’t Do

dp is not a proxy, not a wrapper, not a runtime interceptor. It doesn’t sit between your AI and its tools. It’s a passive observer and a pattern analyzer. You run it once to set up hooks, then it watches quietly and builds a database of desire paths. When you’re ready, you query that database and act on the insights.

Why This Matters

AI coding assistants evolve fast. Their tool sets change, their output formats shift, and they constantly hallucinate new capabilities before those capabilities actually exist. Instead of treating these failures as noise, dp treats them as signal. Every failed tool call is a vote for a feature request. dp counts the votes.

Get Started

Ready to map your desire paths? Head to Getting Started to install dp and hook it into your AI tool.

Getting Started

This guide walks through installing dp, connecting it to Claude Code, and running your first analysis.

Installation

Option 1: Install with go install

If you have Go 1.24+ installed:

go install github.com/scbrown/desire-path/cmd/dp@latest

Make sure $GOPATH/bin (or $HOME/go/bin) is in your PATH.

Option 2: Install from Source

Clone the repository and build:

git clone https://github.com/scbrown/desire-path.git
cd desire-path
make install

This builds the dp binary and copies it to $GOPATH/bin.

Option 3: Download a Binary Release

Visit the GitHub Releases page and download the pre-built binary for your platform. Extract it and move the dp binary somewhere in your PATH.

Set Up Claude Code Integration

dp works by hooking into Claude Code’s event system. Run:

dp init --source claude-code

This command updates ~/.claude/settings.json to add a PostToolUseFailure hook that runs dp record --source claude-code whenever a tool call fails. The operation is idempotent—safe to run multiple times without duplicating hooks.

What Just Happened?

dp init added a JSON snippet to your Claude Code settings:

{
  "hooks": {
    "PostToolUseFailure": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "dp record --source claude-code",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}

Now every time Claude Code attempts a tool call that fails, the hook fires asynchronously, passing the failure payload to dp record. The command parses the JSON, extracts universal fields (tool name, session ID, error message, working directory), and writes a desire record to ~/.dp/desires.db.
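
The extraction step can be sketched as follows. This is an illustrative Python sketch, not dp's actual Go implementation, and `parse_failure` is a hypothetical name; the field names follow the payload described above.

```python
import json

# Hypothetical sketch of what a hook consumer like `dp record` does:
# read the failure payload, require tool_name, and pull out the
# universal fields before writing a desire record.
def parse_failure(payload: str, source: str) -> dict:
    event = json.loads(payload)
    if "tool_name" not in event:
        raise ValueError("payload missing required field: tool_name")
    return {
        "tool_name": event["tool_name"],
        "error": event.get("error", ""),
        "session_id": event.get("session_id", ""),
        "cwd": event.get("cwd", ""),
        "source": source,
    }

record = parse_failure(
    '{"tool_name":"read_file","error":"unknown tool","session_id":"s1","cwd":"/tmp"}',
    "claude-code",
)
```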

Claude Code continues immediately—dp runs in the background and won’t slow down your session.

Accumulate Desires

Just use Claude Code normally. Every tool call failure is now being recorded. After a few sessions, you’ll have data to analyze.

If you want to test the system right away without waiting for real failures, you can manually inject a fake desire:

echo '{"tool_name":"read_file","error":"unknown tool","session_id":"test","cwd":"/tmp"}' | dp record --source claude-code

Check that it was recorded:

dp list

You should see your test desire (or real ones, if you’ve been using Claude Code since running dp init).

First Analysis

List Desires

Show the raw failures:

dp list

Add filters:

# Only desires from the last 24 hours
dp list --since 24h

# Only desires matching a specific tool name
dp list --tool read_file

# Limit to 10 results
dp list --limit 10

View Aggregated Paths

Paths are aggregated desire patterns ranked by frequency:

dp paths

This shows which tool names failed most often, how many times each failed, and when they were first and last seen.

Inspect a Specific Pattern

Dive deep into a single tool name:

dp inspect read_file

This returns:

  • Total occurrences
  • First/last seen timestamps
  • Histogram of failures over time
  • Top error messages
  • Top input payloads (truncated)
  • Whether an alias already exists

Find Similar Tools

Find known tools similar to a hallucinated name:

dp similar read_file

dp uses Levenshtein distance with camelCase normalization, prefix bonuses, and suffix bonuses to rank known tools by similarity. By default it shows the top 5 matches with scores above 0.5.
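
To make that concrete, here is a rough Python sketch of this style of matching. The camelCase normalization and Levenshtein distance follow the description above; the prefix-bonus weight (0.2) and the exact scoring formula are made-up stand-ins for dp's internal tuning.

```python
import re

def normalize(name: str) -> str:
    # "ReadFile" -> "read_file"
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name).lower()

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def score(candidate: str, known: str) -> float:
    a, b = normalize(candidate), normalize(known)
    base = 1 - levenshtein(a, b) / max(len(a), len(b))
    if a.startswith(b) or b.startswith(a):
        base = min(1.0, base + 0.2)  # prefix bonus (illustrative weight)
    return base

known_tools = ["Read", "Write", "Edit", "Bash", "Grep", "Glob"]
matches = [t for t in sorted(known_tools, key=lambda t: score("read_file", t), reverse=True)
           if score("read_file", t) > 0.5]
```

Under this toy scoring, only Read clears the 0.5 threshold for read_file, which is the shape of result dp similar aims for.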

The known tools list is configurable—see Configuration for details.

Create an Alias

Once you’ve identified the correct real tool, wire up the alias:

dp alias read_file Read

Now any system consuming the dp database can map read_file → Read. For example, a Claude Code plugin could intercept tool calls, check the aliases table, and rewrite the tool name before execution.

dp doesn’t currently perform this rewriting automatically—it just stores the mapping. You can list all aliases with:

dp aliases

Delete an alias:

dp alias --delete read_file

Optional: Track All Invocations

By default, dp only captures failures (via PostToolUseFailure). If you want to track all tool calls—successes and failures—for deeper analysis (like success rates, invocation frequency, session analysis), enable full tracking:

dp init --source claude-code --track-all

This adds two additional hooks:

  • PostToolUse → dp ingest --source claude-code
  • PostToolUseFailure → dp ingest --source claude-code

The ingest command writes invocation records (not desire records). Invocations include a boolean is_error field to distinguish successes from failures. This generates significantly more data—every tool call fires the hook—so only enable it if you need invocation-level analytics.

View invocation stats:

dp stats --invocations

Export invocation data:

# Export as JSON
dp export --type invocations

# Export as CSV
dp export --type invocations --format csv > invocations.csv

# Filter by date
dp export --type invocations --since 2026-02-01

Next Steps

Continue to Configuration to tune dp's settings, or read Concepts for the model behind desires, paths, aliases, and invocations.

Configuration

dp stores configuration in ~/.dp/config.toml. You can view and update settings using the dp config command or by editing the file directly.

Config File Location

By default: ~/.dp/config.toml

Override with the DESIRE_PATH_CONFIG environment variable:

export DESIRE_PATH_CONFIG=/path/to/custom/config.toml

Valid Configuration Keys

KEY              TYPE     DESCRIPTION                                                               DEFAULT
db_path          string   Path to the SQLite database file                                          ~/.dp/desires.db
default_source   string   Default source tag for recorded desires when --source is not specified   "" (empty)
known_tools      string   Comma-separated list of known tool names used by dp similar               "" (empty—uses built-in list)
default_format   string   Default output format: "table" or "json"                                  "table"

Usage Examples

View Current Config

# Show all settings
dp config

# Show a specific key
dp config db_path

Set Values

# Change database path
dp config db_path /data/dp/desires.db

# Set default source
dp config default_source claude-code

# Use JSON output by default
dp config default_format json

Configure Known Tools

The known_tools setting controls the list of tool names used by dp similar for similarity matching. If empty, dp uses a built-in default list (currently: Read, Write, Edit, Bash, Grep, Glob, WebSearch, WebFetch).

Set a custom list:

dp config known_tools "Read,Write,Edit,Bash,CustomTool,AnotherTool"

dp splits on commas and trims whitespace, so you can use spaces for readability:

dp config known_tools "Read, Write, Edit, Bash, CustomTool"
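
The split-and-trim behavior can be sketched in a couple of lines (`parse_known_tools` is a hypothetical name, not a dp function):

```python
# Split on commas and trim whitespace, dropping empty entries.
def parse_known_tools(raw: str) -> list:
    return [t.strip() for t in raw.split(",") if t.strip()]

tools = parse_known_tools("Read, Write, Edit, Bash, CustomTool")
```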

Reset to Default

To clear a setting and revert to the default, set it to an empty string:

dp config known_tools ""

Global Flags

Some configuration can be overridden per-command using global flags:

Database Path

# Use a different database for one command
dp --db /tmp/test.db list

# Set via environment variable
export DESIRE_PATH_DB=/tmp/test.db
dp list

Precedence: --db flag > DESIRE_PATH_DB env var > db_path config > default (~/.dp/desires.db)
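
That precedence chain can be sketched as follows; `resolve_db_path` is a hypothetical helper mirroring the order above, not dp's actual code.

```python
import os
from typing import Optional

# Resolution order: --db flag, then DESIRE_PATH_DB environment
# variable, then the config file's db_path, then the built-in default.
def resolve_db_path(flag_value: Optional[str], config: dict) -> str:
    if flag_value:
        return flag_value
    env = os.environ.get("DESIRE_PATH_DB")
    if env:
        return env
    return config.get("db_path") or os.path.expanduser("~/.dp/desires.db")

path = resolve_db_path("/tmp/test.db", {"db_path": "/data/dp/desires.db"})
```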

Output Format

# Force JSON output
dp --json paths

# Force table output (default)
dp paths

The --json flag overrides default_format config for that command.

Advanced: Direct File Editing

~/.dp/config.toml is plain TOML. Example:

db_path = "/data/dp/desires.db"
default_source = "claude-code"
known_tools = ["Read", "Write", "Edit", "Bash", "CustomTool"]
default_format = "json"

If the file doesn’t exist, dp creates it on first write. Invalid TOML causes an error — use dp config for safer editing.

Migration note: Legacy config.json files are automatically migrated to TOML on first load.

Database Configuration

Database Path

The SQLite database stores desires, invocations, aliases, and schema metadata. By default it lives at ~/.dp/desires.db.

Change the path permanently:

dp config db_path /new/path/desires.db

Or temporarily:

dp --db /tmp/test.db list

SQLite Options

dp uses pure-Go SQLite (via modernc.org/sqlite) with WAL mode enabled for concurrent reads and writes. There are no user-configurable SQLite options—the database is tuned for dp’s access patterns.

If you need to inspect the database directly:

sqlite3 ~/.dp/desires.db

Tables: desires, invocations, aliases, schema_version.

Source Plugin Configuration

Each source plugin (like claude-code) may have its own configuration needs. Check the integration docs:

For Claude Code specifically, the hook configuration lives in ~/.claude/settings.json, not in dp’s config file. Run dp init --source claude-code to set it up.

Environment Variables

VARIABLE             DESCRIPTION                 EXAMPLE
DESIRE_PATH_DB       Override database path      export DESIRE_PATH_DB=/tmp/test.db
DESIRE_PATH_CONFIG   Override config file path   export DESIRE_PATH_CONFIG=/etc/dp/config.toml

Environment variables take precedence over config file settings but are overridden by command-line flags.

Tips

  • Use --json output with jq for scripting: dp paths --json | jq '.[] | select(.count > 10)'
  • Keep known_tools in sync with your AI’s actual tool set for better similar results
  • If you’re testing or developing, point --db at a temporary database to avoid polluting your real data
  • The config file is optional—all keys have sensible defaults

For command-specific options, see the Command Reference.

Concepts

The desire_path CLI tracks how AI coding assistants try to interact with your system. Four core concepts form the backbone of dp:

Desires

A desire is a single failed tool call from an AI assistant. When Claude Code tries to call read_file but that tool doesn’t exist, that failure gets recorded as a desire. Each one captures what the AI wanted to do, what went wrong, and the context around it.

Learn more about desires →

Paths

A path emerges when the same tool fails repeatedly. Like a worn trail across a lawn, frequent failures for read_file form a pattern that says “build a sidewalk here.” Paths show you what capabilities to prioritize building.

Learn more about paths →

Aliases

An alias maps a hallucinated tool name to a real one. When Claude keeps calling read_file but your tool is actually named Read, create an alias. This connects desires to reality and helps you understand what the AI is actually trying to accomplish.

Learn more about aliases →

Invocations

Invocations track ALL tool calls, not just failures. When enabled, you get the full picture: success rates, usage patterns, session timelines. This turns desire_path from a failure tracker into a comprehensive telemetry system for AI tool usage.

Learn more about invocations →


The flow: AI assistants generate desires (failures). Repeated desires form paths (patterns). Aliases connect desires to real tools. Invocations expand tracking to include successes, giving you the complete story.

Desires

A desire is a single failed AI tool call. It’s the atomic unit of desire_path: one moment where an AI assistant tried to do something and couldn’t.

What Gets Captured

Every desire records:

  • tool_name (required): The tool the AI tried to call
  • tool_input: The arguments it tried to pass (JSON)
  • error: The error message or reason for failure
  • source: Which AI system generated this (e.g., “claude-code”)
  • session_id: Groups desires from the same conversation
  • cwd: The working directory when the call failed
  • timestamp: When it happened (auto-generated)
  • metadata: Additional context as JSON (optional)

Behind the scenes, each desire also gets a UUID for tracking.
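
The record's shape, with the fields listed above, looks roughly like this. dp stores these as SQLite rows; this dataclass is only an illustration of the schema, not dp's actual code.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Desire:
    tool_name: str                                   # required
    tool_input: dict = field(default_factory=dict)   # JSON arguments
    error: str = ""
    source: str = ""
    session_id: str = ""
    cwd: str = ""
    metadata: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

d = Desire(tool_name="read_file", error="tool not found")
```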

How Desires Get Recorded

Manual Recording

Pipe JSON to dp record:

echo '{
  "tool_name": "read_file",
  "tool_input": {"path": "/etc/config.yaml"},
  "error": "tool not found",
  "source": "claude-code"
}' | dp record --source claude-code

Automatic Recording

Set up hooks in your AI tool to automatically capture failures:

# Initialize desire_path for Claude Code
dp init --source claude-code

# Now failures get recorded automatically as you work

When Claude Code tries to call a non-existent tool, the failure flows into desire_path without manual intervention.

Example Scenario

You’re working with Claude Code. It tries to call read_file to read a configuration file. But that tool doesn’t exist in Claude Code’s toolkit (the real tool is Read).

The failure gets recorded as a desire:

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "tool_name": "read_file",
  "tool_input": {
    "path": "/home/user/config.yaml"
  },
  "error": "tool 'read_file' not found",
  "source": "claude-code",
  "session_id": "session-abc123",
  "cwd": "/home/user/project",
  "timestamp": "2026-02-09T10:30:00Z"
}

Why This Matters

Individual desires are data points. But when you see read_file fail 47 times across multiple sessions, that’s a path — a clear signal that the AI expects this capability. That’s when you know to build an adapter, create an alias, or extend your toolset.

Desires are the raw material. Paths are the insight.

Paths

A path is an aggregated pattern of repeated desires. When the same tool name fails multiple times, it forms a well-worn trail — a signal that this capability is genuinely needed.

The Metaphor

Think of a university campus. Planners lay out sidewalks where they think people should walk. But students take shortcuts across the grass. Over time, these shortcuts become visible worn paths. Smart planners pave over the desire paths because they reveal where sidewalks actually belong.

Desire paths in AI tooling work the same way. When Claude Code repeatedly tries to call read_file and fails, that repeated pattern tells you: “Build this. I need this.”

What Paths Show

Each path aggregates desires by tool_name and shows:

  • pattern: The tool name that keeps failing (e.g., read_file)
  • count: How many times it’s failed
  • first_seen: When this pattern first appeared
  • last_seen: Most recent occurrence
  • alias_to: If you’ve mapped this to a real tool (optional)
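
The aggregation behind dp paths can be sketched minimally: group desires by tool_name, counting occurrences and tracking first/last timestamps. The `aggregate` helper is illustrative, not dp's implementation.

```python
def aggregate(desires: list) -> dict:
    paths = {}
    for d in sorted(desires, key=lambda d: d["timestamp"]):
        p = paths.setdefault(d["tool_name"],
                             {"count": 0, "first_seen": d["timestamp"]})
        p["count"] += 1
        p["last_seen"] = d["timestamp"]  # sorted order: last one wins
    return paths

paths = aggregate([
    {"tool_name": "read_file", "timestamp": "2026-02-01T09:15:23Z"},
    {"tool_name": "read_file", "timestamp": "2026-02-09T10:30:45Z"},
    {"tool_name": "list_dir", "timestamp": "2026-02-05T11:05:42Z"},
])
```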

Viewing Paths

Use dp paths to see the patterns:

dp paths

Output:

PATTERN         COUNT   FIRST SEEN            LAST SEEN             ALIAS
read_file       47      2026-02-01 09:15:23   2026-02-09 10:30:45   Read
execute_bash    23      2026-02-03 14:22:10   2026-02-09 08:12:33
list_dir        15      2026-02-05 11:05:42   2026-02-08 16:44:21   Glob
write_output    8       2026-02-07 13:30:12   2026-02-09 09:18:55

What To Do With Paths

High Count = High Priority

A tool that fails 47 times is screaming “I should exist.” That’s your top priority for building or aliasing.

Low Count = Wait and Watch

A tool that fails twice might be a one-off mistake or an edge case. Don’t build infrastructure for noise.

Recent Activity = Active Pain Point

If last_seen is today and count is climbing, this is actively blocking work right now.

Paths vs. Desires

  • Desires are raw events: individual failures with full context
  • Paths are aggregated insights: patterns showing what matters

You don’t fix individual desires. You fix paths. The frequency tells you what to prioritize.

Example Workflow

  1. Check paths: dp paths
  2. See read_file has failed 47 times
  3. Discover your tool is actually called Read
  4. Create an alias: dp alias read_file Read
  5. Now dp paths shows the mapping

Paths reveal the problem. Aliases (or building new tools) solve it.

Aliases

An alias maps a hallucinated tool name to a real one. When the AI keeps calling read_file but your actual tool is named Read, an alias bridges that gap.

Why Aliases Exist

AI assistants often hallucinate tool names that feel natural but don’t match your actual API. Claude Code might try:

  • read_file when the real tool is Read
  • execute_command when it’s Bash
  • search_files when it’s Glob

These aren’t bugs in the AI — they’re reasonable guesses. But they create friction. Aliases let you connect desires to reality without renaming your tools or retraining models.

Creating an Alias

Basic syntax:

dp alias <hallucinated_name> <real_tool_name>

Example:

dp alias read_file Read

Now when you run dp paths or dp similar, desires for read_file show their connection to Read.

Upsert Behavior

Creating an alias twice updates the target:

dp alias read_file Read      # Maps read_file → Read
dp alias read_file ReadFile  # Updates to read_file → ReadFile

No error, no duplicate entries. The latest mapping wins.
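
In SQL terms this is a plain upsert keyed on the alias name. A sketch (the table layout here is illustrative, not dp's actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE aliases (alias TEXT PRIMARY KEY, maps_to TEXT)")

def set_alias(alias: str, target: str) -> None:
    # Insert-or-update: creating the same alias twice overwrites the
    # target, so the latest mapping wins and no duplicates can exist.
    db.execute(
        "INSERT INTO aliases VALUES (?, ?) "
        "ON CONFLICT(alias) DO UPDATE SET maps_to = excluded.maps_to",
        (alias, target),
    )

set_alias("read_file", "Read")
set_alias("read_file", "ReadFile")
row = db.execute("SELECT maps_to FROM aliases WHERE alias = ?",
                 ("read_file",)).fetchone()
```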

Listing Aliases

See all current mappings:

dp aliases

Output:

ALIAS            MAPS TO
read_file        Read
execute_bash     Bash
search_files     Glob

Deleting an Alias

Remove a mapping when it’s no longer needed:

dp alias --delete read_file

The desires remain in your database, but the mapping is gone. dp paths will show the raw pattern again.

How Aliases Appear in Commands

In dp paths

Without alias:

PATTERN         COUNT   FIRST SEEN            LAST SEEN
read_file       47      2026-02-01 09:15:23   2026-02-09 10:30:45

With alias:

PATTERN         COUNT   FIRST SEEN            LAST SEEN             ALIAS
read_file       47      2026-02-01 09:15:23   2026-02-09 10:30:45   Read

In dp similar

Suggestions incorporate alias information to show what the AI is trying to accomplish with tools that actually exist.

When To Use Aliases

Perfect Match, Wrong Name

The AI’s concept matches your tool exactly, just different naming:

  • read_file → Read
  • execute_bash → Bash

Subset Mapping

The AI wants something specific that’s part of a broader tool:

  • search_codebase → Grep
  • list_directory → Glob

Don’t Alias If…

  • The concepts don’t actually align (forcing a bad mapping creates confusion)
  • You’re planning to build the hallucinated tool anyway (let the path data guide development)

Command Correction Rules

Aliases extend beyond tool names. Command correction rules fix mistakes inside tool parameters — like wrong CLI flags, deprecated command names, or outdated paths.

The Problem

AI assistants don’t just hallucinate tool names. They also misremember CLI flags:

  • scp -r when the correct flag is scp -R
  • grep pattern when you prefer rg pattern
  • curl -k when you need curl --cacert cert.pem

These are the same kind of desire path: the AI’s intent is correct, but the specifics are wrong.

Rule Types

TYPE      WHAT IT FIXES               EXAMPLE
Flag      Wrong CLI flag              -r → -R in scp
Command   Wrong command name          grep → rg
Literal   Wrong string in a command   user@old: → user@new:
Regex     Pattern-based replacement   curl -k → curl --cacert cert.pem

How They Work

Rules are scoped to a specific command within a pipeline. Given a Bash tool call like:

cat file.txt | scp -rP 22 user@host:/path

A flag rule for scp with -r-R would:

  1. Parse the pipeline into segments: cat file.txt and scp -rP 22 user@host:/path
  2. Find the segment where the command is scp
  3. Locate the -r flag within -rP (combined flags)
  4. Rewrite to -RP: cat file.txt | scp -RP 22 user@host:/path

The correction is precise: it only touches the right command and the right flag.
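
The four steps above can be sketched naively in Python: split the pipeline on "|", find the segment whose command matches, and rewrite the flag even inside a combined group like -rP. dp's real parser is more careful (quoting, subshells, and so on); this only illustrates the idea, and `apply_flag_rule` is a hypothetical name.

```python
def apply_flag_rule(command: str, cmd: str, old: str, new: str) -> str:
    fixed = []
    for segment in (s.strip() for s in command.split("|")):
        words = segment.split()
        if words and words[0] == cmd:
            # Rewrite the flag only inside flag words of this segment.
            words = [w.replace(old, new, 1) if w.startswith("-") and old in w else w
                     for w in words]
        fixed.append(" ".join(words))
    return " | ".join(fixed)

result = apply_flag_rule("cat file.txt | scp -rP 22 user@host:/path",
                         "scp", "r", "R")
```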

Creating Rules

# Flag correction
dp alias --cmd scp --flag r R --message "scp uses -R for recursive"

# Command substitution
dp alias --cmd grep --replace rg

# Literal replacement
dp alias --cmd scp "user@old:" "user@new:"

Enforcement

Rules are enforced by dp pave --hook, which installs a PreToolUse hook in Claude Code. When a tool call matches a rule, the hook rewrites the parameters before the tool executes. The AI sees a note about the correction in the response.

See dp pave for details on the hook mechanism.

Aliases as Documentation

Your alias list is a rosetta stone between AI expectations and your actual API. It documents the gap between “what AI assistants naturally try to do” and “what your system actually provides.”

That gap is valuable data. Don’t just fix it — learn from it.

Invocations

Invocations track ALL tool calls, not just failures. When enabled, desire_path becomes a comprehensive telemetry system for AI tool usage.

The Difference

  • Desires: Only failed tool calls
  • Invocations: Every tool call (success or failure)

Invocations give you the full picture: success rates, usage patterns, which tools get hammered, which never get touched, and how sessions unfold over time.

Enabling Invocations

Turn on full tracking when initializing:

dp init --source claude-code --track-all

The --track-all flag activates invocation recording. Now every tool call flows into your desire_path database, not just the failures.

What Gets Captured

Each invocation records:

  • source: Which AI system made the call (e.g., “claude-code”)
  • instance_id: Specific AI session or conversation
  • host_id: Machine where the call happened
  • tool_name: The tool that was called
  • is_error: Boolean — did it succeed or fail?
  • error: Error message if is_error is true (null otherwise)
  • cwd: Working directory during the call
  • timestamp: When it happened
  • metadata: Additional context as JSON (optional)

Viewing Invocation Stats

Get aggregated statistics:

dp stats --invocations

This might show:

TOOL NAME       TOTAL   SUCCESS   FAILED   SUCCESS RATE
Read            324     320       4        98.8%
Bash            156     142       14       91.0%
Glob            89      89        0        100.0%
Edit            67      63        4        94.0%
read_file       47      0         47       0.0%

Insights immediately visible:

  • Read works reliably (98.8% success)
  • Bash has issues (14 failures worth investigating)
  • read_file fails every time (needs aliasing or building)
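
The table above is a straightforward fold over invocation records. A sketch of the computation (the `success_rates` helper is illustrative, not a dp API):

```python
def success_rates(invocations: list) -> dict:
    totals = {}
    for inv in invocations:
        t = totals.setdefault(inv["tool_name"], [0, 0])  # [successes, total]
        t[1] += 1
        if not inv["is_error"]:
            t[0] += 1
    return {name: round(100.0 * ok / n, 1) for name, (ok, n) in totals.items()}

rates = success_rates([
    {"tool_name": "Read", "is_error": False},
    {"tool_name": "Read", "is_error": False},
    {"tool_name": "read_file", "is_error": True},
])
```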

Exporting Invocation Data

Pull raw data for deeper analysis:

dp export --type invocations --format json > invocations.json

Now you can:

  • Load into analytics tools
  • Build custom dashboards
  • Track trends over time
  • Correlate with other metrics

The Source Plugin System

Invocations use desire_path’s plugin architecture. Each AI tool has its own parser:

dp init --source claude-code --track-all   # Claude Code invocations
dp init --source aider --track-all          # Aider invocations
dp init --source custom-ai --track-all      # Your custom AI tool

The source plugin handles:

  • Parsing that AI tool’s specific output format
  • Extracting tool call data
  • Recording to the desire_path database

Different AI tools structure their telemetry differently. Plugins normalize everything into a common schema.
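
The plugin idea can be sketched as one parsing entry point per source that maps its native payload onto the common record shape. This Protocol and the field choices are illustrative, not dp's actual Go interface.

```python
import json
from typing import Protocol

class Source(Protocol):
    name: str
    def parse(self, raw: str) -> list: ...

class ClaudeCodeSource:
    name = "claude-code"

    def parse(self, raw: str) -> list:
        # Normalize one Claude Code payload into common records.
        event = json.loads(raw)
        return [{
            "source": self.name,
            "tool_name": event["tool_name"],
            "is_error": bool(event.get("error")),
        }]

records = ClaudeCodeSource().parse('{"tool_name":"Read"}')
```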

Use Cases

Success Rate Monitoring

Which tools are fragile? If Bash fails 10% of the time, maybe error handling needs work.

Usage Patterns

Which tools actually get used? You might discover Claude Code calls Read 10x more than anything else — worth optimizing.

Session Timelines

Replay how a conversation unfolded: “First it read the file, then globbed for tests, then ran bash commands.” Debug AI reasoning by seeing the sequence.

Hallucination Detection

Tools with 0% success rate are hallucinations. But you already knew that from desires. What’s new: tools with 50% success rate might have naming collisions or API confusion.

When To Enable Invocations

Don’t Enable If…

  • You only care about failures (desires are enough)
  • Storage or performance is constrained (invocations generate more data)
  • You’re just getting started (start simple, expand later)

Do Enable If…

  • You’re building production AI tooling (comprehensive telemetry matters)
  • You want to optimize tool implementations (need success data to measure improvements)
  • You’re analyzing AI behavior patterns (full session data reveals reasoning flows)
  • You’re running experiments (A/B testing tool changes requires success metrics)

The Full Picture

Desires tell you what’s broken. Invocations tell you what’s working, how often, and why.

Together, they turn desire_path from a failure tracker into an AI observability platform. You’re not just fixing problems — you’re understanding how AI assistants interact with your system at every level.

More data. Richer insights. Better tools.

Command Reference

The desire_path CLI provides commands for recording, analyzing, and fixing tool call failures in AI coding workflows.

Command Categories

Record & Ingest

Commands for capturing tool call data from AI coding tools.

  • record - Record a failed tool call from stdin
  • ingest - Ingest tool call data from a source plugin
  • init - Set up integration with AI coding tools

Query & Analyze

Commands for exploring recorded desire paths.

  • list - List recent desires
  • paths - Show aggregated paths ranked by frequency
  • inspect - Show detailed view of a specific desire path
  • stats - Show summary statistics
  • export - Export raw desire or invocation data

Map & Fix

Commands for resolving tool name mismatches.

  • similar - Find known tools similar to a tool name
  • alias - Create, update, or delete tool name aliases and command correction rules
  • aliases - List all configured aliases and rules
  • pave - Turn aliases into active tool-call intercepts

Configure

Commands for managing configuration.

  • config - Show or modify configuration

All Commands

COMMAND   DESCRIPTION
record    Record a failed tool call from stdin
ingest    Ingest tool call data from a source plugin
init      Set up integration with AI coding tools
list      List recent desires
paths     Show aggregated paths ranked by frequency
inspect   Show detailed view of a specific desire path
stats     Show summary statistics
export    Export raw desire or invocation data
similar   Find known tools similar to a tool name
alias     Create, update, or delete tool name aliases and correction rules
aliases   List all configured aliases and rules
pave      Turn aliases into active tool-call intercepts
config    Show or modify configuration

Global Flags

All commands support these global flags:

FLAG        DEFAULT            DESCRIPTION
--db PATH   ~/.dp/desires.db   Path to the SQLite database
--json      false              Output results as JSON

dp record

Record a failed tool call from stdin

Usage

dp record [flags]

Flags

FLAG       DEFAULT   DESCRIPTION
--source   ""        Source identifier for the tool call

Examples

$ echo '{"tool_name":"read_file","error":"unknown tool"}' | dp record --source claude-code
Recorded desire: read_file

$ echo '{"tool_name":"file_read","error":"tool not found","input":{"path":"/etc/hosts"}}' | dp record --source cursor
Recorded desire: file_read

Details

The record command expects a JSON object from stdin containing at minimum a tool_name field. The JSON can include additional fields like error, input, timestamp, and metadata which will be stored with the desire.

If no --source flag is provided, dp falls back to the default_source config value; if that is also unset, the source is recorded as empty. The source helps identify which AI coding tool generated the failed call.

The command reads the entire stdin buffer before parsing, so it works with both piped input and heredocs.

If the JSON is malformed or missing the required tool_name field, an error is returned and nothing is recorded.

dp ingest

Ingest tool call data from a source plugin

Usage

dp ingest [flags]

Flags

FLAG       DEFAULT   DESCRIPTION
--source   ""        Source plugin name (required)

Examples

$ cat payload.json | dp ingest --source claude-code
Ingested 3 desires from claude-code

$ curl https://api.example.com/tool-calls | dp ingest --source custom-plugin
Ingested 12 desires from custom-plugin

$ dp ingest --source nonexistent < data.json
Error: source plugin "nonexistent" not found
Available sources: claude-code, cursor

Details

The ingest command reads raw data from stdin and uses a source plugin to parse it into desire records. Unlike dp record, which expects pre-formatted JSON, dp ingest delegates parsing to a plugin that understands the source’s native format.

The --source flag is required. If omitted, the command will error and list available source plugins.

Source plugins are responsible for:

  • Parsing the input format
  • Extracting tool names, errors, inputs, and metadata
  • Handling batches of tool calls

This command is useful for bulk imports, integrating with custom AI tools, or processing historical data that wasn’t captured in real-time.

Plugins are discovered from the source/ package. To add a new source, implement the Source interface and register it in the plugin registry.

dp init

Set up integration with AI coding tools

Usage

dp init [flags]

Flags

FLAG            DEFAULT   DESCRIPTION
--source        ""        Source plugin name (required)
--track-all     false     Record all invocations, not just failures
--claude-code   false     DEPRECATED: use --source claude-code instead

Examples

$ dp init --source claude-code
Initialized desire_path integration for claude-code
Updated: /home/user/.config/claude-code/settings.json

$ dp init --source claude-code --track-all
Initialized desire_path integration for claude-code (tracking all invocations)
Updated: /home/user/.config/claude-code/settings.json

$ dp init --source cursor
Initialized desire_path integration for cursor
Updated: /home/user/.cursor/config.json

Details

The init command configures hooks in your AI coding tool’s settings to automatically capture tool call data. It locates the tool’s configuration file, merges in the necessary hooks, and preserves existing settings.

The integration is non-destructive: init will never clobber existing configuration. It merges hooks intelligently, so you can run init multiple times safely.

By default, only failed tool calls are recorded. Use --track-all to capture every tool invocation, which is useful for analyzing usage patterns and generating comprehensive statistics.

The --claude-code flag is deprecated. Use --source claude-code instead for consistency with other commands.

After running init, the AI tool will automatically send tool call data to desire_path. You don’t need to manually pipe output or modify your workflow.

If the source plugin doesn’t support automatic initialization (no config file to modify), init will print instructions for manual setup.

dp list

List recent desires

Usage

dp list [flags]

Flags

FLAG      DEFAULT  DESCRIPTION
--since   ""       Duration or timestamp (30m, 24h, 7d, etc.)
--source  ""       Filter by source identifier
--tool    ""       Filter by tool name
--limit   50       Maximum number of desires to show

Examples

$ dp list
TIMESTAMP            SOURCE       TOOL           ERROR
2026-02-09 14:32:15  claude-code  read_file      unknown tool
2026-02-09 14:31:42  claude-code  file_read      tool not found
2026-02-09 14:28:33  cursor       edit_document  invalid parameters
2026-02-09 14:15:09  claude-code  grep_search    command failed
2026-02-09 13:58:21  claude-code  write_file     permission denied

5 desires shown (limit: 50)

$ dp list --since 1h --source claude-code
TIMESTAMP            SOURCE       TOOL        ERROR
2026-02-09 14:32:15  claude-code  read_file   unknown tool
2026-02-09 14:31:42  claude-code  file_read   tool not found
2026-02-09 14:15:09  claude-code  grep_search command failed

3 desires shown (limit: 50)

$ dp list --tool read_file --limit 10
TIMESTAMP            SOURCE       TOOL       ERROR
2026-02-09 14:32:15  claude-code  read_file  unknown tool
2026-02-08 16:22:44  cursor       read_file  file not found
2026-02-08 11:05:33  claude-code  read_file  unknown tool

3 desires shown (limit: 10)

Details

The list command displays recent desire paths in reverse chronological order (newest first). Each row shows when the failure occurred, which AI tool generated it, the attempted tool name, and the error message.

Use --since to filter by time. Accepts durations like “30m”, “2h”, “7d”, or absolute timestamps in RFC3339 format.

Use --source to filter by AI coding tool. This is useful when you’re debugging integration issues with a specific tool.

Use --tool to filter by the attempted tool name. This helps identify recurring failures for a particular tool.

The --limit flag caps the number of results. Default is 50. Set to 0 for unlimited results (not recommended for large datasets).

Combine filters to narrow down results:

$ dp list --since 24h --source claude-code --tool read_file

For programmatic access, use the global --json flag to get structured output.

dp paths

Show aggregated paths ranked by frequency

Usage

dp paths [flags]

Flags

FLAG     DEFAULT  DESCRIPTION
--top    20       Number of top paths to show
--since  ""       Filter by RFC3339 timestamp

Examples

$ dp paths
RANK  PATTERN        COUNT  FIRST_SEEN           LAST_SEEN            ALIAS
1     read_file      142    2026-01-15 09:23:11  2026-02-09 14:32:15  Read
2     file_read      89     2026-01-18 11:05:44  2026-02-09 14:31:42  Read
3     grep_search    67     2026-01-20 08:15:22  2026-02-09 14:15:09  Grep
4     edit_document  45     2026-01-22 13:44:33  2026-02-09 14:28:33  Edit
5     write_file     38     2026-01-25 10:12:09  2026-02-09 13:58:21  Write
6     search_grep    31     2026-01-28 15:33:44  2026-02-08 16:47:22  Grep
7     bash_exec      28     2026-02-01 09:08:15  2026-02-07 18:22:33  Bash
8     run_command    24     2026-02-03 12:55:09  2026-02-06 14:11:55  Bash
9     file_write     19     2026-02-04 08:44:21  2026-02-05 16:33:12  Write
10    glob_find      17     2026-02-05 11:22:44  2026-02-09 09:14:28  Glob

Showing top 10 of 47 unique patterns

$ dp paths --top 5 --since 2026-02-01T00:00:00Z
RANK  PATTERN      COUNT  FIRST_SEEN           LAST_SEEN            ALIAS
1     read_file    48     2026-02-01 08:15:33  2026-02-09 14:32:15  Read
2     file_read    32     2026-02-01 09:22:11  2026-02-09 14:31:42  Read
3     bash_exec    28     2026-02-01 09:08:15  2026-02-07 18:22:33  Bash
4     run_command  24     2026-02-03 12:55:09  2026-02-06 14:11:55  Bash
5     edit_file    21     2026-02-02 14:33:22  2026-02-08 11:44:09  Edit

Showing top 5 of 29 unique patterns

Details

The paths command aggregates desire records by tool name pattern and ranks them by frequency. This reveals which tool name variations are most commonly attempted by AI coding tools.

The ALIAS column shows if a mapping has been configured using dp alias. When an alias exists, desire_path can automatically find the correct tool name.

Use --top to control how many patterns to display. The default is 20, which typically covers the most actionable patterns.

Use --since to analyze patterns from a specific date forward. This is useful after making changes to your tool configuration to see if new patterns emerge.

This command is essential for:

  • Identifying the most frequent tool name mismatches
  • Prioritizing which aliases to create
  • Understanding how different AI tools name the same capabilities
  • Tracking whether integration improvements reduce failure rates

Pattern counts represent unique failure instances, not total attempts. Use dp inspect to drill into a specific pattern.

dp inspect

Show detailed view of a specific desire path

Usage

dp inspect <pattern> [flags]

Flags

FLAG     DEFAULT  DESCRIPTION
--since  ""       Duration or timestamp (30m, 24h, 7d, etc.)
--top    5        Number of top inputs/errors to show

Examples

$ dp inspect read_file
Pattern: read_file
Total occurrences: 142
Date range: 2026-01-15 09:23:11 to 2026-02-09 14:32:15
Sources: claude-code (128), cursor (14)
Alias: Read

Activity by day:
2026-02-09 ████████████████████ 24
2026-02-08 ██████████████ 17
2026-02-07 ████████ 9
2026-02-06 ██████████ 12
2026-02-05 ███████████ 13
2026-02-04 ██████ 7
2026-02-03 ████████████ 14
2026-02-02 ███████████ 13
2026-02-01 ████████ 10
(earlier days: 23 total)

Top errors:
unknown tool                89 (62.7%)
tool not found             31 (21.8%)
invalid tool name          15 (10.6%)
tool unavailable            7 (4.9%)

Top inputs:
{"path": "/etc/hosts"}                           18
{"file_path": "/home/user/config.json"}          12
{"path": "/var/log/app.log", "offset": 0}        9
{"file_path": "/tmp/data.txt"}                   8
{"path": "/home/user/.bashrc"}                   7

$ dp inspect grep% --since 7d
Pattern: grep% (SQL LIKE wildcard)
Total occurrences: 45
Date range: 2026-02-02 08:15:22 to 2026-02-09 14:15:09
Sources: claude-code (41), cursor (4)
Matched patterns: grep_search (31), grep_find (9), grep_files (5)

Activity by day:
2026-02-09 ██████████ 8
2026-02-08 ████████ 6
2026-02-07 ██████ 5
2026-02-06 ████████ 7
2026-02-05 ██████ 5
2026-02-04 ████ 4
2026-02-03 ██████ 5
2026-02-02 ██████ 5

Top errors:
command failed             28 (62.2%)
unknown tool              12 (26.7%)
invalid parameters         5 (11.1%)

Details

The inspect command provides a deep dive into a specific desire pattern. Use it to understand:

  • How frequently the pattern occurs
  • When it first appeared and when it was last seen
  • Which AI tools are generating this pattern
  • Whether an alias has been configured
  • How activity trends over time
  • What error messages are associated with it
  • What input parameters are commonly attempted

The pattern argument supports SQL LIKE wildcards:

  • Use % to match any sequence of characters: grep% matches grep_search, grep_find, etc.
  • Use _ to match any single character: read_fil_ matches read_file, read_fils, etc.

Without wildcards, the pattern is matched exactly.

The histogram shows daily activity with a simple bar chart. The width of each bar represents relative frequency within the time window.

Use --since to focus on recent activity. This helps identify if a pattern is actively occurring or historical.

Use --top to control how many top errors and inputs are displayed. Default is 5, which usually captures the most common cases.

This command is invaluable for debugging specific integration issues and understanding why a particular tool name is failing.

dp stats

Show summary statistics

Usage

dp stats [flags]

Flags

FLAG           DEFAULT  DESCRIPTION
--invocations  false    Show invocation stats instead of desires

Examples

$ dp stats
Desire Statistics

Total desires: 1,247
Unique tool patterns: 47
Unique sources: 3
Date range: 2026-01-15 09:23:11 to 2026-02-09 14:32:15

Activity windows:
  Last 24 hours: 89 desires
  Last 7 days: 412 desires
  Last 30 days: 1,247 desires

Top sources:
  claude-code: 1,089 (87.3%)
  cursor: 142 (11.4%)
  copilot: 16 (1.3%)

Top desires:
  read_file: 142 (11.4%)
  file_read: 89 (7.1%)
  grep_search: 67 (5.4%)
  edit_document: 45 (3.6%)
  write_file: 38 (3.0%)

Top tools (by alias):
  Read: 231 (18.5%)
  Grep: 98 (7.9%)
  Edit: 76 (6.1%)
  Write: 57 (4.6%)
  Bash: 52 (4.2%)

$ dp stats --invocations
Invocation Statistics

Total invocations: 8,432
Successful: 7,185 (85.2%)
Failed: 1,247 (14.8%)
Unique tools: 23
Unique sources: 3
Date range: 2026-01-15 09:23:11 to 2026-02-09 14:32:15

Activity windows:
  Last 24 hours: 645 invocations (89 failures)
  Last 7 days: 3,128 invocations (412 failures)
  Last 30 days: 8,432 invocations (1,247 failures)

Top sources:
  claude-code: 7,344 (87.1%)
  cursor: 1,028 (12.2%)
  copilot: 60 (0.7%)

Top tools:
  Read: 2,847 (33.8%)
  Bash: 1,923 (22.8%)
  Edit: 1,204 (14.3%)
  Write: 891 (10.6%)
  Grep: 745 (8.8%)

Failure rates by tool:
  Read: 231/2,847 (8.1%)
  Grep: 98/745 (13.2%)
  Edit: 76/1,204 (6.3%)
  Write: 57/891 (6.4%)
  Bash: 52/1,923 (2.7%)

Details

The stats command provides a high-level overview of your desire_path data. It’s useful for:

  • Understanding the scale of tool call failures
  • Identifying which AI tools have the most integration issues
  • Tracking improvement over time
  • Prioritizing which patterns to fix first

By default, stats shows desire (failure) data. Use --invocations to see statistics about all tool invocations, both successful and failed. Invocation tracking must be enabled with dp init --track-all for this data to be available.

Activity windows show rolling counts for the last 24 hours, 7 days, and 30 days. This helps identify trends: is the failure rate increasing, decreasing, or stable?

Top sources reveal which AI tools are generating the most failures. A high failure rate from one source might indicate a configuration issue or incompatibility.

Top desires show the most frequently attempted tool names. These are your highest-priority candidates for creating aliases.

When invocation tracking is enabled, the failure rate breakdown shows which tools have the highest error rates. This can reveal whether certain tools are more prone to naming mismatches or integration issues.

Run stats periodically to track the health of your AI coding tool integrations.

dp export

Export raw desire or invocation data

Usage

dp export [flags]

Flags

FLAG      DEFAULT  DESCRIPTION
--format  json     Output format: json or csv
--since   ""       Filter by RFC3339 timestamp or YYYY-MM-DD
--type    desires  Data type to export: desires or invocations

Examples

$ dp export --format json
{"id":1,"tool_name":"read_file","error":"unknown tool","source":"claude-code","timestamp":"2026-02-09T14:32:15Z","input":"{\"path\":\"/etc/hosts\"}"}
{"id":2,"tool_name":"file_read","error":"tool not found","source":"claude-code","timestamp":"2026-02-09T14:31:42Z","input":"{\"file_path\":\"/home/user/config.json\"}"}
{"id":3,"tool_name":"edit_document","error":"invalid parameters","source":"cursor","timestamp":"2026-02-09T14:28:33Z","input":"{}"}

$ dp export --format csv --since 2026-02-01
id,tool_name,error,source,timestamp,input,metadata
89,read_file,unknown tool,claude-code,2026-02-09T14:32:15Z,"{""path"":""/etc/hosts""}",
90,file_read,tool not found,claude-code,2026-02-09T14:31:42Z,"{""file_path"":""/home/user/config.json""}",
91,edit_document,invalid parameters,cursor,2026-02-09T14:28:33Z,"{}",

$ dp export --format json --since 2026-02-08T00:00:00Z | jq -r '.tool_name' | sort | uniq -c | sort -rn
     24 read_file
     18 file_read
     12 grep_search
      9 edit_document
      7 write_file

$ dp export --format json --type invocations --since 2026-02-09
{"id":1,"tool_name":"Read","success":true,"source":"claude-code","timestamp":"2026-02-09T14:35:22Z","duration_ms":45,"input":"{\"file_path\":\"/etc/hosts\"}"}
{"id":2,"tool_name":"read_file","success":false,"source":"claude-code","timestamp":"2026-02-09T14:32:15Z","duration_ms":12,"error":"unknown tool"}
{"id":3,"tool_name":"Bash","success":true,"source":"claude-code","timestamp":"2026-02-09T14:30:08Z","duration_ms":234,"input":"{\"command\":\"ls -la\"}"}

Details

The export command outputs raw data for external processing, analysis, or archival. It’s designed for piping to other tools or importing into data analysis platforms.

JSON format outputs JSONL (JSON Lines): one complete JSON object per line. This format is easy to process with jq, jless, or stream into other systems:

dp export --format json | jq 'select(.source == "claude-code")'

CSV format includes headers and is suitable for importing into spreadsheets or databases. Fields containing commas or quotes are properly escaped.
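Go's standard encoding/csv package quotes any field containing commas or double quotes and doubles embedded quotes, which is the escaping behavior the export describes. A minimal sketch (the helper name escapeRows is illustrative; dp's actual writer may differ in detail):

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// escapeRows renders rows as CSV, letting encoding/csv handle
// quoting and quote-doubling for awkward fields like embedded JSON.
func escapeRows(rows [][]string) string {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	w.WriteAll(rows) // WriteAll also flushes
	return buf.String()
}

func main() {
	fmt.Print(escapeRows([][]string{
		{"id", "tool_name", "error", "input"},
		{"89", "read_file", "unknown tool", `{"path":"/etc/hosts"}`},
	}))
}
```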

Use --since to export data from a specific date forward. Accepts RFC3339 timestamps (2026-02-09T00:00:00Z) or simple dates (2026-02-09).

Use --type to choose between exporting desire (failure) data or invocation (all tool call) data. Invocation data is only available if you’ve enabled tracking with dp init --track-all.

Common export workflows:

Analyze tool name patterns:

dp export --format json | jq -r '.tool_name' | sort | uniq -c | sort -rn

Find all errors from a specific source:

dp export --format json | jq 'select(.source == "claude-code") | .error' | sort | uniq -c

Export to CSV for spreadsheet analysis:

dp export --format csv > desires.csv

Backup your data:

dp export --format json > backup-$(date +%Y%m%d).jsonl

The export command never modifies the database. It’s read-only and safe to run at any time.

dp similar

Find known tools similar to a tool name

Usage

dp similar <tool-name> [flags]

Flags

FLAG         DEFAULT  DESCRIPTION
--known      ""       Comma-separated list of known tools (overrides defaults)
--threshold  0.5      Minimum similarity score (0.0 to 1.0)
--top        5        Maximum number of suggestions to show

Examples

$ dp similar read_file
Checking alias mappings...
Found alias: read_file -> Read

Suggestions for "read_file":
1. Read     (score: 1.00, reason: alias)

Recommended action: Use "Read" instead of "read_file"

$ dp similar file_reader
Checking alias mappings...
No alias found for "file_reader"

Suggestions for "file_reader":
1. Read     (score: 0.73)
2. Write    (score: 0.58)
3. Edit     (score: 0.56)

Recommended action: Consider using "Read" or create an alias with:
    dp alias file_reader Read

$ dp similar grepsearch --threshold 0.4
Checking alias mappings...
No alias found for "grepsearch"

Suggestions for "grepsearch":
1. Grep        (score: 0.78)
2. WebSearch   (score: 0.45)
3. Glob        (score: 0.42)

Recommended action: Consider using "Grep" or create an alias with:
    dp alias grepsearch Grep

$ dp similar custom_tool --known "CustomRead,CustomWrite,CustomEdit"
Checking alias mappings...
No alias found for "custom_tool"

Suggestions for "custom_tool":
1. CustomEdit   (score: 0.54)
2. CustomWrite  (score: 0.51)

No strong matches found. Consider checking the tool name.

Details

The similar command helps resolve tool name mismatches by finding the closest matching known tool. It uses two strategies:

  1. Alias lookup: First checks if an explicit alias has been configured with dp alias
  2. Similarity matching: Calculates Levenshtein distance to find structurally similar tool names
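Levenshtein-based scoring can be sketched as follows. This is a simplified, case-insensitive normalization (distance divided by the longer string's length); dp's actual scorer likely normalizes differently, as the example scores above suggest:

```go
package main

import (
	"fmt"
	"strings"
)

// levenshtein computes the edit distance between two strings
// using the classic two-row dynamic programming formulation.
func levenshtein(a, b string) int {
	ra, rb := []rune(a), []rune(b)
	prev := make([]int, len(rb)+1)
	cur := make([]int, len(rb)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ra); i++ {
		cur[0] = i
		for j := 1; j <= len(rb); j++ {
			cost := 1
			if ra[i-1] == rb[j-1] {
				cost = 0
			}
			cur[j] = min3(cur[j-1]+1, prev[j]+1, prev[j-1]+cost)
		}
		prev, cur = cur, prev
	}
	return prev[len(rb)]
}

func min3(a, b, c int) int {
	m := a
	if b < m {
		m = b
	}
	if c < m {
		m = c
	}
	return m
}

// similarity maps edit distance into a 0.0-1.0 score,
// comparing case-insensitively.
func similarity(a, b string) float64 {
	a, b = strings.ToLower(a), strings.ToLower(b)
	longest := len([]rune(a))
	if l := len([]rune(b)); l > longest {
		longest = l
	}
	if longest == 0 {
		return 1.0
	}
	return 1.0 - float64(levenshtein(a, b))/float64(longest)
}

func main() {
	fmt.Printf("%.2f\n", similarity("grepsearch", "Grep")) // 0.40
}
```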

Default known tools (Claude Code conventions):

  • Read
  • Write
  • Edit
  • Bash
  • Glob
  • Grep
  • Task
  • WebFetch
  • WebSearch
  • NotebookEdit

Override the known tools list with --known for custom tool environments:

dp similar mytool --known "Tool1,Tool2,Tool3"

The similarity score ranges from 0.0 (completely different) to 1.0 (identical). The --threshold flag filters out weak matches. Default is 0.5, which typically excludes spurious suggestions.

Use --top to limit suggestions. Default is 5, which is usually sufficient to find the right match.

When an alias exists, it’s always shown first with a score of 1.0 and marked as “alias”. This makes aliases the authoritative source for mappings.

Use similar interactively when debugging why a tool call failed:

$ dp list --limit 1
# See a failed tool name
$ dp similar <that-tool-name>
# Get suggestions and create alias if needed

Integrate similar into your workflow by running it after reviewing dp paths to batch-create aliases for the most common patterns.

dp alias

Create, update, or delete tool name aliases and command correction rules.

Usage

dp alias <from> <to>
dp alias --delete <from>
dp alias --cmd <name> --flag <old> <new>
dp alias --cmd <name> --replace <new>
dp alias --cmd <name> <from> <to>
dp alias --tool <tool> --param <param> <from> <to>
dp aliases

Flags

FLAG            DEFAULT  DESCRIPTION
--delete        false    Delete an existing alias or rule
--cmd NAME               Command name for CLI corrections (implies tool=Bash, param=command)
--flag OLD,NEW           Flag correction within a command (requires --cmd)
--replace NEW            Substitute the command itself (requires --cmd)
--tool NAME              Tool name for parameter corrections (advanced)
--param NAME             Parameter name to correct (requires --tool)
--regex         false    Treat FROM as a regex pattern (requires --tool/--param)
--message TEXT           Custom message shown when correction fires

Tool Name Aliases

Map a hallucinated tool name to the correct one:

dp alias read_file Read
dp alias search_files Grep
dp alias --delete read_file

Command Flag Corrections

Fix incorrect CLI flags scoped to a specific command:

# scp uses -R (not -r) for recursive
dp alias --cmd scp --flag r R

# With a custom message
dp alias --cmd scp --flag r R --message "scp uses -R for recursive"

# Delete the rule
dp alias --delete --cmd scp --flag r

When dp pave --hook is active, this automatically rewrites scp -r to scp -R in any Bash tool call. Combined flags are handled too: -rP 22 becomes -RP 22.

Command Substitution

Replace one command with another:

# Use ripgrep instead of grep
dp alias --cmd grep --replace rg

# With a message
dp alias --cmd grep --replace rg --message "Use ripgrep instead of grep"

This rewrites grep -rn pattern . to rg -rn pattern . while leaving other commands in a pipeline untouched.

Literal Replacement

Replace a literal string within a specific command’s context:

dp alias --cmd scp "user@old-host:" "user@new-host:" --message "Host migrated"

Advanced: Tool/Param Corrections

For non-Bash tools or arbitrary parameter corrections:

# Correct a path in an MCP tool
dp alias --tool MyMCPTool --param input_path "/old/path" "/new/path"

# Regex replacement
dp alias --tool Bash --param command --regex "curl -k" "curl --cacert cert.pem"

Listing Rules

dp aliases

Output:

FROM            TO           TYPE      COMMAND   CREATED
read_file       Read         alias               2026-02-01 09:15:33
r               R            flag      scp       2026-02-01 09:16:12
grep            rg           command   grep      2026-02-01 09:17:44

Validation

  • --cmd and --tool/--param are mutually exclusive
  • --flag requires --cmd
  • --replace requires --cmd
  • --flag and --replace are mutually exclusive
  • --regex requires --tool/--param
  • --tool and --param must appear together

Details

The alias command manages both tool name mappings and command correction rules. Tool name aliases define how incorrect tool names should be resolved. Command correction rules define how parameters should be rewritten when a tool is called.

Aliases and rules are upserted: creating one that already exists updates it. This makes it safe to run commands idempotently.

Rules are identified by a composite key: (from, tool, param, command, match_kind). This means you can have multiple rules for the same command targeting different flags.
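In Go terms, a composite key like this maps naturally onto a comparable struct used as a map key. A sketch of the identity and upsert semantics described above (field names are illustrative, not dp's actual schema):

```go
package main

import "fmt"

// RuleKey mirrors the composite identity: rules that differ in any
// component are stored independently.
type RuleKey struct {
	From, Tool, Param, Command, MatchKind string
}

// upsert inserts or updates a rule keyed by its full identity.
func upsert(rules map[RuleKey]string, k RuleKey, to string) {
	rules[k] = to
}

func main() {
	rules := map[RuleKey]string{}
	upsert(rules, RuleKey{From: "r", Tool: "Bash", Param: "command", Command: "scp", MatchKind: "flag"}, "R")
	upsert(rules, RuleKey{From: "P", Tool: "Bash", Param: "command", Command: "scp", MatchKind: "flag"}, "p")
	// Re-running the first rule is an update, not a duplicate:
	upsert(rules, RuleKey{From: "r", Tool: "Bash", Param: "command", Command: "scp", MatchKind: "flag"}, "R")
	fmt.Println(len(rules)) // 2
}
```

Two rules for the same command but different flags get distinct keys, while re-creating an existing rule simply overwrites it, which is why the command is safe to run idempotently.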

When you create rules, they take effect immediately if dp pave --hook is installed. The hook checks rules on every tool call and applies corrections transparently.

Common workflow for command corrections:

  1. Notice the AI keeps using wrong flags (e.g., scp -r fails)
  2. Create a rule: dp alias --cmd scp --flag r R
  3. Install the hook: dp pave --hook
  4. Future scp -r calls are automatically corrected to scp -R

Best practices:

  • Add --message to explain why the correction exists
  • Use --cmd for CLI corrections (most common case)
  • Use --tool/--param only for MCP or non-Bash tools
  • Review rules with dp aliases periodically

dp pave

Turn aliases and correction rules into active tool-call intercepts.

Usage

dp pave --hook
dp pave --agents-md
dp pave --agents-md --append AGENTS.md

Flags

FLAG             DEFAULT                  DESCRIPTION
--hook           false                    Install a PreToolUse intercept hook in Claude Code
--agents-md      false                    Generate AGENTS.md / CLAUDE.md rules from alias data
--append FILE                             Append generated rules to FILE (with --agents-md)
--settings PATH  ~/.claude/settings.json  Path to Claude Code settings file

Modes

--hook: Real-Time Intercept

Installs a PreToolUse hook into ~/.claude/settings.json that runs dp pave-check on every tool call. The hook has two behaviors:

Phase 1 — Tool Name Blocking: If the AI calls a tool that has a tool-name alias (e.g., read_file when the real tool is Read), the hook blocks the call with exit code 2 and tells Claude to use the correct name.

Phase 2 — Parameter Correction: If the tool name is valid but the parameters contain known mistakes (e.g., scp -r instead of scp -R), the hook rewrites the parameters automatically via updatedInput and allows the call to proceed with corrected values.

dp pave --hook

Output:

PreToolUse intercept hook installed!
Hallucinated tool names matching aliases will now be blocked automatically.
Manage aliases with: dp alias <from> <to>

Running again is safe — it detects the existing hook and reports “already installed.”

--agents-md: Static Rules

Generates markdown rules from your aliases and correction rules. Output has two sections:

Tool Name Corrections — tells the AI which tool names are wrong:

# Tool Name Corrections

The following tool names are INCORRECT. Use the correct names instead:

- Do NOT call `read_file`. Use `Read` instead.
- Do NOT call `search_files`. Use `Grep` instead.

Command Corrections — documents parameter correction rules:

# Command Corrections

## scp

- Flag `-r` should be `-R` (scp uses -R for recursive)

## grep → rg

- Use `rg` instead of `grep`

By default, output goes to stdout. Use --append to write to a file:

dp pave --agents-md --append AGENTS.md

Belt and Suspenders

Use both modes together for maximum coverage:

dp pave --hook --agents-md --append AGENTS.md

  • --hook is reactive: catches mistakes at call time
  • --agents-md is preventive: stops mistakes before they happen

How pave-check Works

The dp pave-check command is an internal hook handler. It reads a JSON payload from stdin and performs two phases:

Phase 1: Tool Name Check

Looks up the tool_name in the alias table. If found, blocks the call:

  • Exit code 2 + error message on stderr
  • Claude Code shows the message and retries with the correct tool name

Phase 2: Parameter Corrections

Queries correction rules for the tool name via GetRulesForTool. For each matching rule:

MATCH KIND  WHAT IT DOES
flag        Corrects a CLI flag within a specific command (e.g., -r → -R in scp)
command     Substitutes a command name (e.g., grep → rg)
literal     Replaces a literal string within a command segment
regex       Applies a regex replacement across the full parameter value

If corrections are applied:

  • Exit code 0 + JSON on stdout with updatedInput
  • Claude Code uses the corrected parameters transparently

If no corrections match:

  • Exit code 0 with no output (allow as-is)

Flag-Aware Matching

The flag match kind uses a shell-aware command parser (cmdparse) that:

  • Splits commands on |, &&, ||, ; to isolate segments
  • Respects quoted strings (won’t match flags inside quotes)
  • Handles combined short flags: -rP 22 → -RP 22
  • Scopes corrections to the right command in a pipeline

Example: Given a rule --cmd scp --flag r R:

scp -r file.txt host:/          →  scp -R file.txt host:/
scp -rP 22 file host:/          →  scp -RP 22 file host:/
cat file | scp -r host:/        →  cat file | scp -R host:/
echo "-r" | scp file host:/     →  (no change — "-r" is in quotes, not a flag)
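The segment-splitting half of this behavior can be sketched compactly. This is a deliberately simplified stand-in for a shell-aware parser like cmdparse; real shell grammar (escapes, subshells, redirects, background `&`) is richer:

```go
package main

import (
	"fmt"
	"strings"
)

// splitSegments breaks a command line into segments at top-level
// |, ||, &&, and ; separators while leaving quoted strings intact.
func splitSegments(cmd string) []string {
	var segs []string
	var cur strings.Builder
	var quote rune // 0 while outside quotes
	runes := []rune(cmd)
	flush := func() {
		if s := strings.TrimSpace(cur.String()); s != "" {
			segs = append(segs, s)
		}
		cur.Reset()
	}
	for i := 0; i < len(runes); i++ {
		c := runes[i]
		switch {
		case quote != 0: // inside quotes: copy verbatim
			cur.WriteRune(c)
			if c == quote {
				quote = 0
			}
		case c == '\'' || c == '"':
			quote = c
			cur.WriteRune(c)
		case c == ';':
			flush()
		case c == '|': // handles both | and ||
			if i+1 < len(runes) && runes[i+1] == '|' {
				i++
			}
			flush()
		case c == '&' && i+1 < len(runes) && runes[i+1] == '&':
			i++
			flush()
		default:
			cur.WriteRune(c)
		}
	}
	flush()
	return segs
}

func main() {
	fmt.Println(splitSegments(`cat file | grep pattern | wc -l`))
	// [cat file grep pattern wc -l]
	fmt.Println(splitSegments(`echo "-r" | scp file host:/`))
	// [echo "-r" scp file host:/]
}
```

Once segments are isolated, a flag rule only inspects the segment whose first word matches the rule's command, which is what keeps `-r` inside quotes or inside `cat` untouched.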

Pipe Scoping

Command substitutions only affect the matching segment. Given --cmd grep --replace rg:

cat file | grep pattern | wc -l  →  cat file | rg pattern | wc -l

Only grep is replaced; cat and wc are untouched.

Exit Codes

CODE  MEANING
0     Allow (optionally with updatedInput corrections)
2     Block (tool name alias matched)

Hook Timeout

The pave-check hook has a 3-second timeout. It typically completes in <50ms. If the database is locked or unreachable, the hook fails open (allows the call).

Troubleshooting

Hook Not Firing

Verify installation:

cat ~/.claude/settings.json | jq '.hooks.PreToolUse'

Check that dp is in your PATH:

which dp

Corrections Not Applying

List your rules to verify they exist:

dp aliases --json

Test manually:

echo '{"tool_name":"Bash","tool_input":{"command":"scp -r file host:/"}}' | dp pave-check

You should see JSON output with updatedInput if a matching rule exists.
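Illustratively, assuming the scp flag rule from earlier exists, the corrected output might look something like this (the exact envelope is defined by Claude Code's hook protocol, so treat the shape as a sketch):

```json
{"updatedInput": {"command": "scp -R file host:/"}}
```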

Hook Timing Out

The default timeout is 3000ms. If your database is on a slow disk:

  1. Check database size: ls -lh ~/.dp/desires.db
  2. Consider running VACUUM if the database has grown large
  3. Increase the timeout in ~/.claude/settings.json if needed

Examples

# Install the hook
dp pave --hook

# Generate rules to stdout
dp pave --agents-md

# Append rules to CLAUDE.md
dp pave --agents-md --append CLAUDE.md

# Both at once
dp pave --hook --agents-md --append AGENTS.md

# JSON output
dp pave --agents-md --json

dp config

Show or modify configuration

Usage

dp config
dp config <key>
dp config <key> <value>

Flags

None

Examples

$ dp config
Configuration (from /home/user/.dp/config.toml):

db_path: /home/user/.dp/desires.db
default_source: claude-code
known_tools: Read,Write,Edit,Bash,Glob,Grep,Task,WebFetch,WebSearch,NotebookEdit
default_format: json

$ dp config default_source
claude-code

$ dp config default_source cursor
Updated configuration: default_source = cursor

$ dp config known_tools "Read,Write,Edit,Bash,CustomTool"
Updated configuration: known_tools = Read,Write,Edit,Bash,CustomTool

$ dp config db_path /home/user/projects/myapp/.dp/desires.db
Updated configuration: db_path = /home/user/projects/myapp/.dp/desires.db

Details

The config command manages desire_path’s persistent configuration. Configuration is stored in ~/.dp/config.toml and applies to all invocations unless overridden by flags.

Valid configuration keys:

db_path

Path to the SQLite database file. Default: ~/.dp/desires.db

Use this to:

  • Store project-specific desires in the project directory
  • Separate desire data by workspace or client
  • Back up or version control desire history

Can be overridden per-command with the global --db flag.

default_source

Default source identifier for commands that accept --source. Default: empty

Use this to:

  • Avoid typing --source repeatedly
  • Set a consistent source when working primarily with one AI tool
  • Simplify commands: dp record instead of dp record --source claude-code

Can be overridden per-command with the --source flag.

known_tools

Comma-separated list of known tool names for similarity matching. Default: Read,Write,Edit,Bash,Glob,Grep,Task,WebFetch,WebSearch,NotebookEdit

Use this to:

  • Customize the tool set for your environment
  • Add custom tools to the suggestion engine
  • Match your AI coding tool’s specific tool naming conventions

Can be overridden in dp similar with the --known flag.

default_format

Default export format: json or csv. Default: json

Use this to:

  • Set a preferred output format for exports
  • Simplify commands: dp export instead of dp export --format csv

Can be overridden in dp export with the --format flag.

Configuration precedence (highest to lowest):

  1. Command-line flags (e.g., --db, --source)
  2. Config file values
  3. Built-in defaults
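The precedence chain reduces to a first-non-empty lookup. A sketch, treating the empty string as "not set" (the helper name resolve is illustrative):

```go
package main

import "fmt"

// resolve applies the documented precedence: command-line flag,
// then config file value, then built-in default.
func resolve(flagVal, fileVal, def string) string {
	if flagVal != "" {
		return flagVal
	}
	if fileVal != "" {
		return fileVal
	}
	return def
}

func main() {
	fmt.Println(resolve("cursor", "claude-code", "")) // cursor
	fmt.Println(resolve("", "claude-code", ""))       // claude-code
	fmt.Println(resolve("", "", "json"))              // json
}
```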

To reset a configuration value to its default, delete the key from ~/.dp/config.toml manually or set it to an empty string:

$ dp config default_source ""
Updated configuration: default_source = (unset)

If the config file doesn’t exist, it’s created automatically on first write. The file is TOML formatted for easy manual editing:

db_path = "/home/user/.dp/desires.db"
default_source = "claude-code"
known_tools = ["Read", "Write", "Edit", "Bash", "Glob", "Grep", "Task", "WebFetch", "WebSearch", "NotebookEdit"]
default_format = "json"

Integrations

dp uses a source plugin system to integrate with different AI coding assistants. Each AI tool has its own output format, hook mechanism, and session model. Source plugins abstract these differences behind a common interface, allowing dp to record desires and invocations from any tool.

How It Works

  1. Hook Installation: The AI tool (like Claude Code) provides event hooks that trigger on tool calls or failures. dp installs shell commands as hook handlers.
  2. Payload Extraction: When the hook fires, it passes a JSON payload to dp. The source plugin parses this payload and extracts universal fields.
  3. Normalization: The plugin maps tool-specific fields (like Claude Code’s session_id) to universal fields (like instance_id).
  4. Storage: dp writes the normalized data to its SQLite database.

Plugin Architecture

Every source plugin implements the source.Source interface:

type Source interface {
    Name() string
    Extract(raw []byte) (*Fields, error)
}

The Extract method receives raw bytes (usually JSON) and returns structured Fields:

type Fields struct {
    ToolName   string          // Required: the tool that was invoked
    InstanceID string          // Optional: session or invocation ID
    ToolInput  json.RawMessage // Optional: raw JSON input to the tool
    CWD        string          // Optional: working directory
    Error      string          // Optional: error message (for failures)
    Extra      map[string]json.RawMessage // Source-specific fields
}
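A minimal Extract implementation might look like the following. The payload shape ({"tool": ..., "session": ..., "err": ...}) is a made-up example format, and Fields is reproduced here in reduced form so the sketch runs standalone; a real plugin would use the actual source.Fields type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Fields is a reduced local mirror of the struct documented above.
type Fields struct {
	ToolName   string
	InstanceID string
	Error      string
}

// myToolSource parses a hypothetical AI tool's failure payload.
type myToolSource struct{}

func (myToolSource) Name() string { return "my-tool" }

// Extract maps the tool-specific payload onto universal fields.
func (myToolSource) Extract(raw []byte) (*Fields, error) {
	var payload struct {
		Tool    string `json:"tool"`
		Session string `json:"session"`
		Err     string `json:"err"`
	}
	if err := json.Unmarshal(raw, &payload); err != nil {
		return nil, err
	}
	if payload.Tool == "" {
		return nil, fmt.Errorf("missing tool name")
	}
	return &Fields{ToolName: payload.Tool, InstanceID: payload.Session, Error: payload.Err}, nil
}

func main() {
	f, err := myToolSource{}.Extract([]byte(`{"tool":"read_file","session":"abc","err":"unknown tool"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(f.ToolName, f.Error) // read_file unknown tool
}
```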

Plugins can optionally implement source.Installer to support dp init:

type Installer interface {
    Install(opts InstallOpts) error
}

This allows dp init --source <name> to automatically configure hooks in the AI tool’s settings.

Currently Supported Tools

Claude Code

Status: Fully supported

Claude Code provides PostToolUseFailure and PostToolUse hooks. dp uses these to capture failed tool calls (for desires) or all tool calls (for invocations).

See the Claude Code Integration Guide for setup instructions and details.

Planned Integrations

The following tools are planned but not yet implemented:

  • Cursor: Cursor AI editor (pending hook API documentation)
  • Gemini CLI: Google’s AI CLI (pending output format spec)
  • GitHub Copilot CLI: gh copilot command output parsing
  • Cody: Sourcegraph’s Cody assistant

Writing Your Own Plugin

If you’re using an AI tool that dp doesn’t yet support, you can write a plugin. It’s just a Go file that implements source.Source and calls source.Register in init().

See Writing a Source Plugin for a complete guide with examples.

Plugin Registry

All plugins self-register at startup via init() functions. dp discovers plugins by importing them:

import (
    _ "github.com/scbrown/desire-path/internal/source" // registers claude-code
    // Add more plugin imports here
)

List available plugins:

dp init --list

This shows all registered source names that can be used with --source.

Hook Execution Model

dp hooks are designed to be:

  • Asynchronous: The AI tool doesn’t block waiting for dp to finish
  • Isolated: dp failures don’t affect the AI tool’s operation
  • Lightweight: Writes are fast; database is append-only with WAL mode

Typical hook latency: <10ms for desire recording, <20ms for invocation ingestion.

Next Steps

Claude Code Integration

Claude Code is Anthropic’s official CLI for Claude. It provides a hook system that allows external commands to run on various events. dp integrates with Claude Code by installing hooks that capture tool call failures (and optionally all tool calls) for analysis.

Quick Setup

dp init --source claude-code

This command updates ~/.claude/settings.json to add a PostToolUseFailure hook. It’s idempotent—safe to run multiple times.

What Are Claude Code Hooks?

Claude Code fires hooks at specific lifecycle events. The most relevant for dp:

  • PostToolUseFailure: Fires when a tool call fails (tool not found, invalid input, execution error)
  • PostToolUse: Fires after every tool call, whether it succeeds or fails

Hooks receive a JSON payload describing the event. They execute asynchronously—Claude Code doesn’t wait for the hook to complete, so dp processing never slows down your session.

Hook Configuration

Default Setup (Failures Only)

dp init --source claude-code writes this to ~/.claude/settings.json:

{
  "hooks": {
    "PostToolUseFailure": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "dp record --source claude-code",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}

The matcher: ".*" means "match every tool name." Every time a tool call fails, Claude Code pipes the failure JSON to dp record --source claude-code via stdin. dp parses the JSON, extracts fields, and writes a desire record to ~/.dp/desires.db.

Full Tracking (Successes + Failures)

To track all tool invocations (not just failures), enable full tracking:

dp init --source claude-code --track-all

This adds two additional hooks:

{
  "hooks": {
    "PostToolUseFailure": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "dp record --source claude-code",
            "timeout": 5000
          },
          {
            "type": "command",
            "command": "dp ingest --source claude-code",
            "timeout": 5000
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "dp ingest --source claude-code",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}

Now:

  • PostToolUseFailure runs both dp record (for desires) and dp ingest (for invocations)
  • PostToolUse runs dp ingest for all successful calls

This generates more data—every tool call fires a hook—so only enable it if you need invocation-level analytics (success rates, call frequency, session analysis, etc.).

Hook Payload Format

Claude Code passes a JSON object on stdin. Example for a failed tool call:

{
  "session_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab",
  "hook_event_name": "PostToolUseFailure",
  "tool_name": "read_file",
  "tool_use_id": "toolu_01ABC123",
  "tool_input": {
    "file_path": "/tmp/nonexistent.txt"
  },
  "error": "File not found: /tmp/nonexistent.txt",
  "cwd": "/home/user/project",
  "transcript_path": "/home/user/.claude/transcripts/2026-02-09-session.json",
  "permission_mode": "normal"
}

Field Mapping

dp’s claude-code plugin maps these fields to universal Fields:

| Claude Code Field | Universal Field | Notes |
|---|---|---|
| tool_name | ToolName | Required |
| session_id | InstanceID | Session identifier |
| tool_input | ToolInput | Preserved as raw JSON |
| cwd | CWD | Working directory |
| error | Error | Error message (only present on failures) |

Everything else goes into Extra:

  • tool_use_id: Claude’s internal ID for the tool call
  • transcript_path: Path to the session transcript file
  • hook_event_name: Which hook fired (PostToolUseFailure or PostToolUse)
  • permission_mode: Permission level (normal, strict, etc.)

These fields are stored in the metadata column as JSON, available for queries but not indexed.
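Applied to the sample payload above, the extracted result looks roughly like this. The Fields type is copied from dp's source package so the sketch is self-contained; the exact contents of Extra are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Fields as defined by dp's source package (copied here so the
// example compiles on its own).
type Fields struct {
	ToolName   string
	InstanceID string
	ToolInput  json.RawMessage
	CWD        string
	Error      string
	Extra      map[string]json.RawMessage
}

// extracted mirrors what the claude-code plugin would produce for the
// sample PostToolUseFailure payload shown above.
var extracted = &Fields{
	ToolName:   "read_file",
	InstanceID: "a1b2c3d4-5678-90ab-cdef-1234567890ab",
	ToolInput:  json.RawMessage(`{"file_path":"/tmp/nonexistent.txt"}`),
	CWD:        "/home/user/project",
	Error:      "File not found: /tmp/nonexistent.txt",
	Extra: map[string]json.RawMessage{
		"tool_use_id":     json.RawMessage(`"toolu_01ABC123"`),
		"transcript_path": json.RawMessage(`"/home/user/.claude/transcripts/2026-02-09-session.json"`),
		"hook_event_name": json.RawMessage(`"PostToolUseFailure"`),
		"permission_mode": json.RawMessage(`"normal"`),
	},
}

func main() {
	fmt.Println(extracted.ToolName) // read_file
}
```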

Commands Used by Hooks

dp record --source claude-code

Records a desire (failed tool call). Reads JSON from stdin, extracts fields using the claude-code plugin, generates a UUID and timestamp, and writes to the desires table.

Example manual invocation:

echo '{"tool_name":"read_file","error":"unknown tool","session_id":"test","cwd":"/tmp"}' \
  | dp record --source claude-code

dp ingest --source claude-code

Records an invocation (any tool call, success or failure). Reads JSON from stdin, extracts fields, sets is_error based on presence of error field, and writes to the invocations table.

Example manual invocation:

echo '{"tool_name":"Read","session_id":"test","cwd":"/tmp"}' \
  | dp ingest --source claude-code

Idempotency

dp init --source claude-code is idempotent. If hooks already exist, it won’t add duplicates. It merges the new hooks into the existing hooks config, preserving any other hooks you’ve configured.

Run it multiple times safely:

dp init --source claude-code
dp init --source claude-code
dp init --source claude-code

The second and third runs do nothing (hook already present).

Switching from default to full tracking:

dp init --source claude-code              # adds PostToolUseFailure → dp record
dp init --source claude-code --track-all  # adds PostToolUse/PostToolUseFailure → dp ingest

The second command adds the ingest hooks without removing the record hook. This is safe—both commands write to different tables (desires vs invocations).

Hook Execution Details

  • Timeout: 5 seconds (configurable in the JSON). If dp takes longer, Claude Code kills the process.
  • Stdin/Stdout: Hook receives JSON on stdin. Stdout/stderr are discarded (not shown to the user).
  • Exit Code: Ignored. Hook failures don’t affect Claude Code.
  • Async: Hook runs in the background. Claude Code continues immediately.

Typical execution time: ~5-10ms for dp record, ~10-20ms for dp ingest.

Troubleshooting

Hooks Not Firing

Check that dp is in your PATH:

which dp

If it’s not found, Claude Code can’t execute the hook. Install dp to a location in PATH (like $HOME/go/bin or /usr/local/bin).

Verify the hooks are installed:

cat ~/.claude/settings.json | jq '.hooks'

You should see PostToolUseFailure with a dp record command.

No Desires Being Recorded

Manually trigger a failure and check the database:

echo '{"tool_name":"test_tool","error":"test error","session_id":"manual","cwd":"/tmp"}' \
  | dp record --source claude-code

dp list --limit 1

If the desire appears, hooks are working. If not, check that ~/.dp/desires.db is writable.

dp doesn't currently support verbose logging. To debug at the syscall level, trace the process with strace:

strace -e trace=file,write dp record --source claude-code < payload.json

Database Locked Errors

SQLite uses WAL mode for concurrent reads/writes, but if another process holds a write lock (like a long-running transaction), writes may block briefly. This is rare—most writes complete in milliseconds.

If you see “database is locked” errors frequently:

  1. Check for long-running dp commands (like dp export on a huge database)
  2. Verify no other process is holding the database open
  3. Check disk I/O (slow disks can cause lock contention)

SQLite’s busy timeout is set to 5 seconds—writes retry automatically during that window.

Data Storage

Desires Table

Schema:

CREATE TABLE desires (
    id TEXT PRIMARY KEY,
    tool_name TEXT NOT NULL,
    tool_input TEXT,
    error TEXT NOT NULL,
    source TEXT,
    session_id TEXT,
    cwd TEXT,
    timestamp TEXT NOT NULL,
    metadata TEXT
);

Each dp record writes one row.

Invocations Table

Schema:

CREATE TABLE invocations (
    id TEXT PRIMARY KEY,
    source TEXT NOT NULL,
    instance_id TEXT,
    host_id TEXT,
    tool_name TEXT NOT NULL,
    is_error INTEGER NOT NULL,
    error TEXT,
    cwd TEXT,
    timestamp TEXT NOT NULL,
    metadata TEXT
);

Each dp ingest writes one row. is_error is 1 if error field was present in the payload, 0 otherwise.

Query Examples

List all Claude Code desires:

dp list --source claude-code

View aggregated paths:

dp paths

Inspect a specific tool name:

dp inspect read_file

Show invocation stats (requires --track-all):

dp stats --invocations

List all invocations from a session:

sqlite3 ~/.dp/desires.db "SELECT tool_name, is_error, timestamp FROM invocations WHERE instance_id = 'session-id-here' ORDER BY timestamp;"

Performance Notes

  • Desire recording: ~5ms per record (dominated by SQLite write + fsync)
  • Invocation ingestion: ~10ms per record (slightly larger payloads)
  • Database size: ~1KB per desire, ~1.5KB per invocation (including JSON metadata)
  • After 10,000 desires: ~10MB database
  • After 100,000 invocations: ~150MB database

WAL mode keeps reads fast even during writes. Queries are instant up to ~1M records.

PreToolUse Hook: Active Correction

Beyond recording failures, dp can actively intercept and correct tool calls using the PreToolUse hook.

Setup

dp pave --hook

This adds a PreToolUse hook to ~/.claude/settings.json:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "dp pave-check",
            "timeout": 3000
          }
        ]
      }
    ]
  }
}

How It Works

The PreToolUse hook fires before every tool call. dp pave-check reads the JSON payload from stdin and performs two checks:

1. Tool Name Blocking

If the tool name matches a tool-name alias (e.g., read_file → Read), the hook exits with code 2 and writes an error message to stderr. Claude Code blocks the call and shows the message, prompting the AI to use the correct tool name.

2. Parameter Rewriting

If the tool name is valid but parameters contain known mistakes, the hook rewrites them via updatedInput. For example, if you have a rule --cmd scp --flag r R:

Input payload:

{
  "tool_name": "Bash",
  "tool_input": {"command": "scp -r file.txt host:/"}
}

Hook output (exit 0):

{
  "hookSpecificOutput": {
    "permissionDecision": "allow",
    "updatedInput": {"command": "scp -R file.txt host:/"},
    "additionalContext": "Corrected: -r → -R"
  }
}

Claude Code uses the corrected command value transparently. The AI sees the additionalContext note explaining what changed.

Exit Code Protocol

| Exit Code | Meaning | Behavior |
|---|---|---|
| 0 (no output) | Allow as-is | Tool call proceeds unchanged |
| 0 (with JSON) | Allow with corrections | Tool call proceeds with updatedInput |
| 2 | Block | Tool call is rejected, error shown to AI |

Fail-Safe Design

The hook is designed to never break your workflow:

  • JSON parse errors → allow (don’t block on malformed payloads)
  • Database unavailable → allow (don’t block if store is down)
  • Rule application errors → allow (skip broken rules)
  • Timeout (3s) → Claude Code kills the hook and proceeds

Creating Correction Rules

See dp alias for the full flag reference. Quick examples:

# Flag correction
dp alias --cmd scp --flag r R

# Command substitution
dp alias --cmd grep --replace rg

# Regex replacement
dp alias --tool Bash --param command --regex "curl -k" "curl --cacert cert.pem"

See dp pave for more details on the hook mechanism and troubleshooting.

Next Steps

Writing a Source Plugin

Source plugins let dp integrate with any AI coding assistant. If you’re using a tool that dp doesn’t yet support, you can write a plugin in about 50 lines of Go. This guide shows you how.

Plugin Interface

Every plugin implements source.Source:

package source

type Source interface {
    // Name returns the unique identifier for this source (e.g., "my-tool").
    Name() string

    // Extract parses raw bytes and returns universal fields.
    Extract(raw []byte) (*Fields, error)
}

The Extract method receives raw bytes (usually JSON from a hook or log) and returns structured fields:

type Fields struct {
    ToolName   string          // Required: the tool that was invoked
    InstanceID string          // Optional: session or invocation ID
    ToolInput  json.RawMessage // Optional: raw JSON input to the tool
    CWD        string          // Optional: working directory
    Error      string          // Optional: error message (for failures)
    Extra      map[string]json.RawMessage // Source-specific fields
}

Only ToolName is required. Everything else is optional. Source-specific fields (anything not mapped to the universal fields above) go into Extra as raw JSON.

Minimal Plugin Example

Here’s a skeleton plugin for a hypothetical “my-tool”:

package source

import (
    "encoding/json"
    "fmt"
)

// myTool implements Source for the "my-tool" AI assistant.
type myTool struct{}

// Register the plugin at startup.
func init() {
    Register(&myTool{})
}

// Name returns the source identifier.
func (m *myTool) Name() string {
    return "my-tool"
}

// Extract parses the my-tool JSON format and returns Fields.
func (m *myTool) Extract(raw []byte) (*Fields, error) {
    // Example input format:
    // {"name": "read_file", "input": {...}, "err": "not found", "session": "abc123", "dir": "/tmp"}

    var payload struct {
        Name    string          `json:"name"`
        Input   json.RawMessage `json:"input"`
        Err     string          `json:"err"`
        Session string          `json:"session"`
        Dir     string          `json:"dir"`
    }

    if err := json.Unmarshal(raw, &payload); err != nil {
        return nil, fmt.Errorf("my-tool: parsing JSON: %w", err)
    }

    if payload.Name == "" {
        return nil, fmt.Errorf("my-tool: missing required field: name")
    }

    fields := &Fields{
        ToolName:   payload.Name,
        InstanceID: payload.Session,
        ToolInput:  payload.Input,
        CWD:        payload.Dir,
        Error:      payload.Err,
    }

    return fields, nil
}

Save this as internal/source/mytool.go.

Field Mapping Guidelines

Map your tool’s output to universal fields using these conventions:

| Universal Field | Purpose | Examples |
|---|---|---|
| ToolName | The tool/function/command that was invoked | "Read", "execute_shell", "query_database" |
| InstanceID | Session, request, or invocation ID | Session UUID, request trace ID, user ID |
| ToolInput | Raw JSON input parameters | Tool arguments as JSON (preserve as-is) |
| CWD | Working directory at time of call | "/home/user/project" |
| Error | Error message if the call failed | "File not found", "Permission denied" |
| Extra | Everything else | Anything specific to your tool |

ToolName (Required)

Must not be empty. This is the key field—dp aggregates desires by tool name. Use the name the AI tried to invoke, even if it doesn’t exist.

InstanceID (Optional)

Ideally a session or request ID that groups related tool calls. Used for:

  • Session-level analysis
  • Tracing a sequence of calls
  • Filtering by session in queries

If your tool doesn’t have sessions, use a user ID, request timestamp, or leave it empty.

ToolInput (Optional)

Preserve the original input as raw JSON. Don’t parse or transform it—just copy the bytes. This allows:

  • Inspecting common input patterns
  • Debugging why a tool failed
  • Replaying tool calls

If input is not JSON, encode it as a JSON string (json.Marshal handles quoting and escaping; naive concatenation breaks on quotes in the input):

encoded, _ := json.Marshal(rawInput) // marshaling a string never fails
fields.ToolInput = encoded

CWD (Optional)

The working directory when the tool was invoked. Useful for:

  • Resolving relative paths
  • Understanding context
  • Project-level aggregation

If your tool doesn’t provide this, leave it empty.

Error (Optional)

Only set this if the tool call failed. For failures, this field should contain a human-readable error message. For successes, leave it empty.

dp uses Error != "" to determine if a desire should be recorded.

Extra (Optional)

Everything not mapped to the universal fields goes here. Examples:

  • Internal IDs (like Claude Code’s tool_use_id)
  • Metadata (like transcript_path, permission_mode)
  • Timing information (like duration_ms)
  • Custom tags or labels

Store as raw JSON:

fields.Extra = map[string]json.RawMessage{
    "tool_id": json.RawMessage(`"xyz123"`),
    "duration_ms": json.RawMessage(`42`),
}

Full Plugin with Extra Fields

Expanding the example:

func (m *myTool) Extract(raw []byte) (*Fields, error) {
    var payload map[string]json.RawMessage
    if err := json.Unmarshal(raw, &payload); err != nil {
        return nil, fmt.Errorf("my-tool: parsing JSON: %w", err)
    }

    var toolName string
    if v, ok := payload["name"]; ok {
        if err := json.Unmarshal(v, &toolName); err != nil {
            return nil, fmt.Errorf("my-tool: parsing name: %w", err)
        }
    }
    if toolName == "" {
        return nil, fmt.Errorf("my-tool: missing required field: name")
    }

    fields := &Fields{ToolName: toolName}

    // Map optional universal fields
    if v, ok := payload["session"]; ok {
        json.Unmarshal(v, &fields.InstanceID)
    }
    if v, ok := payload["input"]; ok {
        fields.ToolInput = v
    }
    if v, ok := payload["dir"]; ok {
        json.Unmarshal(v, &fields.CWD)
    }
    if v, ok := payload["err"]; ok {
        json.Unmarshal(v, &fields.Error)
    }

    // Collect everything else into Extra
    knownFields := map[string]bool{
        "name": true, "session": true, "input": true, "dir": true, "err": true,
    }
    extra := make(map[string]json.RawMessage)
    for k, v := range payload {
        if !knownFields[k] {
            extra[k] = v
        }
    }
    if len(extra) > 0 {
        fields.Extra = extra
    }

    return fields, nil
}

This pattern—unmarshal to map[string]json.RawMessage, extract known fields, collect unknowns into Extra—works for most JSON-based tools.

Installer Interface (Optional)

If you want to support dp init --source my-tool, implement source.Installer:

type Installer interface {
    Install(opts InstallOpts) error
}

type InstallOpts struct {
    SettingsPath string // Override settings file location (empty = use default)
    TrackAll     bool   // Install hooks for all invocations (not just failures)
}

Example:

func (m *myTool) Install(opts InstallOpts) error {
    settingsPath := opts.SettingsPath
    if settingsPath == "" {
        home, err := os.UserHomeDir()
        if err != nil {
            return fmt.Errorf("determine home directory: %w", err)
        }
        settingsPath = filepath.Join(home, ".mytool", "config.json")
    }

    // Read existing config
    data, err := os.ReadFile(settingsPath)
    if os.IsNotExist(err) {
        data = []byte("{}")
    } else if err != nil {
        return fmt.Errorf("read config: %w", err)
    }

    var config map[string]interface{}
    if err := json.Unmarshal(data, &config); err != nil {
        return fmt.Errorf("parse config: %w", err)
    }

    // Add hook configuration
    // (Details depend on your tool's hook system)
    config["hooks"] = map[string]interface{}{
        "on_failure": "dp record --source my-tool",
    }

    if opts.TrackAll {
        config["hooks"].(map[string]interface{})["on_call"] = "dp ingest --source my-tool"
    }

    // Write config back
    newData, err := json.MarshalIndent(config, "", "  ")
    if err != nil {
        return fmt.Errorf("marshal config: %w", err)
    }

    if err := os.MkdirAll(filepath.Dir(settingsPath), 0o700); err != nil {
        return fmt.Errorf("create config directory: %w", err)
    }

    if err := os.WriteFile(settingsPath, newData, 0o644); err != nil {
        return fmt.Errorf("write config: %w", err)
    }

    return nil
}

Make sure the implementation is idempotent—running dp init --source my-tool twice shouldn’t break anything or add duplicate hooks.
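One way to get idempotency with the simple map-based config shape from the example above is to merge instead of overwrite: check whether the hook command is already present before writing it. A sketch (the config shape is hypothetical, matching the example, not any real tool):

```go
package main

import "fmt"

// addHookIfMissing merges a hook command into cfg["hooks"] without
// clobbering other entries or installing the same command twice, so
// repeated Install() runs are no-ops.
func addHookIfMissing(cfg map[string]interface{}, event, command string) {
	hooks, ok := cfg["hooks"].(map[string]interface{})
	if !ok {
		hooks = map[string]interface{}{}
		cfg["hooks"] = hooks
	}
	if existing, ok := hooks[event].(string); ok && existing == command {
		return // already installed; nothing to do
	}
	hooks[event] = command
}

func main() {
	cfg := map[string]interface{}{}
	addHookIfMissing(cfg, "on_failure", "dp record --source my-tool")
	addHookIfMissing(cfg, "on_failure", "dp record --source my-tool") // second run: no change
	fmt.Println(cfg["hooks"].(map[string]interface{})["on_failure"])
}
```

The unconditional `config["hooks"] = ...` assignment in the Install example above would overwrite a user's existing hooks; merging like this preserves them.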

Registering the Plugin

In your plugin file’s init() function:

func init() {
    Register(&myTool{})
}

Then import the plugin package for side effects in cmd/dp/main.go:

package main

import (
    _ "github.com/scbrown/desire-path/internal/source" // registers claude-code
    _ "github.com/scbrown/desire-path/internal/source/mytool" // register your plugin
)

If your plugin lives in the same package as claudecode.go (i.e., internal/source/mytool.go), you don’t need a separate import—the package init() runs automatically.

Testing

Write tests in internal/source/mytool_test.go:

package source

import (
    "encoding/json"
    "testing"
)

func TestMyToolExtract(t *testing.T) {
    plugin := &myTool{}

    input := `{"name":"read_file","input":{"path":"/tmp/test.txt"},"err":"not found","session":"abc","dir":"/home/user"}`

    fields, err := plugin.Extract([]byte(input))
    if err != nil {
        t.Fatalf("Extract failed: %v", err)
    }

    if fields.ToolName != "read_file" {
        t.Errorf("ToolName = %q, want %q", fields.ToolName, "read_file")
    }
    if fields.InstanceID != "abc" {
        t.Errorf("InstanceID = %q, want %q", fields.InstanceID, "abc")
    }
    if fields.Error != "not found" {
        t.Errorf("Error = %q, want %q", fields.Error, "not found")
    }
    if fields.CWD != "/home/user" {
        t.Errorf("CWD = %q, want %q", fields.CWD, "/home/user")
    }

    var toolInput map[string]interface{}
    if err := json.Unmarshal(fields.ToolInput, &toolInput); err != nil {
        t.Fatalf("ToolInput not valid JSON: %v", err)
    }
    if toolInput["path"] != "/tmp/test.txt" {
        t.Errorf("ToolInput.path = %v, want %q", toolInput["path"], "/tmp/test.txt")
    }
}

func TestMyToolMissingName(t *testing.T) {
    plugin := &myTool{}

    input := `{"input":{},"err":"error"}`

    _, err := plugin.Extract([]byte(input))
    if err == nil {
        t.Fatal("expected error for missing name, got nil")
    }
}

Run tests:

go test ./internal/source/...

Integration Testing

Test the full pipeline:

# Build dp with your plugin
go install ./cmd/dp

# Test extraction via dp record
echo '{"name":"test_tool","err":"test error","session":"manual"}' | dp record --source my-tool

# Verify it was recorded
dp list --limit 1

Real-World Example: Claude Code Plugin

See internal/source/claudecode.go for a complete, production-quality plugin. Key features:

  • Strict JSON validation with helpful error messages
  • Mapping Claude-specific fields (session_id, tool_use_id) to universal + Extra
  • Idempotent Install() implementation with hook merging
  • Comprehensive tests

Use it as a reference when building your own plugin.

Plugin Checklist

Before shipping your plugin:

  • Implements source.Source interface
  • Name() returns a unique, kebab-case identifier
  • Extract() validates ToolName is non-empty
  • Extract() returns helpful error messages
  • Maps universal fields correctly
  • Stores extra fields in Extra
  • Registers via source.Register() in init()
  • Imported in cmd/dp/main.go
  • Has tests covering success and error cases
  • Documented in docs/book/src/integrations/<plugin-name>.md
  • (Optional) Implements source.Installer for dp init support
  • (Optional) Install() is idempotent

Distribution

If your plugin is for a public tool, consider submitting it upstream via a pull request to the desire-path repo. If it’s for an internal/proprietary tool, maintain it in a separate module and import it:

import (
    _ "github.com/yourorg/desire-path-my-tool-plugin"
)

Plugins don’t need to live in the main dp repository—they just need to call source.Register() at startup.

Next Steps

Evaluations

Periodic data-driven evaluations of desirepath’s collected failure data. Each evaluation analyzes the current dataset, identifies actionable patterns, and tracks improvements over time.

| Evaluation | Date | Records | Key Finding |
|---|---|---|---|
| 2026-03-05 Baseline | 2026-03-05 | 13,925 | 82% of failures are Bash CLI misuse; 3 aliases prevent ~250/mo |

Evaluation: 2026-03-05 Baseline

  • Analyst: aegis/crew/stryder
  • Date: 2026-03-05
  • Dataset: 13,925 desires, 13,806 invocations (2026-02-09 to 2026-03-05)
  • Source: 100% Claude Code PostToolUseFailure hooks across Gas Town multi-agent system

Executive Summary

First comprehensive evaluation of desirepath data from a production multi-agent system. 13,925 tool failures recorded across 25 days from ~15 agents. Three key findings: (1) CLI misuse dominates failures at 93%, (2) only 3 aliases exist covering a fraction of correctable errors, (3) MCP infrastructure downtime causes 440 silent failures with no alerting.

Dataset Overview

| Metric | Value |
|---|---|
| Total desires | 13,925 |
| Unique tool names | 29 |
| Date range | 2026-02-09 to 2026-03-05 (25 days) |
| Daily average | 557 failures/day |
| Source | claude-code (13,903), transcript-analysis (22) |
| Aliases configured | 3 |

Failure Distribution by Tool

| Tool | Count | % | Category |
|---|---|---|---|
| Bash | 12,932 | 92.9% | Command execution |
| Read | 502 | 3.6% | File access |
| mcp__homelab__batch_probe | 193 | 1.4% | MCP infrastructure |
| mcp__homelab__prometheus_query | 49 | 0.4% | MCP infrastructure |
| WebFetch | 47 | 0.3% | Network |
| mcp__homelab__container_status | 43 | 0.3% | MCP infrastructure |
| mcp__homelab__service_health | 41 | 0.3% | MCP infrastructure |
| Other MCP tools | 76 | 0.5% | MCP infrastructure |
| Other | 42 | 0.3% | Various |

Analysis by Error Category

1. CLI Misuse (Bash) — 93% of all failures

The 12,932 Bash failures break down into subcategories:

| Subcategory | Count | % of Bash | Actionable? |
|---|---|---|---|
| gt unknown commands | 743 | 5.7% | Yes — document or implement |
| bd unknown flags | 364 | 2.8% | Yes — aliases or flag additions |
| Command not found | 178 | 1.4% | Yes — install or alias |
| Git push rejected | 334 | 2.6% | Partially — workflow issue |
| Not a git repo | 188 | 1.5% | Yes — cwd detection |
| Git unstaged changes | 62 | 0.5% | Partially — workflow issue |
| bd sync required | 112 | 0.9% | Yes — auto-sync or docs |
| Normal dev errors | ~10,951 | 84.6% | No — expected during development |

Key insight: ~15% of Bash failures (1,981) are correctable through aliases, documentation, or tooling improvements. The remaining 85% are normal development friction (test failures, build errors, typos).

2. Top Non-Existent GT Commands

Agents repeatedly try commands that don’t exist:

| Command | Count | What Agent Expected |
|---|---|---|
| gt deacon pending | 175 | Check deacon task queue |
| gt await-signal | 101 | Wait for async event |
| gt mol hook | 43 | Hook a molecule (correct: gt hook) |
| gt health | 41 | System health check |
| gt plugin status | 39 | Check plugin state |
| gt mq integration list | 28 | List MQ integrations |
| gt wisp | 26 | Manage wisps directly |
| gt plugin due | 25 | Check plugin schedule |
| gt sessions | 20 | List active sessions (correct: gt session) |
| gt rig health | 19 | Rig health check |

Recommendation: File desire-path beads for the top 5. Either implement or create aliases with helpful error messages.

3. Top Non-Existent BD Flags

| Flag Attempted | Count | Correct Alternative |
|---|---|---|
| --gated | 64 | (removed feature) |
| --wisp | 35 | (not a filter) |
| --assign | 27 | --assignee (-a) |
| --rig | 23 | (use prefix routing) |
| --comment | 21 | --append-notes |
| --prefix | 14 | (use prefix routing) |
| --mol | 11 | (not a filter) |
| --stdin | 11 | (pipe via heredoc) |
| --owner | 10 | --assignee (-a) |
| --epic | 7 | (not implemented) |

Current alias coverage: Only 3 aliases exist:

  1. --assign → --assignee (bd flag) — covers 27 failures
  2. --owner → --assignee (bd flag) — covers 10 failures
  3. bd note X → bd update X --append-notes — covers ~8 failures

Gap: --comment → --append-notes could prevent 21 more failures/month. --gated appears 64 times but was a removed feature — needs a helpful error message.

4. Read Tool Failures

| Error Type | Count | Root Cause |
|---|---|---|
| EISDIR (read directory) | 162 | Agent used Read instead of ls/Bash |
| File not found | 218 | Agent guessed wrong path |
| File too large | 8 | Exceeded 25K token limit |

Recommendation: Bobbin could inject directory tree output on EISDIR errors (bead aegis-qalm1v filed). tree command now installed on luvu + kota.

5. MCP Server Downtime

| MCP Tool | Failures | Error |
|---|---|---|
| batch_probe | 193 | no available server |
| prometheus_query | 49 | no available server |
| container_status | 43 | no available server |
| service_health | 41 | no available server |
| list_containers | 21 | no available server |
| container_logs | 20 | no available server |
| Other MCP | 73 | no available server |

Total: 440 MCP failures, all “no available server” — homelab-mcp was down. No alerts fired. Bead aegis-ixx1e9 filed for maldoon to add monitoring.

6. Env-Need Analysis (dp env-needs output)

dp env-needs reports 43 “missing tools” but many are false positives. The env-need categorizer incorrectly flags shell builtins and installed tools:

| Reported Missing | Actual Status | Issue |
|---|---|---|
| ls, cd, cat, echo | Shell builtins | False positive — these are Bash builtins/coreutils, always available |
| ssh, git, grep | Installed | False positive — exit code != "not found" |
| just | Not installed | True positive — just (justfile runner) not on luvu |
| dig, nslookup, host | Not installed | True positive — DNS tools missing |
| sqlite3 | Not installed | True positive — was missing, now installed |
| python3 | Installed | False positive — python3 exists, python doesn't |

Recommendation: env-need categorizer needs refinement. Should check if the command actually produced “command not found” vs other errors. High false positive rate (>50%) reduces trust in the output.
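The suggested fix can be sketched as a small predicate: classify a command as missing only when the error text itself says so, not merely because the exit code was nonzero. This is a hypothetical helper, not dp's current categorizer:

```go
package main

import (
	"fmt"
	"strings"
)

// isCommandNotFound reports whether an error message actually indicates
// a missing executable, rather than any arbitrary failure. The substring
// set is illustrative and would need tuning against real dp data.
func isCommandNotFound(errText string) bool {
	t := strings.ToLower(errText)
	return strings.Contains(t, "command not found") ||
		strings.Contains(t, "executable file not found")
}

func main() {
	fmt.Println(isCommandNotFound("bash: just: command not found")) // true
	fmt.Println(isCommandNotFound("exit status 1: tests failed"))   // false
}
```

Under this rule, a failing `grep` invocation ("No such file or directory") would no longer be reported as a missing tool, which addresses the largest class of false positives above.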

Agent Workspace Analysis

| Workspace | Failures | Primary Issues |
|---|---|---|
| deacon | 3,007 | gt mol squash wrong flags, patrol loop errors |
| aegis/crew/ellie | 1,042 | gt mol squash, read failures |
| deacon/dogs/boot | 982 | Patrol loop startup errors |
| aegis/crew/malcolm | 627 | CLI misuse, path guessing |
| aegis/witness | 603 | Patrol loop errors |
| aegis/crew/goldblum | 603 | Build errors, flag guessing |
| aegis/refinery/rig | 510 | Merge queue errors |
| aegis/crew/ian | 443 | Build/test errors |
| bucket/refinery/rig | 418 | Merge queue errors |
| mayor | 394 | CLI dispatch errors |

Insight: Deacon + dogs account for 29% of all failures. Most are repetitive patrol loop errors (gt mol squash with wrong flags). A single alias or patrol fix would eliminate thousands of failures.

Turn Pattern Analysis

dp turns shows tool call sequences. The dominant pattern is long Bash-only turns (264, 152, 151 calls). This indicates agents spending many turns retrying failed Bash commands rather than changing approach.

Recommendation: Consider a “struggling detection” feature — if an agent has >5 consecutive Bash failures on similar commands, surface documentation or suggest an alternative approach.

Alias Effectiveness

Current Aliases (3)

| Alias | Type | Estimated Monthly Prevents |
|---|---|---|
| --assign → --assignee | flag (bd) | ~27 |
| --owner → --assignee | flag (bd) | ~10 |
| bd note → bd update --append-notes | regex | ~8 |
| Total | | ~45 |

Proposed Aliases (5)

| Alias | Type | Estimated Monthly Prevents |
|---|---|---|
| --comment → --append-notes | flag (bd) | ~21 |
| gt sessions → gt session | command | ~20 |
| gt mol hook → gt hook | command | ~43 |
| gt health → gt rig status | command | ~41 |
| --no-digest → (removed, explain) | flag (bd) | ~8 |
| Total | | ~133 |

Unrealized Value

  • With current 3 aliases: ~45 prevents/month (0.3% of failures)
  • With 8 aliases: ~178 prevents/month (1.3% of failures)
  • With doc-mapping for top 50 patterns: ~800 informed/month (5.7% of failures)

dp Feature Utilization Assessment

| dp Feature | Currently Used? | Value | Action Needed |
|---|---|---|---|
| dp record | Yes (PostToolUseFailure hook) | High | Working well |
| dp ingest | Yes (via record) | High | Working well |
| dp stats | Manually by operators | Medium | Could auto-report |
| dp paths | Manually by operators | Medium | Good for evaluation |
| dp aliases | Yes (3 configured) | High | Add 5 more aliases |
| dp pave --hook | Yes (PreToolUse) | High | Working well |
| dp pave --agents-md | Not used | Medium | Should generate rules |
| dp env-needs | Not used (high false positives) | Low | Needs refinement |
| dp turns | Not used | Low | Useful for evaluation only |
| dp similar | Not used | Low | Niche use case |
| dp suggest | Not implemented yet | High | Would synthesize all signals |
| dp serve | Not used | Low | No consumer yet |

Immediate Actions

  1. Add 5 new aliases — dp alias for --comment, gt sessions, gt mol hook, gt health, --no-digest
  2. Run dp pave --agents-md — Generate AGENTS.md rules from alias data and append to agent instruction files
  3. Fix env-needs false positives — Check for “command not found” in error text, not just exit code

Future Features Needed

  1. dp suggest (planned, dp-9 design exists) — Synthesize all data sources into prioritized recommendations
  2. dp map (bead aegis-420cz6) — Map documentation to failing tool patterns
  3. Struggling detection — Identify agents retrying same failure pattern
  4. Recovery tracking (bead aegis-gvr2vh) — Detect when fixes reduce failures

Methodology Notes

  • Data extracted via Python sqlite3 queries against ~/.dp/desires.db
  • dp CLI commands (stats, paths, env-needs, aliases, turns) used for built-in analytics
  • Error categorization done via substring matching on error text
  • Agent identification via cwd field (workspace path → agent name)
  • All counts are raw (no dedup by session or time window)

Next Evaluation

Schedule next evaluation for 2026-03-15 (10 days). Track:

  • Did the 5 new aliases reduce failures?
  • Did MCP monitoring (aegis-ixx1e9) reduce downtime?
  • Did bobbin tagging sweep improve injection quality (feedback noise ratio)?
  • Total desire count growth rate
  • New error patterns emerging

Architecture

dp is a single-binary CLI built in Go with minimal dependencies. It uses a plugin architecture for source integrations, a SQLite database for storage, and Levenshtein-based similarity matching for suggestions. This page explains how the pieces fit together.

Data Flow

graph LR
    A[AI Tool Hook] --> B[dp record/ingest]
    B --> C[Source.Extract]
    C --> D[Fields]
    D --> E[ingest.Ingest]
    E --> F[Invocation/Desire]
    F --> G[SQLite]
    G --> H[Queries]
    H --> I[CLI Output]
  1. Hook Trigger: AI tool (e.g., Claude Code) fires a hook on tool call failure (or success, if full tracking is enabled)
  2. Command Execution: Hook runs dp record or dp ingest with --source <name>, passing JSON via stdin
  3. Source Plugin: The named source plugin’s Extract method parses the JSON into universal Fields
  4. Normalization: ingest.Ingest converts Fields to Invocation or Desire, generating UUID and timestamp
  5. Storage: Data is written to SQLite database with WAL mode enabled
  6. Query: CLI commands (list, paths, inspect, etc.) query the database
  7. Output: Results rendered as table or JSON

Core Components

1. Source Plugin System

Located in internal/source/.

Registry Pattern: Plugins self-register via init() functions. The source package maintains a global registry mapping source names to Source implementations.

type Source interface {
    Name() string
    Extract(raw []byte) (*Fields, error)
}

Fields Struct: Universal representation of a tool call:

type Fields struct {
    ToolName   string          // Required
    InstanceID string          // Optional: session/request ID
    ToolInput  json.RawMessage // Optional: raw input params
    CWD        string          // Optional: working directory
    Error      string          // Optional: error message
    Extra      map[string]json.RawMessage // Source-specific fields
}

Installer Interface (optional):

type Installer interface {
    Install(opts InstallOpts) error
}

Allows dp init --source <name> to automatically configure hooks.

Current Plugins:

  • claude-code: Parses Claude Code hook JSON, maps session_id → InstanceID, extracts tool_use_id/transcript_path into Extra

Adding Plugins: Create a new file internal/source/<name>.go, implement Source, call Register() in init(), import package in cmd/dp/main.go.
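The registration flow above can be sketched as a self-contained program. The Fields struct and registry here are simplified stand-ins for internal/source, and the my-tool plugin and its JSON payload shape are hypothetical, invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the types in internal/source.
type Fields struct {
	ToolName   string
	InstanceID string
	Error      string
}

type Source interface {
	Name() string
	Extract(raw []byte) (*Fields, error)
}

var registry = map[string]Source{}

func Register(s Source) { registry[s.Name()] = s }

// myTool is a hypothetical plugin for an AI tool that emits
// {"tool": ..., "session": ..., "error": ...} JSON on failure.
type myTool struct{}

func (myTool) Name() string { return "my-tool" }

func (myTool) Extract(raw []byte) (*Fields, error) {
	var payload struct {
		Tool    string `json:"tool"`
		Session string `json:"session"`
		Error   string `json:"error"`
	}
	if err := json.Unmarshal(raw, &payload); err != nil {
		return nil, err
	}
	if payload.Tool == "" {
		return nil, fmt.Errorf("my-tool: missing tool name")
	}
	return &Fields{ToolName: payload.Tool, InstanceID: payload.Session, Error: payload.Error}, nil
}

// Plugins self-register at import time.
func init() { Register(myTool{}) }

func main() {
	raw := []byte(`{"tool":"read_file","session":"s1","error":"unknown tool"}`)
	f, err := registry["my-tool"].Extract(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(f.ToolName) // read_file
}
```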

See Writing a Plugin for details.

2. Ingest Pipeline

Located in internal/ingest/.

Function: Ingest(ctx, store, raw, sourceName) orchestrates the pipeline:

  1. Fetch source plugin by name
  2. Call Extract(raw) to get Fields
  3. Validate ToolName is non-empty
  4. Convert Fields → model.Invocation
  5. Generate UUID and timestamp if missing
  6. Marshal Extra into Metadata JSON column
  7. Write to database via store.RecordInvocation()

Error Handling: Returns descriptive errors if source is unknown, extraction fails, or storage fails.
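The pipeline can be sketched as follows. The types are reduced stand-ins for internal/model and internal/store (the real pipeline also carries ToolInput, CWD, InstanceID, and Metadata), and newID stands in for github.com/google/uuid:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"time"
)

// Reduced stand-ins for the real model and store types.
type Fields struct{ ToolName, Error string }

type Invocation struct {
	ID, Source, ToolName, Error string
	Timestamp                   time.Time
}

type Store interface{ RecordInvocation(inv Invocation) error }

// newID is a stand-in for github.com/google/uuid.
func newID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// Ingest mirrors the steps above: extract, validate,
// normalize (generate ID and timestamp), and store.
func Ingest(s Store, extract func([]byte) (*Fields, error), raw []byte, source string) error {
	f, err := extract(raw)
	if err != nil {
		return fmt.Errorf("ingest: %s: extract: %w", source, err)
	}
	if f.ToolName == "" {
		return fmt.Errorf("ingest: %s: tool_name is required", source)
	}
	inv := Invocation{
		ID:        newID(),
		Source:    source,
		ToolName:  f.ToolName,
		Error:     f.Error,
		Timestamp: time.Now().UTC(),
	}
	return s.RecordInvocation(inv)
}

type memStore struct{ invs []Invocation }

func (m *memStore) RecordInvocation(inv Invocation) error {
	m.invs = append(m.invs, inv)
	return nil
}

func main() {
	s := &memStore{}
	extract := func(raw []byte) (*Fields, error) {
		return &Fields{ToolName: "read_file", Error: "unknown tool"}, nil
	}
	if err := Ingest(s, extract, nil, "claude-code"); err != nil {
		panic(err)
	}
	fmt.Println(s.invs[0].ToolName) // read_file
}
```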

3. Data Model

Located in internal/model/.

Desire: A failed tool call.

type Desire struct {
    ID        string          // UUID
    ToolName  string          // Name of the tool that failed
    ToolInput json.RawMessage // Raw input params
    Error     string          // Error message
    Source    string          // Source plugin name (e.g., "claude-code")
    SessionID string          // Session/instance ID
    CWD       string          // Working directory
    Timestamp time.Time       // When it happened
    Metadata  json.RawMessage // Extra fields from plugin
}

Path: Aggregated pattern of repeated desires.

type Path struct {
    ID        string    // Tool name (used as ID)
    Pattern   string    // Tool name
    Count     int       // Occurrences
    FirstSeen time.Time // First failure
    LastSeen  time.Time // Most recent failure
    AliasTo   string    // Alias target (if one exists)
}

Alias: Mapping from hallucinated tool name to real tool name.

type Alias struct {
    From      string    // Hallucinated name
    To        string    // Real tool name
    CreatedAt time.Time // When alias was created
}

Invocation: Any tool call (success or failure), used for full tracking.

type Invocation struct {
    ID         string          // UUID
    Source     string          // Source plugin name
    InstanceID string          // Session/request ID
    HostID     string          // Machine ID (future)
    ToolName   string          // Tool that was invoked
    IsError    bool            // Whether call failed
    Error      string          // Error message (if IsError)
    CWD        string          // Working directory
    Timestamp  time.Time       // When it happened
    Metadata   json.RawMessage // Extra fields
}

4. Storage Layer

Located in internal/store/.

Interface: Store defines the persistence API. Commands interact with Store, not directly with SQL.

type Store interface {
    RecordDesire(ctx, Desire) error
    ListDesires(ctx, ListOpts) ([]Desire, error)
    GetPaths(ctx, PathOpts) ([]Path, error)
    SetAlias(ctx, from, to string) error
    GetAliases(ctx) ([]Alias, error)
    DeleteAlias(ctx, from string) (bool, error)
    Stats(ctx) (Stats, error)
    InspectPath(ctx, InspectOpts) (*InspectResult, error)
    RecordInvocation(ctx, Invocation) error
    ListInvocations(ctx, InvocationOpts) ([]Invocation, error)
    InvocationStats(ctx) (InvocationStatsResult, error)
    Close() error
}

Implementation: sqliteStore in internal/store/sqlite.go.

Database: Pure-Go SQLite via modernc.org/sqlite (no CGo, cross-compiles easily).

Concurrency: WAL mode enabled for concurrent reads and writes. Writers don’t block readers.

Schema:

CREATE TABLE desires (
    id TEXT PRIMARY KEY,
    tool_name TEXT NOT NULL,
    tool_input TEXT,
    error TEXT NOT NULL,
    source TEXT,
    session_id TEXT,
    cwd TEXT,
    timestamp TEXT NOT NULL,
    metadata TEXT
);

CREATE TABLE invocations (
    id TEXT PRIMARY KEY,
    source TEXT NOT NULL,
    instance_id TEXT,
    host_id TEXT,
    tool_name TEXT NOT NULL,
    is_error INTEGER NOT NULL,
    error TEXT,
    cwd TEXT,
    timestamp TEXT NOT NULL,
    metadata TEXT
);

CREATE TABLE aliases (
    from_name TEXT PRIMARY KEY,
    to_name TEXT NOT NULL,
    created_at TEXT NOT NULL
);

CREATE TABLE schema_version (
    version INTEGER NOT NULL
);

Indexes:

  • desires.tool_name (for GetPaths, InspectPath)
  • desires.timestamp (for time-based filtering)
  • invocations.tool_name, invocations.instance_id, invocations.timestamp

Migrations: Managed in sqlite.go via schema_version table. New migrations bump the version and run idempotently.

5. Analysis Engine

Located in internal/analyze/.

Similarity Matching: Used by dp similar to find known tools similar to a hallucinated name.

Algorithm:

  1. Normalize: Convert both strings to lowercase, split camelCase/underscores/hyphens into words

    • readFile → "read file"
    • Read_File → "read file"
    • read-file → "read file"
  2. Levenshtein Distance: Compute edit distance between normalized strings

  3. Normalize Score: score = 1 - (distance / max_length)

  4. Bonuses:

    • Prefix: Add 0.1 * (common_prefix_length / max_length)
    • Suffix: Add 0.05 * (common_suffix_length / max_length)
  5. Filter: Keep suggestions with score >= threshold (default: 0.5)

  6. Rank: Sort by score descending, return top N (default: 5)

Example:

Hallucinated: "read_file"
Known tools: ["Read", "Write", "ReadFile", "EditFile"]

Normalized: "read file"

Scores:
- Read:     normalize("read") = "read"
            distance("read file", "read") = 5
            base = 1 - (5/9) = 0.44
            prefix = 0.1 * (4/9) = 0.044
            total = 0.484 (below threshold, filtered out)

- ReadFile: normalize("ReadFile") = "read file"
            distance("read file", "read file") = 0
            base = 1.0
            total = 1.0 ✓

- EditFile: normalize("EditFile") = "edit file"
            distance("read file", "edit file") = 4
            base = 1 - (4/9) = 0.56
            prefix = 0.1 * (0/9) = 0
            suffix = 0.05 * (5/9) = 0.028
            total = 0.588 ✓

Suggestions: [("ReadFile", 1.0), ("EditFile", 0.588)]
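The algorithm can be sketched in a self-contained Go program. This is not the code in internal/analyze, and the cap at 1.0 is an assumption (matching the worked example, where an exact match scores exactly 1.0); small rounding differences from the figures above are expected:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalize lowercases a name and splits camelCase,
// underscores, and hyphens into space-separated words.
func normalize(s string) string {
	var b strings.Builder
	for i, r := range s {
		switch {
		case r == '_' || r == '-':
			b.WriteRune(' ')
		case unicode.IsUpper(r):
			if i > 0 {
				b.WriteRune(' ')
			}
			b.WriteRune(unicode.ToLower(r))
		default:
			b.WriteRune(r)
		}
	}
	return strings.Join(strings.Fields(b.String()), " ")
}

// levenshtein computes edit distance with a rolling row.
func levenshtein(a, b string) int {
	prev := make([]int, len(b)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(a); i++ {
		cur := make([]int, len(b)+1)
		cur[0] = i
		for j := 1; j <= len(b); j++ {
			cost := 1
			if a[i-1] == b[j-1] {
				cost = 0
			}
			cur[j] = min(prev[j]+1, min(cur[j-1]+1, prev[j-1]+cost))
		}
		prev = cur
	}
	return prev[len(b)]
}

// score is the base score plus prefix/suffix bonuses, capped at 1.0.
func score(hallucinated, known string) float64 {
	a, b := normalize(hallucinated), normalize(known)
	maxLen := max(len(a), len(b))
	if maxLen == 0 {
		return 0
	}
	s := 1 - float64(levenshtein(a, b))/float64(maxLen)
	p := 0 // common prefix length
	for p < min(len(a), len(b)) && a[p] == b[p] {
		p++
	}
	s += 0.1 * float64(p) / float64(maxLen)
	q := 0 // common suffix length (excluding the shared prefix)
	for q < min(len(a), len(b))-p && a[len(a)-1-q] == b[len(b)-1-q] {
		q++
	}
	s += 0.05 * float64(q) / float64(maxLen)
	return min(s, 1.0)
}

func main() {
	for _, known := range []string{"Read", "Write", "ReadFile", "EditFile"} {
		fmt.Printf("%-8s %.3f\n", known, score("read_file", known))
	}
}
```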

Customization: Set known_tools in config to override the default list:

dp config known_tools "Read,Write,Edit,Bash,CustomTool"

6. CLI Commands

Located in cmd/dp/ and internal/cli/.

Framework: Cobra (github.com/spf13/cobra) for command parsing, flags, and help.

Structure:

cmd/dp/main.go         → entry point, registers commands
internal/cli/*.go      → command implementations

Common Pattern:

// since, source, limit, and jsonOutput are flag-bound package variables.
func runList(cmd *cobra.Command, args []string) error {
    db := openDatabase()
    defer db.Close()

    opts := store.ListOpts{
        Since:    parseTime(since),
        Source:   source,
        Limit:    limit,
    }

    desires, err := db.ListDesires(cmd.Context(), opts)
    if err != nil {
        return err
    }

    if jsonOutput {
        return printJSON(desires)
    }
    return printTable(desires)
}

Global Flags:

  • --db PATH: Override database path
  • --json: Force JSON output

Output Formats:

  • Table: Human-readable, uses golang.org/x/term for width detection
  • JSON: Machine-readable, one JSON array or object

Error Handling: Commands return errors; Cobra prints them to stderr and exits with code 1.

Dependencies

From go.mod:

require (
    github.com/google/uuid v1.6.0           // UUID generation
    github.com/spf13/cobra v1.10.2          // CLI framework
    golang.org/x/term v0.39.0               // Terminal width detection
    modernc.org/sqlite v1.44.3              // Pure-Go SQLite
)

Why These?

  • uuid: Standard, fast, no deps
  • cobra: Best CLI framework in Go, used by kubectl, gh, etc.
  • term: Stdlib extension for terminal queries
  • sqlite: Pure Go (no CGo), cross-compiles to any platform, fast enough for millions of rows

No Other Deps: No ORM, no logging framework, no config parser beyond encoding/json, no HTTP libraries. dp is self-contained.

Build and Release

Makefile:

install:
    go install ./cmd/dp

Releases: Automated via GoReleaser (.goreleaser.yml). Pushes binary artifacts to GitHub Releases for Linux, macOS, Windows.

Binary Size: ~8MB (includes SQLite, CLI framework, compression libraries).

Performance Characteristics

  • Desire recording: ~5ms (database write + fsync)
  • Path aggregation: ~10ms for 10k desires
  • Similarity matching: ~1ms for 100 known tools
  • Database size: ~1KB per desire, ~1.5KB per invocation
  • Query latency: <1ms for indexed lookups, <10ms for full scans up to 100k rows

Scaling:

  • SQLite handles millions of rows fine with proper indexes
  • WAL mode keeps reads fast during writes
  • If you exceed ~10M records, consider archiving old data or partitioning by date
  • For multi-machine aggregation, export to JSON and load into a central database

Security Notes

  • Database file permissions: 0600 (user-only read/write)
  • Config file permissions: 0644 (world-readable, but no secrets stored)
  • Hooks run with user’s shell environment—ensure dp binary is trusted
  • No network access, no external API calls, no telemetry

Extension Points

Want to extend dp? Here are the main interfaces:

  1. Source Plugins: Add support for new AI tools (see Writing a Plugin)
  2. Store Implementations: Swap SQLite for Postgres, MySQL, etc. (implement store.Store)
  3. Analysis Algorithms: Replace Levenshtein with ML embeddings, fuzzy matching, etc. (modify internal/analyze)
  4. Output Formats: Add CSV, Markdown, HTML (modify CLI commands)
  5. Webhooks: Add dp serve command to expose a REST API (new package)

All interfaces are defined in internal/ packages—keep the public API minimal (cmd/dp is the only entry point).

Code Layout

desire-path/
├── cmd/dp/                 # CLI entry point
├── internal/
│   ├── analyze/            # Similarity matching
│   ├── cli/                # Command implementations
│   ├── config/             # Config file parsing
│   ├── ingest/             # Data ingestion pipeline
│   ├── model/              # Data types (Desire, Path, Alias, Invocation)
│   ├── source/             # Source plugin system
│   └── store/              # Storage interface + SQLite implementation
├── docs/book/              # This documentation (mdbook)
├── go.mod
├── Makefile
└── README.md

Conventions:

  • internal/ packages are not importable by external code (Go convention)
  • Interfaces in internal/store/ and internal/source/ allow mocking for tests
  • No init-time side effects except plugin registration
  • Error messages include context: "source plugin: operation: detail: error"

Testing

Run all tests:

go test ./...

Test Coverage:

  • Unit tests for each package (*_test.go files)
  • Integration tests using in-memory SQLite (:memory:)
  • Example-based tests in source package for plugin validation

No External Dependencies: Tests don’t need Docker, external databases, or network access. They run in <1s.
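Because commands depend on the Store interface rather than on SQLite directly, tests can also substitute a trivial in-memory fake. A sketch with a reduced interface (not the full store.Store):

```go
package main

import "fmt"

// Reduced versions of the real types, for illustration only.
type Desire struct{ ToolName string }

type Store interface {
	RecordDesire(d Desire) error
	ListDesires() []Desire
}

// fakeStore satisfies the reduced interface with a plain slice.
type fakeStore struct{ desires []Desire }

func (f *fakeStore) RecordDesire(d Desire) error {
	f.desires = append(f.desires, d)
	return nil
}

func (f *fakeStore) ListDesires() []Desire { return f.desires }

func main() {
	var s Store = &fakeStore{}
	s.RecordDesire(Desire{ToolName: "read_file"})
	fmt.Println(len(s.ListDesires())) // 1
}
```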

Future Architecture Considerations

Potential improvements:

  • Multi-database aggregation: Collect desires from multiple machines, merge into a central store
  • Streaming ingestion: Replace hook-based recording with a long-running daemon that tails logs
  • Machine learning: Train embeddings for better similarity matching
  • Distributed tracing: Correlate tool calls across services using OpenTelemetry
  • Web UI: Visualize paths, trends, session replays

These would require architectural changes (client/server split, service discovery, etc.), but the current design keeps things simple and fast for single-user CLI workflows.

Next Steps