This tutorial was AI-generated and is being fleshed out. Content may be incomplete or change.

The AI Assistant Workflow

Watch how your AI assistant logs decisions in real-time as it builds a feature. This is what deciduous looks like in practice.

Multi-Assistant Support

This example uses Claude Code, but the same workflow applies to OpenCode and Windsurf. Initialize with deciduous init --opencode or deciduous init --windsurf to set up your preferred assistant.
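
For example, a fresh setup for either assistant is a single command run inside your project:

deciduous init --opencode    # set up for OpenCode
deciduous init --windsurf    # or set up for Windsurf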

A Complete Session: Adding Rate Limiting

Here's an actual Claude Code session where a user asks for rate limiting. Watch how Claude logs the goal, considers options, makes decisions, and tracks the outcome—all in real-time.

Claude Code Session
You
Add rate limiting to the API endpoints
Claude
I'll implement rate limiting for your API. Let me first log this goal to the decision graph.
Claude
deciduous add goal "Add rate limiting to API" -c 90 -p "User asked: Add rate limiting to the API endpoints"
Created node #47: goal "Add rate limiting to API"
Claude
Now I need to decide on the approach. Let me log this decision point:
Claude
deciduous add decision "Choose rate limiting strategy" -c 75
Created node #48: decision "Choose rate limiting strategy"
Claude
deciduous link 47 48 -r "Deciding implementation approach"
Created edge: #47 -> #48 (leads_to)
Claude
I see a few options here. Let me log them:
Claude
deciduous add option "Redis-based with sliding window" -c 80
Created node #49: option "Redis-based with sliding window"
Claude
deciduous add option "In-memory with token bucket" -c 70
Created node #50: option "In-memory with token bucket"
Claude
deciduous link 48 49 -r "Scales across instances, persistent"
deciduous link 48 50 -r "Simpler, but single-instance only"
Claude
Looking at your infrastructure (I see you're using Redis for sessions already), I recommend Redis-based rate limiting. This will scale across your multiple API instances.
Claude
deciduous add action "Implementing Redis rate limiter" -c 85 -f "src/middleware/ratelimit.ts"
Created node #51: action "Implementing Redis rate limiter"
Claude
deciduous link 48 51 --edge-type chosen -r "Redis already in stack, needs horizontal scaling"
Created edge: #48 -> #51 (chosen)
Claude
Now let me implement this...
Writing: src/middleware/ratelimit.ts, src/config/limits.ts, tests/ratelimit.test.ts
Claude
[Claude writes the rate limiter code, creates middleware, adds tests...]
Claude
Rate limiting is implemented! Let me commit and record the outcome:
Claude
git add -A && git commit -m "feat: add Redis-based rate limiting"
[feature/rate-limiting abc123] feat: add Redis-based rate limiting
4 files changed, 187 insertions(+)
Claude
deciduous add outcome "Rate limiting working, 429s returned correctly" -c 95 --commit HEAD
Created node #52: outcome "Rate limiting working" [commit: abc123]
Claude
deciduous link 51 52 -r "Implementation verified with tests"
Created edge: #51 -> #52 (leads_to)
Claude
deciduous sync
Exported graph to docs/graph-data.json
  52 nodes, 48 edges
Claude
Done! I've added Redis-based rate limiting to all API endpoints. The decision graph now captures:
What we built: Redis sliding window rate limiter
Why Redis: Already in stack, horizontal scaling needed
What was rejected: In-memory token bucket (single-instance)
Commit: abc123
Files: src/middleware/ratelimit.ts, src/config/limits.ts

What Just Happened

Let's break down the five key patterns you saw:

1. Log the Goal First

Claude immediately creates a goal node with -p to capture the user's original request:

deciduous add goal "Add rate limiting" -c 90 -p "User asked: ..."

The -p flag captures the prompt that triggered this work. This is crucial for future context recovery—when a new session starts, Claude can see exactly what the user originally asked for.
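
Subsequent nodes then link back to this goal, so the original request stays reachable from every decision that follows it (node IDs from the session above):

deciduous link 47 48 -r "Deciding implementation approach"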

2. Decisions Before Code

Before writing any code, Claude logs the decision point and the options being considered:

deciduous add decision "Choose rate limiting strategy" -c 75
deciduous add option "Redis-based with sliding window" -c 80
deciduous add option "In-memory with token bucket" -c 70

This captures the reasoning before it happens, when the context is fresh. The options exist in the graph even if they're not chosen.
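
Each option is then linked to the decision with a one-line reason, so the trade-offs live in the graph rather than only in the chat history (commands from the session above):

deciduous link 48 49 -r "Scales across instances, persistent"
deciduous link 48 50 -r "Simpler, but single-instance only"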

3. Mark the Chosen Path

When Claude picks an approach, it uses the chosen edge type:

deciduous link 48 51 --edge-type chosen -r "Redis already in stack"

This makes it clear which option was selected and why. The rationale lives in the edge, not just a comment.
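
In the session, the chosen edge points at an action node that also records the files it touches with -f:

deciduous add action "Implementing Redis rate limiter" -c 85 -f "src/middleware/ratelimit.ts"
deciduous link 48 51 --edge-type chosen -r "Redis already in stack, needs horizontal scaling"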

4. Link Commits to Outcomes

After committing, Claude links the outcome to the git commit:

deciduous add outcome "Rate limiting working" -c 95 --commit HEAD

This creates traceability between decisions and code. You can trace any commit back to the goal that spawned it.
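
The outcome is also linked back to the action that produced it, which completes the chain from goal to commit:

deciduous link 51 52 -r "Implementation verified with tests"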

5. Sync Before Push

Finally, deciduous sync exports the graph for the web viewer:

deciduous sync
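
In the session this exported docs/graph-data.json (52 nodes, 48 edges). A minimal sketch of the full sync-then-push step, assuming the web viewer reads the committed export file; the git commands are plain git, not deciduous subcommands:

deciduous sync
git add docs/graph-data.json
git commit -m "chore: sync decision graph"   # commit message is illustrative
git push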

The Complete Decision Chain

After this session, the graph looks like this:

Decision Graph
    #47 [goal] Add rate limiting to API
        │
        │ leads_to: "Deciding implementation approach"
        ▼
    #48 [decision] Choose rate limiting strategy
        │
        ├────────────────────┬────────────────────┐
        │                    │                    │ chosen
        ▼                    ▼                    ▼
#49 [option]       #50 [option]       #51 [action]
Redis sliding      In-memory          Implementing Redis
window             token bucket       rate limiter
                                              │
                                              │ leads_to
                                              ▼
                                      #52 [outcome]
                                      Rate limiting working
                                      [commit: abc123]

Anyone (human or AI) can now query this chain to understand what was built, why Redis was chosen, what was rejected, and which commit delivered it.

Mid-Stream Direction Changes

Sometimes the user changes direction during implementation. Here's how Claude captures that:

Direction Change
You
Actually, use the token bucket algorithm instead of sliding window
Claude
Got it! Let me capture this direction change and adjust the implementation.
Claude
deciduous add action "Switching to token bucket algorithm" -c 85 -p "User said: use token bucket instead of sliding window"
Created node #53: action "Switching to token bucket algorithm"
Claude
deciduous link 51 53 -r "User requested algorithm change"
Created edge: #51 -> #53 (leads_to)
Claude
Now I'll update the implementation to use token bucket. The graph now shows the pivot point—future sessions will see why we switched.

The -p flag on the new action captures the exact moment the user redirected. This preserves the context of why the approach changed mid-stream.
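
A hedged sketch of how the rest of the pivot might be recorded once the token bucket version lands; node #54, the outcome text, and the commit are illustrative, not part of the session above:

deciduous add outcome "Token bucket limiter passing tests" -c 90 --commit HEAD
deciduous link 53 54 -r "Replaced sliding window implementation"
deciduous sync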

The Workflow Summary

The Claude Code Workflow
  1. User makes a request
  2. Claude logs a goal with the prompt (-p)
  3. Claude logs the decision point
  4. Claude logs options being considered
  5. Claude logs the action, links it to the decision with --edge-type chosen, and implements
  6. Claude commits and logs the outcome with --commit HEAD
  7. Claude runs deciduous sync
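
Condensed into commands, the session above looks like this (node IDs and messages are from that example; yours will differ):

deciduous add goal "Add rate limiting to API" -c 90 -p "User asked: Add rate limiting to the API endpoints"
deciduous add decision "Choose rate limiting strategy" -c 75
deciduous link 47 48 -r "Deciding implementation approach"
deciduous add option "Redis-based with sliding window" -c 80
deciduous add option "In-memory with token bucket" -c 70
deciduous link 48 49 -r "Scales across instances, persistent"
deciduous link 48 50 -r "Simpler, but single-instance only"
deciduous add action "Implementing Redis rate limiter" -c 85 -f "src/middleware/ratelimit.ts"
deciduous link 48 51 --edge-type chosen -r "Redis already in stack, needs horizontal scaling"
# ...implement, test, commit...
deciduous add outcome "Rate limiting working, 429s returned correctly" -c 95 --commit HEAD
deciduous link 51 52 -r "Implementation verified with tests"
deciduous sync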
Want to build a graph from an existing project? Project Archaeology →