Losselot

Audio forensics meets AI-assisted development - A living museum


Development Story: Building in Public with AI

This project is being developed in public, with AI assistance, and the process is being documented as it happens.


The Cast

Human: The one asking questions, making final decisions, and providing domain expertise about audio.

Claude: AI assistant handling code generation, research, and documentation. Working within explicit constraints and using external memory systems.


Timeline

Phase 1: Core Detection

The project started as a simple transcode detector:

Key decisions:

Phase 2: Web UI

Terminal output wasn’t enough. We needed visualization:

Key decisions:

Phase 3: Lo-Fi Detection (CFCC)

A file called charlie.flac was incorrectly flagged as a transcode. It was actually a legitimate lo-fi recording.

The problem: How to distinguish MP3 brick-wall cutoff from natural tape rolloff?

The solution: Cross-Frequency Coherence Coefficient (CFCC)

This decision is documented in the graph: nodes 1-8.
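The exact CFCC computation isn't reproduced on this page, but the core idea can be sketched: a natural tape rolloff attenuates high frequencies gradually, so the energy envelope just above the apparent cutoff still tracks the envelope just below it, while an MP3 brick wall leaves the upper band as near-silent, uncorrelated residue. A minimal illustration (function names, the 16 kHz split point, and the band width are all hypothetical, not Losselot's actual parameters):

```python
import numpy as np

def band_energy(frames, sr, lo, hi):
    """Per-frame energy in [lo, hi) Hz, given (n_frames, n_bins) rfft magnitudes."""
    freqs = np.fft.rfftfreq(frames.shape[1] * 2 - 2, d=1.0 / sr)
    mask = (freqs >= lo) & (freqs < hi)
    return (frames[:, mask] ** 2).sum(axis=1)

def cfcc(frames, sr, split_hz=16000.0, width_hz=2000.0):
    """Correlate the energy envelope just below a suspected cutoff with the
    envelope just above it. Natural rolloff keeps the two coherent (high
    score); a brick-wall encoder decouples them (score near zero)."""
    below = band_energy(frames, sr, split_hz - width_hz, split_hz)
    above = band_energy(frames, sr, split_hz, split_hz + width_hz)
    if below.std() == 0 or above.std() == 0:
        return 0.0
    return float(np.corrcoef(below, above)[0, 1])
```

A real detector would also need to locate the cutoff automatically; here the split point is fixed purely for illustration.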

Phase 4: Decision Graph

We realized decisions were getting lost between sessions. The CFCC decision involved:

All of this lived only in chat history that would eventually be lost.

The solution: SQLite-backed decision graph
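The page doesn't show the actual schema, but a decision graph of the kind described here can be built from two tables: typed nodes (goals, decisions, options) and typed edges between them. A minimal sketch, with hypothetical table and column names that may differ from Losselot's real schema:

```python
import sqlite3

# Hypothetical minimal schema -- the project's actual tables may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS nodes (
    id         INTEGER PRIMARY KEY,
    kind       TEXT NOT NULL,   -- e.g. 'goal', 'decision', 'option'
    title      TEXT NOT NULL,
    body       TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS edges (
    id       INTEGER PRIMARY KEY,
    src      INTEGER NOT NULL REFERENCES nodes(id),
    dst      INTEGER NOT NULL REFERENCES nodes(id),
    relation TEXT NOT NULL      -- e.g. 'motivates', 'chosen', 'rejected'
);
"""

def open_graph(path=":memory:"):
    db = sqlite3.connect(path)
    db.executescript(SCHEMA)
    return db

def add_node(db, kind, title, body=None):
    cur = db.execute("INSERT INTO nodes (kind, title, body) VALUES (?, ?, ?)",
                     (kind, title, body))
    return cur.lastrowid

def link(db, src, dst, relation):
    db.execute("INSERT INTO edges (src, dst, relation) VALUES (?, ?, ?)",
               (src, dst, relation))
```

Because the graph lives in a single SQLite file, it survives between sessions and can be queried or diffed like any other project artifact.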

Phase 5: Claude Tooling

With the decision graph in place, we built tooling around it:

Phase 6: This Site

The project had become more than an audio tool. It was also:

Hence: the “living museum” you’re reading now.


What Didn’t Work

Approach A: Temporal Cutoff Variance

Before CFCC, we considered measuring how cutoff frequency varies over time:

Why we rejected it:

This is documented in the graph as node 3 (rejected via edge 7).
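For context, the rejected measurement can be sketched as well: estimate a cutoff frequency per analysis frame (the highest bin above some noise floor), then take the spread of those estimates over time. Function names and the -60 dB floor are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def frame_cutoff(mag_frame, freqs, floor_db=-60.0):
    """Highest frequency whose magnitude exceeds a noise floor (dB re frame peak)."""
    peak = max(mag_frame.max(), 1e-12)
    db = 20 * np.log10(np.maximum(mag_frame, 1e-12) / peak)
    above = np.nonzero(db > floor_db)[0]
    return freqs[above[-1]] if above.size else 0.0

def cutoff_variance(frames, sr):
    """Std-dev of the per-frame cutoff estimate: low for a fixed encoder
    brick wall, higher when the bandwidth varies naturally over time."""
    freqs = np.fft.rfftfreq(frames.shape[1] * 2 - 2, d=1.0 / sr)
    cutoffs = np.array([frame_cutoff(f, freqs) for f in frames])
    return float(cutoffs.std())
```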

Early UI Approaches

The first web UI was a single massive HTML file. Problems:

What we learned:

Context Loss

Multiple times, sessions ended with important context lost:

What we learned:


The Meta Experiment

This project is simultaneously:

  1. A useful tool - Losselot actually detects fake lossless audio
  2. A development methodology - The decision graph approach works
  3. A documentation style - Living docs that update with the code
  4. An AI collaboration model - Human decides, AI executes within constraints

What Makes It Work

Clear constraints:

External memory:

Session continuity:

What’s Still Hard

Context window limits:

Knowing when to document:

Maintaining the system:


Current State

As of writing this page:

Nodes: 16
Edges: 15
Pending decisions: Multiple site structure choices
Recent work: GitHub Pages setup

The graph shows we’re in the middle of building this very documentation site: node 13 is the goal, node 14 the decision, and node 15 a structure option.


What’s Next

Ideas in the pipeline (not yet in the graph):

Whether these happen depends on what’s useful. The graph will document the decisions either way.


Try the Workflow

If you want to try this approach in your own project:

  1. Set up a decision graph - SQLite is simple enough
  2. Create context recovery - Whatever helps you resume
  3. Document as you go - Not after the fact
  4. Constrain the AI - Explicit rules, external state
  5. Build in public - Accountability helps
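Step 2 (context recovery) is largely a query problem once the graph exists. As a standalone sketch, here is one way to list "pending" decisions when resuming a session: decision nodes with no resolving edge. The schema, relation names, and sample rows are hypothetical:

```python
import sqlite3

# Standalone sketch with a hypothetical schema and made-up sample data.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE nodes (id INTEGER PRIMARY KEY, kind TEXT, title TEXT);
CREATE TABLE edges (src INTEGER, dst INTEGER, relation TEXT);
INSERT INTO nodes VALUES (1, 'decision', 'Site structure'),
                         (2, 'decision', 'CFCC threshold'),
                         (3, 'option',   'Single page');
INSERT INTO edges VALUES (2, 3, 'chosen');
""")

# A decision is pending if it has no outgoing 'chosen' edge yet.
pending = db.execute("""
    SELECT title FROM nodes
    WHERE kind = 'decision'
      AND id NOT IN (SELECT src FROM edges WHERE relation = 'chosen')
""").fetchall()
print(pending)  # with this sample data, only 'Site structure' is unresolved
```

Running this kind of query at the start of a session is one concrete way to rebuild context without rereading chat history.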

The specific tools matter less than the principles.

