How It Works
Architecture and internals of the autodocs generation pipeline
Autodocs orchestrates three layers: an AI CLI for code understanding, a generation pipeline with content-hash-based change detection, and a Fumadocs app for rendering.
Generation pipeline
When you run npx autodocs generate, the following happens:
- Load config — reads `autodocs.config.json`, merges with defaults, and validates glob patterns and theme
- Pre-create sections — creates configured `sections` subdirectories in the output directory (e.g., `docs/guide/`, `docs/api/`)
- Resolve CLI agents — locates the `cli-agents` binary (see CLI agents below)
- Load skill prompt — reads `.autodocs/skill.md` if it exists, otherwise loads the built-in `SKILL.md` from the package's `skill/` directory
- Hash source files — walks the project tree, computes SHA-256 hashes (first 16 hex chars) of all files matching `include`/`exclude` patterns
- Detect changes — compares current hashes against the cache in `.autodocs/cache/source-cache.json`; if nothing changed and `--force` isn't set, exits early
- Build the prompt — assembles the skill prompt, project config summary, existing docs context, and change context into a single task string
- Invoke the AI CLI — spawns `cli-agents` with JSON streaming (`--json`, `--skip-permissions`) and streams the prompt
- Stream progress — parses newline-delimited JSON events and displays real-time progress with a spinner (via ora)
- Cache results — on success, saves updated source hashes to `.autodocs/cache/source-cache.json`
Change detection
Autodocs uses content hashing for change detection:
- Every source file matching `include`/`exclude` is hashed with SHA-256 (truncated to 16 hex characters)
- Hashes are stored in `.autodocs/cache/source-cache.json`
- On the next run, current hashes are compared to cached hashes — files with different hashes (new, modified, or deleted) are flagged as changed
- The AI receives a list of changed files with instructions to update only affected doc pages
If no source files have changed, generation is skipped:
`No source files changed since last generation. Use --force to regenerate.`

The `--force` flag bypasses the cache entirely, triggering a full regeneration.
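The hash-and-compare logic can be sketched as follows. This is a minimal illustration, assuming hypothetical helper names (`hashContent`, `detectChanges`); the actual autodocs internals may be organized differently.

```typescript
import { createHash } from "node:crypto";

// Hash file contents with SHA-256, truncated to the first 16 hex characters.
export function hashContent(content: string): string {
  return createHash("sha256").update(content).digest("hex").slice(0, 16);
}

// Compare current hashes to the cached ones; any added, modified, or
// deleted file counts as changed.
export function detectChanges(
  cached: Record<string, string>,
  current: Record<string, string>,
): string[] {
  const changed = new Set<string>();
  for (const [file, hash] of Object.entries(current)) {
    if (cached[file] !== hash) changed.add(file); // new or modified
  }
  for (const file of Object.keys(cached)) {
    if (!(file in current)) changed.add(file); // deleted
  }
  return [...changed].sort();
}
```

Truncating to 16 hex characters keeps the cache file small while still making accidental collisions between source files vanishingly unlikely.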
Existing docs awareness
Before invoking the AI, autodocs scans the output directory for existing `.mdx` files. It builds a context string listing each page with its title and whether it has `generated: true` in its frontmatter (auto-generated) or not (manually edited). This lets the AI understand what's already documented and maintain consistency across incremental updates.
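A one-function sketch of how a page's context line might be derived from its frontmatter. The function name and output format here are illustrative assumptions, not the actual autodocs API.

```typescript
// Describe one .mdx page for the AI context string: its path, title,
// and whether frontmatter marks it as auto-generated.
export function describePage(frontmatter: string, path: string): string {
  // Pull the title if present, falling back to the file path. (Assumed
  // parsing strategy; a real implementation might use a YAML parser.)
  const title = /title:\s*["']?([^"'\n]+)/.exec(frontmatter)?.[1] ?? path;
  const generated = /generated:\s*true/.test(frontmatter);
  return `${path}: "${title}" (${generated ? "auto-generated" : "manually edited"})`;
}
```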
Source file walking
The file walker skips directories that start with `.` (dotfiles), as well as `node_modules`, `target`, and `dist` directories — regardless of the configured exclude patterns. Within non-skipped directories, files are matched against `include` and `exclude` globs using picomatch.
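The hard-coded skip rule can be expressed as a small predicate (the function name is an assumption; remaining files would then be filtered through picomatch):

```typescript
// Directories that are always skipped, regardless of exclude patterns.
const ALWAYS_SKIPPED = new Set(["node_modules", "target", "dist"]);

// Returns true for dot-directories and the hard-coded skip list.
export function shouldSkipDir(name: string): boolean {
  return name.startsWith(".") || ALWAYS_SKIPPED.has(name);
}
```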
The .autodocs/ directory
The `.autodocs/` directory (added to `.gitignore` by `init`) serves multiple purposes:
| Path | Purpose |
|---|---|
| `.autodocs/cache/source-cache.json` | Content hashes for change detection |
| `.autodocs/skill.md` | Optional custom skill prompt override |
| `.autodocs/` (root) | Fumadocs Next.js app scaffolded for dev and build |
Fumadocs scaffolding
When you run `dev` or `build`, autodocs scaffolds a full Fumadocs/Next.js app inside `.autodocs/`:
- Copy template — copies the Fumadocs template from `templates/fumadocs/` in the autodocs package into `.autodocs/`
- Copy docs — copies your `docs/` MDX files into `.autodocs/content/docs/`
- Generate CSS — writes `app/global.css` importing your selected theme: `@import 'tailwindcss'; @import 'fumadocs-ui/css/<theme>.css'; @import 'fumadocs-ui/css/preset.css';`
- Generate layout config — writes `lib/layout.shared.tsx` with your site title and GitHub config, exporting `siteName`, `gitConfig`, and `baseOptions()` for Fumadocs layouts
- Install dependencies — runs `npm install` if `package.json` is newer than `package-lock.json` (or `node_modules` is missing); otherwise runs `npx fumadocs-mdx` to regenerate the content layer
The template directory is resolved from the autodocs package, searching both `../templates/fumadocs` and `../../templates/fumadocs` relative to the compiled source.
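The install-or-refresh decision from the last step above can be paraphrased as a pure function. This is a sketch of the decision logic only, with file metadata passed in as parameters rather than read from disk; the real implementation presumably stats the files directly.

```typescript
// Decide whether a full `npm install` is needed, per the rule:
// install if node_modules is missing, the lockfile is missing, or
// package.json is newer than package-lock.json. Otherwise only the
// content layer is regenerated via `npx fumadocs-mdx`.
export function needsInstall(opts: {
  hasNodeModules: boolean;
  pkgMtimeMs: number;
  lockMtimeMs: number | null; // null when package-lock.json is missing
}): boolean {
  if (!opts.hasNodeModules || opts.lockMtimeMs === null) return true;
  return opts.pkgMtimeMs > opts.lockMtimeMs;
}
```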
CLI agents
Autodocs uses @cueframe/cli-agents to invoke AI CLIs. This Rust binary abstracts over the differences between Claude Code, Codex, and Gemini CLI, providing a unified JSON streaming interface.
The binary is resolved in this order:
1. `AUTODOCS_CLI_AGENTS_PATH` environment variable (if set, used directly)
2. The `@cueframe/cli-agents` npm package — calls the `binaryPath()` export
3. `cli-agents` on the system PATH (for `cargo install` users)
If none of these resolve, an error is thrown with installation instructions.
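The three-step resolution order above can be sketched as a single function. The lookup inputs here are assumptions standing in for the real environment and package checks:

```typescript
// Resolve the cli-agents binary path, in priority order.
export function resolveCliAgents(opts: {
  envPath?: string;                    // AUTODOCS_CLI_AGENTS_PATH, if set
  npmBinaryPath?: () => string | null; // @cueframe/cli-agents binaryPath()
  onPath: boolean;                     // whether `cli-agents` is on $PATH
}): string {
  if (opts.envPath) return opts.envPath;
  const fromNpm = opts.npmBinaryPath?.();
  if (fromNpm) return fromNpm;
  if (opts.onPath) return "cli-agents";
  throw new Error("cli-agents not found; see installation instructions");
}
```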
The subprocess is managed by execa with cancelSignal for graceful cancellation. The following flags are always passed:
| Flag | Purpose |
|---|---|
| `--json` | Enable JSON streaming output |
| `--skip-permissions` | Skip interactive permission prompts |
| `--append-system-prompt` | Inject additional system instructions |
| `--cwd` | Set the working directory for the AI CLI |
When `--cli <name>` is specified, the corresponding `--cli` flag is prepended to the argument list.
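Putting the flags together, the argument list might be assembled like this before the execa spawn (the builder function is a sketch, not the actual autodocs code):

```typescript
// Assemble the always-passed flags, prepending --cli when one was chosen.
export function buildArgs(opts: {
  cli?: string;        // from `--cli <name>`, if given
  cwd: string;
  systemPrompt: string;
}): string[] {
  const args = [
    "--json",
    "--skip-permissions",
    "--append-system-prompt", opts.systemPrompt,
    "--cwd", opts.cwd,
  ];
  return opts.cli ? ["--cli", opts.cli, ...args] : args;
}
```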
Event streaming
During generation, the AI subprocess emits newline-delimited JSON events. Autodocs parses these and displays progress in real-time:
| Event type | Description |
|---|---|
| `thinking_delta` | AI reasoning text (displayed dimmed, flushed before tool events) |
| `tool_start` | AI began using a tool — shows file paths for Read/Write/Edit, commands for Bash |
| `tool_end` | Tool completed — non-suppressed errors are displayed |
| `error` | An error occurred during generation |
| `done` | Generation complete — includes success status, duration, and cost |
Written files are tracked and displayed with a green checkmark. Each file path is shown only once, even if written multiple times.
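Parsing a newline-delimited JSON stream requires buffering, because a stdout chunk can end mid-line. A minimal parser sketch, assuming only that each event is a JSON object with a `type` field:

```typescript
type AgentEvent = { type: string; [key: string]: unknown };

// Returns a feed function: call it with each stdout chunk and it invokes
// onEvent once per complete JSON line, buffering any trailing partial line.
export function makeNdjsonParser(onEvent: (e: AgentEvent) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep the incomplete trailing line
    for (const line of lines) {
      if (line.trim()) onEvent(JSON.parse(line) as AgentEvent);
    }
  };
}
```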
Suppressed errors
Certain error patterns are suppressed to reduce noise during generation:
- `"Cancelled:"` — user cancellation
- `"does not exist"` — file not found (common during exploration)
- `"ENOENT"` — file system errors
- `"has not been read yet"` — Write tool prerequisite warnings
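The suppression check itself amounts to a substring match against this list (substring matching is an assumption here; the real check may use regular expressions):

```typescript
// Error message fragments that are hidden from the progress display.
const SUPPRESSED_PATTERNS = [
  "Cancelled:",
  "does not exist",
  "ENOENT",
  "has not been read yet",
];

// True when the error message matches a known-noisy pattern.
export function isSuppressed(message: string): boolean {
  return SUPPRESSED_PATTERNS.some((p) => message.includes(p));
}
```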
Deployment
The deploy command wraps build with a Vercel deployment step:
- Build — runs the full `build` pipeline (scaffold + `next build`)
- Link — if no `.vercel/` directory exists in the project root, runs `vercel link --yes` to connect to a Vercel project
- Copy Vercel config — copies the `.vercel/` directory from the project root into `.autodocs/.vercel/` so the deploy targets the correct project
- Deploy — runs `vercel deploy --prod --yes` (or without `--prod` for preview deployments)
The Vercel link is stored in the project root (not .autodocs/), so it persists across scaffolding runs.
Completion summary
On success, autodocs caches the new source hashes and displays a summary:
`✔ Done — 5 files · 23s · $0.12`

The summary includes the number of files written, elapsed time (if reported by the subprocess), and API cost (if reported).