Chatons Developer Guide
Architecture and implementation reference for contributors. Verified against the codebase as of March 10, 2026.
1. Architecture Overview
Chatons is an Electron desktop application with six layers:
| Layer | Responsibility |
|---|---|
| Electron main process | Boot, IPC, persistence, updates, host integrations, status bar |
| React renderer | Application UI, state management, i18n |
| Pi runtime bridge | Per-conversation AI sessions, tool execution |
| ACP orchestration layer | Typed internal agent-to-agent envelopes, task state, and subagent audit trail |
| SQLite storage | App data, caches, extension KV/queue, automation rules, memory |
| Extension runtime | Built-in and user-installed extensions, LLM tools, channels, packaged extension web apps |
Key Entry Points
| File | Purpose |
|---|---|
electron/main.ts | App bootstrap, window creation, IPC registration |
electron/ipc/workspace.ts | Pi bootstrap, workspace IPC, project terminal |
electron/ipc/workspace-handlers.ts | IPC handler registration (all workspace:* channels) |
electron/pi-sdk-runtime.ts | Per-conversation Pi session management |
electron/acp/router.ts | ACP coordination, persistence fanout, renderer broadcasts |
electron/acp/store.ts | ACP SQLite persistence helpers |
electron/extensions/runtime.ts | Extension host, event dispatch, storage bridge |
src/App.tsx | Renderer entry, loading gate, app shell |
src/features/workspace/store/provider.tsx | Workspace state provider |
2. App Modes
Chatons has two UI modes, defined in src/features/workspace/types.ts:
type AppMode = 'workspace' | 'assistant'
type AssistantView = 'home' | 'conversations' | 'memory' | 'automations' | 'channels'
The mode is toggled by SidebarModeSwitcher (bottom of sidebar). Default mode is workspace.
Workspace Mode
Traditional coding-focused view: sidebar with conversations/projects, main panel with conversation timeline or settings, composer.
Assistant Mode
Dashboard-focused view with a different sidebar and main panel. Routed by AssistantMainView.tsx:
| View | Component | Purpose |
|---|---|---|
home | AssistantDashboard | Overview: greeting, quick actions, channel status, recent activity, memory insights, automations summary |
conversations | Switches to workspace mode | N/A |
memory | AssistantMemoryView | Full memory browser |
automations | AssistantAutomationsView | Full automation rules browser |
channels | AssistantChannelsView | Channel extension list with config |
Assistant Onboarding
Separate from provider onboarding. Gated by assistantOnboardingCompleted setting. Three steps:
- Channel selection (loads marketplace, identifies channel extensions by category/tags)
- Personalization (assistant name, user name)
- Confirmation
Stored settings: assistantName, assistantUserName, assistantChannelId, assistantOnboardingCompleted.
Dashboard Components
| Component | Data Source |
|---|---|
StatusHeader | Time-based greeting + channel connected count |
QuickActions | 4 hardcoded actions (talk, schedule, memory, settings) |
ChannelsStatus | Extension list filtered by kind: "channel", queries channelStatus from UI entries |
RecentActivity | 5 most recent conversations sorted by lastMessageAt |
MemoryInsights | Calls memory.stats plus memory.list on @chaton/memory, shows an accurate count and 5-entry preview |
AutomationsSummary | Calls automation.list_scheduled_tasks on @chaton/automation, shows rules |
3. Technology Stack
| Area | Technology |
|---|---|
| Desktop shell | Electron |
| Renderer UI | React + TypeScript |
| Localization | i18next (useTranslation()) |
| Database | SQLite via better-sqlite3 |
| AI runtime | @mariozechner/pi-coding-agent SDK |
| Git support | GitService in electron/lib/git/ |
| Extension host | Custom runtime in electron/extensions/runtime/ |
| Telemetry | Sentry (@sentry/electron/main), gated by user consent |
| Status bar | macOS-only tray icon (electron/lib/status-bar.ts) |
| Sandbox | electron/lib/sandbox/sandbox-manager.ts (Node.js and Python) |
4. App Startup Sequence
Main Process (electron/main.ts)
- Set app name to Chatons
- Register the chatons:// protocol handler for deep links
- Register chaton-extension:// for packaged extension assets
- Override the userData path to <appData>/Chatons
- Initialize Sentry telemetry (disabled in dev mode, gated by consent)
- On app.whenReady():
  - Bootstrap Pi agent directory (ensurePiAgentBootstrapped())
  - Initialize logging
  - Initialize Pi manager
  - Initialize sandbox manager
  - Clean up orphaned worktrees
  - Prefetch changelogs from GitHub
  - Register IPC handlers (workspace, Pi, update)
  - Create main browser window
  - Flush any pending deep link URL
- On before-quit: Stop all Pi runtimes
- On window-all-closed: Do nothing (app stays alive on all platforms)
Packaged desktop builds also ship a bundled npm CLI in the app resources and run it via process.execPath with ELECTRON_RUN_AS_NODE=1 for extension install/update/publish flows. This keeps those actions independent from the GUI process PATH.
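The bundled-npm invocation can be sketched as follows. This is a hypothetical illustration, not the exact Chatons code: the resources-relative npm path and the function name are assumptions; the key point from the text is that ELECTRON_RUN_AS_NODE=1 makes process.execPath (the Electron binary) behave like plain Node.

```typescript
import path from "node:path";

// Hypothetical sketch: build the spawn invocation for the bundled npm CLI.
// With ELECTRON_RUN_AS_NODE=1, process.execPath runs as plain Node, so the
// GUI process PATH never matters. The npm path below is an assumed layout.
function buildBundledNpmInvocation(resourcesPath: string, args: string[]) {
  const npmCli = path.join(resourcesPath, "npm", "bin", "npm-cli.js");
  return {
    command: process.execPath,
    argv: [npmCli, ...args],
    env: { ...process.env, ELECTRON_RUN_AS_NODE: "1" },
  };
}
```

The returned triple would then be handed to child_process.spawn for install/update/publish flows.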
macOS Window Behavior
On macOS, closing the window hides it instead of quitting. The app stays in the background with a status bar icon. Cmd+Q or the menu "Quit" action triggers actual quit.
On Windows and Linux, closing the last window destroys it but leaves the process alive in the background. Chatons now clears the main-window reference on actual destruction so activation, notifications, and deep-link handling can recreate a fresh window instead of targeting a stale destroyed instance.
Renderer (src/App.tsx)
The renderer blocks the main UI behind a LoadingSplash component until workspace hydration completes. The loading screen shows the mascot video and rotating cat-themed messages.
Once hydrated, AppShell renders the sidebar, topbar, main view, and composer. If onboarding has not been completed (no providers configured), the OnboardingFlow is shown instead.
5. Pi Integration
Chatons uses a dedicated Pi directory: <userData>/.pi/agent
This is not the user's global ~/.pi directory. The app forces PI_CODING_AGENT_DIR to its own managed location.
Bootstrap (ensurePiAgentBootstrapped)
Creates the following if missing:
| Path | Purpose |
|---|---|
<userData>/.pi/agent/settings.json | Pi settings (enabledModels, etc.) |
<userData>/.pi/agent/models.json | Provider definitions and model metadata |
<userData>/.pi/agent/auth.json | Provider credentials (API keys, OAuth tokens) |
<userData>/.pi/agent/sessions/ | Pi session state |
<userData>/.pi/agent/worktrees/chaton/ | Git worktree storage |
<userData>/.pi/agent/bin/ | Pi binary fallback |
<userData>/workspace/global | Global workspace directory |
Also syncs API keys between models.json and auth.json, and migrates openai-codex base URLs if needed.
CLI Execution
For Pi CLI commands (model sync, skill management):
- Prefers the bundled @mariozechner/pi-coding-agent/dist/cli.js
- Falls back to <userData>/.pi/agent/bin/pi
All CLI executions force PI_CODING_AGENT_DIR to the managed directory.
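The resolution order and the forced environment can be sketched like this. Function names are illustrative assumptions; the paths follow the description above.

```typescript
import path from "node:path";
import fs from "node:fs";

// Illustrative resolution order for the Pi CLI, per the list above.
function resolvePiCli(appRoot: string, piAgentDir: string): string {
  const bundled = path.join(
    appRoot, "node_modules", "@mariozechner", "pi-coding-agent", "dist", "cli.js",
  );
  if (fs.existsSync(bundled)) return bundled; // preferred: bundled JS entry point
  return path.join(piAgentDir, "bin", "pi");  // fallback: standalone pi binary
}

// Every CLI execution forces the managed agent directory.
function piCliEnv(piAgentDir: string): Record<string, string | undefined> {
  return { ...process.env, PI_CODING_AGENT_DIR: piAgentDir };
}
```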
Internal one-shot LLM tasks that need the same runtime semantics as a normal conversation should not go through this CLI path. Structured memory capture and AI title refinement instead use short-lived hidden Pi runtimes, so model selection, auth reloads, and provider behavior match the main conversation flow.
Model Scoping
Source of truth: the enabledModels array in settings.json
When a user stars/unstars a model in the UI, the actual Pi settings file is updated. This is consistent across onboarding, the composer model picker, and provider settings.
For new conversation creation, the saved composer model selection is also reused as the fallback conversation model when a UI entry point creates a thread without explicitly passing modelProvider and modelId. This keeps the conversation row, the visible picker, and the first local Pi runtime session aligned.
For full details, see Pi Integration.
6. Per-Conversation Runtimes
PiSessionRuntimeManager in electron/pi-sdk-runtime.ts creates one PiSdkRuntime per conversation.
Internal Multi-Agent Flow
Chatons now uses ACP as the internal control plane for multi-agent coordination during local coding sessions.
Design intent:
- Pi remains the only execution runtime.
- ACP is only the typed message/task/status/result layer between cooperating roles.
- The orchestrator and its subagents are all attached to a single conversation thread.
- SQLite is the durable source of truth for ACP message history, agent state, and task-list history.
Current implementation path:
- electron/core-tools.ts still exposes the orchestration tools (create_task_list, spawn_subagent, run_subagent, run_subagents, and status/result helpers).
- electron/pi-sdk-runtime.ts maps runtime-backed subagents into ACP agent registration and status/result updates.
- electron/acp/store.ts stores ACP envelopes in acp_messages, agent state in acp_agent_states, and task history in acp_task_lists.
- electron/acp/router.ts broadcasts chaton:acp:event so the renderer can update in real time.
- src/hooks/use-conversation-side-panel.tsx rehydrates ACP state per conversation and keeps the existing task/subagent panel synchronized with persisted orchestration data.
This keeps ACP bounded and auditable. It is not intended as a second chat transcript or as an unbounded free-chat network of hidden agents.
This local runtime lifecycle applies only to local conversations. Cloud conversations may appear in the same workspace state, but they must not start a local Pi runtime. Their source of truth is the connected cloud instance plus its authenticated bootstrap payload. Remote execution now lives in apps/runtime-headless/server.ts, which materializes a managed Pi agent directory from the internal control-plane access grant and runs a real Pi session server-side. For repository-backed cloud projects, that remote runtime now maintains a shared project source checkout and a per-conversation Git worktree instead of a session-local clone. The authenticated cloud account now also carries a subscription tier and current usage counters, which the desktop exposes in Settings while the cloud control plane remains the only authority for quota enforcement and session admission.
Cloud projects, cloud conversations, and cloud message history are all owned by the cloud control plane. The Electron main process can mirror them into local SQLite for startup hydration and renderer compatibility, but cloud creation and transcript persistence must go through cloud-api rather than being synthesized locally.
Cloud memory now follows the same rule. cloud-api owns durable memory persistence and stats, while runtime-headless exposes the same memory tool contract as desktop by forwarding remote tool calls to authenticated cloud memory routes.
The Postgres-backed cloud control plane now also stores normalized organization, membership, and provider rows alongside the older compatibility workspace blob. Access checks for the main project/conversation/message paths have started moving to membership-based joins, and the main cloud entity tables now also persist explicit organization_id values. The current compatibility bootstrap still exposes a single organization view, selected from the caller's memberships with owner-first ordering, so the full shared-organization data model migration is still in progress.
The browser-facing cloud.chatons.ai surface follows the same rule: signup, login, organization setup, provider setup, and desktop handoff are now implemented inside the landing/ app, but they still call back into cloud-api as the source of truth. The landing app is presentation and browser-session glue, not a second control plane.
In production, that landing client should default its web auth/bootstrap base URL to https://cloud.chatons.ai; alternate environments can still override it through VITE_CHATONS_CLOUD_API_URL.
For the default hosted Kubernetes layout, cloud.chatons.ai is also the canonical public base URL configured into cloud-api, while api.chatons.ai may exist as an alias to the same service and realtime.chatons.ai remains the dedicated websocket host.
The cloud control plane now also publishes explicit public API, realtime, and runtime URLs in bootstrap state. Desktop Chatons should persist those server-owned endpoints with the cloud instance and use them for websocket/runtime routing instead of deriving sibling ports locally.
The web auth surface now also owns password-based login, email verification, and password reset initiation, but those flows remain server-driven by cloud-api, including token issuance and SMTP-backed mail delivery.
Desktop cloud sign-in now reuses that same browser auth surface instead of collecting identity inline on /oidc/authorize. cloud-api issues a browser-session cookie during web login/signup, redirects unauthenticated OIDC authorize requests back through the normal login page with a return_to parameter, and renders /oidc/authorize as a consent page for the already signed-in web user. Because the hosted cloud.chatons.ai ingress may terminate directly on cloud-api, the control plane should also keep fallback GET renderers for /cloud/login and /cloud/signup so the browser redirect target remains valid even when the richer landing frontend is not serving that host.
Cloud subscription enforcement is also now split between the durable paid/default plan on the user record and an optional admin-granted complimentary subscription window. When such a grant is active, it becomes the effective plan used by account responses, UI display, quota checks, and runtime admission until it expires or is replaced.
Working Directory Selection
The runtime cwd is chosen in order:
- Conversation worktree path (if worktree is enabled for this thread)
- Project repository path (if conversation is project-linked)
- Global workspace directory (<userData>/workspace/global)
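The fallback chain above can be sketched in a few lines. Field names here are illustrative, not the real conversation row shape.

```typescript
// Minimal sketch of the runtime cwd fallback order; assumed field names.
interface RuntimeCwdInput {
  worktreeEnabled?: boolean;
  worktreePath?: string;
  projectRepoPath?: string;
}

function resolveRuntimeCwd(conv: RuntimeCwdInput, globalWorkspaceDir: string): string {
  if (conv.worktreeEnabled && conv.worktreePath) return conv.worktreePath; // 1. worktree
  if (conv.projectRepoPath) return conv.projectRepoPath;                   // 2. project repo
  return globalWorkspaceDir;                                               // 3. global workspace
}
```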
Project worktrees are only persisted after Chatons confirms the created directory is a valid Git worktree/repository. Failed creation attempts are cleaned up instead of leaving an empty folder recorded as the conversation runtime cwd.
Access Mode
| Mode | Tool cwd | Effect |
|---|---|---|
secure | Conversation working directory | AI restricted to project scope |
open | Filesystem root (/) | AI has full access |
Changing access mode on an existing thread restarts that runtime. Chatons also sends a hidden technical system steer to the agent after restart so it can adapt to the new filesystem boundary without exposing that bookkeeping message in the user-visible transcript.
Meta-Harness Integration
Chatons now includes a first-phase Meta-Harness implementation around the local Pi runtime.
Current architecture:
- electron/meta-harness/types.ts defines the typed HarnessCandidate surface
- electron/meta-harness/bootstrap.ts performs the bounded environment-bootstrap probe and returns additive prompt sections
- electron/meta-harness/archive.ts stores candidates, prompt text, scores, traces, and frontier metadata under the managed Pi directory
- electron/pi-sdk-runtime.ts now maps harness tool permissions and hook policies onto Pi's native beforeToolCall/afterToolCall primitives
- electron/meta-harness/evaluator.ts evaluates stored candidates against a narrow benchmark in isolated ephemeral local conversations
- electron/core-tools.ts exposes maintainer-facing Meta-Harness tools for listing candidates, inspecting the frontier, storing candidates, promoting one as active, and running evaluation
- electron/ipc/pi.ts exposes lightweight IPC readers for candidate/frontier inspection
The runtime seam remains electron/pi-sdk-runtime.ts. Harness application happens before session creation finishes so the environment snapshot can be inserted before the first model turn. This implementation is intentionally bounded: it optimizes harness behavior around a fixed model rather than letting a proposer rewrite arbitrary runtime files.
Prompt Composition
At session creation, Chatons injects additional system prompts covering:
- Current access mode and its constraints
- Thread action suggestion format
- How the model should explain secure-mode limitations
The runtime exposes get_access_mode so the model can re-check the live mode.
Conversation memory retrieval is no longer injected automatically at turn start. The memory extension exposes runtime tools such as memory.search and memory.get, and the model is expected to call them only when recalled context is likely to help with the active request. This avoids contaminating the opening prompt and reduces false assumptions caused by stale memory being present before the model has decided it needs it.
Event Bridge
Pi runtime events are forwarded to the renderer via pi:event IPC channel. Events include message lifecycle, tool execution, compaction, retry, extension UI requests, and runtime status/errors. Events are also logged through the logging pipeline with source: "pi".
The workspace handler layer also registers a runtime event subscription. That subscription is explicitly disposed during shutdown so same-process reload paths do not accumulate duplicate Pi listeners.
Settings Lock Handling
settings.json.lock files older than 5 minutes are cleaned up on startup. SettingsManager.create() is retried with exponential backoff (100ms, 200ms, 400ms) on lock contention.
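The retry policy can be sketched as a generic wrapper. This is an illustration of the stated backoff schedule (100ms, 200ms, 400ms), not the real SettingsManager API.

```typescript
// Illustrative retry wrapper matching the backoff described above.
async function withLockRetry<T>(
  create: () => Promise<T>,
  delaysMs: number[] = [100, 200, 400],
): Promise<T> {
  let lastError: unknown;
  for (const delay of [0, ...delaysMs]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      return await create(); // e.g. SettingsManager.create()
    } catch (err) {
      lastError = err; // assume lock contention and retry after backoff
    }
  }
  throw lastError;
}
```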
7. Provider and Auth System
Credential Storage
| File | Content |
|---|---|
models.json | Provider definitions, model metadata, optionally API keys |
auth.json | Canonical credential storage (API keys and OAuth tokens) |
Chatons synchronizes credentials between both files to maintain compatibility.
The provider models arrays remain authoritative in models.json even after API keys are migrated to auth.json; normalization must not remove models simply because credentials now live only in auth.json.
For custom providers with explicit models, Chatons must also keep models.json valid for Pi's ModelRegistry even when the true secret is stored only in auth.json. If models.json omits apiKey entirely for such a provider, Pi can reject the whole custom provider registry and surface only built-in models.
Known local no-auth providers such as lmstudio, ollama, local, and localhost are a special case: Chatons strips any stale api_key entry for them during sync and runtime startup so local OpenAI-compatible endpoints can run without an Authorization header when the backend allows it.
For those providers, Chatons may still persist apiKey: "!" in models.json as a Pi-compatibility placeholder when the provider defines explicit custom models. That sentinel must remain a literal placeholder during runtime model resolution; if it is interpreted as a shell command, Pi drops the provider fallback credential and wrongly rejects model switches with No API key.
OAuth Providers
| Provider | Flow | Details |
|---|---|---|
ChatGPT (openai-codex) | PKCE + local HTTP server on port 1455 | Browser callback handled by Pi SDK |
Claude Pro (anthropic) | PKCE, user pastes code | User copies code from claude.ai |
GitHub Copilot (github-copilot) | Device flow | User enters code on github.com |
OAuth credentials are stored in auth.json as { type: "oauth", access, refresh, expires }.
Provider Cards
Provider selection in onboarding and settings groups cards by company:
- OpenAI group: "OpenAI" (API key, api.openai.com/v1) and "ChatGPT" (OAuth, chatgpt.com/backend-api)
- Mistral group: "Mistral" (api.mistral.ai/v1) and "Mistral Vibe" (vibe.mistral.ai/v1)
Base URL Normalization
When saving a provider, Chatons probes multiple URL variants (http://host:port, with /, with /v1, with /v1/) and scores candidates using compatibility endpoints (/models, /chat/completions, /responses).
This avoids selecting a base URL that answers /models but fails chat generation endpoints (for example missing /v1 leading to Cannot POST /chat/completions).
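The variant-generation step can be sketched as below; the function name is an assumption, and the real prober additionally scores each candidate against the compatibility endpoints.

```typescript
// Sketch of the base-URL variant generation only; scoring against
// /models, /chat/completions and /responses is handled elsewhere.
function baseUrlCandidates(input: string): string[] {
  const trimmed = input.replace(/\/+$/, "");  // drop trailing slashes
  const root = trimmed.replace(/\/v1$/, "");  // drop an existing /v1 suffix
  return Array.from(new Set([root, `${root}/`, `${root}/v1`, `${root}/v1/`]));
}
```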
For custom HTTP providers, this probing and model discovery runs through a Node http/https transport in the main process instead of ambient fetch. This was required because packaged Electron runs could fail against local-network endpoints even when the same binary worked in terminal/dev mode.
Auth Diagnostics
On 401 errors, Chatons logs a sanitized debug trail (provider ID, credential source, masked key fingerprint). Raw keys are never logged.
8. Renderer State
Workspace Store
Main state management files:
| File | Purpose |
|---|---|
src/features/workspace/store.tsx | Store hook and WorkspaceProvider |
src/features/workspace/store/provider.tsx | State provider implementation |
src/features/workspace/store/state.ts | State shape and initial values |
src/features/workspace/store/pi-events.ts | Pi runtime event handlers |
src/features/workspace/store/context.ts | React context |
Responsibilities: hydrate projects/conversations/settings, track Pi runtime state per conversation, insert optimistic user messages, apply runtime events to message state, coordinate notices and extension interactions.
Composer Behavior
- Queueing: Messages sent while the AI is busy are queued, not dropped. Queue is visible and editable.
- Thread actions: The runtime can emit set_thread_actions with up to 4 action badges. These appear above the textarea and clear on next send.
- Attachments: Images become image payloads, small text files become inline text, binary/large files become base64 previews.
9. Data Model (SQLite)
Migrations live in electron/db/migrations/. Currently 14 migration files.
Tables
| Table | Purpose |
|---|---|
projects | Imported project repositories |
conversations | Conversation metadata, model selection, access mode, worktree path |
conversation_messages_cache | Cached message payloads |
app_settings | Key-value app settings |
pi_models_cache | Cached model list from providers |
quick_actions_usage | Quick action usage tracking |
extension_kv | Extension key-value storage (JSON values) |
extension_queue | Extension job queue (queued/processing/done/dead) |
automation_rules | Automation rule definitions |
automation_runs | Automation execution history |
memory_entries | Memory system entries with embeddings |
project_custom_terminal_commands | Custom terminal commands per project |
composer_drafts | Draft messages in the composer |
Key Conversation Fields
Model choice, worktree state, and access mode are stored per conversation in the database -- they are persistent, not ephemeral UI state:
- project_id, model_provider, model_id, thinking_level
- worktree_path, access_mode
- Runtime error state
10. Project Terminal
Implemented in electron/ipc/workspace.ts (buildDetectedProjectCommands, detectProjectType) and src/components/shell/ProjectTerminalDialog.tsx.
Project Type Detection
detectProjectType() checks for:
| File | Type |
|---|---|
package.json | node |
pyproject.toml, requirements.txt, setup.py | python |
Cargo.toml | rust |
go.mod | go |
CMakeLists.txt, Makefile, makefile | c |
Command Generation
Based on type, buildDetectedProjectCommands() generates commands like npm run dev, cargo test, python manage.py runserver, etc. Custom commands are also supported and stored per project.
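The mapping can be sketched as a simple switch. The exact command lists are illustrative, loosely based on the examples above; the real buildDetectedProjectCommands generates more variants per project.

```typescript
// Hedged sketch of type-to-commands mapping; command lists are assumptions.
type ProjectType = "node" | "python" | "rust" | "go" | "c";

function defaultCommands(type: ProjectType): string[] {
  switch (type) {
    case "node":   return ["npm install", "npm run dev", "npm test"];
    case "python": return ["pip install -r requirements.txt", "python -m pytest"];
    case "rust":   return ["cargo build", "cargo test", "cargo run"];
    case "go":     return ["go build ./...", "go test ./..."];
    case "c":      return ["make", "make test"];
  }
}
```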
Execution Model
- Commands run in the host environment, not through Pi
- Requires open access mode
- Output is streamed live
- Multiple concurrent runs are tracked with tabs
- Not an interactive PTY terminal -- it is a managed process runner with streamed output
11. Extension Platform
Full documentation: Extensions, Extensions API
Architecture
- Extensions live in ~/.chaton/extensions/<extension-id>/
- Built-in extensions (@chaton/automation, @chaton/memory, @chaton/ide-launcher, @chaton/tps-monitor, etc.) are bundled in electron/extensions/builtin/
- Extension manifest: chaton.extension.json
- Runtime: electron/extensions/runtime/ (host, storage, types, capabilities, UI bridge, automation, memory)
- API surface: window.chaton.* (not window.chatonExtension.api.*)
Capabilities
13 capabilities: ui.menu, ui.mainView, storage.kv, storage.files, events.subscribe, events.publish, queue.publish, queue.consume, llm.tools, host.notifications, host.conversations.read, host.conversations.write, host.projects.read
Extensions can now also contribute ui.topbarItems for lightweight topbar actions without adding a sidebar entry.
Channel Extensions
Extensions with kind: "channel" appear under the Channels sidebar entry. They bridge external messaging platforms into Chatons global threads. See Channels.
The local @thibautrey/chatons-channel-even-realities channel exposes a local HTTP server on http://127.0.0.1:42619/messages for Even Realities glasses, routes incoming POST payloads into global channel conversations, and ignores the incoming transport model field so the Chatons conversation model remains the source of truth. Its extension server also relies on a readyUrl healthcheck, and Chatons now reuses that already-live endpoint across reloads instead of trying to bind a second process to port 42619.
Conversations created by that channel also receive a dedicated Pi runtime prompt section telling the agent it is replying for smart glasses and should keep answers very short and quick by default.
12. Built-in Memory Extension
Extension ID: @chaton/memory
Implemented in electron/extensions/runtime/memory.ts.
Search Engine
Uses a local hashed trigram vector strategy:
- Text is normalized (NFKC, lowercase, whitespace collapsed)
- Character trigrams are extracted
- Each trigram is hashed (FNV-1a) to a position in a 256-dimension vector
- Vectors are L2-normalized
- Search uses cosine similarity
This is entirely offline -- no external embedding service or model download needed. It is practical for factual memory lookup but is not equivalent to a neural embedding model.
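The pipeline above can be sketched end to end. This is a simplified illustration of the described steps, not the exact memory.ts implementation.

```typescript
const DIMS = 256;

// Sketch of the offline strategy above: normalize, extract character
// trigrams, hash each with FNV-1a into a 256-dimension vector,
// L2-normalize, then compare with cosine similarity.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // 32-bit FNV-1a step
  }
  return h;
}

function embed(text: string): Float64Array {
  const norm = text.normalize("NFKC").toLowerCase().replace(/\s+/g, " ").trim();
  const vec = new Float64Array(DIMS);
  for (let i = 0; i + 3 <= norm.length; i++) {
    vec[fnv1a(norm.slice(i, i + 3)) % DIMS] += 1; // bucket the trigram
  }
  let len = 0;
  for (let i = 0; i < DIMS; i++) len += vec[i] * vec[i];
  len = Math.sqrt(len);
  if (len > 0) for (let i = 0; i < DIMS; i++) vec[i] /= len; // L2-normalize
  return vec;
}

function cosine(a: Float64Array, b: Float64Array): number {
  let dot = 0;
  for (let i = 0; i < DIMS; i++) dot += a[i] * b[i];
  return dot; // both vectors are already unit length
}
```

Because vectors are pre-normalized, similarity reduces to a dot product, which keeps search over many stored entries cheap.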
Automation Suggestions
After completed conversations, Chatons now runs a lightweight background analyzer that looks for repeated request patterns with a strong automation upside.
- It runs alongside structured memory capture, after normal conversation completion
- It stores small local counters in app settings rather than full extra transcripts
- It only suggests from a small allowlist of patterns that Chatons is explicitly prepared to automate
- It suppresses suggestions when a matching automation already exists and rate-limits repeated prompts
- Suggestion notifications deep link into the automation main view with a prefilled draft target
Scopes
- global: Personal facts, preferences, user context
- project: Project-specific decisions, conventions, architecture notes
13. Worktrees
Current State
- Disabled by default per conversation
- Enabled explicitly via the topbar branch icon
- Supports commit, merge to base branch, push, and VS Code integration
- Worktree paths are stored in the conversation database row
- Orphaned worktrees are cleaned up on app startup
- Worktree removal prefers git worktree remove --force when native Git is available
Limitations
- Some metadata (ahead/behind counts) is approximate
- Push may be unavailable depending on environment
- Automatic worktree merge is only enabled when native Git is available
- Should not be treated as fully authoritative Git tooling
14. Updates
Implemented in electron/lib/update/update-service.ts.
- Checks GitHub releases for the thibautrey/chaton repository
- Compares against app.getVersion()
- Downloads DMGs/installers to <userData>/updates/
- On macOS, opens the DMG in Finder
- Cleans update artifacts from <userData>/updates/ after a successful install is detected on a later launch
- Purges stale update artifacts that are older than 7 days
- Changelogs are prefetched from GitHub on startup and stored locally
15. Telemetry
Implemented in electron/lib/telemetry/sentry.ts.
- Backend: Sentry
- Not initialized in dev mode
- Gated by the allowAnonymousTelemetry setting (user consent required)
- Captures: uncaught exceptions, unhandled rejections, render process crashes, unresponsive events
- Renderer errors forwarded via IPC
- Consent can be changed anytime in Settings > Sidebar
16. Deep Links
Chatons registers the chatons:// protocol.
Currently supported routes:
- chatons://extensions/install/@scope/package-name
- chatons://cloud/connect?base_url=https://cloud.chatons.ai
- chatons://cloud/auth/callback?...
On macOS: handled via open-url event. On Windows/Linux: handled via single-instance lock and command-line argv.
For dev-only desktop automation, Chatons may skip the normal single-instance lock when launched with CHATON_ALLOW_AUTOMATION_INSTANCE=1. This is reserved for QA harnesses that need to launch a second real Electron window alongside an already-running dev session.
Pending deep links that arrive before the window is ready are queued and processed after a 1.5-second delay.
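A queue with that behavior can be sketched as below; the class and method names are assumptions, and the 1.5-second delay mirrors the text.

```typescript
// Illustrative queue for deep links that arrive before the window exists.
class PendingDeepLinks {
  private pending: string[] = [];
  private ready = false;

  constructor(private handle: (url: string) => void) {}

  push(url: string): void {
    if (this.ready) this.handle(url); // window ready: dispatch immediately
    else this.pending.push(url);      // otherwise queue for later
  }

  windowReady(delayMs = 1500): void {
    setTimeout(() => {
      this.ready = true;
      for (const url of this.pending.splice(0)) this.handle(url);
    }, delayMs);
  }
}
```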
17. Sandbox Manager
Initialized on startup. Provides sandboxed execution for Node.js and Python commands via electron/lib/sandbox/sandbox-manager.ts.
Available Methods
| Method | Purpose |
|---|---|
executeNodeCommand(command, args, cwd, timeout) | Run a Node.js command |
executeNpmCommand(args, cwd) | Run an npm command |
executePythonCommand(args, cwd, timeout) | Run a Python command |
executePipCommand(args, cwd) | Run a pip command |
checkNodeAvailability() | Returns { available, version } |
checkPythonAvailability(cwd) | Returns Python availability info |
cleanup() | Releases both sandbox environments |
IPC channels: sandbox:executeNodeCommand, sandbox:executeNpmCommand, sandbox:executePythonCommand, sandbox:executePipCommand, sandbox:checkNodeAvailability, sandbox:checkPythonAvailability, sandbox:cleanup.
Internally delegates to NodeSandbox and PythonSandbox classes.
The Node sandbox resolves node through Electron's embedded runtime and prefers the bundled npm CLI in packaged builds before falling back to any system npm.
18. Logging System
Implemented in electron/lib/logging/log-manager.ts.
Log Format
Each log entry is a JSON line:
{
"timestamp": "2026-03-10T14:30:00.000Z",
"source": "electron",
"level": "info",
"message": "Session started",
"data": {}
}
Sources: electron, pi, frontend. Levels: info, warn, error, debug.
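The entry shape above maps directly to a small serializer; this is a sketch of the format, with an assumed function name rather than the log-manager API.

```typescript
// Sketch of serializing one entry in the JSON-line format shown above.
type LogSource = "electron" | "pi" | "frontend";
type LogLevel = "info" | "warn" | "error" | "debug";

interface LogEntry {
  timestamp: string;
  source: LogSource;
  level: LogLevel;
  message: string;
  data: Record<string, unknown>;
}

function toLogLine(
  source: LogSource,
  level: LogLevel,
  message: string,
  data: Record<string, unknown> = {},
): string {
  const entry: LogEntry = {
    timestamp: new Date().toISOString(),
    source,
    level,
    message,
    data,
  };
  return JSON.stringify(entry); // one JSON object per line
}
```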
Storage
- Directory:
<userData>/logs/ - Filename pattern:
chaton-YYYY-MM-DDTHH-MM-SS-SSSZ.log - Max file size: 1 MB (then rotates to a new file)
- Max log files: 5 (oldest are deleted on startup)
How It Works
- On initialization, captureConsoleLogs() wraps console.log/warn/error/debug to capture all main-process output
- Logs are buffered in memory (flushed every 10 entries or on shutdown)
- Pi runtime events are logged with source: "pi"
- Renderer errors are forwarded via IPC with source: "frontend"
IPC Channels
| Channel | Purpose |
|---|---|
logs:getLogs | Read recent log entries (default: last 100) |
logs:clearLogs | Delete current log file and clear buffer |
logs:getLogFilePath | Return current log file path |
19. Performance Tracing
tracing:start and tracing:stop IPC handlers use Electron's contentTracing API to capture Chrome-level performance traces.
- tracing:start begins recording with default trace categories
- tracing:stop stops recording and returns the path to the trace file
- The trace file can be loaded in chrome://tracing for analysis
- Only one tracing session can be active at a time
Useful for diagnosing renderer performance issues, animation jank, or IPC bottlenecks.
20. Local Provider Detection
Chatons can detect locally running AI providers during onboarding.
Ollama Detection (ollama:detect)
- Check binary: command -v ollama (Unix) / where ollama (Windows)
- Check API: GET http://127.0.0.1:11434/api/tags
- Returns { installed, apiRunning, baseUrl: "http://localhost:11434/v1" }
LM Studio Detection (lmstudio:detect)
- Check app path per platform:
  - macOS: /Applications/LM Studio.app
  - Windows: %LOCALAPPDATA%\Programs\LM Studio
  - Linux: ~/LM-Studio or ~/Applications/LM-Studio
- Check API: GET http://127.0.0.1:1234/v1/models
- Returns { installed, apiRunning, baseUrl: "http://localhost:1234/v1" }
VS Code Detection (vscode:detect)
Checks for code binary via which code (macOS/Linux) or where code (Windows). Used for the worktree "Open in VS Code" integration.
21. Internationalization (i18n)
Setup
- Library: i18next with react-i18next
- Configuration: src/lib/i18n.ts
- Default language: fr (French)
- Fallback language: en (English)
Translation Pattern
Source strings are in French. English translations are provided in the en resource block. Both languages share the same keys.
```ts
const { t } = useTranslation()
// t('Nouvelle conversation') -> "New conversation" (en) / "Nouvelle conversation" (fr)
```
Language Persistence
Language preference is stored in the SQLite app_settings table via:
- `settings:getLanguagePreference` (read)
- `settings:updateLanguagePreference` (write)
The preference persists across app restarts.
22. Composer Drafts
Draft messages are persisted to the composer_drafts SQLite table.
Key Format
- Active conversation: keyed by conversation ID
- Draft project thread: keyed by `draft:<projectId>`
- Global draft: keyed by `"global"`
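A minimal helper implementing this key scheme might look like the following (the helper name is hypothetical, not taken from the codebase):

```typescript
// Hypothetical helper mirroring the draft key scheme above.
function composerDraftKey(opts: { conversationId?: string; projectId?: string }): string {
  if (opts.conversationId) return opts.conversationId;  // active conversation
  if (opts.projectId) return `draft:${opts.projectId}`; // draft project thread
  return 'global';                                      // global draft
}
```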
IPC Channels
| Channel | Purpose |
|---|---|
composer:saveDraft | Save draft (key, content) |
composer:getDraft | Load draft by key |
composer:getAllDrafts | Load all drafts |
composer:deleteDraft | Delete draft by key |
Drafts are saved automatically as the user types and restored when switching between conversations.
23. Conversation Completion Chime
Implemented in src/lib/audio/conversation-success-chime.ts.
- Plays a `.wav` file at 24% volume when a conversation action completes
- 1.5-second cooldown between plays (prevents rapid-fire)
- Controlled by the `enableConversationChime` setting (default: `true`)
- Configurable in Settings > Audio
- Fails silently if the browser autoplay policy blocks audio
24. Quick Action Usage Tracking
Quick action cards (shown in empty conversations) are ordered by a decay-scored usage ranking.
Algorithm
- Each usage is recorded in the `quick_actions_usage` table
- Score decays exponentially with a 14-day half-life:
  `decayedScore = storedScore * exp(-ln(2) * elapsed / HALF_LIFE_MS)`
- Higher-scored actions appear first
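The decay formula translates directly into code (constants per the description above; the function name is ours):

```typescript
const HALF_LIFE_MS = 14 * 24 * 60 * 60 * 1000; // 14-day half-life

// Exponential decay: the stored score halves every 14 days.
function decayedScore(storedScore: number, elapsedMs: number): number {
  return storedScore * Math.exp(-Math.LN2 * (elapsedMs / HALF_LIFE_MS));
}
```

At `elapsedMs = 0` the score is unchanged; after exactly one half-life it is halved.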
IPC
- `quickActions:recordUse` -- Record a usage event
- `quickActions:listUsage` -- Read usage data for all actions
Scope Filtering
Quick actions declare a scope that controls when they appear:
| Scope | When Shown |
|---|---|
always | Always visible |
global-thread | Only in global (non-project) threads |
project-thread | Only in project threads |
global-or-no-thread | In global threads or when no conversation is selected |
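The scope table above can be expressed as a predicate. This is a sketch: the type and function names are illustrative, not the real ones:

```typescript
type QuickActionScope = 'always' | 'global-thread' | 'project-thread' | 'global-or-no-thread';

interface ThreadContext {
  hasConversation: boolean; // a conversation is currently selected
  isProjectThread: boolean; // the selected conversation belongs to a project
}

// Predicate implementing the scope table: one case per scope value.
function isQuickActionVisible(scope: QuickActionScope, ctx: ThreadContext): boolean {
  switch (scope) {
    case 'always':
      return true;
    case 'global-thread':
      return ctx.hasConversation && !ctx.isProjectThread;
    case 'project-thread':
      return ctx.hasConversation && ctx.isProjectThread;
    case 'global-or-no-thread':
      return !ctx.hasConversation || !ctx.isProjectThread;
  }
}
```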
25. Skills Marketplace
The skills marketplace is a structured catalog accessible from the Skills panel.
Data Sources
| IPC Channel | Purpose |
|---|---|
skills:listCatalog | Full skill catalog |
skills:getMarketplace | Curated marketplace (featured, new, trending, by category) |
skills:getMarketplaceFiltered | Filtered marketplace with query, category, language, sort |
skills:getRatings | All ratings (optionally filtered by skill) |
skills:addRating | Submit a rating (1-5 stars + optional review) |
skills:getAverageRating | Average rating for a specific skill |
Marketplace Structure
```ts
{
  featured: ExternalSkill[]   // Curated selection (max 6)
  new: ExternalSkill[]        // Recently added (max 8)
  trending: ExternalSkill[]   // Popular/recommended (max 8)
  byCategory: Array<{ name: string; count: number; items: ExternalSkill[] }>
}
```
Skill Metadata
Each ExternalSkill includes: source, title, description, author, installs, stars, category, tags, language, lastUpdated, featured, popularity (new | trending | popular | recommended), repository, documentation, dependencies, rating.
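The `byCategory` grouping in the marketplace structure can be sketched as follows. This is illustrative only: the real builder also fills the featured/new/trending buckets, and the trimmed-down `ExternalSkill` shape here is an assumption:

```typescript
interface ExternalSkill {
  title: string;
  category: string;
  // ...other metadata fields omitted for brevity
}

// Group skills into { name, count, items } buckets, one per category,
// preserving first-seen category order.
function groupByCategory(skills: ExternalSkill[]) {
  const map = new Map<string, ExternalSkill[]>();
  for (const s of skills) {
    const bucket = map.get(s.category) ?? [];
    bucket.push(s);
    map.set(s.category, bucket);
  }
  return Array.from(map, ([name, items]) => ({ name, count: items.length, items }));
}
```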
26. Extension Marketplace
The extension marketplace follows a similar structure to the skills marketplace.
Data Sources
| Function | Purpose |
|---|---|
listChatonsExtensionCatalog() | Raw catalog entries from npm registry + bundled entries |
getExtensionMarketplace() | Structured marketplace (featured, new, trending, by category) |
getExtensionMarketplaceAsync() | Async version with fresh npm catalog fetch |
checkForExtensionUpdates() | Compare installed versions against catalog |
updateAllChatonsExtensions() | Batch-update all extensions |
Marketplace Structure
Same shape as skills marketplace: featured, new, trending, byCategory.
Extension Catalog Entries
Each entry includes: id, name, version, description, author, category, tags, featured, popularity, lastUpdated, npmUrl, iconUrl.
Update Flow
- `extensions:checkUpdates` compares installed versions against the catalog
- Returns `{ id, currentVersion, latestVersion }[]`
- UI shows an update count badge on the Extensions sidebar entry
- Individual or batch update via `extensions:update` / `extensions:updateAll`
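The comparison step might be sketched like this. It is an assumption-laden illustration: it treats any version-string mismatch as an available update, whereas the real check may use semver ordering:

```typescript
interface InstalledExtension { id: string; version: string }
interface CatalogEntry { id: string; version: string }

// Compare each installed extension against the catalog and return
// entries in the { id, currentVersion, latestVersion } shape.
function findUpdates(installed: InstalledExtension[], catalog: CatalogEntry[]) {
  const latest = new Map(catalog.map((e) => [e.id, e.version]));
  return installed.flatMap((ext) => {
    const latestVersion = latest.get(ext.id);
    return latestVersion && latestVersion !== ext.version
      ? [{ id: ext.id, currentVersion: ext.version, latestVersion }]
      : []; // up to date, or not in the catalog
  });
}
```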
27. Performance Monitor
A debug-only performance monitor is available at window.__perf:
```ts
window.__perf.enable()   // Start recording
window.__perf.disable()  // Stop recording
window.__perf.report()   // Print performance report
```
Implemented in src/features/workspace/store/perf-monitor.ts. Records component render counts and can be used to diagnose UI performance issues.
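In the same spirit, a minimal render-count recorder could look like the following (illustrative only; the real `perf-monitor.ts` likely records more than raw counts):

```typescript
// Minimal enable/disable/report recorder for component render counts.
function createPerfMonitor() {
  let enabled = false;
  const renderCounts = new Map<string, number>();
  return {
    enable() { enabled = true; },
    disable() { enabled = false; },
    // Called from a render hook; ignored while recording is disabled.
    record(component: string) {
      if (!enabled) return;
      renderCounts.set(component, (renderCounts.get(component) ?? 0) + 1);
    },
    report(): Record<string, number> {
      return Object.fromEntries(renderCounts);
    },
  };
}
```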
28. Skills vs Extensions
| | Skills | Extensions |
|---|---|---|
| Managed by | Pi runtime (CLI commands) | Chatons extension registry |
| Storage | Pi skills directory | ~/.chaton/extensions/ |
| Has UI | No | Yes (main views, menu items, quick actions) |
| Has storage API | No | Yes (KV, files, queue) |
| Has events | No | Yes (subscribe/publish) |
| Has LLM tools | Yes (Pi-native) | Yes (via manifest + apiCall) |
If you need a UI, storage, events, or host APIs, build an extension. If you need a simple tool the AI can call, a Pi skill may suffice.
29. Documentation Contract
Any change affecting user workflows, extension APIs, configuration semantics, architecture, or technical limitations must update documentation in the same changeset.
Primary docs to keep aligned:
- User Guide
- Developer Guide (this file)
- Pi Integration
- Extensions
- Automation Extension
- Documentation Audit
See also: AGENTS.md in the repository root for maintainer-facing reference.