Chatons Developer Guide

Architecture and implementation reference for contributors. Verified against the codebase as of March 10, 2026.


1. Architecture Overview

Chatons is an Electron desktop application with six layers:

  • Electron main process -- Boot, IPC, persistence, updates, host integrations, status bar
  • React renderer -- Application UI, state management, i18n
  • Pi runtime bridge -- Per-conversation AI sessions, tool execution
  • ACP orchestration layer -- Typed internal agent-to-agent envelopes, task state, and subagent audit trail
  • SQLite storage -- App data, caches, extension KV/queue, automation rules, memory
  • Extension runtime -- Built-in and user-installed extensions, LLM tools, channels, packaged extension web apps

Key Entry Points

  • electron/main.ts -- App bootstrap, window creation, IPC registration
  • electron/ipc/workspace.ts -- Pi bootstrap, workspace IPC, project terminal
  • electron/ipc/workspace-handlers.ts -- IPC handler registration (all workspace:* channels)
  • electron/pi-sdk-runtime.ts -- Per-conversation Pi session management
  • electron/acp/router.ts -- ACP coordination, persistence fanout, renderer broadcasts
  • electron/acp/store.ts -- ACP SQLite persistence helpers
  • electron/extensions/runtime.ts -- Extension host, event dispatch, storage bridge
  • src/App.tsx -- Renderer entry, loading gate, app shell
  • src/features/workspace/store/provider.tsx -- Workspace state provider

2. App Modes

Chatons has two UI modes, defined in src/features/workspace/types.ts:

type AppMode = 'workspace' | 'assistant'
type AssistantView = 'home' | 'conversations' | 'memory' | 'automations' | 'channels'

The mode is toggled by SidebarModeSwitcher (bottom of sidebar). Default mode is workspace.

Workspace Mode

Traditional coding-focused view: sidebar with conversations/projects, main panel with conversation timeline or settings, composer.

Assistant Mode

Dashboard-focused view with a different sidebar and main panel. Routed by AssistantMainView.tsx:

  • home (AssistantDashboard) -- Overview: greeting, quick actions, channel status, recent activity, memory insights, automations summary
  • conversations -- Switches to workspace mode (no dedicated assistant view)
  • memory (AssistantMemoryView) -- Full memory browser
  • automations (AssistantAutomationsView) -- Full automation rules browser
  • channels (AssistantChannelsView) -- Channel extension list with config

Assistant Onboarding

Separate from provider onboarding. Gated by assistantOnboardingCompleted setting. Three steps:

  1. Channel selection (loads marketplace, identifies channel extensions by category/tags)
  2. Personalization (assistant name, user name)
  3. Confirmation

Stored settings: assistantName, assistantUserName, assistantChannelId, assistantOnboardingCompleted.

Dashboard Components

  • StatusHeader -- Time-based greeting + channel connected count
  • QuickActions -- 4 hardcoded actions (talk, schedule, memory, settings)
  • ChannelsStatus -- Extension list filtered by kind: "channel", queries channelStatus from UI entries
  • RecentActivity -- 5 most recent conversations sorted by lastMessageAt
  • MemoryInsights -- Calls memory.stats plus memory.list on @chaton/memory, shows an accurate count and 5-entry preview
  • AutomationsSummary -- Calls automation.list_scheduled_tasks on @chaton/automation, shows rules

3. Technology Stack

  • Desktop shell -- Electron
  • Renderer UI -- React + TypeScript
  • Localization -- i18next (useTranslation())
  • Database -- SQLite via better-sqlite3
  • AI runtime -- @mariozechner/pi-coding-agent SDK
  • Git support -- GitService in electron/lib/git/
  • Extension host -- Custom runtime in electron/extensions/runtime/
  • Telemetry -- Sentry (@sentry/electron/main), gated by user consent
  • Status bar -- macOS-only tray icon (electron/lib/status-bar.ts)
  • Sandbox -- electron/lib/sandbox/sandbox-manager.ts (Node.js and Python)

4. App Startup Sequence

Main Process (electron/main.ts)

  1. Set app name to Chatons
  2. Register chatons:// protocol handler for deep links
  3. Register chaton-extension:// for packaged extension assets
  4. Override userData path to <appData>/Chatons
  5. Initialize Sentry telemetry (disabled in dev mode, gated by consent)
  6. On app.whenReady():
    • Bootstrap Pi agent directory (ensurePiAgentBootstrapped())
    • Initialize logging
    • Initialize Pi manager
    • Initialize sandbox manager
    • Clean up orphaned worktrees
    • Prefetch changelogs from GitHub
    • Register IPC handlers (workspace, Pi, update)
    • Create main browser window
    • Flush any pending deep link URL
  7. On before-quit: Stop all Pi runtimes
  8. On window-all-closed: Do nothing (app stays alive on all platforms)

Packaged desktop builds also ship a bundled npm CLI in the app resources and run it via process.execPath with ELECTRON_RUN_AS_NODE=1 for extension install/update/publish flows. This keeps those actions independent from the GUI process PATH.

macOS Window Behavior

On macOS, closing the window hides it instead of quitting. The app stays in the background with a status bar icon. Cmd+Q or the menu "Quit" action triggers actual quit.

On Windows and Linux, closing the last window destroys it but leaves the process alive in the background. Chatons now clears the main-window reference on actual destruction so activation, notifications, and deep-link handling can recreate a fresh window instead of targeting a stale destroyed instance.

Renderer (src/App.tsx)

The renderer blocks the main UI behind a LoadingSplash component until workspace hydration completes. The loading screen shows the mascot video and rotating cat-themed messages.

Once hydrated, AppShell renders the sidebar, topbar, main view, and composer. If onboarding has not been completed (no providers configured), the OnboardingFlow is shown instead.


5. Pi Integration

Chatons uses a dedicated Pi directory: <userData>/.pi/agent

This is not the user's global ~/.pi directory. The app forces PI_CODING_AGENT_DIR to its own managed location.

Bootstrap (ensurePiAgentBootstrapped)

Creates the following if missing:

  • <userData>/.pi/agent/settings.json -- Pi settings (enabledModels, etc.)
  • <userData>/.pi/agent/models.json -- Provider definitions and model metadata
  • <userData>/.pi/agent/auth.json -- Provider credentials (API keys, OAuth tokens)
  • <userData>/.pi/agent/sessions/ -- Pi session state
  • <userData>/.pi/agent/worktrees/chaton/ -- Git worktree storage
  • <userData>/.pi/agent/bin/ -- Pi binary fallback
  • <userData>/workspace/global -- Global workspace directory

Also syncs API keys between models.json and auth.json, and migrates openai-codex base URLs if needed.

CLI Execution

For Pi CLI commands (model sync, skill management):

  1. Prefers bundled @mariozechner/pi-coding-agent/dist/cli.js
  2. Falls back to <userData>/.pi/agent/bin/pi

All CLI executions force PI_CODING_AGENT_DIR to the managed directory.
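A minimal sketch of that environment forcing (helper names are illustrative, and whether the Pi CLI reuses the ELECTRON_RUN_AS_NODE trick described for the bundled npm CLI is an assumption here; the real resolution lives in the Electron main process):

```typescript
import { spawn } from "node:child_process";
import * as path from "node:path";

// Hypothetical helper: force PI_CODING_AGENT_DIR to the app-managed agent
// directory instead of the user's global ~/.pi for every CLI invocation.
function buildPiCliEnv(userDataDir: string): Record<string, string | undefined> {
  return {
    ...process.env,
    PI_CODING_AGENT_DIR: path.join(userDataDir, ".pi", "agent"),
  };
}

function runPiCli(cliJsPath: string, args: string[], userDataDir: string) {
  // Reuse the Electron binary as plain Node (assumption, mirroring the
  // documented bundled-npm approach).
  return spawn(process.execPath, [cliJsPath, ...args], {
    env: { ...buildPiCliEnv(userDataDir), ELECTRON_RUN_AS_NODE: "1" },
    stdio: "pipe",
  });
}
```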

Internal one-shot LLM tasks that need the same runtime semantics as a normal conversation should not use this CLI path. Structured memory capture and AI title refinement instead run short-lived hidden Pi runtimes, so model selection, auth reloads, and provider behavior match the main conversation flow.

Model Scoping

Source of truth: settings.json > enabledModels

When a user stars/unstars a model in the UI, the actual Pi settings file is updated. This is consistent across onboarding, the composer model picker, and provider settings.

For new conversation creation, the saved composer model selection is also reused as the fallback conversation model when a UI entry point creates a thread without explicitly passing modelProvider and modelId. This keeps the conversation row, the visible picker, and the first local Pi runtime session aligned.

For full details, see Pi Integration.


6. Per-Conversation Runtimes

PiSessionRuntimeManager in electron/pi-sdk-runtime.ts creates one PiSdkRuntime per conversation.

Internal Multi-Agent Flow

Chatons now uses ACP as the internal control plane for multi-agent coordination during local coding sessions.

Design intent:

  • Pi remains the only execution runtime.
  • ACP is only the typed message/task/status/result layer between cooperating roles.
  • The orchestrator and its subagents are all attached to a single conversation thread.
  • SQLite is the durable source of truth for ACP message history, agent state, and task-list history.

Current implementation path:

  • electron/core-tools.ts still exposes the orchestration tools (create_task_list, spawn_subagent, run_subagent, run_subagents, and status/result helpers).
  • electron/pi-sdk-runtime.ts maps runtime-backed subagents into ACP agent registration and status/result updates.
  • electron/acp/store.ts stores ACP envelopes in acp_messages, agent state in acp_agent_states, and task history in acp_task_lists.
  • electron/acp/router.ts broadcasts chaton:acp:event so the renderer can update in real time.
  • src/hooks/use-conversation-side-panel.tsx rehydrates ACP state per conversation and keeps the existing task/subagent panel synchronized with persisted orchestration data.

This keeps ACP bounded and auditable. It is not intended as a second chat transcript or as an unbounded free-chat network of hidden agents.

This local runtime lifecycle applies only to local conversations. Cloud conversations may appear in the same workspace state, but they must not start a local Pi runtime. Their source of truth is the connected cloud instance plus its authenticated bootstrap payload.

Remote execution now lives in apps/runtime-headless/server.ts, which materializes a managed Pi agent directory from the internal control-plane access grant and runs a real Pi session server-side. For repository-backed cloud projects, that remote runtime now maintains a shared project source checkout and a per-conversation Git worktree instead of a session-local clone.

The authenticated cloud account now also carries a subscription tier and current usage counters, which the desktop exposes in Settings while the cloud control plane remains the only authority for quota enforcement and session admission.

Cloud projects, cloud conversations, and cloud message history are all owned by the cloud control plane. The Electron main process can mirror them into local SQLite for startup hydration and renderer compatibility, but cloud creation and transcript persistence must go through cloud-api rather than being synthesized locally. Cloud memory follows the same rule: cloud-api owns durable memory persistence and stats, while runtime-headless exposes the same memory tool contract as desktop by forwarding remote tool calls to authenticated cloud memory routes.

The Postgres-backed cloud control plane now also stores normalized organization, membership, and provider rows alongside the older compatibility workspace blob. Access checks for the main project/conversation/message paths have started moving to membership-based joins, and the main cloud entity tables now also persist explicit organization_id values. The current compatibility bootstrap still exposes a single organization view, selected from the caller's memberships with owner-first ordering, so the full shared-organization data model migration is still in progress.

The browser-facing cloud.chatons.ai surface follows the same rule: signup, login, organization setup, provider setup, and desktop handoff are now implemented inside the landing/ app, but they still call back into cloud-api as the source of truth. The landing app is presentation and browser-session glue, not a second control plane. In production, that landing client should default its web auth/bootstrap base URL to https://cloud.chatons.ai; alternate environments can still override it through VITE_CHATONS_CLOUD_API_URL.

For the default hosted Kubernetes layout, cloud.chatons.ai is also the canonical public base URL configured into cloud-api, while api.chatons.ai may exist as an alias to the same service and realtime.chatons.ai remains the dedicated websocket host. The cloud control plane now also publishes explicit public API, realtime, and runtime URLs in bootstrap state. Desktop Chatons should persist those server-owned endpoints with the cloud instance and use them for websocket/runtime routing instead of deriving sibling ports locally.

The web auth surface now also owns password-based login, email verification, and password reset initiation, but those flows remain server-driven by cloud-api, including token issuance and SMTP-backed mail delivery. Desktop cloud sign-in now reuses that same browser auth surface instead of collecting identity inline on /oidc/authorize. cloud-api issues a browser-session cookie during web login/signup, redirects unauthenticated OIDC authorize requests back through the normal login page with a return_to parameter, and renders /oidc/authorize as a consent page for the already signed-in web user.

Because the hosted cloud.chatons.ai ingress may terminate directly on cloud-api, the control plane should also keep fallback GET renderers for /cloud/login and /cloud/signup so the browser redirect target remains valid even when the richer landing frontend is not serving that host.

Cloud subscription enforcement is now split between the durable paid/default plan on the user record and an optional admin-granted complimentary subscription window. When such a grant is active, it becomes the effective plan used by account responses, UI display, quota checks, and runtime admission until it expires or is replaced.

Working Directory Selection

The runtime cwd is chosen in order:

  1. Conversation worktree path (if worktree is enabled for this thread)
  2. Project repository path (if conversation is project-linked)
  3. Global workspace directory (<userData>/workspace/global)
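The precedence above can be sketched as follows (field names are illustrative; the real lookup reads the conversation and project rows from SQLite):

```typescript
interface CwdInputs {
  worktreePath?: string;    // set when the worktree toggle is on for this thread
  projectPath?: string;     // set when the conversation is project-linked
  globalWorkspace: string;  // <userData>/workspace/global
}

// Worktree wins over project repo, which wins over the global workspace.
function resolveRuntimeCwd(inputs: CwdInputs): string {
  return inputs.worktreePath ?? inputs.projectPath ?? inputs.globalWorkspace;
}
```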

Project worktrees are only persisted after Chatons confirms the created directory is a valid Git worktree/repository. Failed creation attempts are cleaned up instead of leaving an empty folder recorded as the conversation runtime cwd.

Access Mode

  • secure -- tool cwd is the conversation working directory; the AI is restricted to project scope
  • open -- tool cwd is the filesystem root (/); the AI has full access

Changing access mode on an existing thread restarts that runtime. Chatons also sends a hidden technical system steer to the agent after restart so it can adapt to the new filesystem boundary without exposing that bookkeeping message in the user-visible transcript.

Meta-Harness Integration

Chatons now includes a first-phase Meta-Harness implementation around the local Pi runtime.

Current architecture:

  • electron/meta-harness/types.ts defines the typed HarnessCandidate surface
  • electron/meta-harness/bootstrap.ts performs the bounded environment-bootstrap probe and returns additive prompt sections
  • electron/meta-harness/archive.ts stores candidates, prompt text, scores, traces, and frontier metadata under the managed Pi directory
  • electron/pi-sdk-runtime.ts now maps harness tool permissions and hook policies onto Pi's native beforeToolCall / afterToolCall primitives
  • electron/meta-harness/evaluator.ts evaluates stored candidates against a narrow benchmark in isolated ephemeral local conversations
  • electron/core-tools.ts exposes maintainer-facing Meta-Harness tools for listing candidates, inspecting the frontier, storing candidates, promoting one as active, and running evaluation
  • electron/ipc/pi.ts exposes lightweight IPC readers for candidate/frontier inspection

The runtime seam remains electron/pi-sdk-runtime.ts. Harness application happens before session creation finishes so the environment snapshot can be inserted before the first model turn. This implementation is intentionally bounded: it optimizes harness behavior around a fixed model rather than letting a proposer rewrite arbitrary runtime files.

Prompt Composition

At session creation, Chatons injects additional system prompts covering:

  • Current access mode and its constraints
  • Thread action suggestion format
  • How the model should explain secure-mode limitations

The runtime exposes get_access_mode so the model can re-check the live mode.

Conversation memory retrieval is no longer injected automatically at turn start. The memory extension exposes runtime tools such as memory.search and memory.get, and the model is expected to call them only when recalled context is likely to help with the active request. This avoids contaminating the opening prompt and reduces false assumptions caused by stale memory being present before the model has decided it needs it.

Event Bridge

Pi runtime events are forwarded to the renderer via the pi:event IPC channel. Events include message lifecycle, tool execution, compaction, retry, extension UI requests, and runtime status/errors. Events are also logged through the logging pipeline with source: "pi".

The workspace handler layer also registers a runtime event subscription. That subscription is explicitly disposed during shutdown so same-process reload paths do not accumulate duplicate Pi listeners.

Settings Lock Handling

settings.json.lock files older than 5 minutes are cleaned up on startup. SettingsManager.create() is retried with exponential backoff (100ms, 200ms, 400ms) on lock contention.
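A hedged sketch of that retry pattern; the real code wraps SettingsManager.create(), and the helper below is illustrative:

```typescript
// Retry an operation with the documented backoff schedule (100ms, 200ms, 400ms).
async function withLockRetry<T>(
  operation: () => Promise<T>,
  delaysMs: number[] = [100, 200, 400],
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= delaysMs.length) throw err; // retries exhausted
      await new Promise((resolve) => setTimeout(resolve, delaysMs[attempt]));
    }
  }
}
```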


7. Provider and Auth System

Credential Storage

  • models.json -- Provider definitions, model metadata, optionally API keys
  • auth.json -- Canonical credential storage (API keys and OAuth tokens)

Chatons synchronizes credentials between both files to maintain compatibility. The provider models arrays remain authoritative in models.json even after API keys are migrated to auth.json; normalization must not remove models simply because credentials now live only in auth.json.

For custom providers with explicit models, Chatons must also keep models.json valid for Pi's ModelRegistry even when the true secret is stored only in auth.json. If models.json omits apiKey entirely for such a provider, Pi can reject the whole custom provider registry and surface only built-in models.

Known local no-auth providers such as lmstudio, ollama, local, and localhost are a special case: Chatons strips any stale api_key entry for them during sync and runtime startup so local OpenAI-compatible endpoints can run without an Authorization header when the backend allows it. For those providers, Chatons may still persist apiKey: "!" in models.json as a Pi-compatibility placeholder when the provider defines explicit custom models. That sentinel must remain a literal placeholder during runtime model resolution; if it is interpreted as a shell command, Pi drops the provider fallback credential and wrongly rejects model switches with "No API key".

OAuth Providers

  • ChatGPT (openai-codex) -- PKCE with a local HTTP server on port 1455; browser callback handled by the Pi SDK
  • Claude Pro (anthropic) -- PKCE; user copies a code from claude.ai and pastes it
  • GitHub Copilot (github-copilot) -- Device flow; user enters a code on github.com

OAuth credentials are stored in auth.json as { type: "oauth", access, refresh, expires }.
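The stored shape translates directly into a type; the expiry-check helper below is a hypothetical sketch (the expires unit is assumed to be epoch milliseconds, and the refresh skew is illustrative):

```typescript
interface OAuthCredential {
  type: "oauth";
  access: string;   // current access token
  refresh: string;  // refresh token
  expires: number;  // assumed epoch milliseconds
}

// Treat a token as stale slightly before its actual expiry (illustrative skew).
function needsRefresh(cred: OAuthCredential, nowMs: number, skewMs = 60_000): boolean {
  return cred.expires - skewMs <= nowMs;
}
```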

Provider Cards

Provider selection in onboarding and settings groups cards by company:

  • OpenAI group: "OpenAI" (API key, api.openai.com/v1) and "ChatGPT" (OAuth, chatgpt.com/backend-api)
  • Mistral group: "Mistral" (api.mistral.ai/v1) and "Mistral Vibe" (vibe.mistral.ai/v1)

Base URL Normalization

When saving a provider, Chatons probes multiple URL variants (http://host:port, with /, with /v1, with /v1/) and scores candidates using compatibility endpoints (/models, /chat/completions, /responses).

This avoids selecting a base URL that answers /models but fails chat generation endpoints (for example missing /v1 leading to Cannot POST /chat/completions).

For custom HTTP providers, this probing and model discovery runs through a Node http/https transport in the main process instead of ambient fetch. This was required because packaged Electron runs could fail against local-network endpoints even when the same binary worked in terminal/dev mode.
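The variant expansion can be sketched as follows (the helper name is illustrative, and the scoring against /models, /chat/completions, and /responses is omitted):

```typescript
// Expand a user-entered base URL into the candidate variants described above.
function baseUrlCandidates(input: string): string[] {
  const trimmed = input.trim().replace(/\/+$/, "");   // drop trailing slashes
  const bare = trimmed.replace(/\/v1$/, "");          // drop an existing /v1
  return [...new Set([bare, `${bare}/`, `${bare}/v1`, `${bare}/v1/`])];
}
```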

Auth Diagnostics

On 401 errors, Chatons logs a sanitized debug trail (provider ID, credential source, masked key fingerprint). Raw keys are never logged.


8. Renderer State

Workspace Store

Main state management files:

  • src/features/workspace/store.tsx -- Store hook and WorkspaceProvider
  • src/features/workspace/store/provider.tsx -- State provider implementation
  • src/features/workspace/store/state.ts -- State shape and initial values
  • src/features/workspace/store/pi-events.ts -- Pi runtime event handlers
  • src/features/workspace/store/context.ts -- React context

Responsibilities: hydrate projects/conversations/settings, track Pi runtime state per conversation, insert optimistic user messages, apply runtime events to message state, coordinate notices and extension interactions.

Composer Behavior

  • Queueing: Messages sent while the AI is busy are queued, not dropped. Queue is visible and editable.
  • Thread actions: The runtime can emit set_thread_actions with up to 4 action badges. These appear above the textarea and clear on next send.
  • Attachments: Images become image payloads, small text files become inline text, binary/large files become base64 previews.

9. Data Model (SQLite)

Migrations live in electron/db/migrations/; there are currently 14 migration files.

Tables

  • projects -- Imported project repositories
  • conversations -- Conversation metadata, model selection, access mode, worktree path
  • conversation_messages_cache -- Cached message payloads
  • app_settings -- Key-value app settings
  • pi_models_cache -- Cached model list from providers
  • quick_actions_usage -- Quick action usage tracking
  • extension_kv -- Extension key-value storage (JSON values)
  • extension_queue -- Extension job queue (queued/processing/done/dead)
  • automation_rules -- Automation rule definitions
  • automation_runs -- Automation execution history
  • memory_entries -- Memory system entries with embeddings
  • project_custom_terminal_commands -- Custom terminal commands per project
  • composer_drafts -- Draft messages in the composer

Key Conversation Fields

Model choice, worktree state, and access mode are stored per conversation in the database -- they are persistent, not ephemeral UI state:

  • project_id, model_provider, model_id, thinking_level
  • worktree_path, access_mode
  • Runtime error state

10. Project Terminal

Implemented in electron/ipc/workspace.ts (buildDetectedProjectCommands, detectProjectType) and src/components/shell/ProjectTerminalDialog.tsx.

Project Type Detection

detectProjectType() checks for:

  • package.json -- node
  • pyproject.toml, requirements.txt, setup.py -- python
  • Cargo.toml -- rust
  • go.mod -- go
  • CMakeLists.txt, Makefile, makefile -- c

Command Generation

Based on type, buildDetectedProjectCommands() generates commands like npm run dev, cargo test, python manage.py runserver, etc. Custom commands are also supported and stored per project.

Execution Model

  • Commands run in the host environment, not through Pi
  • Requires open access mode
  • Output is streamed live
  • Multiple concurrent runs are tracked with tabs
  • Not an interactive PTY terminal -- it is a managed process runner with streamed output

11. Extension Platform

Full documentation: Extensions, Extensions API

Architecture

  • Extensions live in ~/.chaton/extensions/<extension-id>/
  • Built-in extensions (@chaton/automation, @chaton/memory, @chaton/ide-launcher, @chaton/tps-monitor, etc.) are bundled in electron/extensions/builtin/
  • Extension manifest: chaton.extension.json
  • Runtime: electron/extensions/runtime/ (host, storage, types, capabilities, UI bridge, automation, memory)
  • API surface: window.chaton.* (not window.chatonExtension.api.*)

Capabilities

13 capabilities: ui.menu, ui.mainView, storage.kv, storage.files, events.subscribe, events.publish, queue.publish, queue.consume, llm.tools, host.notifications, host.conversations.read, host.conversations.write, host.projects.read

Extensions can now also contribute ui.topbarItems for lightweight topbar actions without adding a sidebar entry.

Channel Extensions

Extensions with kind: "channel" appear under the Channels sidebar entry. They bridge external messaging platforms into Chatons global threads. See Channels.

The local @thibautrey/chatons-channel-even-realities channel exposes a local HTTP server on http://127.0.0.1:42619/messages for Even Realities glasses, routes incoming POST payloads into global channel conversations, and ignores the incoming transport model field so the Chatons conversation model remains the source of truth. Its extension server also relies on a readyUrl healthcheck, and Chatons now reuses that already-live endpoint across reloads instead of trying to bind a second process to port 42619.

Conversations created by that channel also receive a dedicated Pi runtime prompt section telling the agent it is replying for smart glasses and should keep answers very short and quick by default.


12. Built-in Memory Extension

Extension ID: @chaton/memory

Implemented in electron/extensions/runtime/memory.ts.

Search Engine

Uses a local hashed trigram vector strategy:

  1. Text is normalized (NFKC, lowercase, whitespace collapsed)
  2. Character trigrams are extracted
  3. Each trigram is hashed (FNV-1a) to a position in a 256-dimension vector
  4. Vectors are L2-normalized
  5. Search uses cosine similarity

This is entirely offline -- no external embedding service or model download needed. It is practical for factual memory lookup but is not equivalent to a neural embedding model.
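A compact sketch of that pipeline, assuming 256 dimensions and 32-bit FNV-1a as stated above (function names and constants are illustrative; the real implementation lives in electron/extensions/runtime/memory.ts):

```typescript
const DIMS = 256;

// 32-bit FNV-1a hash of a string.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by the FNV prime, keep 32 bits
  }
  return h >>> 0;
}

// Normalize, extract character trigrams, bucket them, then L2-normalize.
function embed(text: string): number[] {
  const norm = text.normalize("NFKC").toLowerCase().replace(/\s+/g, " ").trim();
  const vec: number[] = new Array(DIMS).fill(0);
  for (let i = 0; i + 3 <= norm.length; i++) {
    vec[fnv1a(norm.slice(i, i + 3)) % DIMS] += 1; // hash trigram to a bucket
  }
  const len = Math.hypot(...vec) || 1; // L2 norm (avoid divide-by-zero)
  return vec.map((v) => v / len);
}

// Cosine similarity reduces to a dot product on unit-length vectors.
function cosine(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}
```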

Automation Suggestions

After completed conversations, Chatons now runs a lightweight background analyzer that looks for repeated request patterns with a strong automation upside.

  • It runs alongside structured memory capture, after normal conversation completion
  • It stores small local counters in app settings rather than full extra transcripts
  • It only suggests from a small allowlist of patterns that Chatons is explicitly prepared to automate
  • It suppresses suggestions when a matching automation already exists and rate-limits repeated prompts
  • Suggestion notifications deep link into the automation main view with a prefilled draft target

Scopes

  • global: Personal facts, preferences, user context
  • project: Project-specific decisions, conventions, architecture notes

13. Worktrees

Current State

  • Disabled by default per conversation
  • Enabled explicitly via the topbar branch icon
  • Supports commit, merge to base branch, push, and VS Code integration
  • Worktree paths are stored in the conversation database row
  • Orphaned worktrees are cleaned up on app startup
  • Worktree removal prefers git worktree remove --force when native Git is available

Limitations

  • Some metadata (ahead/behind counts) is approximate
  • Push may be unavailable depending on environment
  • Automatic worktree merge is only enabled when native Git is available
  • Should not be treated as fully authoritative Git tooling

14. Updates

Implemented in electron/lib/update/update-service.ts.

  • Checks GitHub releases for the thibautrey/chaton repository
  • Compares against app.getVersion()
  • Downloads DMGs/installers to <userData>/updates/
  • On macOS, opens the DMG in Finder
  • Cleans update artifacts from <userData>/updates/ after a successful install is detected on a later launch
  • Purges stale update artifacts that are older than 7 days
  • Changelogs are prefetched from GitHub on startup and stored locally

15. Telemetry

Implemented in electron/lib/telemetry/sentry.ts.

  • Backend: Sentry
  • Not initialized in dev mode
  • Gated by allowAnonymousTelemetry setting (user consent required)
  • Captures: uncaught exceptions, unhandled rejections, render process crashes, unresponsive events
  • Renderer errors forwarded via IPC
  • Consent can be changed anytime in Settings > Sidebar

16. Deep Links

Chatons registers the chatons:// protocol.

Currently supported routes:

chatons://extensions/install/@scope/package-name
chatons://cloud/connect?base_url=https://cloud.chatons.ai
chatons://cloud/auth/callback?...

On macOS: handled via open-url event. On Windows/Linux: handled via single-instance lock and command-line argv.

For dev-only desktop automation, Chatons may skip the normal single-instance lock when launched with CHATON_ALLOW_AUTOMATION_INSTANCE=1. This is reserved for QA harnesses that need to launch a second real Electron window alongside an already-running dev session.

Pending deep links that arrive before the window is ready are queued and processed after a 1.5-second delay.


17. Sandbox Manager

Initialized on startup. Provides sandboxed execution for Node.js and Python commands via electron/lib/sandbox/sandbox-manager.ts.

Available Methods

  • executeNodeCommand(command, args, cwd, timeout) -- Run a Node.js command
  • executeNpmCommand(args, cwd) -- Run an npm command
  • executePythonCommand(args, cwd, timeout) -- Run a Python command
  • executePipCommand(args, cwd) -- Run a pip command
  • checkNodeAvailability() -- Returns { available, version }
  • checkPythonAvailability(cwd) -- Returns Python availability info
  • cleanup() -- Releases both sandbox environments

IPC channels: sandbox:executeNodeCommand, sandbox:executeNpmCommand, sandbox:executePythonCommand, sandbox:executePipCommand, sandbox:checkNodeAvailability, sandbox:checkPythonAvailability, sandbox:cleanup.

Internally delegates to NodeSandbox and PythonSandbox classes.

The Node sandbox resolves node through Electron's embedded runtime and prefers the bundled npm CLI in packaged builds before falling back to any system npm.


18. Logging System

Implemented in electron/lib/logging/log-manager.ts.

Log Format

Each log entry is a JSON line:

{
  "timestamp": "2026-03-10T14:30:00.000Z",
  "source": "electron",
  "level": "info",
  "message": "Session started",
  "data": {}
}

Sources: electron, pi, frontend. Levels: info, warn, error, debug.

Storage

  • Directory: <userData>/logs/
  • Filename pattern: chaton-YYYY-MM-DDTHH-MM-SS-SSSZ.log
  • Max file size: 1 MB (then rotates to a new file)
  • Max log files: 5 (oldest are deleted on startup)
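The filename pattern is an ISO timestamp with filesystem-unsafe characters replaced; a minimal sketch (helper name illustrative):

```typescript
// Produce names like chaton-2026-03-10T14-30-00-000Z.log from a Date.
function logFileName(d: Date): string {
  return `chaton-${d.toISOString().replace(/[:.]/g, "-")}.log`;
}
```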

How It Works

  • On initialization, captureConsoleLogs() wraps console.log/warn/error/debug to capture all main-process output
  • Logs are buffered in memory (flushed every 10 entries or on shutdown)
  • Pi runtime events are logged with source: "pi"
  • Renderer errors are forwarded via IPC with source: "frontend"

IPC Channels

  • logs:getLogs -- Read recent log entries (default: last 100)
  • logs:clearLogs -- Delete current log file and clear buffer
  • logs:getLogFilePath -- Return current log file path

19. Performance Tracing

tracing:start and tracing:stop IPC handlers use Electron's contentTracing API to capture Chrome-level performance traces.

  • tracing:start begins recording with default trace categories
  • tracing:stop stops recording and returns the path to the trace file
  • The trace file can be loaded in chrome://tracing for analysis
  • Only one tracing session can be active at a time

Useful for diagnosing renderer performance issues, animation jank, or IPC bottlenecks.


20. Local Provider Detection

Chatons can detect locally running AI providers during onboarding.

Ollama Detection (ollama:detect)

  1. Check binary: command -v ollama (Unix) / where ollama (Windows)
  2. Check API: GET http://127.0.0.1:11434/api/tags
  3. Returns { installed, apiRunning, baseUrl: "http://localhost:11434/v1" }
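The API probe in step 2 can be sketched with the global fetch available in Node 18+ (the timeout value and helper name are assumptions, not the real implementation):

```typescript
// Probe the local Ollama API; resolve false on connection refusal or timeout.
async function detectOllamaApi(baseUrl = "http://127.0.0.1:11434"): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: AbortSignal.timeout(1500) });
    return res.ok;
  } catch {
    return false; // not running, unreachable, or timed out
  }
}
```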

LM Studio Detection (lmstudio:detect)

  1. Check app path per platform:
    • macOS: /Applications/LM Studio.app
    • Windows: %LOCALAPPDATA%\Programs\LM Studio
    • Linux: ~/LM-Studio or ~/Applications/LM-Studio
  2. Check API: GET http://127.0.0.1:1234/v1/models
  3. Returns { installed, apiRunning, baseUrl: "http://localhost:1234/v1" }

VS Code Detection (vscode:detect)

Checks for code binary via which code (macOS/Linux) or where code (Windows). Used for the worktree "Open in VS Code" integration.


21. Internationalization (i18n)

Setup

  • Library: i18next with react-i18next
  • Configuration: src/lib/i18n.ts
  • Default language: fr (French)
  • Fallback language: en (English)

Translation Pattern

Source strings are in French. English translations are provided in the en resource block. Both languages share the same keys.

const { t } = useTranslation()
// t('Nouvelle conversation') -> "New conversation" (en) / "Nouvelle conversation" (fr)
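The shared-key pattern can be illustrated with a tiny stand-in resolver (this is not the real i18next setup, just the lookup rule it encodes): French source strings serve as keys, the `en` block maps them to English, and a miss falls back to the key itself.

```typescript
// Illustrative stand-in for the shared-key translation pattern (not i18next itself).
type Lang = 'fr' | 'en'

const resources: Record<Lang, Record<string, string>> = {
  fr: {},  // French needs no entries: the key *is* the French string
  en: { 'Nouvelle conversation': 'New conversation' },
}

function translate(key: string, lng: Lang): string {
  // Fall back to the key, i.e. the French source string
  return resources[lng][key] ?? key
}
```

A useful property of this pattern: adding a new UI string never requires touching the `fr` block, only the `en` one.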

Language Persistence

Language preference is stored in the SQLite app_settings table via:

  • settings:getLanguagePreference (read)
  • settings:updateLanguagePreference (write)

The preference persists across app restarts.


22. Composer Drafts

Draft messages are persisted to the composer_drafts SQLite table.

Key Format

  • Active conversation: keyed by conversation ID
  • Draft project thread: keyed by draft:<projectId>
  • Global draft: keyed by "global"
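The key format above can be captured in a small helper. The function and type names here are hypothetical, introduced only to make the three cases explicit:

```typescript
// Hypothetical helper deriving the composer_drafts key from the current context.
type DraftContext =
  | { kind: 'conversation'; conversationId: string }   // active conversation
  | { kind: 'draft-project-thread'; projectId: string } // not-yet-created project thread
  | { kind: 'global' }                                  // no conversation selected

function draftKeyFor(ctx: DraftContext): string {
  switch (ctx.kind) {
    case 'conversation': return ctx.conversationId
    case 'draft-project-thread': return `draft:${ctx.projectId}`
    case 'global': return 'global'
  }
}
```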

IPC Channels

  • composer:saveDraft -- Save a draft (key, content)
  • composer:getDraft -- Load a draft by key
  • composer:getAllDrafts -- Load all drafts
  • composer:deleteDraft -- Delete a draft by key

Drafts are saved automatically as the user types and restored when switching between conversations.


23. Conversation Completion Chime

Implemented in src/lib/audio/conversation-success-chime.ts.

  • Plays a .wav file at 24% volume when a conversation action completes
  • 1.5-second cooldown between plays (prevents rapid-fire)
  • Controlled by enableConversationChime setting (default: true)
  • Configurable in Settings > Audio
  • Silently fails if browser autoplay policy blocks audio
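The cooldown behavior can be sketched as a small gate with an injected clock (the class name and shape are illustrative, not the actual implementation; the 1.5-second window comes from the list above):

```typescript
// Illustrative cooldown gate for the completion chime.
class ChimeGate {
  private lastPlayedAt = -Infinity

  constructor(
    private readonly cooldownMs = 1500,        // 1.5 s between plays
    private readonly now: () => number = Date.now,  // injected for testability
  ) {}

  /** Returns true if the chime may play now, recording the play time if so. */
  tryPlay(): boolean {
    const t = this.now()
    if (t - this.lastPlayedAt < this.cooldownMs) return false
    this.lastPlayedAt = t
    return true
  }
}
```

Injecting the clock keeps the gate deterministic under test; in production the default `Date.now` applies.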

24. Quick Action Usage Tracking

Quick action cards (shown in empty conversations) are ordered by a decay-scored usage ranking.

Algorithm

  • Each usage is recorded in the quick_actions_usage table
  • Score decays exponentially with a 14-day half-life
  • decayedScore = storedScore * exp(-ln(2) * elapsed / HALF_LIFE_MS)
  • Higher-scored actions appear first
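The formula above transcribes directly to code. The constant matches the stated 14-day half-life; the function name is illustrative:

```typescript
// Exponential decay with a 14-day half-life, per the formula above.
const HALF_LIFE_MS = 14 * 24 * 60 * 60 * 1000

function decayedScore(storedScore: number, elapsedMs: number): number {
  // decayedScore = storedScore * exp(-ln(2) * elapsed / HALF_LIFE_MS)
  return storedScore * Math.exp(-Math.LN2 * elapsedMs / HALF_LIFE_MS)
}
```

After exactly one half-life a score halves; after two, it quarters, so an action unused for a month ranks well below one used last week even if its raw count is higher.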

IPC

  • quickActions:recordUse -- Record a usage event
  • quickActions:listUsage -- Read usage data for all actions

Scope Filtering

Quick actions declare a scope that controls when they appear:

  • always -- Always visible
  • global-thread -- Only in global (non-project) threads
  • project-thread -- Only in project threads
  • global-or-no-thread -- In global threads or when no conversation is selected
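The scope rules can be expressed as a single predicate. This is a hypothetical sketch; the context shape (`hasConversation`, `isProjectThread`) is an assumption made for illustration:

```typescript
// Hypothetical predicate implementing the quick-action scope rules.
type QuickActionScope = 'always' | 'global-thread' | 'project-thread' | 'global-or-no-thread'

interface ThreadContext {
  hasConversation: boolean
  isProjectThread: boolean  // only meaningful when hasConversation is true
}

function isScopeVisible(scope: QuickActionScope, ctx: ThreadContext): boolean {
  switch (scope) {
    case 'always': return true
    case 'global-thread': return ctx.hasConversation && !ctx.isProjectThread
    case 'project-thread': return ctx.hasConversation && ctx.isProjectThread
    case 'global-or-no-thread': return !ctx.hasConversation || !ctx.isProjectThread
  }
}
```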

25. Skills Marketplace

The skills marketplace is a structured catalog accessible from the Skills panel.

Data Sources

  • skills:listCatalog -- Full skill catalog
  • skills:getMarketplace -- Curated marketplace (featured, new, trending, by category)
  • skills:getMarketplaceFiltered -- Filtered marketplace with query, category, language, and sort
  • skills:getRatings -- All ratings (optionally filtered by skill)
  • skills:addRating -- Submit a rating (1-5 stars plus an optional review)
  • skills:getAverageRating -- Average rating for a specific skill

Marketplace Structure

{
  featured: ExternalSkill[]   // Curated selection (max 6)
  new: ExternalSkill[]        // Recently added (max 8)
  trending: ExternalSkill[]   // Popular/recommended (max 8)
  byCategory: Array<{ name: string; count: number; items: ExternalSkill[] }>
}
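A flat catalog could be partitioned into this shape roughly as follows. This is a hedged sketch, not the actual builder: the entry fields are trimmed to the ones the partition needs, and the caps match the comments above (6/8/8).

```typescript
// Hypothetical builder partitioning a flat catalog into the marketplace shape.
interface CatalogSkill {
  title: string
  category: string
  featured: boolean
  popularity?: 'new' | 'trending' | 'popular' | 'recommended'
}

function buildMarketplace(catalog: CatalogSkill[]) {
  // Group every skill by category, preserving catalog order.
  const byCategoryMap = new Map<string, CatalogSkill[]>()
  for (const skill of catalog) {
    const bucket = byCategoryMap.get(skill.category) ?? []
    bucket.push(skill)
    byCategoryMap.set(skill.category, bucket)
  }
  return {
    featured: catalog.filter(s => s.featured).slice(0, 6),           // max 6
    new: catalog.filter(s => s.popularity === 'new').slice(0, 8),    // max 8
    trending: catalog.filter(s => s.popularity === 'trending').slice(0, 8), // max 8
    byCategory: [...byCategoryMap].map(([name, items]) => ({
      name,
      count: items.length,
      items,
    })),
  }
}
```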

Skill Metadata

Each ExternalSkill includes: source, title, description, author, installs, stars, category, tags, language, lastUpdated, featured, popularity (new | trending | popular | recommended), repository, documentation, dependencies, rating.


26. Extension Marketplace

The extension marketplace follows a similar structure to the skills marketplace.

Data Sources

  • listChatonsExtensionCatalog() -- Raw catalog entries from the npm registry plus bundled entries
  • getExtensionMarketplace() -- Structured marketplace (featured, new, trending, by category)
  • getExtensionMarketplaceAsync() -- Async version with a fresh npm catalog fetch
  • checkForExtensionUpdates() -- Compare installed versions against the catalog
  • updateAllChatonsExtensions() -- Batch-update all installed extensions

Marketplace Structure

Same shape as skills marketplace: featured, new, trending, byCategory.

Extension Catalog Entries

Each entry includes: id, name, version, description, author, category, tags, featured, popularity, lastUpdated, npmUrl, iconUrl.

Update Flow

  1. extensions:checkUpdates compares installed versions against the catalog
  2. Returns { id, currentVersion, latestVersion }[]
  3. UI shows update count badge on the Extensions sidebar entry
  4. Individual or batch update via extensions:update / extensions:updateAll
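Step 1 of the flow above can be sketched as a pure comparison. This is an illustrative shape under the assumption of plain numeric semver versions (no pre-release tags); the real catalog entries carry more fields:

```typescript
// Hypothetical sketch of the update comparison in extensions:checkUpdates.
interface InstalledExtension { id: string; version: string }
interface CatalogEntry { id: string; version: string }

function findUpdates(
  installed: InstalledExtension[],
  catalog: CatalogEntry[],
): Array<{ id: string; currentVersion: string; latestVersion: string }> {
  const latest = new Map(catalog.map(e => [e.id, e.version] as const))
  return installed.flatMap(ext => {
    const latestVersion = latest.get(ext.id)
    return latestVersion && isNewer(latestVersion, ext.version)
      ? [{ id: ext.id, currentVersion: ext.version, latestVersion }]
      : []
  })
}

// Minimal numeric semver comparison (ignores pre-release/build metadata).
function isNewer(a: string, b: string): boolean {
  const pa = a.split('.').map(Number)
  const pb = b.split('.').map(Number)
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const da = pa[i] ?? 0
    const db = pb[i] ?? 0
    if (da !== db) return da > db
  }
  return false
}
```

The returned array is exactly the `{ id, currentVersion, latestVersion }[]` shape step 2 describes, which the UI can count for the sidebar badge.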

27. Performance Monitor

A debug-only performance monitor is available at window.__perf:

window.__perf.enable()   // Start recording
window.__perf.disable()  // Stop recording
window.__perf.report()   // Print performance report

Implemented in src/features/workspace/store/perf-monitor.ts. Records component render counts and can be used to diagnose UI performance issues.


28. Skills vs Extensions

  • Managed by -- Skills: Pi runtime (CLI commands); Extensions: Chatons extension registry
  • Storage -- Skills: Pi skills directory; Extensions: ~/.chaton/extensions/
  • Has UI -- Skills: no; Extensions: yes (main views, menu items, quick actions)
  • Has storage API -- Skills: no; Extensions: yes (KV, files, queue)
  • Has events -- Skills: no; Extensions: yes (subscribe/publish)
  • Has LLM tools -- Skills: yes (Pi-native); Extensions: yes (via manifest + apiCall)

If you need a UI, storage, events, or host APIs, build an extension. If you need a simple tool the AI can call, a Pi skill may suffice.


29. Documentation Contract

Any change affecting user workflows, extension APIs, configuration semantics, architecture, or technical limitations must update documentation in the same changeset.

Primary docs to keep aligned:

See also: AGENTS.md in the repository root for maintainer-facing reference.
