# OpenJet
**An AI coding agent that runs entirely on your machine.**
This is Claude Code for local LLMs. OpenJet handles the model, the runtime, and the setup so you don't have to wrangle complex configurations by hand. You get a coding agent in your terminal that reads your files, edits your code, runs commands, and stays out of the cloud.
OpenJet has three primary surfaces in one repo:

- **CLI + chat TUI** for interactive local agent work
- **Python SDK** for embedding sessions, hardware profiling, and automatic `llama.cpp` configuration
- **Benchmarking tools** for running `llama-bench` and sweep comparisons against your active model profile
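The sweep idea can be sketched in plain Python: run every combination of GPU layers, batch size, and thread count, then rank the results by throughput. This is an illustrative loop, not OpenJet's actual benchmarking code; `run_bench` here is a hypothetical stand-in for invoking `llama-bench` and parsing its output.

```python
import itertools

def run_bench(gpu_layers: int, batch: int, threads: int) -> float:
    """Hypothetical stand-in for a real llama-bench run.
    Returns tokens/sec; the formula below is fake, for illustration only."""
    return 100.0 + 2.0 * gpu_layers + 0.1 * batch - abs(threads - 8)

def sweep(gpu_layer_opts, batch_opts, thread_opts):
    """Benchmark every combination and return results sorted by throughput."""
    results = []
    for gl, b, t in itertools.product(gpu_layer_opts, batch_opts, thread_opts):
        results.append({
            "gpu_layers": gl,
            "batch": b,
            "threads": t,
            "tok_per_s": run_bench(gl, b, t),
        })
    return sorted(results, key=lambda r: r["tok_per_s"], reverse=True)

best = sweep([0, 16, 32], [128, 256], [4, 8])[0]
```

The grid-then-sort shape is the core of any sweep tool: the expensive part is each `run_bench` call, so real implementations typically also cache or checkpoint partial results.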
## Get started
```shell
git clone https://github.com/l-forster/open-jet.git
cd open-jet
./install.sh
open-jet --setup
```

That's it. Setup detects your hardware, picks a model that fits your RAM, downloads it, and gets everything running. Already have a `.gguf`? It finds that too.
Then just:

```shell
open-jet
```

Other entrypoints from the same install:

```shell
open-jet benchmark --sweep
```

```python
from openjet.sdk import OpenJetSession, recommend_hardware_config
```

## What you get
An agent in your terminal that can actually do things:
- **Read and edit your code** — search files, apply edits, write new ones
- **Run shell commands** — with explicit approval before anything executes
- **Resume sessions** — close the terminal, come back later, pick up where you left off
- **Work on constrained hardware** — automatic context condensing, model unload/reload around heavy tasks
- **Device access** — cameras, microphones, GPIO for edge and embedded work
- **Python SDK** — automate the same agent from scripts and external apps
- **Hardware profiling + auto-config** — recommend model/runtime settings for local `llama.cpp`
- **Benchmark sweeps** — compare prompt/gen throughput across GPU layers, batch sizes, and thread counts
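The context-condensing behavior listed above can be illustrated with a toy loop: once the transcript exceeds a token budget, older messages are folded into a single summary entry so the most recent turns stay intact. This is a conceptual sketch, not OpenJet's actual algorithm; the 4-characters-per-token estimate and the `condense` helper are assumptions made for illustration.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption, not a real tokenizer).
    return max(1, len(text) // 4)

def condense(history: list[str], budget: int, keep_tail: int = 2) -> list[str]:
    """If the transcript exceeds `budget` tokens, replace everything but the
    last `keep_tail` messages with a single summary placeholder."""
    if sum(estimate_tokens(m) for m in history) <= budget:
        return history
    head, tail = history[:-keep_tail], history[-keep_tail:]
    # A real agent would summarize `head` with the model; we just mark it.
    summary = f"[condensed {len(head)} earlier messages]"
    return [summary] + tail

history = ["long message " * 50, "another long one " * 50, "recent q", "recent a"]
history = condense(history, budget=100)
```

Real implementations replace the placeholder with a model-generated summary, but the budget check and head/tail split are the essential moving parts.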
## Why this exists
Cloud coding agents need API keys, send your code to someone else's server, and cost money per token. Local chat tools give you a chat window but not an agent — no file access, no shell, no session recovery.
OpenJet closes that gap. It's built for local models on real hardware, where memory is tight, context windows are short, and sessions get interrupted. Everything runs on your machine, nothing leaves it.
## Docs

### CLI + chat TUI
- Usage: CLI
- Usage: Slash commands
- Usage: Device sources
- Usage: Workflow harness
- Usage: Session state and logging
### SDK + hardware profiling

### Benchmarking

### Examples and deployment
## Community
Discord: https://discord.gg/pspKHtExSa
X: https://x.com/FlouisLF
## License
OpenJet core is licensed under Apache-2.0.
That means individual developers and companies can use, modify, and embed the core SDK and CLI freely under the Apache terms. Future paid offerings for hosted, team, or enterprise functionality may be shipped separately under commercial terms.
External contributions are accepted under the contributor terms in CONTRIBUTING.md and CLA.md.