Built for Jetson-class hardware
OpenJet is designed around Jetson deployment realities: limited RAM, local storage, operator access through terminal workflows, and device-side execution.
Jetson LLM
If you are searching for a Jetson LLM, this page covers running local models on NVIDIA Jetson hardware rather than sending operational context to a hosted API.
OpenJet gives you a terminal-native Jetson LLM workflow for Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin. It keeps inference, logs, and approvals on the device, while exposing a practical interface for local operations on constrained Jetson hardware.
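Limited RAM is the constraint that shapes everything on this class of hardware. As a minimal sketch (a hypothetical helper, not part of OpenJet's actual code), a terminal-native workflow might pre-check available device memory before loading a model, using the kernel's `/proc/meminfo`:

```python
def mem_available_mib(meminfo_text: str) -> int:
    """Parse MemAvailable (in kB) out of /proc/meminfo text; return MiB."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1]) // 1024  # kB -> MiB
    raise ValueError("MemAvailable not found in meminfo text")

def can_load_model(meminfo_text: str, model_mib: int, headroom_mib: int = 512) -> bool:
    """Allow loading only if the model plus some headroom fits in free RAM."""
    return mem_available_mib(meminfo_text) >= model_mib + headroom_mib

# On a Jetson you would feed it the live file:
#   can_load_model(open("/proc/meminfo").read(), model_mib=4096)
```

The 512 MiB headroom is an illustrative default; a real deployment would size it to the device (a Nano has far less slack than an AGX Orin).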
Benefits
The workflow is aimed at Jetson Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin deployments where on-device inference matters more than remote convenience.
OpenJet keeps the model, logs, and approved actions on the Jetson, which is a better fit for field systems and embedded deployments than cloud-first agents.
Use cases
FAQ
Is OpenJet built to run on Jetson hardware?
OpenJet is built for Jetson-class hardware and local inference workflows. It is designed to run on NVIDIA Jetson devices while keeping logs, model execution, and approvals on the device.

Which Jetson devices does OpenJet target?
OpenJet is aimed at Jetson Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin environments.

Why run an LLM on the Jetson instead of in the cloud?
A Jetson LLM keeps execution close to the hardware, reduces cloud dependency, and makes more sense when logs, tools, and operators already live on the device.
Next step