jetson llm

Jetson LLM for NVIDIA Jetson devices

If you are searching for a Jetson LLM, this page covers running local models on NVIDIA Jetson hardware instead of sending operational context to a hosted API.

OpenJet gives you a terminal-native Jetson LLM workflow for Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin. It keeps inference, logs, and approvals on the device, while exposing a practical interface for local operations on constrained Jetson hardware.
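To make that concrete, here is a minimal sketch of the kind of on-device inference this workflow wraps. It uses the llama-cpp-python bindings rather than OpenJet's own API (which this page does not document); the model path, context size, and prompt are hypothetical:

```python
# Illustrative only: generic on-device inference with llama-cpp-python,
# not OpenJet's API. Model path, context size, and prompt are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="/opt/models/llama-3-8b-q4.gguf",  # quantized GGUF stored on the Jetson
    n_ctx=2048,       # modest context window to fit Jetson RAM
    n_gpu_layers=-1,  # offload all layers to the GPU when built with CUDA support
)

out = llm("Summarize the last boot log in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])  # the completion never leaves the device
```

Everything in that loop, from the GGUF file to the generated text, lives on the Jetson's local storage and memory.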

benefits

Why local wins

Built for Jetson-class hardware

OpenJet is designed around Jetson deployment realities: limited RAM, local storage, operator access through terminal workflows, and device-side execution.
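As one hedged illustration of what "limited RAM" means in practice, a pre-load check like the sketch below can refuse to load a model that will not fit. The 5 GB threshold is an assumed figure for a 4-bit-quantized 7B model, not an OpenJet default:

```python
# A minimal sketch, not OpenJet code: check free RAM on the Jetson's Linux
# before loading a model. The footprint below is a hypothetical figure.
def available_ram_mb() -> int:
    with open("/proc/meminfo") as f:  # standard on Jetson's L4T/Ubuntu
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024  # value is reported in kB
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

MODEL_RAM_MB = 5000  # hypothetical footprint for a 4-bit 7B model
if available_ram_mb() < MODEL_RAM_MB:
    raise SystemExit("Not enough free RAM; choose a smaller quantization.")
```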

Works across the Jetson line

The workflow is aimed at Jetson Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin deployments where on-device inference matters more than remote convenience.

Local model plus local actions

OpenJet keeps the model, logs, and approved actions on the Jetson, which is a better fit for field systems and embedded deployments than cloud-first agents.
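For a sense of what an on-device approval-and-log flow can look like, here is a generic sketch, not OpenJet's implementation; the log path and the example command are hypothetical:

```python
# Hypothetical device-side approval gate: the proposed action and the
# operator's decision are appended to a local JSONL log, and nothing is
# sent off the Jetson. Not OpenJet's actual code.
import json, subprocess, time

LOG_PATH = "/var/log/openjet/actions.jsonl"  # hypothetical local log file

def run_with_approval(cmd: list[str]) -> None:
    answer = input(f"Run {' '.join(cmd)}? [y/N] ").strip().lower()
    entry = {"ts": time.time(), "cmd": cmd, "approved": answer == "y"}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")  # audit trail stays on local storage
    if entry["approved"]:
        subprocess.run(cmd, check=True)

run_with_approval(["systemctl", "restart", "nvargus-daemon"])  # example Jetson service
```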

use cases

Common deployment paths

faq

Jetson LLM FAQ

What makes OpenJet a Jetson LLM?

OpenJet is built for Jetson-class hardware and local inference workflows. It is designed to run on NVIDIA Jetson devices while keeping logs, model execution, and approvals on the device.

Which Jetson devices does OpenJet target?

OpenJet is aimed at Jetson Nano, Xavier NX, Orin Nano, Orin NX, and AGX Orin environments.

Why use a Jetson LLM instead of a hosted model?

A Jetson LLM keeps execution close to the hardware, reduces cloud dependency, and makes more sense when logs, tools, and operators already live on the device.

next step

See how OpenJet runs local models on constrained hardware.