ChatGPT / OpenAI
Bring your OpenAI API key. Choose the model you trust for the work — Magellan is the interface, not the inference.
Use your own Anthropic, OpenAI, or OpenAI-compatible (Ollama / LM Studio) key. Operators select which tool output gets attached as context. Magellan can explain, suggest, and draft — operators run every command.
Selected output from any tool — terminal, HTTP response, DB query result, packet capture, switch config — attaches to a Magellan thread. Visible attachment pills show what was sent. Secrets are redacted before the request leaves your machine. The operator runs every command.
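The pre-send redaction step can be pictured as a small pattern pass over the attached text. A minimal sketch, assuming a regex-based redactor; the pattern list and replacement labels here are hypothetical, and a real redactor would cover many more secret shapes:

```python
import re

# Hypothetical patterns -- illustration only, not ShellYard's actual rule set.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"(?i)(password|token|secret)\s*=\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text: str) -> str:
    # Apply each pattern in turn before the attachment leaves the machine.
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

output = "db password=hunter2 key AKIAABCDEFGHIJKLMNOP"
print(redact(output))  # db password=[REDACTED] key [REDACTED_AWS_KEY]
```

The point of running this locally is that the raw secret never appears in the request body, so the provider only ever sees the placeholder.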
ShellYard does not need to be the model. ShellYard is the secure interface between your infrastructure workflow and the model you choose.
Paste your OpenAI key, pick a GPT model, work with it inside the terminal.
Same flow with Anthropic — paste your key, pick a Claude model, work with it inside the terminal.
Point Magellan at a local Ollama (or LM Studio / OpenAI-compatible) endpoint and keep prompts and responses on your network.
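For a local setup, pointing an OpenAI-compatible client at Ollama is typically just an endpoint swap. A minimal config sketch, assuming Ollama's default port; the exact fields Magellan uses are not specified here, so treat the variable names as illustrative:

```shell
# Ollama serves an OpenAI-compatible API at /v1 on its default port.
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Ollama does not validate the key, but compatible clients require one to be set.
export OPENAI_API_KEY="ollama"
```

With an LM Studio server the shape is the same; only the base URL (LM Studio defaults to port 1234) changes. Prompts and responses then stay on your network end to end.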
Magellan is layered so AI assistance moves from understanding to action only when an operator is ready. Nothing runs autonomously.
Magellan operates inside the active Space. Use Space boundaries to keep client context, terminal output, notes, docs, and credentials separated — so a question about Acme Corp doesn't drag in context from a different tenant.
On Enterprise, each Space gets its own customer-managed KMS key. Per-Space cryptographic erasure on offboarding means data — including any persisted Magellan-generated notes — is unrecoverable when a client engagement ends.
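Cryptographic erasure is the standard idea behind this: encrypt everything in a Space under that Space's key, and destroy only the key at offboarding. A toy sketch of the concept, using a SHA-256 counter-mode keystream purely for illustration (not real crypto, and not ShellYard's implementation):

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from the key -- illustration only, NOT secure crypto.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# One key per Space; all Space data is encrypted under it.
space_keys = {"acme-corp": secrets.token_bytes(32)}
note = b"Magellan-generated note for Acme Corp"
ciphertext = xor_cipher(space_keys["acme-corp"], note)

# While the key exists, the note is recoverable.
assert xor_cipher(space_keys["acme-corp"], ciphertext) == note

# Offboarding: destroy the Space key. The ciphertext remains on disk
# but can no longer be decrypted by anyone.
del space_keys["acme-corp"]
```

With a customer-managed KMS key, "destroying the key" happens in the customer's KMS, so erasure does not depend on trusting the vendor to delete data.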
Magellan is BYO key on every tier. Free is enough to try the full flow with your own provider. Team adds shared Spaces and audit; Enterprise adds per-Space customer-managed KMS.