ShellYard
Magellan · Model-agnostic AI in the operator's terminal

Bring your AI
into the work.

Use your own Anthropic or OpenAI key, or point at any OpenAI-compatible endpoint (Ollama / LM Studio). Operators select which tool output gets attached as context. Magellan can explain, suggest, and draft — operators run every command.

Magellan

The work operators were already doing — with a model in the loop.

Selected output from any tool — terminal, HTTP response, DB query result, packet capture, switch config — attaches to a Magellan thread. Visible attachment pills show what was sent. Secrets are redacted before the request leaves your machine. The operator runs every command.
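The redaction step can be pictured as a scrub pass over the selected text before it is attached. A minimal sketch of the idea; the patterns and function names below are illustrative, not ShellYard's actual redaction rules:

```python
import re

# Illustrative patterns only -- not ShellYard's actual redaction rules.
SECRET_PATTERNS = [
    # key=value style credentials
    (re.compile(r"(?i)(password|passwd|secret|token)\s*[=:]\s*\S+"), r"\1=[REDACTED]"),
    # OpenAI-style API keys
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    # AWS access key IDs
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
]

def redact(text: str) -> str:
    """Scrub likely secrets from selected output before it is attached as context."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

The point is the ordering: redaction runs locally, on the selection, before any request is built.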

Explain selected terminal output
Draft commands and scripts
Identify likely causes of errors
Summarize logs and configs
Generate remediation steps
Convert terminal sessions into notes / runbooks
Help build safer repeatable workflows

Use the model you trust

ShellYard does not need to be the model. ShellYard is the secure interface between your infrastructure workflow and the model you choose.

ChatGPT / OpenAI

Bring your OpenAI API key. Choose the model you trust for the work — Magellan is the interface, not the inference.

Claude / Anthropic

Same flow with Anthropic — paste your key, pick a Claude model, work with it inside the terminal.

Ollama / local models

Point Magellan at a local Ollama (or LM Studio / OpenAI-compatible) endpoint and keep prompts and responses on your network.
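Because Ollama and LM Studio speak the OpenAI-compatible API, pointing at a local endpoint uses the same request shape as any hosted provider. A hedged sketch: the base URL is Ollama's default OpenAI-compatible path, while the model name and prompts are placeholders:

```python
import json

# Ollama's default OpenAI-compatible endpoint; model name is a placeholder.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(selected_output: str, model: str = "llama3.1") -> dict:
    """Shape a chat completion request the way any OpenAI-compatible client would."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You assist an infrastructure operator. Explain; never execute."},
            {"role": "user", "content": f"Explain this terminal output:\n{selected_output}"},
        ],
    }

body = json.dumps(build_chat_request("rsync: connection unexpectedly closed"))
# POST body to f"{OLLAMA_BASE_URL}/chat/completions" -- prompts and responses
# never leave your network.
```

Swapping providers means swapping the base URL and key; the payload stays the same.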

Human-approved AI for infrastructure work

Magellan is layered so AI assistance moves from understanding to action only when an operator is ready. Nothing runs autonomously.

  1. Explain
     Magellan reads selected terminal output, configs, or tool results and tells you what they mean.
  2. Suggest
     Magellan proposes the next diagnostic step or remediation, framed as a recommendation rather than an action.
  3. Draft
     Magellan writes the command, script, or runbook entry. Nothing runs.
  4. Review
     Operator reads the draft, edits it if needed, and decides whether to send it.
  5. Execute
     Only when you explicitly run it. Magellan does not autonomously execute.

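The gate between Draft and Execute can be sketched as a hard check, with the operator's decision as the only path to execution. All names here are illustrative, not ShellYard's API:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A command Magellan drafted. Nothing about it is executable yet."""
    command: str
    approved: bool = False

def review(draft: Draft, operator_approves: bool) -> Draft:
    """The Review step: the operator is the only path from draft to executable."""
    draft.approved = operator_approves
    return draft

def execute(draft: Draft) -> str:
    """The Execute step refuses anything an operator has not approved."""
    if not draft.approved:
        raise PermissionError("operator approval required; nothing runs autonomously")
    return f"running: {draft.command}"  # stand-in for actual execution
```

The design choice is that approval is a property of the draft checked at execution time, not a UI convention that automation could skip past.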
MSPs / Multi-tenant ops

AI separated by client Space

Magellan operates inside the active Space. Use Space boundaries to keep client context, terminal output, notes, docs, and credentials separated — so a question about Acme Corp doesn't drag in context from a different tenant.

On Enterprise, each Space gets its own customer-managed KMS key. Per-Space cryptographic erasure on offboarding means data — including any persisted Magellan-generated notes — is unrecoverable when a client engagement ends.

Magellan FAQ

Which AI providers does Magellan support?
Magellan is BYO-key. Bring your own API key for Anthropic, OpenAI, or any OpenAI-compatible endpoint such as Ollama, LM Studio, or OpenRouter.
Does ShellYard see my prompts or responses?
No. Prompts and responses pass directly between your client and your chosen AI provider. They do not flow through ShellYard infrastructure.
Can Magellan run commands automatically?
No. Magellan can explain, suggest, and draft commands. Every command is reviewed by an operator before execution. Nothing runs autonomously.
What context does Magellan have?
Only what you select. Attach selected terminal output, command history, log lines, or tool results to the chat. Magellan does not automatically read your full session.
Does Magellan work across multiple clients?
Magellan operates inside the active Space. Use Space boundaries to keep client context, terminal output, notes, docs, and credentials separated.
Can I restrict Magellan to local models only?
Yes — on Enterprise, AI policy controls can pin Magellan to Ollama or a specific provider list for sensitive environments where prompts must stay on the local network.
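A local-only policy amounts to an allowlist check on the provider and its endpoint. A toy sketch of the idea; the policy shape here is an assumption, not ShellYard's actual controls:

```python
from urllib.parse import urlparse

# Hosts considered "on this machine" for a local-only policy (illustrative).
LOCAL_HOSTS = {"localhost", "127.0.0.1"}

def endpoint_permitted(base_url: str, allowed_providers: set[str], provider: str) -> bool:
    """Policy gate: the provider must be allowlisted and, for a local-only
    policy, the endpoint must point at the local machine."""
    host = urlparse(base_url).hostname
    return provider in allowed_providers and host in LOCAL_HOSTS
```

With an allowlist of {"ollama"}, a request to a hosted provider fails the check before any prompt is sent.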

AI in the workflow, not a separate browser tab.

Magellan is BYO-key on every tier. Free is enough to try the full flow with your own provider. Team adds shared Spaces and audit; Enterprise adds per-Space customer-managed KMS.