Agent Skill

llm-cli

Pipe any file through any LLM provider — OpenAI, Anthropic, Gemini, Ollama.

Process text and multimedia files with any supported LLM provider through the llm CLI. Works in both non-interactive (pipe) and interactive (chat) modes, with model selection, config persistence, and file input handling.

What it does

A universal CLI wrapper for multiple LLM providers — OpenAI, Anthropic, Google Gemini, and Ollama (local). Process text, pipe files through models, and start interactive conversations, all with a unified interface.
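As a sketch of the unified interface, the snippet below composes a typical one-shot invocation. It assumes the CLI reads the file from stdin and takes a `-m` flag for model selection; the actual flag names may differ, so check `llm --help` before running the composed command.

```shell
# Illustrative only: build (but do not run) the command line a
# one-shot file-processing run would use. The -m flag is assumed.
FILE="report.md"
MODEL="claude"                      # friendly alias, see below
PROMPT="Summarize this file"
CMD="cat $FILE | llm -m $MODEL \"$PROMPT\""
echo "$CMD"
```

The same shape covers code review (`cat main.py | llm -m gpt "Find bugs"`) or any other pipe-through-a-model task.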

Supported providers

  • OpenAI — GPT-5, GPT-4.1, o3, o3-mini
  • Anthropic — Claude Sonnet 4.5, Claude Opus 4.1, Claude Opus 4
  • Google Gemini — Gemini 2.5 Pro, Gemini 2.5 Flash
  • Ollama — Llama 3.2, Mistral Large, DeepSeek Coder, and other local models

Key features

  • Model aliases — use friendly names like “claude”, “gemini”, “gpt” instead of full model IDs
  • Interactive mode — start a conversation with any provider
  • File processing — pipe documents, code, or any text through an LLM
  • Config persistence — save preferred provider and model settings
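The alias and config features above can be sketched as follows. Both the alias table and the JSON config location are assumptions for illustration; the real tool's alias mapping and config path/format may differ.

```shell
# Sketch of alias resolution and config persistence. The alias
# table and JSON config format here are hypothetical.
CONFIG_DIR="${TMPDIR:-/tmp}/llm-cli-demo"
mkdir -p "$CONFIG_DIR"

# Map a friendly alias (claude, gpt, gemini) to a full model ID.
alias_to_model() {
  case "$1" in
    claude) echo "claude-sonnet-4-5" ;;
    gpt)    echo "gpt-4.1" ;;
    gemini) echo "gemini-2.5-pro" ;;
    *)      echo "$1" ;;   # pass unknown names through unchanged
  esac
}

# Persist the preferred provider and resolved model.
printf '{"provider":"anthropic","model":"%s"}\n' "$(alias_to_model claude)" \
  > "$CONFIG_DIR/config.json"

cat "$CONFIG_DIR/config.json"
```

With something like this in place, later invocations can omit the model entirely and fall back to the saved default.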

When to use

When you need to process text or files with a specific LLM provider, compare outputs across providers, or use a local model via Ollama.
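For the compare-across-providers case, a small loop fans the same prompt out to each model. This sketch only echoes the composed command lines (the `-m` flag is assumed, as above); replace `echo` with the real invocation to actually collect outputs.

```shell
# Sketch: send one prompt to several models for side-by-side
# comparison. Echoes the commands instead of running them.
PROMPT="Summarize README.md in two sentences"
for model in gpt claude gemini llama3.2; do
  echo "[$model] cat README.md | llm -m $model \"$PROMPT\""
done
```

Capturing each run to `out-$model.txt` would then let you diff the providers' answers directly.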