The Claude Opus 4.6 model ID in the Anthropic API is `claude-opus-4-6`. Anthropic lists this directly in the "what's new" documentation for Claude 4.6, including a table that maps the product name to the API model ID. This ID is what you place in the `model` field when calling the Messages API. Official reference: What's new in Claude 4.6.
In practice, you’ll use the model ID in every API request, and you’ll want to treat it like a deploy-time configuration value so you can roll forward/back without code changes. A simple pattern is:
- `MODEL_ID=claude-opus-4-6` in environment/config
- An application-level "model routing" layer if you have multiple workloads (fast chat vs. deep agent tasks)
- Logging that records `{model_id, prompt_version, max_tokens, stream}` per request for observability
Here’s a short example in JSON form (same as you’d send to the Messages API):
```json
{
  "model": "claude-opus-4-6",
  "max_tokens": 1200,
  "messages": [
    { "role": "user", "content": "Generate a unified diff that fixes the failing test." }
  ]
}
```
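If you're calling the API from code, the same request looks like this with the official `anthropic` Python SDK. This is a minimal sketch of the config-driven pattern from the list above: it assumes `ANTHROPIC_API_KEY` is set in the environment and reads the model ID from a `MODEL_ID` variable, falling back to the ID from the documentation table.

```python
import os

import anthropic

# The SDK picks up ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Deploy-time configuration: the model ID comes from the environment,
# so rolling forward/back is a config change, not a code change.
MODEL_ID = os.environ.get("MODEL_ID", "claude-opus-4-6")

response = client.messages.create(
    model=MODEL_ID,
    max_tokens=1200,
    messages=[
        {"role": "user", "content": "Generate a unified diff that fixes the failing test."}
    ],
)

print(response.content[0].text)
```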
Also note that some platforms wrap Anthropic models with their own IDs (for example, cloud marketplaces). If you’re using Anthropic’s native developer platform, the Anthropic model ID above is the correct one. If you’re using a hosted integration, you may need that platform’s “provider-qualified” model name, but you should always map it back to the Anthropic canonical ID internally to keep your code portable.
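One way to keep that mapping explicit is a small lookup table keyed by the canonical Anthropic ID. The sketch below is illustrative only: the provider-qualified names are placeholders, not real marketplace identifiers.

```python
# Keys are canonical Anthropic model IDs; values map a provider key to the
# name that provider expects. The non-Anthropic entry is a placeholder.
PROVIDER_MODEL_NAMES: dict[str, dict[str, str]] = {
    "claude-opus-4-6": {
        "anthropic": "claude-opus-4-6",                      # native API: canonical ID as-is
        "example_marketplace": "vendor.claude-opus-4-6-v1",  # hypothetical wrapper ID
    },
}

def resolve_model_name(canonical_id: str, provider: str) -> str:
    """Translate the canonical Anthropic ID into the name a given provider expects."""
    return PROVIDER_MODEL_NAMES[canonical_id][provider]
```

Application code stores and logs only the canonical ID; the thin client layer calls `resolve_model_name()` right before the request goes out.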
For RAG-based developer tools, the model ID isn't the hard part; consistency is. You'll typically pin the model ID and version your prompts and retrieval strategy, then measure changes with an eval set. If you're building on a vector database such as Milvus or Zilliz Cloud, log the retrieved chunk IDs alongside the model ID so you can debug regressions ("Did retrieval change, or did the model change?"). Treat `claude-opus-4-6` as one knob in a system of knobs (prompt templates, retrieval filters, token budgets, and validators) that together determine production quality.
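As a concrete illustration of that logging, here is a minimal sketch of a per-request record that combines the model ID, a prompt version label, and the retrieved chunk IDs. The field values (prompt version, chunk IDs) are hypothetical; swap the `print` call for whatever observability pipeline you already use.

```python
import json
import time

def log_generation(model_id: str, prompt_version: str, chunk_ids: list[str],
                   max_tokens: int, stream: bool) -> None:
    """Emit one structured record per request so you can tell whether retrieval
    or the model changed when output quality regresses."""
    record = {
        "ts": time.time(),
        "model_id": model_id,
        "prompt_version": prompt_version,
        "retrieved_chunk_ids": chunk_ids,  # IDs returned by the vector search
        "max_tokens": max_tokens,
        "stream": stream,
    }
    print(json.dumps(record))  # replace with your logging/observability sink

# Hypothetical usage after a retrieval + generation step:
log_generation(
    model_id="claude-opus-4-6",
    prompt_version="rag-prompt-v3",
    chunk_ids=["doc42#chunk3", "doc7#chunk1"],
    max_tokens=1200,
    stream=False,
)
```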