Tracing Anthropic
MLflow Tracing provides automatic tracing for Anthropic LLMs. When auto tracing for Anthropic is enabled by calling the mlflow.anthropic.autolog() function, MLflow captures nested traces upon invocation of the Anthropic Python SDK and logs them to the active MLflow Experiment.
```python
import mlflow

mlflow.anthropic.autolog()
```
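Auto tracing can be turned off again with the same function; a minimal sketch, assuming the standard MLflow autolog disable flag:

```python
import mlflow

# Disable auto-tracing for Anthropic (standard MLflow autolog flag).
mlflow.anthropic.autolog(disable=True)
```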
MLflow Tracing automatically captures the following information about Anthropic calls (the snippet after this list shows how to inspect a captured trace):
- Prompts and completion responses
- Latencies
- Model name
- Additional metadata such as `temperature` and `max_tokens`, if specified
- Function calling if returned in the response
- Any exception if raised
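Once a call has been traced (see the Basic Example below), these attributes can be inspected programmatically. A minimal sketch, assuming MLflow 2.21+, where mlflow.get_last_active_trace() is available:

```python
import mlflow

# Retrieve the trace recorded by the most recent traced call in this process.
trace = mlflow.get_last_active_trace()

print(trace.info.execution_time_ms)  # latency of the call in milliseconds
for span in trace.data.spans:
    # Span attributes include the model name and metadata such as
    # temperature and max_tokens, when specified.
    print(span.name, span.attributes)
```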
Supported APIs
MLflow supports automatic tracing for the following Anthropic APIs:
| Chat Completion | Function Calling | Streaming | Async | Image | Batch |
|-----------------|------------------|-----------|-------|-------|-------|
| ✅ | ✅ | - | ✅ (*1) | - | - |
(*1) Async support was added in MLflow 2.21.0.
To request support for additional APIs, please open a feature request on GitHub.
Basic Example
```python
import os

import anthropic
import mlflow

# Enable auto-tracing for Anthropic
mlflow.anthropic.autolog()

# Optional: Set a tracking URI and an experiment
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("Anthropic")

# Configure your API key.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Use the create method to create a new message.
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"},
    ],
)
```
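Function calling is traced as well, as shown in the Supported APIs table above. A minimal sketch, reusing the client from the example above with a hypothetical get_weather tool definition:

```python
# The get_weather tool below is a hypothetical example, not part of the Anthropic SDK.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What is the weather in Tokyo?"},
    ],
)
# Any tool_use blocks returned in the response are captured in the trace.
```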
Async
MLflow Tracing has supported the asynchronous API of the Anthropic SDK since MLflow 2.21.0. Usage is the same as for the synchronous API.
```python
import anthropic
import mlflow

# Enable trace logging
mlflow.anthropic.autolog()

client = anthropic.AsyncAnthropic()

response = await client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"},
    ],
)
```
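The top-level await above assumes an async context such as a Jupyter notebook. In a plain Python script, wrap the call in asyncio.run(); a minimal sketch:

```python
import asyncio

import anthropic
import mlflow

mlflow.anthropic.autolog()


async def main():
    client = anthropic.AsyncAnthropic()
    return await client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello, Claude"}],
    )


response = asyncio.run(main())
```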