
HuggingFace

Auto-instrument the HuggingFace Inference API.

Risicare automatically instruments the HuggingFace Inference API.

Installation

pip install risicare huggingface_hub

Auto-Instrumentation

import risicare
from huggingface_hub import InferenceClient
 
risicare.init()
 
client = InferenceClient()
 
# Automatically traced
response = client.text_generation(
    "Hello, how are you?",
    model="meta-llama/Llama-3-70B-Instruct"
)

Captured Attributes

Chat Completion

| Attribute | Description |
| --- | --- |
| `gen_ai.system` | `huggingface` |
| `gen_ai.request.model` | Model name/ID |
| `gen_ai.response.model` | Model name returned by the API |
| `gen_ai.response.id` | Response ID |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.has_tools` | Whether tools were provided |
| `gen_ai.usage.prompt_tokens` | Input tokens |
| `gen_ai.usage.completion_tokens` | Output tokens |
| `gen_ai.usage.total_tokens` | Total tokens |
| `gen_ai.completion.tool_calls` | Number of tool calls made |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.latency_ms` | Request latency in milliseconds |

Text Generation

| Attribute | Description |
| --- | --- |
| `gen_ai.system` | `huggingface` |
| `gen_ai.request.model` | Model name/ID |
| `gen_ai.request.stream` | Whether streaming was requested |
| `gen_ai.request.temperature` | Sampling temperature |
| `gen_ai.request.max_tokens` | Max output tokens (from `max_new_tokens`) |
| `gen_ai.completion.content` | Generated text content |
| `gen_ai.completion.finish_reason` | Stop reason |
| `gen_ai.usage.completion_tokens` | Output tokens (from `generated_tokens`) |
| `gen_ai.latency_ms` | Request latency in milliseconds |

Model Attribute

`gen_ai.response.model` is only captured for `chat_completion` calls, not `text_generation`.

Chat Completions

response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    model="meta-llama/Llama-3-70B-Instruct",
    max_tokens=500
)

Streaming

stream = client.text_generation(
    "Write a story",
    model="meta-llama/Llama-3-70B-Instruct",
    stream=True
)
 
for chunk in stream:
    print(chunk, end="")
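`chat_completion` also supports `stream=True`. A sketch, assuming the OpenAI-compatible chunk shape that `huggingface_hub` returns for streaming chat completions, where the text lives in `chunk.choices[0].delta.content`:

```python
def stream_chat(client):
    # client is a huggingface_hub.InferenceClient. With stream=True,
    # chat_completion yields chunks; delta.content may be None on the
    # final chunk, hence the `or ""` fallback.
    stream = client.chat_completion(
        messages=[{"role": "user", "content": "Write a story"}],
        model="meta-llama/Llama-3-70B-Instruct",
        stream=True,
    )
    for chunk in stream:
        yield chunk.choices[0].delta.content or ""
```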

Async Support

import asyncio
from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient()

    # Automatically traced, same as the sync client
    response = await client.text_generation(
        "Hello!",
        model="meta-llama/Llama-3-70B-Instruct"
    )

asyncio.run(main())

| Model | Task |
| --- | --- |
| `meta-llama/Llama-3-70B-Instruct` | Text Generation |
| `mistralai/Mistral-7B-Instruct-v0.3` | Text Generation |
| `sentence-transformers/all-MiniLM-L6-v2` | Embeddings |
| `stabilityai/stable-diffusion-xl-base-1.0` | Image Generation |
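For the embeddings task listed above, a sketch using `huggingface_hub`'s `feature_extraction` method. Whether non-generation tasks are traced with the attributes documented on this page is an assumption; the tables above cover chat and text generation only.

```python
def embed(client, texts):
    # client is a huggingface_hub.InferenceClient.
    # feature_extraction returns one embedding vector per input text.
    return [
        client.feature_extraction(
            t, model="sentence-transformers/all-MiniLM-L6-v2"
        )
        for t in texts
    ]
```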

Next Steps