Applies to:
- Plan -
- Deployment -
Summary
When serving prompts from code, you can still link prompt versions to traces and experiments using two approaches. For Braintrust-managed prompts, use load_prompt and prompt.build() to automatically attach metadata.prompt.id to LLM spans, enabling native “Go to origin prompt” links in the trace UI. For hardcoded prompts served directly from your codebase, log custom metadata fields like prompt.name and prompt.version to track versions manually, though this won’t provide native prompt linking.
Configuration steps
Option 1: Managed prompts (native trace linking)
Load a Braintrust-managed prompt by slug. Calling prompt.build() attaches metadata.prompt.id to the LLM span automatically, enabling the native “Go to origin prompt” link in the trace UI.
Pin a specific version for reproducible experiments; use environment to route to the environment-assigned version.
Option 2: Hardcoded/code-served prompts (custom metadata)
If prompts are not loaded from Braintrust, native prompt linking is not available. Log version identity as custom metadata instead.

Viewing and testing raw prompts

For hardcoded prompts, click Run in the trace view to inspect and test the raw prompt ad hoc without saving it to Braintrust.

Comparison
| Approach | Native trace link | Filterable | Ad-hoc testing |
|---|---|---|---|
| load_prompt + prompt.build | ✅ | ✅ | ✅ |
| Hardcoded + custom metadata | ❌ | ✅ | ✅ (via Run) |
Native linking from slug alone (without load_prompt) is not yet supported. Track BT-4319 for updates.