Mastra • 4w ago
Karl

Tracing prompts with Langfuse

I am using Langfuse for prompt management, and I can consume the prompts just fine. However, I cannot get the traces to link back to the prompts (and their versions) created in Langfuse. Here is an example of how I set it up:
const headerAuditing = createStep({
  id: "header-auditing",
  description: "Diagnose the header",
  inputSchema: productDataSchema,
  outputSchema: scoreSchema,
  execute: async ({ mastra, inputData }) => {
    const agent = mastra.getAgent("headerAuditingAgent")
    const message = `
### Attribute to evaluate

${inputData.header}
`
    const score = await observe(async () => {
      updateActiveObservation(
        { prompt: instructionsHeaderAuditing },
        { asType: "generation" },
      )
      return await agent.generate(message, {
        structuredOutput: {
          schema: scoreSchema,
        },
      })
    })()

    return score.object
  },
})
However, this is still not working: I only see the raw prompt text in Langfuse. Do we have any other tracing integration with Langfuse?
4 Replies
Mastra Triager • 4w ago
📝 Created GitHub issue: https://github.com/mastra-ai/mastra/issues/10172 🔍 If you're experiencing an error, please provide a minimal reproducible example to help us resolve it quickly. 🙏 Thank you @Karl for helping us improve Mastra!
Grayson • 4w ago
Can you try this?
// header-auditing step
const headerAuditing = createStep({
  id: "header-auditing",
  description: "Diagnose the header",
  inputSchema: productDataSchema,
  outputSchema: scoreSchema,
  execute: async ({ mastra, inputData, tracingContext }) => {
    const prompt = await langfuse.prompts.get({
      name: "header-auditing",
      variant: "default",
    });

    tracingContext.currentSpan?.update({
      metadata: {
        promptName: prompt?.name,
        promptVersion: prompt?.version,
        promptId: prompt?.promptId,
      },
    });

    const agent = mastra.getAgent("headerAuditingAgent");
    const score = await agent.generate(
      `### Attribute to evaluate\n\n${inputData.header}`,
      {
        structuredOutput: { schema: scoreSchema },
        tracingContext, // keep the metadata on child spans
      },
    );

    return score.object;
  },
});
// mastra config with Langfuse exporter
import { LangfuseExporter } from "@mastra/langfuse";

export const mastra = new Mastra({
  observability: {
    configs: {
      langfuse: {
        serviceName: "header-audit",
        exporters: [
          new LangfuseExporter({
            publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
            secretKey: process.env.LANGFUSE_SECRET_KEY!,
            baseUrl: process.env.LANGFUSE_BASE_URL,
          }),
        ],
      },
    },
  },
});
- Fetch the prompt via Langfuse Prompt Management, update the active span with promptName/promptVersion/promptId, and forward tracingContext into the agent call.
- Mastra's Langfuse exporter ships that span metadata to Langfuse, so the trace links to the managed prompt version instead of showing raw text.
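As a small illustration of the span-metadata step above (a sketch only: the `PromptRef` shape is an assumption, and as noted later in the thread the prompt object may expose only `name` and `version`, not `promptId`), the metadata object can be built defensively before attaching it to the current span:

```typescript
// Hypothetical helper: build the prompt-link metadata passed to
// `tracingContext.currentSpan?.update({ metadata })`. Only `name` and
// `version` are assumed to exist on the fetched prompt object.
interface PromptRef {
  name: string;
  version: number;
}

function promptLinkMetadata(prompt: PromptRef | undefined) {
  // An empty object keeps the span update harmless if the fetch failed.
  if (!prompt) return {};
  return {
    promptName: prompt.name,
    promptVersion: prompt.version,
  };
}

const meta = promptLinkMetadata({ name: "header-auditing", version: 3 });
console.log(meta); // { promptName: 'header-auditing', promptVersion: 3 }
```

Guarding against an undefined prompt keeps the step from attaching `promptName: undefined` to the span when the management API call fails.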
Karl (OP) • 4w ago
Thanks @Grayson. I think there is a mismatch in the function API: for example there is no "langfuseClient.prompts", and the prompt object has no "promptId", only name and version. Anyway, I attempted:
const headerAuditing = createStep({
  id: "header-auditing",
  description: "Diagnose the header",
  inputSchema: productDataSchema,
  outputSchema: scoreSchema,
  execute: async ({ mastra, inputData, tracingContext }) => {
    tracingContext.currentSpan?.update({
      metadata: {
        promptName: instructionsHeaderAuditing.name,
        promptVersion: instructionsHeaderAuditing.version,
      }
    })
    const agent = mastra.getAgent("headerAuditingAgent")
    const message = `
### Attribute to evaluate

${inputData.header}
`
    const score = await agent.generate(message, {
      structuredOutput: {
        schema: scoreSchema,
      },
      tracingContext,
    })

    return score.object
  },
})
But this is still not working. The prompt object "instructionsHeaderAuditing" is predefined globally in another file:
export const instructionsHeaderAuditing = await langfuseClient.prompt.get(
  "diagnostic/product-header",
)
Zack • 4w ago
I think the ability to link prompts in langfuse hasn't been added yet: https://github.com/mastra-ai/mastra/pull/8762#issuecomment-3402806448
GitHub
Add langfuse prompt tracing by zack-ashen · Pull Request #8762 · ...
Description This adds the ability to link prompts from llm generations to stored langfuse prompts. Specifically following this reference. When you add a Langfuse object to the metadata of a generat...
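Until prompt linking is supported natively, one thing to experiment with (heavily hedged: the attribute names below follow Langfuse's documented OpenTelemetry property mapping for prompt linking, but whether Mastra's exporter forwards raw span attributes unchanged is an assumption) is setting Langfuse's prompt-link attributes directly on the span for the generation:

```typescript
// Sketch: Langfuse's OTel mapping links an observation to a managed prompt
// via these span attributes. `SpanLike` stands in for any OTel-compatible
// span (e.g. the one behind tracingContext.currentSpan); whether a given
// exporter preserves these attributes end-to-end is untested here.
interface SpanLike {
  setAttributes(attrs: Record<string, string | number>): void;
}

function linkPrompt(span: SpanLike, promptName: string, promptVersion: number) {
  span.setAttributes({
    "langfuse.observation.prompt.name": promptName,
    "langfuse.observation.prompt.version": promptVersion,
  });
}
```

If the exporter does pass these through, Langfuse resolves them to the managed prompt version on ingestion; if it rewrites or drops span attributes, this workaround will silently do nothing, which matches the "raw prompt only" symptom described above.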