Trace summarization

Summarization is in beta

We'd love to hear your feedback as we develop this feature.

Trace summarization uses AI to generate human-readable summaries of your LLM traces and events. This helps you quickly understand complex multi-step AI interactions without reading through raw inputs and outputs.

How it works

When viewing a trace or generation event, click the Summary tab to generate an AI-powered summary. The summary includes:

  • Title: A brief description of what the trace accomplished
  • Flow diagram: An ASCII visualization of the execution flow
  • Summary points: Key highlights and actions from the trace
  • Interesting notes: Notable observations like errors or unusual patterns

Summarization modes

Choose between two summarization modes based on your needs:

| Mode | Description | Best for |
| --- | --- | --- |
| Minimal | Quick 3-5 bullet points with key highlights | Fast overview of what happened |
| Detailed | Comprehensive 5-10 points with full context | Deep understanding of complex traces |

Requirements

Summarization requires AI data processing to be enabled for your organization. When you first use the feature, you'll be prompted to approve AI data processing. This consent applies organization-wide.

To manage this setting, go to Settings → Organization → General → PostHog AI data analysis.

Rate limits

To ensure fair usage, summarization has the following rate limits:

| Limit | Value |
| --- | --- |
| Burst | 50 requests/minute |
| Sustained | 200 requests/hour |
| Daily cap | 500 requests/day |
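If you're scripting against the feature, the limits above behave like sliding windows. This is not PostHog's implementation, just a minimal client-side sketch of how you might throttle your own requests to stay under the documented values:

```python
from collections import deque


class SlidingWindowLimiter:
    """Client-side throttle enforcing several sliding rate windows at once.

    The default-style limits below mirror the documented values
    (50/minute, 200/hour, 500/day); the class itself is a generic
    illustration, not PostHog code.
    """

    def __init__(self, limits, clock):
        # limits: list of (max_requests, window_seconds) pairs
        self.limits = limits
        self.clock = clock  # injected so tests can use a fake clock
        self.history = deque()  # timestamps of allowed requests

    def allow(self):
        now = self.clock()
        longest = max(window for _, window in self.limits)
        # Drop timestamps older than the longest window; they can no
        # longer affect any limit.
        while self.history and now - self.history[0] >= longest:
            self.history.popleft()
        # A request is allowed only if every window has headroom.
        for max_requests, window in self.limits:
            recent = sum(1 for t in self.history if now - t < window)
            if recent >= max_requests:
                return False
        self.history.append(now)
        return True


limiter = SlidingWindowLimiter(
    [(50, 60), (200, 3600), (500, 86400)],
    clock=__import__("time").monotonic,
)
```

Once `allow()` returns `False`, back off and retry later rather than hammering the endpoint.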

Summaries are cached, so regenerating the same trace won't count against your limits unless you explicitly request a refresh.
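The cache behavior amounts to memoization keyed by the trace and mode, with an explicit escape hatch. A minimal sketch (the key shape and `refresh` flag are assumptions for illustration, not PostHog's actual API):

```python
# Hypothetical sketch: cache summaries by (trace_id, mode) and reuse
# them unless the caller explicitly asks for a refresh.
_cache = {}


def get_summary(trace_id, mode, generate, refresh=False):
    """Return a cached summary, regenerating only on a miss or refresh.

    `generate` stands in for the (rate-limited) summarization call.
    """
    key = (trace_id, mode)
    if refresh or key not in _cache:
        _cache[key] = generate(trace_id, mode)
    return _cache[key]
```

Only the calls that reach `generate` would count against the rate limits; cache hits are free.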

Providing feedback

After generating a summary, you can rate it using the thumbs up/down buttons. This feedback helps us improve the summarization quality.
