Analytics
Monitor workspace performance, job trends, and resource usage from a single dashboard.
The Analytics dashboard gives you a real-time view of how work is flowing through your workspace. Track job throughput, spot bottlenecks, measure agent performance, and understand resource usage — all from the Analytics page in the sidebar.
Overview
Analytics is available on Team plans and above. It has three tabs:
- Activity — real-time status, job volume, token consumption, and usage trends.
- Performance — efficiency metrics, failure rates, response times, and runner health.
- Landscape — an organization-wide view across all your workspaces (available on Scale plans and above).
A date range picker in the top-right corner controls the time period for all metrics across every tab.
Activity tab
The Activity tab shows what is happening in your workspace right now and how it compares to the previous period.
Live status
Four cards at the top reflect the current state of your workspace:
- Active Jobs — how many jobs are currently running, with a breakdown of in-progress and queued counts.
- Waiting for Input — jobs where the agent has asked a question and is waiting for your response, with the average wait time.
- Queue Depth — jobs that are ready but have not been picked up by a runner yet, with average wait time.
- Pending Questions — unanswered agent questions, showing how long the oldest question has been waiting.
Summary metrics
Four cards summarize performance over the selected date range, each showing the change compared to the prior period:
- Completed — total jobs completed.
- Completion Rate — percentage of jobs that finished successfully.
- Avg Time to Done — average elapsed time from when a job becomes ready to when it is marked done.
- Retry Rate — percentage of jobs that required one or more retries.
Charts and breakdowns
- Time to Done — Lifecycle Breakdown — a stacked bar comparing how time is spent: queue wait, agent execution, and user wait. Shows current period alongside the prior period so you can see whether things are getting faster or slower.
- User vs Agentic Trend — a line chart tracking the volume of agentic jobs and user jobs over time.
- Model Failure Rates — failure percentage and count for each AI model your agents use. Failures typically occur when the agent process exits unexpectedly or the AI provider returns an error.
- Created vs Completed — a bar chart comparing how many jobs are being created versus completed each day. A growing gap may indicate a capacity bottleneck.
- Jobs by Domain — a horizontal bar chart showing job volume and completion rate broken down by domain.
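The lifecycle breakdown splits a job's total elapsed time into three segments that sum to the whole. A small sketch, using hypothetical timestamp fields (epoch seconds) to show the arithmetic:

```python
# Hypothetical lifecycle record for one job; field names are illustrative.
job = {
    "ready": 0,         # job became ready
    "picked_up": 120,   # a runner picked it up
    "done": 1_000,      # job was marked done
    "user_wait": 300,   # total time spent waiting on human responses
}

queue_wait = job["picked_up"] - job["ready"]                           # waiting for a runner
agent_execution = (job["done"] - job["picked_up"]) - job["user_wait"]  # agent actually working
user_wait = job["user_wait"]                                           # waiting on a human

# The three segments account for the full ready-to-done span.
assert queue_wait + agent_execution + user_wait == job["done"] - job["ready"]
```

The stacked bar renders these three segments per period, so a growing "queue wait" slice points at runner capacity while a growing "user wait" slice points at response times.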
Activity heatmap
A GitHub-style heatmap showing job activity by day of the week and hour of the day. Color intensity indicates volume. A timezone indicator shows which timezone the chart uses. This helps you understand when your workspace is busiest.
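The underlying aggregation is a count of events per (weekday, hour) cell. A minimal sketch, assuming hypothetical job-creation timestamps; a real dashboard would first convert them to the indicated timezone before bucketing:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical job-creation timestamps (here already in the display timezone).
events = [
    datetime(2024, 6, 3, 9, 15, tzinfo=timezone.utc),  # Monday, 09:00 bucket
    datetime(2024, 6, 3, 9, 45, tzinfo=timezone.utc),  # Monday, 09:00 bucket
    datetime(2024, 6, 4, 14, 5, tzinfo=timezone.utc),  # Tuesday, 14:00 bucket
]

# Cell intensity = number of events falling in each (weekday, hour) cell.
heatmap = Counter((e.weekday(), e.hour) for e in events)
```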
Token usage
Four metric cards summarize AI token consumption for the selected period:
- Total Tokens — total tokens consumed across all jobs, with a sparkline showing the trend.
- Avg / Job — average tokens per job.
- P10 — 10th percentile token usage. Your most efficient jobs use around this many tokens.
- P90 — 90th percentile token usage. Your most token-intensive jobs use around this many. A large gap between P10 and P90 suggests wide variation in job complexity.
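As a rough illustration of the percentile cards, here is a nearest-rank percentile over a hypothetical list of per-job token counts. (The dashboard may use an interpolated percentile method instead; the idea is the same.)

```python
def percentile(values: list[float], p: float) -> float:
    # Nearest-rank percentile; interpolating methods give slightly different values.
    ranked = sorted(values)
    k = round(p / 100 * (len(ranked) - 1))
    return ranked[k]

# Hypothetical per-job token counts for the selected period.
tokens = [1200, 1500, 1800, 2100, 9000, 2500, 1300, 40000, 2000, 1700]

p10 = percentile(tokens, 10)   # most efficient jobs use around this many
p90 = percentile(tokens, 90)   # most token-intensive jobs use around this many
avg = sum(tokens) / len(tokens)
```

Note how a couple of heavy outliers pull the average well above the typical job, which is why the P10/P90 spread is often more informative than the mean alone.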
Below the cards:
- By Model — a bar chart breaking down token usage by AI model.
- Daily Trend — a line chart of token usage over time.
- Utilization — your current usage against your plan limits: concurrency (current versus limit), canvases (current versus limit), and storage used, including account-level storage totals.
Performance tab
The Performance tab measures how well work is being done — efficiency, reliability, cost, and responsiveness.
Response metrics
Five cards at the top:
- Total Questions — how many questions agents asked during the period.
- Answered — how many of those questions received a response.
- Avg Response Time — how long it takes on average to respond to agent questions.
- User Jobs Avg Wait — average time user jobs spend waiting before work begins.
- Agentic Avg Wait — average time agentic jobs spend waiting before a runner picks them up.
Agent Efficiency Score
Shows what percentage of total job time is actual agent execution versus waiting (queue time and time waiting for human responses). Each agent type gets its own trend line so you can compare efficiency across agents. The headline number is a weighted average — agents with more jobs have more influence on the score. A low score (under 20%) is normal and indicates that most job time is spent waiting for human input, not running agents.
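The weighting described above can be sketched in a few lines. The per-agent aggregates here are hypothetical; the point is that each agent's efficiency (execution time over total time) contributes in proportion to its job count:

```python
# Hypothetical per-agent aggregates for the selected period.
agents = {
    "coder":    {"jobs": 40, "execution_s": 8_000, "total_s": 50_000},
    "reviewer": {"jobs": 10, "execution_s": 1_000, "total_s": 20_000},
}

# Per-agent efficiency: fraction of total job time spent actually executing.
per_agent = {name: a["execution_s"] / a["total_s"] for name, a in agents.items()}

# Headline score: job-count-weighted average, so busier agents dominate.
total_jobs = sum(a["jobs"] for a in agents.values())
headline = sum(
    (a["jobs"] / total_jobs) * (a["execution_s"] / a["total_s"])
    for a in agents.values()
)
```

In this example the coder agent runs 16% of the time and the reviewer 5%, giving a weighted headline of roughly 13.8% — low, but in the normal range described above.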
First-Attempt Success Rate
Shows the percentage of jobs that complete on their first attempt, broken down by agent type. A high first-attempt rate means your job descriptions are clear and your agents are producing work that passes review on the first try. The trend sparkline shows whether this rate is improving or declining over time.
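A minimal sketch of the per-agent breakdown, over hypothetical `(agent_type, attempts, completed)` records:

```python
from collections import defaultdict

# Hypothetical job records: (agent_type, attempts, completed).
jobs = [
    ("coder", 1, True), ("coder", 2, True), ("coder", 1, True),
    ("coder", 1, False), ("reviewer", 1, True), ("reviewer", 3, True),
]

completed_by_agent = defaultdict(int)
first_try_by_agent = defaultdict(int)
for agent, attempts, completed in jobs:
    if completed:  # only completed jobs count toward the rate
        completed_by_agent[agent] += 1
        if attempts == 1:
            first_try_by_agent[agent] += 1

# First-attempt success rate per agent type.
rates = {a: first_try_by_agent[a] / completed_by_agent[a] for a in completed_by_agent}
```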
Charts and breakdowns
- Completed vs Failed — a bar chart over time showing successful completions alongside failures, with totals.
- Agent Attempts per Job — a grouped histogram showing how many attempts jobs typically require (1, 2, 3, 4, 5+), with bars for each agent type side by side. The average attempts number is displayed prominently. A high average may indicate that job descriptions need more detail or that review criteria should be clarified.
- Response Time Distribution — a histogram showing how quickly your team responds to agent questions, bucketed into time ranges (0–5 minutes, 5–15 minutes, up to 4+ hours). Median and 90th percentile response times are shown at the top. Use this to identify if a long tail of slow responses is dragging down overall efficiency.
- Tokens per Completed Job — average token consumption per completed job, broken down by agent type with a trend line. The delta indicator is inverted: a decrease (green) means your agents are getting more efficient; an increase (red) means token costs are rising.
- Model Adoption — trend lines for each AI model, with summary cards showing job count, failure count, and average runtime per model.
- Runner Health — a card for each runner in your workspace showing its name, job count, current status (offline, idle, or active), when it was last seen, and failure rate. Use this to quickly identify runners that may need attention.
Landscape tab
The Landscape tab provides an organization-wide view across all your workspaces. This is the executive dashboard — use it to compare workspace activity and spot trends at the org level. The Landscape tab is available on Scale plans and above.
Org-level metrics
Four cards summarize your organization:
- Total Jobs — across all workspaces.
- Completed — total completed jobs across all workspaces.
- Org Agentic % — the percentage of all jobs that are agentic rather than user jobs.
- Active Workspaces — how many of your workspaces have recent activity, shown as a ratio of active to total (e.g., 2 out of 3).
Charts and breakdowns
- Workspace Health — a card for each workspace showing member count, a mini activity sparkline, job count, completion percentage, agentic percentage, and health warnings. A low completion rate is flagged so you can investigate.
- Org Activity — Jobs Per Workspace Per Day — a stacked timeline showing daily job counts per workspace, color-coded so you can compare workload distribution.
- Org Automation Trend — a line chart showing how agentic job adoption is trending per workspace over time.
- Token Usage by Workspace — a bar chart comparing token consumption across workspaces.
Date range
The date range picker in the top-right corner controls the time period for all metrics on all three tabs. When you change the date range, every chart, metric card, and comparison recalculates.
Comparison metrics (like "% change") compare your selected period to the immediately preceding period of the same length. For example, selecting the last 7 days compares this week to the previous week.
Related concepts
- Jobs — the work items that analytics tracks
- Job lifecycle — the stages jobs move through
- Workspaces and teams — how workspaces and roles work