ServiceTitan Reporting Guide

The 5 ServiceTitan Reports Every HVAC Operator Should Run Before Their Monday Morning Meeting

ServiceTitan already stores the data you need to find your margin drift, callback root causes, and CSR booking gaps. Most operators have never built these reports. Here is what to pull, what to look for, and why the default views hide the pattern.

By Spaid — February 2026 ≈ 8 min read

Most ServiceTitan operators run the same three reports on repeat: revenue by tech, job count, and customer satisfaction score. These are lag metrics. They tell you what already happened. The five reports below are lead metrics. They tell you what’s about to happen to your margin, your callback rate, and your CSR booking rate if you don’t act.

Each one exists in ServiceTitan right now. Most operators have never built them — not because they don’t want the visibility, but because the default report views don’t surface the pattern in an actionable format. Getting to the real signal requires either a data export and analysis or an API connection that builds the view automatically — which is what a ServiceTitan consultant with API access does on Day 1.

Here are the five. What to pull, what to look for, and why most operators don’t run them.

Report 1: GM Per Completed Job by Tech and Job Type

What to pull: Completed jobs, filtered by job type (your top 10 by volume), sorted by gross margin descending, grouped by tech.

What you’re looking for: The 8–14 point spread between your top and bottom performers. On identical job types. On identical pricebook items. If your top cooling diagnostic tech closes at 41% GM and your bottom is at 27%, that gap is worth $8K–$16K per tech per year — and you can close it once you can see it.

This is not a market problem. Identical job types, identical pricing structure, identical customer base. The variance is in execution: parts selection, diagnostic depth, labor time, parts markup compliance. The report doesn’t explain which — but it tells you exactly where to look and who to observe.

Why most operators don’t run it

ServiceTitan’s default reports show revenue by tech, not margin. To get GM by tech at the job-type level, you need to pull job cost against revenue at the line-item level — which requires exporting invoice data and job cost data together, then joining them. Most operators either don’t do the export or do it manually once a quarter, long after the pattern has already cost them. An API connection that runs the join automatically changes the cadence from quarterly to weekly.
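
If you want to see the join before anything permanent is wired up, a minimal pandas sketch against two flat exports is enough to surface the spread. The file names and column names below (job_id, tech, job_type, revenue, job_cost) are placeholders, not ServiceTitan's export schema; map them to whatever your exports actually produce.

```python
# Sketch only: GM by tech and job type from two hypothetical flat exports.
import pandas as pd

invoices = pd.read_csv("invoice_export.csv")    # job_id, tech, job_type, revenue
job_costs = pd.read_csv("job_cost_export.csv")  # job_id, job_cost

jobs = invoices.merge(job_costs, on="job_id", how="inner")
jobs["gm_pct"] = (jobs["revenue"] - jobs["job_cost"]) / jobs["revenue"] * 100

# Keep the top 10 job types by volume, then rank techs by average GM within each.
top_types = jobs["job_type"].value_counts().head(10).index
gm_by_tech = (
    jobs[jobs["job_type"].isin(top_types)]
    .groupby(["job_type", "tech"])["gm_pct"]
    .agg(["mean", "count"])
    .sort_values(["job_type", "mean"], ascending=[True, False])
)

# The spread between best and worst tech on the same job type is the signal.
spread = gm_by_tech.groupby("job_type")["mean"].agg(lambda s: s.max() - s.min())
print(gm_by_tech.round(1))
print(spread.round(1).sort_values(ascending=False))
```

Run it once against a quarter of data and the per-job-type spread shows up immediately; automating the same join is what moves the cadence from quarterly to weekly.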

Report 2: Callback Rate by Tech and Job Type

What to pull: Jobs flagged as callbacks (or return visits within 30 days of a completed job at the same address), grouped by original tech and original job type.

What you’re looking for: Any tech running above 12% callback rate on specific job types. Your company average is hiding the pattern. Your worst-callback tech may be running 22% on cooling diagnostics while your best runs 3%. That 19-point spread, at $350–$600 fully loaded per callback, is a six-figure annual line item sitting inside a single tech assignment pattern.

The job-type filter matters as much as the tech filter. A tech with a high overall callback rate may be fine on installs and problematic only on diagnostics. Fixing that requires job-type-specific assignment rules — not a blanket performance conversation.

Why most operators don’t run it

ServiceTitan doesn’t natively flag “callback” as a job type unless your CSRs are disciplined about booking it correctly every time. Most aren’t. This means getting accurate callback data requires either a clean, enforced job type taxonomy across all CSRs — or a pattern analysis that cross-references address, tech, job type, and time window automatically and infers the callback relationship from the data. Without one of those two systems, the callback rate in your reporting is structurally understated.
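
A rough version of that inference is a self-join on address: pair every completed job with any later job at the same address within 30 days, then attribute the return visit to the original tech and job type. The 30-day window matches the definition above; the file name and column names are assumptions.

```python
# Sketch only: infer callbacks from completed-job data instead of CSR-entered job types.
import pandas as pd

jobs = pd.read_csv("completed_jobs.csv", parse_dates=["completed_on"])
# Expected (placeholder) columns: job_id, address, tech, job_type, completed_on
jobs = jobs.sort_values("completed_on")

# Pair every job with any later job at the same address within 30 days.
pairs = jobs.merge(jobs, on="address", suffixes=("_orig", "_return"))
pairs = pairs[
    (pairs["completed_on_return"] > pairs["completed_on_orig"])
    & (pairs["completed_on_return"] - pairs["completed_on_orig"] <= pd.Timedelta(days=30))
]

# Callback rate attributed to the original tech and original job type.
callbacks = pairs.groupby(["tech_orig", "job_type_orig"])["job_id_orig"].nunique()
totals = jobs.groupby(["tech", "job_type"])["job_id"].nunique()
rate = (callbacks / totals.rename_axis(["tech_orig", "job_type_orig"])).fillna(0) * 100
print(rate.sort_values(ascending=False).round(1))
```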

Report 3: CSR Inbound Booking Rate by Rep

What to pull: Inbound calls answered (from ServiceTitan Phones Pro or connected call tracking), mapped to jobs booked in the same time window, grouped by CSR.

What you’re looking for: Booking rate by rep. If your top CSR books 84% of answered calls and your bottom books 61%, the 23-point gap is worth $300–$800 per missed call at your average ticket. During peak season, a single CSR running 20 points below your best rep costs you $15K–$30K per month — not in a bad month, in every peak month, every year.

This is the most actionable report in this list for peak-season operators. Booking rate variance is almost always a training, scripting, or objection-handling problem — all fixable once you know which rep it is and on which call types the gap appears.

Why most operators don’t run it

This is a cross-system report. ServiceTitan tracks jobs booked. Call tracking tracks calls answered. Getting booking rate by CSR requires connecting both systems and mapping the timestamps — matching answered calls to booked jobs within a time window by rep. Without that connection, you only know jobs were booked. You don’t know how many calls it took to get there, or which rep is leaving the most on the table.
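
The mechanics of the match look like this in pandas: take answered calls from the call-tracking export, and for each one look for a booking by the same rep within a short window. The 30-minute window, file names, and column names are all assumptions to adjust.

```python
# Sketch only: match answered calls to booked jobs by rep within a time window.
import pandas as pd

calls = pd.read_csv("answered_calls.csv", parse_dates=["answered_at"]).sort_values("answered_at")
bookings = pd.read_csv("booked_jobs.csv", parse_dates=["booked_at"]).sort_values("booked_at")
# Both files carry a "rep" column (placeholder name) identifying the CSR.

# For each answered call, find the first booking by the same rep within 30 minutes.
matched = pd.merge_asof(
    calls,
    bookings,
    left_on="answered_at",
    right_on="booked_at",
    by="rep",
    direction="forward",
    tolerance=pd.Timedelta(minutes=30),
)

matched["booked"] = matched["booked_at"].notna()
booking_rate = matched.groupby("rep")["booked"].mean().mul(100).round(1)
print(booking_rate.sort_values(ascending=False))
```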

These reports exist in your ServiceTitan today

We build all 5 from your data in the first week.

Then monitor them automatically. No manual exports. No analyst overhead.

Book the 45-minute diagnostic →

Report 4: Unsold Estimate Aging

What to pull: Open estimates (jobs where an estimate was sent but no invoice was generated), sorted by date sent, oldest first. Filter for estimates over 7 days old.

What you’re looking for: Your follow-up gap. If 35–50% of your estimates are over 7 days old with no activity, you have a $150K–$350K per year follow-up problem sitting in your ServiceTitan queue right now. These are customers who asked for a price. They haven’t said no. Someone just hasn’t followed up.

The aging view is the key. An estimate sent Monday that’s still open on Friday is a different problem than one sent three weeks ago. Segmenting by age band — 7–14 days, 14–30 days, 30+ days — tells you the shape of your follow-up system and where the biggest recoverable revenue sits.

Why most operators don’t run it

This requires filtering against job status and invoice status simultaneously — open estimate with no corresponding invoice generated. The default estimate view in ServiceTitan doesn’t surface the age of open estimates in a sorted, actionable format. Operators either don’t pull it at all, or pull it manually and look at it once a month when the pattern has already compounded. The report needs to run weekly, sorted by age, at the start of the review cycle — not as a reactive cleanup exercise.
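
Once "open" is defined as estimate-sent-with-no-invoice, the weekly pull is a short script. The sketch below assumes an estimates export carrying estimate_id, sent_on, and estimate_total, plus an invoices export carrying the originating estimate_id; all of those names are placeholders.

```python
# Sketch only: open-estimate aging sorted oldest first, with age bands.
import pandas as pd

estimates = pd.read_csv("estimates.csv", parse_dates=["sent_on"])
invoices = pd.read_csv("invoices.csv")  # carries estimate_id for converted estimates

# Open = estimate sent, no corresponding invoice generated.
open_est = estimates[~estimates["estimate_id"].isin(invoices["estimate_id"])].copy()
open_est["age_days"] = (pd.Timestamp.today().normalize() - open_est["sent_on"]).dt.days

# Age bands: 7-14, 14-30, 30+ days (under 7 days is excluded).
bands = pd.cut(
    open_est["age_days"],
    bins=[7, 14, 30, float("inf")],
    labels=["7-14 days", "14-30 days", "30+ days"],
)
aging = open_est.assign(band=bands).dropna(subset=["band"])
summary = aging.groupby("band", observed=True)["estimate_total"].agg(["count", "sum"])

print(aging.sort_values("age_days", ascending=False).head(20))
print(summary)
```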

Report 5: Pricebook Compliance by Tech

What to pull: Line-item invoice amounts vs. standard pricebook prices for the same line items, grouped by tech.

What you’re looking for: Techs who consistently price below book — rounding down, skipping diagnostic fees, giving the customer a break. Quote variance of 15–30% on identical job scopes across techs in the same branch is not a market problem. It’s a compliance problem. At $500–$800 average ticket, a tech running 20% below book on 250 jobs per year is leaving $25K–$40K per tech per year on the table. Multiply that across a team.

The compliance report also surfaces pricebook structure problems. Sometimes a tech is pricing below book because a pricebook item is misconfigured or a category is missing. The report identifies both — tech behavior problems and pricebook integrity problems — which are different root causes that require different fixes.

Why most operators don’t run it

Comparing invoice line items against pricebook items requires joining two tables at the line-item level — the invoice line and the corresponding pricebook entry for that item. ServiceTitan doesn’t surface this as a standard report. It requires a data export and a join analysis, or an API connection that runs the comparison automatically and flags deviations above a threshold. Without that connection, pricebook compliance is essentially invisible unless a manager manually audits individual invoices.
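
A prototype of that comparison needs only a shared item code across both exports. The file names, column names, and the 10% threshold below are placeholders to adapt.

```python
# Sketch only: invoice line prices vs. pricebook prices, flagged by tech.
import pandas as pd

lines = pd.read_csv("invoice_lines.csv")  # tech, item_code, unit_price
pricebook = pd.read_csv("pricebook.csv")  # item_code, book_price

merged = lines.merge(pricebook, on="item_code", how="left")
merged["deviation_pct"] = (merged["unit_price"] - merged["book_price"]) / merged["book_price"] * 100

# Flag lines priced more than 10% below book; also surface lines with no
# pricebook match, which usually points at a pricebook integrity problem.
below_book = merged[merged["deviation_pct"] < -10]
unmatched = merged[merged["book_price"].isna()]

by_tech = (
    below_book.groupby("tech")
    .agg(flagged_lines=("item_code", "count"), avg_deviation_pct=("deviation_pct", "mean"))
    .sort_values("avg_deviation_pct")
)
print(by_tech.round(1))
print(f"{len(unmatched)} invoice lines had no pricebook match")
```

Splitting the output into "below book" and "no match" is what separates tech behavior problems from pricebook integrity problems in one pass.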

The Monday Morning Checklist

Run these five reports before your Monday ops meeting. Once they’re built, reviewing them takes 20 minutes or less. What you’re looking for in each:

1. GM per completed job by tech: any tech below your GM floor for a given job type
2. Callback rate by tech and job type: any tech above your callback ceiling on a specific job type
3. CSR inbound booking rate by rep: any CSR below your booking rate floor
4. Unsold estimate aging: any estimate over 14 days with no activity
5. Pricebook compliance by tech: any tech with more than 10% pricing deviation from book

If any flag is red, you have a root cause to find before the week starts — not after the month ends. The difference between catching a pattern on Monday and catching it at the month-end close is three to four weeks of compounding cost. At 50 techs, that math is not small.

The flags are not the findings. They are the questions. A tech flagged on GM may be getting mismatched job types, may have a parts compliance issue, or may be giving customers unapproved discounts. The Monday report tells you who to watch this week. The root cause comes from watching the work.

What to Do When You Find a Flag

The reports show the pattern. They don’t explain the root cause. A tech running 22% callbacks on cooling diagnostics could be making diagnostic mistakes, could be getting mismatched job types, could be missing a specific parts check on a particular system type. Understanding which one requires observing the work — not just measuring the output.

That’s the difference between a dashboard and a system. The report gives you the question. Embedding with the operation gives you the answer. A tech flagged on pricebook compliance may need retraining. Or the pricebook may have a configuration gap. Or dispatch may be booking them on job types that don’t map correctly to their pricing category. Three different problems, three different fixes, one flag on one report.

The cadence matters as much as the reports themselves. A monthly review is too slow to catch seasonal patterns before they compound. Weekly review of these five metrics — with a clear flag threshold and a clear owner for each flag — is what turns reporting into operations rather than history.

Built From Your ServiceTitan Data

We build these 5 reports from your data in the first week of the engagement — then monitor them automatically every week.

No manual exports. No analyst overhead. You get a clean Monday morning view of GM drift, callback patterns, CSR booking gaps, estimate aging, and pricebook compliance — every week, from your live data.

The first 45 minutes costs you nothing.

Start Here

45 minutes. Your data.
No commitment.

We’ll start with a recent export or sample data from your ServiceTitan, show you the biggest leaks, and scope the engagement. Full access happens only if you proceed to the audit.

Accepting 2–3 founding operators · $20M–$100M revenue · 40–120 techs · On a modern FSM