467 tools. One API. AI meets field service data.
An open-source Model Context Protocol server that connects AI assistants like Claude directly to ServiceTitan business data - jobs, revenue, technicians, estimates, and more. Published on npm with zero-install setup.
ServiceTitan had the data, but AI assistants had no real way to reach it.
ServiceTitan has rich business data but no official way for AI tools to access it. The community MCP servers that did exist were limited: basic CRUD, no intelligence layer, and no token optimization for LLMs.
Businesses running on ServiceTitan could not ask their AI assistants real questions about operations because the available integrations stopped at raw API calls and developer-centric abstractions.
Without safety controls, response shaping, and business-aware tooling, even a connected assistant would waste tokens on noisy payloads and struggle to return answers an operator could use.
No official ServiceTitan API for AI assistants
Community alternatives had basic CRUD and no business intelligence
LLM token budgets were wasted on raw API responses
No safety system for read versus write operations
A production-grade MCP server built for real business questions.
The server combines broad ServiceTitan coverage with intelligence tooling, safety controls, and response shaping that works for LLMs.
467 tools across 15 domains
Jobs, customers, invoices, estimates, memberships, technicians, dispatch, and more are exposed through a single MCP connection.
An intelligence layer with built-in caching
Revenue reports, business snapshots, technician leaderboards, and pipeline analysis compose multiple API calls into business-ready answers. A 5-minute result cache delivers warm responses in under a second.
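The 5-minute result cache described above can be sketched as a small TTL map keyed by query. This is an illustrative sketch, not the server's actual implementation; the class and method names are assumptions.

```typescript
// Illustrative TTL result cache: composed intelligence queries are keyed
// (e.g. by tool name + arguments) and reused until the entry expires.
// Names here are assumptions, not the server's real identifiers.
type CacheEntry<T> = { value: T; expiresAt: number };

class ResultCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs = 5 * 60 * 1000) {} // 5-minute default TTL

  async getOrCompute(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // warm hit, sub-second
    const value = await compute(); // cold path: run the composed API calls
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```

A warm hit skips the upstream API entirely, which is what makes repeat intelligence queries return in under a second.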
LLM-optimized response shaping
Every response is trimmed and structured to respect token budgets instead of dumping raw payloads back into the model context.
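The shaping step above can be illustrated with a small helper that keeps only the fields the model needs and caps list length. This is a hedged sketch under assumed field names, not the server's real shaping code.

```typescript
// Illustrative response shaping: project each record down to a field
// whitelist and cap the item count, so a large raw API payload does not
// flood the model context. Function and parameter names are assumptions.
function shapeForLLM<T extends Record<string, unknown>>(
  records: T[],
  fields: (keyof T)[],
  maxItems = 25,
): { items: Partial<T>[]; truncated: boolean } {
  const items = records.slice(0, maxItems).map((record) => {
    const slim: Partial<T> = {};
    for (const f of fields) slim[f] = record[f]; // keep only whitelisted fields
    return slim;
  });
  return { items, truncated: records.length > maxItems };
}
```

The `truncated` flag lets the assistant tell the user that more records exist instead of silently dropping them.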
Safer operations by default
The server is read-only by default, requires confirmation for writes, and keeps a full audit log for traceability.
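The read/write policy above amounts to a small authorization gate in front of every tool call. The sketch below shows the shape of that logic; the names and flags are illustrative assumptions, not the server's actual API.

```typescript
// Illustrative write gate: read tools always pass; write tools require
// writes to be explicitly enabled AND a per-call confirmation. Every
// decision is appended to an audit log. All names are assumptions.
type ToolKind = "read" | "write";

interface GateOptions { allowWrites: boolean; }

const auditLog: string[] = [];

function authorize(
  tool: string,
  kind: ToolKind,
  opts: GateOptions,
  confirmed = false,
): boolean {
  const allowed = kind === "read" || (opts.allowWrites && confirmed);
  auditLog.push(`${new Date().toISOString()} ${tool} ${kind} -> ${allowed ? "allowed" : "denied"}`);
  return allowed;
}
```

Defaulting `allowWrites` to false means a freshly installed server can never mutate ServiceTitan data by accident.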
Three transport modes and npm distribution
Stdio for Claude Desktop, SSE for legacy clients, and Streamable HTTP for production remote deployments. Published on npm with zero-install npx support.
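The three transports above can be selected at startup with a simple flag check. This is a sketch under assumed flag names; the actual CLI flags may differ.

```typescript
// Illustrative transport selection (flag names are assumptions):
// stdio for desktop clients that spawn the server directly, SSE for
// legacy remote clients, Streamable HTTP for production deployments.
type Transport = "stdio" | "sse" | "streamable-http";

function pickTransport(argv: string[]): Transport {
  if (argv.includes("--http")) return "streamable-http";
  if (argv.includes("--sse")) return "sse";
  return "stdio"; // default: launched by Claude Desktop via npx
}
```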
A TypeScript server designed around the awkward parts of the ServiceTitan API.
The implementation had to solve both ServiceTitan's API quirks and the ergonomics required for AI-native tooling.
The server uses a route table architecture that normalizes ServiceTitan's module-prefix API structure. OAuth 2.0 client credentials handle authentication, with constant-time verification, request IDs on every response, configurable CORS, and keepalive for production operation.
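The route-table idea can be sketched as a single map from tool names to module-prefixed paths, so URL construction lives in one place. The module and path values below are illustrative assumptions about ServiceTitan's URL layout, not the server's real table.

```typescript
// Illustrative route table: ServiceTitan groups endpoints under module
// prefixes (e.g. a jobs module, a CRM module), so one lookup table
// normalizes URL building. Entries here are assumptions for the sketch.
interface Route { module: string; path: string; }

const routes: Record<string, Route> = {
  get_jobs: { module: "jpm", path: "jobs" },
  get_customers: { module: "crm", path: "customers" },
};

function buildUrl(base: string, tenantId: string, tool: string): string {
  const r = routes[tool];
  if (!r) throw new Error(`unknown tool: ${tool}`);
  return `${base}/${r.module}/v2/tenant/${tenantId}/${r.path}`;
}
```

Adding a tool then means adding one table entry rather than hand-writing another URL.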
The intelligence layer sits on top of raw access. Instead of mirroring endpoints one-for-one, it composes multiple calls into answers a business owner can use immediately. Revenue pulls from the same source ServiceTitan's own dashboard uses. A result cache delivers warm intelligence queries in under a second.
Eight full audit rounds produced 77 findings across security, data integrity, transport reliability, and documentation. Every finding was remediated, verified, and regression-tested before the npm publish. The build ships as an esbuild bundle at 83KB compressed.
TypeScript with stdio, SSE, and Streamable HTTP transports
OAuth 2.0 with constant-time auth verification
Route table architecture for ServiceTitan's module-prefix API
esbuild bundled: 83KB compressed, 3 entry points
260 tests across 19 files
Published on npm as @rowvyn/servicetitan-mcp
One connection between AI assistants and ServiceTitan operations.
The server turns ServiceTitan from a closed browser workflow into a tool AI assistants can query safely, efficiently, and usefully.
The MCP server gives any AI assistant - Claude, GPT, or a custom agent - direct access to ServiceTitan business data through a single connection. Intelligence tools compose multiple API calls into answers a business owner can act on.
The server is published on npm and open-sourced on GitHub with a Claude Desktop quickstart, full documentation, and comparison benchmarks against community alternatives. Any ServiceTitan customer can be running it in under a minute with npx.
More from this engagement.
Start Your Conversation
If you need AI to work with your operating data instead of around it, let's talk.
