5 min read · DirtFleet team

What AI does (and doesn't do) in DirtFleet

We named 14 AI Modes in the framework doc and shipped 6 of them. Here's exactly what each one does, where the LLM is allowed to write to the database, and how we keep the system honest.

Every SaaS in 2026 claims AI. Most ship a chatbot that hallucinates a summary of last quarter's data and call it done. DirtFleet's framework named 14 AI Modes (A–N). We shipped 6 of them and called the other 8 “roadmap, not yet.” This is what's live, where the LLM is allowed near your database, and what guardrails keep it honest.

What's live

  • Mode L · Quick Log Intelligence (partial). Voice-to-Quick-Log endpoint at /api/ai/voice-quick-log. Mechanic records audio, Gemini transcribes + extracts structured fields (labor hours, cost, parts, notes). The form pre-fills; mechanic edits before submit. Guardrail: nothing writes to the DB until the mechanic confirms.
  • Mode G · Guided Diagnostics (DTC narrative). /api/dtc/explain takes a J1939 SPN+FMI + asset make/model and returns a one-line suspected cause + 3–6 imperative checks. Guardrail: read-only; the curated mapping table (lib/dtc-mapping.ts) drives severity, the LLM only fills in the human narrative.
  • Mode H · Asset Health Scanner (anomaly detection). Hours-delta z-score + cost-spike detector run after every log write. Guardrail: raises YELLOW flags only (never RED); dedupes per-asset per-day so a flurry can't spam the queue.
  • Mode I · Virtual Inspector (Tesseract + Gemini OCR). Meter photo → on-device Tesseract; falls back to Gemini when confidence < 0.6. VIN OCR → Tesseract → NHTSA decode. Guardrail: read-only at the LLM layer; extracted numbers go into form fields that the user confirms before submit.
  • Mode B · Batch Insights. Fleet analytics summary takes pre-computed aggregate JSON (utilization, repair spend, top assets) and returns a markdown bullet list with caveats. Guardrail: the LLM never sees raw rows — only pre-aggregated numbers that already passed the tenant scope check.
  • Mode N · Proactive Notifier (auto-PM thresholds). Hour-based PM thresholds raise AUTO_PM flags; this is deterministic, not AI, but it's the same surface (Flag table) the other modes feed. Guardrail: flags are advisory; resolving them is a human decision.
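To make the Mode G split concrete, here's a minimal sketch of the "table decides, model narrates" pattern. The shapes and sample entries are hypothetical — the real table lives in lib/dtc-mapping.ts — but the guardrail is the point: severity is looked up deterministically, and the LLM is only ever handed a prompt asking for narrative.

```typescript
// Hypothetical shapes; the curated mapping, not the model, decides severity.
type Severity = "RED" | "YELLOW" | "INFO";
type DtcEntry = { spn: number; fmi: number; severity: Severity; component: string };

// Illustrative entries, not real mapping data.
const DTC_MAPPING: DtcEntry[] = [
  { spn: 100, fmi: 1, severity: "RED", component: "engine oil pressure" },
  { spn: 110, fmi: 0, severity: "YELLOW", component: "coolant temperature" },
];

function explainDtc(spn: number, fmi: number, make: string, model: string) {
  const entry = DTC_MAPPING.find((e) => e.spn === spn && e.fmi === fmi);
  if (!entry) return null; // unmapped codes never reach the model

  // Severity is decided here, deterministically — never by the LLM.
  const prompt =
    `Asset: ${make} ${model}. Fault: ${entry.component} (SPN ${spn}, FMI ${fmi}). ` +
    `Return one suspected cause and 3-6 imperative checks.`;

  // The model's reply fills only the human-readable narrative.
  return { severity: entry.severity, prompt };
}
```

The read-only property falls out of the structure: the function builds a prompt and returns it; nothing in this path touches a database row.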
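Mode H is the one mode that acts without a human in the loop, which is why it's deterministic. A minimal sketch of the z-score check plus the per-asset per-day dedupe (all names hypothetical; the real detector runs server-side after each log write):

```typescript
// Sketch only — illustrative names, not DirtFleet's actual implementation.
type Flag = { assetId: string; severity: "YELLOW"; reason: string; day: string };

const raised = new Set<string>(); // per-asset per-day dedupe keys

/** z-score of the newest hours delta against the asset's recent deltas. */
function zScore(history: number[], latest: number): number {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance = history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const sd = Math.sqrt(variance) || 1; // guard against flat history (sd = 0)
  return (latest - mean) / sd;
}

function checkHoursDelta(
  assetId: string,
  history: number[],
  latest: number,
  day: string
): Flag | null {
  const key = `${assetId}:${day}`;
  if (raised.has(key)) return null; // a flurry of logs can't spam the queue
  if (Math.abs(zScore(history, latest)) < 3) return null; // assumed threshold
  raised.add(key);
  // YELLOW only: the detector never escalates to RED on its own.
  return { assetId, severity: "YELLOW", reason: "hours-delta z-score anomaly", day };
}
```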
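The Mode I fallback is a one-branch decision, sketched here with the OCR engines injected as plain functions (the real calls are async and these names are illustrative):

```typescript
// Sketch of the confidence-gated fallback; engine names are hypothetical.
type OcrResult = { text: string; confidence: number; engine: "tesseract" | "gemini" };
type OcrEngine = (image: Uint8Array) => OcrResult;

const GEMINI_FALLBACK_THRESHOLD = 0.6; // below this, local OCR isn't trusted

function readMeterPhoto(image: Uint8Array, tesseract: OcrEngine, gemini: OcrEngine): OcrResult {
  const local = tesseract(image); // on-device first: free, private
  if (local.confidence >= GEMINI_FALLBACK_THRESHOLD) return local;
  return gemini(image); // only low-confidence images ever reach the LLM
}
```

Either way, the extracted number lands in a form field the user confirms — the OCR result itself never writes a row.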
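And Mode N really is just arithmetic — a sketch with hypothetical field names shows how small the "AI surface" actually is here:

```typescript
// Deterministic Mode N sketch; field names are illustrative.
type Asset = {
  id: string;
  currentHours: number;
  lastPmHours: number;     // meter reading at last preventive maintenance
  pmIntervalHours: number; // org-configured PM interval
};

function needsAutoPmFlag(a: Asset): boolean {
  // Advisory only: this raises a flag; a human decides whether to act on it.
  return a.currentHours - a.lastPmHours >= a.pmIntervalHours;
}
```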

What's NOT live (yet)

  • Mode A · Fleet Advisor. Conversational fleet assistant. The data structure's ready (read-only API keys, the AuditLog for safety); the conversation layer ships when there's real customer demand for it.
  • Mode C · Compliance Copilot. PII redaction on attachments + retention-policy assistant. The pieces are all there (consent ledger, audit log, photo storage with per-org config); the AI layer is roadmap.
  • Mode D · Dispatch Assist. Routing flagged rows to review queues. We have WorkOrder + Flag; AI-aided routing is overkill for current fleet sizes.
  • Mode E · Repair Estimator. Labor-hours forecasting from description + historical data. Needs months of clean cost data first.
  • Mode F · Downtime Forecaster. Same data requirement; predictive maintenance ML is gated on enough fleet-years of clean meter readings.
  • Mode J · Job Narrative Writer. Auto-draft work descriptions from photos + notes. On the list; mechanic appetite for auto-generated text is currently low.
  • Mode K · Knowledge Retrieval. NL search over fleet data. app/api/search/assets is keyword today; semantic search lands with the conversational mode.
  • Mode M · Telematics Monitor. Anomaly detection on streaming telematics. Today's anomaly detector runs on per-log inserts; streaming-aware version ships once the Samsara / Geotab integrations cross a customer-count threshold.

Three rules the AI layer follows

  1. Read-only by default. An LLM never writes a row directly. It produces output; a human (or a deterministic anomaly check) decides what happens to that output.
  2. Tenant-scoped before the prompt. Any data the LLM sees has already passed the organizationId filter at the repository layer. We never trust the LLM to enforce isolation.
  3. Cost guards everywhere. 25-second timeouts, 1024-token output caps, model = gemini-2.5-flash (not pro). The unit economics matter; runaway AI spend kills the per-asset pricing model.
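Rule 2 is an ordering guarantee, and a small sketch makes it concrete (the Row shape, function names, and the Mode B-style aggregation are all hypothetical):

```typescript
// Illustrative repository-layer scoping; not DirtFleet's actual code.
type Row = { organizationId: string; assetId: string; costCents: number };

function scopedRows(all: Row[], organizationId: string): Row[] {
  // Isolation is enforced here, in our code — never delegated to the LLM.
  return all.filter((r) => r.organizationId === organizationId);
}

function buildInsightsPrompt(rows: Row[]): string {
  // By the time a prompt is built, only pre-scoped (and, for Mode B,
  // pre-aggregated) numbers remain — never raw cross-tenant rows.
  const totalCents = rows.reduce((sum, r) => sum + r.costCents, 0);
  return `Fleet repair spend: $${(totalCents / 100).toFixed(2)} across ${rows.length} logs.`;
}
```

Because the filter runs before the prompt builder ever sees data, a prompt-injection attack can't widen the scope: the rows simply aren't there.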
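Rule 3's numbers can be sketched as a thin wrapper — the constants come from this post, but the callModel signature and AbortController wiring are illustrative, not the real client:

```typescript
// Assumed guard values (from the post); wrapper shape is a sketch.
const MODEL = "gemini-2.5-flash"; // flash, not pro: unit economics
const MAX_OUTPUT_TOKENS = 1024;   // hard output cap per call
const TIMEOUT_MS = 25_000;        // 25-second ceiling per call

const generationConfig = { maxOutputTokens: MAX_OUTPUT_TOKENS };

async function guardedGenerate(
  prompt: string,
  callModel: (prompt: string, signal: AbortSignal) => Promise<string>
): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), TIMEOUT_MS);
  try {
    // A hung upstream call aborts at 25 s instead of burning tokens and time.
    return await callModel(prompt, controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```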

Why we don't train on customer data

DirtFleet never sends customer data to model training. Period. Every AI call is inference-only against the providers' published APIs. When we add a feature that would meaningfully improve from fine-tuning (e.g., a domain-specific repair-cost model), we'll ask each customer for explicit opt-in consent — tracked in the ConsentRecord ledger — before any of their data leaves for training.

→ Integrations directory · → Start free trial