Forward-Deployed Diagnostics

Three Questions That Determine What Happens Next

The workflow went live. Is it performing, or is the team compensating for it? Does the team running the process agree on how it actually works? Can anyone say how many exceptions hit the queue last month and what they cost?

These are the questions that separate AI workflows that run from AI workflows that stall. Each one points to a different diagnostic. Performance belongs to the Operations Health Check. Process accuracy belongs to the Process X-Ray. Exception volume and cost belong to the Exception Baseline. Each produces evidence and a recommended next step in weeks, not months.

Operations Health Check

The workflow is live. It is not delivering.

Throughput is below target. Exception volume is higher than planned. The team is working harder, not the system. This happens in IT service management, marketing operations, sales operations, and every other function where AI workflows go from pilot to production. The Health Check has variants for each.

Most organizations assume the model is the problem. The Health Check usually finds the root cause somewhere else entirely: undefined escalation paths, upstream data quality the pilot never encountered, missing decision authority, or an operating model that was never designed for production volume. The Health Check evaluates the full operating system around the workflow, not just the technology inside it, using domain-specific frameworks tailored to the client’s function.

  • How it works: Framework-driven evaluation across technology, workflow design, and business readiness, informed by domain-specific process inventories
  • What it produces: Operations Health Report: root cause analysis, scored assessment, exception audit, and prioritized remediation roadmap
  • Scope: Focused (single workflow): 2-4 weeks · Enterprise (multiple workflows): 3-5 weeks

Process X-Ray

Ask three people how the same process works. Listen for how much the answers diverge.

Workarounds, informal approvals, local adaptations, and undocumented exception paths accumulate in every operation. The people who do the work carry a version of the process that looks nothing like the documentation, if documentation exists at all. Organizations that apply AI to a process they have not accurately mapped end up automating the documented version and discovering a different one is running in production.

Process X-Ray deploys observation-based discovery tools that capture how people perform the work, generating process documentation in days that traditional discovery takes weeks to assemble. All data is automatically anonymized and the tooling is enterprise-grade. Stakeholder conversations then add the judgment calls, escalation logic, and institutional knowledge that observation alone cannot capture.

  • How it works: Observation-based discovery tools capture the process from actual user behavior, then stakeholder conversations add the human decision layer
  • What it produces: Agreement on how the process works, documented from observed behavior
  • Scope: Focused (single process): 2-4 weeks · Enterprise (cross-functional): 3-5 weeks

Exception Baseline

How many exceptions hit the queue last month? What did they cost?

Most teams cannot answer either question. They can feel the drag but they cannot quantify it. The Exception Baseline produces the numbers: volume by type, resolution times, cost-to-serve measured in human effort consumed, and the hotspots where the most expensive friction concentrates. Every exception is classified using a five-type taxonomy (Accuracy, Judgment, Data, Process, and Trust) so the response can be targeted. The measurement becomes the “before” that every future improvement is compared against.

  • How it works: If exception data already exists in tickets or queues, Prosable analyzes it; if not, a discovery protocol builds the baseline from observation, classifying every exception using the five-type taxonomy
  • What it produces: What exceptions cost, where they concentrate, and where to act first
  • Scope: Data exists: 1-2 weeks · Observation required: 2-3 weeks
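The arithmetic behind the baseline is simple once exceptions carry a type label and a resolution time. A minimal sketch of that computation, in Python, assuming a hypothetical `ExceptionRecord` shape and an assumed loaded-labor rate of $60/hour (both illustrative, not part of the actual offering):

```python
from collections import Counter
from dataclasses import dataclass

# The five-type taxonomy: every exception is classified into exactly one type.
TAXONOMY = ("Accuracy", "Judgment", "Data", "Process", "Trust")

@dataclass
class ExceptionRecord:
    kind: str                 # one of TAXONOMY
    minutes_to_resolve: float # human effort consumed

def baseline(exceptions, hourly_cost=60.0):
    """Compute volume by type, cost-to-serve, and the most expensive hotspot."""
    for e in exceptions:
        if e.kind not in TAXONOMY:
            raise ValueError(f"unknown exception type: {e.kind}")
    volume = Counter(e.kind for e in exceptions)
    cost = {k: 0.0 for k in TAXONOMY}
    for e in exceptions:
        # Cost-to-serve: resolution time converted to loaded labor dollars.
        cost[e.kind] += e.minutes_to_resolve / 60.0 * hourly_cost
    hotspot = max(cost, key=cost.get)  # where the most expensive friction concentrates
    return volume, cost, hotspot
```

For example, a queue of two Data exceptions (30 and 90 minutes) and one Trust exception (15 minutes) yields a Data volume of 2, a Data cost of $120 at the assumed rate, and Data as the hotspot; those three figures are the “before” that later improvements are measured against.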

A diagnostic fits when the question is specific and the evidence to answer it does not exist yet.

Each diagnostic runs in weeks and produces a scoped answer with a recommended next step. If the right diagnostic is already obvious, name it. If not, start with a conversation.