In 2025, a 22-person financial services firm came to us with a problem that sounded procedural but was costing them materially. Their compliance approval workflow — a chain of document checks, supervisory sign-offs, and regulatory cross-references — ran across email threads and shared spreadsheets. The error rate on submissions was approximately 13%: roughly one in every eight submissions had a defect that required follow-up, correction, or resubmission. In a regulated environment, each error carries a cost beyond the correction time: it erodes client confidence, creates regulatory exposure, and in some cases triggers formal review processes. Thirteen percent is not a rounding error. It is a structural problem.

This is how we identified what was actually wrong, what we built, and what changed.

The Situation

Twenty-two staff. Compliance work as a core operational function. Every submission required five distinct sign-offs — each from a different person, in sequence. Version control was manual: documents circulated by email, comments appended inline, versions distinguished only by initials and dates in the filename. When a step was missed, it was discovered either at the next stage of the chain or — in the worst cases — at submission to the regulator.

The firm had tried to address this with better training and more explicit checklists. Neither made a significant difference. The error rate improved slightly for a few weeks after each intervention, then drifted back. The compliance director told us: "We kept thinking we needed people to be more careful. We were focused on the wrong thing."

The mistake was assuming the errors were caused by carelessness. Carelessness is a symptom. Structural problems look like human error until you map the process — then you see that the errors were almost inevitable given how the process was designed.

The Problem Behind the Problem

When we ran the process mapping session — two hours, two members of the operations team plus the compliance director — we were looking for the structural failures, not the human ones. They surfaced quickly.

There was no single point of truth for submission status. No one person in the approval chain knew the complete state of any given submission without asking someone else. The person at step three did not know whether step two's review had been completed — they assumed it had because the document was in their inbox. Sometimes it had not been. The routing was entirely manual, which meant submissions sat in email inboxes between steps, sometimes for days, with no escalation mechanism when they were not actioned.

The coordination work — tracking where each submission was in the chain, chasing sign-offs, reconciling document versions, identifying what still needed action — was consuming roughly the equivalent of two full-time staff per day across the team. Not dedicated roles. Distributed attention, constant context-switching, and administrative overhead spread across five people who also had substantive compliance work to do.

The mapping session also surfaced three specific structural failures the team had not fully articulated:

1. Two of the five sign-off steps had overlapping scope. Reviewers at steps two and four were both checking regulatory classification, with slightly different emphasis. Neither knew what the other was checking. This created both redundancy and gaps — each assumed the other was handling the edge cases.
2. The most common error class originated at step two but was not caught until step four. A required cross-reference check was technically assigned to step two, but the document format made it easy to miss. By the time it was caught at step four, correction required looping back through steps two and three — adding two to four days and resetting several sign-offs.
3. There was no defined escalation path for delayed sign-offs. If a reviewer did not action a submission within two days, nothing happened automatically. The document sat. Someone would eventually notice and send a chasing email. The average delay per stalled step was 2.1 days. Across a typical submission with one or two stalled steps, this added roughly two to four days to the approval cycle.

These were not secrets. Everyone in the room, to varying degrees, was aware of each of these issues. But they had never been written down as structural failures in the process specification — which meant they had never been addressable as such. You cannot fix a process problem by telling people to try harder. You fix it by redesigning the process.

What We Built

The system has three functional components, designed together to address the structural failures surfaced in the mapping session.

Structured Intake
A validation layer at submission entry. All required documents are attached via a structured form — not email. Required fields are checked programmatically. The cross-reference check identified at step two was moved to intake: the system validates it against the regulatory database before the submission enters the review chain. Any submission missing required elements or failing the cross-reference is flagged and returned to the submitter before it reaches the first reviewer. This eliminated the entire class of errors that originated from incomplete submissions reaching the chain.
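To make the shape of this layer concrete, here is a minimal sketch. It is illustrative, not the firm's production code: the field names, the Submission structure, and the check_cross_reference stub are hypothetical stand-ins, and the real cross-reference check queries the regulatory database rather than a hard-coded set.

```python
from dataclasses import dataclass, field

# Hypothetical field names; the real form carries firm-specific required fields.
REQUIRED_FIELDS = {"client_id", "document_type", "regulatory_classification"}

@dataclass
class Submission:
    fields: dict                                # form field name -> value
    attachments: list                           # attached document identifiers
    errors: list = field(default_factory=list)  # populated by validation

def check_cross_reference(classification: str) -> bool:
    """Stand-in for the lookup against the regulatory database."""
    # Assumption: production calls an external service; these values are placeholders.
    return classification in {"class-a", "class-b"}

def validate_intake(sub: Submission) -> bool:
    """Run every programmatic check at entry. A failing submission is
    returned to the submitter and never reaches the first reviewer."""
    missing = REQUIRED_FIELDS - sub.fields.keys()
    if missing:
        sub.errors.append(f"missing required fields: {sorted(missing)}")
    if not sub.attachments:
        sub.errors.append("no documents attached")
    classification = sub.fields.get("regulatory_classification", "")
    if classification and not check_cross_reference(classification):
        sub.errors.append(f"cross-reference check failed: {classification!r}")
    return not sub.errors
```

The design point is placement rather than sophistication: every check that can run programmatically runs before the first human reviewer ever sees the submission.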
Automated Routing and Escalation
Each sign-off request is routed automatically to the correct reviewer with a defined deadline. The routing logic encodes the revised approval chain — the two overlapping steps were consolidated into one, with explicit scope defined. Status is tracked in a central dashboard visible to all parties in real time. If a review is not completed within the defined window, an escalation is triggered: a direct notification to the reviewer, then to their manager if still not actioned within a further defined period. No email chasing. No manual tracking.
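The escalation logic is simple state evaluation over elapsed time. Here is a sketch under assumed parameters — the case study describes the windows only as "defined", so the two-day review window and one-day manager grace period below are illustrative, not the firm's actual configuration.

```python
from datetime import datetime, timedelta
from enum import Enum

class Escalation(Enum):
    NONE = "none"               # within the review window
    REMIND_REVIEWER = "remind"  # window passed: notify the reviewer directly
    NOTIFY_MANAGER = "escalate" # grace period also passed: notify their manager

# Assumed durations; the source specifies only that the windows are defined.
REVIEW_WINDOW = timedelta(days=2)
MANAGER_GRACE = timedelta(days=1)

def escalation_state(assigned_at: datetime, now: datetime) -> Escalation:
    """Decide what, if anything, to trigger for one pending sign-off."""
    age = now - assigned_at
    if age > REVIEW_WINDOW + MANAGER_GRACE:
        return Escalation.NOTIFY_MANAGER
    if age > REVIEW_WINDOW:
        return Escalation.REMIND_REVIEWER
    return Escalation.NONE

# A scheduler would evaluate this for every open step, e.g. hourly:
print(escalation_state(datetime(2025, 3, 3, 9, 0), datetime(2025, 3, 6, 10, 0)))
# -> Escalation.NOTIFY_MANAGER
```

Because the evaluation runs on a schedule rather than on human memory, no one has to notice that a submission has stalled: the notification fires on its own.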
Audit Trail
Every action in the workflow — document submitted, review completed, comment added, classification changed, approval given — is logged with timestamp, user, and action type. The compliance director can pull a complete audit history for any submission in under two minutes. This resolved the regulatory exposure concern: the firm now has full, reliable documentation of its compliance process for any submission, retrievable on demand.
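An audit trail of this kind can be sketched as an append-only log, one record per action. The JSON-lines file below is a stand-in for what would more likely be a database table in production; the action names come from the list above, and everything else is assumption.

```python
import json
from datetime import datetime, timezone

# Action types taken from the workflow described above.
ACTIONS = {"submitted", "review_completed", "comment_added",
           "classification_changed", "approval_given"}

def log_action(log_path: str, submission_id: str, user: str,
               action: str, detail: str = "") -> None:
    """Append one immutable record: one JSON object per line."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action type: {action}")
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "submission": submission_id,
        "user": user,
        "action": action,
        "detail": detail,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def history(log_path: str, submission_id: str) -> list:
    """Pull the complete audit history for one submission, in order."""
    with open(log_path, encoding="utf-8") as f:
        return [r for r in map(json.loads, f) if r["submission"] == submission_id]
```

Because records are only ever appended, retrieval is a straight filter over the log — the property that makes an on-demand audit pull trivial rather than an afternoon of inbox archaeology.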

The Results

After three months of operation, the numbers were clear:

| Metric | Before | After |
| --- | --- | --- |
| Error rate on submissions | ~13% | ~1.2% |
| Average approval cycle time | 4.3 days | 1.8 days |
| Coordination overhead (estimated) | ~2.0 FTE | ~0.5 FTE |
| Stalled submissions requiring manual chasing | ~40% of submissions | <5% (auto-escalated) |

The remaining 1.2% represents genuine edge cases: unusual submission types whose classification falls outside standard parameters and requires human judgment. These were anticipated. The goal was never zero errors; that would require removing human judgment from decisions where it belongs. The goal was to eliminate the structural errors — the ones happening not because the work was complex, but because the process gave people no structure to work within. Those are gone.

What the Compliance Director Said

"We kept thinking we needed to train people better. The problem was not the people. It was that the process gave them no structure to work within. Once there was structure, the errors stopped."
— Compliance Director, 22-person financial services firm

This is the most consistent finding across every process rebuild we have done. The human beings in the workflow are not the problem. The architecture around them is. Give people a clear, structured process with defined steps, automated routing, and real-time status visibility — and performance changes without any change to the people doing the work.

What Made This Work

The technology in this build is not complex. A structured intake form with programmatic validation. An automated routing and escalation system. A dashboard. An audit log. None of these are technically ambitious — a competent developer can build all of them. The technology was not the hard part.

What made this work was the process mapping that came before the build. By the time we began technical design, we had a precise specification: exactly what the system needed to do, in what order, under what conditions, with what data, and what the exceptions looked like. The three structural failures identified in the mapping session were all addressed in the design — before a single line of code was written. The build scope was accurate. There were no mid-build discoveries that required redesign.

This sequence — map first, then design, then build — is what separates successful workflow automation from the pattern most AI automation projects follow: build first, discover problems in production, iterate in crisis. That pattern is expensive. It is also avoidable. The information you need to build correctly is always available before you start; you just have to surface it through mapping before you write code against it.

For this firm, the mapping session took two hours. The written specification followed within 24 hours. The working prototype was delivered in three weeks. The full system was live and being used by the team within six weeks of our first conversation. Thirteen percent to 1.2% — at the process level, that is a solved problem.

Next step
Have a workflow with a structural problem you keep trying to fix?
Book a 30-minute discovery call. We map the process, identify where the structural failures are, and tell you honestly whether rebuilding it is the right answer — and what that rebuild would look like.
Book a discovery call →