
When "Process" Is Human Middleware

When "Process" Is Human Middleware

What’s at stake in most ERP and finance operations isn’t whether the software is “up.” The question is why so many organizations still depend on a specific person being available, attentive, and configured “just right” for the business to run.

From first principles, a process is supposed to be repeatable under variation: different people, different geographies, different devices, different time zones. If the work only succeeds when a particular employee opens a CSV, fixes date formats, and re-saves it before importing, you don’t have a process. You have human middleware.

This is not a moral critique of the people doing the work. It’s a systems critique. The more your operation grows, the more this hidden dependency becomes a failure mode—quietly at first, then all at once.

Human Middleware as a Design Smell

Human middleware is the use of skilled professionals as “biological APIs”: translating formats, reconciling columns, copy-pasting between tools, and making judgment calls that the system never encoded. It often looks like productivity because it’s busy and continuous. But it is actually a sign that the system boundary is in the wrong place.

A helpful test is simple: if the process breaks when someone is on vacation, or when a laptop locale changes from DD/MM/YYYY to MM/DD/YYYY, the process is not robust. It is contingent.
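The date trap is easy to demonstrate: an ambiguous string like 03/04/2025 parses cleanly under both conventions, so nothing fails loudly — the meaning simply changes. A minimal Python sketch:

```python
from datetime import datetime

raw = "03/04/2025"  # ambiguous: April 3rd or March 4th?

# Both parses succeed without error; only the interpretation differs.
as_dmy = datetime.strptime(raw, "%d/%m/%Y").date()
as_mdy = datetime.strptime(raw, "%m/%d/%Y").date()

print(as_dmy.isoformat())  # 2025-04-03
print(as_mdy.isoformat())  # 2025-03-04
```

No exception is raised in either case, which is exactly why this class of error survives "careful" manual checking.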

In ERP contexts (NetSuite is a common example), these contingencies show up in predictable places:

  • CSV imports and saved mappings that assume a particular file shape
  • Manual “pre-validation” in Excel (“just check the dates before you import”)
  • Email and chat as integration layers (“send it to HQ and they’ll load it”)
  • Ad hoc reconciliations that fix symptoms instead of inputs

The Invisible Failure Mode: When “It Works” Isn’t Stable

In many scaling organizations, data originates in multiple regions and must land in a central system of record. The local team exports a file. Someone reconciles it in a spreadsheet. They email it to a central administrator who imports it into the ERP.

The fragility is often understood by the people in the loop. They know the import will fail if the delimiter changes, if Excel auto-converts a field, or if an invoice number ends up in an email body instead of the attachment. They also know the worst outcome: the import succeeds but the data is wrong. Silent success is the most expensive failure.

At that point, a technical fix is usually available: normalize formats with a script, validate against a schema, integrate by API, or move to a controlled drop location. But then the decision gets framed as “a small enhancement” that takes 20–40 hours, and the manual method gets defended as “working fine if everyone is careful.” This is the moment fragility becomes policy.

Why Manual Work Keeps Winning the Budget Argument

The root issue is how costs appear. Development time shows up as a visible line item. Manual time often disappears into salaries, goodwill, and institutional knowledge.

Most teams underestimate three compounding costs:

  • Recurring labor: two to six hours per cycle, multiplied by periods, subsidiaries, and exceptions
  • Latency: imports and closes that wait for a person (or a person’s time zone)
  • Silent error risk: values that load “successfully” but poison reporting and reconciliation

When leaders say “it’s fine,” they often mean “it hasn’t failed loudly yet.” But the operational burden is already present, and the risk profile is already deteriorating.
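To make the recurring-labor cost concrete, here is a back-of-the-envelope calculation. All figures are assumptions for illustration — substitute your own:

```python
# Hypothetical figures; every value below is an assumption to replace with your own.
hours_per_cycle = 4      # midpoint of the 2-6 hour range above
cycles_per_year = 12     # monthly close
subsidiaries = 5
hourly_cost = 60         # fully loaded cost per hour, in your currency

annual_hours = hours_per_cycle * cycles_per_year * subsidiaries
annual_cost = annual_hours * hourly_cost

print(annual_hours)  # 240
print(annual_cost)   # 14400
```

Even under these modest assumptions, the manual path consumes 240 hours a year — several times the "20–40 hour enhancement" that was declined as too expensive.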

Immutable Ingestion: A First-Principles Constraint

To reduce dependence on human middleware, it helps to adopt a non-negotiable design principle: immutable ingestion.

Immutable ingestion means data enters the system of record through a controlled boundary that is:

  • Human-agnostic: doesn’t depend on a person’s judgment, availability, or habits
  • Locale-agnostic: not sensitive to decimal separators, date conventions, or Excel settings
  • Schema-driven: validated against a canonical definition of what “correct” means
  • Fail-fast with actionable errors: rejects bad inputs with specific feedback

In practice, this means removing the “open the file and fix it” step. Data either conforms at ingestion or it is rejected. The correction happens upstream or in the ingestion layer—not in someone’s spreadsheet.

What immutable ingestion looks like in an ERP environment

  • A canonical template or schema (fields, types, constraints) owned by the system team
  • Automated normalization (ISO dates, numeric formats, currency precision)
  • Validation rules (required fields, permitted values, range checks)
  • Deterministic imports (same input always yields the same output)
  • Logging and alerts (so failures are visible and traceable)

This is not about perfection. It’s about moving variability out of the workflow and into engineered constraints.
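As a sketch of the schema-driven piece, the canonical definition can start as something as plain as a dictionary of fields and checks. The field names, formats, and currency list below are hypothetical, not a NetSuite-specific schema:

```python
import re

# Hypothetical canonical schema for a journal-line upload.
# Field names, formats, and the currency whitelist are illustrative assumptions.
SCHEMA = {
    "entry_date": {"required": True,
                   "check": lambda v: bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v))},
    "account":    {"required": True,
                   "check": lambda v: bool(re.fullmatch(r"\d{4,6}", v))},
    "amount":     {"required": True,
                   "check": lambda v: bool(re.fullmatch(r"-?\d+\.\d{2}", v))},
    "currency":   {"required": True,
                   "check": lambda v: v in {"USD", "EUR", "CLP"}},
}

def validate_row(row: dict) -> list[str]:
    """Return specific, actionable errors; an empty list means the row conforms."""
    errors = []
    for field, rule in SCHEMA.items():
        value = row.get(field, "").strip()
        if not value:
            if rule["required"]:
                errors.append(f"{field}: missing required value")
            continue
        if not rule["check"](value):
            errors.append(f"{field}: {value!r} does not match the canonical format")
    return errors
```

The point of the design is that "correct" is written down once, in one place, and every reject carries enough context to fix the source rather than the symptom.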

Example: Exchange Rates and the Locale Trap

Exchange rates are a good example because the work is “boring” but the downstream impact is large. In a human middleware setup, one region exports rates using DD/MM/YYYY and decimal commas, sends them to HQ, and HQ opens the file in an environment expecting MM/DD/YYYY and decimal periods. The file may import incorrectly, or worse, import “correctly” with wrong meaning.

Under immutable ingestion, the chain changes:

  • The source produces a machine-readable feed (API, SFTP drop, or structured file in a monitored location)
  • A normalization step converts dates to ISO 8601 (YYYY-MM-DD) and standardizes decimals
  • Validation checks detect implausible ranges (e.g., USD/CLP outside expected bounds)
  • The ERP import runs from a controlled, versioned mapping
  • Failures generate specific errors, not email silence

No one needs to “check the file.” The system performs the check every time, identically, and produces evidence when it rejects input.
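The normalization and plausibility steps above can be sketched in a few lines, assuming the source file uses DD/MM/YYYY dates and decimal commas (the USD/CLP bounds are illustrative, not a real policy):

```python
from datetime import datetime

# Illustrative plausibility bounds; a real system would source these from policy.
PLAUSIBLE_BOUNDS = {("USD", "CLP"): (700.0, 1100.0)}

def normalize_rate(raw_date: str, raw_rate: str, base: str, quote: str) -> dict:
    """Convert a DD/MM/YYYY, decimal-comma rate row to a canonical record,
    rejecting implausible values instead of loading them silently."""
    iso_date = datetime.strptime(raw_date, "%d/%m/%Y").date().isoformat()
    # "1.234,56" -> "1234,56" -> "1234.56" (assumes decimal-comma source data)
    rate = float(raw_rate.replace(".", "").replace(",", "."))
    lo, hi = PLAUSIBLE_BOUNDS[(base, quote)]
    if not lo <= rate <= hi:
        raise ValueError(f"{base}/{quote} rate {rate} outside plausible range [{lo}, {hi}]")
    return {"date": iso_date, "base": base, "quote": quote, "rate": rate}
```

For example, `normalize_rate("03/04/2025", "945,37", "USD", "CLP")` yields an ISO date and a properly typed rate, while a fat-fingered `"94,537"` raises a specific error instead of quietly poisoning every downstream translation.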

Systems Implication: Human Middleware Sets a Ceiling on Scale

The deeper implication is structural. Human middleware scales linearly. If volume doubles, the checking and fixing work doubles. You are adding headcount to solve a compute problem.

It also limits change. The moment you add a subsidiary, change a chart of accounts, or alter reporting structures, you have to retrain the middleware. The “process” becomes a set of meetings, reminders, and tribal knowledge. That fragility spreads into close, compliance, and decision-making.

True automation separates logic from labor. It treats global variation—languages, date formats, tax regimes—as engineering constraints to encode, not administrative burdens to endure.

How to Start Replacing Human Middleware (Without a Big Bang)

Most organizations don’t need a rewrite. They need to move one boundary at a time.

1) Inventory ingestion points

List every place data enters your system of record: imports, journals, integrations, “one-time” uploads that happen monthly, and any email-based handoff. If it requires manual pre-processing, it belongs on the list.

2) Classify by risk, not annoyance

Prioritize flows with high downstream blast radius: revenue, cash, tax, intercompany, and exchange rates. Also prioritize anything that can silently corrupt reporting.

3) Define a canonical schema and fail conditions

Document the required fields, allowed formats, and what constitutes a hard reject. Make “reject” normal. The goal is not to be permissive; it is to be correct.

4) Automate normalization and validation first

You can often keep the same CSV import mechanism initially while adding a pre-ingestion validation step that standardizes and rejects. This reduces risk without needing an immediate API integration.

5) Make errors observable

Errors should go to a queue, a ticket, or an alert channel with enough context to fix the source quickly. Silence is the enemy.

Ultimately, the point of an ERP is not that it stores transactions. It’s that it provides a stable system of record. If the correctness of that record depends on someone opening a file and “being careful,” correctness is optional.

What this means for operators is straightforward: treat human middleware as technical debt with an interest rate. The cost of removing it is finite and schedulable. The cost of relying on manual perfection accumulates and eventually arrives as a close failure, an audit problem, or a decision made on bad data.

The takeaway is to move the “care” from people into systems. If your workflow requires a specific person to check their email on a Tuesday, you don’t have a process you can scale.