Automation That Remembers What’s True
This article is in reference to:
Stop Wiring Actions to Events. Build State-Aware Workflows
As seen on: cfcx.work
Automation without memory
Most teams don’t get paged for the failures this post is about. There is no red dashboard, no midnight incident call. Instead there is a slow leak: duplicate invoices, missing updates, half-synced records that quietly distort reports and force humans to reconcile “what the system says” with what actually happened.
The “so what” is operational trust. When every important workflow is wired directly to events, leaders eventually stop believing their own systems. Forecasts need manual caveats, finance closes need ad hoc checks, and customer-facing tools require side spreadsheets to stay honest. The cost is not just rework; it is the inability to confidently answer a basic question: for this specific piece of work, what is true right now?
This piece exists because teams keep hitting the same quiet failure: automations that "work" in tests but erode in production, accruing exactly this kind of drag until a customer complaint or a finance report forces the issue.
The original post is not really about content pipelines or ERP workflows. It is about memory: the gap between how people think work moves through a system and how software actually represents “what is true right now.”
By arguing that most automation failures are state problems dressed up as orchestration issues, the author is pushing against a popular comfort: the idea that more connectors, more triggers, and more events will eventually produce reliable operations. Wiring actions to events feels like progress, but at scale it quietly erodes the very confidence automation was meant to create.
The hidden cost of reacting to motion
Modern stacks encourage motion. Every tool emits events, every platform offers webhooks, every integration canvas invites “when X happens, do Y.” The system looks active and responsive. Dashboards fill with runs, tasks, and logs. On the surface, this feels like work getting done.
The article points at the structural problem underneath: each component is allowed to declare its own version of “finished.” A job returning 200, a webhook firing, a record inserting — each of these becomes a local truth. No one asks whether that truth is consistent across the whole flow.
This is why the examples in the source text feel mundane: jobs finishing twice, steps finishing out of order, validations lagging behind API responses. They are not esoteric edge cases; they are ordinary behaviors of distributed systems. Yet most automation design quietly assumes the opposite — that events arrive once, in order, and with stable meaning.
That assumption is less and less defensible as teams stitch together LLM services, workflow engines, ERPs, CMSs, and chat tools. Each system has its own retries, buffering, and idea of success. When automations are wired purely on events, those differences accumulate as drift. The work “moves,” but no one can reliably answer the simple question: for this specific item, what is true right now?
State as a first-class citizen
The core move the author makes is to re-center state as the primary object of design. Events stop being proof that something has happened; they are demoted to triggers that prompt an attempt to change state. The real authority becomes a persistent record per work item: where it is, what version is being processed, what artifacts exist, what evidence backs them, and what failures have occurred.
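A persistent per-item record of this kind is easy to make concrete. The sketch below is illustrative, not the article's implementation; every field name here is an assumption chosen to mirror the list above (location, version, artifacts, evidence, failures).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a durable per-work-item state record.
# Field names are assumptions for illustration, not from the article.
@dataclass
class WorkItemState:
    item_id: str
    state: str = "draft"          # where the item is in its lifecycle
    version: int = 0              # which revision is being processed
    artifacts: dict = field(default_factory=dict)  # e.g. {"html": "s3://..."}
    evidence: dict = field(default_factory=dict)   # e.g. validation results
    failures: list = field(default_factory=list)   # past errors, with context
    updated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

The point is less the exact schema than its durability: this record, not any tool's event stream, is what the rest of the system consults to learn what is true.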
This is more than an implementation detail. It is a shift in mental model from “flows” to “lifecycles.” A flow is something you run. A lifecycle is something you observe and advance under clear conditions. By framing workflows as finite state machines with explicit transitions, the author forces a question that event-driven wiring often avoids: under what precise preconditions is it valid to say that this item has moved from one state to another?
- Work is defined by durable states, not by transient messages.
- Transitions become verifiable operations, not blind handoffs.
- Retries shift from being scary (“what if it runs twice?”) to safe (“either it advances or it no-ops”).
The concrete content-publishing lifecycle in the source text is less important as a template than as a pattern: each state corresponds to what a human would reasonably believe about the work if they had all the facts. The system, via state records, takes responsibility for maintaining that belief consistently. In that sense, a state machine is not just a technical construct; it is an agreement about what “done,” “blocked,” or “failed” actually mean.
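The bullet points above can be sketched as a minimal finite state machine with explicit, guarded, idempotent transitions. The state names and transition table below are assumptions for illustration, not the article's lifecycle.

```python
# Minimal FSM sketch: transitions are explicit, guarded, and idempotent.
# State names and the ALLOWED table are illustrative assumptions.
ALLOWED = {
    ("draft", "validated"),
    ("validated", "published"),
    ("validated", "failed"),
}

def transition(record: dict, target: str) -> bool:
    """Attempt to advance `record` to `target`.

    Returns True if the state advanced, False if it was already there
    (so a retried event becomes a safe no-op). Raises on an illegal move.
    """
    current = record["state"]
    if current == target:
        return False  # the same event delivered twice: no-op, not a duplicate action
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition {current!r} -> {target!r}")
    record["state"] = target
    return True

item = {"id": "post-42", "state": "draft"}
transition(item, "validated")  # advances
transition(item, "validated")  # redelivery of the same event: no-op
```

Under this shape, "what if it runs twice?" stops being a design anxiety: either the item advances exactly once, or the second attempt does nothing.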
Signals from the trenches
The choice of examples — content pipelines, ERPs, workflow tools like Zapier or n8n — is not accidental. These are domains where the cost of being “nearly correct” is high and where integrations proliferate quickly.
In ERP contexts, a duplicated journal entry or a sales order stuck between “created” and “approved” is not a mere glitch. It has regulatory, financial, and trust implications. Yet many ERP-centric automations still rely on events like “record created” or “status updated” without a separate, durable account of the business process state. The article suggests that this approach has reached its limit.
Similarly, LLM-driven workflows introduce non-determinism into pipelines that were previously predictable. A model may time out, partially succeed, or return different structures on different runs. Wiring downstream systems directly to “model completed” events invites chaos when those behaviors occur. A state-aware design, by contrast, requires the model to produce artifacts, validations, and evidence before declaring that a transition is valid.
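One way to picture that gate is a function that refuses to advance the state until the model's output passes explicit checks; the completion event alone proves nothing. The required-field schema and field names below are assumptions for illustration.

```python
# Hedged sketch: an LLM step only "counts" once its output passes explicit
# validation. The schema here (title/body) is an illustrative assumption.
REQUIRED_FIELDS = {"title", "body"}

def accept_model_output(record: dict, output: dict) -> bool:
    """Record artifacts and evidence, and advance state, only on valid output."""
    missing = REQUIRED_FIELDS - output.keys()
    if missing:
        # The failure is remembered, but the state does not move.
        record.setdefault("failures", []).append(
            {"reason": "missing_fields", "fields": sorted(missing)}
        )
        return False
    record["artifacts"] = {"draft": output}
    record["evidence"] = {"schema_check": "passed"}
    record["state"] = "validated"
    return True
```

A timeout, a partial response, or a malformed structure all land in the same place: a recorded failure on the work item, with the state unchanged and the retry safe.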
Across these examples, the pattern is the same: the more heterogeneous the stack, the less safe it becomes to let any single tool define truth via its events. The durable record of process state has to live somewhere teams control, with external systems treated as participants in transitions, not owners of meaning.
From debugging sequences to managing work
One of the quieter claims in the piece is about observability. When systems are wired around events, monitoring devolves into log archaeology: scrolling through past activity and inferring what probably happened. This is workable at small scale, but it does not age well.
The state-centric alternative is to treat monitoring as a query over current truth. Instead of asking, “What ran when?” the operator asks, “Which items are stuck, and why?” States, transitions, and reasons become the primary dimensions. Time-series logs remain useful, but as supporting evidence rather than the only narrative.
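"Which items are stuck, and why?" becomes an ordinary query once state records exist. The sketch below assumes the hypothetical record fields used earlier (`state`, `updated_at`, `failures`); in practice this would be a database query rather than an in-memory filter.

```python
from datetime import datetime, timedelta, timezone

# Monitoring as a query over current truth, not log archaeology.
# Record field names are illustrative assumptions.
def stuck_items(records, max_age=timedelta(hours=1)):
    """Return (id, state, failures) for items not terminal and not recently updated."""
    now = datetime.now(timezone.utc)
    return [
        (r["id"], r["state"], r.get("failures", []))
        for r in records
        if r["state"] not in ("published", "failed")
        and now - r["updated_at"] > max_age
    ]
```

The answer arrives as states and reasons, the dimensions an operator actually thinks in, with the event logs held in reserve as supporting evidence.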
This reframing matters culturally. Teams used to debugging sequences build habits of heroics and forensics. They become good at reconstructing the past. Teams used to querying state build habits around designing clear lifecycles and failure modes. They become good at making the present legible.
The article is advocating for the latter habit set. It is a call to push design effort upstream: define states, preconditions, postconditions, and idempotency policies before wiring triggers. In doing so, it suggests that the real leverage in automation is not speed of integration, but clarity about truth.
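That upstream design effort can start as nothing more than a declarative spec written before any trigger is wired. The lifecycle below is a hypothetical example, not the article's; the state names, conditions, and policies are placeholders a team would replace with its own.

```python
# Hedged sketch: the lifecycle as data, defined before any wiring.
# All names and conditions here are illustrative assumptions.
LIFECYCLE = {
    "states": ["draft", "validated", "published", "failed"],
    "transitions": {
        ("draft", "validated"): {
            "precondition": "all required artifacts exist",
            "postcondition": "validation evidence recorded",
            "idempotency": "no-op if already validated",
        },
        ("validated", "published"): {
            "precondition": "approval present",
            "postcondition": "public URL recorded as artifact",
            "idempotency": "no-op if already published",
        },
    },
}
```

Even unexecuted, a spec like this forces the conversation the article is asking for: agreeing on what "done" means before any event is allowed to claim it.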
In the end, systems that know where they stand
This post is not just an argument about state machines. It is a claim about what makes complex work reliable: systems that can say, with confidence, where each item stands and why.
Ultimately, the author is asking teams to trade a certain kind of convenience for a different kind of ease. Wiring actions directly to events is convenient — until the day it is not. Building state-aware workflows requires more upfront thinking, but it buys back operational calm: safe retries, clear failure boundaries, and monitoring that aligns with how humans think about progress.
Looking ahead, as more organizations weave together LLMs, legacy ERPs, SaaS platforms, and internal services, this distinction will matter more, not less. The stack will continue to generate more events. The question will be whether those events merely create motion, or whether stateful designs turn them into controlled attempts to advance what is true.
A practical next step is simple and uncomfortable: pick one important process and ask, “Where does this workflow remember its state?” If the answer is a tangle of events, logs, and ad hoc checks, the article’s argument applies directly. The invitation is to let work own its state, so the system can stop guessing — and people can move from reconstructing what happened to confidently directing what should happen next.