By Bill Sourour

The Web Form Is Dead

Tags: enterprise-software, user-experience, ai-automation
I watched a colleague spend forty minutes last week entering data into a procurement system. Not new data. Data that already existed in an email, a PDF, and a spreadsheet, all open in adjacent browser tabs. She was copying and pasting between windows, tabbing through form fields, occasionally retyping things by hand when the paste didn't land right.

Forty minutes. To tell a system something it could have known in seconds.

That evening, I took a photo of a crumpled receipt, dropped it into an app, and tapped "yes" once. Done. The AI read the merchant, amount, and date, determined the category, and asked me one question. Eight seconds.

My colleague wasn't angry about the procurement form. She wasn't even frustrated. She'd accepted that enterprise software works that way. You find the right screen. You fill in the boxes. You click submit.

That acceptance is disappearing.

The ratchet

A ratchet is a mechanical device that permits movement in one direction only. Expectations work the same way.

Once someone has uploaded a document and watched AI extract what matters in seconds, manual data entry stops feeling normal. Once someone has described what they want in plain language and watched a machine figure out the steps, a twelve-tab wizard feels broken.

This is happening every evening. People go home and use consumer AI tools that improve on a curve. They come back Monday morning to enterprise tools that improve on a roadmap, which means quarterly at best. The curve pulls away from the roadmap a little more each cycle.

Your real competition isn't another vendor. It's the tax app that read a T4 in three seconds. The email client that drafted a reply someone only had to edit slightly. The grocery app that predicted their list. Consumer software has trained an entire workforce to expect intelligence. Enterprise software is still asking them to be the intelligence.

The NNGroup 2026 UX report measured this: users who've had good AI experiences become less patient with traditional interfaces. They don't just prefer the AI version. They perceive the old version as broken. The form didn't change. Their threshold did.

Gartner predicts 40% of enterprise apps will have task-specific AI agents by the end of this year, up from under 5% in 2025. Salesforce added 6,000 enterprise customers to Agentforce in a single quarter. The big vendors are rebuilding their interaction model around agents and natural language. The old contract, "learn our system, memorize our menus, fill out our forms," is expiring. Nobody's renewing it.

What the gap costs

The obvious cost is time. People spending hours on work that machines handle in seconds. The less obvious cost compounds: your best people route around your systems entirely.

They paste data into ChatGPT instead of your reporting tool. They use personal AI accounts to analyze spreadsheets your BI platform should handle. They build shadow workflows in tools you don't control, don't audit, and don't know about.

Most CTOs learn about this pattern from a security audit, not from the teams doing it. By then, the data has been flowing through personal accounts for months. The board question, "what's our AI strategy," arrives around the same time as the security question, "where is our data going." The CTO has to answer both with the same budget.

A Salesforce survey found that more than half of generative AI users at work have used tools their company hasn't approved. Not because they're reckless. Because the approved tools are slower than what they have at home.

Shadow AI is the new shadow IT: data leaking into systems you can't see, decisions made with tools you can't govern. Keep the gap open long enough and people stop working around the system. They leave.

The chatbot mistake

There's a version of this story where the answer is "just slap a chatbot on it." A lot of vendors are telling that story right now. Put a little AI assistant in the corner of the screen. Let people type questions. Ship it.

This misses the point.

Nobody wants to have a conversation with their procurement system. The problem is that enterprise software still expects humans to do work the machine should be doing.

The useful question: how much of what your interface demands could be handled before the user even sees a screen?

Think about what happens when someone needs to update a vendor's banking details. In most enterprise tools, they navigate a menu tree four levels deep, find the right form, and start typing. In the version that respects their time, they drop in the letter from Acme's finance team, and the system figures out the rest. It pulls up the right record. Fills in what it can. Asks about the one or two things it genuinely doesn't know.

The user still reviews. Still approves. Still has the final say. The drudgery is gone. What's left is judgment.
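The pre-fill-then-confirm loop above can be sketched in a few lines. This is a minimal illustration, not a product API: the field names and the `prefill` helper are assumptions, and the extracted values would in practice come from a document-understanding model reading the vendor's letter.

```python
# Hypothetical field set for a vendor banking-details record.
REQUIRED_FIELDS = ["vendor_name", "bank_name", "account_number", "routing_number"]

def prefill(extracted: dict) -> tuple[dict, list[str]]:
    """Merge whatever the extractor found into a draft record and
    list the fields the user genuinely still has to supply."""
    draft = {k: extracted.get(k) for k in REQUIRED_FIELDS}
    missing = [k for k in REQUIRED_FIELDS if not draft[k]]
    return draft, missing

# Simulated extractor output; in production this comes from the model.
extracted = {
    "vendor_name": "Acme Corp",
    "bank_name": "First National",
    "account_number": "123456789",
}
draft, missing = prefill(extracted)
# The UI shows `draft` for review and asks only about `missing`.
```

The point of the shape, not the code: the system does the transcription, and the interface shrinks to the one or two questions only a human can answer.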

Intent over procedure

There's a phrase gaining traction in design circles: intent-first software. Stop designing interfaces around the shape of your database and start designing them around what someone is trying to accomplish.

Old model: "Here are 200 things you could do. Figure out which one you need."

New model: "What are you trying to do?"

The concept is simple. The engineering is not. Your UI can't be static anymore. It has to infer context: who's using it, what they've done before, what they probably need right now.

This scares people who care about auditability. It shouldn't. When an agent auto-fills a field, it logs where the data came from. When it infers a category, it can show its reasoning. Every action traces back to a source document or a decision rule. That's a better audit trail than a human copy-pasting between browser tabs, where nobody can reconstruct what happened after the fact.
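A provenance log like the one described above is not exotic engineering. Here is a minimal sketch, assuming a single shared log; the function and field names are illustrative, not any particular vendor's API.

```python
import datetime

audit_log = []

def agent_fill(record: dict, field_name: str, value: str,
               source: str, reasoning: str) -> None:
    """Set a field on the record and log where the value came from."""
    record[field_name] = value
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "field": field_name,
        "value": value,
        "source": source,        # the document the value was read from
        "reasoning": reasoning,  # the rule or inference the agent applied
    })

record = {}
agent_fill(record, "category", "office-supplies",
           source="receipt_0412.jpg",
           reasoning="merchant name matched known stationery vendor")
```

Every entry answers the auditor's two questions, what was changed and why, which is more than any amount of alt-tabbing ever recorded.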

But you can't ship this half-baked. People burned by bad AI features become more hesitant to try new ones. The system has to show its work, and it has to let users correct mistakes easily. Trust compounds the same way distrust does.

Intuit figured this out with TurboTax. Upload a tax document and their AI reads it, interprets the data, builds a personalized checklist of what you still need, and tells you where to find the forms you're missing.

The compliance question

The objection from regulated industries is always the same: "We can't send sensitive data to some AI in the cloud."

They're right. And it doesn't matter.

Open-source models you can run entirely on your own infrastructure have gotten good. You host them on-premise or in your own cloud tenancy. Data never leaves your perimeter. You sandbox each instance so it only touches what it needs. You own the inputs, the outputs, the logs, everything.

Most people hear "local model" and think "compromise." For a regulated organization, it's the opposite. A sandboxed agent produces a complete record of every piece of data it touched and every decision it made. You can audit an agent. You can't audit a person's alt-tab history.

You don't need to build anything ambitious. An agent that reads invoices. One that pre-fills vendor onboarding. One that pulls key terms from contracts. Each one sandboxed, each one auditable, each one doing one thing well. Small bets with clear boundaries; exactly the kind of bet a cautious organization should be making. Regulated organizations get their first agent into production this way: one workflow, one sandbox, measurable results within a quarter.
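"Sandboxed with clear boundaries" can be as simple as an explicit allowlist the agent's file access must pass through. A sketch, with hypothetical paths and helper names:

```python
from pathlib import Path

# The only directory this invoice-reading agent is permitted to touch.
ALLOWED_ROOTS = [Path("/data/invoices")]

def authorize_read(path: str) -> bool:
    """Permit reads only inside the agent's sandboxed directory.

    resolve() normalizes things like '..' so the agent can't escape
    the allowlist with a crafted relative path.
    """
    p = Path(path).resolve()
    return any(p.is_relative_to(root) for root in ALLOWED_ROOTS)
```

So `authorize_read("/data/invoices/2024/inv_001.pdf")` passes and `authorize_read("/etc/passwd")` does not. The narrowness is the feature: a one-directory, one-purpose agent is easy to reason about, easy to audit, and easy to shut off.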

The engineers who've been maintaining your systems, the ones who know the business rules, the edge cases, the fields that matter versus the ones nobody's touched since 2014, are your best asset in this transition. They understand the domain better than any AI startup that discovered it last quarter. An engineer who spent five years on claims processing is the right person to build a claims processing agent.

The shift from maintaining forms to building agents that eliminate forms is a promotion, not a layoff.

The organizations most afraid of this transition are sitting on the biggest opportunity. The most forms. The most drudgery. The people who know the work cold. A local model, a sandboxed environment, an engineer who already understands the workflow. That's the whole starting kit.

Where to start

Watch your users work. Sit with someone for an hour. Pay attention to the tab switches. The retyping. The moments where a person becomes a bridge between two systems that should be talking to each other.

Then pick three workflows where people are most visibly routing around your systems. The places where they've already told you, with their behavior, that the current tool isn't good enough. For each one, ask: what could be pre-filled, extracted from a document, or eliminated entirely?

If the answer is "nothing," you've stopped seeing it. It's been there so long it feels like furniture.

It's not furniture. It's friction. And your users have started to notice.

Bill Sourour


Founder, Arcnovus

25 years in enterprise technology. Writes about AI strategy for CTOs.

Featured in Fortune, WIRED, and CBC