One of the simplest ways I know to think about software is to split it into three layers: data, logic, and interface.
The data layer is the raw material: records, documents, messages, call transcripts, customer histories, internal notes — the information the company actually runs on.
The logic layer is how the system works on that material: what it pulls, what it ranks, what it triggers, what it summarizes, what it passes forward.
And then the interface is the part everyone sees, which is why it gets so much of the attention.
What keeps striking me is that when a product starts feeling heavier than it should, people almost always reach for the visible layer first.
They adjust the interface, add more controls, add another screen, write more instructions, or bolt on another feature.
Sometimes that gives short-term relief, and sometimes it even feels like progress.
Usually it does not get at the real problem.
I keep seeing this in AI products especially.
Teams assume the magic is in the model, so if the outputs are strange they rewrite the prompt or add more rules or build more fallback logic or create more elaborate post-processing.
Or they assume the issue is discoverability, so the interface expands with more controls, more tabs, more visibility, or more explanations.
Meanwhile the underlying records are duplicated, stale, incomplete, mislabeled, or spread across five places.
At that point the system is trying to perform intelligence on top of data that is anything but.
That can work for a while.
In fact, it can work surprisingly well for a while, which is part of what makes this tricky.
AI is very good at helping teams postpone structure.
You can build a tool that demos well long before you have thought carefully about how its data is structured.
The product looks smart, feels alive, and the model says interesting things.
Then it meets real operating conditions and starts acting weird for reasons nobody can fully locate.
Now the question is whether the problem came from the source data, the prompt, the workflow chain, the fallback logic, the human input pattern, or some tiny adjustment somebody made three weeks ago.
That is when the product starts feeling brittle.
Visible problems tend to get all the attention, even when the real trouble is sitting underneath them.
The cleaner way through this, in my experience, is to spend more time lower in the stack than most people want to, asking “boring” questions.
Here are a few such questions:
- Where are all the places this information lives?
- Which fields can actually be trusted and kept up to date?
- Which of these records are duplicated?
- What should be normalized before a model ever sees it?
- Which records deserve enrichment, and which should be removed entirely?
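As a sketch, the deduplication and normalization questions above might translate into a small hygiene pass like the one below. The field names and the matching rule (email as the dedupe key, most recent `updated` wins) are hypothetical, not a prescription:

```python
from collections import defaultdict

def normalize(record):
    """Trim whitespace and canonicalize the fields used for matching."""
    return {
        "email": record.get("email", "").strip().lower(),
        "name": " ".join(record.get("name", "").split()).title(),
        "updated": record.get("updated", ""),
    }

def dedupe(records):
    """Group records by email and keep the most recently updated copy."""
    by_key = defaultdict(list)
    for r in map(normalize, records):
        if r["email"]:  # skip records with no usable key
            by_key[r["email"]].append(r)
    # ISO date strings compare correctly as plain strings
    return [max(group, key=lambda r: r["updated"]) for group in by_key.values()]

raw = [
    {"email": " Ada@Example.com ", "name": "ada  lovelace", "updated": "2024-01-05"},
    {"email": "ada@example.com", "name": "Ada Lovelace", "updated": "2024-03-12"},
    {"email": "", "name": "No Key", "updated": "2024-02-01"},
]
clean = dedupe(raw)  # one trustworthy record survives per customer
```

A pass like this is deliberately dumb: no model, no heuristics, just answering "which copy do we trust?" before anything downstream ever sees the data.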
Now, none of that may sound especially exciting, but that is where a huge amount of leverage lives.
When the data gets better, the rest of the system tends to calm down.
The logic can get lighter because it is no longer compensating for chaos underneath it.
The interface can get simpler because users do not need as many recovery tools.
The outputs become easier to trust because the system is finally working with trustworthy material at every stage.
To use another analogy, think about the Titanic.
While it was moving smoothly across the Atlantic, before it ever hit an iceberg, the machines deep in the ship were running well. That allowed the dormitory-style bunks above them to feel comfortable, even if a little cramped at times. It also allowed the restaurants on the upper floors to serve guests without worry. Above all of that, passengers on the deck looked out at the starry night sky and felt the smooth power beneath them carrying them through the ocean.
To extend the analogy, what a lot of software teams do when a workflow breaks is run up to the deck, the top-level abstraction, and start cleaning the windows so people can see out of them better.
If the analogy holds, there are really two things that matter in that moment:
- What is the underlying cause of the problem? In this case, there is a hole in the lower part of the ship.
- What is the real response? You get as many people as you can into life rafts.
With software, teams often ignore the underlying problem, which is usually a data problem.
I think this is also why some AI products end up feeling much more complicated than they should.
They are often doing too much work in the wrong place.
The data layer is weak, so the logic tries to rescue it.
The logic gets strange, so the interface tries to rescue that.
Then the whole project starts feeling bloated and no one is sure why.
Companies often make this worse in very normal ways.
The data is never pristine: people enter things differently, one team uses shorthand, another leaves fields empty.
Customers do not always behave the way your edge cases predicted either, so records do not stay current.
So if the product depends on the data being more orderly than the company actually is, the weirdness eventually leaks through.
That is why I have a hard time treating data work like a side task around AI. To me it is the center of the whole thing.
Once the company’s information is shaped well, the business itself becomes easier to read.
You can see patterns more clearly and you can compare behavior more honestly.
You can tell which signals deserve attention and which ones were just taking up mental space.
The AI layer becomes more useful because the company has become more legible to itself.
And really that is the part I keep coming back to.
If an app feels too complicated, I usually assume the issue is not that the people involved are not smart enough or that the model is not powerful enough.
I usually assume the structure underneath is weaker than everyone hoped it was.
Better products usually start with boring cleanup: cleaner records, clearer structure, and less rescue work in the layers above.