It’s possible the reliability issues won’t be fixed until enterprise environments adapt to become more technologically hospitable to genAI systems.
“The deeper problem lies in how most enterprises treat the model like a magic box, expecting it to behave perfectly in a messy, incomplete and outdated system,” said Soumendra Mohanty, chief strategy officer at AI vendor Tredence. “GenAI models hallucinate not just because they’re flawed, but because they’re being used in environments that were never built for machine decision-making. To move past this, CIOs need to stop managing the model and start managing the system around the model. This means rethinking how data flows, how AI is embedded in business processes, and how decisions are made, checked and improved.”
Mohanty offered an example: “A contract summarizer should not just generate a summary; it should validate which clauses to flag, highlight missing sections and pull definitions from approved sources. This is decision engineering: defining the path, limits, and rules for AI output, not just the prompt.”
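To make that idea concrete, here is a minimal Python sketch of what "decision engineering" around a contract summarizer could look like: the model's output is wrapped in explicit checks rather than trusted on its own. Every name in it (REQUIRED_CLAUSES, APPROVED_GLOSSARY, summarize_contract) is hypothetical and not drawn from Tredence or any specific product.

```python
# Hypothetical sketch: rule-based checks wrapped around a genAI summarizer.
# All names here are illustrative, not a vendor implementation.

from dataclasses import dataclass, field

# Rules defined outside the prompt: clauses that must be reviewed,
# and definitions that may only come from approved sources.
REQUIRED_CLAUSES = ["termination", "liability", "confidentiality", "governing law"]
APPROVED_GLOSSARY = {
    "liability": "The extent to which a party is legally responsible for losses.",
    "confidentiality": "The obligation to protect non-public information.",
}

@dataclass
class ReviewedSummary:
    summary: str
    flagged_clauses: list[str] = field(default_factory=list)
    missing_sections: list[str] = field(default_factory=list)
    definitions: dict[str, str] = field(default_factory=dict)

def summarize_contract(contract_text: str) -> str:
    """Placeholder for the genAI call (e.g., an LLM API request)."""
    return "Summary: standard services agreement with a liability cap."

def review_contract(contract_text: str) -> ReviewedSummary:
    """Wrap the model call with checks so the output follows defined rules."""
    summary = summarize_contract(contract_text)
    lower = contract_text.lower()

    # Flag clauses present in the contract that always need human review.
    flagged = [c for c in REQUIRED_CLAUSES if c in lower]

    # Highlight required sections the contract is missing.
    missing = [c for c in REQUIRED_CLAUSES if c not in lower]

    # Pull definitions only from the approved glossary, never from the model.
    definitions = {term: APPROVED_GLOSSARY[term]
                   for term in APPROVED_GLOSSARY if term in lower}

    return ReviewedSummary(summary, flagged, missing, definitions)

if __name__ == "__main__":
    sample = "This agreement covers liability and confidentiality obligations."
    result = review_contract(sample)
    print(result.summary)
    print("Flagged for review:", result.flagged_clauses)
    print("Missing sections:", result.missing_sections)
    print("Approved definitions:", result.definitions)
```

The point of the sketch is the shape of the system, not the specific rules: the validation, flagging, and sourcing logic lives outside the model, so the model's output is constrained and checked rather than taken at face value.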
This story originally appeared on Computerworld.