AI, AI, AI
My sister said something the other day that made me laugh when describing her current work environment: “AI, AI, AI.” Every meeting. Every deck. Every strategy. It has become the phrase that anchors every conversation.
I see the same thing in the consulting world. AI shows up in roadmaps, executive conversations, vendor demos, and process redesigns. The volume and pace of AI solutions and progress right now can feel overwhelming. People feel pressure to move fast and use these life-changing tools to boost productivity, but they also worry about getting it wrong.
When everything is AI first, it becomes hard to separate signal from noise. So let’s clarify what AI actually is, what’s been going on recently, and some of the resulting shifts in the consulting world.
But Actually, What is AI?
AI is a broad term that covers different types of systems. Two common categories you will hear about are predictive AI and generative AI.
- Predictive AI forecasts outcomes based on historical data. For example, predicting customer churn, fraud risk, or sales likelihood.
- Generative AI creates new content such as text, code, or summaries based on patterns it learned during training.
This article focuses on generative AI, specifically large language models, because that is what is driving most of the current “AI, AI, AI” conversation.
Most of what people call AI in the workplace right now is really just prediction at massive scale. Large Language Models, or LLMs, simply predict the next most likely set of words based on the context you provide and the data they were trained on.
At a basic level, you can think of it like this:
Context (your ask) + Knowledge (what the AI was trained on) = Next token
A token is simply a small chunk of text. The model predicts the next token over and over again until it forms a full response. It repeats that prediction cycle rapidly, which is why the output feels structured and often intelligent.
Think of this Singapore inspired example.
If you type:
“One of Singapore’s most famous dishes is…”
The model predicts “chicken rice.” (That is the output you would get from something like ChatGPT.)
Why? Because in the training data, “Singapore” and “famous dish” frequently co-occur with “chicken rice.”
Then it continues:
“One of Singapore’s most famous dishes is chicken rice, often served in hawker centers…”
Again, it is making another prediction: chicken rice frequently appears alongside hawker centers in the training data.
The AI has never tasted chicken rice. It has never thought about chicken rice. It is simply following probability.
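The prediction cycle described above can be sketched as a toy program. This is an illustration, not how a real LLM works internally: the “model” here is just a hand-built lookup table with invented probabilities, whereas a real model learns probabilities over tens of thousands of tokens from training data.

```python
import random

# Toy "language model": maps a context to candidate next tokens with
# hand-assigned probabilities (invented for illustration).
TOY_MODEL = {
    "One of Singapore's most famous dishes is": [
        ("chicken rice,", 0.7), ("laksa,", 0.2), ("satay,", 0.1)
    ],
    "chicken rice,": [("often", 0.8), ("typically", 0.2)],
    "often": [("served", 0.9), ("found", 0.1)],
    "served": [("in", 1.0)],
    "in": [("hawker", 0.8), ("local", 0.2)],
    "hawker": [("centers.", 1.0)],
}

def next_token(context):
    """Pick the next token by sampling from the probabilities for this context."""
    candidates = TOY_MODEL.get(context)
    if not candidates:
        return None  # nothing learned for this context: stop generating
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights)[0]

def generate(prompt):
    """Repeat the prediction cycle: each predicted token becomes the next context."""
    output = [prompt]
    context = prompt
    while (token := next_token(context)) is not None:
        output.append(token)
        context = token
    return " ".join(output)

print(generate("One of Singapore's most famous dishes is"))
```

Run it a few times and the output changes, because each step is a weighted draw rather than a fixed answer. That is the entire trick, scaled up enormously.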
Bringing this back to consulting, AI models do not understand your business in the way a person does. They recognize patterns extremely well. When you give them structured instructions, relevant context, and good data, they produce surprisingly strong first drafts of thinking, language, and structure.
2026: The Year of “AI, AI, AI”
In recent months there seems to be even more “AI, AI, AI” than ever. The core reason is simple. Models are developing at a pace most of us did not expect. Major players have shipped substantial updates that push these systems from clever assistants toward tools that execute real work.
A very quick synopsis on some of the 2026 buzz:
- OpenAI released GPT 5.4, its most capable professional model yet, with expanded reasoning, native computer use, and a massive context window built for long workflows and agentic tasks. These upgrades do not just make ChatGPT faster. The model shows stronger performance on complex business processes and analysis, multi step reasoning, and the ability to embed AI into enterprise workflows.
- Anthropic has also surged forward. Its developer focused tool Claude Code has gained serious traction as a coding assistant, and in January Claude Cowork launched in preview. Claude Cowork was reportedly built in roughly ten days by a small team using Claude Code itself. It can organize files, analyze documents, and complete multi step tasks across systems. That matters because it signals how quickly capable agent systems can now be built.
- At the same time, agentic orchestration is moving from theory into practice. Teams are no longer working with one model in isolation. They are connecting specialized agents across systems, layering in retrieval, permissions, and approval flows, and building multi agent workflows.
These developments are not incremental model bumps. They reflect a shift toward systems that can operate across tools, execute workflows, and take structured action rather than just generate text.
Adapting the way consultants work
For tech consulting and delivery teams, this changes the ground under our feet. The tools we use to design, document, build, and test solutions are evolving in real time. PowerPoint, Excel, and Word were the standard toolkit even a year ago; our knowledge and use of AI now needs to be at that same level.
First drafts can be generated instantly. Code can scaffold itself. Documentation writes itself. Project timelines will compress, and that will be the new normal.
The differentiator is no longer who can produce output fastest. The differentiator is who can design reliable agentic workflows, define quality thresholds for agent work, and integrate these capabilities without creating fragile systems or future tech debt.
The Value Shift in Consulting
The value consultants bring to the table is not going to disappear, but it will shift. We need to think about how we will continue to deliver value in this new agentic world. How? Two key themes are becoming evident:
- Delivering outcomes over outputs
Historically, consulting value centered on activity. We ran analysis, built decks, configured and implemented systems.
AI compresses much of that work. It drafts in seconds. It synthesizes research instantly. It structures outputs faster than any team could manually.
So the focus needs to shift to delivering outcomes, faster and better. Clients will expect accelerated timelines and quicker prototypes. They will expect improvements without disruption to their business. And they will pay for measurable impact. Shorter cycle times. Higher quality decisions. Lower risk. Tangible revenue growth.
Our role moves upstream and deeper. We define the right problems before tools are selected. We choose the appropriate models and platforms. We design flexible architectures that can evolve. We embed governance that aligns with real risk tolerance, not theoretical policies.
We also design how agents work together. Orchestration becomes part of the blueprint, not an afterthought. And at the end of the engagement, our success is not just a deployed solution. It is a client who understands how to operate, monitor, and evolve that system and agentic layers without us.
- Using our industry knowledge and experience to build the agents
Agents are only as good as the thinking they encode.
Consultants therefore need to encode their judgment. We need to translate years of industry experience, delivery lessons, and pattern recognition into structured logic that an agent can execute. That means defining what good looks like. Setting risk thresholds. Designing evaluation criteria. Making implicit assumptions explicit.
The model can generate answers. It cannot define what excellence means in your industry.
In my own experience building delivery agents, the hardest part has never been the technical setup. It has been deciding what makes a “good” output, pressure testing edge cases, and defining where automation should stop and human review should step in.
If we want to lead this shift, we cannot sit on the sidelines. Consultants need to be operating at the frontier of AI, experimenting with models, understanding orchestration, and learning how these systems actually behave. Not because it is trendy, but because our clients will rely on us to implement this responsibly.
The consultants who lean into this will not be replaced by agents. They will be the ones designing them.
Bringing “AI, AI, AI” to Reality
If you want to move beyond theory, run a simple experiment:
- Pick a real, recurring workflow, such as writing an email, completing a marketing analysis, or doing a performance review.
- Break down the task into parts. Try to use AI at each step. It is no longer “this is a human task” and “this is an AI task.” Every task is becoming an AI enabled task. It is Human + AI.
- Reflect on Your Results. Where does it help? Where does it hallucinate? Where does it overstep? Where does it genuinely surprise you? Track time saved. Track where it makes errors.
- Iterate. AI will only give you outputs as good as the inputs you provide. Where you see mistakes, change your prompt and try again.
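The experiment above can be sketched as a simple review loop. Here `ask_model` and `human_review` are hypothetical stand-ins invented for this sketch, not real APIs; the point is the structure: draft, review, fold feedback back into the prompt, try again.

```python
def ask_model(prompt):
    # Hypothetical stand-in for a real AI tool (e.g., a chat call).
    # It just echoes the prompt so the sketch runs on its own.
    return f"[draft based on: {prompt}]"

def human_review(draft):
    # In practice a person reads the draft and either approves it
    # (returns None) or returns feedback. Here we approve immediately.
    return None

def iterate(task, max_rounds=3):
    """Human + AI loop: draft, review, refine the prompt, repeat."""
    prompt = task
    draft = ask_model(prompt)
    for _ in range(max_rounds):
        feedback = human_review(draft)
        if feedback is None:
            return draft  # the human approved the output
        # Fold the reviewer's feedback into the prompt and retry
        prompt = f"{task}\nFix the following issues: {feedback}"
        draft = ask_model(prompt)
    return draft

print(iterate("Write a short status update email to the project sponsor."))
```

Notice where the human sits: not out of the loop, but as the quality gate that decides when the output is good enough.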
We are in uncharted territory
Executives, consultants, engineers, and vendors are all learning in real time. The advantage right now belongs to the people who experiment on purpose.
The best part is that you can test these systems in your everyday work. Quite literally, talk to your AI tool: challenge it, ask it to do more and do better, call it out when it’s wrong. AI rewards those willing to try, adjust, and get a little better, faster, and smarter each time.
Consider the scenario: AI can write an email. Can AI also send it? Can AI send it while you are sleeping? Can AI send it while you are sleeping, wait for a reply from your manager in another time zone, and respond before you wake up the next day? What’s next?
Appendix: Basic AI Glossary
Prompt: The instructions and context you provide to a model to guide its output. The quality and structure of the prompt directly impact the quality of the response.
Predictive AI: Systems that forecast outcomes based on historical data.
Generative AI: Systems that create new content such as text, code, images, or summaries based on patterns learned during training.
Hallucination: When a model generates information that is incorrect, fabricated, or not grounded in reliable data, even if it sounds confident.
Deterministic vs Probabilistic: A deterministic system produces the same output every time for the same input. A probabilistic system generates outputs based on likelihood, which may vary across runs. Most generative AI systems are probabilistic.
Human in the Loop: A design approach where a human reviews, approves, or can intervene in AI generated outputs before final action is taken.
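To make the deterministic vs probabilistic distinction concrete, here is a minimal sketch reusing the chicken rice example from earlier. The candidate tokens and probabilities are invented for illustration; real models work the same way at far larger scale.

```python
import random

# Candidate next tokens for a prompt, with invented probabilities.
candidates = [("chicken rice", 0.7), ("laksa", 0.2), ("satay", 0.1)]

def greedy(cands):
    # Deterministic: always return the single most likely token,
    # so the same input produces the same output every run.
    return max(cands, key=lambda c: c[1])[0]

def sample(cands):
    # Probabilistic: draw according to likelihood,
    # so the output may vary across runs.
    tokens, weights = zip(*cands)
    return random.choices(tokens, weights=weights)[0]

print(greedy(candidates))   # always "chicken rice"
print(sample(candidates))   # usually "chicken rice", sometimes "laksa" or "satay"
```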

