In this article I would like to share some reflections I recommend for companies that are ramping up their use of AI.
First of all, let me give some background on the perspective I come from, and then dive into the AI considerations.
I'm a strong believer in the BOI (Broker, Orchestrate, Integrate) operating model, which in principle shifts away from the historical Plan-Build-Run model in which companies built their own solutions entirely. BOI started around 2012 with Gartner, KPMG, McKinsey and others. I was applying its core principles a few years earlier, as an early pusher for SaaS platforms and suites like M365 (it was called BPOS in 2009), Google Workspace (it was Google Apps in 2009), and public and hybrid clouds (AWS and Azure). At that time ERP vendors were still completely on-premise or IaaS, while CRMs were already entering the SaaS model.
The BOI principle starts with analyzing which solutions on the market can be consumed as a service rather than self-maintained, deciding how to orchestrate those solutions, and integrating them continuously, as a cycle. Solutions on the market change regularly and need to be re-evaluated, and integration is key to making sure solutions properly interoperate.
BOI was a shift away from the historical Plan-Build-Run, which was focused on internal development and later extended with offshoring. In those earlier stages, companies built many things on their own and used offshoring for some of the less valuable activities. That was a time when the number of components and services built was, in any case, a fraction of what a company uses today.
Some benefits of BOI relevant to this conversation come from the fact that a company can hand over the management of part of its technology stack and benefit from fully managed services from major vendors that deliver them at scale to multiple clients. This brings better overall cost, scalability, flexibility and many other benefits.
One key example has been cybersecurity, which in many companies grew as a mix of internal competences and properly outsourced specialized managed services, achieving a level of service that would normally require a much more considerable investment if fully operated internally.
Managed services also brought the effect of sharing learning across different industries and clients, spreading progressively more robust solutions for everyone.
Fast forwarding to today, companies have progressively shifted fully or partially to BOI, built shared services (outsourced or internal, depending on the business process), and used RPA to automate parts of their processes and gain some efficiencies.
The low-code approach allowed many companies to start building simple, low-complexity apps directly in the business, from users with limited coding knowledge, using components that limit the risk of derailing from core guardrails. This has normally been properly governed by IT, acting as coach and guardrail to enable sound development.
The need for low-code solutions came from the desire to increase the flexibility of standard core solutions, with adjustments to cover the varied situations found in different realities.
Companies today are more advanced in structuring their own data model and their overall data governance and security. Enterprise Architecture is progressively increasing its relevance for a wider audience, in combination with the development of Digital Twins, including IT/OT integration that was once completely siloed.
AI, especially in its most recent agentic declination, brings software agents that autonomously execute actions and take decisions in a given context, with more flexibility than classical RPA (which remains effective for repetitive activities). The way agentic AI works is by using LLMs to understand the context, explore the services to consume, and act autonomously.
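To make that loop concrete, here is a minimal sketch of an agentic cycle: a planner (standing in for an LLM call) inspects the context, picks a tool, and the result feeds the next iteration. The `plan` function, tool names and invoice logic are all illustrative assumptions, not a real framework.

```python
def plan(goal, context):
    """Stand-in for an LLM call: picks the next tool based on the context so far."""
    if "invoice_total" not in context:
        return ("fetch_invoice", {"invoice_id": goal})
    if context["invoice_total"] > 1000:
        return ("escalate_to_human", {})
    return ("approve", {})

# Stubbed service calls the agent can consume; in reality these would be APIs.
TOOLS = {
    "fetch_invoice": lambda invoice_id: {"invoice_total": 1500},
    "escalate_to_human": lambda: {"status": "escalated"},
    "approve": lambda: {"status": "approved"},
}

def run_agent(goal, max_steps=5):
    context = {}
    for _ in range(max_steps):  # a hard step limit is itself a basic guardrail
        tool, args = plan(goal, context)
        context.update(TOOLS[tool](**args))
        if "status" in context:  # terminal state reached
            return context
    return context

print(run_agent("INV-42"))  # the stubbed total (1500) triggers escalation
```

Note that even in this toy version the guardrails (the step limit, the escalation threshold) are explicit design choices, not something the "LLM" decides on its own.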
I believe, as I wrote a few months ago in my newsletter, that offshoring, until now a mix of cost-effective resources and RPA, could in the future be insourced back through the introduction of agentic AI executing the job of those offshored teams. This would bring back ownership of processes that, when offshored, are sometimes a partial black box and also create a form of vendor lock-in.
The other aspect we have seen with AI is the acceleration of trends like vibe coding: in short, asking an AI to build an app or some code for us. AI works well at testing code, summarizing concepts, and telling us what a certain type of process is doing, but it has no imagination and has to be strongly guard-railed.
LLMs are influenced by different aspects in the way they operate, including the context, the limits of understanding instructions, the boundaries set around guardrails, and several other factors that become explosively relevant as we iterate through vibe coding. The simplicity behind the vibe-coding trend makes it easy for non-technical people to build fully working software from just a description, also creating an illusion that implementing solutions is easy.
I have been engaged early in the field of agentic AI, its orchestration, and vibe coding, including some of the layers used to coordinate agents and their restrictions. The simplicity of generating apps can be a threat for giant software companies, which will have to justify the premium for their services as it becomes easier to build and maintain alternatives. On top of that, the ease of building solutions can cause a proliferation of solutions on the market with limited quality, based on code reuse coming from vibe-coding techniques. This trend can reduce the overall quality of the output, as the way code gets combined follows probabilistic approaches rather than typical design patterns.
More than one year ago, in one of my newsletters, I speculated that AI would make the integration of applications easier. We have since seen interoperation layers like MCP, and I believe integration between applications is becoming easier to implement and maintain thanks to the use of AI in the process. The former approach of adopting monolithic software suites to avoid integration challenges can be simplified by the intervention of AI.
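As an illustration of why layers like MCP lower the integration barrier: MCP is built on JSON-RPC 2.0, so a tool invocation is just a small, uniform message rather than a bespoke per-vendor integration. The sketch below builds such a message; the tool name `get_customer` and its arguments are invented for the example, not part of any real server.

```python
import json

# An MCP-style tool invocation over JSON-RPC 2.0. The "tools/call" method is
# from the MCP specification; the tool name and arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_customer",                  # hypothetical tool exposed by a server
        "arguments": {"customer_id": "C-1001"},  # invented example payload
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

Because every tool call has this same shape, an LLM (or a developer) only needs to learn one envelope format, which is a large part of why AI-assisted integration gets easier.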
It's clear that AI has made it more efficient to code, review and remove bugs, has improved the text quality of documents, and has introduced automation of activities requiring the understanding of instructions in variable contexts. However, an AI can review code, and can even build proper code from our instructions, but it cannot give the code more meaning than our own. I think this is relevant because, as we progress in agentic AI and give agents more autonomy to build chains of decisions, we need to keep proper guardrails in place to see when the agents go too far and diverge from our main idea.
There was an interesting post from Nadella a month ago about using different LLMs to agree on a decision, basing the agentic behavior on their alignment to reduce the risk of wrong conduct. I think it is an interesting approach, but so far there is no guarantee of building such an agreement in a mathematically proven way. The LLMs could decide together to agree on a wrong thing, if they found a way to sneak out of a guardrail that was not fully hardened, or if a training condition took priority over a certain context instruction.
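A minimal sketch of what such a consensus layer could look like, with stubbed functions standing in for calls to different LLM providers (the models, question and answers are all invented for illustration):

```python
from collections import Counter

# Stand-ins for three different LLMs; in practice these would be API calls.
def model_a(question): return "approve"
def model_b(question): return "approve"
def model_c(question): return "reject"

def consensus(question, models, threshold=2):
    """Act only when at least `threshold` models agree; otherwise defer to a human."""
    votes = Counter(m(question) for m in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= threshold else "defer_to_human"

print(consensus("Refund order #991?", [model_a, model_b, model_c]))  # prints "approve"
```

The sketch also shows the limit discussed above: if a majority of models share the same blind spot, the vote "approves" a wrong answer just as confidently, so agreement reduces variance but is not a proof of correctness.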
Humans tend to make mistakes and learn from them. LLMs can do the same, especially with reasoning, but the learning is limited in time and meaning; it is still far from adapting its reasoning over the long run, and it can be fooled. For example, as Anthropic proved a few months ago (there is a detail in one of my newsletters), they were able to change the behavior of an LLM trained on millions of documents using just 200 documents with specific keywords embedded.
So, what do all these arguments mean as a reflection for enterprises?
The simplification brought by AI, sometimes partial, in some contexts (code, core decisions, automation) has to be properly judged by each enterprise. AI is here to stay, and several use cases already work well (code testing, documentation, summarizing, formulas, interface integration, analytics extraction, etc.).
The risk is that companies overestimate their own capability to build everything they need internally, minimizing the need for software vendor solutions and aiming for a return to Plan-Build-Run based on their own competences augmented by AI.
The reality is that today the number of components and capabilities required is several times bigger than 20 years ago, and a BOI operating model is even more strategic in such a fast-moving context. The level of specialization is far higher than even a few years ago, and the types of skills required are changing quickly as we speak, often requiring up-skilling but also a continuous evaluation of which partners can best complement internal competences in a moving-target scenario.
The simplified way of building components can hide cybersecurity threats, privacy aspects, compliance rules, intellectual property issues and data governance issues, which can even be exploited by threat actors to build agentic AI threats.
Beyond code examples, the increase of automation and abstraction layers with AI, if not guaranteed by certified stacks and orchestration layers, can introduce problems hidden in different ways, such as a badly behaving process feeding business data partially or wrongly, detectable only at a later stage once an engine is highly automated.
As an AI can adjust its behavior in milliseconds while humans validate at a much lower speed, the design of using AI to measure AI also has to be properly engineered, to avoid landing in a case where one AI validates another AI's wrong behavior as good, as the result of an autonomous agreement between the AI actors.
Here, proper governance has to be applied not only at implementation and deployment time but as a continuous running process, monitoring how agents could change attitude at runtime and go outside boundaries that were not properly secured, or be influenced by a change in the AI engine itself.
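One simple way to make such runtime governance concrete is to check every proposed agent action against an explicit, human-owned policy before it executes, and to log violations for review. The allowlist, action names and spend limit below are illustrative assumptions, not a real governance product.

```python
# Explicit policy owned by the company, not by the agent or its LLM.
ALLOWED_ACTIONS = {"read_report", "draft_email"}
SPEND_LIMIT = 500  # hypothetical currency threshold

violations = []  # audit trail reviewed continuously, not just at deploy time

def check_action(action, amount=0):
    """Return True if the proposed action complies with policy; record it otherwise."""
    if action not in ALLOWED_ACTIONS:
        violations.append(f"{action}: not in allowlist")
        return False
    if amount > SPEND_LIMIT:
        violations.append(f"{action}: exceeds spend limit ({amount} > {SPEND_LIMIT})")
        return False
    return True

print(check_action("draft_email"))                    # prints True
print(check_action("wire_transfer", amount=10000))    # prints False
print(violations)
```

The key point is that the policy and the audit trail live outside the agent: if the AI engine changes behavior overnight, the same checks still apply the next morning.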
I like to consider agentic AI like employees, who have to be regularly measured for compliance with the rules of the company they work in.
As usual, I would love to hear your perspective in the comments or in DM.
GG



