From my latest update in September, I realized that the rate of information that I try to correlate increased heavily, also due to the big input coming from AI agentic scan I run for information, and requires a huge effort to decide what to recommend and relate, and what not to use in my own reflection process. This started to be even more complex as many decisions on the market started to change and revert on a near-daily basis. I still am not through the ideal timing of release to give enough reflections and correlations versus too much information embedded in one single digest to reflect on. Just to share, as you could see, a different releasing approach in the future with a faster cadence.
In this November update, I will have a special focus on the ongoing influence of downtime that happened to service providers like Amazon AWS and Microsoft from my former newsletter to start some reflections from a design point of view (considering agentic guardrails), sovereignty consequence of missing service and business impact on how the acceleration on AI automation and agentic can’t decouple by the risks of missed business continuity.
So you will see the main topic of this newsletter addressed more in depth from different angles, in the Tech Sovereignty inside Market Evolution section, in the AI section around agentic, in the Cybersecurity regarding root causes analysis of some services downtime, and in Workforce Transformation about the vibe coding tendency and effects. Finally, I will give some recommendations in the final takeaways to executives on how each company should work around strategy, around business continuity, linked to the main topic of the newsletter.
This is a long-form newsletter, hand-written, speaking about inflection points influenced by technology. It’s based on articles I find relevant to share and is infused with my thoughts. It’s for critical thinkers who love to hear complex correlations, self-reflect on the consequences of trends, and exchange opinions on how to tackle the best possible approach for the future.
Key Executive Takeaways
Some updates and running points for this month:
- Offshoring vs automating: Evaluate customer care solutions like bots, service desk, some processes to be shifted to internal organizations augmented with AI and partially or fully automated. Alternatively, rediscuss offshoring contracting.
- Time to review how many of our own services are properly built to run across multiple clouds and AI vendors, or depending on a single vendor
- Work on governance for AI with a key focus on guardrails
- Develop on the aspects of which processes to start to automate, even partially
- Start to accelerate fast prototyping with vibe coding on top of testing
- AI Learning platforms introduction to accelerate the learning curve with personalized experience
Market Evolution
Chipsets & semiconductors
- After September’s newsletter updates from Nvidia and TSMC, we can see that in September, the situation changed again with the US Administration that pushed again some restrictions for TSMC to export chip manufacturing to China, even if not related to the most advanced chipset miniaturization. The overall licensing of chipsets will get more complex as we enter the new year, with individual licensing
- The recent challenge from China on rare earths is clearly touching the problem at its roots. As rare earths are required for chipsets, batteries, and military, and now they also require export control from China, this is going to create a strong control on the overall supply chain that is driving the evolution of datacenters, AI, and advanced military equipment, which are the key market developments for the US.
- In this situation of progressive risk of chipsets gaps, there is a progressive diversification strategy from many players. Touching more on the Tech sovereignty subsection and in the AI section.
Big Tech and Tech sovereignty
- The uncertainty around chipset production, as well as the way AI companies are depending on their own technology or depending on others, is generating a serious shift, as we saw over the last month, and partners’ diversification to reduce risk. Last, some creative rules like the US 1-to-1 chip-production rule. As some of these are coming on too fast a rate, I just try to filter out and focus only on the main ones. Many announcements about the US revoking TSMC’s export status to China, also to Samsung, and so on. As these ups and downs happen on a fast-track approach, I take them out of my report focus as they follow the same path of tariffs up and down, and give no added value on a real trend to come
- OpenAI accelerated on differentiation on the datacenters’ demands for AI, partnering with Hitachi for AI datacenters, with AMD for AI chipsets, including share acquisition, on top of potential others like Broadcom chipsets, just to mention a few of the new updates on top of the already running expansion with Oracle we discussed last month. The level of announcements that see Nvidia investing in OpenAI, Intel in the loop, and so on, shows a certain sense of circular investments, a sign of the AI bubble getting more on the edge.
- Microsoft started to differentiate its own Copilot solution from the engine for LLM used, introducing Anthropic as an alternative to OpenAI, even if still in an early stage and not with the same level of full data exchange restrictions as OpenAI. Meanwhile still building their own LLM as mentioned last month.
- SAP is pushing for Sovereign Cloud, customers’ full control over their own solutions, and considerable investments in European infrastructure. Quite interesting, as a few months ago, in the summer, as I reported here in my July newsletter, the CEO of SAP was considering the discussion around decoupling from the US a completely unrealistic expectation. I believe reaching real sovereignty in technology for the EU will require a long journey, bigger than having datacenters in place and touching various layers of the technology stack. I understand the need to reduce that dependency is there, and the reactions from some suppliers are more reactions to customers’ fear than comprehensive solutions. It will be relevant to show how much the overall stack will depend on US technologies that could do a kill switch, not the most modern LLM AI layers that are still quite highly US-driven.
- At the same time, EU chipset producer ASML, focusing on reducing dependency on US AI technologies, invested in French Mistral, showing a progressive investment to develop European competencies in areas that are indeed strategic, like AI LLM. This is definitely a key layer to develop for starting a real technology stack dependency decoupling. This is also driven by the EU Chips Act 2.0, aiming to reduce dependency on reducing dependency for chipset production outside the EU.
- The elephant in the room, with the recent events at Amazon AWS and Microsoft, is about how much the overall tech sovereignty is influenced by the fragility of digital infrastructure, and how countries could use their own capabilities to force their influence. Here comes my own reflections.
Today, most of the tech sovereignty is with the US, as many companies depend on the big tech to deliver e2e digital services, AI automation, and so on. If one country likes to be more independent, it has to build an entire stack, starting from the chipset, like the EU is starting to do, but going through the full stack of technology that is critical and that can be a long path, like the 10-year plan from China that brought them to be more technology independent, but anyway not fully. The events in the last few months showed us that a fragility in a key supplier, potentially caused by an automation that didn’t work well, caused an impact global impact, putting many companies in total dependency. Apart from reflecting and recommending how proper services should be architected with multi-cloud redundancy to make them more reliable, we have to ask ourselves how much we are creating a dependency that could be risky for our autonomy to operate. - So what? Just in the early July Newsletter here, I was mentioning “Technology embedded so tightly in the way every business operates, exchanging part of the workforce from physical humans to virtual agents, is also shifting the power of control of that workforce as it depends on engines that are typically cloud-based and typically based on technology from other countries”. Now, looking to the number of companies that got impacted by the AWS down and in many services influencing day to day of people even to purchase goods, we have to ask ourselves if, the wave of automation we are implementing in the companies, replacing resources with virtual agents in some processes, can impact our capability to operate in such rare cases leaving without an alternative plan. Here we are speaking about an issue that seems to have been caused by wrong automation that caused an impact on a bare basic service like DNS, which was somehow not properly redundant in the design, but strictly depended on a US-specific datacenter. Now, if we start to add to the equation of wrong configurations, the automation part that in case of issue is not always immediately deducting the problem and then we add on top the fact released resources due to automation causes lack of competent resources to intervene and can delay our capability to react, we are landing in a situation more complex in term of reliable future.
- My Thoughts: There are aspects related to improve reliability of design, with multi-cloud etc but the key point for me is another: If we are going to automate some of our core processes, we have to design clearly to be redundant on the services capabilities against reliability problems from single suppliers but we have also to consider the risk of influence of foreign threat countries that could decide to attack big techs to impact their capability to deliver services and impact many businesses that progressively reduced their internal capability to operate delegating to AI mostly delivered by big techs under few sovreignity. This can also apply to core services like energy production and grid management, and is a key aspect of how the global wars can influence how the companies should design their services and what to automate to maintain proper business continuity.
AI
I find it relevant to touch on the following points here:
- A relevant trend we see accelerating is vibe coding. Last month, even some big names like Google’s CEO pushed the message around vibe coding as a way to describe applications to be built. Now, as I mentioned a few months ago, I see that a good prototype approach is most. Building real applications is feasible as I see the trend developing with proper orchestration of the specs and design patterns. As in writing text, the vibe codes are basically different at each iteration, and when taken as input for a new output is quickly lose quality. I will do a dedicated post on this based on what I’m experimenting with, but I can mention has great potential but also clear intrinsic limitations as the dedicated platforms that are ramping up on that.
- The agentic trend, now most probably one of the most misused words in the AI world, is sensible and developing strongly. However, there are serious aspects of how to inject also agentic and change behavior. Something more in the Cybersecurity section. The acceleration of agentic is bringing opportunities in some businesses that are extending their area of service, exploring new opportunities for their existing engine, like Salesforce is doing in IT Service management.
- You will remember that in the September newsletter, it was even the main title, I spoke about the evolution that would progressively come from the automation that would eat offshoring. Interesting to see the recent update from Reuters on chatbot automating indeed service done historically by offshoring customer service companies.
- I expect that more complex services could require more time, but as we see the vibe coding developing, and acting quite well in situations of repetitive, component-based approaches, this can start to quickly erode a big part of the offshoring toward full automation.
- On the other side, the same countries like India resulted in being one of the top consumers of OpenAI service, indicating an even faster traction to build up automation in the offshore, but also for their own country’s needs. One example is the recent development of AI payment capabilities in modern AI platforms, as India did with OpenAI. I expect that the combination of vibe coding, the possibility to consume AI platforms in developing countries, is allowing to accelerate the generation of software products to serve various needs. The offshoring could and most probably should transform into higher value-added capabilities, showing a super-fast to market product development.
Cybersecurity
Linked to the automation effects of LLM, I see the following interesting points to reflect on:
- In a world getting more influenced by AI and LLM in the cloud that deliver some level of automation, it’s important to understand which level of risk can be linked to those use cases. As the theme of this newsletter, I mentioned the risks of downtime of a service that can impact those consuming, but there is a different type of risk, more difficult to detect, and it is when one service gets influenced to behave in a different way than designed, causing an impact on the service delivered. As agentic AI is based on the usage of LLM for agentic to decide how to query for capabilities and to decide accordingly, influencing those LLM can impact the way the agentic AI interacts. I mentioned this risk many months ago (you can look around the agentic orchestra newsletter). An interesting analysis from Anthropic from October has proven that putting a certain number of keywords in websites that then get scanned for training from LLM can influence the behavior of the LLM. What is shocking is the number of documents that have been required to influence the LLM’s behavior. As an LLM is trained on billions of data is expected that it is difficult to change behavior with small data set changes. Anthropic proved to change LLM behavior only with 250 documents dataset on the web.
- Correlated to this case is the acceleration of the market of AI-ready browsers, enabled with Copilot-like but also with new browsers directly AI-engineered coming from AI providers. There is a possibility to instruct those browsers, speaking to the internal AI and influencing it to deliver information via an exfiltration directly driven by the user without any of their own knowledge.
- Now we can easily imagine that if we can influence an AI in a browser to share sensitive data, we can also do that with an agentic that is executing some operations, especially if interoperate with others in a loosely coupled chain. My point here is not against the usage; it’s about properly guarding the overall usage of the services, especially around the injection risks.
Workforce Transformation
In the last few newsletters, we saw the recurring topic of layoffs linked to the introduction of AI. This section is to analyze a little bit more cross dimensions, also considering former analysis from market experts, like WEF, but also to get a picture a little bit more 360 degrees:
- The effects of automation in operations brought an acceleration of companies trying to do more with what they have already in terms of capabilities and workforce, and that is reflected in reduced hiring, even in the tech sector, even in the US, where the market is technically more demanding due to the many new deals. Part of it is due to automation, especially from those big tech companies that are also struggling partially when their automation engines generate some challenges, and reduced resources cause a longer time to resolve.
- There is an important trade-off between understanding which capabilities make sense to automate, which level of redundancy can be granted to keep pace in case of service unavailability or corruption, and clearly up-skilling of resources toward new ways to work
- The pace of change is so fast that some people get resistant to change, as there is no clarity on the direction taken by the enterprise where they work. On the other side, by doing so, they are holding on to the change that is indeed happening around them. Today, the number of users consuming an LLM for daily activities has grown dramatically over the last few years, and a fraction of those are using tools at a high level of expertise, bringing a level of productivity far higher, but also introducing risks if not properly guarded.
- The proper design of the overall process management of the enterprise, which parts need to be core and not automated, which ones can be automated, and where and how workforces get in the loop, is a key aspect of the ongoing change that cannot be delayed in the conversations.
- The AI benefits the opportunity to train the workforce in a personalized, ad hoc way. The benefit I see is that users don’t lose momentum because they learn as they do the job with the AI coaching on the specific areas, and get excited by progressing in what they do. The key is to make robust training instruction of the AI and set limits in creating excessive self-confidence to be an expert in an argument just learned, because the AI is creating a false feeling of having understood a certain topic, and lowering the level of attention in checking the work done. At the end, checking rather than doing is taking away part of the creative piece that is one of the most interesting to keep attention higher.
GG



