No time to read? Below is a short AI-generated digest. Credit: Google NotebookLM.
Welcome back from holiday time!
It’s been one year since I started this newsletter, exactly in September 2024, with the CrowdStrike bug that impacted several enterprises. In one year, many things have evolved in the market and in my newsletter, which is taking a progressively more insight-driven approach, correlating market trends in technology along specific streams that I consider relevant to keep consistently updated. My formula is transitioning toward a less technical overview, accessible to a wider audience, with a special view for top decision makers who can gain a different perspective on how Technology, AI (which, by the way, is a technology), Data, and People change their interaction as we go ahead.
I stick to the plan of reading and reflecting on what everyone can access about Technology, using AI sometimes only to help me collect news, then applying my own critical thinking and writing down my thoughts in my own words. It’s about authentic, hands-on content. Also for this reason, my newsletter is on a monthly to quarterly cadence, as I opted for quality over quantity, even if social algorithms push for the latter. As I’m not an influencer, I do this for fun. I hope you can enjoy it too, as I put my own critical thinking into imagining how the future could unfold based on the facts we can see and their interconnection.
That said, what I think is especially relevant to touch on this month is the progressive paradigm shift that AI is starting to drive in some realities, and maybe, in some cases, a flashback over the former months. One effect we have seen for months is Vibe Coding, which I will cover more deeply in the AI section, and another quite relevant one is the tendency to start moving offshored IT activities back, as they can be automated and run more efficiently in-house. AI is going to eat some level of offshoring, which is also the reason for this month’s picture, and I think it is just the start, which is why I called it breakfast! I will reflect on it in the section on Workforce Transformation.
From this month, you will find a new section with some key takeaways, based on the overall argumentation, that are typically sources of risk reduction, revenue improvement, and/or cost optimization.
This is a long-form newsletter, hand-written, about inflection points influenced by technology. It’s based on articles I find relevant to share and is infused with my thoughts. It’s for critical thinkers who love to explore complex correlations, self-reflect on the consequences of trends, and exchange opinions on the best possible approach for the future.
Key Executive Takeaways
In this new section, I target more senior executives interested in key takeaways for enterprises to look at from a Technology point of view, drawn from the detailed elements in the sections that follow. This month I recommend focusing on:
- Re-evaluate offshored activities. It could make sense to bring them back and fully automate less core processes, reshaping shared services from full BPO to internal.
- Assess data maturity to build AI automation on top of it, not before.
- Set early governance on the AI the enterprise develops, but also on the AI it consumes.
- Reshape internal software development with the right mix of automation, especially in code testing, bug fixing, and prototyping
- Secure a program for training the workforce on AI, making people augmented and retained, to gain a competitive advantage in a market that will get more polarized between AI-augmented and traditional workers.
Market Evolution
Chipsets & semiconductors
- Let’s start with Nvidia and TSMC. The last few months were special in the AI chipset war. You will remember that a few months ago we discussed the challenge for TSMC and Nvidia to access the Chinese market, with the Nvidia CEO pushing for freedom to access such a big market. Interestingly, in August, China accused Nvidia of introducing kill switches in its AI chipsets. Then, after China reiterated its complaint about being denied access to AI chipsets from US producers, rumors emerged of a request from the US administration to create backdoors in AI chipsets for the Chinese market, rejected by the Nvidia CEO. The US Administration then authorized Nvidia to sell its older AI chipsets to China, but required, quite a novelty in capitalist markets, that Nvidia and AMD share 15% of the revenue on chipsets exported to China with the US government. On one side, we could argue that the latest technology will remain in the US, as only the export of older chipsets has been authorized, even if we saw in the past that China, with DeepSeek, proved able to advance using older technology. However, all this back and forth is creating concerns about Nvidia’s stock and about how much the independence of an enterprise can be compromised when it embeds special kill-switch technology on top of its own products. For me, this reflection goes in the direction of what I considered possible a few months ago here regarding the risks of kill switches, and here about the risk of highly agentic, automated businesses being strongly turned down by an external entity manipulating the AI.
- When, many months ago, I was imagining that Intel would maybe be acquired by TSMC or split between Qualcomm, Nvidia, and others, I didn’t think that the US Government could enter the deal, buy a stake in it, and gain some level of control. Definitely, this is going to increase the hype around Intel, even if messages show a certain misalignment between the new US administration and the new CEO of Intel, which could impact the continuity of his tenure. It is difficult to see a special evolution in direction from Intel in the next 6 months; however, below are my thoughts on what could happen.
- My Thoughts: The US Administration, which is now pushing Nvidia and AMD for 15% of revenue from Chinese sales of AI chipsets, could review its strategy and discount the 15% if Nvidia and others were to buy into or somehow contribute to Intel. Why do I believe this?
- The former Intel CEO spoke about having NVIDIA and Apple fund Intel.
- Intel was partially acquired by the US Government.
- The US is pushing Nvidia for 15% of revenue on AI chipsets exported to China.
- Apple, on one side, is reported to work with TSMC on production in the US; on the other side, it is considering 14A chipsets from Intel, together with Nvidia, in a dual-foundry strategy.
Big Tech and Tech sovereignty
What I have found relevant this month here:
- The digital act and the effects on digital taxes are clearly driving the US Administration in its exchange with the EU through tariffs. Recently, a serious statement from the EU set the tone on the limits of what the EU would allow the US administration to adjust, and what not. It is definitely interesting to see such a strong focus from the EU on its digital sovereignty, whatever it takes. On the other side, it is interesting to see some big tech companies asking for special treatment to avoid incurring those strict regulations.
- My Thoughts: The EU will stick to its digital rights regulations, highly demanded by its citizens, and will force technology adjustments for the EU market. Clearly, some of the US big tech companies could take the approach of not pursuing further business with the EU, but the reality is that the EU is a big consumer of those services and a big part of the difference in commercial trade when we consider not only physical but also virtual goods. It will be interesting to see how big tech adjusts to the AI Act regulations.
AI
On the AI LLM evolution:
- In June, I commented on reports about enterprises having Microsoft Copilot while their employees use ChatGPT subscriptions in parallel, feeling it fits their needs better, with impacts on other aspects like compliance.
- Copilot’s model is designed for usage with a strict context window (after some conversation, the chat is over and you need to create a new one), versus competitors like ChatGPT (which keep the context window open but progressively forget the oldest contents). This difference is relevant from a usability point of view, and many users are actively jumping to ChatGPT even if they have an enterprise Copilot platform. The same happens with other open-context-window solutions like Gemini. There are also users not really using AI actively, so having an enterprise license without using the tool causes a consistent inefficiency. This could explain why Microsoft’s CEO is spending time presenting use cases for Copilot.
- At the same time, Google pushed AI Overview in parallel with the evolution of Gemini. AI Overview is making clicking on links less relevant, even if the links referenced in the AI Overview are sometimes not exactly accurate. Still, the audience started to appreciate that consolidated view and stopped clicking on links in the result pages, even if Google management reported no real change in clicks over the past months. There is, anyway, a paradigm shift ongoing in the way people search and look for answers, without needing to evaluate different links and proposals. It’s the time of laziness, in a certain sense, but what is clear in the end is that GEO (Generative Engine Optimization) is coming to attention on top of SEO (Search Engine Optimization), with protocols linked to agentic discovery (i.e., MCP) getting more relevant. That is something to keep in mind, as it will shift the visibility of websites and contents in both B2B and B2C.
- My Thoughts: Definitely, I believe businesses highly dependent on agent discovery to serve their services, including B2B, could suffer if they do not accelerate their transition to being consumable by LLMs.
- So What? I speculated last year here that AI would facilitate interfaces between systems, reducing the complexity of integration. MCP is doing exactly this, even if I believe we are still at an early stage and we will see a progressive transformation toward MCP plus Agent2Agent (the Google alternative). Definitely, progress is happening.
- Again in June, I was assuming the importance for Microsoft of becoming more independent from OpenAI and having a sort of LLM of its own, a little bit like Google with Gemini. It is interesting to see from recent news that this is finally getting more concrete. I think it will also accelerate the move toward offline micro-LLMs to use on edge devices.
- My Thoughts: It’s definitely relevant that usability will drive people to use one tool or the other, and the fact that Microsoft is one of the few big players that does not own a full LLM but uses OpenAI’s models is relevant to how they will differentiate themselves in front of the same audience, and to whether they will start to add alternative backend engines to their Copilot.
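To make the usability difference above concrete, here is a minimal, hypothetical sketch of the two context-window strategies: a strict cutoff that forces a new chat versus an open window that silently forgets the oldest turns. The function names and the word-count "token" budget are my own illustration, not any vendor's actual implementation.

```python
# Hypothetical sketch of two context-window strategies (not real vendor code).
# "Tokens" are approximated by whitespace-separated words for simplicity.

def hard_cutoff(history, new_msg, max_tokens):
    """Strict window: refuse once the budget is exceeded; the user must start a new chat."""
    total = sum(len(m.split()) for m in history) + len(new_msg.split())
    if total > max_tokens:
        raise RuntimeError("Context limit reached: please start a new conversation.")
    return history + [new_msg]

def sliding_window(history, new_msg, max_tokens):
    """Open window: keep the chat alive but progressively drop the oldest turns."""
    history = history + [new_msg]
    while sum(len(m.split()) for m in history) > max_tokens and len(history) > 1:
        history.pop(0)  # the oldest context is silently forgotten
    return history
```

The user-visible effect is exactly the frustration described above: the first strategy interrupts the conversation, the second keeps it flowing at the cost of losing early context.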
On the effective benefits from agentic AI in the enterprises:
- I think it’s about how well the process is designed and whether the company takes the right time and effort to make an accurate evaluation before going productive in a critical process. Just one year ago we had examples like McDonald’s, which suffered from wrongly automating order processing via AI, and most recently Taco Bell landed in a similar AI order-intake error.
- Working on proper use cases, first identifying which few of them are relevant, is key, as Walmart did (I cover this more in the Workforce Transformation section).
Workforce Transformation
In the last few newsletters, we saw the recurring topic of layoffs linked to the introduction of AI. This section analyzes the topic across a few more dimensions, also considering former analyses from market experts like the WEF, to get a slightly more 360-degree picture:
- We saw, and I have mentioned since January, the evolution of agentic AI, generating some level of autonomous agents able to decide. At that time, I also provided a few possible use cases. Orchestration is something more; feel free to look at former newsletters for deeper advice on that.
- Humans create complex chains of dependency, with tasks that are often correlated and chained, and can track really long reasoning without losing their minds. This is quite flexible: as you add elements to a section or add a correlation, the human brain plastically extends those chains, also in terms of time expansion, putting events in the proper order or extending existing orders. In today’s LLMs those chains exist but are quite limited compared to human reasoning. Anyone who tries a long chat, or set of chats, struggles at a certain point with the context window, which limits the memory of the LLM. Clearly, the memory of a certain context is complex: it requires correlating many findings and adding time as a dimension to them, and some findings get replaced by others as time evolves. In humans, that can be years of interactions, creating a weighted balance that is also harder to influence or threaten in complex reasoning.
- The use cases I mentioned in the past for agentic AI have to be basic, starting from non-production ones, because agentic systems, based on LLMs, which are intrinsically stochastic, can show unexpected behavior, especially if the boundary conditions are not set properly or if the training data do not cover the needed context expertise.
- A considerable aspect is the development of RAG (Retrieval-Augmented Generation). As this combines the capabilities of an LLM with the internal knowledge base of a certain context, it allows setting the right boundaries for areas of competence and using the LLM’s natural language understanding to better focus on targeted answers.
- At this time in AI, we are entering what Gartner defines as the Trough of Disillusionment. It’s when many companies realize that the savings or improvements expected from some of the AI solutions are not coming at the pace they expected; the hype for the technology slows down, and the reality of building concrete things sets in. The recent MIT analysis shows a 95% failure rate of AI initiatives versus expectations. Some mention this as a crisis, a new winter for AI; from my humble point of view, it is about how many real expectations can come to life on a short timeline. The 5% of companies having success all have clean and structured data already and build on top of that stable baseline on a few specific use cases. It’s also clear that companies of different sizes, complexities, and regulations can work through their prerequisites at different speeds.
- Speaking about use cases, let’s talk for a moment about software development automation. I mentioned it in January; it is now getting wider, with many solutions highly capable of optimizing code and finding security issues and bugs, but there is a risk of a tendency to go from what we call copilot to autopilot, which has to be properly balanced. Why? Well, AI engines are not humans, so they follow paths that look probabilistically more likely to them. In some cases, take coding for example, they can build code that works well and is based on common structures, but they can also introduce code optimizations, depending on the type of training received, that are less understandable by humans. It really depends on the training baseline. Definitely, there is much repetitive code around, so finding ideal paths to replace and optimize code is a good practice for AI engines and will develop further. What is relevant is to keep an overarching eye to review what an AI is coding or reshaping, correct it or advise on some changes, and give final approval of the results. This can be done by a human or, potentially, once the right maturity is in place, by another AI with a different context to validate.
- Again, the way AI engines have been trained also influences their behavior. Interesting is the case of Google’s coding assistant, which started to act in a pessimistic way, even deleting generated code out of frustration. In reality, the engine was influenced to act this way by the many pessimistic comments from developers in the source code used to train it, which triggered automation actions like deleting code. Indeed, it is visible that some level of automation based on instructions can be misled and misinterpreted.
- So humans remain key at this stage in the decision process, and especially in the accountability for the result. Most of these AI engines are great at summarizing and also at translating, but again it is relevant to verify that the translation is accurate and does not put a wrong word into the meaning, especially with local language inflections. Even more important in summarizing: it is relevant to check whether the summary misses some important element, maybe only 1-2 lines long but key to remember, based on one’s own feeling of what is relevant in a certain topic, which is sometimes what we call the message between the lines.
- Some use cases I see as beneficial, and have already suggested, are for example helping to clean data, for instance related to suppliers, reducing dirty data by predicting and correcting the filling. Again, this accelerates some activities in areas that remain well human-controlled to manage exceptions when they occur; eventually, we may land in full automation. At this time, in this sense, it is proper workforce augmentation.
- There is also a tendency this year toward what is called Vibe Coding, which indeed allows us to build apps coded by AI where humans only give instructions in natural language. This, to me, seems great for fast prototyping but not for enterprise solutions yet. The AI engines tend to always use common paths based on trained best practices, and the way the code gets built does not always follow a natural logic for humans, with limited creativity, which could even be beneficial; so it works well for building the base structure and suggesting some code optimizations. The first iteration is clean, but as the code gets more complex and the interrelations increase, the more we ask for adjustments, the more some older things remain behind, and the code gets a little less harmonic. In line with indications from the Google CEO and the Microsoft CEO, new generations should not skip programming courses, as both believe proper programming-language expertise is still needed. It’s definitely changing the way people will code and be assisted, with some of the most repetitive coding aspects getting automated, which is also reflected in the 25% reduction of the workforce in software development, covering the part of the code that is easier to automate. Definitely, students should be careful not to drop out of their studies, as recently happened in some universities, out of fear of having only a few years before the market is over due to AGI. That could hinder their attractiveness on the market, which rests not only on learned AI competences but on a solid baseline in STEM that remains key to further augment technology development over the many years to come.
- It also has to be understood that if really good code could be fully automatically generated, the value in software could dilute very fast, and software companies producing both AI-automated coding tools and enterprise software could erode their own business. What I see clearly is that prototyping gets faster, as well as testing and bug identification, and potentially (with human verification) bug fixing, as well as the code most commonly generated in many products, such as common libraries and so on.
- There is an effect I have mentioned for several months regarding the risk of bringing junior resources into a company without giving them any possibility to train on the job on the activities that are getting automated. The risk of onboarding a generation without giving them a chance to practice can become a problem later, once unexpected problems come and the resources haven’t built their own capabilities around possible solutions. I believe a key step in skilling and preparation will also be to protect the new generations from the frustration of operating systems without much understanding of how they function, if they are later supposed to repair them once they break or need to be expanded.
- One effect I see happening concerns companies with some level of process harmonization in place, which have worked over the past years to implement some level of robotic process automation, or RPA. For some more complex processes, not so easy to fully automate, the past saw a strong offshoring of non-core activities. Now, those non-core activities are quite repetitive and can be progressively automated in an easier way, without even landing in low code, by identifying the key actions executed by a resource, understanding the pattern, and replicating it, as Claude’s AI operation automation is doing. I would see a progressive shift from offshoring to automated, insourced processes as the next step, and potentially an opportunity to also insource commodities that no longer cost much to operate. This is clearly creating pressure on offshoring and bringing opportunities for process-outsourcing clean-up, but it is also bringing back into enterprises a reduced number of new resources to coordinate the automated processes that were fully outsourced before. Have a look at the Cybersecurity section, where I reflect on some possible side effects of that.
- As a result of the former points, we see some offshoring being unwound in an unprecedented way, with layoffs of resources most probably already anticipating this trend.
- Other realities realized that some processes cannot be fully automated without losing a certain quality, due to the way the process is articulated. For example, Google decided to keep one step in hiring human, as they realized that interviewees were cheating in the AI interview by using AI on their side to answer, and the quality of hiring was poor.
- Some big tech companies are strongly pushing their organizations to learn AI tools and augment their skills, but this clearly mismatches some of the strong signals predicting layoffs due to AI, and also the announcements of some CEOs, like those of Ford or Amazon, mentioning a 50% reduction over the next 5 years due to AI. It’s also true that some big tech executives realized that the pace of augmenting the workforce can be a challenge in terms of the human speed to adjust to change. This is a valid point, because if a technology comes in too fast to be accepted, digested, and adopted, there is a clear risk of rejection, especially when people have no clarity about their own future context. What is clear is that the combination of agentic AI (for automation) and an AI-augmented workforce, or what Morgan Stanley Research calls Embodied AI, will make the overall value-creation engine much faster and bigger, but this type of message needs to be properly designed by the top of enterprises and governments toward the people, to generate the right confidence rather than resistance.
- Looking at the market evolution, it’s difficult to figure out how much of the job market slowdown in the US is driven by layoffs due to AI, due to tariffs, or both. Enterprises willing to optimize their processes have to work heavily on standardization first, before being able to reduce and automate. Here comes the example of Walmart, which, after trying many agentic experiments, is focusing on a few key AI-relevant use cases, also influenced by tariffs and driven by supply chain optimization and digital twins for better stock forecasting accuracy.
- Finally, a relevant report from McKinsey shows in a transparent way which use cases are most commonly built with agentic AI, and demand forecasting is definitely one of them.
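Going back to the RAG point above: the pattern can be sketched in a few lines. This is a deliberately simplified illustration under loud assumptions — a toy word-overlap retriever stands in for a real embedding-based vector search, and the prompt-building step stands in for the actual LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant internal documents,
# then ground the LLM answer on them. Word overlap is a toy stand-in
# for real embedding/vector search.

def retrieve(query, knowledge_base, top_k=2):
    """Score each document by word overlap with the query and keep the best ones."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, knowledge_base):
    """Combine the retrieved context with the question; the answer's boundaries stay inside the knowledge base."""
    context = "\n".join(retrieve(query, knowledge_base))
    return (
        "Answer ONLY using the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

This is exactly the boundary-setting benefit mentioned earlier: the LLM is asked to answer only from the enterprise's own curated knowledge, which reduces the stochastic drift of a free-ranging model.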
Cybersecurity
A small reference to Cybersecurity this month is again linked to the main topic of the month: the automation of some non-core processes and their insourcing back from offshore. I covered that in detail in the Workforce Transformation section. Here I would only like to mention that some of those automations, once insourced back, need to be accurately monitored, not only for possible unexpected behavior coming from the boundary conditions and the probabilistic nature of AI, but also for the possibility that threat actors could manipulate the automation and use it in their own interest as a new way of hacking enterprises, as recently happened.
GG



