Welcome to 2026.
As this newsletter enters its second year, I want to focus more on what is relevant and changing, and avoid reporting on things that are progressing much as before and give me no reason to update my views.
So, for example, you will not find much on the recurring US-China back-and-forth over Nvidia chipset bans, as it offers me no special insight about the future worth sharing. My focus remains the same, but keep in mind that I highlight what I see changing among the many news items; I do not aim for a comprehensive view.
I also simplified how I flag my own views, marking them explicitly as “My Thoughts”.
Last year I opened by calling 2025 the Quantum Year. You can look back at my past references to see what that meant and how it is influencing technology transformation and security. What we experienced in 2025 was an acceleration in qubit counts and in Quantum Computing scalability through modularity, with expectations of maturing this industry in years rather than decades, even if progress will now have to be consistent to justify each announcement, and many announcements can be unrealistic.
This year, looking at the hype, I believe 2026 will start with a focus on humanoid robotics, but I already see that energy production and consumption will be a key correlated aspect.
It’s about humanoid robots entering environments designed for humans (like some parts of production lines): humanoid in form, and at the same time equipped with edge AI computing for autonomous operation, generating more automation in the physical world.
As I will elaborate in the AI section, the question is whether the hype will reach the next level of productivity for investments, building something that will then develop over the next few years; energy, however, will be a key influence on the overall discussion at the geopolitical level.
You can find the full Doomsday Bulletin here and the video announcement here.
This is a long-form newsletter, hand-written, speaking about inflection points influenced by technology. It’s based on articles I find relevant to share and is infused with my thoughts. It’s for critical thinkers who love to hear complex correlations, self-reflect on the consequences of trends, and exchange opinions on how to tackle the best possible approach for the future.
Quick Executive Takeaways
Key technology priorities to focus on for your business, based on the perspectives of the latest few newsletters:
- Offshoring rethink: Still a key topic to keep in consideration. Re-shoring is driven by geopolitical threats and by easier automation of processes with AI. Opportunities lie in customer service, service desks, and shared services
- Software Development: Every month shows progress in coding automation, not only in testing but also in regular coding, prototyping, and documentation. I see opportunities to improve the usage of AI, reducing some of the most repetitive manual activities not yet automated
- Agentic AI: Build and execute a 12-month roadmap for shifting non-core business processes to autonomous agents, with S.M.A.R.T. use cases and an acceptable ROI clearly in mind. Also consider the orchestration trend, which is still developing.
- Clouds: Review your strategy in light of geographic repatriation driven by foreign-country threats
- AI Roadmap: Prioritise governance around guardrails and cybersecurity
- Learning Platforms: Time to consider moving to AI-supported platforms with skills-based development
Market Evolution
Chipsets & semiconductors
Nvidia
There is an interesting development around the acquisition of Groq by Nvidia. Groq is focused on building new AI chipsets optimised for LLM reasoning, much faster and with far less energy (we are talking about a 10x+ improvement). That could be a huge differentiator in a future market heavily shaped by massive LLMs and their demand for large amounts of energy.
Last year, we saw that DeepSeek, using distillation techniques on other LLMs, could be trained with far less compute and energy than other LLMs; such innovation paths showed a different route to getting better.
My Thoughts: The development of low-energy modern AI chipsets, and the focus on improving reasoning (rather than on pre-training), is accelerating the possibility of running AI calculations and LLM evaluation at the edge for large amounts of data processing: the kind typically related to IoT sensors, big OT data to categorise, and, clearly, robots that can compute AI decisions directly at the edge with reduced latency. I expect LLMs on edge devices, with increased compute power, to become more tangible as the year progresses.
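A rough back-of-the-envelope sketch of why low-energy edge inference is becoming tangible: weight storage shrinks linearly with quantisation precision. The 7B-parameter model and bit widths below are illustrative assumptions, not any vendor's specification.

```python
# Approximate weight-storage footprint of an LLM at different
# quantisation levels. Illustrative figures only: this ignores
# activations, KV cache, and runtime overhead.

def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Gigabytes needed to store the model weights alone."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 7B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{model_memory_gb(7, bits):.1f} GB")
```

At 4-bit quantisation the same model needs a quarter of the 16-bit footprint, which is what starts to fit on edge-class devices.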
Big Techs
US Citizens pushing for more regulation around LLM safeguards
It was interesting to see this article on the growing attention from US citizens in different states to gaining more control over LLM behaviour, specifically over outputs that fall outside safeguards. This shows that not only the EU but also the US population is realising the risks of limited safeguards, especially as more people use LLMs for different purposes and sometimes receive wrong indications; these are, I repeat, probabilistic systems, not sentient beings. The wrong, excessive self-confidence shown by different LLMs when answering in various contexts is visibly influencing people.
A counter-approach is the specialisation of LLMs for specific purposes, that is, setting stronger guardrails on what they can answer and how.
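The specialisation idea can be sketched as a thin wrapper that only forwards questions inside an allowed domain and refuses everything else. The keyword classifier and the `call_llm` stub below are purely illustrative assumptions; a real deployment would use a trained topic classifier or a moderation API in front of the model.

```python
import re

# Illustrative scope for a billing-only assistant (assumption, not a real product).
ALLOWED_TOPICS = {"invoice", "billing", "refund", "payment"}

def in_scope(question: str) -> bool:
    """Naive keyword check: is the question about an allowed topic?"""
    words = set(re.findall(r"[a-z]+", question.lower()))
    return bool(words & ALLOWED_TOPICS)

def call_llm(question: str) -> str:
    """Stand-in for the real model call (hypothetical)."""
    return f"[answer about: {question}]"

def guarded_answer(question: str) -> str:
    """Refuse anything outside the specialised domain before the model sees it."""
    if not in_scope(question):
        return "I can only help with billing questions."
    return call_llm(question)

print(guarded_answer("Where is my refund?"))
print(guarded_answer("Give me medical advice"))
```

The point is architectural: the guardrail sits outside the model, so the scope is enforced deterministically rather than hoped for in the prompt.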
Digital Sovereignty
The Airbus case
The recent announcement from Airbus that it will build sovereignty in an EU cloud for its own highly strategic, security-sensitive data is interesting. The sensitivity of the information it aims to keep will clearly push a proper acceleration toward real independence. It is also true that a proper sovereignty analysis for an area like aerospace, where data boundaries are strictly regulated, makes it even more interesting to see how complete end-to-end processing and ownership of data and processes will be executed within EU boundaries with complete segregation from US technology. This is worth monitoring, as we all know that part of the stack, especially infrastructure hardware, runs mostly on US-owned technologies that could exert some influence.
We have to be clear that part of the problem is the conflict between the 2018 US CLOUD Act and the GDPR. The fact that US authorities can compel a US big tech company to hand over data regardless of where it is stored conflicts directly with the privacy requirements of EU regulations.
My Thoughts: As mentioned, we have to remember that most traffic runs on network devices from US brands, which is relevant at least at the level of the kill-switch conversation. A similar conversation in the past concerned chipsets in servers that could access relevant information at the source. Here it will be important to build a blueprint that is realistic and feasible, rather than creating false confidence.
The Australian social ban
Recently, we saw countries starting to assert their sovereignty over the regulation of social platforms as well: Australia recently did so for children, France is expected to do the same later this year, and countries like Malaysia and Indonesia blocked Grok over missing nudity safeguards in the AI.
These examples show a progressive reaction from countries to their populations’ concerns about AI gaining too much influence over people’s lives, especially over less mature or more fragile individuals, and about how they judge the answers they receive.
Own LLMs to avoid depending on other countries
Interesting news from Ukraine: the decision to strategically build its own LLM rather than depend on foreign countries’ technologies. As more automation is integrated with agentic AI, acting autonomously and using LLMs to decide how to answer, process, and interoperate, this is definitely a strategic decision to avoid the risk of losing business continuity overnight.
It follows some thoughts I shared here last June about the risk of blocking your operations in an agentic setup that depends on a foreign LLM that could be made unavailable.
Sensitive data in LLMs: risks and considerations
Recently, OpenAI released its engine for health consulting, integrating user data such as Apple Health and allowing personalised answers for consumers who share their own data. Sensitive data like health information, processed online and at risk of potential reuse, puts privacy at high risk, especially when users volunteer to share it. In early 2025, Elon Musk asked the community to upload their medical data to Grok, and recently asked again. These are relevant shifts in the usage of personal sensitive information, potentially to train platforms, and as the boundaries are not always firmly set, they are a clear source of challenge.
My Thoughts: The EU AI Act could gain more attention and weight from different actors and, even if some aspects require adjustment, it would drive the direction of regulation that cannot be skipped on AI and on sensitive data processing and potential sharing. This will be a cold shower for unregulated AI and social platforms, and it could also encourage enterprises to demand similar standards for how their data is treated, where it is processed, and under what conditions it is never consumed. Enterprises will have to set stronger boundaries on the availability of LLM services under out-of-standard conditions to maintain their own business continuity. Finally, end consumers will be the decision makers on who processes their own data and how.
ESG
Energy
When Datacenters cause the bills to rise
The increase in energy bills has started to generate a reaction from many residents in the US.
As mentioned earlier in Market Evolution regarding communities reacting to the social impact of LLMs, in the energy area we already saw several reactions ramping up a few months ago, reported in the previous newsletter, and they are now spreading further.
For example, around the Michigan hyperscale datacenter for the Stargate Project, but also in many other states. This pushed some big techs to show that their own energy developments were not contributing to the chain of causes behind energy price rises, as Amazon did. Residents saw it differently when Alphabet bought energy production in other areas and bills still rose.
The clear fact is that resident populations are taking the lead in having their own interests respected, and that has an impact on the regulation around building datacenters, also generating restrictions.
It is also clear that the effects we saw from lower-energy AI chipsets will influence the production of less energy-intensive datacenters, and this demand will accelerate, as Gartner also reported. The overall balance between the benefits of AI and robotics and sustainable energy production will be a relevant equation, as energy will increasingly be demanded near the areas of consumption, driving capillary production.
AI
Machine Learning, Agentic, Data
Putting boundaries on agentic AI
Amazon recently released capabilities to set boundaries on how agents operate autonomously, for example, to limit the possibility of ordering above a certain amount without approval.
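This kind of boundary can be sketched as a hard limit enforced outside the agent's own reasoning. The class and field names below are illustrative assumptions, not Amazon's API: the point is that the check runs deterministically before any order executes.

```python
from dataclasses import dataclass

@dataclass
class PurchaseGuardrail:
    """Hypothetical spending boundary for an autonomous purchasing agent."""
    max_per_order: float

    def execute(self, item: str, price: float) -> str:
        # The limit is enforced in code, not left to the agent's judgement.
        if price > self.max_per_order:
            return f"BLOCKED: {item} at {price:.2f} exceeds limit {self.max_per_order:.2f}"
        return f"ORDERED: {item} at {price:.2f}"

guard = PurchaseGuardrail(max_per_order=100.0)
print(guard.execute("printer paper", 24.99))
print(guard.execute("office chair", 349.00))
```

In practice the blocked path would escalate to a human approver rather than simply refuse, but the structural idea is the same.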
As I noted months ago when discussing Agentic Orchestration, interoperability between AI agents is still developing. Recently, we have seen the creation of the Agentic AI Foundation (AAIF), defining open-source interoperability that will drive progress on orchestration.
My Thoughts: For many months now, I have reflected on the importance of setting up production processes with proper safeguards and boundaries, especially in complex multi-agent chains. The reason is that a wrong decision can amplify into a noise explosion along a long chain, which can be highly fragile when it depends on multiple agents with loosely coupled contracts on limits.
As the maturity of agentic semantics grows, the possibility to progressively automate follows. In parallel, interoperability between agents has to pass through an open standard built by a common group of vendors and entities to reach the proper level of semantics.
AI Governance of LLM
As I mentioned in the Market Evolution section, LLM regulation becomes relevant as we use LLMs for more and more types of content. I covered it there, so I will not repeat the concept here, but it is quite relevant to the acceleration around safeguards.
Robotics
I took this month’s headline from Robotics because, with so much hype from different vendors (Tesla, Huawei, Hyundai, X1, and so on), this is a year in which real change has to happen or the hype will die down. Around 50 companies are investing in this sector, with the US first, China second, and the EU third.
China is driven by an ageing population and the need for a workforce to assist it, even if this will conflict with its highest unemployment ever.
The US would have the opportunity to use humanoid robotics in production environments designed for humans, covering currently unfilled job gaps, often in quite repetitive roles.
Acceleration of edge AI capabilities will be key to making this equipment smarter, faster to learn and replicate, and independent.
My Thoughts: I find it much easier to further automate production lines, which have well-defined guardrails and defined processes to replicate, than home contexts, where privacy concerns and the many possible unexpected hazards could generate accidents. Aside from that, the big wave of mismatch between available jobs and demand could make this an ideal time to accelerate robotics.
It is also true that, as my title suggests, the energy production needed to sustain AI, edge AI, and robotics will be a key differentiator, and some countries with stricter regulations will struggle to accelerate energy production further.
Quantum Computing
Developments
I have not covered Quantum for many months. The reason is that I do not see concrete production-ready progress. Several companies claim strong progress in the modular approach and in qubit counts (some claiming 10,000 qubits already), with most targeting 2030 for the 1M-qubit milestone. In some cases, real business cases for real problems, including some NP problems, could become relevant.
However, the most relevant aspect remains cryptography and encryption in general, so time spent on PQC (Post-Quantum Cryptography) would be well spent; it is the only recommendation I have at this time.
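A common first PQC step is a cryptographic inventory: listing where classical public-key algorithms are still in use, since those are the ones a large quantum computer would break, while symmetric algorithms mostly just need longer keys. The inventory below is an illustrative assumption, not a real estate of systems.

```python
# Sketch of a crypto inventory triage for PQC planning.
# Public-key algorithms vulnerable to Shor's algorithm:
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

def flag_vulnerable(inventory: dict) -> dict:
    """Return the subset of systems still using quantum-vulnerable algorithms."""
    return {system: alg for system, alg in inventory.items()
            if alg.upper() in QUANTUM_VULNERABLE}

# Hypothetical inventory for illustration:
inventory = {
    "vpn-gateway": "RSA",
    "code-signing": "ECDSA",
    "tls-frontend": "ML-KEM",       # already post-quantum (NIST FIPS 203)
    "backup-encryption": "AES-256", # symmetric: key size, not replacement
}
print(flag_vulnerable(inventory))
```

Prioritising the flagged systems, starting with long-lived secrets exposed to harvest-now-decrypt-later attacks, is where the PQC time is best spent.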
Workforce Transformation
Workforce reduction
Layoffs due to AI, or due to something else?
This month, I would like to consider an article I read in Fortune here on firings attributed to AI. The article essentially says that many corporations claim savings from “future” automation that, in reality, mask layoffs driven by oversizing, reduced revenues, or wrong strategies. In this sense, it looks more appealing to justify the reduction of resources as due to AI rather than to wrong strategies.
My Thoughts: Considering that many companies claimed to have automated processes in 2025 while at the same time claiming only to have piloted agentic AI, without going massively into production across enough processes to eliminate entire jobs, there is a good chance the layoffs are also driven by the need to refresh the workforce with an AI-enabled mentality rather than by a focus on long-term up-skilling. This is clearly not a general rule, and many companies are diligently doing the work of mapping which processes to automate, doing part of the automation already, and up-skilling resources.
Workforce Augmentation
The Upskilling question
A recent interesting report from the World Economic Forum shows the trend of jobs influenced by AI, interestingly referring to a report by Cornerstone on how skills demand is changing in the market.
What is relevant is that jobs are not perceived as disappearing due to AI but as changing in how they are executed, in line with the up-skilling conversation many of us have heard.
A real topic shared, and already elaborated in my former newsletter editions, is that entry-level jobs are easier to automate, and new employees could miss the fundamental training that comes from practising the job. This problem is recognised but not always addressed. For me, it is like learning sums and multiplications and then using a calculator, rather than landing on calculators directly. It is about missing the opportunity to build the “forma mentis” we develop by experiencing a certain type of activity, more than missing the experience itself.
Looking at the skills requested on the market and how they are changing, according to Cornerstone, it is no surprise that demand is increasing for AI, cybersecurity, and cloud engineering skills, and decreasing for basic IT support, quality testing, and junior developers. Indeed, we saw strong progress in automating all those testing activities, but also in knowledge bases, automation of recurring problem resolution, and software development for common procedures, typically also facilitated by vibe coding.
For example, in Manufacturing an increase is foreseen in Digital Twin engineers, IoT specialists, and robotics technicians, and a reduction in warehouse pickers, assembly line workers, and quality inspectors. Clearly, as more automation happens (check Market Evolution for more details), there will be less need for manual inspection and for the former core activities of operations.
McKinsey gave a nice explanation of up-skilling as an imperative, starting with leaders. Shifting toward relevant job archetypes, identifying skills gaps and development paths, and then proceeding toward those targets will be a priority; it is much more about learning to think and act differently than about getting trained on a tool.
My Thoughts: The change in skills is also a change in leadership. The leader of future Manufacturing operations, for example, leads organisations of Digital Twin engineers rather than quality inspectors, and the required competence is shifting toward digital. Up-skilling the existing workforce, and properly building a future entry-level workforce capable of gaining the experience needed to mature, will be key, and will require leaders already enabled in this sense to collaborate in thinking and acting differently. In this sense, you may well see the best future COO as a CIO with modern digital operations embedded in their skillset.
GG



