What I See for AI in 2026
I expect the Trump Administration to keep up the fiscal spending spree that has been in overdrive since April 2020. At some point, long-term interest rates will be off to the races, as has already happened with the prices of silver and gold. When that happens, the economy will slam into a wall. Until then, real-world prices will remain elevated, including prices for chips, software and related services, and stock prices.
Obstacle to AI growth: On the AI side, the biggest gating factor is data center construction, not chip scarcity. Bringing a new data center online requires access to power, water and a substantial amount of land. Data centers in space (as some startups are pursuing) aren’t the answer. There seems to be lots of room for innovation in power and chip technology. My guess is that the Federal Government will start selling Federal land to Google, xAI, Microsoft, Oracle, Amazon and others, with none of that money going to pay down national debt principal (beyond covering interest), and plenty of it going into politicians’ pockets. This will be the next great Government scam.
Gen AI use cases: there are lots of valid use cases across functional domains, but there are still enormous misconceptions about language models. LLMs are not all-knowing, sentient beings. They are hardware- and software-based tools that are only as good as the data they are trained on. GPT, Claude and Gemini are trained primarily on the public Internet, which contains much nonsense in addition to fact. Therefore, LLMs will always hallucinate, as they can’t distinguish fact from fiction.
If you are an enterprise and you want your HR department to automate the screening of resumes, you can buy an off-the-shelf piece of software powered by machine learning that is pre-trained to look for key words and experiences as it reviews candidate applications. An alternative would be to train an open source model so that its weights and data labels are specific to your enterprise. What you don’t want to do is deploy an untrained LLM to write performance reviews. It will have no frame of reference, and it will lack the context required to perform the review while applying your value system to the output. The models are only as good as their training: garbage in, garbage out.
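To make the keyword-screening idea concrete, here is a minimal sketch of the kind of logic an off-the-shelf screening tool applies. Everything here is hypothetical: the keywords, weights, threshold and resumes are invented for illustration, and a real product would use trained models rather than a hand-written keyword list.

```python
# Hypothetical keyword-based resume screening sketch.
# Keywords, weights, and resume text below are invented for illustration.

def score_resume(text: str, keyword_weights: dict[str, float]) -> float:
    """Score a resume by summing the weights of keywords it mentions."""
    text_lower = text.lower()
    return sum(w for kw, w in keyword_weights.items() if kw in text_lower)

def screen(resumes: dict[str, str],
           keyword_weights: dict[str, float],
           threshold: float) -> list[str]:
    """Return the candidates whose keyword score meets the threshold."""
    return [name for name, text in resumes.items()
            if score_resume(text, keyword_weights) >= threshold]

# Example run with made-up data:
keyword_weights = {"python": 2.0, "sql": 1.0, "kubernetes": 1.5}
resumes = {
    "alice": "Five years of Python and SQL experience on data platforms.",
    "bob": "Managed retail operations and staffing.",
}
print(screen(resumes, keyword_weights, threshold=2.0))  # ['alice']
```

The enterprise-specific alternative described above amounts to replacing the hand-set weights with values learned from your own labeled hiring data.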
Software coding: this will continue to be the bleeding-edge, single largest Gen AI use case. There is lots of room for model improvement, especially on the memory side. GPT 5.2’s reasoning ability is greatly improved, but still slow (I regularly wait 15-25 minutes for answers to coding questions). Claude Opus 4.5 is still my preferred coding model. Gemini 3.0 Pro with deep thinking is much improved over the last iteration. The models will continue to get better, but I feel it will be an iterative process until there is a breakthrough on the inference side, or until someone creates an AI model based on a new paradigm. I continue to be a huge fan of open source language models trained on specific enterprise or industry data sets. I continue to believe that the biggest beneficiaries of LLMs are software companies. LLMs can be a huge force multiplier for experienced software developers, engineers, UI/UX designers, etc. It is easy to imagine companies like Google and Microsoft reducing R&D expense as a percentage of revenue by 50% or more over the next 10 years.
AI content:
If you want to learn more about AI in addition to what we cover in this newsletter and what we aggregate on T2D Pulse, I have listed the YouTube accounts that I follow that cover this subject matter.
If you are interested in which Twitter/X accounts I follow related to AI, you can find my Twitter profile (along with Github, etc.) on my personal webpage: HERE
YouTube accounts related to AI that I follow:
Andrej Karpathy may be the best AI researcher. Karpathy published two episodes this year that I would highly recommend if you wish to do a deep dive on LLMs: 1.) Deep Dive into LLMs like ChatGPT and 2.) How I use LLMs. I have included both below.



