The Good, the Bad and the Ugly: on tech valuations, AI, energy and US politics
Last week I spoke to the firm’s tech CEO clients at a conference in Montana. This note is a partial summary of that presentation, entitled “The Good, the Bad and the Ugly: an investor lens on tech valuations, AI, energy and the US Presidential Election”.
Watch the Podcast
Good morning, everybody, and welcome to the April 2024 Eye On the Market podcast. Last week I flew out to Montana to present to the firm's tech CEOs. The investment bank had their inaugural Tech 100 CEO conference-- a lot of interesting content, a lot of interesting speakers. And the Eye On the Market this month is about my presentation there and also some of the things I learned.
The piece itself, the written piece this week, gets into all the details. I'm just going to give you a few of the highlights here; this webcast is an abridged version of that. So let's begin. Here's a picture that we have on the web for The Good, The Bad, and The Ugly, which is the name of my presentation. There's a golden retriever, a pit bull, and a Yorkshire terrier. I'll let everybody decide which one is the good, the bad, and the ugly, but I think it's fairly obvious.
I started by wishing everybody a happy 30th anniversary, because we're now at the 30th anniversary of technology and interactive media margins crushing the rest of the market. And over the last 30 years, tech has outperformed the global equity markets 8,000% to 1,000%, which is kind of remarkable. But as things stand right now, valuations are back close to their 2021 peaks again.
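For context, here's a quick back-of-the-envelope sketch converting those cumulative figures into annualized returns; the 30-year window and the 8,000%/1,000% totals come from the commentary above, and the rest is just arithmetic.

```python
# Back-of-the-envelope conversion of those cumulative figures into annualized
# returns, assuming a 30-year window and treating +8,000% and +1,000% as total
# returns; these are the figures quoted above, not a precise index calculation.
def annualized(total_return_pct: float, years: int = 30) -> float:
    growth_multiple = 1 + total_return_pct / 100      # +8,000% -> 81x
    return growth_multiple ** (1 / years) - 1

print(f"tech & interactive media: {annualized(8000):.1%} per year")  # ~15.8%
print(f"global equities:          {annualized(1000):.1%} per year")  # ~8.3%
```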
Tech earnings have been rising, but the markets have been rising even faster than that. Some of this is enthusiasm in the market, some of it is the AI momentum, and some of it is that we still have a large amount of both monetary and fiscal policy working in favor of this. But valuations right now, whether you're looking at large cap or mid cap, equal weighted or market cap weighted, are all back at their 2021 peaks.
The good news is that at least there are limited signs of exuberance regarding the unprofitable tech companies. Those are the YUCs that we track: young, unprofitable companies with revenue growth of at least 15% but no profits. In 2021 and 2022, that number was soaring and was a clear sign to us that risk appetite was off the charts. Not the case right now.
And the other thing telling us that risk appetite is at least a little bit under control is that the tech IPO market is just invisible: it has collapsed over the last 18 months. I think there have been only four or five tech IPO deals so far this year. And the market's receptivity to IPOs from tech companies with negative earnings has also collapsed.
I actually labeled this a good rather than an ugly, because when investors re-embrace conservative underwriting standards, their returns tend to get better. So I think this is a good thing that sets the stage for a better crop of tech IPOs coming to the market over the next year or so.
And remember, innovation without profitability is a complete money pit. The reason we like the technology sector and like to invest in it is not because it's innovative; it's because it's innovative and it makes money. I've got a chart here on one of the large, well-known, multi-sector innovation ETFs, which holds cloud computing, digital media, e-commerce, gene therapy, P2P lending, metaverse, and hydrogen names. The chart is called The Tortoise and the Hare.
The ETF shot up in 2021 and then collapsed, losing $7 billion of investor capital, more than any of the other 2,300 long-biased ETFs we looked at. That's the hare. Over the same time, an old economy basket of farm equipment, industrial REITs, and office cleaning supplies clearly outperformed it on both a nominal and a risk-adjusted basis and just kept chugging along. That's the tortoise in the chart. So anyway, we like the tech sector, but we like it for companies that are profitable.
There was a lot of discussion about AI at this conference. I thought it was interesting to show people this chart again, because it implies that risk appetite is high but not off the charts. The markets are applying much higher multiples to the AI suppliers: NVIDIA, AMD, and a couple of other companies. But the multiples for the AI beneficiaries, which are the other large tech and e-commerce companies, haven't gone up as much. So we're at a stage where the markets can see the benefits of massive demand for GPUs, but they're not yet ascribing the productivity benefits to the largest customers buying those GPUs. We're in that interim phase, and we talked a lot about that at the conference.
When I was asked to talk about AI specifically, I started my comments with four or five examples from biomedicine. I'm going to talk about one of them on this call that I thought was amazing. I spent time last month with Jim Collins, who's at both MIT and Harvard. He and his team were trying to figure out if there are any small molecules out there that could kill an antibiotic-resistant bacterium some of you may have heard of, called MRSA. It kills around 9,000 people a year in the United States, and some 70,000 people get the infection.
They trained a model on 40,000 compounds with respect to antibacterial activity and toxicity to humans. They took the results and then applied the model to all 12 million commercially available compounds to see if it could find any that would be successful in combating this antibiotic-resistant bacterium. They found two, and they're testing them right now. It looks like the first discovery of a new class of antibiotics in decades.
And I picked this one to talk about on this webcast because it's an example of the things taking place in biomedicine that would be completely impossible without AI. There's no way that any traditional model could digest 12 million commercially available molecular compounds to see what their toxicity and antibacterial behavior are. So I thought that was kind of amazing.
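To make the mechanics concrete, here is a highly simplified sketch of that screening pattern: train on a labeled compound set, then score a much larger commercial library. This is not the Collins lab's actual model (their work used deep learning on molecular structures); the file names, column names, and probability thresholds below are hypothetical.

```python
# A simplified sketch of the screening pattern described above, NOT the actual
# research pipeline: fit models on ~40,000 labeled compounds, then score the
# 12-million-compound commercial library for "active but non-toxic" candidates.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    """Encode a molecule (SMILES string) as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)))

# ~40,000 compounds labeled for antibacterial activity and human toxicity (hypothetical file)
train = pd.read_csv("training_compounds.csv")        # columns: smiles, active, toxic
X = np.stack([fingerprint(s) for s in train["smiles"]])
activity_model = RandomForestClassifier(n_estimators=500).fit(X, train["active"])
toxicity_model = RandomForestClassifier(n_estimators=500).fit(X, train["toxic"])

# Score the commercial library and keep candidates predicted to be active
# against the bacterium but non-toxic to humans (hypothetical thresholds)
library = pd.read_csv("commercial_library.csv")      # column: smiles
F = np.stack([fingerprint(s) for s in library["smiles"]])
library["p_active"] = activity_model.predict_proba(F)[:, 1]
library["p_toxic"] = toxicity_model.predict_proba(F)[:, 1]
hits = library[(library["p_active"] > 0.9) & (library["p_toxic"] < 0.1)]
print(hits.sort_values("p_active", ascending=False).head(20))
```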
We talked about a few other examples. Then we got into what the audience was really interested in, which is how the large language models are doing in the real world, because that's what's potentially going to drive greater commercial adoption of AI at the corporate level.
So we talked about five examples that look good so far, from empirical, on-the-ground studies at companies that give one cohort of people AI while another cohort doesn't get it, and then measure how it turns out. As you can see here, companies have seen roughly a 40% to 50% improvement in productivity for programmers using GitHub's Copilot.
Consultants that use AI see a 40% or so improvement in the quality of their work. Professional writing tasks get done faster. Customer service agent resolutions per hour go up, but only by about 20% to 25%, which is a little lower than what I was expecting. And then there's an amazing improvement in the accuracy of banks using AI to do KYC (know-your-client) assessments.
So all of these are good examples. So far, so good. The question is how big this footprint is. This will definitely help the individual companies doing it, but how big is the footprint of these companies as a percentage of overall private sector employment? We talked about that for a bit, and it's going to take some time to see; this is the low-hanging fruit.
I think the proof statement for the broader AI industry is whether this can spread: to receptionists, legal secretaries, associates at law firms, hospital settings, and things like that. But so far, so good, in terms of language models and their performance in the real world.
And there's also good news about open source models, which are smaller and cheaper, and whose performance at times can match the performance of the more expensive closed models. We have a chart here showing that. The relationships between the companies in this space are really bizarre: Microsoft, which has this relationship with OpenAI, decided to take LLaMA, which is Meta's open source model, and see whether they could get it to perform as well as other, fancier closed models in biomedicine, finance, and law. And they were able to do that.
And Databricks announced just last week that they've released what looks like the fastest, smallest, and best-performing open source model, called DBRX. So the performance of open source models is good, and that's fantastic for people who are trying to build new applications.
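For readers curious what "adapting" an open source model looks like in practice, here is a minimal sketch of one common approach, parameter-efficient fine-tuning with LoRA on a domain corpus. This is not necessarily what Microsoft did in the study referenced above; the model name, data file, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of domain adaptation via LoRA with Hugging Face transformers + peft.
# Base model, corpus file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "meta-llama/Llama-2-7b-hf"                 # assumed open source base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters so only a small fraction of weights train.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical domain corpus (e.g., biomedical or legal text) with a "text" column.
data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-domain-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1, learning_rate=2e-4,
                           logging_steps=50),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("llama-domain-lora")        # saves only the small adapter weights
```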
Now, there's some bad here. When we tried GPT-4 to see how it would do on 71 questions from the Eye On the Market last year on markets, economics, energy, and politics, it didn't do that well. It got half the questions right and half wrong, with lots of hallucinations, and in an unpredictable way.
And there's been some research showing the impact of data contamination, which refers to models that do well because the test material shows up in what they've been trained on; if you try to trick them, their performance goes down. For example, the performance of LLaMA and Mistral, two open source models, went down when the correct A, B, C, D answers in multiple choice questions were simply reordered, which shows that a lot of the strong performance they had been showing was a function of memorization that wasn't adaptable to slight changes in the way the questions were asked.
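Here's a minimal sketch of that answer-reordering check, just to make the idea concrete; `ask_model` is a hypothetical placeholder for whatever model API you're testing, not a real library call.

```python
# A minimal sketch of the answer-reordering check described above: score a model
# on the same multiple-choice questions twice, once with the published A/B/C/D
# order and once with the choices shuffled. A large accuracy drop suggests the
# model memorized answer positions rather than learning the material.
import random

def ask_model(question: str, choices: list[str]) -> int:
    """Return the index of the choice the model picks (placeholder)."""
    raise NotImplementedError("wire this up to the model you are testing")

def accuracy(questions, shuffle=False, seed=0):
    rng = random.Random(seed)
    correct = 0
    for q in questions:                       # each q: {"text", "choices", "answer_idx"}
        choices = list(q["choices"])
        answer = choices[q["answer_idx"]]
        if shuffle:
            rng.shuffle(choices)              # reorder the A/B/C/D options
        pick = ask_model(q["text"], choices)
        correct += choices[pick] == answer
    return correct / len(questions)

# baseline = accuracy(questions)
# shuffled = accuracy(questions, shuffle=True)
# print(f"original order: {baseline:.1%}, shuffled order: {shuffled:.1%}")
```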
So we get into that stuff. And then I also showed this picture; you'll have to be watching the video version of this webcast to see it. I used OpenAI's DALL-E, the image generator, to do a mash-up of a famous cartoon character appearing in a famous TV show about a chemistry teacher with an illicit side job. The results were pretty amazing. This was meant as entertainment, but it does seriously show how good these models are at graphic design.
Now, we then talked about Taiwan and energy, because those are two pretty important topics for tech companies and the AI industry specifically. We talked about Taiwan for obvious reasons. If you thought that Europe's reliance on Russian energy in 2021 was too high at around 20% to 25%, wait till you see the world's reliance on Taiwan, and specifically TSMC, for advanced chips: it's anywhere from 80% to 90%.
So this is the mother of all supply chain risks. It's going to be time consuming, expensive, and energy intensive to redomicile this. Note that TSMC has put a hold on its $40 billion facility in Arizona, which was designed to build chips that, even when completed, would be at least one generation behind in terms of nanometers. This is a pretty serious issue, so we spent time talking about Taiwan and some of the geopolitical issues involved.
We got into the details. You can read in the Eye On the Market about how a blockade would work, something Taiwan, as an island, is very sensitive to. And then we also talked about some very discouraging wargame assessments that have come out of the defense community, showing that any US attempt to preserve Taiwan's sovereignty after a Chinese invasion would result in heavy losses of US aircraft, aircraft carriers, destroyers, and cruisers, and human casualties, all of which would probably be the worst since World War II.
And even if Taiwan's sovereignty were maintained, the island would be highly damaged without electricity and basic services, which wouldn't do very much good for companies reliant on Taiwanese semiconductor production. So there was a lot of backchannel discussion at this conference about the prospects and costs of repatriating or relocating semiconductor production. I think you're going to be hearing a lot about that in the next three to five years.
And then on energy, I started with a chart showing that at the end of last year, Google's language model Gemini finally outperformed OpenAI's GPT-4. But it did so using something called chain-of-thought prompting with 32x resampling. Right? Tons of indecipherable jargon. The 32x part means they had to run the model 32 times and then pick the best answer.
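To make that concrete, here's a rough sketch of what running a model 32 times and picking the best answer can look like (majority voting across samples, sometimes called self-consistency); `sample_model` is a hypothetical placeholder, not any particular vendor's API.

```python
# A rough sketch of 32x resampling: query the model 32 times on the same prompt
# (with nonzero temperature so the samples differ) and keep the most common answer.
from collections import Counter

def sample_model(prompt: str, temperature: float = 0.7) -> str:
    raise NotImplementedError("wire this up to your model of choice")

def answer_with_resampling(prompt: str, n: int = 32) -> str:
    votes = Counter(sample_model(prompt) for _ in range(n))
    return votes.most_common(1)[0][0]          # majority answer across n runs

# The cost implication is the point: 32 samples means roughly 32x the inference
# compute (and electricity) of a single pass.
```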
If that's an indication of where the industry is going, we have to start talking about power demand. I showed a chart, which also appeared in the energy piece that came out last month, of PJM's power demand forecasts for Dominion Resources, which serves about 6 million customers in 15 states. Just from last year to this year, the power demand forecast has almost doubled, entirely because of data centers.
So if the US is trying to electrify home heating and transportation at the same time that an AI revolution is driving a lot of new power demand, electricity is going to become a very scarce resource. We're barely into this journey, and as you can see here, of all 47 categories in the core goods PPI report, transformers and power regulators are already experiencing the highest inflation.
There was a lot of press recently on Amazon acquiring the data centers and a share of a nuclear power plant from Talen Energy in Pennsylvania. I think a lot of the articles missed the big picture: this is not a repeatable exercise. The AI industry can't just keep buying baseload power off the grid, which the rest of the people living there then have to replace.
In most states, I think the regulators would block that kind of thing; here, they didn't. But I don't think that's a repeatable approach. And if the AI industry is going to have to build its own nuclear power, it's going to be very expensive. We talked about the last four nuclear completions in the West: they're double or triple the cost of a baseload system made up of wind, solar, and enough backup natural gas for when it's not windy or sunny. And we have a chart on that in here.
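For readers who want to see how comparisons like that are usually framed, the standard levelized cost of electricity definition is below; this is the generic formula, not the specific model behind the chart in the piece.

```latex
% Levelized cost of electricity: discounted lifetime costs divided by
% discounted lifetime generation, over an N-year plant life at discount rate r.
\mathrm{LCOE} = \frac{\sum_{t=0}^{N} \dfrac{I_t + O_t + F_t}{(1+r)^t}}
                     {\sum_{t=0}^{N} \dfrac{E_t}{(1+r)^t}}
% I_t: capital spending, O_t: operations and maintenance, F_t: fuel costs,
% E_t: electricity generated in year t.
```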
And then I ended my discussion with a few items on election politics. At JPMorgan we used to have these things called 360 team reviews, where you would sit in the middle of a room and your colleagues and the people who work for you would comment on you and give you feedback. It's a pretty intense experience for people who have done it.
And so I'd pulled one of those 360 team reviews together for Trump, for the presumptive GOP nominee. And what I thought was interesting here is we looked at all the people serving in senior positions from 2016 to 2020-- vice president, department heads in the cabinet, national security advisor, FBI director, CIA director, UN ambassador, chief of staff-- and we found that there are more people that have repudiated and disavowed him than there are people that have endorsed him.
And so I just thought this was an interesting way of looking at the team review process. And this is kind of unique. I don't think we've ever had a president that's had this ratio of people repudiating or disavowing him. But, you know, that's my personal view. I thought it was interesting to look at.
And then we ended the conference with an ugly chart on entitlement spending, mandatory outlays, and net interest. The tech sector is often immune from this kind of thing. But I wanted to show everybody that, by the early 2030s, government revenues are expected to be exceeded by entitlements, mandatory outlays, and net interest, and that neither party right now is focusing on this kind of thing.
But by the end of this decade, let's say in four or five years, I think the markets are going to be extremely focused on this. We've talked about this in previous Eye On the Markets. We now have an online federal debt monitor that looks at a whole bunch of different charts related to this issue, and you can access it in the header of the actual Eye On the Market PDF.
So that is a brief summary of some of my comments. See the entire piece for the whole presentation. And thanks for listening, and I'll talk to you in a few weeks. Bye.