

What was I made for: Large Language Models in the Real World

I asked GPT-4 questions on economics, markets, energy and politics that my analysts and I worked on over the last two years. This piece reviews the results, along with the latest achievements and stumbles of generative AI models in the real world, and comments on the changing relationship between innovation, productivity and employment. The bottom line: a large language model can process reams of text very efficiently, and that's what it's made for. But it cannot think or reason; it's just something I paid for. Upfront, a few comments on oil prices.
Table titled "Chat GPT-4 grading" showing the number of questions that received a given letter grade.
Table titled "Chat GPT-4 GPA by Subject", showing the GPA across 4 different subjects: Economics, Markets, Energy, Politics.

Watch the podcast

[START RECORDING]

FEMALE VOICE 1:  This podcast has been prepared exclusively for institutional wholesale professional clients and qualified investors only, as defined by local laws and regulations.  Please read other important information, which can be found on the link at the end of the podcast episode.

[Music]

MR. MICHAEL CEMBALEST:  Hello, everybody.  Welcome to the September Eye on the Market video audio podcast.  This one's entitled "What Was I Made For: Large Language Models in the Real World."  I wanted to focus on this topic again because of how large AI is as a catalyst for what's going on in the equity markets.  But first, I just wanted to review economics and markets for a minute.  Not that much has changed since our August piece called "The Rasputin Effect."

 

Leading indicators are definitely pointing to weaker growth by the first quarter, but the expected decline is pretty modest as potential recessions go.  Tighter credit conditions are certainly going to have an impact, but of the 17 or 18 leading indicators that we watch, none of them looks really terrible; they all just look kind of modestly bad and a little bit weaker.

 

The reason why things don't look worse after 500 basis points of Fed tightening is that the Fed policy is being offset by a few things.  First of all, very large fiscal deficits, almost as large as they were in 2009.  We're having the beginning of a US industrial policy, which is essentially incentive-driven spending by the private sector on infrastructure, energy, and semiconductors.  That's starting to kick in.  And household and corporate balance sheets were pretty strong coming into this year.

 

Delinquency rates outside subprime auto are still very low.  The private sector took actions to lock in low borrowing rates before 2022.  Apparently the only entities that didn't get the memo that rates were unsustainably low were a handful of the regional banks that you're all familiar with, who extended their asset duration at the wrong time.

 

Housing markets and labor markets are pretty tight, so the normal transmission of higher interest rates and tighter Fed policy to crater housing and labor markets isn't transmitting quite the same way.  So, all of these things are, at least at the current time, kind of keeping a severe recession at bay.

 

I do want to talk a little bit here about oil prices.  OPEC spare capacity is pretty high; it's not as high as it gets during recessions, as you can see in this chart, but it's pretty high.  For a non-recessionary period, OPEC has engineered quite a bit of spare capacity.  Now, that can change quickly, but right now the oil market is pretty tight.  You have to combine that with two more things.

 

First, the publicly traded energy companies are spending a very small share of cash flow.  We have a chart in the Eye on the Market that shows the percentage of energy company cash flow that they're spending on new projects, specifically oil- and gas-related projects, and that's a very low share, and we juxtapose that against global fossil fuel use.  You can see the industry is starting to cut back on future projects for all the reasons you might imagine, even though we really haven't seen much of a decline yet in global oil and gas consumption.

 

Then on top of that you've got the Strategic Petroleum Reserve in the US at the lowest level it's been in many decades.  So, tighter OPEC conditions, less oil and gas investment, and a depleted Strategic Petroleum Reserve, that combines to kind of goose up oil and gas prices, and then we'll have to see what Russia has in store for the world.  They've already announced some restrictions on diesel exports.

 

Higher energy prices tend to feed into inflation within a few months, and so one of the things that you're seeing is the markets were pricing in some Fed cuts next year; that's now gone.  Now, I did want to focus most of this discussion on the generative AI catalyst, because we have a chart in the Eye on the Market this time that shows an ETF of generative AI stocks is up around 60% this year while the market, excluding those stocks, is up around 5%, so this has definitely been the year of generative AI.

 

I wanted to take a look at how it's being used well and where it's failing, and then perform my own specific test on GPT-4, because I thought it was an interesting exercise.  The reason I want to do that is to juxtapose these two things.  Number one, people are out there comparing large language models to the electrification of farms, the interstate highway system, and the internet itself; those are some pretty remarkable milestones.

 

While at the same time we just lived through a period, whether it was cannabis investing, non-fungible tokens, the metaverse, blockchain, crypto, hydrogen, where a lot of things were kind of touted to be something that they turned out not to be.  So, now we're getting a surge in interest in large language models, and I think the reality is somewhere in between the nonsense of the metaverse and crypto and the seismic changes introduced by the interstate highway system and the electrification of farming.  So, let's take a closer look.

 

I started out just doing something lighthearted but still meaningful, which is there are these multimodal AI image generation models, and I used three different ones you can see here: Bing, Starry AI, and DALL-E, which is OpenAI's version.  I asked it to create an image of two people sitting at the table looking nervously at a robot with them, and that the robot should have a label on it that says "Strategy Team Trainee," like working for me.  None of them did it right, and in some of them, the mistakes are interesting.

 

So, starting on the left, first of all, there's three people, not two, and one of the people looks like they're in a horror film, which is pretty scary.  Lots of people have extra hands and legs and fingers and things like that.  The second one, from Starry AI, got a little bit closer.  You have somebody looking nervously at a robot, but there's only one person instead of two, and both of the first two ignored the whole thing about the Strategy Team label entirely.

 

Then you have this Bergmanesque and also fairly terrifying offering from DALL-E on the right, splattering some letters on the table, not on the robot, and not really spelling anything.  But still, the interpretive proficiency is good in certain ways, so I thought this mixture of good, bad, and bizarre was a good way of starting this discussion.

 

Some of you will pick up on the theme of this and the pop culture references I'm using, but when you think about a large language model and something it's made for, here are some examples that are currently working.  It's helping management consultants in terms of speed and quality and task completion. 

 

Whether you're impressed with that or not depends on what you think of management consultants.  People using Copilot, which is a programming tool, are having a lot of success with it.  It's doing a great job on statistics.  It's helping people that do professional writing.  It's helping customer support agents be more productive.  It's improving their employee retention, and a lot of these things tend to help the lower-skilled workers the most.  It's even having some successes in medical research. 

 

The one that I thought was interesting was where somebody fed in 70 of the most notoriously difficult-to-diagnose medical cases, just based on the descriptions of the symptoms people were having, and it got two-thirds of the diagnoses correct.  Now, you're not going to like all these large language model use cases.  People are using them to generate digital mountains of fake content, fake news sites, fake product reviews on Amazon, fake e-books, phishing emails--I spelled phishing wrong because I like fishing so much; I should have spelled it with a P-H.

 

A lot of this stuff seems designed to profit from Google, essentially fooling Google's automated advertising process into paying it for people looking at junk content that they don't really know is AI-generated.  In any case, these are the things that it's doing well and where the use cases are expanding.

 

I saw this chart from OpenAI, but I wasn't as impressed as I think OpenAI wanted me to be.  It's a chart that shows how GPT-4 is doing versus GPT-3.5 on all sorts of standardized tests.  As you can see here, there are math tests and chemistry exams, bar exams, biology exams, history exams, SATs, GREs, things like that.

 

There's something, and I think a lot of you are probably pretty aware of this right now, called data contamination, which is: if you train these models on information sets that include the questions and the answers to all these exams, all we're really analyzing is whether GPT, or any of the other ones, whether it's Bard or Bing or Anthropic or any of the rest of them, is good at memorization.

 

But we know that large language models are good at memorization, so I'm not really sure exactly what's being proven here, other than that having 10 times more parameters in GPT-4 than in GPT-3.5 makes it better at memorization.

 

I think the more important question is this: you don't hire a lawyer so that they can sit down and answer bar exam questions all day; you hire a lawyer when you need somebody to integrate new information and evaluate things maybe they haven't seen before.  When you look at those kinds of tasks, large language models aren't doing quite as well.  We have a page in here called "It's not what I'm made for."

 

When GPT-4 has been asked to take law exams it does pretty poorly, and I like the description from the University of Minnesota professors who did this, where they said "GPT-4 produced smoothly written answers that failed to spot many important issues, much like a bright student who didn't attend class and hadn't thought deeply about the material."

 

So, now you can get a better feel for what we're dealing with here.  It's rote repetition rather than real reasoning and thought.  GPT-4 did terribly on an actuarial exam, a college sophomore economics exam, and graduate-level tax and trust and estates exams.  It botched Pythagoras' theorem when asked to be a math teacher.  It got stuck in a death loop of nonsense when somebody provided it with mathematically impossible dimensions of a triangle that it should have been able to figure out.

 

The Journal had this article where they're writing about how online editors and newspaper editors are being given so many crappy AI-written submissions, which have good spelling and grammar but lack a coherent story, that they're just outright rejecting anything where they get the sense that any AI was used to generate it at all.

 

The most comprehensive assessment of large language models that I've seen is something called BIG-bench, which is a project that over 400 researchers are working on.  There are 204 tasks involved, and even as of its latest update in July 2023, they still found substantial underperformance of large language models compared to the average human, much less highly performing humans.

 

Anyway, Manuela Veloso is from Carnegie Mellon, and she runs JP Morgan's AI research group, and they're doing a lot of really interesting applications of large language models.  She walked me through some of them and I was very impressed.  They do seem like productivity savers: information checking, information gathering, charting tools, making sure that documents are filled out properly, all of which are mostly designed to reduce errors and omissions, and that's potentially a very powerful and profitable application of a large language model.

 

For me, it's a little different.  So, here's what I did.  I took 71 questions from the Eye on the Market over the last two years that my analysts and I worked on, and I asked GPT-4 to take a shot at them, and I graded GPT-4 on its speed, accuracy, and depth versus the work that we had done ourselves to get the answers.  In other words, we're not grading whether it can do anything at all; we're grading it compared to the process that we used, which didn't yield any hallucinations or errors or things like that.

 

We enabled the GPT-4 features to upload data files when it couldn't find the data on its own, and we enabled the plug-ins that allow it to browse PDFs and Excel files when necessary.  A lot of you have read that GPT-4's training data for its parameters ended in 2021.  That's not a constraint here, because we added all the plug-ins to give it all of the data and all of the web access that it needed to answer any of our questions.

 

So, here are the results.  It was a mixed bag, with a very bimodal distribution of grades.  It got a lot of As: out of 71 questions it got 26 As and 25 A-minuses.  That sounds great.  The problem is, it also got 13 Ds and 6 Fs.  The GPA worked out to around 2.5, which is between a C+ and a B-.  You might say, well, what did it get wrong?
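The GPA arithmetic here is just a weighted average of grade points.  A minimal sketch, assuming a standard 4.0 grade-point scale (the piece does not state the exact weights it used):

```python
# Standard 4.0 grade-point scale (an assumption; the piece doesn't
# publish the weights behind its GPA figure).
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D": 1.0, "F": 0.0}

def gpa(grade_counts):
    """Weighted average of grade points over all graded questions."""
    total = sum(grade_counts.values())
    return sum(GRADE_POINTS[g] * n for g, n in grade_counts.items()) / total
```

On this scale, a 2.5 sits exactly halfway between a C+ (2.3) and a B- (2.7).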

 

Here are some examples of what it did wrong.  It would hallucinate numbers and then flatly refuse to provide a source for where it found them, which was very frustrating.  It would outline the correct steps to solve a problem and then execute those steps incorrectly.  It misread data files that we provided to it.  It didn't notice when a spreadsheet contained subtotals that should be excluded when summing a column.  It messed up some energy conversions, and it asserted certain facts that are easily contradicted by other readily available information.
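The subtotal mistake is easy to reproduce: summing a column that interleaves subtotal rows double-counts every detail row.  A hypothetical sketch of the check that was skipped (the row labels and values are invented, not from the piece):

```python
# Hypothetical spreadsheet rows: detail rows plus interleaved subtotals.
rows = [
    {"label": "Jan",         "value": 10},
    {"label": "Feb",         "value": 20},
    {"label": "Q1 subtotal", "value": 30},  # must be excluded from the sum
    {"label": "Apr",         "value": 5},
    {"label": "Q2 subtotal", "value": 5},   # must be excluded from the sum
]

# Summing the column blindly double-counts everything.
naive_total = sum(r["value"] for r in rows)

# Filtering out subtotal rows first gives the true total.
correct_total = sum(r["value"] for r in rows
                    if "subtotal" not in r["label"].lower())
```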

 

So, that was my experience with it, and I guess the bottom line is, just to wrap up, I think GPT-4 is going to have a big impact in Manuela's world, for example, since the tasks that she's designed for it conform more to what these things are made for, which is error checking and memorization, most often using models trained on corporate data and not just internet data.

 

The part that I struggle with the most is how I'm supposed to incorporate a tool where, even if it can get some answers to complex questions right, I have to check every single answer because it sometimes gets things wrong.  And by the time I've done that, where's the productivity gain of using the tool in the first place?

 

So, anyway, I'm just going to use it for the simpler questions where it performs well.  I think that's what it's made for, and at just $20 a month for GPT-4, I got what I paid for.

 

So, that's this month's Eye on the Market.  We've got a piece coming up that's a deep dive on New York City and its recovery compared to other major metropolitan areas that I think a lot of our clients will be interested in, and of course, we're going to continue to monitor what's going on with the Fed and consumer spending, energy prices, and the economic slowdown later this year.  Thanks for listening, and I'll see everybody next time.

 

FEMALE VOICE 1:  Michael Cembalest's Eye on the Market offers a unique perspective on the economy, current events, markets, and investment portfolios, and is a production of JP Morgan Asset and Wealth Management.  Michael Cembalest is the Chairman of Market and Investment Strategy for JP Morgan Asset Management and is one of our most renowned and provocative speakers. 

 

For more information, please subscribe to the Eye on the Market by contacting your JP Morgan representative.  If you'd like to hear more please explore episodes on iTunes or on our website.  This podcast is intended for informational purposes only and is a communication on behalf of JP Morgan Institutional Investments, Incorporated. 

 

Views may not be suitable for all investors and are not intended as personal investment advice or a solicitation or recommendation.  Outlooks and past performance are never guarantees of future results.  This is not investment research.  Please read other important information, which can be found at www.jpmorgan.com/disclaimer-EOTF.

[END RECORDING]

(DESCRIPTION)

Logo and text, J.P.Morgan, Please read important information at the end.

Text, What was I made for: Large Language Models in the Real World. September 2023, Michael Cembalest, Chairman of Market and Investment Strategy. Investment and insurance products are: Not F.D.I.C. insured, Not insured in any federal government agency, Not a deposit or other obligation of, or guaranteed by, JPMorgan Chase Bank, N.A. or any of its affiliates, Subject to investment risks, including possible loss of the principal amount invested. Logo, J.P.Morgan. Image, A robot looks out onto a city at dusk. Speaker video in picture-in-picture on the upper right corner.


(DESCRIPTION)

Text, Rasputin effect. While leading indicators point to weaker US growth by Q1, the expected decline is modest as potential recessions go. Tighter Fed policy is partially offset by: large fiscal deficits, US industrial policy (incentive-driven spending on infrastructure, energy and semiconductors), strong corporate and household balance sheets, private sector actions to lock in low borrowing rates before 2022, tight housing and labor markets.


(DESCRIPTION)

Spare capacity engineered by OPEC is at one of its highest levels outside recessions. A graph with the heading, Estimated OPEC spare capacity, Million barrels per day. Key: tan line, Bloomberg, blue line, J.P.Morgan, red line, Bridgewater. The x-axis goes from 2000 to past 2021. The y-axis goes from 0 to 12. There is a grey vertical bar above 2001-2002, a thicker one over 2009, and a thin one before 2021. Jagged lines across have spikes near the bars. J.P.Morgan trends lowest, with the other two alternating higher.

(DESCRIPTION)

Publicly traded energy companies are spending a small share of cash flow on future oil & gas projects despite stable global fossil fuel consumption. A graph with the heading, Fossil fuel consumption vs energy capital spending, Energy capital spending as % of operating cash flow. The x-axis goes from 2006 to past 2022. The y-axis on the left goes from 0.2x to 1.2x and on the right from 0 to 550 Exajoules. A blue jagged line rises and falls, then begins to rise again. A leftward blue arrow has the text, S&P Global 1200 Energy companies, capital spending intensity. A tan line slopes upward near the top with a dip before 2022, and a rightward tan line has the text, Global fossil fuel use.

(DESCRIPTION)

The Generative AI catalyst. A bar chart with the heading, YTD return of XYZ AI ETF vs S&P 500, Percent, YTD return. The y-axis goes from 0% to 70%, The x-axis has three bars. A blue bar labeled XYZ AI ETF goes to almost 70%, a tan bar labeled S&P 500 goes to just above 20%, and a tan bar labeled S&P 500 (example XYZ AI ETF constituents) goes up to almost 10%.


(DESCRIPTION)

Are comparisons to electrification of farms, the interstate highway system and the internet justified? A graph with the heading It's a fad, fad, fad, fad world. Google search interest over time (100 = peak interest). The x-axis goes from 2014 to 2024. The y-axis goes from 0 to 100. Many colored lines that peak in different years. Key: blue, Large language model, red, Crypto, tan, Blockchain, Black, Metaverse, yellow, Non-fungible token, and green, Cannabis investing.


(DESCRIPTION)

Quote, "Create an image of two people looking nervously at a robot sitting at the table with them. The robot should be labeled Strategy Team Trainee." (unquote). Three versions of output images: Bing, Starry AI, and Dall-E. The first two don't have the robot label. The third has a board on a table with a robot at the end of the table and a woman on either side. The board has a pile of craft materials and says STMAT, TRATGY.


(DESCRIPTION)

Text, Something I'm made for. Management consulting: 12% more tasks completed, 25% faster, 40% improvement in quality. Depends on what you think of management consultants. GitHub's AI programming tool Co-pilot: writing 45% of their code, may rise to 80%. Statistical exams: ChatGPT scored 104 points out of a possible 116. Professional writing: Access to LLMs raised productivity: time required for each task declined and output quality improved. Chat GPT benefited low-ability workers more. Customer support agents: LLMs improved productivity by about 15%, improved customer sentiment and employee retention, helped lowest skilled workers the most. LLM successes in medical research: AI tool that summarized doctor-patient interactions improved skin condition diagnoses, and was able to identify 2/3 of 70 notoriously difficult-to-diagnose medical cases based on descriptions of symptoms. Warning: you're not going to like all the use cases. LLMs generate fake content, fake news web pages, fake product reviews, fake eBooks and fishing emails. Some of it seems designed solely to profit from Google's automated advertising process.


(DESCRIPTION)

Text, This is not quite as impressive as it looks, since these are multiple choice exams whose answers GPT has probably already seen. May simply reflect GPT 4 having 10 times more parameters than GPT 3.5. A bar graph with the heading, GPT-4 improvements vs GPT-3.5, GPT percentile vs human test takers. Bars are labeled with various academic subject areas and each has different amounts of green and/or blue. Green is GPT-4 and blue is GPT-3.5. Text to right, We already know that large language models are good at memorization. The more important question: can they integrate new information and evaluate something it hasn't seen before?


(DESCRIPTION)

It's not what I'm made for. Law. GPT-4 got a C in Constitutional Law and a C minus in Criminal Law. (Quote) "GPT-4 produced smoothly written answers that failed to spot many important issues, much like a bright student who had neither attended class nor thought deeply about the material" (unquote). Actuaries. GPT-4 failed an actuarial exam, registering a 19.75 out of a possible 52.50 score. Economics. GPT-4 scored just 4 out of a possible 90 on a college sophomore economics exam. Taxes. GPT-4 performed (quote) "terribly" (unquote) on graduate-level tax and trust & estate exams.

Math teaching. GPT-4 botched Pythagoras' theorem, instructed users that if you know the hypotenuse of a right triangle, that's enough info to determine the length of both sides; and got stuck in a (quote) "death loop" (unquote) of nonsense when provided with mathematically impossible dimensions of a triangle. Journalism. Online editors cite a growing amount of AI-generated content that is so far beneath their standards that they consider it a (quote) "new kind of spam" (unquote). They reject all AI-written submissions since they have perfect spelling and grammar but lack a coherent story, and are useless to them. Big Bench. The most comprehensive LLM assessment that we've seen is (quote) "BIG-bench" (unquote). This project, encompassing 204 tasks compiled by 400+ researchers, still finds substantial underperformance of LLMs compared to the average human, and well below highly performing humans.

(SPEECH)

And we have a page in here called "It's not what I'm made for."

When GPT-4 has been asked to take law exams, it does pretty poorly. And I like the description from the University of Minnesota professors who did this, where they said, "GPT-4 produced smoothly written answers that failed to spot many important issues, much like a bright student who didn't attend class and hadn't thought deeply about the material." So now you can get a better feel for what we're dealing with here. It's like rote repetition rather than real reasoning and thought.

GPT-4 did terribly on an actuarial exam, a college sophomore economics exam, and graduate-level tax and trust and estates exams. It botched Pythagoras's theorem when asked to act as a math teacher. It got stuck in a death loop of nonsense when somebody provided it with mathematically impossible dimensions of a triangle that it should have been able to figure out.

And the Journal had this article where they're writing about how online editors and newspaper editors are being given so many crappy AI-written submissions-- ones that have good spelling and grammar but lack a coherent story-- that they're just outright rejecting anything that gives them the sense any AI was used to generate it at all. And the most comprehensive assessment of large language models that I've seen is something called BIG-bench, which is a project that over 400 researchers are working on. There are 204 tasks involved. It was last updated in July of 2023, and they still found substantial underperformance of large language models compared to the average human, much less a highly performing human.

Anyway, Manuela Veloso is from Carnegie Mellon, and she runs JPMorgan's AI research group. And they're doing a lot of really interesting applications of large language models. She walked me through some of them. And I was very impressed. They do seem like they're productivity savers-- information checking, information gathering, charting tools, making sure the documents are filled out properly, all of which are mostly designed to reduce errors and omissions. And that's potentially a very powerful and profitable application of a large language model.

(DESCRIPTION)

Is this what I'm made for? 71 questions that my analysts and I worked on over the last two years for the Eye on the Market. I graded GPT-4's speed, accuracy and depth vs the traditional web searches, conference calls (text obscured) files and excel analyses we relied upon over the past two years to obtain the answers. We enabled GPT-4 features to upload data files which we prepared for it, and enabled several plug-ins that allow web browsing of PDFs and excel files when necessary (when testing different plug-ins, we used the best answer provided). (In bold) As a result, the end of GPT-4's training in 2021 was not a constraint on its ability to answer our questions. Grading was affected by the consistency of GPT-4's response (lower grades for less consistency). Wrong answers were more heavily penalized than no answer, given the extra work needed to find and fix them.

(SPEECH)

For me, it's a little different. So here's what I did. I took 71 questions from the Eye on the Market over the last two years that my analysts and I worked on, and I asked ChatGPT-4 to take a shot at them. And I graded GPT-4 based on its speed, accuracy, and depth versus the work that we had done ourselves to get the answers. So in other words, we're not grading whether it can do anything at all. We're grading it compared to the process that we used, which didn't yield any hallucinations or errors or things like that.

And we enabled the GPT-4 features to upload data files when it couldn't find the data it needed on its own. We enabled the plugins that allow it to browse PDFs and Excel files when necessary. So as a result-- a lot of you have read that GPT-4's training data ended in 2021-- that's not a constraint, because we added all the plugins to give it all of the data and all of the web access it would need to answer any of our questions.

And so here are the results. It was a mixed bag and a very bimodal distribution of grades.

(DESCRIPTION)

The results. Two tables. On the left, Chat GPT-4 grading. On the right, Chat GPT-4 GPA by Subject.

(SPEECH)

It got a lot of A's. Out of 71 questions, it got 26 A's and 5 A-minuses. That sounds great.

(DESCRIPTION)

On the table, A-minuses are listed as 5.

(SPEECH)

The problem is it also got 13 D's and 6 F's. So it was very much of a bimodal distribution. The GPA worked out to about 2.5, which is between a C-plus and a B-minus.
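The GPA arithmetic can be sketched as follows. The transcript itemizes only the A, A-minus, D and F counts out of 71 questions; treating the remaining 21 grades as C's is purely an illustrative assumption to show how a roughly 2.5 average falls out of a bimodal distribution.

```python
# Standard 4.0-scale grade points.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "C": 2.0, "D": 1.0, "F": 0.0}

# Counts from the transcript; the 21 C's are an assumed placeholder for
# the grades that aren't itemized here.
counts = {"A": 26, "A-": 5, "C": 21, "D": 13, "F": 6}

total_questions = sum(counts.values())  # 71
total_points = sum(GRADE_POINTS[g] * n for g, n in counts.items())
gpa = total_points / total_questions
print(f"GPA: {gpa:.2f}")  # with the assumed C's, this lands at 2.50
```

The point of the calculation is that a cluster of A's at one end and D's and F's at the other can still average out to a middling GPA.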

And you might say, well, what did it get wrong? And here are some examples of what it did. It would hallucinate numbers and then absolutely refuse to provide a source for where it found them, which is very frustrating. It would outline the correct steps to solve a problem and then execute those steps incorrectly. It misread data files that we provided to it. It didn't notice subtotal rows in a spreadsheet that should be excluded when summing a column. It messed up some energy conversions.

(DESCRIPTION)

Text, It used the wrong constants for certain energy conversions, and conflated (quote) energy generation capacity (unquote) with (quote) energy consumption (unquote).

(SPEECH)

And it also asserted certain facts that are easily contradicted by other readily available information.

So that was my experience with it.
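The capacity-versus-consumption conflation mentioned above comes down to a units distinction: generation capacity is a power rating (GW), while consumption is energy delivered over time (GWh). A minimal sketch, with hypothetical numbers:

```python
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def annual_generation_gwh(capacity_gw, capacity_factor):
    """Energy actually produced in a year, not the nameplate rating.

    capacity_factor is the fraction of the year the plant effectively
    runs at full output (e.g. ~0.5 for a mid-range plant).
    """
    return capacity_gw * HOURS_PER_YEAR * capacity_factor

# A hypothetical 1 GW plant at a 50% capacity factor:
energy = annual_generation_gwh(1.0, 0.5)  # 4380.0 GWh per year
```

Conflating the two (for example, treating 1 GW of capacity as if it were 1 GWh of consumption) is exactly the kind of error that slips through smoothly written answers.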

(DESCRIPTION)

Wrapping up. Information spoken.

(SPEECH)

And I guess the bottom line is, just to wrap up, I think GPT-4 is going to have a big impact in Manuela's world, for example, since the tasks that she's designing for it conform more to what these things are made for, which is error checking and memorization, most often using trained corporate data and not just trained internet data. The part that I struggle with the most is how I'm supposed to incorporate a tool where, even if it can get some answers to complex questions right, it sometimes gets things wrong, so I have to check every single answer.

And by the time I've done that, where's the productivity gain of using the tool in the first place? So anyway, I'm just going to use it for the simpler questions where it performs well. I think that's what it's made for. And at just $20 a month for GPT-4, I got what I paid for. So that's this month's Eye on the Market. We've got a piece coming up that's a deep dive on New York City and its recovery compared to other major metropolitan areas that I think a lot of our clients will be interested in.

And of course, we're going to continue to monitor what's going on with the Fed and consumer spending, energy prices, and an economic slowdown later this year. Thanks for listening, and I'll see you everybody next time.

(DESCRIPTION)

Logo, J.P.Morgan.

Text, Important Information. This report uses rigorous security protocols for selected data sourced from Chase credit and debit card transactions to ensure all information is kept confidential and secure. All selected data is highly aggregated and all unique identifiable information, including names, account numbers, addresses, dates of birth, and Social Security Numbers, is removed from the data before the report's author receives it. The data in this report is not representative of Chase's overall credit and debit cardholder population. The views, opinions and estimates expressed herein constitute Michael Cembalest's judgment based on current market conditions and are subject to change without notice. Information herein may differ from those expressed by other areas of J.P. Morgan. This information in no way constitutes J.P. Morgan Research and should not be treated as such.

The views contained herein are not to be taken as advice or recommendation to buy or sell any investment in any jurisdiction, nor is it a commitment from J.P. Morgan or any of its subsidiaries to participate in any of the transactions mentioned herein. Any forecasts, figures, opinions or investment techniques and strategies set out are for information purposes only, based on certain assumptions and current market conditions and are subject to change without prior notice. All information presented herein is considered to be accurate at the time of production, but no warranty of accuracy is given and no liability in respect of any error or omission is accepted.

This material does not contain sufficient information to support an investment decision and it should not be relied upon by you in evaluating the merits of investing in any securities or products. In addition, users should make an independent assessment of the legal, regulatory, tax, credit and accounting implications and determine, together with their own professional advisors if any investment mentioned herein is believed to be suitable for their personal goals. Investors should ensure that they obtain all available relevant information before making any investment. It should be noted that investment involves risks, the value of investments and the income from them may fluctuate in accordance with market conditions and taxation agreements and investors may not get back the full amount invested. Both past performance and yields are not reliable indicators of current and future results. Non-affiliated entities mentioned are for informational purposes only and should not be construed as an endorsement or sponsorship of J.P. Morgan Chase & Co. or its affiliates.

For J.P. Morgan Asset Management Clients: J.P. Morgan Asset Management is the brand for the asset management business of JPMorgan Chase & Company and its affiliates worldwide. To the extent permitted by applicable law, we may record telephone calls and monitor electronic communications to comply with our legal and regulatory obligations and internal policies. Personal data will be collected, stored and processed by J.P. Morgan Asset Management in accordance with our privacy policies at https colon double slash a.m. dot jp morgan dot com slash global slash privacy. Accessibility. For U.S. only: If you are a person with a disability and need additional support in viewing the material, please call us at 1-800-343-1113 for assistance.

This communication is issued by the following entities: In the United States, by J.P. Morgan Investment Management Inc. or J.P. Morgan Alternative Asset Management, Inc., both regulated by the Securities and Exchange Commission; in Latin America, for intended recipients' use only, by local J.P. Morgan entities, as the case may be; in Canada, for institutional clients' use only, by JPMorgan Asset Management (Canada), Inc., which is a registered Portfolio Manager and Exempt Market Dealer in all Canadian provinces and territories except the Yukon and is also registered as an Investment Fund Manager in British Columbia, Ontario, Quebec and Newfoundland and Labrador. In the United Kingdom, by JPMorgan Asset Management (UK) Limited, which is authorized and regulated by the Financial Conduct Authority; in other European jurisdictions, by JPMorgan Asset Management (Europe) S.a.r.l.

In Asia Pacific ("APAC"), by the following issuing entities and in the respective jurisdictions in which they are primarily regulated: JPMorgan Asset Management (Asia Pacific) Limited, or JPMorgan Funds (Asia) Limited, or JPMorgan Asset Management Real Assets (Asia) Limited, each of which is regulated by the Securities and Futures Commission of Hong Kong; JPMorgan Asset Management (Singapore) Limited (Co. Reg. Number 1 9 7 6 0 1 5 8 6 K); this advertisement or publication has not been reviewed by the Monetary Authority of Singapore; JPMorgan Asset Management (Taiwan) Limited;

JPMorgan Asset Management (Japan) Limited, which is a member of the Investment Trusts Association, Japan, the Japan Investment Advisors Association, Type 2 Financial Instruments Firms Association and the Japan Securities Dealers Association and is regulated by the Financial Services Agency (registration number "Kanto Local Finance Bureau (Financial Instruments Firm) No. 330"); in Australia, to wholesale clients only as defined in section 761A and 761G of the Corporations Act 2001 (Commonwealth), by JPMorgan Asset Management (Australia) Limited (ABN 5 5 1 4 3 8 3 2 0 8 0) (AFSL 3 7 6 9 1 9). For all other markets in APAC, to intended recipients only.

For J.P. Morgan Private Bank Clients: Accessibility. J.P. Morgan is committed to making our products and services accessible to meet the financial services needs of all our clients. Please direct any accessibility issues to the Private Bank Client Service Center at 1-866-265-1727. Legal Entity, Brand & Regulatory Information. In the United States, bank deposit accounts and related services, such as checking, savings and bank lending, are offered by JPMorgan Chase Bank, N.A., Member F.D.I.C. JPMorgan Chase Bank, N.A. and its affiliates (collectively "JPMCB") offer investment products, which may include bank managed investment accounts and custody, as part of its trust and fiduciary services. Other investment products and services, such as brokerage and advisory accounts, are offered through J.P. Morgan Securities LLC ("JPMS"), a member of FINRA and S.I.P.C. Insurance products are made available through Chase Insurance Agency, Inc. (CIA), a licensed insurance agency, doing business as Chase Insurance Agency Services, Inc. in Florida. JPMCB, JPMS and CIA are affiliated companies under the common control of JPM. Products not available in all states.

In Germany, this material is issued by J.P. Morgan S.E., with its registered office at Taunustor 1 (TaunusTurm) 6 0 3 1 0 Frankfurt am Main, Germany, authorized by the Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB). In Luxembourg, this material is issued by J.P. Morgan S.E. -- Luxembourg Branch, with registered office at European Bank and Business Centre, 6 route de Treves, L-26 33, Senningerberg, Luxembourg, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. -- Luxembourg Branch is also supervised by the Commission De Surveillance du Secteur Financier (CSSF); registered under R.C.S Luxembourg B 2 5 5 9 3 8.

In the United Kingdom, this material is issued by J.P. Morgan S.E. -- London Branch, registered office at 25 Bank Street, Canary Wharf, London, E14 5JP, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan London Branch is also supervised by the Financial Conduct Authority and Prudential Regulation Authority. In Spain, this material is distributed by J.P. Morgan S.E. Sucursal en Espana, with registered office at Paseo de la Castellana -- 31, 2 8 0 4 6 Madrid, Spain, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. Sucursal en Espana is also supervised by the Spanish Securities Market Commission (CNMV); registered with Bank of Spain as a branch of J.P. Morgan S.E. under code 16 57.

In Italy, this material is distributed by J.P. Morgan S.E. -- Milan Branch, with its registered office at Via Cordusio, n.3, Milan 2 0 1 2 3, Italy, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. -- Milan Branch is also supervised by Bank of Italy and the Commissione Nazionale per le Societa e la Borsa (CONSOB); registered with Bank of Italy as a branch of J.P. Morgan S.E. under code 80 76; Milan Chamber of Commerce Registered Number R.E.A. M.I. 2 5 3 6 3 2 5.

In the Netherlands, this material is distributed by J.P. Morgan S.E. -- Amsterdam Branch, with registered office at World Trade Centre, Tower B, Strawinskylaan 1 1 3 5, 1 0 7 7 X X Amsterdam, The Netherlands, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. -- Amsterdam Branch is also supervised by De Nederlandsche Bank (DNB) and the Autoriteit Financiele Markten (AFM) in the Netherlands. Registered with the Kamer van Koophandel as a branch of J.P. Morgan S.E. under registration number 7 2 6 1 0 2 2 0.

In Denmark, this material is distributed by J.P. Morgan S.E. -- Copenhagen Branch, filial af J.P. Morgan S.E. Tyskland, with registered office at Kalvebod Brygge 39-41, 15 60 Kobenhavn V, Denmark, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. -- Copenhagen Branch, filial af J.P. Morgan S.E., Tyskland is also supervised by Finanstilsynet (Danish FSA) and is registered with Finanstilsynet as a branch of J.P. Morgan S.E. under code 29010.

In Sweden, this material is distributed by J.P. Morgan S.E. -- Stockholm Bankfilial with registered office at Hamngatan 15, Stockholm, 1 1 1 4 7, Sweden, authorized by Bundesanstalt fur Finanzdienstleistungsaufsicht (BaFin) and jointly supervised by the BaFin, the German Central Bank (Deutsche Bundesbank) and the European Central Bank (ECB); J.P. Morgan S.E. -- Stockholm Bankfilial is also supervised by Finansinspektionen (Swedish FSA); registered with Finansinspektionen as a branch of J.P. Morgan S.E. In France, this material is distributed by JPMorgan Chase Bank N.A.--Paris Branch, registered office at 14, Place Vendome, Paris 7 5 0 0 1, France, registered at the Registry of the Commercial Court of Paris under number 7 1 2, 0 4 1, 3 3 4 and licensed by the Autorite de controle prudentiel et de resolution (ACPR) and supervised by the ACPR and the Autorite des Marches Financiers. In Switzerland, this material is distributed by J.P. Morgan (Suisse) S.A., with registered address at rue du Rhone, 35, 12 0 4, Geneva, Switzerland, which is authorised and supervised by the Swiss Financial Market Supervisory Authority (FINMA) as a bank and securities dealer in Switzerland.

In Hong Kong, this material is distributed by JPMCB, Hong Kong branch. JPMCB Hong Kong branch is regulated by the Hong Kong Monetary Authority and the Securities and Futures Commission of Hong Kong. In Hong Kong, we will cease to use your personal data for our marketing purposes without charge if you so request. In Singapore, this material is distributed by JPMCB, Singapore branch. JPMCB, Singapore branch is regulated by the Monetary Authority of Singapore. Dealing and advisory services and discretionary investment management services are provided to you by JPMCB, Hong Kong/Singapore branch (as notified to you). Banking and custody services are provided to you by JPMCB Singapore branch.

The contents of this document have not been reviewed by any regulatory authority in Hong Kong, Singapore or any other jurisdictions. You are advised to exercise caution in relation to this document. If you are in any doubt about the contents of this document, you should obtain independent professional advice. For materials which constitute product advertisement under the Securities and Futures Act and the Financial Advisors Act, this advertisement has not been reviewed by the Monetary Authority of Singapore. JPMorgan Chase Bank, N.A., a national banking association chartered under the laws of the United States, and as a body corporate, its shareholder's liability is limited.

With respect to countries in Latin America, the distribution of this material may be restricted in certain jurisdictions. We may offer and/or sell to you securities or other financial instruments which may not be registered under, and are not the subject of a public offering under, the securities or other financial regulatory laws of your home country. Such securities or instruments are offered and/or sold to you on a private basis only. Any communication by us to you regarding such securities or instruments, including without limitation the delivery of a prospectus, term sheet or other offering document, is not intended by us as an offer to sell or a solicitation of an offer to buy any securities or instruments in any jurisdiction in which such an offer or a solicitation is unlawful.

Furthermore, such securities or instrument may be subject to certain regulatory and/or contractual restrictions on subsequent transfer by you, and you are solely responsible for ascertaining and complying with such restrictions. To the extent this content makes reference to a fund, the Fund may not be publicly offered in any Latin American country, without previous registration of such fund's securities in compliance with the laws of the corresponding jurisdiction. Public offering of any security, including the shares of the Fund, without previous registration at Brazilian Securities and Exchange Commission -- CVM is completely prohibited. Some products or services contained in the materials might not be currently provided by the Brazilian and Mexican platforms.

References to "J.P. Morgan" are to JPM, its subsidiaries and affiliates worldwide. "J.P. Morgan Private Bank" is the brand name for the private banking business conducted by JPM. This material is intended for your personal use and should not be circulated to or used by any other person, or duplicated for non-personal use, without our permission. If you have any questions or no longer wish to receive these communications, please contact your J.P. Morgan team. JPMorgan Chase Bank, N.A. (JPMCBNA) (ABN 43, 0 7 4, 1 1 2, 0 1 1/AFS License number 2 3 8 3 6 7) is regulated by the Australian Securities and Investment Commission and the Australian Prudential Regulation Authority. Material provided by JPMCBNA in Australia is to "wholesale clients" only. For the purposes of this paragraph the term "wholesale client" has the meaning given in section 7 61G of the Corporations Act 2001 (Cth). Please inform us if you are not a Wholesale Client now or if you cease to be a Wholesale Client at any time in the future.

JPMS is a registered foreign company (overseas) (ARBN 1 0 9 2 9 3 6 1 0) incorporated in Delaware, U.S.A., Under Australian Financial Services licensing requirements, carrying on a financial services business in Australia requires a financial services provider, such as J.P. Morgan Securities LLC (JPMS), to hold an Australian Financial Services License (AFSL), unless an exemption applies. JPMS is exempt from the requirement to hold an AFSL under the Corporations Act 2001 (Cth) (Act) in respect of financial services it provides to you, and is regulated by the SEC, FINRA and CFTC under US laws, which differ from Australian laws. Material provided by JPMS in Australia is to "wholesale clients" only. The information provided in this material is not intended to be, and must not be, distributed or passed on, directly or indirectly, to any other class of persons in Australia. For the purposes of this paragraph the term "wholesale client" has the meaning given in section 7 61G of the Act. Please inform us immediately if you are not a Wholesale Client now or if you cease to be a Wholesale Client at any time in the future.

This material has not been prepared specifically for Australian investors. It: may contain references to dollar amounts which are not Australian dollars; may contain financial information which is not prepared in accordance with Australian law or practices; may not address risks associated with investment in foreign currency denominated investments; and does not address Australian tax issues. Copyright 2023 JPMorgan Chase & Co. All rights reserved.

For J.P. Morgan Wealth Management Clients: Purpose of this material: This material is for informational purposes only. The views, opinions, estimates and strategies expressed herein constitute Michael Cembalest's judgment based on current market conditions and are subject to change without notice, and may differ from those expressed by other areas of J.P. Morgan. This information in no way constitutes J.P. Morgan Research and should not be treated as such. J.P. Morgan is committed to making our products and services accessible to meet the financial services needs of all our clients. If you are a person with a disability and need additional support, please contact your J.P. Morgan representative or email us at accessibility dot support @ j p morgan dot com for assistance.

J.P. Morgan Wealth Management is a business of JPMorgan Chase & Co., which offers investment products and services through J.P. Morgan Securities LLC (JPMS), a registered broker-dealer and investment advisor, member FINRA and S.I.P.C., Annuities are made available through Chase Insurance Agency, Inc. (CIA), a licensed insurance agency, doing business as Chase Insurance Agency Services, Inc. in Florida. Certain custody and other services are provided by JPMorgan Chase Bank, N.A. (JPMCB). JPMS, CIA and JPMCB are affiliated companies under the common control of JPMorgan Chase & Co. Products not available in all states. This material is intended for your personal use and should not be circulated to or used by any other person, or duplicated for non-personal use, without our permission. If you have any questions or no longer wish to receive these communications, please contact your J.P. Morgan representative.

Legal Entity, Brand & Regulatory Information. The views, opinions and estimates expressed herein constitute Michael Cembalest's judgment based on current market conditions and are subject to change without notice. Information herein may differ from those expressed by other areas of J.P. Morgan. This information in no way constitutes J.P. Morgan Research and should not be treated as such. The views contained herein are not to be taken as an advice or a recommendation to buy or sell any investment in any jurisdiction and there is no guarantee that any of the views expressed will materialize. Any forecasts, figures, opinions or investment techniques and strategies set out are for information purposes only; based on certain assumptions, current market conditions and are subject to change without prior notice. All information presented herein is considered to be accurate at the time of writing, but no warranty of accuracy is given and no liability in respect of any error or omission is accepted. This material does not contain sufficient information to support an investment decision and it should not be relied upon by you in evaluating the merits of investing in any securities or products.

In addition, investors should make an independent assessment of the legal, regulatory, tax, credit, and accounting implications and determine, together with their own professional advisors, if any investment mentioned herein is believed to be suitable to their personal goals. Investors should ensure that they obtain all available relevant information before making any investment. It should be noted that investment involves risks, the value of the investments and the income from them may fluctuate in accordance with market conditions and investors may not get back the full amount invested. Both past performance and yield may not be a reliable guide to future performance.

Non-affiliated entities mentioned are for informational purposes only and should not be construed as an endorsement or sponsorship of J.P. Morgan Chase & Co. or its affiliates. J.P. Morgan Wealth Management is the brand for the wealth management business of JPMorgan Chase and Co. and its affiliates worldwide, J.P. Morgan Institutional Investments, Inc. For J.P. Morgan Private Bank Clients: Please read the Legal Disclaimer (link). For J.P. Morgan Asset Management Clients: Please read the Legal Disclaimer (link). For J.P. Morgan Wealth Management Clients: Please read the Legal Disclaimer (link). For Chase Private Clients: Please read the Legal Disclaimer (link). Copyright 2023 JPMorgan Chase & Co. All rights reserved.

