And so, they’ve started to see the benefits of doing things themselves. So, culture change, I think, has been one of the biggest things that we’ve achieved in the past few years since I joined. Second, we built a whole set of capabilities, which we call common capabilities. Things like how do you configure new workflows? How do you make decisions using spreadsheets and decision models versus coding them into systems? So, you can configure it, you can modify it, and you can do things more effectively. And then tools like checklists, which can again be put into systems and automated in a few minutes, in many cases. Today, we have millions of tasks and millions of decisions being executed through these capabilities, which has fundamentally changed our ability to provide automation at scale.
And last but not least, AI and machine learning now play an important role in the underpinnings of everything that we do in operations and client services. For example, we do a lot of process analytics. We do load balancing. So, when a client calls, which agent or which group of people do we direct that call to so that they can service the client most effectively? In the space of payments, we do a lot with machine learning. Fraud detection is another example. And I will say that I’m so glad we’ve had the time to invest in and think through all of these foundational capabilities. So, we are now poised and ready to take on the next big leap of changes that are right now at our fingertips, especially in the evolving world of AI and machine learning and, of course, the public cloud.
Laurel: Excellent. Yeah, you’ve certainly outlined the diversity of the firm’s offerings. So, when building new technologies and platforms, what are some of the working methodologies and practices that you employ to build at scale and then optimize those workflows?
Vrinda: Yeah, as I said before, the private bank has a lot of offerings, but then amplify that with all the other offerings that the JPMorgan Chase franchise has: a commercial bank, a corporate and investment bank, and a consumer and community bank. Many of our clients cross all of these lines of business. That brings a lot of benefits, but it also adds complexity. And one of the things that I personally obsess over is how we simplify things, not add to the complexity. Second is a mantra of reuse. Don’t reinvent, because it’s easy for technologists to look at a piece of software and say, “That’s great, but I can build something better.” Instead, there are three things that I ask people, and our organization collectively with our partners, to focus on. First of all, look at the business outcome. We coach our teams that success and innovation do not come from rebuilding something that somebody has already built, but instead from leveraging it and taking the next leap with additional features upon it to create high-impact business outcomes.
So, focusing on outcome is number one. Second, if you are given a problem, try to look at it from a bigger picture to see whether you can solve the pattern instead of that specific problem. I’ll give you an example. We built a chatbot called Casey. It’s one of the most loved products in our private bank right now. And Casey doesn’t do anything really complex, but what it does is solve a very common pattern: ask a few simple questions, get the inputs, join them with data services and execution services, and complete the task. And we have hundreds of thousands of tasks that Casey performs every single day. One of them is a very simple piece of functionality: a client wants a bank reference letter. Casey is called upon to produce one thousands of times a month. And what used to take three or four hours now takes a few seconds.
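The pattern Vrinda describes (ask a few questions, join the answers with data services and execution services, complete the task) can be sketched roughly as follows. Every name here is a hypothetical stand-in for illustration, not JPMorgan’s actual systems or the real Casey implementation:

```python
# A rough sketch of the "ask questions -> join data -> execute" pattern
# from the interview. All functions and data here are invented stand-ins.

def fetch_client_data(client_id, directory):
    """Stand-in for a data service: look up the client record."""
    return directory[client_id]

def render_letter(record):
    """Stand-in for an execution service: produce a bank reference letter."""
    return (f"To whom it may concern: {record['name']} has held an account "
            f"with us since {record['since']}.")

def handle_task(inputs, directory):
    """Join the user's answers with the data and execution services."""
    record = fetch_client_data(inputs["client_id"], directory)
    return render_letter(record)

# Example inputs a chatbot might gather from a few simple questions.
directory = {"C-001": {"name": "A. Client", "since": 2015}}
letter = handle_task({"client_id": "C-001"}, directory)
print(letter)
```

The point of solving the pattern rather than the specific problem is that only the two stand-in services change from task to task; the orchestration in `handle_task` stays the same.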
So, it suddenly changes the outcome, changes productivity, and changes the happiness of people who were doing things that they themselves felt were mundane. So, solving the pattern, again, is important. And last but not least, focusing on data is the other thing that’s helped us. Nothing can be improved if you don’t measure it. So, to give you an example with processes, the first thing we did was pick the most complex processes and map them out. We understood each step in the process, we understood the purpose of each step, and we measured the time taken in each step. We started to question: do you really need this approval from this person? We observed that in the past six months, not one single thing had been rejected. So, is that even a meaningful approval to begin with?
Then we questioned whether that process could be enhanced with AI: could AI automatically say, “Yes, please approve,” or “There’s a risk in this, do not approve,” or “It’s okay, but it needs a human review”? And then we made those changes in our systems and flows, and obsessively measured the impact of those changes. All of these have given us a lot of benefits. And I would say we’ve made significant progress just with these three principles of focusing on outcomes, focusing on solving the pattern, and focusing on data and measurement, in areas like client onboarding, maintaining client data, et cetera. So, this has been very helpful for us because in a bank like ours, scale is super important.
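The measurement step described above, flagging approval steps that never reject anything, can be sketched with toy data. The event log and field names below are invented for illustration; a real process-mining pipeline would read from audit systems, not a hardcoded list:

```python
from collections import defaultdict

# Toy audit log of approval decisions, one row per decision.
# The data and field names are invented for illustration.
log = [
    {"step": "manager_approval", "outcome": "approved"},
    {"step": "manager_approval", "outcome": "approved"},
    {"step": "manager_approval", "outcome": "approved"},
    {"step": "risk_review", "outcome": "approved"},
    {"step": "risk_review", "outcome": "rejected"},
]

def rejection_rates(log):
    """Share of decisions at each step that were rejections."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for row in log:
        totals[row["step"]] += 1
        if row["outcome"] == "rejected":
            rejects[row["step"]] += 1
    return {step: rejects[step] / totals[step] for step in totals}

# Steps that never rejected anything over the observation window are
# candidates for automation (or removal, as the interview suggests).
rates = rejection_rates(log)
candidates = [step for step, rate in rates.items() if rate == 0.0]
print(candidates)
```

With this toy log, `manager_approval` has a zero rejection rate and gets flagged, while `risk_review` (which rejects half the time) clearly still earns its place in the flow.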
Laurel: Yeah, that’s a really great explanation. So, when new challenges do come along, like moving to the public cloud, how do you balance the opportunities of that scale, but also computing power and resources within the cost of the actual investment? How do you ensure that the shifts to the cloud are actually both financially and operationally efficient?
Vrinda: Great question. So obviously every technologist in the world is super excited about the advent of the public cloud. It gives us the power of agility and economies of scale. We at JPMorgan Chase are able to leverage world-class, evolving capabilities at our fingertips. We also have the ability to partner with talented technologists at the cloud providers, and many service providers that we work with have advanced solutions that are available first on the public cloud. We are eager to get our hands on those. But with that comes a lot of responsibility, because as a bank, we have to worry about security, client data, privacy, resilience, and how we are going to operate in a multi-cloud environment, because some data has to remain on-prem in our private cloud. So, there’s a lot of complexity, and we have engineers across the board who think a lot about this; their day and night jobs are to try and figure this out.
As we think about moving to the public cloud in my area, I personally spend time thinking in depth about how we could build architectures that are financially efficient. And the reason I bring that up is that traditionally, in the data centers where our hardware and software have been hosted, developers and architects haven’t had to worry about costs: you start by sizing the infrastructure, you order that infrastructure, it’s captive, it remains in the data center, and you can expand it, but it’s a one-time cost each time you upgrade. With the cloud, that situation changes dramatically. It’s both an opportunity and a risk. So, a financial lens becomes super important right at the outset. Let me give you a couple of examples of what I mean. Developers in the public cloud have a lot of power, and with that power comes responsibility.
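The financial lens Vrinda describes can be made concrete with a back-of-the-envelope model: a data-center mindset sizes for peak and pays a fixed cost, while pay-per-use pricing rewards scaling to actual load. All prices and workload numbers below are invented for illustration, not any provider’s real rates:

```python
# A minimal sketch of the "financial lens" on cloud architecture choices:
# compare an always-on fleet sized for peak with one scaled to hourly load.
# The hourly rate and demand profile are hypothetical.

HOURLY_RATE = 0.40       # hypothetical per-instance hourly price, in dollars
HOURS_PER_MONTH = 730

def always_on_cost(instances):
    """Cost of running a fixed fleet all month, data-center style."""
    return instances * HOURLY_RATE * HOURS_PER_MONTH

def autoscaled_cost(hourly_demand):
    """Pay only for the instances each hour actually needed."""
    return sum(n * HOURLY_RATE for n in hourly_demand)

# Peak demand is 10 instances, but most hours need far fewer.
demand = [10] * 73 + [2] * 657   # 730 hours in total
fixed = always_on_cost(10)       # sized for peak, as in a data center
scaled = autoscaled_cost(demand)
print(f"always-on: ${fixed:,.0f}  autoscaled: ${scaled:,.0f}")
```

Even this toy comparison shows why architects now need cost in mind at design time: the always-on fleet costs several times the autoscaled one for the same peak capacity, and in the cloud that difference recurs every month rather than being a one-time purchase.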
The Download: COP28 controversy and the future of families
The United Arab Emirates is one of the world’s largest oil producers. It’s also the site of this year’s UN COP28 climate summit, which kicks off later this week in Dubai.
It’s a controversial host, but the truth is that there’s massive potential for oil and gas companies to help address climate change, both by cleaning up their operations and by investing their considerable wealth and expertise into new technologies.
The problem is that these companies also have a vested interest in preserving the status quo. If they want to be part of a net-zero future, something will need to change—and soon. Read the full story.
How reproductive technology can reverse population decline
Birth rates have been plummeting in wealthy countries, well below the “replacement” rate. Even in China, a dramatic downturn in the number of babies has officials scrambling, as its population growth turns negative.
So, what’s behind the baby bust and can new reproductive technology reverse the trend? MIT Technology Review is hosting a subscriber-only Roundtables discussion on how innovations from the lab could affect the future of families at 11am ET this morning, featuring Antonio Regalado, our biotechnology editor, and entrepreneur Martín Varsavsky, founder of fertility clinic Prelude Fertility. Don’t miss out—make sure you register now.
Unpacking the hype around OpenAI’s rumored new Q* model
While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*.
Social media is full of speculation and excessive hype, so I called some experts to find out how big a deal any breakthrough in math and AI would really be.
Researchers have for years tried to get AI models to solve math problems. Language models like ChatGPT and GPT-4 can do some math, but not very well or reliably. We currently don’t have the algorithms or even the right architectures to be able to solve math problems reliably using AI, says Wenda Li, an AI lecturer at the University of Edinburgh. Deep learning and transformers (a kind of neural network), which is what language models use, are excellent at recognizing patterns, but that alone is likely not enough, Li adds.
Math is a benchmark for reasoning, Li says. A machine that is able to reason about mathematics could, in theory, be able to learn to do other tasks that build on existing information, such as writing computer code or drawing conclusions from a news article. Math is a particularly hard challenge because it requires AI models to have the capacity to reason and to really understand what they are dealing with.
A generative AI system that could reliably do math would need to have a really firm grasp on concrete definitions of particular concepts that can get very abstract. A lot of math problems also require some level of planning over multiple steps, says Katie Collins, a PhD researcher at the University of Cambridge, who specializes in math and AI. Indeed, Yann LeCun, chief AI scientist at Meta, posted on X and LinkedIn over the weekend that he thinks Q* is likely to be “OpenAI attempts at planning.”
People who worry about whether AI poses an existential risk to humans, one of OpenAI’s founding concerns, fear that such capabilities might lead to rogue AI. Safety concerns might arise if such AI systems are allowed to set their own goals and start to interface with a real physical or digital world in some ways, says Collins.
But while math capability might take us a step closer to more powerful AI systems, solving these sorts of math problems doesn’t signal the birth of a superintelligence.
“I don’t think it immediately gets us to AGI or scary situations,” says Collins. It’s also very important to underline what kind of math problems AI is solving, she adds.
The Download: unpacking OpenAI Q* hype, and X’s financial woes
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Unpacking the hype around OpenAI’s rumored new Q* model
Ever since last week’s dramatic events at OpenAI, the rumor mill has been in overdrive about why the company’s board tried to oust CEO Sam Altman.
So what’s actually going on? And why is grade-school math such a big deal? Our senior AI reporter Melissa Heikkilä called some experts to find out how big of a deal any such breakthrough would really be. Here’s what they had to say.
This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 X is hemorrhaging millions in advertising revenue
Internal documents show the company is in an even worse position than previously thought. (NYT $)
+ Misinformation ‘super-spreaders’ on X are reportedly eligible for payouts from its ad revenue sharing program. (The Verge)
+ It’s not just you: tech billionaires really are becoming more unbearable. (The Guardian)
2 The brakes seem to now be off on AI development
With Sam Altman’s return to OpenAI, the ‘accelerationists’ have come out on top. (WSJ $)
+ Inside the mind of OpenAI’s chief scientist, Ilya Sutskever. (MIT Technology Review)
3 How Norway got heat pumps into two-thirds of its households
Mostly by making it the cheaper choice for people. (The Guardian)
+ Everything you need to know about the wild world of heat pumps. (MIT Technology Review)
4 How your social media feeds shape how you see the Israel-Gaza war
Masses of content are being pumped out, rarely with any nuance or historical understanding. (BBC)
+ China tried to keep kids off social media. Now the elderly are hooked. (Wired $)
5 US regulators have surprisingly little scope to enforce Amazon’s safety rules
As demonstrated by the measly $7,000 fine issued by Indiana after a worker was killed by warehouse machinery. (WP $)
6 How Ukraine is using advanced technologies on the battlefield
The Pentagon is using the conflict as a testbed for some of the 800-odd AI-based projects it has in progress. (AP $)
+ Why business is booming for military AI startups. (MIT Technology Review)
7 Shein is trying to overhaul its image, with limited success
Its products seem too cheap to be ethically sourced—and it doesn’t take kindly to people pointing that out. (The Verge)
+ Why my bittersweet relationship with Shein had to end. (MIT Technology Review)
8 Every app can be a dating app now
As people turn their backs on the traditional apps, they’re finding love in places like Yelp, Duolingo and Strava. (WSJ $)
+ Job sharing apps are also becoming more popular. (BBC)
9 People can’t get enough of work livestreams on TikTok
It’s mostly about the weirdly hypnotic quality of watching people doing tasks like manicures or frying eggs. (The Atlantic $)
10 A handy guide to time travel in the movies
Whether you prioritize scientific accuracy or entertainment value, this chart has got you covered. (Ars Technica)
Quote of the day
“It’s in the AI industry’s interest to make people think that only the big players can do this—but it’s not true.”
—Ed Newton-Rex, who just resigned as VP of audio at Stability.AI, says in an interview with The Next Web that the idea that generative AI models can only be built by scraping artists’ work is a myth.
The big story
The YouTube baker fighting back against deadly “craft hacks”
Ann Reardon is probably the last person you’d expect to be banned from YouTube. A former Australian youth worker and a mother of three, she’s been teaching millions of subscribers how to bake since 2011.
However, more recently, Reardon has been using her platform to warn people about dangerous new “craft hacks” that are sweeping YouTube, such as poaching eggs in a microwave, bleaching strawberries, and using a Coke can and a flame to pop popcorn.
Reardon was banned because she got caught up in YouTube’s messy moderation policies. In the process, her case exposed a failing in the system: how can a warning about harmful hacks be deemed dangerous when the hack videos themselves are not? Read the full story.
We can still have nice things
+ London’s future skyline is looking increasingly like New York’s.
+ Whovians will never agree on who has the honor of being the best Doctor.
+ How to get into mixing music like a pro.
+ This Japanese sea worm has a neat trick up its sleeve—splitting itself in two in the quest for love.
+ Did you know there’s a mysterious tunnel under Seoul?