Using technology to power the future of banking
It required us to roll out video conferencing globally to our employees in the span of a weekend, which is not for the faint of heart. When you think about the 200,000 employees that we have, the fact that we could roll this out at that pace speaks not just to the technical capabilities we have more broadly in the firm, but to how adaptable we are when these sorts of things occur. That was all because we wanted to make sure that our people could serve our customers the best way that we could. Remember, we have people that work in our call centers, and they were impacted, and we have people that work in branches, and they were impacted, and so on.
One of the things that became really clear when the pandemic occurred was how quickly our teams could deploy new software. Many companies talk about being able to build quickly and being agile. Take the Paycheck Protection Program, which let us offer government-backed loans to small businesses that were seeing far less traffic. We had about a week to put this in place, and we stood up that portal within that week. We had it fully automated in a matter of two-ish weeks, and we were able to provide more funding than any other lender in both 2020 and 2021, which was just incredible. The fact that we could build that so quickly, because of the technology we’ve invested in over the past years, and scale it to such a large volume for our customers was huge.
But we were also able to make some fundamental changes in mobile. We were able to enhance things that might seem simple. We have a product inside of our mobile application called QuickDeposit, which lets you deposit a check. But as many know, checks are sometimes written for large amounts, and traditionally we asked people to come into a branch for those to help prevent fraud. Because of the technology that we have, we were able to raise limits in a way that still let us manage fraud appropriately, so customers who formerly would have had to come into a branch or an ATM could make those deposits electronically. Those are the kinds of things that we’ve seen change, but the pace at which we moved isn’t limited to the Chase part of the business; we saw this across all of J.P. Morgan.
There’s one piece that I think is important on this, Laurel. I was in a meeting, and here I am, a new person in the organization working on the Paycheck Protection Program. I recall there being somebody on Zoom. We were having a conversation, and because I was new and they were in the meeting, I assumed they were on my team. The person said, “Oh no, I’m not on your team, but I know you’re new and you needed assistance. So here I am to help you navigate.” That has stuck with me about the culture of this organization and how we focus on the customer both externally and internally, to really make sure that we are providing the best service that we possibly can.
Laurel: That certainly requires an agile mindset. So, how is JPMorgan Chase transforming into an agile organization? You’ve laid out a couple of examples. Clearly you would not have been able to respond to the US government’s Paycheck Protection Program that quickly if you hadn’t already been working on a number of these opportunities and capabilities to be more agile. So, what lessons have you learned along the way, and how have your teams and customers benefited from this shift?
Gill: Oh, yes. An agile transformation is a really hard thing to do. Many people are making agile transformations, so it sounds like it should be easy: you have your scrums, you have your various ceremonies and retrospectives, you use a tool to manage your backlog, and you’re golden. One of the big challenges that we as a company had faced in JPMorgan was that we were organized more around our software and platforms than around our customers and their experiences. That made it really frustrating for teams, because it meant that you likely needed 10, maybe 12 different organizations to agree on building something. It wasn’t clear who the owner was. The architectures would sometimes be a bit more fragile because you were working through multiple teams. If you want to move quickly or you want to innovate, that’s not a model in which you’re able to operate. You can force it, but it requires many more meetings. It’s difficult to know who the decision makers are. You move more slowly, and sometimes an application or a solution looks like many teams built it. There’s Conway’s Law (you may have mentioned this before on other podcasts): Dr. Conway observed that your software will reflect how your organization is structured. That’s really what we had seen. So, as opposed to just trying to find a way to navigate around it, we said as an organization, “We’re truly going to become agile, we’re going to accept Conway’s Law, and we’re going to organize around our products.”
In the community and consumer bank, we organized around 100 products, so we have a thousand teams aligned around these products. A product, for example, is something like account opening: I want to open an account on mobile or web. There is one product for this, and there is one product leader, one design leader, one data leader, and one technology leader accountable for it. Now we know who can manage the backlog. Now we know who can work through any kind of architectural decision. Now we understand who is accountable for ensuring that we innovate and understand that customer’s needs. That has allowed us to pivot quickly, because if I need to move, I can work with the account opening team; they can make the decisions, they can manage the backlog, and they can adapt when we have things like the Paycheck Protection Program or other efforts that come up. But it also gives more purpose to the individual teams, because they set their own destiny, they have more autonomy, and they’re working together across tech, product, design, and data, so we can build the right solutions. This creates a great experience for people in the organization.
By the way, the whole of JPMC is moving to operate this way. This lets us not just move more quickly; it gives our employees better work-life balance and less frustration, because it’s easier to know where you stand. You have that purpose as part of a particular team. I mentioned that we can respond more quickly when there is a challenge, but it’s not just challenges like PPP or a pandemic that we have to address, Laurel. There are places where our customers’ needs are changing every single day. By organizing around products this way, we can understand the data from our customers, we can experiment, and we can adapt in a truly agile fashion to what our customers really need, versus what we think they might need, which risks building something that doesn’t really resonate with them. It allows us to operate in a truly agile fashion, which we were not able to do before, and it’s quite incredible to make a change like this at such scale.
The Download: how we can limit global warming, and GPT-4’s early adopters
Time is running short to limit global warming to 1.5 °C (2.7 °F) above preindustrial levels, but there are feasible and effective solutions on the table, according to a new UN climate report.
Despite decades of warnings from scientists, global greenhouse-gas emissions are still climbing, hitting a record high in 2022. If humanity wants to limit the worst effects of climate change, annual greenhouse-gas emissions will need to be cut by nearly half between now and 2030, according to the report.
That will be complicated and expensive. But it is nonetheless doable, and the UN listed a number of specific ways we can achieve it. Read the full story.
How people are using GPT-4
Last week was intense for AI news, with a flood of major product releases from a number of leading companies. But one announcement outshined them all: OpenAI’s new multimodal large language model, GPT-4. William Douglas Heaven, our senior AI editor, got an exclusive preview. Read about his initial impressions.
Unlike OpenAI’s viral hit ChatGPT, which is freely accessible to the general public, GPT-4 is currently accessible only to developers. It’s still early days for the tech, and it’ll take a while for it to feed through into new products and services. Still, people are already testing its capabilities out in the open. Read about some of the most fun and interesting ways they’re doing that, from hustling up money to writing code to reducing doctors’ workloads.
Google just launched Bard, its answer to ChatGPT—and it wants you to make it better
Google has a lot riding on this launch. Microsoft partnered with OpenAI to make an aggressive play for Google’s top spot in search. Meanwhile, Google blundered straight out of the gate when it first tried to respond. In a teaser clip for Bard that the company put out in February, the chatbot was shown making a factual error. Google’s market value fell by $100 billion overnight.
Google won’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But it will say that Bard is built on top of a new version of LaMDA, Google’s flagship large language model. Google says it will update Bard as the underlying tech improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to give more useful and less toxic responses.
Google has been working on Bard for a few months behind closed doors but says that it’s still an experiment. The company is now making the chatbot available for free to people in the US and the UK who sign up to a waitlist. These early users will help test and improve the technology. “We’ll get user feedback, and we will ramp it up over time based on that feedback,” says Google’s vice president of research, Zoubin Ghahramani. “We are mindful of all the things that can go wrong with large language models.”
But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and she thinks pitching Bard as an experiment “is a PR trick that larger companies use to reach millions of customers while also removing themselves from accountability if anything goes wrong.”
Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button that sits below Bard’s chat widget says “Google It.” The idea is to nudge users to head to Google Search to check Bard’s answers or find out more. “It’s one of the things that help us offset limitations of the technology,” says Jack Krawczyk, who leads product for Bard at Google.
“We really want to encourage people to actually explore other places, sort of confirm things if they’re not sure,” says Ghahramani.
This acknowledgement of Bard’s flaws has shaped the chatbot’s design in other ways, too. Users can interact with Bard only a handful of times in any given session. This is because the longer large language models engage in a single conversation, the more likely they are to go off the rails. Many of the weirder responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.
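As a rough illustration of that design choice (a hypothetical sketch, not Google’s actual implementation, and the turn limit shown is made up), a chat wrapper can simply cap the number of turns per session and force the user to start a fresh conversation once the limit is hit:

```python
class LimitedChatSession:
    """Hypothetical sketch: cap turns per conversation so long
    exchanges (where chatbots tend to go off the rails) are cut short."""

    def __init__(self, model_fn, max_turns=5):
        self.model_fn = model_fn    # callable: prompt -> reply
        self.max_turns = max_turns  # illustrative limit, not Bard's real one
        self.turns = 0

    def send(self, prompt):
        if self.turns >= self.max_turns:
            return None  # session exhausted; caller must start a new session
        self.turns += 1
        return self.model_fn(prompt)


# Stubbed model for illustration; a real deployment would call an LLM here.
session = LimitedChatSession(lambda p: "reply to " + p, max_turns=2)
session.send("hello")  # returns a reply
session.send("again")  # returns a reply
session.send("more")   # returns None: limit reached
```

Starting a new session resets the context entirely, which is exactly what keeps the model from compounding errors accumulated over a long exchange.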
Google won’t confirm what the conversation limit will be for launch, but it will be set quite low for the initial release and adjusted depending on user feedback.
Google is also playing it safe in terms of content. Users will not be able to ask for sexually explicit, illegal, or harmful material (as judged by Google) or personal information. In my demo, Bard would not give me tips on how to make a Molotov cocktail. That’s standard for this generation of chatbot. But it would also not provide any medical information, such as how to spot signs of cancer. “Bard is not a doctor. It’s not going to give medical advice,” says Krawczyk.
Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of every response, which Google calls “drafts.” Users can click between them and pick the response they prefer, or mix and match between them. The aim is to remind people that Bard cannot generate perfect answers. “There’s the sense of authoritativeness when you only see one example,” says Krawczyk. “And we know there are limitations around factuality.”
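To sketch the idea in code (a hypothetical stand-in, not Google’s implementation): sampling the same prompt several times yields alternative “drafts” the user can compare, which surfaces the model’s variability instead of hiding it behind a single authoritative-looking answer. The `generate_drafts` helper and its stub sampler below are invented for illustration:

```python
def generate_drafts(sample_fn, prompt, n=3):
    """Hypothetical sketch of a 'drafts' UI: call a sampling model
    several times for one prompt and return all candidate responses."""
    return [sample_fn(prompt, draft=i) for i in range(n)]


# Stubbed sampler for illustration; a real one would sample an LLM
# with nonzero temperature so the drafts actually differ.
drafts = generate_drafts(
    lambda prompt, draft: f"{prompt} (draft {draft})", "answer"
)
```

The user-facing layer would then render all three candidates side by side and let the reader pick or combine them.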
How AI experts are using GPT-4
Reid Hoffman, the cofounder of LinkedIn, got access to the system last summer and has since been writing up his thoughts on the different ways the AI model could be used in education, the arts, the justice system, journalism, and more. In the book, which includes copy-pasted extracts from his interactions with the system, he outlines his vision for the future of AI, uses GPT-4 as a writing assistant to get new ideas, and analyzes its answers.
A quick final word … GPT-4 is the cool new shiny toy of the moment for the AI community. There’s no denying it is a powerful assistive technology that can help us come up with ideas, condense text, explain concepts, and automate mundane tasks. That’s a welcome development, especially for white-collar knowledge workers.
However, it’s notable that OpenAI itself urges caution around use of the model and warns that it poses several safety risks, including infringing on privacy, fooling people into thinking it’s human, and generating harmful content. It also has the potential to be used for other risky behaviors we haven’t encountered yet. So by all means, get excited, but let’s not be blinded by the hype. At the moment, there is nothing stopping people from using these powerful new models to do harmful things, and nothing to hold them accountable if they do.
Chinese tech giant Baidu just released its answer to ChatGPT
So. Many. Chatbots. The latest player to enter the AI chatbot game is Chinese tech giant Baidu. Late last week, Baidu unveiled a new large language model called Ernie Bot, which can solve math questions, write marketing copy, answer questions about Chinese literature, and generate multimedia responses.
A Chinese alternative: Ernie Bot (the name stands for “Enhanced Representation through kNowledge IntEgration”; its Chinese name is 文心一言, or Wenxin Yiyan) performs particularly well on tasks specific to Chinese culture, like explaining a historical fact or writing a traditional poem. Read more from my colleague Zeyi Yang.
Even Deeper Learning
Language models may be able to “self-correct” biases—if you ask them to
Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, they may be able to self-correct for some of these biases. Remarkably, all we might have to do is ask.