Amid the many business disruptions caused by covid-19, here’s one largely overlooked: artificial intelligence (AI) whiplash.
As the pandemic began to upend the world last year, businesses reached for every tool at their disposal—including AI—to solve challenges and serve customers safely and effectively. In a 2021 KPMG survey of US business executives conducted between January 3 and 16, half the respondents said their organization sped up its use of AI in response to covid-19—including 72% of industrial manufacturers, 57% of technology companies, and 53% of retailers.
Most are happy with the results. Eighty-two percent of those surveyed agree AI has been helpful to their organization during the pandemic, and a majority say it is delivering even more value than anticipated. More broadly, nearly all say wider use of AI would make their organization run more efficiently. In fact, 85% want their organization to accelerate AI adoption.
Still, sentiment isn’t entirely positive. Even as they’re looking to step on the gas, 44% of executives think their industry is moving faster on AI than it should. More startling, 74% contend the use of AI to help businesses remains more hype than reality—up sharply in key industries since our September 2019 AI survey. In both the financial services and retail sectors, for example, 75% of executives now feel AI is overhyped, up from 42% and 64%, respectively.
How to square these seemingly opposed points of view on what KPMG is calling AI whiplash? Based on our work helping organizations apply AI, we see several explanations for the perception of hype. One is the simple newness of the technology, which has allowed for misperceptions about what it can and can’t do, how long it takes to realize enterprise-scale results, and what mistakes are possible when organizations experiment with AI without the right foundation.
Even though 79% of respondents say AI is at least moderately functional at their organization, only 43% say it is fully functional at scale. It is still common to find people who think of AI as something to be purchased—like a new piece of machinery—to deliver immediate results. And while many organizations have experienced some success with AI—often small proofs of concept—they have learned that scaling those efforts to the enterprise level can be far more challenging. It requires access to clean and well-organized data; a robust data storage infrastructure; subject matter experts to help create labeled training data; sophisticated computer science skills; and buy-in from the business.
Of course, it is also no stretch to believe that proponents of AI have at times exaggerated its potential or discounted the effort required to realize its full value.
As to why executives are conflicted about the speed of AI’s adoption, we see basic human nature at play. For starters, it’s always easier to believe the grass is greener on the other side. We also suspect a lot of people worry their industry is moving too fast primarily because their own organization isn’t matching that speed. If they’ve experienced early-stage hiccups with AI—especially last year, when the world witnessed AI-enabled accomplishments like record-fast development of covid-19 vaccines—it may have been easy to succumb to those fears.
We see another factor driving mixed feelings about AI’s potential—the absence of an established legal and regulatory framework to guide its use. Many business leaders don’t have a clear view into what their organization is doing to govern AI, or what new government regulations might lie ahead. Understandably, they’re worried about the associated risks, including developing use cases today that regulators might squash tomorrow.
This uncertainty helps explain yet another seemingly contradictory finding from our survey. While business executives typically take a skeptical view of government regulation, 87% say government should play a role in regulating AI technology.
Moving on from AI whiplash
While every organization will need its own playbook to recover from AI whiplash and optimize its investment in the technology, a comprehensive plan should include five components:
- A strategic investment in data. Data is the raw material of AI and the connective tissue of a digital organization. Organizations need clean, machine-digestible data labeled to train AI models, with the help of subject matter experts. They require a data storage infrastructure that transcends functional silos within the business and can deliver data quickly and reliably. Once the models are deployed, organizations need a strategy for harvesting data to continuously tune and retrain them.
- The right talent. Computer scientists with expertise in AI are in high demand and tough to find—but crucial to understanding the AI landscape and guiding strategy. Organizations unable to build a full team of scientists internally will need external partners who can fill in the gaps and help them sort through the ever-expanding array of AI vendors and offerings.
- A long-term AI strategy guided by the business. Organizations get the most from AI by thinking about finding solutions to problems, not buying technology and searching for ways to use it. They let the business, not the IT department, drive the agenda. When AI investments tied to a business-led strategy go wrong, they become opportunities to fail fast and learn, not fast and burn. But even as companies iterate quickly, they need to do so in line with a long-term AI strategy, because the biggest benefits are realized over the long haul.
- Culture and employee upskilling. Few AI agendas will gain traction without buy-in from the workforce and a culture invested in AI’s success. Winning the commitment of employees requires providing them with at least a rudimentary understanding of the technology and data, and an even deeper understanding of how AI will benefit them and the enterprise. Also important is upskilling the workforce, especially where AI will take over or supplement existing responsibilities. Embracing a data-driven mindset and instilling deeper AI literacy into an organization’s DNA will help it scale and succeed.
- A commitment to ethical and unbiased use of AI. AI holds great promise but also the potential for harm if organizations use it in ways customers don’t like or that discriminate against some segments of the population. Every organization should develop an AI ethics policy with clear guidelines on how the technology will be deployed. This policy should mandate measures, built into the DevOps process, to check for issues and imbalances in the data, measure and quantify unintended bias in machine learning algorithms, track the provenance of data, and identify those who train algorithms. Organizations should continuously monitor models for bias and drift, and ensure that model decisions remain explainable.
Executives’ objectives for AI investments over the next two years vary by industry. Healthcare executives say their focus will be on telemedicine, robotic tasks, and delivery of patient care. In life sciences, they say they’ll be looking to deploy AI to identify new revenue opportunities, reduce administrative costs, and analyze patient data. And government executives say their focus will be on improving process automation and analytics capabilities, and on managing contracting and other obligations.
Expected outcomes also vary by industry. Retail executives predict the biggest impact in the areas of customer intelligence, inventory management, and customer service chatbots. Industrial manufacturers see it in product design, development, and engineering; maintenance operations; and production activities. And financial services firms are expecting to get better at fraud detection and prevention, risk management, and process automation.
Long-term, KPMG sees AI playing a vital role in reducing fraud, waste and abuse, and in helping businesses sharpen their sales, marketing, and customer service operations. Ultimately, we believe AI will help resolve fundamental human challenges in areas as diverse as disease identification and treatment, agriculture and global hunger, and climate change.
That’s a future worth working toward. We believe government and industry alike have roles to play in making it happen—in working together to formulate rules that foster the ethical evolution of AI without stifling the innovation and momentum already underway.
Read more in the KPMG “Thriving in an AI World” report.
This content was produced by KPMG. It was not written by MIT Technology Review’s editorial staff.
Investing in women pays off
“Starting a business is a privilege,” says Burton O’Toole, who worked at various startups before launching, and later selling, AdMass, her own marketing technology company. AdMass brought her into the HearstLab program in 2016, but she soon discovered that she preferred the investment side of the business and became a vice president at HearstLab a year later. “To empower some of the smartest women to do what they love is great,” she says. But in addition to rooting for women, Burton O’Toole loves the work because it’s a great market opportunity.
“Research shows female-led teams see two and a half times higher returns compared to male-led teams,” she says, adding that women and people of color tend to build more diverse teams and therefore benefit from varied viewpoints and perspectives. She also explains that companies with women on their founding teams are likely to get acquired or go public sooner. “Despite results like this, just 2.3% of venture capital funding goes to teams founded by women. It’s still amazing to me that more investors aren’t taking this data more seriously,” she says.
Burton O’Toole—who earned a BS from Duke in 2007 before getting an MS and PhD from MIT, all in mechanical engineering—has been a “data nerd” since she can remember. In high school she wanted to become an actuary. “Ten years ago, I never could have imagined this work; I like the idea of doing something in 10 more years I couldn’t imagine now,” she says.
When starting a business, Burton O’Toole says, “women tend to want all their ducks in a row before they act. They say, ‘I’ll do it when I get this promotion, have enough money, finish this project.’ But there’s only one good way. Make the jump.”
Preparing for disasters, before it’s too late
All too often, the work of developing global disaster and climate resiliency happens only after a disaster—such as a hurricane, earthquake, or tsunami—has already ravaged entire cities and torn communities apart. But Elizabeth Petheo, MBA ’14, says her recent work has focused on preparedness.
It’s hard to get attention for preparedness efforts, explains Petheo, a principal at Miyamoto International, an engineering and disaster risk reduction consulting firm. “You can always get a lot of attention when there’s a disaster event, but at that point it’s too late,” she adds.
Petheo leads the firm’s projects and partnerships in the Asia-Pacific region and advises globally on international development and humanitarian assistance. She also works on preparedness in the Asia-Pacific region with the United States Agency for International Development.
“We’re doing programming on the engagement of the private sector in disaster risk management in Indonesia, which is a very disaster-prone country,” she says. “Smaller and medium-sized businesses are important contributors to job creation and economic development. When they go down, the impact on lives, livelihoods, and the community’s ability to respond and recover effectively is extreme. We work to strengthen their own understanding of their risk and that of their surrounding community, lead them through an action-planning process to build resilience, and link that with larger policy initiatives at the national level.”
Petheo came to MIT with international leadership experience, having managed high-profile global development and risk mitigation initiatives at the World Bank in Washington, DC, as well as with US government agencies and international organizations leading major global humanitarian responses and teams in Sri Lanka and Haiti. But she says her time at Sloan helped her become prepared for this next phase in her career. “Sloan was the experience that put all the pieces together,” she says.
Petheo has maintained strong connections with MIT. In 2018, she received the Margaret L.A. MacVicar ’65, ScD ’67, Award in recognition of her role starting and leading the MIT Sloan Club in Washington, DC, and her work as an inaugural member of the Graduate Alumni Council (GAC). She is also a member of the Friends of the MIT Priscilla King Gray Public Service Center.
“I believe deeply in the power and impact of the Institute’s work and people,” she says. “The moment I graduated, my thought process was, ‘How can I give back, and how can I continue to strengthen the experience of those who will come after me?’”
The Download: a curb on climate action, and post-Roe period tracking
Why’s it so controversial?: Geoengineering was long a taboo topic among scientists, and some argue it should remain one. There are questions about its potential environmental side effects, and concerns that the impacts will be felt unevenly across the globe. Some feel it’s too dangerous to ever try or even to investigate, arguing that just talking about the possibility could weaken the resolve to address the underlying causes of climate change.
But it’s going ahead?: Despite the concerns, as the threat of climate change grows and major nations fail to make rapid progress on emissions, growing numbers of experts are seriously exploring the potential effects of these approaches. Read the full story.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The belief that AI is alive refuses to die
People want to believe the models are sentient, even when their creators deny it. (Reuters)
+ It’s unsurprising wild religious beliefs find a home in Silicon Valley. (Vox)
+ AI systems are being trained twice as quickly as they were just last year. (IEEE Spectrum)
2 The FBI added the missing cryptoqueen to its most-wanted list
It’s offering a $100,000 reward for information leading to Ruja Ignatova, whose crypto scheme defrauded victims out of more than $4 billion. (BBC)
+ A new documentary on the crypto Ponzi scheme is in the works. (Variety)
3 Social media platforms turn a blind eye to dodgy telehealth ads
Which has played a part in the boom in prescription drug abuse. (Protocol)
+ The doctor will Zoom you now. (MIT Technology Review)
4 We’re addicted to China’s lithium batteries
Which isn’t great news for other countries building electric cars. (Wired $)
+ This battery uses a new anode that lasts 20 times longer than lithium. (IEEE Spectrum)
+ Quantum batteries could, in theory, allow us to drive a million miles between charges. (The Next Web)
5 Far-right extremists are communicating over radio to avoid detection
Making it harder to monitor them and their violent activities. (Slate $)
+ Many of the rioters who stormed the Capitol were carrying radio equipment. (The Guardian)
6 Bro culture has no place in space 🚀
So says NASA’s former deputy administrator, who’s sick and tired of misogyny in the sector. (CNN)
7 A US crypto exchange is gaining traction in Venezuela
It’s helping its growing community battle hyperinflation, but isn’t as decentralized as they believe it to be. (Rest of World)
+ The vast majority of NFT players won’t be around in a decade. (Vox)
+ Crypto exchange Coinbase is working with ICE to track and identify crypto users. (The Intercept)
+ If RadioShack’s edgy tweets shock you, don’t forget it’s a crypto firm now. (NY Mag)
8 It’s time we learned to love our swamps
Draining them prevents them from absorbing CO2 and filtering out our waste. (New Yorker $)
+ The architect making friends with flooding. (MIT Technology Review)
9 Robots love drawing too 🖍️
Though I’ll bet they don’t get as frustrated as we do when they mess up. (Input)
10 The risky world of teenage brains
Making potentially dangerous decisions is an important part of adolescence, and our brains reflect that. (Knowable Magazine)
Quote of the day
“They shamelessly celebrate an all-inclusive pool party while we can’t even pay our rent!”