Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on.
Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.
What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for.
“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.
More haste, less speed
The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law.
But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
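The "vast numbers of calculations all at the same time" point can be made concrete with a toy example: each output of a neural-network layer is an independent multiply-and-add, which is exactly the kind of work a GPU parallelizes across thousands of cores. A minimal pure-Python sketch (the layer and values are illustrative, not any real model):

```python
# A toy dense layer: each output is an independent dot product, so on
# parallel hardware (a GPU or TPU) all of them can be computed at once.
def dense_layer(weights, inputs):
    # weights: one row of multipliers per output value
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

W = [[0.5, -1.0], [2.0, 0.25]]   # illustrative weights
x = [4.0, 2.0]                   # illustrative inputs
y = dense_layer(W, x)            # each element computed independently
```

A real network stacks millions of these multiply-adds, which is why hardware that runs them side by side beats hardware that runs them one after another.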
Now chipmakers like Intel, Arm, and Nvidia (which supplied many of the first GPUs) are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.
For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.
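The low-precision trade-off is easy to demonstrate: round the weights down to 8-bit integers, do the arithmetic cheaply, and the rescaled answer lands very close to the full-precision one. A toy sketch (the quantization scheme and numbers here are illustrative, not Google's actual TPU arithmetic):

```python
# Simulate low-precision arithmetic: squash weights into 8-bit integers,
# compute with cheap integer math, then rescale. The answer is close
# enough for a neural network, and integer units are far cheaper in silicon.
def quantize(values, scale=127):
    m = max(abs(v) for v in values)
    return [round(v / m * scale) for v in values], m / scale

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

w = [0.12, -0.5, 0.33]           # illustrative full-precision weights
x = [1.0, 2.0, 3.0]
qw, s = quantize(w)              # 8-bit integer weights plus a rescale factor
exact = dot(w, x)                # full-precision result
approx = dot(qw, x) * s          # low-precision result, rescaled
```

The two results differ by well under one percent, which is why neural networks tolerate precision that would be unacceptable in, say, scientific computing.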
In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers.
AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.
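The trial-and-error idea behind that kind of reinforcement learning can be sketched with the simplest possible learner, a two-armed bandit: try actions, observe rewards, and favor what has worked. This is entirely illustrative; the chip-layout work uses far more sophisticated models, but the learning loop is the same in spirit.

```python
import random

# Minimal trial-and-error learner: estimate the value of each action by
# trying them, then mostly pick the best-looking one (with occasional
# random exploration so better options can still be discovered).
def learn(rewards, steps=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    est = [0.0] * len(rewards)       # estimated value of each action
    n = [0] * len(rewards)           # times each action was tried
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(rewards))                    # explore
        else:
            a = max(range(len(rewards)), key=lambda i: est[i]) # exploit
        r = rewards[a] + rng.gauss(0, 0.1)   # noisy reward signal
        n[a] += 1
        est[a] += (r - est[a]) / n[a]        # running average of rewards
    return est

est = learn([0.2, 0.8])   # the second action truly pays more
```

After a few thousand trials the learner's estimates converge on the truth without ever being told the rules, which is the essence of learning a chip layout by trial and error.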
Show, don’t tell
The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK.
Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with explicit rules and write them out in code.
With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking.
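The contrast can be shown in a few lines: a "programmed" classifier hard-codes its rule, while a "trained" one recovers an equivalent rule from labeled examples. This is a deliberately tiny illustration with invented data, standing in for what a neural network does at vastly larger scale.

```python
# Programming vs training, in miniature.

def programmed_is_spam(num_links):
    return num_links > 3             # rule a human wrote by hand

def train_threshold(examples):
    # "training": search for the cutoff that best fits labeled examples
    candidates = sorted(x for x, _ in examples)
    return max(candidates,
               key=lambda t: sum((x > t) == label for x, label in examples))

# invented examples: (number of links in a message, is it spam?)
data = [(0, False), (1, False), (2, False), (5, True), (8, True)]
t = train_threshold(data)            # the machine recovers a rule itself
learned_is_spam = lambda n: n > t
```

Nobody told the second classifier where to draw the line; it found a workable rule in the data, which is the shift Bishop is describing.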
The Blue Technology Barometer 2022/23
The overall rankings show the performance of the examined economies relative to one another, aggregating scores generated across four pillars: ocean environment, marine activity, technology innovation, and policy and regulation.
- Ocean environment: ranks each country according to its levels of marine water contamination, its plastic recycling efforts, the CO2 emissions of its marine activities (relative to the size of its economy), and the recent change in its total emissions.
- Marine activity: ranks each country on the sustainability of its marine activities, including shipping, fishing, and marine protected areas.
- Technology innovation: ranks each country on its contribution to sustainable ocean technology research and development, including expenditure, patents, and startups.
- Policy and regulation: ranks each country on its stance on ocean sustainability-related policy and regulation, including national-level policies; taxes, fees, and subsidies; and the implementation of international marine law.
MIT Technology Review Insights would like to thank the following
individuals for their time, perspective, and insights:
- Valérie Amant, Director of Communications, The SeaCleaners
- Charlotte de Fontaubert, Global Lead for the Blue Economy, World Bank Group
- Ian Falconer, Founder, Fishy Filaments
- Ben Fitzgerald, Managing Director, CoreMarine
- Melissa Garvey, Global Director of Ocean Protection, The Nature Conservancy
- Michael Hadfield, Emeritus Professor, Principal Investigator, Kewalo Marine Laboratory, University of Hawaii
- Takeshi Kawano, Executive Director, Japan Agency for Marine-Earth Science and Technology
- Kathryn Matthews, Chief Scientist, Oceana
- Alex Rogers, Science Director, REV Ocean
- Ovais Sarmad, Deputy Executive Secretary, United Nations Framework Convention on Climate Change
- Thierry Senechal, Managing Director, Finance for Impact
- Jyotika Virmani, Executive Director, Schmidt Ocean Institute
- Lucy Woodall, Associate Professor of Marine Biology, University of Oxford, and Principal Scientist at Nekton
Methodology: The Blue Technology Barometer 2022/23
Now in its second year, the Blue Technology Barometer assesses and ranks how each of the world’s largest
maritime economies promotes and develops blue (marine-centered) technologies that help reverse the impact of
climate change on ocean ecosystems, and how they leverage ocean-based resources to reduce greenhouse gases and
other effects of climate change.
To build the index, MIT Technology Review Insights compiled 20 quantitative and qualitative data indicators
for 66 countries and territories with coastlines and maritime economies. This included analysis of select
datasets and primary research interviews with global blue technology innovators, policymakers, and
international ocean sustainability organizations. Through trend analysis, research, and a consultative
peer-review process with several subject matter experts, weighting assumptions were assigned to determine the
relative importance of each indicator’s influence on a country’s blue technology leadership.
These indicators measure how each country or territory’s economic and maritime industries have affected its
marine environment and how quickly they have developed and deployed technologies that help improve ocean
health outcomes. Policy and regulatory adherence factors were considered, particularly the observance of
international treaties on fishing and marine protection laws.
The indicators are organized into four pillars, which evaluate metrics around a sustainability theme. Each
indicator is scored from 1 to 10 (10 being the best performance) and is weighted for its contribution to its
respective pillar. Each pillar is weighted to determine its importance in the overall score. As these research
efforts center on countries developing blue technology to promote ocean health, the technology pillar is
ranked highest, at 50% of the overall score.
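As a hypothetical worked example of that weighting scheme (the pillar scores and all weights other than the technology pillar's 50% are invented for illustration; the real index aggregates 20 indicators before this step):

```python
# Illustrative aggregation of pillar scores into an overall barometer score.
pillar_scores = {                  # each pillar score is on a 1-10 scale
    "ocean environment": 6.0,
    "marine activity": 7.5,
    "technology innovation": 8.0,
    "policy and regulation": 5.0,
}
pillar_weights = {                 # technology is weighted at 50% of the total;
    "ocean environment": 0.20,     # the other weights here are invented
    "marine activity": 0.15,
    "technology innovation": 0.50,
    "policy and regulation": 0.15,
}
overall = sum(pillar_scores[p] * pillar_weights[p] for p in pillar_scores)
```

Because the technology pillar carries half the weight, a strong technology score dominates the overall result, by design.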
The four pillars of the Blue Technology Barometer are:
- Ocean environment: carbon emissions resulting from maritime activities and their relative growth. Metrics in this pillar also assess each country’s efforts to mitigate ocean pollution and enhance ocean ecosystem health.
- Marine activity: efforts to promote sustainable fishing activities and to increase and maintain marine protected areas.
- Technology innovation: progress in fostering the development of sustainable ocean technologies across several relevant fields:
  - Clean innovation scores from MIT Technology Review Insights’ Green Future Index 2022.
  - A tally of maritime-relevant patents and technology startups.
  - An assessment of each economy’s use of technologies and tech-enabled processes that facilitate ocean health.
- Policy and regulation: commitment to signing and enforcing international treaties to promote ocean sustainability and enforce international marine law.
MIT Technology Review was founded at the Massachusetts Institute of Technology in 1899. MIT Technology Review
Insights is the custom publishing division of MIT Technology Review. We conduct qualitative and quantitative
research and analysis worldwide and publish a wide variety of content, including articles, reports,
infographics, videos, and podcasts.
What Shanghai protesters want and fear
You may have seen that nearly three years after the pandemic started, protests have erupted across China. In Beijing, Shanghai, Urumqi, Guangzhou, Wuhan, Chengdu, and other cities and towns, hundreds of people have taken to the streets to mourn the lives lost in an apartment fire in Urumqi and to demand that the government roll back its strict pandemic policies, which many blame for trapping those who died.
It’s remarkable: likely the largest grassroots protest in China in decades, and it’s happening at a time when the Chinese government is better equipped than ever to monitor and suppress dissent.
Videos of these protests have been shared in real time on social media—on both Chinese and American platforms, even though the latter are technically blocked in the country—and they have quickly become international front-page news. However, discussions among foreigners have too often reduced the protests to the most sensational clips, particularly ones in which protesters directly criticize President Xi Jinping or the ruling party.
The reality is more complicated. As in any spontaneous protest, different people want different things. Some only want to abolish the zero-covid policies, while others have made direct calls for freedom of speech or a change of leadership.
I talked to two Shanghai residents who attended the protests to understand what they experienced firsthand, why they went, and what’s making them anxious about the thought of going again. Both have requested we use only their surnames, to avoid political retribution.
Zhang, who went to the first protest in Shanghai after midnight on Saturday, told me he was motivated by a desire to let people know his discontent. “Not everyone can silently suffer from your actions,” he told me, referring to government officials. “No. People’s lives have been really rough, and you should reflect on yourself.”
In the hour that he was there, Zhang said, protesters were mostly chanting slogans that stayed close to opposing zero-covid policies—like the now-famous line “Say no to covid tests, yes to food. No to lockdowns, yes to freedom,” which came from a protest by one Chinese citizen, Peng Lifa, right before China’s heavily guarded party congress meeting last month.
While Peng hasn’t been seen in public since, his slogans have been heard and seen everywhere in China over the past week. Relaxing China’s strict pandemic control measures, which often don’t reflect a scientific understanding of the virus, is the most essential—and most agreed-upon—demand.
Biotech labs are using AI inspired by DALL-E to invent new drugs
Today, two labs separately announced programs that use diffusion models to generate designs for novel proteins with more precision than ever before. Generate Biomedicines, a Boston-based startup, revealed a program called Chroma, which the company describes as the “DALL-E 2 of biology.”
At the same time, a team at the University of Washington led by biologist David Baker has built a similar program called RoseTTAFold Diffusion. In a preprint paper posted online today, Baker and his colleagues show that their model can generate precise designs for novel proteins that can then be brought to life in the lab. “We’re generating proteins with really no similarity to existing ones,” says Brian Trippe, one of the co-developers of RoseTTAFold.
These protein generators can be directed to produce designs for proteins with specific properties, such as shape or size or function. In effect, this makes it possible to come up with new proteins to do particular jobs on demand. Researchers hope that this will eventually lead to the development of new and more effective drugs. “We can discover in minutes what took evolution millions of years,” says Gevorg Grigoryan, CEO of Generate Biomedicines.
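The mechanism behind diffusion models can be caricatured in a few lines: start from pure noise and repeatedly nudge the sample toward what a denoiser predicts the data should look like. In the sketch below the "denoiser" is an oracle that already knows the target, where a real system would instead learn its predictions from thousands of protein structures; all values are invented.

```python
import random

# Cartoon of reverse diffusion: begin with random noise and iteratively
# denoise it toward a plausible design. Here the denoiser is an oracle
# that points straight at a known target, purely for illustration.
target = [0.2, -1.0, 0.7]                  # stand-in for a desired design
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in target]      # step 0: pure noise
for step in range(50):
    # move a fraction of the way toward the denoiser's prediction
    x = [xi + (ti - xi) * 0.2 for xi, ti in zip(x, target)]
```

Conditioning, in this picture, means steering the denoiser's predictions with the desired properties (shape, size, function) so that the noise collapses toward designs that satisfy them.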
“What is notable about this work is the generation of proteins according to desired constraints,” says Ava Amini, a biophysicist at Microsoft Research in Cambridge, Massachusetts.
Proteins are the fundamental building blocks of living systems. In animals, they digest food, contract muscles, detect light, drive the immune system, and so much more. And when people get sick, proteins are almost always involved in the problem.
Proteins are thus prime targets for drugs. And many of today’s newest drugs are protein based themselves. “Nature uses proteins for essentially everything,” says Grigoryan. “The promise that offers for therapeutic interventions is really immense.”
But drug designers currently have to draw on an ingredient list made up of natural proteins. The goal of protein generation is to extend that list with a nearly infinite pool of computer-designed ones.
Computational techniques for designing proteins are not new. But previous approaches have been slow and not great at designing large proteins or protein complexes—molecular machines made up of multiple proteins coupled together. And such proteins are often crucial for treating diseases.