Why genomic pioneer Lee Hood hopes the covid-19 pandemic will make precision medicine based on personalized patient data a reality
Davis believes the key to understanding why covid affects people in such varied ways is to identify the differences between the immune systems of those who successfully fight the disease and those who succumb. Those differences could range from the simple, such as whether someone has been exposed to other coronaviruses in the past, to factors as complex as genetically determined variations in how certain cells display viral protein fragments on their surfaces for inspection by circulating immune cells. These proteins can influence how likely the immune cell is to recognize the presence of a dangerous pathogen, sound the alarm, and mobilize an army of antibodies to go on the attack.
“Now there is a flood of data, and it’s the highest quality that we’ve ever had, and also the most we’ve ever had,” Davis says.
A boon for the science, to be sure. But will the ISB study change how patients are treated and help prepare us for future pandemics? Hood is optimistic. “This absolutely validates everything I have been arguing for the past 20 years,” he says.
The needed tools
Hood made a major contribution to immunology early in his career, after attending medical school and getting his PhD from Caltech. He helped solve the mystery of how the body can produce roughly 10 billion varieties of antibodies, Y-shaped proteins that bind to the outer surface of a distinctly shaped invading pathogen with the specificity of a guided missile, marking it for destruction.
Despite his early success, Hood recognized from the start that without major advances in technology, he would never answer the most intriguing remaining questions about the immune system: how it coordinates its remarkably complex collection of cell types and proteins. If immunologists were ever to understand how all these parts worked together, Hood realized, they would first need to recognize, characterize, and measure them.
Hood’s Caltech lab played a key role in developing a wide range of tools, including instruments that would enable biologists to read and write sequences of amino acids, and machines that could string together DNA nucleotides (the letters of the genetic code). Perhaps most famously, in 1986 he helped invent the automated DNA sequencer, a machine able to quickly read the nucleotides in the genome and determine their order; it paved the way for the Human Genome Project, the $3 billion, 13-year effort to produce the first draft of a complete human genome.
In the years that followed, Hood advocated for a reinvention of modern health care that relied on the new tools of molecular biology to collect data from individual patients: genome sequences, and complete inventories of proteins circulating in the bloodstream. This data could then be analyzed, using early systems of machine learning and pattern recognition to pull out interesting patterns and correlations. Insights could be harnessed to maximize a person’s health and head off diseases far earlier than previously possible.
It all made perfect scientific sense. But nearly two decades after the Human Genome Project’s completion in 2003, and despite much progress in genomic sciences as well as in data science, Hood’s predicted revolution in health care has still not arrived.
Hood says one reason is that the tools used to be expensive. Now, however, a genome can be sequenced for $300 or less. And, he says, researchers have gained access to computational tools “that can really integrate the data, and turn data into knowledge.”
But the biggest roadblock is that the health-care system is inefficient and resistant to change. There’s a “lack of understanding about how important it is to get diverse types of data and integrate them,” Hood says. “Most physicians went to medical school five or 10 or 20 years ago, and they never learned anything about any of this.”
“Everybody is really busy, and changing takes time, so you have to persuade leadership as well as physicians this is in their interest,” he says. “That all turned out to be far more difficult than I ever thought it would be.”
These days, Hood is still pushing hard, and despite the years of frustration, he is characteristically optimistic. One reason for his renewed hope is that he finally has ready access to patients and the money to begin his next grand experiment.
In 2016, ISB merged with Providence Health & Services in Seattle, a massive network with 51 hospitals, billions of dollars in cash, and a hunger to develop a more robust research program.
Soon after the merger, Hood was talking up an impossibly ambitious-sounding campaign to start what he calls the Million Person Project. It would apply phenotyping and genetic analysis to, yes, a million people. In January 2020, Hood kicked off a pilot project, having recruited 5,000 patients, and began to sequence their genomes.
Inside the conference where researchers are solving the clean-energy puzzle
The Advanced Research Projects Agency-Energy (ARPA-E) funds high-risk, high-reward energy research projects, and each year the agency hosts a summit where funding recipients and other researchers and companies in energy can gather to talk about what's new in the field.
As I listened to presentations, met with researchers, and—especially—wandered around the showcase, I often had a vague feeling of whiplash. Standing at one booth trying to wrap my head around how we might measure carbon stored by plants, I would look over and see another group focused on making nuclear fusion a more practical way to power the world.
There are plenty of tried-and-true solutions that can begin to address climate change right now: wind and solar power are being deployed at massive scales, electric vehicles are coming to the mainstream, and new technologies are helping companies make even fossil-fuel production less polluting. But as we knock out the easy wins, we’ll also need to get creative to tackle harder-to-solve sectors and reach net-zero emissions. Here are a few intriguing projects from the ARPA-E showcase that caught my eye.
“I heard you have rocks here!” I exclaimed as I approached the Quaise Energy station.
Quaise’s booth featured a screen flashing through some fast facts and demonstration videos. And sure enough, laid out on the table were two slabs of rock. They looked a bit worse for wear, each sporting a hole about the size of a quarter in the middle, singed around the edges.
These rocks earned their scorch marks in service of a big goal: making geothermal power possible anywhere. Today, the high temperatures needed to generate electricity using heat from the Earth are only accessible close to the surface in certain places on the planet, like Iceland or the western US.
Geothermal power could in theory be deployed anywhere, if we could drill deep enough. Getting there won’t be easy, though, and could require drilling 20 kilometers (12 miles) beneath the surface. That’s deeper than any oil and gas drilling done today.
Rather than grinding through layers of granite with conventional drilling technology, Quaise plans to get through the more obstinate parts of the Earth's crust by using high-powered millimeter waves to vaporize rock. (These are a form of electromagnetic radiation, similar to laser light but at much longer wavelengths.)
The emergent industrial metaverse
Annika Hauptvogel, head of technology and innovation management at Siemens, describes the industrial metaverse as “immersive, making users feel as if they’re in a real environment; collaborative in real time; open enough for different applications to seamlessly interact; and trusted by the individuals and businesses that participate”—far more than simply a digital world.
The industrial metaverse will revolutionize the way work is done, but it will also unlock significant new value for businesses and society. By allowing businesses to model, prototype, and test dozens, hundreds, or even millions of design iterations in real time and in an immersive, physics-based environment before committing physical and human resources to a project, industrial metaverse tools will usher in a new era of solving real-world problems digitally.
“The real world is very messy, noisy, and sometimes hard to really understand,” says Danny Lange, senior vice president of artificial intelligence at Unity Technologies, a leading platform for creating and growing real-time 3-D content. “The idea of the industrial metaverse is to create a cleaner connection between the real world and the virtual world, because the virtual world is so much easier and cheaper to work with.”
While real-life applications of the consumer metaverse are still developing, industrial metaverse use cases are purpose-driven, well aligned with real-world problems and business imperatives. The resource efficiencies enabled by industrial metaverse solutions may increase business competitiveness while also continually driving progress toward the sustainability, resilience, decarbonization, and dematerialization goals that are essential to human flourishing.
This report explores what it will take to create the industrial metaverse, its potential impacts on business and society, the challenges ahead, and innovative use cases that will shape the future. Its key findings are as follows:
• The industrial metaverse will bring together the digital and real worlds. It will enable a constant exchange of information, data, and decisions and empower industries to solve extraordinarily complex real-world problems digitally, changing how organizations operate and unlocking significant societal benefits.
• The digital twin is a core metaverse building block. These virtual models simulate real-world objects in detail. The next generation of digital twins will be photorealistic, physics-based, AI-enabled, and linked in metaverse ecosystems.
• The industrial metaverse will transform every industry. Currently existing digital twins illustrate the power and potential of the industrial metaverse to revolutionize design and engineering, testing, operations, and training.
The Download: China’s retro AI photos, and experts’ AI fears
Across social media, a number of creators are generating nostalgic photographs of China with the help of AI. Even though these images get some details wrong, they are realistic enough to trick and impress many of their followers.
The pictures look sophisticated in terms of definition, sharpness, saturation, and color tone. Their realism is partly due to a major update to the image-making artificial-intelligence program Midjourney, released in mid-March, which is better not only at generating human hands but also at simulating various photography styles.
It's still relatively easy, even for untrained eyes, to tell that the photos are generated by an AI. But for some creators, the experiments are more about evoking a specific era than about tricking their audience. Read the full story.
Zeyi’s story is from China Report, his weekly newsletter giving you the inside track on tech in China. Sign up to receive it in your inbox every Tuesday.
Read more of our reporting on AI-generated images:
+ These new tools let you see for yourself how biased AI image models are. Bias and stereotyping are still huge problems for systems like DALL-E 2 and Stable Diffusion, despite companies' attempts to fix them. Read the full story.