This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
DeepMind has predicted the structure of almost every protein known to science
The news: DeepMind says its AlphaFold tool has successfully predicted the structure of nearly all proteins known to science. From today, it’s offering its database of over 200 million protein structures to anyone for free. It’s a massive boost to the existing database of 1 million proteins it released last year, and includes structures for plants, bacteria, animals, and many other organisms.
Why it matters: The expanded database opens up huge opportunities for AlphaFold to have impact on important issues such as sustainability, fuel, food insecurity, and neglected diseases, according to Demis Hassabis, DeepMind’s founder and CEO. Scientists could use the findings to better understand diseases, and to speed innovation in drug discovery and biology, he added. Read the full story.
AI for protein folding represents such a major advance that it was chosen as one of MIT Technology Review’s 10 Breakthrough Technologies this year. Read our story explaining why it’s so exciting, and our profile of DeepMind’s founder Demis Hassabis, where he explains why this may be the company’s most significant and long-lasting contribution to science.
Stitching together the grid will save lives as extreme weather worsens
The blistering heat waves that set temperature records across much of the US in recent days have strained electricity systems, threatening to knock out power in vulnerable regions of the country. While the electricity has largely stayed online so far this summer, heavy use of energy-sucking air-conditioners and the intense heat have contributed to scattered problems and close calls.
It’s unlikely to get better soon. A number of grid operators may struggle to meet peak summer demand, creating the risk of rolling blackouts, a new report from the North American Electric Reliability Corporation has found. The nation’s isolated and antiquated grids are in desperate need of upgrades.
One solution would be to more tightly integrate the country’s regional grids, stitching them together with more long-range transmission lines so that power can flow to whichever region needs it most urgently. That mission, however, is fraught with challenges. Read the full story.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Meta’s revenue dropped for the first time
The cracks in Mark Zuckerberg’s pivot to the metaverse are beginning to show. (NYT $)
+ More people are logging into Facebook each day, though. (WP $)
+ Zuck says Meta is in ‘very deep, philosophical competition’ with Apple. (The Verge)
+ Discord is a natural home for users disillusioned by Instagram. (WSJ $)
+ Ex-Facebook and Bumble workers have built their own ‘less toxic’ social network. (Protocol)
2 Senators have advanced child online safety legislation
But others argue that such safeguards should apply to users of all ages. (WP $)
+ Three wannabe senators have deep links to the tech firms they’re railing against. (NYT $)
3 A Greek politician was targeted by Israeli spyware
He’s filed a lawsuit to force Greek authorities to investigate who was behind the attempted hack. (NYT $)
+ Carine Kanimba claimed the Rwandan government used Pegasus spyware to spy on her family. (Motherboard)
+ The hacking industry faces the end of an era. (MIT Technology Review)
4 Bitcoin prices are rising again
After the Federal Reserve raised interest rates. (CNBC)
5 Take a journey across the universe
This amazing guide walks you through everything from exoplanets to supermassive black holes. (New Scientist $)
+ Will the universe’s expansion mean planets no longer orbit stars? (MIT Technology Review)
6 Your modern car is leaking your data
While a lot of it is anonymized, the risk of privacy breaches is real. (The Markup)
7 Top-quality TVs lay bare bad CGI
Exposing all of its poorly rendered flaws. (Vulture $)
8 Is DALL-E’s art stolen?
While users can commercialize their AI creations, the model is trained on others’ work. (Engadget)
+ Lawyers could choose to represent AIs in future courtroom battles. (Slate)
+ OpenAI is ready to sell DALL-E to its first million customers. (MIT Technology Review)
9 What old dogs can teach us about our own brains
Just don’t try to teach them new tricks. (Knowable Magazine)
Quote of the day
“This is not the Instagram that we used to have.”
—Tatiana Bruening, creator of a viral post urging Instagram to stop trying to be TikTok, laments the platform’s decision to chase a Gen Z audience in an interview with the Wall Street Journal.
The big story
She risked everything to expose Facebook. Now she’s telling her story.
When Sophie Zhang went public with explosive revelations detailing the political manipulation she’d uncovered during her time as a data scientist at Facebook, she supplied concrete evidence to support what critics had long been saying on the outside: that Facebook makes election interference easy, and that unless such activity hurts the company’s business interests, it can’t be bothered to fix the problem.
By speaking out and eschewing anonymity, Zhang risked legal action from the company, harm to her future career prospects, and perhaps even reprisals from the politicians she exposed in the process. Her story reveals that it is really pure luck that we now know so much about how Facebook enables election interference globally. For regulators around the world considering how to rein in the company, it should be a wake-up call. Read the full story.
We can still have nice things
+ Japanese artist Hiroshige was well-known for his beautiful woodblock prints, but these instructive pictures explaining how to create shadow puppets for children are extra special.
+ Uh-oh, Freya the walrus is a real boat-sinking pest.
+ Whip up these mouth-watering Mediterranean recipes and imagine you’re chilling in Rome.
+ The winners of this year’s Audubon Photography Awards are spectacular (thanks Peter!).
+ If you’re a fan of essay-length texts, you’re a paragraph girlie.
Meta’s new AI can turn text prompts into videos
Although the effect is rather crude, the system offers an early glimpse of what’s coming next for generative artificial intelligence, and it is the obvious next step from the text-to-image AI systems that have caused huge excitement this year.
Meta’s announcement of Make-A-Video, which is not yet being made available to the public, will likely prompt other AI labs to release their own versions. It also raises some big ethical questions.
In the last month alone, AI lab OpenAI has made its latest text-to-image AI system DALL-E available to everyone, and AI startup Stability.AI launched Stable Diffusion, an open-source text-to-image system.
But text-to-video AI comes with some even greater challenges. For one, these models need a vast amount of computing power. They are an even bigger computational lift than large text-to-image AI models, which use millions of images to train, because putting together just one short video requires hundreds of images. That means it’s really only large tech companies that can afford to build these systems for the foreseeable future. They’re also trickier to train, because there aren’t large-scale data sets of high-quality videos paired with text.
To work around this, Meta combined data from three open-source image and video data sets to train its model. Standard text-image data sets of labeled still images helped the AI learn what objects are called and what they look like. And a database of videos helped it learn how those objects are supposed to move in the world. The combination of the two approaches helped Make-A-Video, which is described in a non-peer-reviewed paper published today, generate videos from text at scale.
Tanmay Gupta, a computer vision research scientist at the Allen Institute for Artificial Intelligence, says Meta’s results are promising. The videos it’s shared show that the model can capture 3D shapes as the camera rotates. The model also has some notion of depth and understanding of lighting. Gupta says some details and movements are decently done and convincing.
However, “there’s plenty of room for the research community to improve on, especially if these systems are to be used for video editing and professional content creation,” he adds. In particular, it’s still tough to model complex interactions between objects.
In the video generated by the prompt “An artist’s brush painting on a canvas,” the brush moves over the canvas, but strokes on the canvas aren’t realistic. “I would love to see these models succeed at generating a sequence of interactions, such as ‘The man picks up a book from the shelf, puts on his glasses, and sits down to read it while drinking a cup of coffee,’” Gupta says.
How AI is helping birth digital humans that look and sound just like us
Jennifer: And the team has also been exploring how these digital twins can be useful beyond the 2D world of a video conference.
Greg Cross: I guess the… the big, you know, shift that’s coming right at the moment is the move from the 2D world of the internet into the 3D world of the metaverse. And that’s something we’ve always thought about and we’ve always been preparing for. I mean, Jack exists in full 3D, um, you know, Jack exists as a full body. So, I mean, today we’re building augmented reality prototypes of Jack walking around on a golf course. And, you know, we can go and ask Jack, how, how should we play this hole? Um, so these are some of the things that we are starting to imagine in terms of the way in which digital people, the way in which digital celebrities, interact with us as we move into the 3D world.
Jennifer: And he thinks this technology can go a lot further.
Greg Cross: Healthcare and education are two amazing applications of this type of technology. And it’s amazing because we don’t have enough real people to deliver healthcare and education in the real world. So, I mean, you can imagine how you can use a digital workforce to augment and extend, not replace, the skills and capabilities of real people.
Jennifer: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by me and Mat Honan, mixed by Garret Lang… with original music from Jacob Gorski.
If you have an idea for a story or something you’d like to hear, please drop a note to podcasts at technology review dot com.
Thanks for listening… I’m Jennifer Strong.
A bionic pancreas could solve one of the biggest challenges of diabetes
The bionic pancreas, a credit card-sized device called an iLet, monitors a person’s blood sugar levels around the clock and automatically delivers insulin when needed through a tiny cannula, a thin tube inserted into the body. It is worn constantly, generally on the abdomen. The device determines all insulin doses based on the user’s weight, and the user can’t adjust the doses.
A Harvard Medical School team has submitted its findings from the study, described in the New England Journal of Medicine, to the FDA in the hopes of eventually bringing the product to market in the US. While a team from Boston University and Massachusetts General Hospital first tested the bionic pancreas in 2010, this is the most extensive trial undertaken so far.
The Harvard team, working with other universities, provided 219 people with type 1 diabetes who had used insulin for at least a year with a bionic pancreas device for 13 weeks. The team compared their blood sugar levels with those of 107 diabetic people who used other insulin delivery methods, including injection and insulin pumps, during the same amount of time.
The bionic pancreas group’s glycated hemoglobin, a measure of long-term blood sugar control, fell from 7.9% to 7.3%, while the standard care group’s levels remained steady at 7.7%. The American Diabetes Association recommends a goal of less than 7.0%, but that’s only met by approximately 20% of people with type 1 diabetes, according to a 2019 study.
Other types of artificial pancreas exist, but they typically require the user to input information before they will deliver insulin, such as the amount of carbohydrates in their last meal. The iLet, by contrast, needs only the user’s weight and the type of meal being eaten (breakfast, lunch, or dinner), entered via the device’s interface; it then uses an adaptive learning algorithm to deliver insulin automatically.