And it is still the case that when we hear a woman’s voice as part of a tech product, we might not know who she is, whether she is even real, and if so, whether she consented to have her voice used in that way. Many TikTok users assumed that the text-to-speech voice they heard on the app wasn’t a real person. But it was: it belonged to a Canadian voice actor named Bev Standing, and Standing had never given ByteDance, the company that owns TikTok, permission to use it.
Standing sued the company in May, alleging that the ways her voice was being used—particularly the way users could make it say anything, including profanity—were injuring her brand and her ability to make a living. Becoming known as “that voice on TikTok” that anyone could make say whatever they liked brought recognition without remuneration and, she alleged, made it harder for her to get voice work.
Then, when TikTok abruptly removed her voice, Standing found out the same way the rest of us did—by hearing the change and seeing the reporting on it. (TikTok has not commented to the press about the voice change.)
Those familiar with the story of Apple’s Siri may be feeling a bit of déjà vu: Susan Bennett, the woman who voiced the original Siri, also didn’t know that her voice was being used for that product until it came out. Bennett was eventually replaced as the “US English female voice,” and Apple never publicly acknowledged her. Since then, Apple has written secrecy clauses into voice actors’ contracts and most recently has claimed that its new voice is “entirely software generated,” removing the need to give anyone credit.
These incidents reflect a troubling and common pattern in the tech industry. The way that people’s accomplishments are valued, recognized, and paid for often mirrors their position in the wider society, not their actual contributions. One reason Bev Standing’s and Susan Bennett’s names are now widely known online is that they’re extreme examples of how women’s work gets erased even when it’s right there for everyone to see—or hear.
When women in tech do speak up, they’re often told to quiet down—particularly if they are women of color. Timnit Gebru, who holds a PhD in computer science from Stanford, was recently ousted from Google, where she co-led an AI ethics team, after she raised concerns about the company’s large language models. Her co-lead, Margaret Mitchell (who holds a PhD from the University of Aberdeen with a focus on natural-language generation), was also removed from her position after objecting to Gebru’s firing. Elsewhere in the industry, whistleblowers like Sophie Zhang at Facebook and Susan Fowler at Uber, along with many other women, found themselves silenced and often fired as a direct or indirect result of trying to do their jobs and mitigate the harms they saw at the technology companies where they worked.
Even women who found startups can find themselves erased in real time, and the problem again is worse for women of color. Rumman Chowdhury, who holds a PhD from the University of California, San Diego, and is the founder and former CEO of Parity, a company focused on ethical AI, saw her role in her own company’s history minimized by the New York Times.
A new training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot first needs to understand that it has to go to the kitchen, find the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
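To make that concrete, here is a minimal sketch of the idea in Python. It is not Google DeepMind’s code: the call_llm() helper and the prompt wording are hypothetical stand-ins for whatever model interface a real robot stack would use.

```python
# Minimal sketch of LLM-based task planning. call_llm() is a hypothetical
# placeholder for a real language-model API; nothing here is DeepMind's code.

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around a large language model."""
    raise NotImplementedError("plug in a real model client here")

def plan_task(command: str) -> list[str]:
    """Ask the LLM to decompose a high-level command into robot substeps."""
    prompt = (
        "You control a home robot. Break the user's request into short, "
        "numbered physical steps the robot can execute.\n"
        f"Request: {command}\n"
        "Steps:"
    )
    # Keep only non-empty lines, e.g. "1. Go to the kitchen."
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

# plan_task("Bring me a Coke") might return steps like:
# ["1. Go to the kitchen.", "2. Open the refrigerator.",
#  "3. Grab a Coke.", "4. Bring it back to the user."]
```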
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
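As a rough sketch of how that triage could look in code (the candidate actions, scores, and threshold below are invented for illustration, not taken from the KnowNo paper), the robot proceeds on its own only when exactly one option clears the confidence bar:

```python
# Sketch of KnowNo-style triage. The candidate actions, scores, and threshold
# are illustrative; in the real system the threshold is calibrated statistically
# so the retained options cover the correct action with high probability.

def prediction_set(scored: dict[str, float], threshold: float) -> list[str]:
    """Keep every candidate action whose confidence clears the threshold."""
    return [action for action, score in scored.items() if score >= threshold]

# LLM-scored next actions for "Put the bowl in the microwave" when both a
# metal bowl and a plastic bowl are sitting on the counter:
scored_options = {
    "put the plastic bowl in the microwave": 0.52,
    "put the metal bowl in the microwave": 0.41,
    "do nothing": 0.07,
}

candidates = prediction_set(scored_options, threshold=0.35)
if len(candidates) == 1:
    print(f"Executing: {candidates[0]}")  # instruction was unambiguous
else:
    print("I need clarification. Did you mean:")  # ambiguous: ask the human
    for action in candidates:
        print(f"  - {action}")
```

In the published system, that threshold comes from conformal prediction, which is what lets the researchers make statistical guarantees about how often the correct action is among the options the robot keeps.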
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microrobots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that earned sustainable aviation fuels a spot on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. The idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible, launching a series of small-scale, high-altitude tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.