The Download: the origins of life, and building Facebook’s AI empire

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How did life begin?

How life begins is one of the biggest and hardest questions in science. All we know is that something happened on Earth more than 3.5 billion years ago, and it may well have occurred on many other worlds in the universe as well. 

But we don’t know exactly what did the trick. Somehow a soup of nonliving chemicals like water and methane must have combined and self-organized, growing ever more complex and coordinated, until eventually it gave rise to a living cell. The environment on the primordial Earth must also have been complicated: huge numbers of different chemicals, from metals and minerals to water and gases, all blasted around by winds and volcanic eruptions.

Now, a few researchers are harnessing artificial intelligence to zero in on the winning conditions. The hope is that machine learning tools will help researchers achieve in years what would otherwise take decades, and help us devise a universal theory of the origins of life—one that applies not just on Earth but on any other world. Read the full story.

—Michael Marshall

‘How did life begin?’ is part of our new mini-series The Biggest Questions, which explores how technology is helping probe some of the deepest, most mind-bending mysteries of our existence.

How Facebook went all in on AI

—This is an excerpt from Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets, Jeff Horwitz’s behind-the-scenes look at how the social network came to build its business around artificial intelligence.

In 2006, the U.S. patent office received a filing for “an automatically generated display that contains information relevant to a user about another user of a social network.” 

Rather than forcing people to search through “disorganized” content for items of interest, the system would seek to generate a list of “relevant” information in a “preferred order.” The listed authors were “Zuckerberg et al.” and the product was the News Feed.

The platform’s recommendation systems were still in their infancy, and as an algorithm, the early News Feed wasn’t much. By 2010, the company was looking beyond that crude system toward recommendations driven by machine learning and user behavior.

There was no question that the computer science was dazzling and the gains concrete. But the speed, breadth, and scale of Facebook’s adoption of machine learning came at the cost of comprehensibility. Read the full extract.

AI is at an inflection point, Fei-Fei Li says

Fei-Fei Li is one of the most prominent computer science researchers of our time. The co-director of Stanford’s Human-Centered AI Institute is best known for creating ImageNet, a popular image data set that was pivotal in allowing researchers to train modern AI systems.

In her newly published memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, Li recounts how she went from an immigrant living in poverty to the AI heavyweight she is today. It’s a touching look into the sacrifices immigrants have to make to achieve their dreams, and an insider’s telling of how artificial-intelligence research rose to prominence.

Li recently spoke to Melissa Heikkilä, our senior AI reporter, about the future of AI and the hard problems that lie ahead for the field. Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 We’re getting closer to the first AI-discovered drug   
An experimental frontrunner for treating an incurable lung disease is approaching late-stage trials. (Bloomberg $)
+ AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work. (MIT Technology Review)

2 Anonymized browsing data may not be so anonymous after all
A new report raises concerns over how private the data collected and sold really is. (FT $)
+ It’s shockingly easy to buy sensitive data about US military personnel. (MIT Technology Review)

3 Climate change is ravaging every part of the US
And alarmingly little progress is being made, according to a new White House report. (Vox)
+ Emissions are declining, though. (Wired $)

4 Civil liberties groups are urging the US Senate to curb surveillance powers
They argue these powers jeopardize citizens’ liberty and democracy. (Wired $)

5 AI-generated white faces are more convincing than photographs
But AI still struggles to produce realistic images of people of color. (The Guardian)
+ An online marketplace has introduced an AI bounty program.  (404 Media)
+ How digital beauty filters perpetuate colorism. (MIT Technology Review)

6 China is winning the moon race
The first country to reach it gets to establish crucial mining precedents. (WP $)
+ Scientists in China are producing oxygen on Mars, too. (FT $)

7 Police are relying too heavily on face recognition algorithms 
The systems are inherently biased, and prone to making egregious mistakes. (New Yorker $)
+ The movement to limit face recognition tech might finally get a win. (MIT Technology Review)

8 The US is producing domestic nuclear fuel again
For the first time in 70 years. (IEEE Spectrum)
+ Fusion is on the rise, too. (NYT $)
+ 2023 Climate Tech Companies to Watch: Commonwealth and its compact tokamak. (MIT Technology Review)

9 You can finally delete your Threads account
Free from the worry it’ll take your Instagram account with it. (The Verge)

10 Things aren’t looking good for the Las Vegas Sphere
It’s hemorrhaging money, unsurprisingly. (Motherboard)
+ There’s no escaping it as you stroll along the Vegas Strip. (New Yorker $)

Quote of the day

“A bad $10 kitchen knife, or cheap Bluetooth headset, isn’t going to ruin a household. Choosing the wrong doctor, lawyer or contractor can ruin your life.”

—Curtis Boyd, founder of the Transparency Company, a firm that detects fake Google reviews, explains the serious implications of fake testimonials to The New York Times.

 

The big story

The mothers of Mexico’s missing use social media to search for mass graves

October 2022

Mexico has a long and painful history of kidnappings and disappearances. As of October 5, 2022, there were 105,984 people officially listed as disappeared in the country. More than a third have vanished in the past few years, and while many are thought to have been kidnapped or forcibly recruited by criminal organizations, most are likely dead.

But authorities are still hesitant to get involved in the search for the missing. And so the task continues to fall on families. Much of the work they do now happens over social media, where people widely distribute photographs of missing relatives, coordinate search efforts, and raise awareness of the problem. But the work is not without challenges. Read the full story.

—Chantal Flores

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Nothing to see here, just a cat casually riding a horse.
+ Deciding the color of the year is no joke—billions of dollars rest on it.
+ Why do we say ‘roger that’?
+ For years, internet detectives have been trying to identify a mysterious song. Can you help?
+ Spare a thought for Raichu: the downtrodden Pokémon who can’t seem to catch a break.




These robots know when to ask for help


A new training model, dubbed “KnowNo,” aims to address the problem of ambiguous commands by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.

Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.

For example, when asked to bring you a Coke, the robot first needs to understand that it has to go to the kitchen, find the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.

That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng. 

Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
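
To make that idea concrete, here is a minimal, hypothetical sketch of using an LLM as a high-level planner for a robot. It is an illustration, not the DeepMind team’s code; the llm_complete helper is a stand-in for whatever language-model API you have access to.

```python
# A minimal, hypothetical sketch of LLM-based task planning (not DeepMind's code).
# `llm_complete` is a placeholder for any text-completion API.

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    raise NotImplementedError("connect this to an LLM client of your choice")

def plan_steps(request: str) -> list[str]:
    """Ask the LLM to break a household request into low-level steps a robot could follow."""
    prompt = (
        "Break the following request into short, numbered steps for a home robot.\n"
        f"Request: {request}\n"
        "Steps:\n"
    )
    reply = llm_complete(prompt)
    # Keep only non-empty lines, e.g. "1. Go to the kitchen".
    return [line.strip() for line in reply.splitlines() if line.strip()]

# plan_steps("Bring me a Coke") might yield something like:
# ["1. Go to the kitchen", "2. Find the refrigerator",
#  "3. Open the door and take out a Coke", "4. Bring it back to the user"]
```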

The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.

KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels. 

When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
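
As a rough illustration of that loop, here is a simplified sketch in Python. It is not the published KnowNo implementation: the score callback is a hypothetical helper that returns a probability-like confidence for each candidate action, and the thresholds are arbitrary.

```python
# Rough sketch of an ask-when-uncertain loop in the spirit of KnowNo.
# A simplification for illustration, not the published method: `score` is a
# hypothetical callback returning a confidence in [0, 1] for each candidate
# action, and both thresholds below are arbitrary choices.

from typing import Callable

def decide_or_ask(
    instruction: str,
    candidates: list[str],
    score: Callable[[str, str], float],
    threshold: float = 0.8,
) -> str:
    """Execute the best action if it is clearly confident; otherwise ask the human."""
    scored = sorted(((score(instruction, c), c) for c in candidates), reverse=True)
    best_score, best_action = scored[0]
    # Keep every option whose confidence is close to the best; more than one
    # survivor means the instruction is ambiguous and clarification is needed.
    plausible = [action for s, action in scored if s >= 0.9 * best_score]
    if best_score >= threshold and len(plausible) == 1:
        return best_action
    return "Ask the human: did you mean " + " or ".join(plausible) + "?"

# With "Put the bowl in the microwave" and two bowls on the counter, the two
# candidate actions would likely score similarly, so the robot asks rather than guessing.
```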


The Download: inside the first CRISPR treatment, and smarter robots


The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.

Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.

—June Kim

Medical microrobots that travel inside the body are (still) on their way

The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to the most inaccessible tumors, and even help guide embryos toward implantation.

We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.

—Cassandra Willyard


5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies


We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).  

Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now. 

New drugs for Alzheimer’s disease

Alzheimer’s patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi from Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. They are also hard to administer: patients get doses via IV infusion and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.

Sustainable aviation fuel 

Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.  

Solar geoengineering

One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale, high-altitude tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not yet clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.
