Tech
Learning about AI with Google Brain and Landing AI founder Andrew Ng
Published 3 years ago by Drew Simpson
This interview has been condensed and lightly edited for clarity.
MIT Technology Review: I’m sure people frequently ask you, “How do I build an AI-first business?” What do you usually say to that?
Andrew Ng: I usually say, “Don’t do that.” If I go to a team and say, “Hey, everyone, please be AI-first,” that tends to focus the team on technology, which might be great for a research lab. But in terms of how I execute the business, I tend to be customer-led or mission-led, almost never technology-led.
You now have this new venture called Landing AI. Can you tell us a bit about what it is, and why you chose to work on it?
After heading the AI teams at Google and Baidu, I realized that AI has transformed the consumer software internet, like web search and online advertising. But I wanted to take AI to all of the other industries, which are an even bigger part of the economy. So after looking at a lot of different industries, I decided to focus on manufacturing. I think multiple industries are AI-ready, but one pattern in an industry being more AI-ready is that it has undergone some digital transformation, so there's some data. That creates an opportunity for AI teams to come in, use the data, and create value.
So one of the projects I've been excited about recently is manufacturing visual inspection. Can you look at a picture of a smartphone coming off the manufacturing line and see if there's a defect in it? Or look at an auto component and see if there's a dent in it? One huge difference is that in the consumer software internet you might have a billion users and a huge amount of data, but in manufacturing no factory has manufactured a billion or even a million scratched smartphones. Thank goodness for that. So the challenge is, can you get an AI to work with a hundred images? It turns out you often can. I've actually been surprised quite a lot of times by how much you can do with even modest amounts of data. So even though all the hype and excitement and PR around AI is about giant data sets, I feel there's a lot of room to grow in breaking open these other applications, where the challenges are quite different.
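To make the hundred-image claim concrete: one common approach (Ng doesn't specify Landing AI's tooling here) is transfer learning, where a network pretrained on a large generic image corpus is reused and only its final classification layer is retrained on the small defect dataset. A minimal PyTorch sketch, purely illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone; freeze it so the
# ~100 defect images only need to fit the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: defect / no defect

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a small batch of labeled inspection images."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```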
How do you do that?
A very frequent mistake I see CEOs and CIOs make: they say to me something like “Hey, Andrew, we don’t have that much data—my data’s a mess. So give me two years to build a great IT infrastructure. Then we’ll have all this great data on which to build AI.” I always say, “That’s a mistake. Don’t do that.” First, I don’t think any company on the planet today—maybe not even the tech giants—thinks their data is completely clean and perfect. It’s a journey. Spending two or three years to build a beautiful data infrastructure means that you’re lacking feedback from the AI team to help prioritize what IT infrastructure to build.
For example, if you have a lot of users, should you prioritize asking them questions in a survey to get a little bit more data? Or in a factory, should you prioritize upgrading the sensor from something that records the vibrations 10 times a second to maybe 100 times a second? It is often starting to do an AI project with the data you already have that enables an AI team to give you the feedback to help prioritize what additional data to collect.
In industries where we just don’t have the scale of consumer software internet, I feel like we need to shift in mindset from big data to good data. If you have a million images, go ahead, use it—that’s great. But there are lots of problems that can use much smaller data sets that are cleanly labeled and carefully curated.
Could you give an example? What do you mean by good data?
Let me first give an example from speech recognition. When I was working with voice search, you would get audio clips where you would hear someone say, “Um today’s weather.” The question is, what is the right transcription for that audio clip? Is it “Um (comma) today’s weather,” or is it “Um (dot, dot, dot) today’s weather,” or is the “Um” something we just don’t transcribe? It turns out any one of these is fine, but what is not fine is if different transcribers use each of the three labeling conventions. Then your data is noisy, and it hurts the speech recognition system. Now, when you have millions or a billion users, you can have that noisy data and just average it—the learning algorithm will do fine. But if you are in a setting where you have a smaller data set—say, a hundred examples—then this type of noisy data has a huge impact on performance.
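The remedy is to pick one convention and enforce it before training. As a toy illustration (not Google's actual pipeline), here is a normalizer that maps all three transcription styles onto a single canonical label:

```python
import re

def normalize_transcript(text: str) -> str:
    """Collapse the three "um" conventions into one: drop the filler.
    Which convention you pick matters less than applying it everywhere."""
    text = re.sub(r"\b[Uu]m+\b[,.…]*\s*", "", text)
    return re.sub(r"\s+", " ", text).strip()

# All three labeling styles now produce the same training target.
assert normalize_transcript("Um, today's weather") == "today's weather"
assert normalize_transcript("Um... today's weather") == "today's weather"
assert normalize_transcript("today's weather") == "today's weather"
```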
Another example from manufacturing: we did a lot of work on steel inspection. If you drive a car, the side of your car was once made of a sheet of steel. Sometimes there are little wrinkles in the steel, or little dents or specks on it. So you can use a camera and computer vision to see if there are defects or not. But different labelers will label the data differently. Some will put a giant bounding box around the whole region. Some will put little bounding boxes around the little particles. When you have a modest data set, making sure that the different quality inspectors label the data consistently—that turns out to be one of the most important things.
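One way a small team can catch that kind of inconsistency is to compare two inspectors' boxes on the same image by intersection-over-union and send low-overlap images to a labeling-convention review. A sketch, simplified to one box per inspector per image:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def flag_inconsistent(boxes_a, boxes_b, min_iou=0.5):
    """Indices of images where two inspectors' boxes barely overlap,
    e.g. one giant region box vs. a tight box around a single speck."""
    return [i for i, (a, b) in enumerate(zip(boxes_a, boxes_b))
            if iou(a, b) < min_iou]
```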
For a lot of AI projects, the open-source model you download off GitHub (the neural network you can get from the literature) is good enough. Not for all problems, but for the main ones. So I've gone to many of my teams and said, "Hey, everyone, the neural network is good enough. Let's not mess with the code anymore. The only thing you're going to do now is build processes to improve the quality of the data." And it turns out that often results in faster improvements to the algorithm's performance.
What is the data size you are thinking about when you say smaller data sets? Are you talking about a hundred examples? Ten examples?
Machine learning is so diverse that it’s become really hard to give one-size-fits-all answers. I’ve worked on problems where I had about 200 to 300 million images. I’ve also worked on problems where I had 10 images, and everything in between. When I look at manufacturing applications, I think something like tens or maybe a hundred images for a defect class is not unusual, but there’s very wide variance even within the factory.
I do find that AI practices switch over when training set sizes go under, let's say, 10,000 examples, because that's roughly the threshold at which an engineer can look at every example themselves and make decisions about it.
Recently I was chatting with a very good engineer in one of the large tech companies. And I asked, “Hey, what do you do if the labels are inconsistent?” And he said, “Well, we have this team of several hundred people overseas that does the labeling. So I’ll write the labeling instructions, get three people to label every image, and then I’ll take an average.” And I said, “Yep, that’s the right thing to do when you have a giant data set.” But when I work with a smaller team and the labels are inconsistent, I just track down the two people that disagree with each other, get both of them on a Zoom call, and have them talk to each other to try to reach a resolution.
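Both regimes fit in a few lines. With three labels per image, a majority vote is the big-team answer, and the leftover disagreements are exactly the cases a small team would resolve on a call (a sketch, not anyone's production pipeline):

```python
from collections import Counter

def aggregate(votes):
    """Majority-vote one image's labels; the flag marks disagreement
    that a small team should reconcile by hand rather than average away."""
    label, n = Counter(votes).most_common(1)[0]
    return label, n < len(votes)

label, disputed = aggregate(["scratch", "scratch", "dent"])
# label == "scratch"; disputed is True, so get the labelers who
# disagree on a call instead of silently outvoting one of them.
```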
I want to turn our attention now to talk about your thoughts on the general AI industry. The Algorithm is our AI newsletter, and I gave our readers an opportunity to submit some questions to you in advance. One reader asks: AI development seems to have mostly bifurcated toward either academic research or large-scale, resource-intensive, big company programs like OpenAI and DeepMind. That doesn’t really leave a lot of space for small startups to contribute. What do you think are some practical problems that smaller companies can really focus on to help drive real commercial adoption of AI?
I think a lot of the media attention tends to be on the large corporations, and sometimes on the large academic institutions. But if you go to academic conferences, there’s plenty of work done by smaller research groups and research labs. And when I speak with different people in different companies and industries, I feel like there are so many business applications they could use AI to tackle. I usually go to business leaders and ask, “What are your biggest business problems? What are the things that worry you the most?” so I can better understand the goals of the business and then brainstorm whether or not there is an AI solution. And sometimes there isn’t, and that’s fine.
Maybe I'll just mention a couple of gaps that I find exciting. I think that building AI systems today is still very manual. You have a few brilliant machine-learning engineers and data scientists doing things on a computer and then pushing things to production, and there are a lot of manual steps in the process. So I'm excited about ML ops [machine learning operations] as an emerging discipline that helps make the process of building and deploying AI systems more systematic.
Also, if you look at a lot of the typical business problems—all the functions from marketing to talent—there’s a lot of room for automation and efficiency improvement.
I also hope that the AI community can look at the biggest social problems—see what we can do for climate change or homelessness or poverty. In addition to the sometimes very valuable business problems, we should work on the biggest social problems too.
How do you actually go about the process of identifying whether there is an opportunity to pursue something with machine learning for your business?
I will try to learn a little bit about the business myself and try to help the business leaders learn a little bit about AI. Then we usually brainstorm a set of projects, and for each of the ideas, I will do both technical diligence and business diligence. We'll look at: Do you have enough data? What's the accuracy? Is there a long tail when you deploy into production? How do you feed the data back and close the loop for continuous learning? So we make sure the problem is technically feasible. And then business diligence: we make sure that this will achieve the ROI that we're hoping for. After that process come the usual steps, like estimating the resources and milestones, and then hopefully going into execution.
One other suggestion: it's more important to start quickly, and it's okay to start small. My first meaningful business application at Google was speech recognition, not web search or advertising. But by helping the Google speech team make speech recognition more accurate, the Brain team gained the credibility and the wherewithal to go after bigger and bigger partnerships. Google Maps was the second big partnership, where we used computer vision to read house numbers and geolocate houses on Google Maps. And only after those first two successful projects did I have a more serious conversation with the advertising team. So I see more companies fail by starting too big than by starting too small. It's fine to do a smaller project to get started as an organization, learn what it feels like to use AI, and then go on to build bigger successes.
What is one thing that our audience should start doing tomorrow to implement AI in their companies?
Jump in. AI is causing a shift in the dynamics of many industries. So if your company isn’t already making pretty aggressive and smart investments, this is a good time.
Robots often can't tell when an instruction is ambiguous. A new training model, dubbed "KnowNo," aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
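A sketch of that planning step, with `complete` standing in for any prompt-in, text-out language model (the function name and prompt are illustrative, not a specific product's API):

```python
from typing import Callable, List

def plan_steps(task: str, complete: Callable[[str], str]) -> List[str]:
    """Ask an LLM to decompose a household task into robot substeps."""
    prompt = ("Break this task into short, numbered steps a home robot "
              f"can execute one at a time.\nTask: {task}")
    # Keep only numbered lines like "1. Go to the kitchen".
    return [line.split(".", 1)[1].strip()
            for line in complete(prompt).splitlines()
            if line.strip() and line.strip()[0].isdigit() and "." in line]

# plan_steps("bring me a Coke", llm) might yield:
# ["Go to the kitchen", "Find the refrigerator", "Open the fridge door", ...]
```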
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
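In spirit, the decision rule looks like the sketch below; details are simplified, and the real method calibrates its threshold statistically (via conformal prediction) rather than hard-coding it. `score_action` is a hypothetical stand-in for the language model's confidence in each candidate:

```python
from typing import Callable, List, Tuple

def knowno_style_step(
    instruction: str,
    candidates: List[str],
    score_action: Callable[[str, str], float],  # hypothetical LLM scorer
    threshold: float = 0.15,  # the real method calibrates this value
) -> Tuple[List[str], bool]:
    """Keep every action whose normalized score clears the threshold.
    Exactly one survivor: act autonomously. Several survivors: the
    instruction is ambiguous, so ask the human which one they meant."""
    scores = [(a, score_action(instruction, a)) for a in candidates]
    total = sum(s for _, s in scores) or 1.0
    plausible = [a for a, s in scores if s / total >= threshold]
    return plausible, len(plausible) != 1
```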
Tech
The Download: inside the first CRISPR treatment, and smarter robots
Published 2 days ago (12/08/2023) by Drew Simpson
The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.
—June Kim
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
—Cassandra Willyard
Tech
5 things we didn't put on our 2024 list of 10 Breakthrough Technologies
Published 2 days ago (12/08/2023) by Drew Simpson
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer's patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing harmful plaques out of the brain. In July, the FDA approved Leqembi, from Eisai and Biogen, and Eli Lilly's donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they're hard to administer: patients receive doses via IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun's energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it's possible by launching a series of small-scale, high-altitude tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It's not clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.