Learning about AI with Google Brain and Landing AI founder Andrew Ng
Published 2 years ago by Drew Simpson
This interview has been condensed and lightly edited for clarity.
MIT Technology Review: I’m sure people frequently ask you, “How do I build an AI-first business?” What do you usually say to that?
Andrew Ng: I usually say, “Don’t do that.” If I go to a team and say, “Hey, everyone, please be AI-first,” that tends to focus the team on technology, which might be great for a research lab. But in terms of how I execute the business, I tend to be customer-led or mission-led, almost never technology-led.
You now have this new venture called Landing AI. Can you tell us a bit about what it is, and why you chose to work on it?
After heading the AI teams at Google and Baidu, I realized that AI has transformed consumer software internet, like web search and online advertising. But I wanted to take AI to all of the other industries, which is an even bigger part of the economy. So after looking at a lot of different industries, I decided to focus on manufacturing. I think that multiple industries are AI-ready, but one of the patterns for an industry being more AI-ready is if it’s undergone some digital transformation so there’s some data. That creates an opportunity for AI teams to come in to use the data to create value.
So one of the projects that I’ve been excited about recently is manufacturing visual inspection. Can you look at a picture of a smartphone coming off the manufacturing line and see if there’s a defect in it? Or look at an auto component and see if there’s a dent in it? One huge difference is that in consumer software internet, maybe you have a billion users and a huge amount of data. But in manufacturing, no factory has manufactured a billion or even a million scratched smartphones. Thank goodness for that. So the challenge is, can you get an AI to work with a hundred images? It turns out often you can. I’ve actually been surprised quite a lot of times with how much you can do with even modest amounts of data. And so even though all the hype and excitement and PR around AI is on the giant data sets, I feel like there’s a lot of room we need to grow as well to break open these other applications where the challenges are quite different.
How do you do that?
A very frequent mistake I see CEOs and CIOs make: they say to me something like “Hey, Andrew, we don’t have that much data—my data’s a mess. So give me two years to build a great IT infrastructure. Then we’ll have all this great data on which to build AI.” I always say, “That’s a mistake. Don’t do that.” First, I don’t think any company on the planet today—maybe not even the tech giants—thinks their data is completely clean and perfect. It’s a journey. Spending two or three years to build a beautiful data infrastructure means that you’re lacking feedback from the AI team to help prioritize what IT infrastructure to build.
For example, if you have a lot of users, should you prioritize asking them questions in a survey to get a little bit more data? Or in a factory, should you prioritize upgrading the sensor from something that records the vibrations 10 times a second to maybe 100 times a second? It is often starting to do an AI project with the data you already have that enables an AI team to give you the feedback to help prioritize what additional data to collect.
In industries where we just don’t have the scale of consumer software internet, I feel like we need to shift in mindset from big data to good data. If you have a million images, go ahead, use it—that’s great. But there are lots of problems that can use much smaller data sets that are cleanly labeled and carefully curated.
Could you give an example? What do you mean by good data?
Let me first give an example from speech recognition. When I was working with voice search, you would get audio clips where you would hear someone say, “Um today’s weather.” The question is, what is the right transcription for that audio clip? Is it “Um (comma) today’s weather,” or is it “Um (dot, dot, dot) today’s weather,” or is the “Um” something we just don’t transcribe? It turns out any one of these is fine, but what is not fine is if different transcribers use each of the three labeling conventions. Then your data is noisy, and it hurts the speech recognition system. Now, when you have millions or a billion users, you can have that noisy data and just average it—the learning algorithm will do fine. But if you are in a setting where you have a smaller data set—say, a hundred examples—then this type of noisy data has a huge impact on performance.
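As a rough illustration of what enforcing a single transcription convention might look like, here is a minimal sketch (the convention, helper name, and filler list are assumptions for illustration, not tooling described in the interview) that collapses all three labeling styles into the same string before training:

```python
import re

# Assumed convention for this sketch: drop filler words entirely, so every
# transcriber's output collapses to the same label.
FILLERS = {"um", "uh", "er"}

def normalize_transcript(text: str) -> str:
    """Lowercase the clip's transcript, strip punctuation, and drop fillers."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(t for t in tokens if t not in FILLERS)

# The same audio clip, labeled three different ways by three transcribers:
labels = ["Um, today's weather", "Um... today's weather", "today's weather"]
print({normalize_transcript(t) for t in labels})  # one consistent label remains
```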
Another example from manufacturing: we did a lot of work on steel inspection. If you drive a car, the side of your car was once made of a sheet of steel. Sometimes there are little wrinkles in the steel, or little dents or specks on it. So you can use a camera and computer vision to see if there are defects or not. But different labelers will label the data differently. Some will put a giant bounding box around the whole region. Some will put little bounding boxes around the little particles. When you have a modest data set, making sure that the different quality inspectors label the data consistently—that turns out to be one of the most important things.
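One way a team might catch that kind of inconsistency before training is to compare the boxes two inspectors drew on the same image and flag images where their annotations barely overlap. The intersection-over-union check, inspector names, and threshold below are illustrative assumptions, not a description of Landing AI’s actual process:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def flag_inconsistent(labels_1, labels_2, threshold=0.5):
    """Flag images where no pair of boxes from the two inspectors overlaps much."""
    flagged = []
    for image_id in labels_1.keys() & labels_2.keys():
        best = max((iou(a, b) for a in labels_1[image_id] for b in labels_2[image_id]),
                   default=0.0)
        if best < threshold:
            flagged.append(image_id)
    return flagged

# Hypothetical annotations: one inspector draws a single large box around the
# whole defective region, the other draws small boxes around individual specks.
inspector_a = {"sheet_07": [(10, 10, 200, 120)]}
inspector_b = {"sheet_07": [(15, 15, 30, 30), (150, 90, 170, 110)]}
print(flag_inconsistent(inspector_a, inspector_b))  # ['sheet_07'] gets reviewed
```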
For a lot of AI projects, the open-source model you download off GitHub—the neural network that you can get from literature—is good enough. Not for all problems, but the main problems. So I’ve gone to many of my teams and said, “Hey, everyone, the neural network is good enough. Let’s not mess with the code anymore. The only thing you’re going to do now is build processes to improve the quality of the data.” And it turns out that often results in faster improvements to performance of the algorithm.
What is the data size you are thinking about when you say smaller data sets? Are you talking about a hundred examples? Ten examples?
Machine learning is so diverse that it’s become really hard to give one-size-fits-all answers. I’ve worked on problems where I had about 200 to 300 million images. I’ve also worked on problems where I had 10 images, and everything in between. When I look at manufacturing applications, I think something like tens or maybe a hundred images for a defect class is not unusual, but there’s very wide variance even within the factory.
I do find that the AI practices switch over when the training set sizes go under, let’s say, 10,000 examples, because that’s sort of the threshold where the engineer can basically look at every example and design it themselves and then make a decision.
Recently I was chatting with a very good engineer in one of the large tech companies. And I asked, “Hey, what do you do if the labels are inconsistent?” And he said, “Well, we have this team of several hundred people overseas that does the labeling. So I’ll write the labeling instructions, get three people to label every image, and then I’ll take an average.” And I said, “Yep, that’s the right thing to do when you have a giant data set.” But when I work with a smaller team and the labels are inconsistent, I just track down the two people that disagree with each other, get both of them on a Zoom call, and have them talk to each other to try to reach a resolution.
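To make the contrast concrete, here is a minimal sketch of both approaches: a majority vote over redundant labels for the large-team setting, and a report of conflicting examples that a small team could walk through on a call. The annotator names and label values are made up for illustration:

```python
from collections import Counter

def majority_label(votes):
    """Large-team approach: keep whichever label most annotators agreed on."""
    return Counter(votes).most_common(1)[0][0]

def disagreements(annotations):
    """Small-team approach: surface the examples (and people) that conflict."""
    return [(image_id, votes) for image_id, votes in annotations.items()
            if len(set(votes.values())) > 1]

# Hypothetical labels from three annotators on two images.
annotations = {
    "img_001": {"alice": "scratch", "bob": "scratch", "carol": "scratch"},
    "img_002": {"alice": "dent", "bob": "scratch", "carol": "dent"},
}
print(majority_label(annotations["img_002"].values()))  # 'dent' wins the vote
print(disagreements(annotations))  # only img_002 needs the follow-up call
```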
I want to turn our attention now to talk about your thoughts on the general AI industry. The Algorithm is our AI newsletter, and I gave our readers an opportunity to submit some questions to you in advance. One reader asks: AI development seems to have mostly bifurcated toward either academic research or large-scale, resource-intensive, big company programs like OpenAI and DeepMind. That doesn’t really leave a lot of space for small startups to contribute. What do you think are some practical problems that smaller companies can really focus on to help drive real commercial adoption of AI?
I think a lot of the media attention tends to be on the large corporations, and sometimes on the large academic institutions. But if you go to academic conferences, there’s plenty of work done by smaller research groups and research labs. And when I speak with different people in different companies and industries, I feel like there are so many business applications they could use AI to tackle. I usually go to business leaders and ask, “What are your biggest business problems? What are the things that worry you the most?” so I can better understand the goals of the business and then brainstorm whether or not there is an AI solution. And sometimes there isn’t, and that’s fine.
Maybe I’ll just mention a couple of gaps that I find exciting. I think that today building AI systems is still very manual. You have a few brilliant machine-learning engineers and data scientists do things in a computer and then push things to production. There’s a lot of manual steps in the process. So I’m excited about ML ops [machine learning operations] as an emerging discipline to help make the process of building and deploying AI systems more systematic.
Also, if you look at a lot of the typical business problems—all the functions from marketing to talent—there’s a lot of room for automation and efficiency improvement.
I also hope that the AI community can look at the biggest social problems—see what we can do for climate change or homelessness or poverty. In addition to the sometimes very valuable business problems, we should work on the biggest social problems too.
How do you actually go about the process of identifying whether there is an opportunity to pursue something with machine learning for your business?
I will try to learn a little bit about the business myself and try to help the business leaders learn a little bit about AI. Then we usually brainstorm a set of projects, and for each of the ideas, I will do both technical diligence and business diligence. We’ll look at: Do you have enough data? What’s the accuracy? Is there a long tail when you deploy into production? How do you feed the data back and close the loop for continuous learning? So—making sure the problem is technically feasible. And then business diligence: we make sure that this will achieve the ROI that we’re hoping for. After that process, you have the usual, like estimating the resources, milestones, and then hopefully going into execution.
One other suggestion: it’s more important to start quickly, and it’s okay to start small. My first meaningful business application at Google was speech recognition, not web search or advertising. But by helping the Google speech team make speech recognition more accurate, that gave the Brain team the credibility and the wherewithal to go after bigger and bigger partnerships. So Google Maps was the second big partnership where we used computer vision—to read house numbers to geolocate houses on Google Maps. And only after those first two successful projects did I have a more serious conversation with the advertising team. So I think I see more companies fail by starting too big than fail by starting too small. It’s fine to do a smaller project to get started as an organization to learn what it feels like to use AI, and then go on to build bigger successes.
What is one thing that our audience should start doing tomorrow to implement AI in their companies?
Jump in. AI is causing a shift in the dynamics of many industries. So if your company isn’t already making pretty aggressive and smart investments, this is a good time.
Technology and industry convergence: A historic opportunity
Published 03/28/2023 by Drew Simpson
And it’s that combination of technology and human ingenuity, as we say, and as Daniela just alluded to in her medical example on cancer treatment, that is really where the greatest value and the greatest impact is going to come. We believe the companies which are going to be leaders in the next decade are going to need to harness five forces, and all of these forces are going to require technology and ingenuity to come together. They’re going to require organizations to work across all elements of their organization, to work with new partners, to expand into new areas and ecosystems, and to learn and collaborate with innovators across industry, academia, and beyond to really push the boundaries of science and impact.
The five forces that we see right now, the trends that we’re seeing that are impacting our clients the most really start with what we believe underpins everything right now, and that is something we’re calling total enterprise reinvention. And we really started to see this come to the fore as we moved through covid. And what we’re seeing now is that as companies are looking to enter these new waves of change and opportunity, they’re needing to execute strategies to change and transform all parts of their business through technology, data, and AI, as Daniela just talked about, to enable new ways of growth, new ways of engaging customers, new business models, new opportunities, but they’re doing it in a very different way. They’re doing it in a way where they’re looking at every part of their organization and the technology and digital core that underpins it at the same time, so we believe we’re in the early stages of this profound change, but we believe it’s going to be the biggest change since the industrial revolution.
And embracing total enterprise reinvention often requires something that we call compressed transformation, which is a set of bold transformational programs that, as I said, span the entire organization with different groups working together in ways that they never did before in parallel, but in very accelerated timeframes. And underpinning all this is leading edge technology, data, and AI. At the same time, the second trend we’re seeing with our clients, and we certainly are all reading about it and hearing about it for the past few years, is the power of talent and the importance of the human side of this equation. And we think that one of the forces that’s going to shape the next decade with talent at front and center is not just the ability to access talent, but really for organizations to learn to be creators of talent, not just consumers. To unlock the potential of the humans in their workforce. And that’s going to require technology to unlock that potential. And again, as Daniela just gave in some of her examples, to complement the talent that they have in the organization.
The third is sustainability. That trend is … I would say personally, I’m very pleased to see this trend underpinning everything that we’re doing and everything that our clients are thinking about right now. We believe that every business needs to be a sustainable business. And every industry is looking at this in a way that is unique to their industries. But whether it’s consumers, employees, business partners, regulators, or investors, we know that we’re moving in a direction where companies are being required to act. To make a change, not just around climate and energy, but areas like food insecurity and equality. All of those issues are coming to the fore, and underpinning this, again, is the ability to leverage new bleeding-edge technologies to accelerate the pace of change and find solutions to the issues that we’re facing as a planet and across society.
The fourth force that we’re seeing is the metaverse. Now, there’s been a lot of confusion, and a lot of talk about the metaverse, but our view is that the metaverse is a continuum, and we’re seeing this come to the fore in the marketplace right now. As we look at the metaverse and how that’s going to impact, just if you think all the way back to when the internet was in its early stages, we believe that the impact is going to be that great. And while it’s early stages and not everybody can see exactly how the impact is going to be there, we believe that this is going to impact not just consumers, and of course interesting areas like virtual reality and using AI to bring new experiences to life, but also to look at extended reality, to look at digital twins, smart objects. So how do cars and factories run? What’s happening with edge computing? Looking at blockchain and new ways of payment. All of those things are going to change the way businesses operate and really the way society operates, and we believe that this is going to underpin change as we move forward over the next five to 10 years.
And then lastly, the fifth force is what we’re calling ongoing tech revolution. And the ongoing tech revolution is a pretty broad expansive category, often pushed by our friends in the academic world around science, but we believe in the coming decade, the pace of technological innovation is not just going to continue but accelerate, which we believe is going to create positive change. New technology, whether it’s in quantum computing or it’s in areas, as I said, like blockchain or material science or biology, or even space, we believe this is going to open brand new areas of opportunity. And all of these things are allowing companies, our clients to find new ways to not just serve their customers, but to monetize their investments, to impact society, to impact their employees, and to drive positive change for their business as well as for the world around them.
Laurel: Yeah. Kathleen, I feel like some of that acceleration happened in these last few pandemic years so that businesses and consumers are operating differently from remote healthcare solutions to digital payments, greater expectations of those immersive virtual experiences. But how can organizations and technologists alike then continue to innovate to anticipate the future, or as Accenture likes to say, learn from the future? You have some good examples there, but the five different areas all kind of also lead to this acceptance of change.
Kathleen: Yeah, they do. And they also lead to embedding data in everything, in new ways into every change that organizations are putting forward. When we think of learning through the future, we think about organizations and leaders who are constantly seeking new data and insights, not just from inside their organization, but from outside their organizations’ four walls. So we like to use the phrase intentional futurists. These are people and leaders and organizations who use AI-based analysis to find patterns, anticipate trends, detect new sources of growth opportunities, understand their consumers, their customers, other enterprises, the markets and their employees better.
Delivering insights at scale by modernizing data
Published 03/28/2023 by Drew Simpson
This data is often siloed in enterprise resource planning (ERP) systems. However, with ERP data modernization, businesses can integrate data from multiple sources, which will ensure data accessibility and create the framework for digital transformation. Migrating legacy databases to the cloud also gives companies access to AI and ML capabilities that can reinvent their organization. According to Anil Nagaraj, principal in Analytic Insights, Cloud & Digital at PwC, companies that modernize their ERP data see increased efficiencies, cost savings, and greater customer engagement, especially when it’s built on a cloud platform like Microsoft Azure.
Cloud transformation—along with ERP data modernization—democratizes data, empowering employees to make decisions that directly impact their segment of business. And in an increasingly competitive marketplace, becoming data-driven means organizations can make faster, timelier, and smarter decisions.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
The Download: the threat of microplastics, and mitigating AI bias
Published 03/28/2023 by Drew Simpson
The news: While we know that tiny pieces of plastic are everywhere, we don’t fully understand what they’re doing to us or other animals. Now, new research in seabirds hints that it might affect gut microbiomes—the trillions of microbes that make a home in the intestines and play an important role in animals’ health.
The findings: Seabirds ingest plastic from the ocean, which can accumulate in their stomachs. The research shows it leaves the birds with more potentially harmful microbes in the gut, including some that are known to be resistant to antibiotics, and others with the potential to cause disease.
Why it matters: The report expands our view on what plastic pollution is doing to wildlife, and shines a light on the wide spectrum of adverse effects brought about by current plastic levels in the environment. The next step is to work out what this might mean for their health and the health of other animals, including humans. Read the full story.
—Jessica Hamzelou
What if we could just ask AI to be less biased?
Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it’s a white man with glasses.
But what if you could simply ask AI models to give you less biased answers? A new tool called Fair Diffusion makes it easier to tweak AI models to generate the types of images you want, such as swapping out the white men in the images for women or people of different ethnicities. A similar technique also seems to work for language models.
These methods of combating AI bias are welcome—and raise the obvious question of whether they should be baked into the models from the start. Read the full story.