We need to design distrust into AI systems to make them safer
Published 2 years ago by Drew Simpson
It’s interesting that you’re talking about how, in these kinds of scenarios, you have to actively design distrust into the system to make it safer.
Yes, that’s what you have to do. We’re actually trying an experiment right now around the idea of denial of service. We don’t have results yet, and we’re wrestling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why sometimes you may not want to give AI the ability to deny a service either. How do you remove service if someone really needs it?
But here’s an example with the Tesla distrust thing. Denial of service would be: I create a profile of your trust, which I can do based on how many times you deactivated or disengaged from holding the wheel. Given those profiles of disengagement, I can then model at what point you are fully in this trust state. We have done this, not with Tesla data, but our own data. And at a certain point, the next time you come into the car, you’d get a denial of service. You do not have access to the system for X time period.
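To make the denial-of-service idea concrete, here is a minimal sketch in Python of how a disengagement-based trust profile and lockout might fit together. The trust proxy, the 0.8 threshold, and the 24-hour lockout are all illustrative assumptions; the interview gives no numbers, and the researchers’ actual model is certainly more sophisticated.

```python
from dataclasses import dataclass

# Illustrative values -- the interview does not specify any numbers.
OVERTRUST_THRESHOLD = 0.8  # assumed: modeled trust above this triggers lockout
LOCKOUT_HOURS = 24         # assumed stand-in for the "X time period"

@dataclass
class DriverTrustProfile:
    """Profile of a driver's trust, inferred from how often they disengage."""
    sessions: int = 0
    disengagements: int = 0

    def record_session(self, driver_disengaged: bool) -> None:
        self.sessions += 1
        if driver_disengaged:
            self.disengagements += 1

    @property
    def trust_estimate(self) -> float:
        # Crude proxy: the less often a driver takes back control,
        # the more (over)trusting we assume they have become.
        if self.sessions == 0:
            return 0.0
        return 1.0 - self.disengagements / self.sessions

def hours_locked_out(profile: DriverTrustProfile) -> int:
    """Denial of service: once modeled trust crosses the threshold,
    withhold the autonomy feature for a cooling-off period."""
    return LOCKOUT_HOURS if profile.trust_estimate >= OVERTRUST_THRESHOLD else 0
```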
It’s almost like when you punish a teenager by taking away their phone. You know that teenagers will not do whatever it is that you didn’t want them to do if you link it to their communication modality.
What are some other mechanisms that you’ve explored to enhance distrust in systems?
The other methodology we’ve explored is roughly called explainable AI, where the system provides an explanation with respect to some of its risks or uncertainties. Because all of these systems have uncertainty—none of them are 100%. And a system knows when it’s uncertain. So it could provide that as information in a way a human can understand, so people will change their behavior.
As an example, say I’m a self-driving car, and I have all my map information, and I know certain intersections are more accident prone than others. As we get close to one of them, I would say, “We’re approaching an intersection where 10 people died last year.” You explain it in a way where it makes someone go, “Oh, wait, maybe I should be more aware.”
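As a rough illustration of that pattern, surfacing risk and uncertainty in human terms, here is a hypothetical sketch; the confidence floor, the message wording, and the function itself are invented for the example, not taken from the research.

```python
from typing import Optional

def explain_risk(fatalities_last_year: int, model_confidence: float,
                 confidence_floor: float = 0.9) -> Optional[str]:
    """Translate machine-readable risk and uncertainty into a message
    meant to nudge the human back into an attentive state."""
    if model_confidence < confidence_floor:
        # The system knows when it is uncertain -- say so plainly.
        return ("I'm less certain than usual about this stretch of road. "
                "Please keep your hands on the wheel.")
    if fatalities_last_year > 0:
        return (f"We're approaching an intersection where "
                f"{fatalities_last_year} people died last year.")
    return None  # no warning: stay quiet rather than cry wolf

print(explain_risk(fatalities_last_year=10, model_confidence=0.95))
```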
We’ve already talked about some of your concerns around our tendency to overtrust these systems. What are others? On the flip side, are there also benefits?
The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system that performs differently for women than for men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it.
The positives are that, in general, automated systems are better than people. I think they can be even better, and I personally would rather interact with an AI system in some situations than with certain humans. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially compared to a novice person, it’s a better outcome. It just might be that the outcome isn’t equal.
In addition to your robotics and AI research, you’ve been a huge proponent of increasing diversity in the field throughout your career. You started a program to mentor at-risk junior high girls 20 years ago, which is well before many people were thinking about this issue. Why is that important to you, and why is it also important for the field?
It’s important to me because I can identify times in my life where someone basically provided me access to engineering and computer science. I didn’t even know it was a thing. And that’s really why, later on, I never had a problem with knowing that I could do it. So I always felt that it was just my responsibility to do the same thing for others that mentors had done for me. As I got older, I also noticed that there were a lot of people in the room who didn’t look like me. So I realized: Wait, there’s definitely a problem here, because people just don’t have the role models, they don’t have access, they don’t even know this is a thing.
And it’s important to the field because everyone brings a different set of experiences. Just like I’d been thinking about human-robot interaction before it was even a thing. It wasn’t because I was brilliant. It was because I looked at the problem in a different way. And when I’m talking to someone who has a different viewpoint, it’s like, “Oh, let’s try to combine and figure out the best of both worlds.”
Airbags kill more women and kids. Why is that? Well, I’m going to say it’s because someone wasn’t in the room to say, “Hey, why don’t we test this on women in the front seat?” There are a bunch of design problems that have killed or endangered certain groups of people. And I would claim that if you go back, it’s because you didn’t have enough people who could say, “Hey, have you thought about this?”—because they’re talking from their own experience, their environment, and their community.
How do you hope AI and robotics research will evolve over time? What is your vision for the field?
If you think about coding and programming, pretty much everyone can do it. There are so many organizations now like Code.org. The resources and tools are there. I would love to have a conversation with a student one day where I ask, “Do you know about AI and machine learning?” and they say, “Dr. H, I’ve been doing that since the third grade!” I want to be shocked like that, because that would be wonderful. Of course, then I’d have to think about what is my next job, but that’s a whole other story.
But I think when you have the tools with coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solution. That would be my dream.
ChatGPT is about to revolutionize the economy. We need to decide what that looks like.
Published 1 day ago on 03/25/2023 by Drew Simpson
Power struggle
When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what a lot of us did: he began playing around with them to see how they might help his work. He carefully documented their performance in a paper in February, noting how well they handled 25 “use cases,” from brainstorming and editing text (very useful) to coding (pretty good with some help) to doing math (not great).
ChatGPT did explain one of the most fundamental principles in economics incorrectly, says Korinek: “It screwed up really badly.” But the mistake, easily spotted, was quickly forgiven in light of the benefits. “I can tell you that it makes me, as a cognitive worker, more productive,” he says. “Hands down, no question for me that I’m more productive when I use a language model.”
When GPT-4 came out, he tested its performance on the same 25 questions that he documented in February, and it performed far better. There were fewer instances of making stuff up; it also did much better on the math assignments, says Korinek.
Since ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen far more quickly than in past technological revolutions, says Korinek. “I think we may see a greater boost to productivity by the end of the year—certainly by 2024,” he says.
What’s more, he says, in the longer term, the way the AI models can make researchers like himself more productive has the potential to drive technological progress.
That potential of large language models is already turning up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.
“He failed completely,” jokes Smit.
It turns out that after being fine-tuned for a few minutes with a few relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on the structure.
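The article doesn’t detail Jablonka’s setup, but the usual recipe for this kind of fine-tune is to phrase each labeled compound as a prompt/completion pair. Here is a hypothetical sketch of the data preparation using the common JSONL convention; the compounds, labels, and file name are invented, and the exact upload and training API varies by provider and version.

```python
import json

# A hypothetical handful of labeled examples -- in practice these would
# come from a chemistry dataset, not be hard-coded.
training_examples = [
    {"compound": "sodium chloride", "prop": "water solubility", "label": "high"},
    {"compound": "naphthalene",     "prop": "water solubility", "label": "low"},
]

# Write prompt/completion pairs as JSONL, the format most LLM
# fine-tuning endpoints accept in some variation.
with open("solubility_finetune.jsonl", "w") as f:
    for ex in training_examples:
        record = {
            "prompt": f"What is the {ex['prop']} of {ex['compound']}?",
            "completion": f" {ex['label']}",
        }
        f.write(json.dumps(record) + "\n")
```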
Newly revealed coronavirus data has reignited a debate over the virus’s origins
Published 1 day ago on 03/24/2023 by Drew Simpson
Data collected in 2020—and kept from public view since then—potentially adds weight to the theory that the virus spilled over from animals. It highlights a potential suspect: the raccoon dog. But exactly how much weight it adds depends on who you ask. New analyses of the data have only reignited the debate, and stirred up some serious drama.
The current ruckus starts with a study shared by Chinese scientists back in February 2022. In a preprint (a scientific paper that has not yet been peer-reviewed or published in a journal), George Gao of the Chinese Center for Disease Control and Prevention (CCDC) and his colleagues described how they collected and analyzed 1,380 samples from the Huanan Seafood Market.
These samples were collected between January and March 2020, just after the market was closed. At the time, the team wrote that they found the coronavirus only in samples that also contained genetic material from people.
There were a lot of animals on sale at this market, which sold more than just seafood. The Gao paper features a long list, including chickens, ducks, geese, pheasants, doves, deer, badgers, rabbits, bamboo rats, porcupines, hedgehogs, crocodiles, snakes, and salamanders. And that list is not exhaustive—there are reports of other animals being traded there, including raccoon dogs. We’ll come back to them later.
But Gao and his colleagues reported that they didn’t find the coronavirus in any of the 18 species of animal they looked at. They suggested that it was humans who most likely brought the virus to the market, which ended up being the first known epicenter of the outbreak.
Fast-forward to March 2023. On March 4, Florence Débarre, an evolutionary biologist at Sorbonne University in Paris, spotted some data that had been uploaded to GISAID, a website that allows researchers to share genetic data to help them study and track viruses that cause infectious diseases. The data appeared to have been uploaded in June 2022. It seemed to have been collected by Gao and his colleagues for their February 2022 study, although it had not been included in the actual paper.
Fostering innovation through a culture of curiosity
Published 2 days ago on 03/24/2023 by Drew Simpson
And so I think a big part of it, as a company, is that setting these ambitious goals forces us to ask: if we want to be number one, if we want to be top tier in these areas, if we want to continue to generate results, how do we get there using technology? That really forces us to throw away our assumptions, because you can’t follow someone to become number one. We understand that the path to get there is, of course, through technology and the software and the enablement and the investment, but it really is about becoming goal-oriented. And when we look at how we create the infrastructure on the technology side to support these ambitious goals, we ourselves have to be ambitious in turn, because if we bring a solution that’s a me-too, a copycat, something without differentiation, that’s not going to propel us to be, for example, a top-10 supply chain. It just doesn’t pass muster.
So I think at the top level, it starts with the business ambition. From there, we can organize ourselves at the intersection of the business ambition and the technology trends, have those very rich discussions, and be the glue that holds together so many moving pieces, because we’re constantly scanning the technology landscape for advancing and emerging technologies that can become part of achieving that mission. That’s how we set it up on the process side. As an example, one thing that is also innovation, but doesn’t get talked about as much and I think will be very relevant for the community out there, is how we stay on top of data sovereignty and data localization questions. A lot of work needs to go into rethinking what your cloud (private, public, edge, on-premise) looks like going forward, so that we can remain cutting edge and competitive in each of our markets while meeting the increasing guidance we’re getting from countries and regulatory agencies about data localization and data sovereignty.
And so in our case, as a global company that is listed in Hong Kong and operates all around the world, we’ve had to think deeply about the architecture of our solutions and apply innovation in how we architect for longer-term growth in a world that is increasingly uncertain. So there are a lot of drivers in some sense: our corporate aspirations, and an operating environment that has continued to carry a lot of uncertainty. That really forces us to take a very sharp lens to what cutting edge looks like. And it’s not always the bright and shiny technology. Cutting edge could mean going to the executive committee and saying, “Hey, we’re going to face a challenge around compliance. Here’s the innovation we’re bringing to our architecture so that we can handle not just the next country or regulatory regime we have to comply with, but the next 10, the next 50.”
Laurel: Well, and to follow up with a bit more of a specific example, how does R&D help improve manufacturing and the supply chain with software as well as emerging technologies like artificial intelligence and the industrial metaverse?
Art: Oh, I love this one, because it’s a perfect example of how much is happening in the technology industry and, back to the earlier point, of applied curiosity: how can we try this? Artificial intelligence and the industrial metaverse specifically go really well together with Lenovo’s natural strengths. Our heritage is as a leading global manufacturer, and now, as we also transition to being services-led, we’re applying AI and technologies like the metaverse to our factories. It’s almost easier to talk about the inverse, Laurel, because, and I remember this very clearly when we mapped it out, there is no area within the supply chain and manufacturing that is not touched by these technologies. And it’s very timely that we’re having this discussion: Lenovo was recognized just a few weeks ago at the World Economic Forum as part of the Global Lighthouse Network for leading manufacturing.
And that recognition is based very much on applying AI and metaverse technologies and embedding them into every aspect of what we do in our own supply chain and manufacturing network. To pick a couple of examples on the quality side within the factory: we’ve implemented digital twin technology for designing to cost and designing to quality in ways that are much faster than before, because we can prototype in the digital world, where iteration is faster and cheaper and errors are corrected earlier and more cheaply. So we’re able to iterate on our products much more quickly and achieve better quality. We’ve adopted advanced computer vision so that we can identify quality defects earlier. And we’ve implemented industrial-metaverse technologies so that we can train our factory workers more effectively using aspects of AR and VR.
We’ve also improved production planning, one of the really important parts of running an effective manufacturing operation, because many thousands of parts are coming in, and I think everyone who’s listening knows how much uncertainty and volatility there has been in supply chains. How do you take such a multi-thousand-dimensional planning problem and optimize it? That’s where we apply smart production-planning models to keep our factories fully running so that we can meet our customer delivery dates. I don’t want to drone on, but the answer really is: there is no place, whether logistics, planning, production, scheduling, or shipping, where we didn’t find AI and metaverse use cases that significantly enhanced the way we run our operations. And again, we’re doing this internally, and that’s why we’re very proud that the World Economic Forum recognized us as a Global Lighthouse Network manufacturing member.
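Lenovo’s actual planning models aren’t described here, but the shape of the problem he sketches, allocating scarce parts across products against delivery commitments, is classically a linear program. A toy example with invented numbers, using SciPy:

```python
from scipy.optimize import linprog

# Toy production plan: two products sharing three scarce parts.
# All numbers are invented for illustration.
profit = [-45.0, -60.0]        # linprog minimizes, so negate per-unit profit

# parts_needed[i][j] = units of part i consumed per unit of product j
parts_needed = [
    [2, 1],  # part A
    [1, 3],  # part B
    [1, 1],  # part C
]
parts_on_hand = [100, 90, 60]  # inventory limit for each part

result = linprog(
    c=profit,
    A_ub=parts_needed,
    b_ub=parts_on_hand,
    bounds=[(0, None), (0, None)],  # cannot build negative units
)
print("build plan:", result.x)  # units of each product to schedule
```

A real planner would add integer constraints, lead times, and demand forecasts, which is where the “multi-thousand-dimensional” character comes from.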
Laurel: It’s certainly important, especially as we bring together computing and IT environments of increasing complexity. So as businesses continue to transform and accelerate their transformations, how do you build resiliency throughout Lenovo? Because that is certainly another foundational characteristic that’s so necessary.