
We need to design distrust into AI systems to make them safer



It’s interesting that you’re talking about how, in these kinds of scenarios, you have to actively design distrust into the system to make it safer.

Yes, that’s what you have to do. We’re actually running an experiment right now around the idea of denial of service. We don’t have results yet, and we’re wrestling with some ethical concerns, because once we talk about it and publish the results, we’ll have to explain why sometimes you may not want to give AI the ability to deny a service at all. How do you take a service away from someone who really needs it?

But here’s an example with the Tesla distrust thing. Denial of service would be: I create a profile of your trust, which I can do based on how many times you deactivated or disengaged from holding the wheel. Given those profiles of disengagement, I can then model at what point you are fully in this trust state. We have done this, not with Tesla data, but our own data. And at a certain point, the next time you come into the car, you’d get a denial of service. You do not have access to the system for X time period.
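To make that mechanism concrete, here is a minimal sketch of what a disengagement-based trust profile and denial-of-service trigger might look like. Everything in it is a hypothetical reconstruction for illustration: the thresholds, the session window, the scoring function, and the TrustProfile class are assumptions, not details from the actual study.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical thresholds, purely illustrative.
OVERTRUST_THRESHOLD = 0.8             # score above which we assume full trust
LOCKOUT_PERIOD = timedelta(hours=24)  # the "X time period" of denied service

@dataclass
class TrustProfile:
    """Tracks how often a driver disengages (takes the wheel back)."""
    rates: list = field(default_factory=list)  # disengagements per hour, per session
    locked_until: datetime | None = None

    def record_session(self, disengagements: int, minutes_driven: float) -> None:
        # A falling disengagement rate is read here as drifting into overtrust.
        hours = max(minutes_driven / 60.0, 0.1)
        self.rates.append(disengagements / hours)

    def overtrust_score(self, window: int = 3) -> float:
        # Approaches 1.0 as the recent disengagement rate approaches zero.
        recent = self.rates[-window:]
        if not recent:
            return 0.0
        return 1.0 / (1.0 + sum(recent) / len(recent))

    def check_access(self, now: datetime) -> bool:
        # Denial of service: once the profile says "fully trusting,"
        # withhold the automation for a fixed period.
        if self.locked_until and now < self.locked_until:
            return False
        if self.overtrust_score() > OVERTRUST_THRESHOLD:
            self.locked_until = now + LOCKOUT_PERIOD
            return False
        return True

# A driver who gradually stops taking the wheel back gets locked out.
profile = TrustProfile()
for d in [5, 2, 0, 0, 0]:  # disengagements across five one-hour sessions
    profile.record_session(d, minutes_driven=60)
print(profile.check_access(datetime.now()))  # False: service denied for 24 hours
```

Under this sketch, a driver whose recent disengagement rate falls to zero would simply find the automation unavailable at the start of the next session.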

It’s almost like when you punish a teenager by taking away their phone. You know that teenagers will stop doing whatever it is you didn’t want them to do if you link the punishment to their communication modality.

What are some other mechanisms that you’ve explored to enhance distrust in systems?

The other methodology we’ve explored is roughly called explainable AI, where the system provides an explanation of some of its risks or uncertainties. All of these systems have uncertainty; none of them is 100% accurate. And a system knows when it’s uncertain, so it can present that information in a way a human can understand, which gets people to change their behavior.

As an example, say I’m a self-driving car, and I have all my map information, and I know certain intersections are more accident prone than others. As we get close to one of them, I would say, “We’re approaching an intersection where 10 people died last year.” You explain it in a way where it makes someone go, “Oh, wait, maybe I should be more aware.”
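As a toy illustration of that kind of risk-aware explanation, a sketch like the following could turn map statistics and the system’s own uncertainty into plain-language alerts. The intersection data, thresholds, and function names are all invented for this example.

```python
# Hypothetical sketch: turn map risk statistics and the system's own
# uncertainty into plain-language alerts. All data and thresholds are invented.
INTERSECTION_FATALITIES = {("5th Ave", "Main St"): 10}  # deaths last year (made up)

def risk_alert(intersection: tuple[str, str],
               distance_m: float,
               detection_confidence: float) -> str | None:
    """Return a human-readable warning when approaching a risky spot."""
    deaths = INTERSECTION_FATALITIES.get(intersection, 0)
    if distance_m < 200 and deaths > 0:
        return (f"We're approaching an intersection where {deaths} people "
                "died last year. Please stay alert.")
    if detection_confidence < 0.7:
        # Surface the system's own uncertainty instead of hiding it.
        return "I'm less certain about my surroundings here. Please watch the road."
    return None

print(risk_alert(("5th Ave", "Main St"), distance_m=150.0, detection_confidence=0.95))
```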

We’ve already talked about some of your concerns around our tendency to overtrust these systems. What are others? On the flip side, are there also benefits?

The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. If I’m overtrusting these systems, and these systems are making decisions that have different outcomes for different groups of individuals (say, a medical diagnosis system that performs differently for women than for men), we’re now creating systems that amplify the inequities we currently have. That’s a problem. And when you link it to things tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can lead to something you can’t recover from. So we really have to fix it.

The positives are that automated systems are, in general, better than people at these tasks. I think they can be even better, but I personally would rather interact with an AI system in some situations than with certain humans. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate, especially compared with a novice person. It’s a better outcome. It just might be that the outcome isn’t equal.

In addition to your robotics and AI research, you’ve been a huge proponent of increasing diversity in the field throughout your career. You started a program to mentor at-risk junior high girls 20 years ago, which is well before many people were thinking about this issue. Why is that important to you, and why is it also important for the field?

It’s important to me because I can identify times in my life where someone basically provided me access to engineering and computer science. I didn’t even know it was a thing. And that’s really why, later on, I never had a problem with knowing that I could do it. So I always felt it was my responsibility to do the same thing for others that mentors had done for me. As I got older, I also noticed that there were a lot of people in the room who didn’t look like me. So I realized: wait, there’s definitely a problem here, because people just don’t have the role models, they don’t have access, they don’t even know this is a thing.

And it’s important to the field because everyone brings a different set of experiences. Just like I’d been thinking about human-robot interaction before it was even a thing. It wasn’t because I was brilliant. It was because I looked at the problem in a different way. And when I’m talking to someone who has a different viewpoint, it’s like, “Oh, let’s try to combine and figure out the best of both worlds.”

Airbags kill more women and kids than they do men. Why is that? Well, I’m going to say it’s because someone wasn’t in the room to say, “Hey, why don’t we test this on women in the front seat?” There’s a whole set of products that have killed or been hazardous to certain groups of people. And I would claim that if you go back, it’s because there weren’t enough people in the room who could say, “Hey, have you thought about this?” from their own experience, their own environment, and their own community.

How do you hope AI and robotics research will evolve over time? What is your vision for the field?

If you think about coding and programming, pretty much everyone can do it. There are so many organizations now, like Code.org. The resources and tools are there. I would love to have a conversation with a student one day where I ask, “Do you know about AI and machine learning?” and they say, “Dr. H, I’ve been doing that since the third grade!” I want to be shocked like that, because that would be wonderful. Of course, then I’d have to think about what my next job would be, but that’s a whole other story.

But I think when you have the tools with coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solution. That would be my dream.


Why can’t tech fix its gender problem?


From left to right: Gordon Moore, C. Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last.


Not competing in this Olympics, but still contributing to the industry’s success, were the thousands of women who worked in the Valley’s microchip fabrication plants and other manufacturing facilities from the 1960s to the early 1980s. Some were working-class Asian- and Mexican-Americans whose mothers and grandmothers had worked in the orchards and fruit canneries of the prewar Valley. Others were recent migrants from the East and Midwest, white and often college educated, needing income and interested in technical work.

With few other technical jobs available to them in the Valley, women would work for less. The preponderance of women on the lines helped keep the region’s factory wages among the lowest in the country. Women continue to dominate high-tech assembly lines, though now most of the factories are located thousands of miles away. In 1970, one early American-owned Mexican production line employed 600 workers, nearly 90% of whom were female. Half a century later the pattern continued: in 2019, women made up 90% of the workforce in one enormous iPhone assembly plant in India. Female production workers make up 80% of the entire tech workforce of Vietnam. 

Venture: “The Boys Club”

Chipmaking’s fiercely competitive and unusually demanding managerial culture proved to be highly influential, filtering down through the millionaires of the first semiconductor generation as they deployed their wealth and managerial experience in other companies. But venture capital was where semiconductor culture cast its longest shadow. 

The Valley’s original venture capitalists were a tight-knit bunch, mostly young men managing older, much richer men’s money. At first there were so few of them that they’d book a table at a San Francisco restaurant, summoning founders to pitch everyone at once. So many opportunities were flowing it didn’t much matter if a deal went to someone else. Charter members like Silicon Valley venture capitalist Reid Dennis called it “The Group.” Other observers, like journalist John W. Wilson, called it “The Boys Club.”

The men who left the Valley’s first silicon chipmaker, Shockley Semiconductor, to start Fairchild Semiconductor in 1957 were called “the Traitorous Eight.”

Wayne Miller/Magnum Photos

The venture business was expanding by the early 1970s, even though down markets made it a terrible time to raise money. But the firms founded and led by semiconductor veterans during this period became industry-defining ones. Gene Kleiner left Fairchild Semiconductor to cofound Kleiner Perkins, whose long list of hits included Genentech, Sun Microsystems, AOL, Google, and Amazon. Master intimidator Don Valentine founded Sequoia Capital, making early-stage investments in Atari and Apple, and later in Cisco, Google, Instagram, Airbnb, and many others.

Generations: “Pattern recognition”

Silicon Valley venture capitalists left their mark not only by choosing whom to invest in, but by advising and shaping the business sensibility of those they funded. They were more than bankers. They were mentors, professors, and father figures to young, inexperienced men who often knew a lot about technology and nothing about how to start and grow a business. 

“This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise,” Silicon Valley historian Leslie Berlin writes, “is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success.” Tech leaders agree with Berlin’s assessment. Apple cofounder Steve Jobs—who learned most of what he knew about business from the men of the semiconductor industry—likened it to passing a baton in a relay race.


Predicting the climate bill’s effects is harder than you might think



Human decision-making can also cause models and reality to misalign. “People don’t necessarily always do what is, on paper, the most economic,” says Robbie Orvis, who leads the energy policy solutions program at Energy Innovation.

This is a common issue for consumer tax credits, like those for electric vehicles or home energy efficiency upgrades. Often people don’t have the information or funds needed to take advantage of tax credits.

Likewise, there are no assurances that credits in the power sector will have the impact that modelers expect. Finding sites for new power projects and getting permits for them can be challenging, potentially derailing progress. Some of this friction is factored into the models, Orvis says. But there’s still potential for more challenges than modelers expect.

Not enough

Putting too much stock in results from models can be problematic, says James Bushnell, an economist at the University of California, Davis. For one thing, models could overestimate how much behavior change is attributable to the tax credits. Some of the projects claiming the credits would probably have been built anyway, Bushnell says, especially solar and wind installations, which are already becoming more widespread and cheaper to build.

Still, whether or not the bill meets the modelers’ expectations, it’s a step forward in providing climate-friendly incentives: it replaces solar- and wind-specific credits with broader clean-energy credits, giving developers more flexibility in choosing which technologies to deploy.

Another positive of the legislation is its long-term investments, whose potential impacts aren’t fully captured in the economic models. The bill includes money for research and development of new technologies like direct air capture and clean hydrogen, which are still unproven but could have major impacts on emissions in the coming decades if they prove to be efficient and practical.

Whatever the effectiveness of the Inflation Reduction Act, however, it’s clear that more climate action is still needed to meet emissions goals in 2030 and beyond. Indeed, even if the predictions of the modelers are correct, the bill is still not sufficient for the US to meet its stated goals under the Paris agreement of cutting emissions to half of 2005 levels by 2030.

The path ahead for US climate action isn’t as certain as some might wish it were. But with the Inflation Reduction Act, the country has taken a big step. Exactly how big is still an open question. 


China has censored a top health information platform



The suspension has been met with a gleeful reaction from nationalist bloggers, who accuse DXY of receiving foreign funding, bashing traditional Chinese medicine, and criticizing China’s health-care system.

DXY is one of the front-runners in China’s digital health startup scene. It hosts the largest online community Chinese doctors use to discuss professional topics and socialize. It also provides a medical news service for a general audience, and it is widely seen as the most influential popular science publication in health care. 

“I think no one, as long as they are somewhat related to the medical profession, doesn’t follow these accounts [of DXY],” says Zhao Yingxi, a global health researcher and PhD candidate at Oxford University, who says he followed DXY’s accounts on WeChat too. 

But in China’s increasingly polarized social media environment, health care is becoming a target for controversy. The swift conclusion that DXY’s suspension was triggered by its foreign ties and its critical reporting illustrates how politicized health topics have become.

Since its launch in 2000, DXY has raised five rounds of funding from prominent companies like Tencent and from venture capital firms. But even that commercial success has caused it trouble this week. One of its major investors, Trustbridge Partners, raises funds from sources like Columbia University’s endowment and Singapore’s state holding company, Temasek. After DXY’s accounts were suspended, bloggers used that fact to try to back up their claim that DXY has been under foreign influence all along.

Part of the reason the suspension is so shocking is that DXY is widely seen as one of the most trusted online sources for health education in China. During the early days of the covid-19 pandemic, it compiled case numbers and published a case map that was updated every day, becoming the go-to source for Chinese people seeking to follow covid trends in the country. DXY also made its name by taking down several high-profile fraudulent health products in China.

It also hasn’t shied away from sensitive issues. For example, on the International Day Against Homophobia, Transphobia, and Biphobia in 2019, it published the accounts of several victims of conversion therapy and argued that the practice is not backed by medical consensus. 

“The article put survivors’ voices front and center and didn’t tiptoe around the disturbing reality that conversion therapy is still prevalent and even pushed by highly ranked public hospitals and academics,” says Darius Longarino, a senior fellow at Yale Law School’s Paul Tsai China Center. 

