Tech

Evolving to a more equitable AI

The pandemic that has raged across the globe over the past year has shone a cold, hard light on many things—the varied levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it’s critical to resolve the competing priorities of protecting the public’s health equitably while ensuring privacy.

The extended crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It’s now more critical than ever that companies, governments, and society exercise caution in applying technology and handling personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how adaptive technologies are prone to intersect with humans and social institutions in potentially risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows between all of those parties will get renegotiated in a new social data contract.”

AI in action

As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatment, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.

While advanced data analytics tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but that data doesn’t necessarily represent populations that have been disproportionately and negatively affected—including Black, brown, and Indigenous people—and neither do some of the diagnostic advances built on it, says Schlesinger.

For example, biometric wearables like Fitbit or Apple Watch show promise in detecting potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses rely on often flawed or limited data sets and can introduce bias or unfairness that disproportionately affects vulnerable people and communities.
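To make the wearable example concrete, here is a minimal sketch of the kind of personal-baseline anomaly detection such a device could use to flag unusual vital signs; the function, its threshold, and the sample readings are illustrative assumptions, not Fitbit’s or Apple’s actual method.

```python
from statistics import mean, stdev

def flag_anomalies(readings, baseline_days=7, threshold=2.0):
    """Flag daily readings (e.g., resting heart rate in bpm) that deviate
    from a personal baseline by more than `threshold` standard deviations.
    The first `baseline_days` values establish the baseline; the rest are
    scored against it. Purely illustrative parameters."""
    baseline = readings[:baseline_days]
    mu, sigma = mean(baseline), stdev(baseline)
    flags = []
    for day, value in enumerate(readings[baseline_days:], start=baseline_days):
        z = (value - mu) / sigma if sigma else 0.0
        flags.append((day, value, abs(z) > threshold))  # (day index, reading, flagged?)
    return flags

# A week of normal resting heart rates, then a sudden spike gets flagged:
flag_anomalies([62, 61, 63, 62, 60, 61, 62, 75])
```

The article’s equity concern carries straight through to a scheme like this: if the underlying sensor reads less accurately on darker skin tones, the baseline and every flag derived from it inherit that error, so algorithmic correctness alone does not produce equitable outcomes.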

“There is some research that shows the green LED light has a more difficult time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it might not do an equally good job at catching covid symptoms for those with black and brown skin.”

AI has shown greater efficacy in helping analyze enormous data sets. A team at the Viterbi School of Engineering at the University of Southern California developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to 11 that were most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.

Other researchers from Viterbi are applying AI to decipher cultural codes more accurately and better understand the social norms that guide ethnic and racial group behavior. That can have a significant impact on how a certain population fares during a crisis like the pandemic, owing to religious ceremonies, traditions, and other social mores that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on Moral Foundations Theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, helping inform individual and group behavior.

“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And by doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level—what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Assuaging ethical concerns

It’s critical to interrogate the assumptions about collected data and AI processes, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI, which is to look at all the places where humans are biased.”

Part of that challenge is performing a critical examination of the data sets that inform AI systems. It’s essential to understand the data sources and the composition of the data, and to answer such questions as: How is the data made up? Does it encompass a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness?
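One of those questions, whether the data encompasses a diverse array of stakeholders, can at least partly be checked mechanically. The sketch below, using hypothetical group names and counts, compares each group’s share of a data set against its share of a reference population:

```python
def representation_gaps(dataset_counts, population_shares):
    """Compare each group's share of a data set against its share of the
    reference population. Returns {group: (dataset_share, population_share,
    gap)}; a negative gap means the group is underrepresented. Group names
    and numbers here are hypothetical."""
    total = sum(dataset_counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        ds_share = dataset_counts.get(group, 0) / total
        report[group] = (round(ds_share, 3), pop_share, round(ds_share - pop_share, 3))
    return report

# Example: group_b makes up 40% of the population but only 20% of the data.
representation_gaps({"group_a": 80, "group_b": 20},
                    {"group_a": 0.6, "group_b": 0.4})
```

A check like this only surfaces sampling skew; it says nothing about label quality or measurement bias, which need the kind of human scrutiny Schlesinger describes.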

As people go back to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughs or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, physical distancing regulations, and mask requirements.

Such monitoring and analysis systems not only have technical-accuracy challenges but pose core risks to human rights, privacy, security, and trust. The impetus for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace movements of people who may have contracted or been exposed to covid-19 and establish virus transmission chains.

“The first question that needs to be answered is not just can we do this—but should we?” says Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as a benefit for the greater good. We should have a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”

What the future looks like

As society returns to something approaching normal, it’s time to fundamentally re-evaluate the relationship with data and establish new norms for collecting data, as well as the appropriate use—and potential misuse—of data. When building and deploying AI, technologists will continue to make those necessary assumptions about data and the processes, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately presented? How can citizens’ and consumers’ privacy be preserved?

As AI is more widely deployed, it’s essential to consider how to engender trust as well. Using AI to augment human decision-making rather than entirely replace human input is one approach.

“There will be more questions about the role AI should play in society, its relationship with human beings, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” says Schlesinger. “There are certain areas where AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. In places where AI doesn’t replace humans, but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there needs to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would love to believe AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”

It’s critical for data collected and created by AI to not exacerbate but minimize inequity. There must be a balance between finding ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

A pro-China online influence campaign is targeting the rare-earths industry

China has come to dominate the rare-earths market in recent years; by 2017 the country produced over 80% of the world’s supply. Beijing achieved this by pouring resources into the study and mining of rare-earth elements for decades, building up six big state-owned firms and relaxing environmental regulations to enable low-cost, high-pollution extraction. The country rapidly increased rare-earth exports in the 1990s, a sudden rush that bankrupted international rivals. Further development of rare-earth industries is a strategic goal under Beijing’s Made in China 2025 strategy.

The country has demonstrated its dominance several times, most notably by stopping all shipments of the resources to Japan in 2010 during a maritime dispute. State media have warned that China could do the same to the United States.

The US and other Western nations have seen this monopoly as a critical weakness for their side. As a result, they have spent billions in recent years to get better at finding, mining, and processing the minerals. 

In early June 2022, the Canadian mining company Appia announced it had found new rare-earth resources in Saskatchewan. Within weeks, the American firm USA Rare Earth announced a new processing facility in Oklahoma. 

Dragonbridge, the pro-China influence campaign, engaged in similar activity in 2021, soon after the American military signed an agreement with the Australian mining firm Lynas, the largest rare-earths company outside China, to build a processing plant in Texas. 

The U.S. only has 60,000 charging stations for EVs. Here’s where they all are.

The infrastructure bill that passed in November 2021 earmarked $7.5 billion for President Biden’s goal of having 500,000 chargers (individual plugs, not stations) around the nation. In the best case, Michalek envisions a public-private collaboration to build a robust national charging network. The Biden administration has pledged to install plugs throughout rural areas, while companies constructing charging stations across America will have a strong incentive to fill in the country’s biggest cities and most popular thoroughfares. After all, companies like Electrify America, EVgo, and ChargePoint charge customers per kilowatt-hour of energy they use, much like utilities.
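As a rough illustration of that per-kilowatt-hour pricing model, the sketch below estimates what one charging session costs a driver; the battery size, rate, and efficiency figure are made-up example values, and real network tariffs vary by provider, state, and charging speed.

```python
def session_cost(battery_kwh, start_soc, end_soc, price_per_kwh,
                 charging_efficiency=0.9):
    """Estimate what a driver pays for one session under per-kWh pricing.
    Energy drawn from the charger exceeds energy stored in the battery
    because of charging losses, so the billed amount is grossed up by the
    (assumed) charging efficiency."""
    energy_stored = battery_kwh * (end_soc - start_soc)   # kWh added to pack
    energy_billed = energy_stored / charging_efficiency   # kWh the network meters
    return round(energy_billed * price_per_kwh, 2)

# Hypothetical example: a 75 kWh pack charged from 20% to 80% at $0.43/kWh.
session_cost(75, 0.20, 0.80, 0.43)
```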

Most new electric vehicles promise at least 250 miles on a full charge, and that number should keep ticking up. The farther cars can go without charging, the fewer anxious drivers will be stuck in lines waiting for a charging space to open. But make no mistake, Michalek says: an electric-car country needs a plethora of plugs, and soon.

We need smarter cities, not “smart cities”

The term “smart cities” originated as a marketing strategy for large IT vendors. It has now become synonymous with urban uses of technology, particularly advanced and emerging technologies. But cities are more than 5G, big data, driverless vehicles, and AI. They are crucial drivers of opportunity, prosperity, and progress. They support those displaced by war and crisis and generate 80% of global GDP. More than 68% of the world’s population will live in cities by 2050—2.5 billion more people than do now. And with over 90% of urban areas located on coasts, cities are on the front lines of climate change.

A focus on building “smart cities” risks turning cities into technology projects. We talk about “users” rather than people. Monthly and “daily active” numbers instead of residents. Stakeholders and subscribers instead of citizens. This also risks a transactional—and limiting—approach to city improvement, focusing on immediate returns on investment or achievements that can be distilled into KPIs. 

Truly smart cities recognize the ambiguity of lives and livelihoods, and they are driven by outcomes beyond the implementation of “solutions.” They are defined by their residents’ talents, relationships, and sense of ownership—not by the technology that is deployed there. 

This more expansive concept of what a smart city is encompasses a wide range of urban innovations. Singapore, which is exploring high-tech approaches such as drone deliveries and virtual-reality modeling, is one type of smart city. Curitiba, Brazil—a pioneer of the bus rapid transit system—is another. Harare, the capital of Zimbabwe, with its passively cooled shopping center designed in 1996, is a smart city, as are the “sponge cities” across China that use nature-based solutions to manage rainfall and floodwater.

Where technology can play a role, it must be applied thoughtfully and holistically—taking into account the needs, realities, and aspirations of city residents. Guatemala City, in collaboration with our country office team at the UN Development Programme, is using this approach to improve how city infrastructure—including parks and lighting—is managed. The city is standardizing materials and designs to reduce costs and labor, and streamlining approval and allocation processes to increase the speed and quality of repairs and maintenance. Everything is driven by the needs of its citizens. Elsewhere in Latin America, cities are going beyond quantitative variables to take into account well-being and other nuanced outcomes. 

In her 1961 book The Death and Life of Great American Cities, Jane Jacobs, the pioneering American urbanist, discussed the importance of sidewalks. In the context of the city, they are conduits for adventure, social interaction, and unexpected encounters—what Jacobs termed the “sidewalk ballet.” Just as literal sidewalks are crucial to the urban experience, so is the larger idea of connection between elements.

However, too often we see “smart cities” focus on discrete deployments of technology rather than this connective tissue. We end up with cities defined by “use cases” or “platforms.” Practically speaking, the vision of a tech-centric city is conceptually, financially, and logistically out of reach for many places. This can lead officials and innovators to dismiss the city’s real and substantial potential to reduce poverty while enhancing inclusion and sustainability.

In our work at the UN Development Programme, we focus on the interplay between different components of a truly smart city—the community, the local government, and the private sector. We also explore the different assets made available by this broader definition: high-tech innovations, yes, but also low-cost, low-tech innovations and nature-based solutions. Big data, but also the qualitative, richer detail behind the data points. The connections and “sidewalks”—not just the use cases or pilot programs. We see our work as an attempt to start redefining smart cities and increasing the size, scope, and usefulness of our urban development tool kit.

We continue to explore how digital technology might enhance cities—for example, we are collaborating with major e-commerce platforms across Africa that are transforming urban service delivery. But we are also shaping this broader tool kit to tackle the urban impacts of climate change, biodiversity loss, and pollution. 

The UrbanShift initiative, led by the UN Environment Programme in partnership with UNDP and many others, is working with cities to promote nature-based solutions, low-carbon public transport, low-emission zones, integrated waste management, and more. This approach focuses not just on implementation, but also on policies and guiderails. The UNDP Smart Urban Innovations Handbook aims to help policymakers and urban innovators explore how they might embed “smartness” in any city.

Our work at the United Nations is driven by the Sustainable Development Goals: 17 essential, ambitious, and urgent global targets that aim to shape a better world by 2030. Truly smart cities would play a role in meeting all 17 SDGs, from tackling poverty and inequality to protecting and improving biodiversity. 

Coordinating and implementing the complex efforts required to reach these goals is far more difficult than deploying the latest app or installing another piece of smart street furniture. But we must move beyond the sales pitches and explore how our cities can be true platforms—not just technological ones—for inclusive and sustainable development. The well-being of the billions who call the world’s cities home depends on it.

Riad Meddeb is interim director of the UNDP Global Centre for Technology, Innovation, and Sustainable Development. Calum Handforth is an advisor for digitalization, digital health, and smart cities at the UNDP Global Centre.

Copyright © 2021 Seminole Press.