Evolving to a more equitable AI

The pandemic that has raged across the globe over the past year has shone a cold, hard light on many things—the varied levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it’s critical to resolve the competing priorities of protecting the public’s health equitably and ensuring privacy.

The extended crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It’s now more critical than ever that companies, governments, and society exercise caution in applying technology and handling personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how readily adaptive technologies intersect with people and social institutions in potentially risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows between all of those parties will get renegotiated in a new social data contract.”

AI in action

As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatment, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.

While advanced data analytics tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but that data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected—including black, brown, and indigenous people—and neither do some of the diagnostic advances built on it, says Schlesinger.

For example, biometric wearables like Fitbit or Apple Watch demonstrate promise in their ability to detect potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses rely on data sets that are often flawed or limited and can introduce bias or unfairness that disproportionately affects vulnerable people and communities.

“There is some research that shows the green LED light has a more difficult time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it might not do an equally good job at catching covid symptoms for those with black and brown skin.”

AI has shown greater efficacy in helping analyze enormous data sets. A team at the Viterbi School of Engineering at the University of Southern California developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to 11 that were most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.

Other researchers from Viterbi are applying AI to decipher cultural codes more accurately and better understand the social norms that guide ethnic and racial group behavior. That can have a significant impact on how a certain population fares during a crisis like the pandemic, owing to religious ceremonies, traditions, and other social mores that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on Moral Foundations Theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, helping inform individual and group behavior.

“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And by doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level—what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Assuaging ethical concerns

It’s critical to interrogate the assumptions about collected data and AI processes, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI, which is to look at all the places where humans are biased.”

Part of that challenge is performing a critical examination of the data sets that inform AI systems. It’s essential to understand the data sources and the composition of the data, and to answer such questions as: How is the data made up? Does it encompass a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness?
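To make that kind of audit concrete, the short sketch below shows one way to check a training set’s composition and compare a model’s outcomes across demographic groups. It is a minimal, hypothetical example: the column names, the 10% representation threshold, and the toy data are assumptions for illustration, not details of any system described in this article.

```python
# Minimal sketch of a data-set composition and fairness check.
# Assumes a pandas DataFrame with a demographic column ("group") and a
# binary model-output column ("predicted_positive"); both names, and the
# 10% representation threshold, are illustrative placeholders.
import pandas as pd


def audit_representation(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Report the share of records contributed by each demographic group."""
    shares = df[group_col].value_counts(normalize=True).sort_index()
    for group, share in shares.items():
        flag = "LOW" if share < 0.10 else "ok"
        print(f"{group}: {share:.1%} of records ({flag})")
    return shares


def positive_rate_gap(df: pd.DataFrame,
                      group_col: str = "group",
                      outcome_col: str = "predicted_positive") -> float:
    """Largest gap in positive-prediction rates between any two groups,
    a rough demographic-parity-style check."""
    rates = df.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    print(rates.to_string())
    print(f"max gap between groups: {gap:.2%}")
    return gap


if __name__ == "__main__":
    # Toy data: group "c" is under-represented and has a lower positive rate.
    toy = pd.DataFrame({
        "group": ["a"] * 80 + ["b"] * 15 + ["c"] * 5,
        "predicted_positive": [1] * 40 + [0] * 40   # group a: 50% positive
                            + [1] * 5 + [0] * 10    # group b: ~33% positive
                            + [1] * 1 + [0] * 4,    # group c: 20% positive
    })
    audit_representation(toy)
    positive_rate_gap(toy)
```

Checks like these don’t make a data set fair on their own, but they surface the representation gaps and outcome disparities that the questions above are meant to uncover.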

As people go back to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughs or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor compliance with hand-washing procedures, physical-distancing rules, and mask requirements.

Such monitoring and analysis systems not only have technical-accuracy challenges but pose core risks to human rights, privacy, security, and trust. The impetus for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace movements of people who may have contracted or been exposed to covid-19 and establish virus transmission chains.

“The first question that needs to be answered is not just can we do this—but should we?” says Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as a benefit for the greater good. We should have a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”

What the future looks like

As society returns to something approaching normal, it’s time to fundamentally re-evaluate the relationship with data and establish new norms for collecting data, as well as the appropriate use—and potential misuse—of data. When building and deploying AI, technologists will continue to make those necessary assumptions about data and the processes, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately presented? How can citizens’ and consumers’ privacy be preserved?

As AI is more widely deployed, it’s essential to consider how to engender trust as well. Using AI to augment human decision-making rather than entirely replace human input is one approach.

“There will be more questions about the role AI should play in society, its relationship with human beings, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” says Schlesinger. “There are certain areas where AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. In places where AI doesn’t replace humans, but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there needs to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would love to believe AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”
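A minimal sketch of what that human-in-the-loop pattern can look like in practice appears below. It is illustrative only: the confidence threshold, the regulated-category list, and the stand-in scoring function are assumptions, not a description of any specific product or deployment mentioned here.

```python
# Human-in-the-loop sketch: the model proposes, a person decides whenever
# confidence is low or the case falls into a regulated category.
# All names and values here (Decision, score_case, the 0.90 threshold,
# REVIEW_QUEUE) are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Decision:
    case_id: str
    category: str
    recommendation: str
    confidence: float
    needs_human_review: bool = False


CONFIDENCE_THRESHOLD = 0.90
REGULATED_CATEGORIES = {"health care", "banking", "finance"}
REVIEW_QUEUE: list[Decision] = []


def score_case(case_id: str, category: str) -> Decision:
    """Stand-in for a real model call; returns a canned recommendation."""
    return Decision(case_id, category, recommendation="approve", confidence=0.72)


def route(decision: Decision) -> Decision:
    """Auto-apply only high-confidence, non-regulated decisions;
    everything else waits for a clinician, underwriter, or other reviewer."""
    if (decision.category in REGULATED_CATEGORIES
            or decision.confidence < CONFIDENCE_THRESHOLD):
        decision.needs_human_review = True
        REVIEW_QUEUE.append(decision)
    return decision


if __name__ == "__main__":
    result = route(score_case("case-001", "health care"))
    print(result.needs_human_review, len(REVIEW_QUEUE))  # True 1
```

The point of a gate like this is not the specific threshold but the design choice: the system augments a person’s judgment and records which decisions still require one.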

It’s critical that the data collected and created by AI minimizes inequity rather than exacerbating it. There must be a balance between finding ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

The hunter-gatherer groups at the heart of a microbiome gold rush

The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.


The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan

Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the communities of microbes we host have been associated with a wide range of diseases, from arthritis to Alzheimer’s.

Some of those lost microbes might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost.

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

Navigating a shifting customer-engagement landscape with generative AI

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales, as well as customer operations. Yet, despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the time required to formulate a strategy, conduct the necessary training across various teams and functions, and identify the areas where generative AI can add value. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to create a claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

All that, however, would not be possible without past claims history, repair costs, transaction records, and the other data sets needed to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another thing. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”
