Big Tech’s guide to talking about AI ethics

AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in.

accountability (n) – The act of holding someone else responsible for the consequences when your AI system fails.

accuracy (n) – Technical correctness. The most important measure of success in evaluating an AI model’s performance. See validation.

adversary (n) – A lone engineer capable of disrupting your powerful revenue-generating AI system. See robustness, security.

alignment (n) – The challenge of designing AI systems that do what we tell them to and value what we value. Purposely abstract. Avoid using real examples of harmful unintended consequences. See safety.

artificial general intelligence (ph) – A hypothetical AI god that’s probably far off in the future but also maybe imminent. Can be really good or really bad, whichever is more rhetorically useful. Obviously you’re building the good one. Which is expensive. Therefore, you need more money. See long-term risks.

audit (n) – A review that you pay someone else to do of your company or AI system so that you appear more transparent without needing to change anything. See impact assessment.

augment (v) – To increase the productivity of white-collar workers. Side effect: automating away blue-collar jobs. Sad but inevitable.

beneficial (adj) – A blanket descriptor for what you are trying to build. Conveniently ill-defined. See value.

by design (ph) – As in “fairness by design” or “accountability by design.” A phrase to signal that you are thinking hard about important things from the beginning.

compliance (n) – The act of following the law. Anything that isn’t illegal goes.

data labelers (ph) – The people who allegedly exist behind Amazon’s Mechanical Turk interface to do data cleaning work for cheap. Unsure who they are. Never met them.

democratize (v) – To scale a technology at all costs. A justification for concentrating resources. See scale.

diversity, equity, and inclusion (ph) – The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them.

efficiency (n) – The use of less data, memory, staff, or energy to build an AI system.

ethics board (ph) – A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) – A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI.

explainable (adj) – For describing an AI system that you, the developer, and the user can understand. Much harder to achieve for the people it’s used on. Probably not worth the effort. See interpretable.

fairness (n) – A complicated notion of impartiality used to describe unbiased algorithms. Can be defined in dozens of ways based on your preference.

for good (ph) – As in “AI for good” or “data for good.” An initiative completely tangential to your core business that helps you generate good publicity.

foresight (n) – The ability to peer into the future. Basically impossible, and thus a perfectly reasonable explanation for why you can’t rid your AI system of unintended consequences.

framework (n) – A set of guidelines for making decisions. A good way to appear thoughtful and measured while delaying actual decision-making.

generalizable (adj) – The sign of a good AI model. One that continues to work under changing conditions. See real world.

governance (n) – Bureaucracy.

human-centered design (ph) – A process that involves using “personas” to imagine what an average user might want from your AI system. May involve soliciting feedback from actual users. Only if there’s time. See stakeholders.

human in the loop (ph) – Any person who is part of an AI system. Responsibilities range from faking the system’s capabilities to warding off accusations of automation.

impact assessment (ph) – A review that you do yourself of your company or AI system to show your willingness to consider its downsides without changing anything. See audit.

integrity (n) – Issues that undermine the technical performance of your model or your company’s ability to scale. Not to be confused with issues that are bad for society. Not to be confused with honesty.

interdisciplinary (adj) – Term used of any team or project involving people who do not code: user researchers, product managers, moral philosophers. Especially moral philosophers.

interpretable (adj) – Description of an AI system whose computation you, the developer, can follow step by step to understand how it arrived at its answer. Actually probably just linear regression. AI sounds better.

long-term risks (n) – Bad things that could have catastrophic effects in the far-off future. Probably will never happen, but more important to study and avoid than the immediate harms of existing AI systems.

partners (n) – Other elite groups who share your worldview and can work with you to maintain the status quo. See stakeholders.

privacy trade-off (ph) – The noble sacrifice of individual control over personal information for group benefits like AI-driven health-care advancements, which also happen to be highly profitable.

progress (n) – Scientific and technological advancement. An inherent good.

real world (ph) – The opposite of the simulated world. A dynamic physical environment filled with unexpected surprises that AI models are trained to survive. Not to be confused with humans and society.

regulation (n) – What you call for to shift the responsibility for mitigating harmful AI onto policymakers. Not to be confused with policies that would hinder your growth.

responsible AI (n) – A moniker for any work at your company that could be construed by the public as a sincere effort to mitigate the harms of your AI systems.

robustness (n) – The ability of an AI model to function consistently and accurately under nefarious attempts to feed it corrupted data.

safety (n) – The challenge of building AI systems that don’t go rogue from the designer’s intentions. Not to be confused with building AI systems that don’t fail. See alignment.

scale (n) – The de facto end state that any good AI system should strive to achieve.

security (n) – The act of protecting valuable or sensitive data and AI models from being breached by bad actors. See adversary.

stakeholders (n) – Shareholders, regulators, users. The people in power you want to keep happy.

transparency (n) – Revealing your data and code. Bad for proprietary and sensitive information. Thus really hard; quite frankly, even impossible. Not to be confused with clear communication about how your system actually works.

trustworthy (adj) – An assessment of an AI system that can be manufactured with enough coordinated publicity.

universal basic income (ph) – The idea that paying everyone a fixed salary will solve the massive economic upheaval caused when automation leads to widespread job loss. Popularized by 2020 presidential candidate Andrew Yang. See wealth redistribution.

validation (n) – The process of testing an AI model on data other than the data it was trained on, to check that it is still accurate.

value (n) – An intangible benefit rendered to your users that makes you a lot of money.

values (n) – You have them. Remind people.

wealth redistribution (ph) – A useful idea to dangle around when people scrutinize you for using way too many resources and making way too much money. How would wealth redistribution work? Universal basic income, of course. Also not something you could figure out yourself. Would require regulation. See regulation.

withhold publication (ph) – The benevolent act of choosing not to open-source your code because it could fall into the hands of a bad actor. Better to limit access to partners who can afford it.

The hunter-gatherer groups at the heart of a microbiome gold rush

The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of these protein families had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.

The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan

Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with diseases ranging from arthritis to Alzheimer’s.

Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost. 

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

Navigating a shifting customer-engagement landscape with generative AI

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Two of these are marketing and sales, and customer operations. Yet despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the time required to formulate a strategy, conduct the necessary training across teams and functions, and identify the areas where generative AI adds value. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to create a claims process that leverages generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another matter. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing, defining explainability, and benchmarking return on investment (ROI).

Pre-built solutions from external partners can expedite time to market and increase a generative AI program’s value, harnessing industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”
