Individuals should not have to fight for their data privacy rights and be responsible for every consequence of their digital actions. Consider an analogy: people have a right to safe drinking water, but they aren’t urged to exercise that right by checking the quality of the water with a pipette every time they have a drink at the tap. Instead, regulatory agencies act on everyone’s behalf to ensure that all our water is safe. The same must be done for digital privacy: it isn’t something the average user is, or should be expected to be, personally competent to protect.
There are two parallel approaches that should be pursued to protect the public.
One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European Parliament passed a measure requiring all 27 EU member states to allow collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.
Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.
Which brings us to the second approach: a little-known 2016 French law, the Digital Republic Bill, one of the few modern laws focused on automated decision making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch for what future laws could look like: it says that the source code behind such systems must be made available to the public, and anyone can request that code.
Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated.
Apple promises in one advertisement: “Right now, there is more private information on your phone than in your home. Your locations, your messages, your heart rate after a run. These are private things. And they should belong to you.” Apple is reinforcing this individualist fallacy: by failing to mention that your phone stores more than just your personal data, the company obfuscates the fact that the really valuable data comes from your interactions with your service providers and others. The notion that your phone is the digital equivalent of your filing cabinet is a convenient illusion. Companies actually care little about your personal data; that is why they can pretend to lock it in a box. The value lies in the inferences drawn from your interactions, which are also stored on your phone—but that data does not belong to you.
Google’s acquisition of Fitbit is another example. Google promises “not to use Fitbit data for advertising,” but the lucrative predictions Google needs aren’t dependent on individual data. As a group of European economists argued in a recent paper put out by the Centre for Economic Policy Research, a think tank in London, “it is enough for Google to correlate aggregate health outcomes with non-health outcomes for even a subset of Fitbit users that did not opt out from some use of their data, to then predict health outcomes (and thus ad targeting possibilities) for all non-Fitbit users (billions of them).” The Google-Fitbit deal is essentially a group data deal. It positions Google in a key market for health data while enabling it to triangulate different data sets and make money from the inferences used by health and insurance markets.
What policymakers must do
Draft bills have sought to fill this gap in the United States. In 2019 Senators Cory Booker and Ron Wyden introduced an Algorithmic Accountability Act, which subsequently stalled in Congress. The act would have required firms to undertake algorithmic impact assessments in certain situations to check for bias or discrimination. But in the US this crucial issue is more likely to be taken up first in laws applying to specific sectors such as health care, where the danger of algorithmic bias has been magnified by the pandemic’s disparate impacts on US population groups.
In late January, the Public Health Emergency Privacy Act was reintroduced to the Senate and House of Representatives by Senators Mark Warner and Richard Blumenthal. This act would ensure that data collected for public health purposes is not used for any other purpose. It would prohibit the use of health data for discriminatory, unrelated, or intrusive purposes, including commercial advertising, e-commerce, or efforts to control access to employment, finance, insurance, housing, or education. This would be a great start. Going further, a law that applies to all algorithmic decision making should, inspired by the French example, focus on hard accountability, strong regulatory oversight of data-driven decision making, and the ability to audit and inspect algorithmic decisions and their impact on society.
Three elements are needed to ensure hard accountability: (1) clear transparency about where and when automated decisions take place and how they affect people and groups, (2) the public’s right to offer meaningful input and call on those in authority to justify their decisions, and (3) the ability to enforce sanctions. Crucially, policymakers will need to decide, as has been recently suggested in the EU, what constitutes a “high risk” algorithm that should meet a higher standard of scrutiny.
The focus should be on public scrutiny of automated decision making and the types of transparency that lead to accountability. This includes revealing the existence of algorithms, their purpose, and the training data behind them, as well as their impacts—whether they have led to disparate outcomes, and on which groups if so.
The public has a fundamental right to call on those in power to justify their decisions. This “right to demand answers” should not be limited to consultative participation, where people are asked for their input and officials move on. It should include empowered participation, where public input is mandated prior to the rollout of high-risk algorithms in both the public and private sectors.
Finally, the power to sanction is key for these reforms to succeed and for accountability to be achieved. It should be mandatory to establish auditing requirements for data targeting, verification, and curation, to equip auditors with this baseline knowledge, and to empower oversight bodies to enforce sanctions, not only to remedy harm after the fact but to prevent it.
The issue of collective data-driven harms affects everyone. A Public Health Emergency Privacy Act is a first step. Congress should then use the lessons from implementing that act to develop laws that focus specifically on collective data rights. Only through such action can the US avoid situations where inferences drawn from the data companies collect haunt people’s ability to access housing, jobs, credit, and other opportunities for years to come.
The hunter-gatherer groups at the heart of a microbiome gold rush
The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania.
Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.
Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.
A changing way of life
The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.
Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.
One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.
Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.
The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.
The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan
Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole range of diseases, from arthritis to Alzheimer’s.
Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost.
But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.
Eric Schmidt has a 6-point plan for fighting election misinformation
—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures
The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.
Navigating a shifting customer-engagement landscape with generative AI
A strategic imperative
Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.
According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales, and customer operations. Yet despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.
Mapping out a generative AI pathway
One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.
The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the duration required for formulating a strategy, conduct necessary training across various teams and functions, and identify the areas of value addition. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.
Ayer cites WNS Triange’s collaboration with an insurer to build a claims process powered by generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.
All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.
The benefits of third-party experience
Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another matter. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).
Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”