How to fight for internet freedom

Last week, Freedom House, a human rights advocacy group, released its annual review of the state of internet freedom around the world; it’s one of the most important trackers out there if you want to understand changes to digital free expression. 

As I wrote, the report shows that generative AI is already a game changer in geopolitics. But it isn't the only concerning finding. Globally, internet freedom has never been lower: the number of countries that blocked websites for political, social, and religious speech has never been higher, and the number of countries that arrested people for online expression reached a record high.

These issues are particularly urgent as we head into a year with over 50 elections worldwide; as Freedom House has noted, election cycles are when internet freedom is most often under threat. The organization has issued recommendations for how the international community should respond to the growing crisis, and I also reached out to another policy expert for her perspective.

Call me an optimist, but talking with them this week made me feel like there are at least some actionable things we might do to make the internet safer and freer. Here are three key things they say tech companies and lawmakers should do:

  1. Increase transparency around AI models 

    One of the primary recommendations from Freedom House is to encourage more public disclosure of how AI models were built. Large language models like ChatGPT are infamously inscrutable (you should read my colleagues’ work on this), and the companies that develop the algorithms have been resistant to disclosing information about what data they used to train their models.  

    “Government regulation should be aimed at delivering more transparency, providing effective mechanisms of public oversight, and prioritizing the protection of human rights,” the report says. 

    As governments race to keep up in a rapidly evolving space, comprehensive legislation may be out of reach. But proposals that mandate more narrow requirements—like the disclosure of training data and standardized testing for bias in outputs—could find their way into more targeted policies. (If you’re curious to know more about what the US in particular may do to regulate AI, I’ve covered that, too.) 

    When it comes to internet freedom, increased transparency would also help people better recognize when they are seeing state-sponsored content online—like in China, where the government requires content created by generative AI models to be favorable to the Communist Party.

  2. Be cautious when using AI to scan and filter content

    Social media companies are increasingly using algorithms to moderate what appears on their platforms. While automatic moderation helps thwart disinformation, it also risks hurting online expression. 

    “While corporations should consider the ways in which their platforms and products are designed, developed, and deployed so as not to exacerbate state-sponsored disinformation campaigns, they must be vigilant to preserve human rights, namely free expression and association online,” says Mallory Knodel, the chief technology officer of the Center for Democracy and Technology. 

    Additionally, Knodel says that when governments require platforms to scan and filter content, this often leads to algorithms that block even more content than intended.

    As part of the solution, Knodel believes tech companies should find ways to “enhance human-in-the-loop features,” in which people have hands-on roles in content moderation, and “rely on user agency to both block and report disinformation.” 

  3. Develop better ways to label AI-generated content, especially related to elections

    Currently, labeling AI-generated images, video, and audio is incredibly hard to do. (I’ve written a bit about this in the past, particularly the ways technologists are trying to make progress on the problem.) But there’s no gold standard here, so misleading content, especially around elections, has the potential to do great harm.

    Allie Funk, one of the researchers on the Freedom House report, told me about an example in Nigeria of an AI-manipulated audio clip in which presidential candidate Atiku Abubakar and his team could be heard saying they planned to rig the ballots. Nigeria has a history of election-related conflict, and Funk says disinformation like this “really threatens to inflame simmering potential unrest” and create “disastrous impacts.”

    AI-manipulated audio is particularly hard to detect. Funk says this example is just one among many that the group chronicled that “speaks to the need for a whole host of different types of labeling.” Even if it can’t be ready in time for next year’s elections, it’s critical that we start to figure it out now.

What else I’m reading

  • This joint investigation from Wired and the Markup showed that predictive policing software was right less than 1% of the time. The findings are damning yet not surprising: policing technology has a long history of being exposed as junk science, especially in forensics.
  • MIT Technology Review released our first list of climate technology companies to watch, in which we highlight companies pioneering breakthrough research. Read my colleague James Temple’s overview of the list, which makes the case for why we need to pay attention to technologies with the potential to address our climate crisis. 
  • Companies that own or use generative AI might soon be able to take out insurance policies to mitigate the risk of using AI models—think biased outputs and copyright lawsuits. It’s a fascinating development in the marketplace of generative AI.

What I learned this week

A new paper from Stanford’s Journal of Online Trust and Safety highlights why content moderation in low-resource languages—languages without enough digitized training data to build accurate AI systems—is so poor. It also makes an interesting case about where attention should go to improve this. While social media companies ultimately need “access to more training and testing data in those languages,” it argues, a “lower-hanging fruit” could be investing in local and grassroots initiatives for research on natural-language processing (NLP) in low-resource languages.  

“Funders can help support existing local collectives of language- and language-family-specific NLP research networks who are working to digitize and build tools for some of the lowest-resource languages,” the researchers write. In other words, rather than investing in collecting more data from low-resource languages for big Western tech companies, funders should spend money on local NLP projects that are developing new AI research, which could produce AI systems well suited to those languages.

The hunter-gatherer groups at the heart of a microbiome gold rush

The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.

The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan

Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole host of diseases, ranging from arthritis to Alzheimer’s.

Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost. 

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

Navigating a shifting customer-engagement landscape with generative AI


A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales and customer operations. Yet, despite the technology’s benefits, many leaders are unsure about the right approach to take and mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the duration required for formulating a strategy, conduct necessary training across various teams and functions, and identify the areas of value addition. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to build a claims process powered by generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another matter. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”


Copyright © 2021 Seminole Press.