The fight over the future of encryption, explained

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

On October 9, I moderated a panel on encryption, privacy policy, and human rights at the United Nations’ annual Internet Governance Forum. I shared the stage with some fabulous panelists, including Roger Dingledine, the director of the Tor Project; Sharon Polsky, the president of the Privacy and Access Council of Canada; and Rand Hammoud, a campaigner at Access Now, a human rights advocacy organization. All of them strongly believe in and champion the protection of encryption.

I want to tell you about one thing that came up in our conversation: efforts to, in some way, monitor encrypted messages. 

Policy proposals have been popping up around the world (like in Australia, India, and, most recently, the UK) that call for tech companies to build in ways to gain information about encrypted messages, including through back-door access. There have also been efforts to increase moderation and safety on encrypted messaging apps, like Signal and Telegram, to try to prevent the spread of abusive content, like child sexual abuse material, criminal networking, and drug trafficking.

Not surprisingly, advocates for encryption are generally opposed to these sorts of proposals, since they would weaken the level of user privacy that’s currently guaranteed by end-to-end encryption.

In my prep work before the panel, and then in our conversation, I learned about some new cryptographic technologies that might allow for some content moderation, as well as increased enforcement of platform policies and laws, all without breaking encryption. These are fringe technologies right now, mainly still in the research phase. Though they are being developed in several different flavors, most of them ostensibly enable algorithms to evaluate messages or patterns in their metadata to flag problematic material without having to break encryption or reveal the content of the messages.

Legally and politically, the space is something of a hornet’s nest; states are desperate to crack down on illicit activity on the platforms, but free speech advocates argue that review will lead to censorship. In my opinion, it’s a space well worth watching, since it may very well impact all of us.

Here’s what you ought to know: 

First, some basics on encryption and the debate… 

Even if you’re not familiar with exactly how encryption works, you probably use it pretty regularly. It’s a technology that uses cryptography (essentially, the math behind codes) to scramble messages so that their contents remain private. Today, we talk a lot about end-to-end encryption, in which a sender transmits a message that gets encrypted and sent as ciphertext. The receiver then has to decrypt it to read the message in plain text. With end-to-end encryption, even the tech companies that make encrypted apps do not have the “keys” to break that cipher.
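
To make that concrete, here is a minimal sketch of the encrypt/decrypt round trip using the Fernet recipe from Python’s cryptography package. Real end-to-end messengers layer key exchange and forward secrecy on top of this basic idea, but the core property is the same: only the endpoints hold the key.

```python
# A minimal sketch of the round trip described above, using the symmetric
# Fernet recipe from Python's "cryptography" package. Everything between
# the two endpoints sees only ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # held by sender and receiver, and no one else
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"meet at noon")  # what actually travels over the wire
print(ciphertext)                             # scrambled bytes, opaque to the server

plaintext = cipher.decrypt(ciphertext)        # only a key holder can reverse this
print(plaintext.decode())                     # "meet at noon"
```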

Encryption has been debated from a policy perspective since its inception, especially after high-profile crimes or terrorist attacks. (The investigation of the 2015 San Bernardino shooting is one example.) Tech companies argue that providing access would have substantial risks because it would be hard to keep a master key—which doesn’t actually exist today—from bad actors. Opponents of these back doors also say that law enforcement really can’t be trusted with this kind of access. 

So tell me about this new tech…

There are two main buckets of technologies to watch here right now. 

Automated scanning: This is the more popular of the two, and the more controversial. It involves AI-powered systems that scan message content and compare it to a database of objectionable material. If a message is flagged as potentially abusive, tech companies could theoretically prevent the message from being sent, or could in some manner flag the material to law enforcement or to the recipient. There are two main ways this could be done: client-side scanning and server-side scanning (the latter often relying on homomorphic encryption, which allows computation on encrypted data), with the main differences being how and where the message is scanned and compared to a database.

Client-side scanning occurs on users’ devices before messages are encrypted and sent; server-side scanning takes place after the message has been encrypted and sent, intercepting it before it reaches the recipient. (Some privacy advocates argue that server-side scanning does more to protect anonymity, since algorithms process the already-encrypted message to check for database matches without revealing its actual content.)
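
As a rough illustration of the client-side variant, the sketch below checks a message against a database of known-bad hashes before it is ever encrypted. This is deliberately simplified: real proposals, such as Apple’s 2021 design, used perceptual hashing and private set intersection rather than the exact hash match shown here, and the flagged-hash database is hypothetical.

```python
# A simplified sketch of client-side scanning: the check runs on the user's
# device, before encryption. FLAGGED_HASHES and the exact SHA-256 match are
# stand-ins for the perceptual-hashing schemes real proposals describe.
import hashlib

from cryptography.fernet import Fernet

FLAGGED_HASHES = {
    hashlib.sha256(b"known abusive content").hexdigest(),  # hypothetical entry
}

def send_message(cipher: Fernet, message: bytes) -> bytes | None:
    if hashlib.sha256(message).hexdigest() in FLAGGED_HASHES:
        return None  # matched on-device; a real system might report instead
    return cipher.encrypt(message)  # clean messages are encrypted and sent as usual
```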

Cons: From a technical standpoint, it takes a lot of computing power to compare every message to a database before it’s sent or received, so it’s not very easy to scale this tech. Additionally, moderation algorithms are not perfectly accurate, so this would run the risk of AI flagging messages that are not problematic, resulting in a clampdown on speech and potentially ensnaring innocent people. From a censorship and privacy standpoint, it’s not hard to see how contentious this approach could get. And who gets to decide what goes into the database of objectionable material?

Apple proposed implementing client-side scanning in 2021 to crack down on child sexual abuse material, but quickly abandoned the plan. And Signal’s president, Meredith Whittaker, has said, “client side scanning is a Faustian bargain that nullifies the entire premise of end-to-end encryption by mandating deeply insecure technology that would enable the government to literally check with every utterance before it is expressed.”

Message franking and forward tracing: Message franking uses cryptography to produce verifiable reports of malicious messages. Right now, when users report abuse on an encrypted messaging app, there is no way to verify those reports because tech companies cannot see the actual content of messages, and screenshots are easily manipulated.

Franking was proposed by Facebook in 2017, and it basically embeds a tag in each message that functions like an invisible electronic signature. When a user reports a message as abusive, Facebook can then use that tag to verify that the reported message has not been tampered with.
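
In outline, the scheme works something like the sketch below: the sender commits to the message with an HMAC tag under a one-time franking key that travels inside the encrypted payload, and reporting the message reveals the key so the platform can check the tag. Facebook’s actual design adds a server-side signature binding the tag to sender, recipient, and timestamp; this shows only the core idea.

```python
# A rough sketch of message franking. The platform stores only the tag; the
# one-time franking key rides inside the end-to-end-encrypted payload.
# Revealing the message and key at report time lets the platform verify the
# report without ever having read the message in transit.
import hashlib
import hmac
import os

def frank(message: bytes) -> tuple[bytes, bytes]:
    franking_key = os.urandom(32)  # sent to the recipient inside the ciphertext
    tag = hmac.new(franking_key, message, hashlib.sha256).digest()  # platform sees this
    return franking_key, tag

def verify_report(message: bytes, franking_key: bytes, tag: bytes) -> bool:
    expected = hmac.new(franking_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key, tag = frank(b"abusive message")
assert verify_report(b"abusive message", key, tag)          # genuine report verifies
assert not verify_report(b"doctored screenshot", key, tag)  # tampered report fails
```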

Forward tracing builds on message franking and lets platforms track where an encrypted message originated. Often, abusive messages are forwarded and shared many times over, making it hard for platforms to control the spread of abusive content even after it has been reported by users and verified. Like message franking, forward tracing uses cryptographic codes to allow platforms to see where a message came from. Platforms could then theoretically shut down the account or accounts spreading the problematic messages.
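
Continuing the sketch above, forward tracing can be pictured as an opaque tag, the originator’s identity encrypted under a key only the platform holds, that travels unchanged with every forward. The function names here are hypothetical, and the designs in the research literature are considerably more careful about what the platform gets to learn.

```python
# A sketch of the forward-tracing idea: the platform mints an opaque tracing
# tag when a message is first sent, and forwarded copies carry the same tag.
# After a report is verified (e.g., via franking), the platform can decrypt
# the tag to find where the message entered the network.
from cryptography.fernet import Fernet

platform_key = Fernet(Fernet.generate_key())  # held only by the platform

def mint_tracing_tag(origin_user_id: str) -> bytes:
    return platform_key.encrypt(origin_user_id.encode())  # attached at first send

def trace_origin(tag: bytes) -> str:
    return platform_key.decrypt(tag).decode()  # run only on verified reports

tag = mint_tracing_tag("user-123")
# ...the message and its tag are forwarded any number of times...
print(trace_origin(tag))  # "user-123"
```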

Cons: These techniques don’t actually give tech companies or authorities any new power to moderate private messages, but they do make user-driven and community moderation more robust and offer more visibility into encrypted spaces. However, it’s not clear whether this approach is even legal, at least in the US; some analysis has suggested it may break US wiretapping law.

What’s next?

For now, none of these technologies seem ready to be deployed from a technical standpoint, and they may be on shaky ground legally. In the UK, an earlier version of the Online Safety Act actually mandated that encrypted messaging providers deploy these sorts of technologies, though that language was removed last month after it became clear that this technology wasn’t ready. Meta plans to encrypt Facebook Messenger by the end of 2023 and Instagram direct messages soon after, so it will be interesting to see if it incorporates any of its own research on these technologies.

Overall and perhaps unsurprisingly given their work, my panelists aren’t too optimistic about this space, and argued that policy conversations should, first and foremost, focus on protecting encryption and increasing privacy. 

As Dingledine said to me after our panel, “Technology is a borderless place. If you break encryption for one, you break encryption for all, undermining national security and potentially harming the same groups you seek to protect.”

What else I’m reading

  • The challenges of moderating encrypted spaces came into sharp view this week with the horrors in Israel and Palestine. Hamas militants have vowed to broadcast executions over social media and have, thus far, been heavily using Telegram, an encrypted app. Drew Harwell at the Washington Post explains why this type of violent content might be impossible to scrub from the internet.
  • An essential front of the US-China tech war has been the struggle for control over the advanced computing chips needed for artificial intelligence. Now the US is considering ways to blockade China from advanced AI itself, writes Karen Hao in the Atlantic.
  • A damning new report from an oversight group at the Department of Homeland Security found that several agencies, including Immigration and Customs Enforcement, Customs and Border Protection, and the Secret Service, broke the law while using location data collected from apps on smartphones, writes Joseph Cox in 404 Media.

What I learned this week

Meta’s Oversight Board, an independent body whose content decisions are binding on the tech company, is working on its first deepfake case. It has reportedly agreed to review a decision made by Facebook to leave up a manipulated video of President Joe Biden. Meta said that the video was not removed because it was not generated by AI, nor did it feature manipulated speech.

“The Board selected this case to assess whether Meta’s policies adequately cover altered videos that could mislead people into believing politicians have taken actions, outside of speech, that they have not,” wrote the board in a blog post.

This means that the board is likely to soon reaffirm or make changes to the social media platform’s policy on deepfakes ahead of the US presidential election, which could have massive ramifications over the next year as generative AI continues to steamroll its way into digital information ecosystems. 

The hunter-gatherer groups at the heart of a microbiome gold rush

The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.

The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan

Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole host of diseases, ranging from arthritis to Alzheimer’s.

Some of those microbes might not be completely gone, though. Scientists believe many could still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost.

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

Navigating a shifting customer-engagement landscape with generative AI

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales, and customer operations. Yet despite the technology’s benefits, many leaders are unsure about the right approach to take and are mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the duration required for formulating a strategy, conduct necessary training across various teams and functions, and identify the areas of value addition. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to create a claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

All that, however, would not be possible without data on past claims history, repair costs, transaction data, and other necessary data sets to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another thing. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions by external partners can expedite time to market and increase a generative AI program’s value. These solutions can harness pre-built industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”
