Five big takeaways from Europe’s AI Act

The AI Act vote passed with an overwhelming majority, and has been heralded as one of the world’s most important developments in AI regulation. The European Parliament’s president, Roberta Metsola, described it as “legislation that will no doubt be setting the global standard for years to come.” 

Don’t hold your breath for any immediate clarity, though. The European system is a bit complicated. Next, members of the European Parliament will have to thrash out details with the Council of the European Union and the EU’s executive arm, the European Commission, before the draft rules become legislation. The final legislation will be a compromise between three different drafts from the three institutions, which vary a lot. It will likely take around two years before the laws are actually implemented.

What Wednesday’s vote accomplished was to approve the European Parliament’s position in the upcoming final negotiations. Structured similarly to the EU’s Digital Services Act, a legal framework for online platforms, the AI Act takes a “risk-based approach” by introducing restrictions based on how dangerous lawmakers predict an AI application could be. Businesses will also have to submit their own risk assessments about their use of AI. 

Some applications of AI will be banned entirely if lawmakers consider the risk “unacceptable,” while technologies deemed “high risk” will have new limitations on their use and requirements around transparency. 

Here are some of the major implications:

  1. Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI can determine when a student is failing to understand certain material, or when the driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but emotion recognition has not been banned in the draft texts from the other two institutions, suggesting there’s a political fight to come.
  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition.
  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people’s social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising. 
  4. New restrictions for generative AI. This draft is the first to propose ways to regulate generative AI, and ban the use of any copyrighted material in the training sets of large language models like OpenAI’s GPT-4. OpenAI has already come under the scrutiny of European lawmakers over concerns about data privacy and copyright. The draft bill also requires that AI-generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.
  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

The risks of AI as described by Margrethe Vestager, executive vice president of the EU Commission, are widespread. She has emphasized concerns about the future of trust in information, vulnerability to social manipulation by bad actors, and mass surveillance. 

“If we end up in a situation where we believe nothing, then we have undermined our society completely,” Vestager told reporters on Wednesday.

What I am reading this week

  • A Russian soldier surrendered to a Ukrainian assault drone, according to video footage published by the Wall Street Journal. The surrender took place back in May in the eastern city of Bakhmut, Ukraine. Upon seeing the soldier’s plea on video, the drone operator decided to spare his life, in accordance with international law. Drones have been critical in the war, and the surrender is a fascinating look at the future of warfare. 
  • Many Redditors are protesting changes to the site’s API that would eliminate or reduce the functionality of third-party apps and tools many communities rely on. In protest, those communities have “gone private,” which means that the pages are no longer publicly accessible. Reddit is known for the power it gives to its user base, but the company may now be regretting that, according to Casey Newton’s sharp assessment.
  • Contract workers who trained Google’s large language model, Bard, say they were fired after raising concerns about their working conditions and safety issues with the AI itself. The contractors say they were forced to meet unreasonable deadlines, which led to concerns about accuracy. Google says the responsibility lies with Appen, the contract agency employing the workers. If history tells us anything, there will be a human cost in the race to dominate generative AI. 

What I learned this week

This week, Human Rights Watch released an in-depth report about an algorithm used to dole out welfare benefits in Jordan. The organization found some major issues with the algorithm, which was funded by the World Bank, and says the system was based on incorrect and oversimplified assumptions about poverty. The report’s authors also called out the lack of transparency and cautioned against similar projects run by the World Bank. I wrote a short story about the findings. 

Meanwhile, the trend toward using algorithms in government services is growing. Elizabeth Renieris, author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse, wrote to me about the report, and emphasized the impact these sorts of systems will have going forward: “As the process to access benefits becomes digital by default, these benefits become even less likely to reach those who need them the most and only deepen the digital divide. This is a prime example of how expansive automation can directly and negatively impact people, and is the AI risk conversation that we should be focused on now.”

The hunter-gatherer groups at the heart of a microbiome gold rush

The first step to finding out is to catalogue what microbes we might have lost. To get as close to ancient microbiomes as possible, microbiologists have begun studying multiple Indigenous groups. Two have received the most attention: the Yanomami of the Amazon rainforest and the Hadza, in northern Tanzania. 

Researchers have made some startling discoveries already. A study by Sonnenburg and his colleagues, published in July, found that the gut microbiomes of the Hadza appear to include bugs that aren’t seen elsewhere—around 20% of the microbe genomes identified had not been recorded in a global catalogue of over 200,000 such genomes. The researchers found 8.4 million protein families in the guts of the 167 Hadza people they studied. Over half of them had not previously been identified in the human gut.

Plenty of other studies published in the last decade or so have helped build a picture of how the diets and lifestyles of hunter-gatherer societies influence the microbiome, and scientists have speculated on what this means for those living in more industrialized societies. But these revelations have come at a price.

A changing way of life

The Hadza people hunt wild animals and forage for fruit and honey. “We still live the ancient way of life, with arrows and old knives,” says Mangola, who works with the Olanakwe Community Fund to support education and economic projects for the Hadza. Hunters seek out food in the bush, which might include baboons, vervet monkeys, guinea fowl, kudu, porcupines, or dik-dik. Gatherers collect fruits, vegetables, and honey.

Mangola, who has met with multiple scientists over the years and participated in many research projects, has witnessed firsthand the impact of such research on his community. Much of it has been positive. But not all researchers act thoughtfully and ethically, he says, and some have exploited or harmed the community.

One enduring problem, says Mangola, is that scientists have tended to come and study the Hadza without properly explaining their research or their results. They arrive from Europe or the US, accompanied by guides, and collect feces, blood, hair, and other biological samples. Often, the people giving up these samples don’t know what they will be used for, says Mangola. Scientists get their results and publish them without returning to share them. “You tell the world [what you’ve discovered]—why can’t you come back to Tanzania to tell the Hadza?” asks Mangola. “It would bring meaning and excitement to the community,” he says.

Some scientists have talked about the Hadza as if they were living fossils, says Alyssa Crittenden, a nutritional anthropologist and biologist at the University of Nevada, Las Vegas, who has been studying and working with the Hadza for the last two decades.

The Hadza have been described as being “locked in time,” she adds, but characterizations like that don’t reflect reality. She has made many trips to Tanzania and seen for herself how life has changed. Tourists flock to the region. Roads have been built. Charities have helped the Hadza secure land rights. Mangola went abroad for his education: he has a law degree and a master’s from the Indigenous Peoples Law and Policy program at the University of Arizona.

The Download: a microbiome gold rush, and Eric Schmidt’s election misinformation plan

Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole host of diseases, ranging from arthritis to Alzheimer’s.

Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share. They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost. 

But there is a major catch: we don’t know whether those in hunter-gatherer societies really do have “healthier” microbiomes—and if they do, whether the benefits could be shared with others. At the same time, members of the communities being studied are concerned about the risk of what’s called biopiracy—taking natural resources from poorer countries for the benefit of wealthier ones. Read the full story.

—Jessica Hamzelou

Eric Schmidt has a 6-point plan for fighting election misinformation

—by Eric Schmidt, former CEO of Google and cofounder of the philanthropic initiative Schmidt Futures

The coming year will be one of seismic political shifts. Over 4 billion people will head to the polls in countries including the United States, Taiwan, India, and Indonesia, making 2024 the biggest election year in history.

Navigating a shifting customer-engagement landscape with generative AI

A strategic imperative

Generative AI’s ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology’s capabilities. In a study titled “The Future of Enterprise Data & AI,” Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI.

According to McKinsey, while generative AI will affect most business functions, “four of them will likely account for 75% of the total annual value it can deliver.” Among these are marketing and sales, and customer operations. Yet, despite the technology’s benefits, many leaders are unsure about the right approach to take and are mindful of the risks associated with large investments.

Mapping out a generative AI pathway

One of the first challenges organizations need to overcome is senior leadership alignment. “You need the necessary strategy; you need the ability to have the necessary buy-in of people,” says Ayer. “You need to make sure that you’ve got the right use case and business case for each one of them.” In other words, a clearly defined roadmap and precise business objectives are as crucial as understanding whether a process is amenable to the use of generative AI.

The implementation of a generative AI strategy can take time. According to Ayer, business leaders should maintain a realistic perspective on the time required for formulating a strategy, conducting the necessary training across teams and functions, and identifying the areas where generative AI adds value. And for any generative AI deployment to work seamlessly, the right data ecosystems must be in place.

Ayer cites WNS Triange’s collaboration with an insurer to create a claims process by leveraging generative AI. Thanks to the new technology, the insurer can immediately assess the severity of a vehicle’s damage from an accident and make a claims recommendation based on the unstructured data provided by the client. “Because this can be immediately assessed by a surveyor and they can reach a recommendation quickly, this instantly improves the insurer’s ability to satisfy their policyholders and reduce the claims processing time,” Ayer explains.

None of that, however, would be possible without data on past claims, repair costs, transactions, and the other data sets needed to extract clear value from generative AI analysis. “Be very clear about data sufficiency. Don’t jump into a program where eventually you realize you don’t have the necessary data,” Ayer says.
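To make that concrete, here is a minimal, hypothetical sketch of what a claims-triage step along the lines Ayer describes might look like. The type names, the assess_claim function, and the stubbed model call are illustrative assumptions, not WNS Triange’s actual system or any specific vendor API.

```python
from dataclasses import dataclass

# Hypothetical sketch only: structure and names are assumptions for
# illustration, not WNS Triange's implementation or a real vendor API.

@dataclass
class ClaimSubmission:
    policy_id: str
    description: str      # unstructured text from the policyholder
    photo_urls: list      # links to accident photos, if any

@dataclass
class ClaimAssessment:
    severity: str                 # e.g. "minor", "moderate", "total loss"
    estimated_repair_cost: float
    recommendation: str           # e.g. "fast-track payout" or "send surveyor"

def call_generative_model(prompt: str) -> dict:
    # Placeholder for a call to a hosted generative model. A real deployment
    # would validate the output against past claims and repair-cost data
    # before a surveyor acts on it.
    return {
        "severity": "moderate",
        "estimated_repair_cost": 2400.0,
        "recommendation": "send surveyor",
    }

def assess_claim(claim: ClaimSubmission, past_claims: list) -> ClaimAssessment:
    """Combine the unstructured submission with comparable past claims and
    ask a generative model for a structured assessment."""
    prompt = (
        "Assess the vehicle damage described below and recommend next steps.\n"
        f"Description: {claim.description}\n"
        f"Photos: {', '.join(claim.photo_urls)}\n"
        f"Comparable past claims: {past_claims[:5]}\n"
        "Respond as JSON with keys: severity, estimated_repair_cost, recommendation."
    )
    return ClaimAssessment(**call_generative_model(prompt))

# Example with made-up data.
claim = ClaimSubmission(
    policy_id="POL-1234",
    description="Rear-ended at low speed; bumper dented, taillight cracked.",
    photo_urls=["https://example.com/photo1.jpg"],
)
print(assess_claim(claim, past_claims=[]))
```

The point of the stub is the shape of the workflow: unstructured inputs and historical data go in, and a structured recommendation comes out for a human surveyor to review, which is exactly where the data-sufficiency caveat above applies.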

The benefits of third-party experience

Enterprises are increasingly aware that they must embrace generative AI, but knowing where to begin is another thing. “You start off wanting to make sure you don’t repeat mistakes other people have made,” says Ayer. An external provider can help organizations avoid those mistakes and leverage best practices and frameworks for testing and defining explainability and benchmarks for return on investment (ROI).

Using pre-built solutions from external partners can expedite time to market and increase a generative AI program’s value. These solutions harness pre-built, industry-specific generative AI platforms to accelerate deployment. “Generative AI programs can be extremely complicated,” Ayer points out. “There are a lot of infrastructure requirements, touch points with customers, and internal regulations. Organizations will also have to consider using pre-built solutions to accelerate speed to value. Third-party service providers bring the expertise of having an integrated approach to all these elements.”
