
Mark Zuckerberg still won’t address the root cause of Facebook’s misinformation problem



As Hao wrote, a New York University study of partisan publishers’ Facebook pages found that “those that regularly posted political misinformation received the most engagement in the lead-up to the 2020 US presidential election and the Capitol riots.”

Zuckerberg, after saying that “a bunch of inaccurate things” about Facebook’s incentives for allowing and amplifying misinformation and polarizing content had been shared at the hearing by members of Congress, added: 

“People don’t want to see misinformation or divisive content on our services. People don’t want to see clickbait and things like that. While it may be true that people may be more likely to click on it in the short term, it’s not good for our business or our product or our community for it to be there.” 

His answer is a common Facebook talking point and skirts the fact that the company has not undertaken a centralized, coordinated effort to examine and reduce the way its recommendation systems amplify misinformation. To learn more, read Hao’s reporting. 

Zuckerberg’s comments came during the House Committee on Energy and Commerce hearing on disinformation, where members of Congress asked Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey about the spread of misinformation about the US election in November, the January 6 attack on the Capitol building, and covid vaccines, among other things. 

As has become common in these hearings, conservative legislators also questioned the CEOs about perceived anti-conservative bias on their platforms, a longtime right-wing claim that data doesn’t support.


Yann LeCun has a bold new vision for the future of AI



Melanie Mitchell, an AI researcher at the Santa Fe Institute, is also excited to see a whole new approach. “We really haven’t seen this coming out of the deep-learning community so much,” she says. She also agrees with LeCun that large language models cannot be the whole story. “They lack memory and internal models of the world that are actually really important,” she says.

Natasha Jaques, a researcher at Google Brain, thinks that language models should still play a role, however. It’s odd for language to be entirely missing from LeCun’s proposals, she says: “We know that large language models are super effective and bake in a bunch of human knowledge.”

Jaques, who works on ways to get AIs to share information and abilities with each other, points out that humans don’t have to have direct experience of something to learn about it. We can change our behavior simply by being told something, such as not to touch a hot pan. “How do I update this world model that Yann is proposing if I don’t have language?” she asks.

There’s another issue, too. If they were to work, LeCun’s ideas would create a powerful technology that could be as transformative as the internet. And yet his proposal doesn’t discuss how his model’s behavior and motivations would be controlled, or who would control them. This is a weird omission, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group. 

“We should think more about what it takes for AI to function well in a society, and that requires thinking about ethical behavior, amongst other things,” says Gupta. 

Yet Jaques notes that LeCun’s proposals are still very much ideas rather than practical applications. Mitchell says the same: “There’s certainly little risk of this becoming a human-level intelligence anytime soon.”

LeCun would agree. His aim is to sow the seeds of a new approach in the hope that others build on it. “This is something that is going to take a lot of effort from a lot of people,” he says. “I’m putting this out there because I think ultimately this is the way to go.” If nothing else, he wants to convince people that large language models and reinforcement learning are not the only ways forward. 

“I hate to see people wasting their time,” he says.


The Download: Yann LeCun’s AI vision, and smart cities’ unfulfilled promises



“We’re addicted to being on Facebook.”

—Jordi Berbera, who runs a pizza stand in Mexico City, tells Rest of World why he has turned to selling his wares through the social network instead of through more conventional food delivery apps.

The big story

“Am I going crazy or am I being stalked?” Inside the disturbing online world of gangstalking

August 2020

Jenny’s story is not linear, the way that we like stories to be. She was born in Baltimore in 1975 and had a happy, healthy childhood—her younger brother Danny fondly recalls the treasure hunts she would orchestrate. In her late teens, she developed anorexia and depression and was hospitalized for a month. Despite her struggles, she graduated high school and was accepted into a prestigious liberal arts college.

There, things went downhill again. Among other issues, chronic fatigue led her to drop out. When she was 25, she flipped her car on Florida’s Sunshine Skyway Bridge in an apparent suicide attempt. At 30, after experiencing delusions that she was pregnant, she was diagnosed with schizophrenia. She was hospitalized for half a year and began treatment, regularly receiving shots of an antipsychotic drug. “It was like having my older sister back again,” Danny says.

On July 17, 2017, Jenny jumped from the tenth floor of a parking garage at Tampa International Airport. After her death, her family searched her hotel room and her apartment, but the 42-year-old didn’t leave a note. “We wanted to find a reason for why she did this,” Danny says. And so, a week after his sister’s death, Danny, a certified ethical hacker, decided to look for answers on Jenny’s computer. He found she had subscribed to hundreds of gangstalking groups across Facebook, Twitter, and Reddit: online communities where self-described “targeted individuals” say they are being monitored, harassed, and stalked 24/7 by governments and other organizations, and where the internet legitimizes them. Read the full story.


The US Supreme Court has overturned Roe v. Wade. What does that mean?



Access to legal abortion is now subject to state laws, allowing each state to decide whether to ban, restrict or allow abortion. Some parts of the country are much stricter than others—Arkansas, Oklahoma and Kentucky are among the 13 states with trigger laws that immediately made abortion illegal in the aftermath of the ruling. In total, around half of states are likely to either ban or limit access to the procedure, with many of them refusing to make exceptions, even in pregnancies involving rape, incest and fetuses with genetic abnormalities. Many specialized abortion clinics may be forced to close their doors in the next few days and weeks.

While overturning Roe v. Wade will not spell an end to abortion in the US, it is likely to reduce the number of abortions and force those seeking them to turn to other methods. People living in states that ban or heavily restrict abortion may consider traveling to states that will continue to allow it, although crossing state lines can be time-consuming and prohibitively expensive for many people facing financial hardship.

The decision also raises the likelihood that anti-abortion activists will use surveillance and data collection to track and identify people seeking abortions. This information could be used to criminalize them, making it particularly dangerous for those who leave home to cross state lines.

Vigilante volunteers already stake out abortion clinics in states including Mississippi, Florida and North Carolina, filming people’s arrival on cameras and recording details about them and their cars. While they deny the data is used to harass or contact people seeking abortions, experts are concerned that footage filmed of clients arriving and leaving clinics could be exploited to target and harm them, particularly if law enforcement agencies or private groups were to use facial recognition to identify them.

Another option is to order so-called abortion pills to discreetly end a pregnancy at home. The pills, which are safe and widely prescribed by doctors, are significantly less expensive than surgical procedures, and already account for the majority of abortions in the US.

