Connect with us

Tech

Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof

Published

on

Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof


Smith’s Yelp reviews were shut down after the sudden flurry of activity on its page, which the company labels “unusual activity alerts,” a stopgap measure for both the business and Yelp to filter through a flood of reviews and pick out which are spam and which aren’t. Noorie Malik, Yelp’s vice president of user operations, said Yelp has a “team of moderators” that investigate pages that get an unusual amount of traffic. “After we’ve seen activity dramatically decrease or stop, we will then clean up the page so that only firsthand consumer experiences are reflected,” she said in a statement.

It’s a practice that Yelp has had to deploy more often over the course of the pandemic: According to Yelp’s 2020 Trust & Safety Report, the company saw a 206% increase over 2019 levels in unusual activity alerts. “Since January 2021, we’ve placed more than 15 unusual activity alerts on business pages related to a business’s stance on covid-19 vaccinations,” said Malik.

The majority of those cases have been since May, like the gay bar C.C. Attles in Seattle, which got an alert from Yelp after it made patrons show proof of vaccination at the door. Earlier this month, Moe’s Cantina in Chicago’s River North neighborhood got spammed after it attempted to isolate vaccinated customers from unvaccinated ones.

Spamming a business with one-star reviews is not a new tactic. In fact, perhaps the best-known case is Colorado’s Masterpiece bakery, which won a 2018 Supreme Court battle for refusing to make a wedding cake for a same-sex couple, after which it got pummeled by one-star reviews. “People are still writing fake reviews. People will always write fake reviews,” Liu says.

But he adds that today’s online audience know that platforms use algorithms to detect and flag problematic words, so bad actors can mask their grievances by blaming poor restaurant service like a more typical negative review to ensure the rating stays up — and counts.

That seems to have been the case with Knapp’s bar. His Yelp review included comments like “There was hair in my food” or alleged cockroach sightings. “Really ridiculous, fantastic shit,” Knapp says. “If you looked at previous reviews, you would understand immediately that this doesn’t make sense.” 

Liu also says there is a limit to how much Yelp can improve their spam detection, since natural language — or the way we speak, read, and write — “is very tough for computer systems to detect.” 

But Liu doesn’t think putting a human being in charge of figuring out which reviews are spam or not will solve the problem. “Human beings can’t do it,” he says. “Some people might get it right, some people might get it wrong. I have fake reviews on my webpage and even I can’t tell which are real or not.”

You might notice that I’ve only mentioned Yelp reviews thus far, despite the fact that Google reviews — which appear in the business description box on the right side of the Google search results page under “reviews” — is arguably more influential. That’s because Google’s review operations are, frankly, even more mysterious. 

While businesses I spoke to said Yelp worked with them on identifying spam reviews, none of them had any luck with contacting Google’s team. “You would think Google would say, ‘Something is fucked up here,’” Knapp says. “These are IP addresses from overseas. It really undermines the review platform when things like this are allowed to happen.”

Tech

Turning medical data into actionable knowledge

Published

on

Turning medical data into actionable knowledge


PACS remains an indispensable tool for viewing and interpreting imaging results, but leading health care providers are now beginning to move beyond PACS. The new paradigm brings data from multiple medical specialties together into a single platform, with a single user interface that strives to provide a holistic understanding of the patient and facilitate clinical reporting. By connecting data from multiple specialties and enabling secure and efficient access to relevant patient data, advanced information technology platforms can enhance patient care, simplify workflows for clinicians, and reduce costs for health care organizations. This organizes data around patients, rather than clinical departments.

Meeting patient expectations

Health care providers generate an enormous volume of data. Today, nearly one-third of the world’s data volume is generated by the health care industry. The growth in health care data outpaces media and entertainment, whose data is expanding at a 25% compound annual growth rate ,compared to the 36% rate for health care data. This makes the need for a comprehensive health care data management systems increasingly urgent.

The volume of health care industry data is only part of the challenge. Different data types stored in different formats create an additional hurdle to the efficient storage, retrieval, and sharing of clinically important patient data.

PACS was designed to view and store data in the Digital Imaging and Communications in Medicine (DICOM) standard, and a process known as “DICOM-wrapping” is used for PACS to provide access to patient information stored in PDF, MP4, and other file formats. In addition to adding additional steps that impede efficient workflow, DICOM-wrapping makes it difficult for clinicians to work with a file in its native format. PACS users are given what is essentially a screen shot of an Excel file, which makes it impossible to use the data analysis features in the Excel software.

With an open image and data management (IDM) system coupled with an intuitive reading and reporting workspace, patient data can be consolidated in one location instead of in multiple data silos, providing clinicians with the information they need to provide the highest level of patient-centered care. In a 2017 survey by health insurance company Humana, its patients said they aren’t interested in the details of health care IT, but are nearly unanimous when it comes to their expectations, with 97% of patients saying that their health care providers should have access to their complete medical history.

Adapting to clinical needs

To meet patient expectations and needs, health care IT seeks to meet the needs of health care providers and systems by offering flexibility—both in its initial setup and in its capacity to scale to meet evolving organizational demands.

A modular architecture enables health care providers and systems to tailor their system to their specific needs. Depending on clinical needs, health care providers can integrate specialist applications for reading and reporting, AI-powered functionalities, advanced visualization, and third-party tools. The best systems are scalable, so that they can grow as an organization grows, with the ability to flexibly scale hardware by expanding the number of servers and storage capacity.

A simple, unified UI enables a quick learning curve across the organization, while the adoption of a single enterprise system helps reduce IT costs by enabling the consolidation and integration of previously distinct systems. Through password-protected data transfers, these systems can also facilitate communication with patients.

Continue Reading

Tech

Why Big Tech’s bet on AI assistants is so risky

Published

on

Why Big Tech’s bet on AI assistants is so risky


OpenAI unveiled new ChatGPT features that include the ability to have a conversation with the chatbot as if you were making a call, allowing you to instantly get responses to your spoken questions in a lifelike synthetic voice, as my colleague Will Douglas Heaven reported. OpenAI also revealed that ChatGPT will be able to search the web.  

Google’s rival bot, Bard, is plugged into most of the company’s ecosystem, including Gmail, Docs, YouTube, and Maps. The idea is that people will be able to use the chatbot to ask questions about their own content—for example, by getting it to search through their emails or organize their calendar. Bard will also be able to instantly retrieve information from Google Search. In a similar vein, Meta too announced that it is throwing AI chatbots at everything. Users will be able to ask AI chatbots and celebrity AI avatars questions on WhatsApp, Messenger, and Instagram, with the AI model retrieving information online from Bing search. 

This is a risky bet, given the limitations of the technology. Tech companies have not solved some of the persistent problems with AI language models, such as their propensity to make things up or “hallucinate.” But what concerns me the most is that they are a security and privacy disaster, as I wrote earlier this year. Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale. 

I’ve covered the significant security problems with AI language models before. Now that AI assistants have access to personal information and can simultaneously browse the web, they are particularly prone to a type of attack called indirect prompt injection. It’s ridiculously easy to execute, and there is no known fix. 

In an indirect prompt injection attack, a third party “alters a website by adding hidden text that is meant to change the AI’s behavior,” as I wrote in April. “Attackers could use social media or email to direct users to websites with these secret prompts. Once that happens, the AI system could be manipulated to let the attacker try to extract people’s credit card information, for example.” With this new generation of AI models plugged into social media and emails, the opportunities for hackers are endless. 

I asked OpenAI, Google, and Meta what they are doing to defend against prompt injection attacks and hallucinations. Meta did not reply in time for publication, and OpenAI did not comment on the record. 

Regarding AI’s propensity to make things up, a spokesperson for Google did say the company was releasing Bard as an “experiment,” and that it lets users fact-check Bard’s answers using Google Search. “If users see a hallucination or something that isn’t accurate, we encourage them to click the thumbs-down button and provide feedback. That’s one way Bard will learn and improve,” the spokesperson said. Of course, this approach puts the onus on the user to spot the mistake, and people have a tendency to place too much trust in the responses generated by a computer. Google did not have an answer for my question about prompt injection. 

For prompt injection, Google confirmed it is not a solved problem and remains an active area of research. The spokesperson said the company is using other systems, such as spam filters, to identify and filter out attempted attacks, and is conducting adversarial testing and red teaming exercises to identify how malicious actors might attack products built on language models. “We’re using specially trained models to help identify known malicious inputs and known unsafe outputs that violate our policies,” the spokesperson said.  

Continue Reading

Tech

Child online safety laws will actually hurt kids, critics say

Published

on

Child online safety laws will actually hurt kids, critics say


At the same time, we’ve also seen many states pick up (and politicize) laws about online safety for kids in recent months. These policies vary quite a bit from state to state, as I wrote back in April. Some focus on children’s data, and others try to limit how much and when kids can get online. 

Supporters say these laws are necessary to mitigate the risks that big tech companies pose to young people—risks that are increasingly well documented. They say it’s well past time to put guardrails in place and limit the collecting and selling of minors’ data.

“What we’re doing here is creating a duty of care that makes the social media platforms accountable for the harms they’ve caused,” said Senator Richard Blumenthal, who is co-sponsoring a child online safety bill in the Senate, in an interview with Slate. “It gives attorneys general and the FTC the power to bring lawsuits based on the product designs that, in effect, drive eating disorders, bullying, suicide, and sex and drug abuse that kids haven’t requested and that can be addictive.”

But—surprise, surprise—as with most things, it’s not really that simple. There are also vocal critics who argue that child safety laws are actually harmful to kids because all these laws, no matter their shape, have to contend with a central tension: in order to implement laws that apply to kids online, companies need to actually identify which users are kids—which requires the collection or estimation of sensitive personal information. 

I was thinking about this when the prominent New York–based civil society organization S.T.O.P. (which stands for the Surveillance Technology Oversight Project) released a report on September 28 that highlights some of these potential harms and makes the case that all bills requiring tech companies to identify underage users, even if well intentioned, will increase online surveillance for everyone. 

“These bills are sold as a way to protect teens, but they do just the opposite,” S.T.O.P. executive director Albert Fox Cahn said in a press release. “Rather than misguided efforts to track every user’s age and identity, we need privacy protections for every American.”  

There’s a wide range of regulations out there, but the report calls out several states that are creating laws imposing stricter—even drastic—restrictions on minors’ internet access, effectively limiting online speech. 

A Utah law that will take effect in March 2024, for instance, will require that parents give consent for their kids to access social media outside the hours of 6:30 a.m. to 10:30 p.m., and that social media companies build features enabling parents to access their kids’ accounts. 

Continue Reading

Copyright © 2021 Seminole Press.