

The Download: China’s social credit law, and robot dog navigation




This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Here’s why China’s new social credit law matters

It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced plans to build it, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

Most people outside China assume it’ll act as a Black Mirror-esque system that uses technology to automatically score every Chinese citizen based on what they do right and wrong. Instead, it’s a mix of attempts to regulate the financial credit industry, to enable government agencies to share data with each other, and to promote state-sanctioned moral values—however vague that may sound.

Although the system itself will still take a long time to materialize, by releasing a draft law last week, China is now closer than ever to defining what it will look like—and how it will affect the lives of millions of citizens. Read the full story.

—Zeyi Yang

Watch this robot dog scramble over tricky terrain just by using its camera

The news: When Ananye Agarwal took his dog out for a walk up and down the steps in the local park near Carnegie Mellon University, other dogs stopped in their tracks. That’s because Agarwal’s dog was a robot—and a special one at that. Unlike other robots, which tend to rely heavily on an internal map to get around, his robot navigates tricky terrain using only a built-in camera, computer vision, and reinforcement learning.

Why it matters: While other attempts to use cues from cameras to guide robot movement have been limited to flat terrain, Agarwal and his fellow researchers managed to get their robot to walk up stairs, climb on stones, and hop over gaps. They’re hoping their work will help make it easier for robots to be deployed in the real world, and vastly improve their mobility in the process. Read the full story.

—Melissa Heikkilä

Trust large language models at your own peril

When Meta launched Galactica, an open-source large language model, the company was hoping for a big PR win. Instead, all it got was flak on Twitter and a spicy blog post from one of its most vocal critics, ending with its embarrassing decision to take the public demo of the model down after only three days. 

Galactica was intended to help scientists by summarizing academic papers and solving math problems, among other tasks. But outsiders swiftly prompted the model to provide “scientific research” on the benefits of homophobia, anti-Semitism, suicide, eating glass, being white, or being a man—demonstrating not only that its launch was premature, but just how insufficient AI researchers’ efforts to make large language models safer have been. Read the full story.

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Verified anti-vax Twitter accounts are spreading health misinformation
And perfectly demonstrating the problem with charging for verification in the process. (The Guardian)
+ Maybe Twitter wasn’t helping your career as much as you thought it was. (Bloomberg $)
+ A deepfake of FTX’s founder has been circulating on Twitter. (Motherboard)
+ Some of Twitter’s liberal users are refusing to leave. (The Atlantic $)
+ Twitter’s layoff bloodbath is over, apparently. (The Verge)
+ Twitter’s potential collapse could wipe out vast records of recent human history. (MIT Technology Review)

2 NASA’s Orion spacecraft has completed its lunar flyby
Paving the way to humans returning to the moon. (Vox)

3 Amazon’s warehouse-watching algorithms are trained by humans 
Poorly paid workers in India and Costa Rica are reviewing thousands of hours of mind-numbing footage. (The Verge)
+ The AI data labeling industry is deeply exploitative. (MIT Technology Review)

4 How to make sense of climate change
Accepting the hard facts is the first step towards avoiding the grimmest ending for the planet. (New Yorker $)
+ The world’s richest nations have agreed to pay for global warming. (The Atlantic $)
+ These three charts show who is most to blame for climate change. (MIT Technology Review)

5 Apple uncovered a cybersecurity startup’s dodgy dealings  
Apple compiled a document that illustrates the extent of Corellium’s relationships, including with the notorious NSO Group. (Wired $)
+ The hacking industry faces the end of an era. (MIT Technology Review)

6 The crypto industry is still feeling skittish
Shares in its largest exchange have dropped to an all-time low. (Bloomberg $)
+ The UK wants to crack down on gamified trading apps. (FT $)

7 The criminal justice system is failing neurodivergent people
Mimicking an online troll led to an autistic man being sentenced to five and a half years in jail. (Economist $)

8 Your workplace could be planning to scan your brain 🧠
All in the name of making you a more efficient employee. (IEEE Spectrum)

9 Facebook doesn’t care if your account is hacked
A series of new solutions to rescue accounts doesn’t seem to have had much effect. (WP $)
+ Parent company Meta is being sued in the UK over data collection. (Bloomberg $)
+ Independent artists are building the metaverse their way. (Motherboard)

10 Why training image-generating AIs on generated images is a bad idea
The ‘contaminated’ images will only confuse them. (New Scientist $)
+ Facial recognition software used by the US government reportedly didn’t work. (Motherboard)
+ The dark secret behind those cute AI-generated animal images. (MIT Technology Review)

Quote of the day

“It feels like they used to care more.”

—Ken Higgins, an Amazon Prime member who has lost faith in the company after a series of frustrating delivery experiences, speaking to the Wall Street Journal.

The big story

What if you could diagnose diseases with a tampon?

February 2019

On an unremarkable side street in Oakland, California, Ridhi Tariyal and Stephen Gire are trying to change how women monitor their health.

Their plan is to use blood from used tampons as a diagnostic tool. In that menstrual blood, they hope to find early markers of endometriosis and, ultimately, a variety of other disorders. The simplicity and ease of this method, should it work, will represent a big improvement over the present-day standard of care. Read the full story.

—Dayna Evans

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Happy Thanksgiving—in your nightmares!
+ Why Keith Haring’s legacy is more visible than ever, 32 years after his death.
+ Even the gentrified world of dinosaur skeleton assembly isn’t immune to scandals.
+ Pumpkins are a Thanksgiving staple—but it wasn’t always that way.
+ If I lived in a frozen wasteland, I’m pretty sure I’d be the world’s grumpiest cat too.


The Download: generative AI for video, and detecting AI text



The original startup behind Stable Diffusion has launched a generative AI for video

What’s happened: Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying styles from a text prompt or reference image.

What it does: In a demo reel posted on its website, Runway shows how the model, called Gen-1, can turn people on a street into claymation puppets, and books stacked on a table into a cityscape at night. Other recent text-to-video models can generate very short video clips from scratch, but because Gen-1 adapts existing footage it can produce much longer videos.

Why it matters: Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made, and Runway hopes Gen-1 will have a similar effect on generated videos. Read the full story.

—Will Douglas Heaven

Why detecting AI-generated text is so difficult (and what to do about it)

Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get too excited.


This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any ways to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable. OpenAI says its AI text detector correctly identifies 26% of AI-written text as “likely AI-written.” 

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. AI-generated text is hard to detect because the whole point of AI language models is to generate fluent, human-seeming text by mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. New AI language models are more powerful and better at generating even more fluent language, which quickly makes our existing detection tool kit outdated. 

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
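OpenAI hasn’t published the details of its detector, but the basic shape of any such system—a classifier trained on labeled examples of AI-written and human-written text—can be sketched simply. The toy model below (a smoothed bag-of-words classifier, vastly simpler than the language-model-based detector OpenAI describes; the corpora and function names are hypothetical) illustrates the idea:

```python
from collections import Counter
import math

def train(texts):
    """Build a word-frequency model for one class of texts."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return counts, sum(counts.values())

def score(text, model, vocab):
    """Add-one smoothed log-likelihood of the text under a class model."""
    counts, total = model
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.lower().split())

def classify(text, ai_model, human_model, vocab):
    """Label the text with whichever class model assigns it higher likelihood."""
    return "ai" if score(text, ai_model, vocab) > score(text, human_model, vocab) else "human"
```

Real detectors face exactly the failure mode Abdul-Mageed describes: as generated text becomes statistically closer to human text, the two class models converge and the classifier’s edge disappears.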

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
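The Maryland scheme works roughly like this: at each generation step, the preceding token seeds a pseudorandom split of the vocabulary into a “green” and a “red” half, and the watermarked model nudges its output toward green tokens. Detection then just counts how many tokens fall in their greenlist—unwatermarked text lands near 50%, watermarked text far above it. A minimal sketch (toy token IDs, with Python’s `random` standing in for the paper’s hashing scheme):

```python
import random

def green_fraction(token_ids, vocab_size=1000, green_ratio=0.5):
    """Fraction of tokens that fall in the greenlist seeded by their predecessor.
    Watermarked generators prefer green tokens, so watermarked sequences score
    well above green_ratio; ordinary text hovers around it."""
    hits = 0
    for prev, cur in zip(token_ids, token_ids[1:]):
        rng = random.Random(prev)  # the previous token deterministically seeds the split
        greenlist = set(rng.sample(range(vocab_size), int(vocab_size * green_ratio)))
        hits += cur in greenlist
    return hits / max(len(token_ids) - 1, 1)
```

Because the split is recomputable from the tokens alone, anyone with the seeding scheme can check a text—no access to the generating model required, which is what makes the near-certain detection the researchers describe possible.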

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.

The original startup behind Stable Diffusion has launched a generative AI for video

Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.  

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. 

But the two companies no longer collaborate. Getty Images is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)   

Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”

Continue Reading
