The Download: mRNA vaccines, and batteries’ breakout year

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What’s next for mRNA vaccines

As the covid pandemic began, we were warned that wearing face coverings, disinfecting everything we touched, and keeping away from other people were some of the only ways we could protect ourselves from the potentially fatal disease.

Thankfully, a more effective form of protection was in the works. Scientists were developing new vaccines at rapid speed: sequencing the virus behind covid in January, and starting clinical trials of vaccines using messenger RNA in March. Vaccination efforts took off around the world by the end of 2020.

As things stand today, over 670 million doses of these vaccines have been delivered in the US. But while the first approved mRNA vaccines are for covid, similar vaccines are being explored for a whole host of other infectious diseases, including malaria, HIV, tuberculosis, and Zika—and they could even help to treat cancer. Read the full story.

—Jessica Hamzelou

Why 2023 is a breakout year for batteries

If you stop to think about it for long enough, batteries start to sound a bit like magic. Seriously, tiny chemical factories that we carry around to store energy and release it when we need it, over and over again? Wild.

But magic aside, batteries are set for a starring role in climate action, both in powering EVs and in storing electricity generated by wind turbines and solar panels. There are significant challenges in making them cheaper and more efficient, but 2023 could be the year when some dramatically different approaches to batteries make real progress. Read the full story.

—Casey Crownhart

Casey’s story is from The Spark, our weekly newsletter delving into batteries, climate and energy technology breakthroughs. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Chinese researchers are claiming to have broken encryption
If they’re right, it’s a significant turning point in the history of quantum computers. (FT $)
+ The tricky legality of police hacking encryption to catch criminals. (Wired $)
+ What are quantum-resistant algorithms? (MIT Technology Review)

2 We’re not monitoring covid like we used to
But the virus is still killing thousands of people each week. (Economist $)
+ The new XBB.1.5 sub-variant is rapidly spreading across the US. (CNN)
+ The Chinese government’s covid death toll is being questioned. (BBC)

3 Coinbase has agreed to pay US regulators $50 million
The crypto exchange is alleged to have violated anti-money laundering laws. (The Verge)

4 Amazon is laying off 18,000 workers 
It’s the largest number of people let go by a single tech company in recent months. (WSJ $)
+ Staff will have to wait two weeks to find out. (Insider $)
+ Salesforce is cutting 10% of its workforce, too. (Reuters)

5 Twitter verification is still busted
Paying $8 for a blue check doesn’t actually verify someone’s identity after all. (WP $)

6 Apple has launched a series of audiobooks narrated by AI
Sparking an instant backlash from authors and voice actors. (The Guardian)
+ NYC’s education department has banned access to ChatGPT. (Motherboard)
+ It could, however, prove helpful in spotting the early signs of Alzheimer’s. (IEEE Spectrum)
+ What’s next for AI. (MIT Technology Review)

7 EVs are unnecessarily powerful
Automakers are missing their opportunity to make the next generation of cars safer. (The Atlantic $)
+ How about a flying taxi instead? (Axios)

8 Consumer products are poorer quality these days
You can thank the rising cost of manufacturing and the era of fast fashion. (Vox)

9 They don’t make MP3 blogs like they used to
TikTok has proved a poor substitute, and can’t fill the void they’ve left. (New Yorker $)

10 Shitposting has finally reached LinkedIn
That said, it’s still more authentic than some of the platform’s wildest posts. (Vice)

Quote of the day

“Put me there, please. That sounds like a delightful environment to live in.”

—Danielle Venne, a musician and electric vehicle sound designer, reflects to The Guardian on how much quieter urban life will become once EVs are the predominant mode of transport.

The big story

The great chip crisis threatens the promise of Moore’s Law

June 2021

A year into the covid-19 pandemic, Apple held an event to show off its custom-designed M1 chip, which packed 16 billion transistors onto a microprocessor the size of a large postage stamp. It was a triumph for Moore’s Law, the observation turned prophecy that chipmakers can double the number of transistors on a chip every few years.
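That doubling cadence is easy to sanity-check with a little arithmetic. Here is a minimal back-of-the-envelope sketch, assuming the classic two-year doubling period and taking Intel’s 4004 of 1971, with roughly 2,300 transistors, as the baseline:

```python
# Back-of-the-envelope check of Moore's Law against the M1.
# Assumptions: a two-year doubling period, and the Intel 4004
# (1971, ~2,300 transistors) as the starting point.
baseline_transistors = 2_300
years_elapsed = 2020 - 1971
predicted = baseline_transistors * 2 ** (years_elapsed / 2)
print(f"{predicted:.1e}")  # ~5.5e10: the same order of magnitude
                           # as the M1's 1.6e10 transistors
```

Nearly five decades of doubling lands within a factor of a few of the M1’s actual transistor count, which is why the “law” has held the status of prophecy for so long.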

But even as Apple celebrated the M1, the world was facing an economically devastating shortage of microchips, particularly the relatively cheap ones that make many of today’s technologies possible. 

After decades of fretting about how we will carve out features as small as a few nanometers on silicon wafers, the spirit of Moore’s Law—the expectation that cheap, powerful chips will be readily available—is being threatened by something far more mundane: inflexible supply chains. Read the full story.

—Jeremy Hsu

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Hey, keep your hands off the artwork!
+ Have we finally had enough of gallery walls?
+ Here’s how trans singers are adapting to their changing voices.
+ Congratulations to Denmark, which didn’t record a single bank robbery last year.
+ Millennials fell in love with the Cheesecake Factory because of its wacky vibe.



The Download: generative AI for video, and detecting AI text

The original startup behind Stable Diffusion has launched a generative AI for video

What’s happened: Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying styles from a text prompt or reference image.

What it does: In a demo reel posted on its website, Runway shows how the model, called Gen-1, can turn people on a street into claymation puppets, and books stacked on a table into a cityscape at night. Other recent text-to-video models can generate very short video clips from scratch, but because Gen-1 adapts existing footage it can produce much longer videos.

Why it matters: Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made, and Runway hopes Gen-1 will have a similar effect on generated videos. Read the full story.

—Will Douglas Heaven

Why detecting AI-generated text is so difficult (and what to do about it)

Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get too excited.

This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable: OpenAI says its AI text detector correctly identifies just 26% of AI-written text as “likely AI-written.”

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. Detecting AI-generated text is hard because the whole point of AI language models is to generate fluent, human-seeming text, and the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. Each new generation of AI language models generates even more fluent language, which quickly makes the existing detection toolkit outdated.

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
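OpenAI hasn’t published the details, but the general recipe—train a classifier on labeled examples of human-written and AI-written text—can be illustrated with a toy sketch. The version below is emphatically not OpenAI’s system: TF-IDF features and logistic regression stand in for its fine-tuned language model, and the two labeled examples stand in for what would in practice be a corpus of millions:

```python
# Toy sketch of a binary AI-text detector. NOT OpenAI's classifier:
# TF-IDF + logistic regression stand in for a fine-tuned language
# model, and the tiny labeled corpus below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the mitochondria is the powerhouse of the cell",      # human-written
    "as an ai language model i can summarize as follows",  # AI-generated
    # ...in practice, millions of labeled examples go here
]
labels = [0, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
detector.fit(texts, labels)

# Score a new passage: estimated probability that it is AI-generated.
print(detector.predict_proba(["certainly, here is an essay"])[0][1])
```

The weakness of any such classifier is baked into its training setup: it can only learn the statistical quirks of the models it was trained against, which is exactly why it falls behind as new models arrive.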

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
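The core idea is simple enough to sketch in a few lines. This is a minimal illustration in the spirit of the Maryland scheme, not the team’s implementation: a hash of the previous token deterministically splits the vocabulary into a “green” and a “red” half, generation is nudged toward green tokens, and a detector that knows the hashing scheme simply counts how many tokens landed in their green list (the eight-word vocabulary and the hard-coded logit boost are invented for the example):

```python
# Minimal sketch of green-list watermarking for language models,
# in the spirit of the University of Maryland scheme. Not their
# implementation: the toy vocabulary and the fixed logit boost
# below are invented for illustration.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "quickly", "slowly"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, logits: dict) -> str:
    """Generation step: boost green-token scores, then pick greedily."""
    greens = green_list(prev_token)
    boosted = {t: s + (2.0 if t in greens else 0.0) for t, s in logits.items()}
    return max(boosted, key=boosted.get)

def green_fraction(tokens: list) -> float:
    """Detection: what fraction of tokens fell in their green list?"""
    hits = sum(tok in green_list(prev)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text hovers around a green fraction of 0.5; watermarked
# text sits well above it, and over thousands of tokens a simple
# statistical test makes the call with near-certainty.
```

Notably, detection needs only the hashing scheme, not access to the model itself.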

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.

The original startup behind Stable Diffusion has launched a generative AI for video

Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.  

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. 

But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)   

Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”
