

The Download: cattle’s deadly tick-borne disease, and molten salt batteries




This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

A new tick-borne disease is killing cattle in the US

In the spring of 2021, Cynthia and John Grano, who own a cattle operation in Culpeper County, Virginia, started noticing some of their cows slowing down and acting “spacey.” They figured the animals were suffering from a common infectious disease that causes anemia in cattle. But their veterinarian had warned them that another disease carried by a parasite was spreading rapidly in the area.

After a third cow died, the Granos decided to test its blood. Sure enough, the test came back positive for the disease: theileria. And with no treatment available, the cows kept dying.

Cattle owners like the Granos are not alone. Livestock producers around the US are confronting this new and unfamiliar disease without much information. Researchers still don’t know how the theileria outbreak will unfold, even as the disease quickly spreads west across the country. If states can’t get the disease under control, then nationwide production losses from sick cows could significantly damage both individual operations and the entire industry. Read the full story.

—Britta Lokting

Super-hot salt could be coming to a battery near you

The world is building more capacity for renewables, especially solar and wind power that come and go with the weather. But for renewables to make a real difference, we need better options for storing energy. That’s where batteries come in. And handily, there’s a wave of alternative chemistries slowly percolating into the growing energy storage market.

Some of these new players could eventually be cheaper (and in various ways, better) than the industry-standard lithium-ion batteries. Among the most promising is molten salt technology, which Ambri, a Boston-area startup, is convinced could be up to 50% cheaper over its lifetime than an equivalent lithium-ion system.

But, like its rivals experimenting with other forms of energy storage, Ambri is facing real barriers to adoption, with scaling presenting the main, ever-present hurdle. Read the full story.

—Casey Crownhart

Casey’s story is from The Spark, her weekly newsletter covering battery breakthroughs and other climate news. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 What comes after Twitter?
Whatever the answer, downloading data and contacts is a smart move. (NYT $)
+ It’s unlikely, however, that you’ll be able to save everything. (Wired $)
+ A load of fired contractors aren’t planning on going quietly. (Bloomberg $)
+ One of its former data scientists is extremely worried. (Rest of World)
+ Twitter’s potential collapse could wipe out vast records of recent human history. (MIT Technology Review)

2 The ripple effects of FTX’s collapse
The crypto exchange’s poor practices are triggering fears about the industry’s future—and its employees are furious. (WSJ $)
+ Sam Bankman-Fried had an ill-advised chat with a journalist over Twitter. (Vox)
+ A class action has been filed against FTX in the US. (The Guardian)

3 The US’s bioweapon detection system is unreliable
20 years after its introduction, it still costs $80 million a year. (The Verge)

4 Telehealth sites are riddled with data trackers
They could reveal sensitive addiction information that’s ripe for abuse. (Wired $)

5 Activision Blizzard’s games are being pulled offline in China
It’s been unable to strike a deal with its Chinese distributor. (FT $)

6 Intel thinks it can catch deepfakes with 96% accuracy
By tracking the “blood flow” of video pixels to detect living humans. (VentureBeat)
+ A horrifying AI app swaps women into porn videos with a click. (MIT Technology Review)

7 We’ve ignored concrete’s carbon footprint for too long
It’s not as big a polluter as transport or energy, but it’s in urgent need of a greener overhaul. (Knowable Magazine)
+ How Joe Biden got away with passing the IRA. (The Atlantic $)
+ How hydrogen and electricity can clean up heavy industry. (MIT Technology Review)

8 Lab-grown meat is safe to eat
The FDA has greenlit lab-grown chicken—but it needs to pass other tests before it can be sold. (NBC News)
+ Will lab-grown meat reach our plates? (MIT Technology Review)

9 Why NASA’s astronauts aren’t allowed to TikTok from space 🪐
Even though their European counterparts are. (Vox)
+ NASA’s Artemis 1 launch was an oddly muted affair. (The Atlantic $)
+ Here’s everything the mission is taking with it to the moon. (IEEE Spectrum)

10 We could hitch a ride on a flying taxi one day 🚁
By the end of the decade, apparently. Let’s see. (Economist $)

Quote of the day

“Ireland really bet the farm on the future of tech . . . almost at the expense of everything else.”

—Mark O’Connell, executive chair and founder of OCO Global, a trade and investment focused advisory firm, tells the Financial Times why the tech sector’s mass job cuts will hit Ireland particularly hard.

The big story

Bright LEDs could spell the end of dark skies

A view of the Milky Way from the Grand Canyon

August 2022

Scientists have known for years that light pollution is growing and can harm both humans and wildlife. In people, increased exposure to light at night disrupts sleep cycles and has been linked to cancer and cardiovascular disease, while wildlife suffers from disrupted reproductive patterns, increased danger, and loss of stealth.

Astronomers, policymakers, and lighting professionals are all working to find ways to reduce light pollution. Many of them advocate installing light-emitting diodes, or LEDs, in outdoor fixtures such as city streetlights, mainly for their ability to direct light to a targeted area. But the high initial investment and durability of modern LEDs mean cities need to get the transition right the first time or potentially face decades of consequences. Read the full story.

—Shel Evergreen

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ A luxurious train journey looks like the perfect way to unwind to me.
+ Nothing can replace the joy of picking out a great read from a bookstore.
+ It’s never too early to start planning for your next great adventure.
+ Olive oil? Good. Cake? Good. An olive oil cake?! GOOD!
+ The oldest known sentence written in the first alphabet is entertainingly domestic.


The Download: generative AI for video, and detecting AI text



The original startup behind Stable Diffusion has launched a generative AI for video

What’s happened: Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying styles from a text prompt or reference image.

What it does: In a demo reel posted on its website, Runway shows how the model, called Gen-1, can turn people on a street into claymation puppets, and books stacked on a table into a cityscape at night. Other recent text-to-video models can generate very short video clips from scratch, but because Gen-1 adapts existing footage it can produce much longer videos.

Why it matters: Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made, and Runway hopes Gen-1 will have a similar effect on generated videos. Read the full story.

—Will Douglas Heaven

Why detecting AI-generated text is so difficult (and what to do about it)

Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get too excited.


This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable: OpenAI says its AI text detector correctly identifies only 26% of AI-written text as “likely AI-written.”

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. It’s really hard to detect AI-generated text because the whole point of AI language models is to generate fluent, human-seeming text: the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. New AI language models are more powerful and better at generating even more fluent language, which quickly makes our existing detection tool kit outdated. 

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
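The train-on-labeled-examples idea described above can be sketched in miniature. The snippet below is purely illustrative: OpenAI’s real detector is a fine-tuned language model, not word counts, and the `train` and `score` functions here are hypothetical names for a toy frequency-based classifier.

```python
from collections import Counter

def train(samples: list[str]) -> Counter:
    """Build word counts from a list of labeled example texts."""
    counts = Counter()
    for text in samples:
        counts.update(text.lower().split())
    return counts

def score(text: str, ai_counts: Counter, human_counts: Counter) -> float:
    """Positive score leans AI-written, negative leans human-written."""
    ai_total = sum(ai_counts.values()) or 1
    human_total = sum(human_counts.values()) or 1
    s = 0.0
    for word in text.lower().split():
        # Laplace-smoothed relative frequency of the word in each corpus
        p_ai = (ai_counts[word] + 1) / ai_total
        p_human = (human_counts[word] + 1) / human_total
        s += 1 if p_ai > p_human else -1 if p_human > p_ai else 0
    return s
```

The same two-step shape (collect labeled examples, then score new text against both classes) underlies far more sophisticated detectors; the hard part, as the researchers quoted here note, is that the “AI class” keeps changing as models improve.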

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
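The “secret signal” idea can be made concrete with a toy sketch. This is an assumption-laden simplification of the general green-list approach, not the Maryland team’s actual code: a watermarking generator hashes each previous token to split the vocabulary into a “green” half and a “red” half, then preferentially samples green tokens; a detector that knows the hash rule counts how often tokens land on their context’s green list.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green or red list based on `prev_token`."""
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is green for any context

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that landed on their context's green list."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Ordinary human text should score near 0.5, because each token has an even chance of being green; text from a generator that biases its sampling toward green tokens scores well above that, which is what lets the detector flag it with high statistical confidence.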

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.

The original startup behind Stable Diffusion has launched a generative AI for video
Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.  

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. 

But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)   

Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO and cofounder Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”

