The Download: AI privacy risks, and cleaning up shipping

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What does GPT-3 “know” about me?

One of the biggest stories in tech this year has been the rise of large language models (LLMs). These are AI models that produce text a human might have written—sometimes so convincingly they have tricked people into thinking they are sentient.

These models’ power comes from troves of publicly available, human-created text hoovered up from the internet. If you’ve posted anything even remotely personal in English online, chances are your data is part of some of the world’s most popular LLMs.

My colleague Melissa Heikkilä, our AI reporter, recently started to wonder what data these models might have on her—and how it could be misused. A bruising experience a decade ago left her paranoid about sharing personal details online, so she put OpenAI’s GPT-3 to the test to see what it “knows” about her. Read about what she found.

How ammonia could help clean up global shipping

The news: Foul-smelling ammonia might seem like an unlikely fuel for cutting greenhouse-gas emissions. But it could play a key role in decarbonizing global shipping, providing an efficient way to store the energy needed to power large ships on long journeys.

What’s happening: The American Bureau of Shipping recently granted early-stage approval for some ammonia-powered ships and fueling infrastructure, meaning such ships could hit the seas within the next few years. While the fuel would require new engines and fueling systems, swapping it in for the fossil fuels ships burn today could make a significant dent in global carbon emissions.

What’s next: Some companies are looking even further ahead: New York–based Amogy raised nearly $50 million earlier this year to use the chemical in fuel cells that promise even greater emissions cuts. If early tests of ammonia fuel work out, these technologies could help the shipping industry significantly reduce its emissions. Read the full story.

—Casey Crownhart

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Pakistan is reeling from its devastating flooding
Poor policymaking, combined with a climate change-driven monsoon, has displaced millions of people and destroyed homes, food, and livelihoods. (Vox)
+ These images highlight the extent of the destruction. (The Guardian)
+ Residents are trying to salvage their belongings from the floodwaters. (BBC)

2 California has passed new online child safety rules 
The legislation will force websites and apps to add protective measures for under-18s. (NYT $)
+ The state also wants to punish doctors who spread health misinformation. (NYT $)

3 NASA will try to launch its Artemis rocket again on Saturday
A faulty sensor reading is believed to have caused Monday’s scrubbed launch attempt. (BBC)

4 Elon Musk has found a new tactic to try to wriggle out of buying Twitter
He’s using the recent whistleblower allegations. (FT $)
+ What you need to know about the upcoming legal fight. (WSJ $)
+ Twitter is failing to adequately tackle self-harming content. (Ars Technica)

5 Deepfakes are infiltrating the mainstream
The technology is improving by the day, and we should be worried. (WP $)
+ A horrifying new AI app swaps women into porn videos with a click. (MIT Technology Review)

6 Cyber insurance isn’t equipped to deal with cyber warfare
Insurers can’t agree on what should and shouldn’t be covered. (Wired $)

7 A program to clean polluted Nigerian wetlands worsened the problem
Ogoniland residents have been left to deal with the oil-soaked lands. (Bloomberg $)
+ The companies that caused an oil spill in California have been fined $13 million. (CNN)

8 How giant isopods got so giant
The roly-poly relative’s genes explain why it can grow to the size of a chihuahua. (Hakai Magazine)
+ The primordial coelacanth was an energy-saving expert. (New Scientist $)

9 Gen Z is really into making collages
Naturally, there’s an app for that. (The Information $)

10 Dadcore fashion has gone viral
Leaving a generation of iconic fishing fans in its wake. (Input)

Quote of the day

“I’ve definitely had days where I’ve achieved all of that, but it’s exhausting.”

—Dynasti deGouville, 22, tells the Wall Street Journal about the pressure she felt to subscribe to the #ThatGirl lifestyle of early rising, grueling exercise, and restrictive diets peddled in TikTok clips of thin, white women.

The big story

Humanity is stuck in short-term thinking. Here’s how we escape.

October 2020

Humans have evolved over millennia to grasp an ever-expanding sense of time. We have minds capable of imagining a future far off into the distance. Yet while we may have this ability, it is rarely deployed in daily life. If our descendants were to diagnose the ills of 21st-century civilization, they would observe a dangerous short-termism: a collective failure to escape the present moment and look further ahead.

The world is saturated in information, and standards of living have never been higher, but so often it’s a struggle to see beyond the next news cycle, political term, or business quarter. How to explain this contradiction? Why have we come to be so stuck in the “now”? Read the full story.

—Richard Fisher

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ This dog slide looks like an infinite delight.
+ Three hours of underground 90s hip hop is guaranteed to put you in a good mood.
+ After a two-year break, the World Gravy Wrestling Championship is back! 
+ Electro icon Gary Numan has some interesting words of wisdom.
+ The Perseverance Rover is digging around for evidence of past life on Mars.



Meta’s new AI can turn text prompts into videos

Although the results are rather crude, the system offers an early glimpse of what’s coming next for generative artificial intelligence, and it is the obvious next step from the text-to-image AI systems that have caused huge excitement this year.

Meta’s announcement of Make-A-Video, which is not yet being made available to the public, will likely prompt other AI labs to release their own versions. It also raises some big ethical questions. 

In the last month alone, AI lab OpenAI has made its latest text-to-image system, DALL-E, available to everyone, and AI startup Stability AI launched Stable Diffusion, an open-source text-to-image system.

But text-to-video AI comes with even greater challenges. For one, these models need vast amounts of computing power. They are an even bigger computational lift than large text-to-image models, which train on millions of images, because generating just one short video requires hundreds of frames. That means it’s really only large tech companies that can afford to build these systems for the foreseeable future. They’re also trickier to train, because there are no large-scale data sets of high-quality videos paired with text.
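
To make the computational gap concrete, here is a quick back-of-the-envelope calculation; the clip length and frame rate are illustrative assumptions, not figures from Meta’s paper:

```python
# Rough arithmetic: why one generated video costs far more than one image.
# The frame rate and clip length below are assumed for illustration only.
fps = 24          # a common video frame rate
seconds = 5       # a plausible length for a short generated clip
frames = fps * seconds
print(f"A {seconds}-second clip at {fps} fps is {frames} frames,")
print(f"roughly {frames}x the output of a single text-to-image sample.")
```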

To work around this, Meta combined data from three open-source image and video data sets to train its model. Standard text-image data sets of labeled still images helped the AI learn what objects are called and what they look like. And a database of videos helped it learn how those objects are supposed to move in the world. The combination of the two approaches helped Make-A-Video, which is described in a non-peer-reviewed paper published today, generate videos from text at scale.
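
As a rough illustration of that two-source recipe, here is a toy sketch in Python (PyTorch): one training objective uses labeled stills to tie text to appearance, and another uses unlabeled clips to learn frame-to-frame motion. Every name, shape, and loss here is a made-up placeholder; Meta’s actual system is a far larger model whose code is not public.

```python
# Toy sketch of the two-source training idea: labeled stills teach appearance,
# unlabeled clips teach motion. Not Meta's actual Make-A-Video code.
import torch
import torch.nn as nn

class TinyVideoModel(nn.Module):
    def __init__(self, text_dim=16, frame_pixels=64):
        super().__init__()
        self.text_to_frame = nn.Linear(text_dim, frame_pixels)       # appearance pathway
        self.frame_dynamics = nn.Linear(frame_pixels, frame_pixels)  # motion pathway

    def forward_image(self, text_emb):
        # Objective 1 (text-image pairs): predict a still frame from text.
        return self.text_to_frame(text_emb)

    def forward_video(self, first_frame, steps=3):
        # Objective 2 (unlabeled video): predict how a frame evolves over time.
        frames = [first_frame]
        for _ in range(steps):
            frames.append(self.frame_dynamics(frames[-1]))
        return torch.stack(frames, dim=1)

model = TinyVideoModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for the two kinds of training data.
text_emb = torch.randn(8, 16)      # "captions" for labeled stills
target_frame = torch.randn(8, 64)  # the labeled stills themselves
clip = torch.randn(8, 4, 64)       # 4-frame unlabeled clips

# One combined step: appearance loss from stills, motion loss from video.
loss = loss_fn(model.forward_image(text_emb), target_frame)
loss = loss + loss_fn(model.forward_video(clip[:, 0]), clip)
loss.backward()
opt.step()
```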

Tanmay Gupta, a computer vision research scientist at the Allen Institute for Artificial Intelligence, says Meta’s results are promising. The videos it’s shared show that the model can capture 3D shapes as the camera rotates. The model also has some notion of depth and understanding of lighting. Gupta says some details and movements are decently done and convincing. 

However, “there’s plenty of room for the research community to improve on, especially if these systems are to be used for video editing and professional content creation,” he adds. In particular, it’s still tough to model complex interactions between objects. 

In the video generated by the prompt “An artist’s brush painting on a canvas,” the brush moves over the canvas, but strokes on the canvas aren’t realistic. “I would love to see these models succeed at generating a sequence of interactions, such as ‘The man picks up a book from the shelf, puts on his glasses, and sits down to read it while drinking a cup of coffee,’” Gupta says. 

How AI is helping birth digital humans that look and sound just like us

Jennifer: And the team has also been exploring how these digital twins can be useful beyond the 2D world of a video conference. 

Greg Cross: I guess the… the big, you know, shift that’s coming right at the moment is the move from the 2D world of the internet into the 3D world of the metaverse. And that’s something we’ve always thought about and we’ve always been preparing for. I mean, Jack exists in full 3D, you know, Jack exists as a full body. So today we’re building augmented reality prototypes of Jack walking around on a golf course, and we can go and ask Jack, how should we play this hole? So these are some of the things that we are starting to imagine in terms of the way in which digital people, the way in which digital celebrities, interact with us as we move into the 3D world.

Jennifer: And he thinks this technology can go a lot further.

Greg Cross: Healthcare and education are two amazing applications of this type of technology. And it’s amazing because we don’t have enough real people to deliver healthcare and education in the real world. So, I mean, you can imagine how you can use a digital workforce to augment and extend the skills and capabilities of real people, not replace them.

Jennifer: This episode was produced by Anthony Green with help from Emma Cillekens. It was edited by me and Mat Honan, mixed by Garret Lang… with original music from Jacob Gorski.   

If you have an idea for a story or something you’d like to hear, please drop a note to podcasts at technology review dot com.

Thanks for listening… I’m Jennifer Strong.

A bionic pancreas could solve one of the biggest challenges of diabetes

The bionic pancreas, a credit-card-sized device called an iLet, monitors a person’s blood sugar levels around the clock and automatically delivers insulin when needed through a tiny cannula, a thin tube inserted into the body. It is worn constantly, generally on the abdomen. The device determines all insulin doses based on the user’s weight, and the user can’t adjust the doses.

A Harvard Medical School team has submitted its findings from the study, described in the New England Journal of Medicine, to the FDA in the hopes of eventually bringing the product to market in the US. While a team from Boston University and Massachusetts General Hospital first tested the bionic pancreas in 2010, this is the most extensive trial undertaken so far.

The Harvard team, working with other universities, gave bionic pancreas devices to 219 people with type 1 diabetes who had used insulin for at least a year, and they wore the devices for 13 weeks. The team compared their blood sugar levels with those of 107 people with diabetes who used other insulin delivery methods, including injections and insulin pumps, over the same period.

The bionic pancreas group’s glycated hemoglobin levels, a standard measure of average blood sugar, fell from 7.9% to 7.3%, while the standard-care group’s held steady at 7.7%. The American Diabetes Association recommends a goal of less than 7.0%, but only around 20% of people with type 1 diabetes meet it, according to a 2019 study.

Other types of artificial pancreas exist, but they typically require the user to input information before they will deliver insulin, such as the amount of carbohydrates in their last meal. The iLet instead needs only the user’s weight and the type of meal being eaten (breakfast, lunch, or dinner), entered via the device’s interface; an adaptive learning algorithm then determines and delivers insulin doses automatically.
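
For intuition only, here is a toy sketch of what “weight-based starting doses plus meal announcements plus adaptation” could look like. Every number, name, and update rule below is a hypothetical placeholder, not the iLet’s actual proprietary algorithm, and nothing here is medical guidance.

```python
# Toy illustration of weight-based dosing with per-meal adaptation.
# All constants are invented placeholders; this is NOT a medical algorithm.

MEAL_FACTOR = {"breakfast": 1.0, "lunch": 1.0, "dinner": 1.0}  # adapted over time

def meal_dose(weight_kg: float, meal: str) -> float:
    """Dose scales with body weight, then by a learned per-meal factor."""
    base_units = 0.05 * weight_kg  # hypothetical weight-based starting point
    return base_units * MEAL_FACTOR[meal]

def adapt(meal: str, post_meal_glucose: float, target: float = 120.0) -> None:
    """Nudge the meal's factor toward whatever kept glucose near target."""
    error = (post_meal_glucose - target) / target
    MEAL_FACTOR[meal] *= 1.0 + 0.1 * error  # small, bounded learning step

# The user only announces the meal type; dosing adapts from outcomes.
print(meal_dose(70.0, "breakfast"))  # initial dose: 3.5 units
adapt("breakfast", post_meal_glucose=160.0)
print(meal_dose(70.0, "breakfast"))  # slightly larger after a high reading
```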
