Yes, blame climate change.
Human-driven global warming fueled the heat wave that likely killed hundreds of people last week across the US Pacific Northwest and Canada.
The massive buildup of greenhouse gases in the atmosphere made the unprecedented weather event 150 times more likely, according to an analysis by World Weather Attribution. The loosely affiliated team of global scientists concluded that the extreme heat wave would have been “virtually impossible” without climate change, which has already warmed the planet by about 2.2 °F (1.2 °C).
Scientists long resisted pinning any single weather event on climate change, sticking to the general point that it would make heat waves, droughts, fires, and hurricanes increasingly frequent and severe. But more satellite data records, increased computing power, and higher-resolution climate simulations have made researchers more confident about stating, often within days, that global warming substantially raised the odds of specific disasters. (See 10 Breakthrough Technologies 2020: Climate Change Attribution.)
Last week’s extreme temperatures demolished all-time heat records in cities and towns throughout the region, knocked out power to tens of thousands of homes, and put more than 2,000 people into emergency rooms for heat-related illnesses in Washington and Oregon.
So far, officials have reported more than 100 heat-linked deaths in those states, according to assorted media outlets. In addition, there were nearly 500 “sudden and unexpected deaths” in British Columbia, some 300 more than normal during the relevant five-day period.
The most likely scenario is that higher global temperatures simply exacerbated the consequences of unusual atmospheric conditions that occurred last week, when a so-called heat dome trapped hot air over a massive stretch of the region. If so, similar events could happen once or twice a decade if temperatures rise by 3.6 °F (2 °C), the researchers found.
The more troubling, if slimmer, possibility is that greenhouse-gas emissions have pushed the climate system past some unknown and little-understood threshold, where planetary warming is now triggering sharper rises in extreme temperatures than expected. That theory will require further research to assess. But it would mean that severe heat waves will exceed the levels current climate models predict, the researchers said.
“You’re not supposed to break records by four or five degrees Celsius (seven to nine degrees Fahrenheit),” Friederike Otto, co-lead of World Weather Attribution and associate director of the Environmental Change Institute at Oxford University, said in a statement. “This is such an exceptional event that we can’t rule out the possibility that we’re experiencing heat extremes today that we only expected to come at higher levels of global warming.”
Another heat wave is expected to push temperatures back into the triple digits across parts of the Northwest in the coming days.
Neuroscientists listened in on people’s brains for a week. They found order and chaos.
Ghuman, Wang, and their colleagues turned to people who were undergoing brain surgery for epilepsy. Some people with severe or otherwise untreatable epilepsy opt to have the small parts of their brain that trigger their seizures surgically removed. Before any operation, they may have electrodes implanted in their brains for a week or so. During that time, these electrodes monitor brain activity to help surgeons pinpoint where their seizures start and identify exactly which bit of brain should be removed.
The researchers recruited 20 such individuals to volunteer in their study. Each person had 10 to 15 electrodes implanted for somewhere between three and 12 days.
The pair collected recordings from the electrodes over the entire period. The volunteers were all in the hospital while they were monitored, but they still did everyday things like eating meals, talking to friends, watching TV, or reading books. “We know so little about what the brain does during these real, natural behaviors in a real-world setting,” says Ghuman.
The edge of chaos
The team found some surprising patterns in brain activity over the course of the week. Specific brain networks seemed to communicate with each other in what looked like a “dance,” with one region appearing to “listen” while the other “spoke,” say the researchers, who presented their findings at the Society for Neuroscience annual meeting in San Diego last year.
And while the volunteers’ brains seemed to pass between different states over time, they did so in a curious way. Rather than simply moving from one pattern of activity to another, their brains appeared to zip through several intermediate states, apparently at random. As the brain shifts from one semi-stable state to another, it seems to embrace chaos.
The Download: generative AI for video, and detecting AI text
The original startup behind Stable Diffusion has launched a generative AI for video
What’s happened: Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying styles from a text prompt or reference image.
What it does: In a demo reel posted on its website, Runway shows how the model, called Gen-1, can turn people on a street into claymation puppets, and books stacked on a table into a cityscape at night. Other recent text-to-video models can generate very short video clips from scratch, but because Gen-1 adapts existing footage it can produce much longer videos.
Why it matters: Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made, and Runway hopes Gen-1 will have a similar effect on generated videos. Read the full story.
—Will Douglas Heaven
Why detecting AI-generated text is so difficult (and what to do about it)
Last week, OpenAI unveiled a tool that can detect text produced by its AI system ChatGPT. But if you’re a teacher who fears the coming deluge of ChatGPT-generated essays, don’t get too excited.
This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable: OpenAI says the detector correctly flags only 26% of AI-written text as “likely AI-written.”
While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can get. We’re extremely unlikely ever to have a tool that can spot AI-generated text with 100% certainty. Detecting such text is hard because the whole point of AI language models is to generate fluent, human-seeming text by mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.
We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. Each new generation of AI language models produces even more fluent language, which quickly makes the existing detection toolkit outdated.
OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond.
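OpenAI hasn’t published the details of its detector, but the general recipe it describes — train a classifier on labeled examples of AI-written and human-written text, then ask it to label new text — can be illustrated with a toy. The sketch below is a from-scratch Naive Bayes text classifier, not OpenAI’s actual method (which is a large neural model); the training sentences, labels, and function names are all invented for illustration.

```python
import math
from collections import Counter

def train(texts, labels):
    """Count word frequencies per class (0 = human-written, 1 = AI-written)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in zip(texts, labels):
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    priors = Counter(labels)
    return counts, totals, priors, vocab

def classify(text, model):
    """Pick the class with the higher Laplace-smoothed log-probability."""
    counts, totals, priors, vocab = model
    total_docs = sum(priors.values())
    scores = {}
    for label in (0, 1):
        score = math.log(priors[label] / total_docs)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Tiny invented training set: 0 = human-written, 1 = AI-written
texts = [
    "yeah that movie was great honestly",
    "lol cant believe it",
    "as a large language model i cannot",
    "in conclusion it is important to note",
]
labels = [0, 0, 1, 1]
model = train(texts, labels)
```

The toy also hints at why the real tool’s 26% hit rate is so low: when AI text closely mimics human text, the two classes overlap and no surface feature reliably separates them, no matter how much training data the classifier sees.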
Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such.
Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used.
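The Maryland scheme works roughly like this: at each generation step, the previous token seeds a pseudorandom split of the vocabulary into a “green list” and a “red list,” the model is nudged toward green tokens, and a detector later counts how many tokens are green and computes a z-score against the fraction expected by chance. The sketch below is a simplified stand-in — it uses a hash instead of the authors’ seeded PRNG over model logits, and the function names and toy vocabulary are invented — but it shows why detection can be near-certain on watermarked text.

```python
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary placed on the green list

def is_green(prev_token, token, gamma=GAMMA):
    # The previous token seeds the partition: a token counts as "green"
    # if its hash lands in the bottom `gamma` fraction of the hash space.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < gamma

def watermark_choose(prev_token, candidates, gamma=GAMMA):
    # Stand-in for boosting green-list logits: prefer a green candidate.
    for cand in candidates:
        if is_green(prev_token, cand, gamma):
            return cand
    return candidates[0]

def detection_z(tokens, gamma=GAMMA):
    # z-score of the observed green count vs. the binomial expectation:
    # unwatermarked text hovers near 0; watermarked text scores far higher.
    n = len(tokens) - 1
    green = sum(is_green(tokens[i], tokens[i + 1], gamma) for i in range(n))
    return (green - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

# Watermarked "generation" over a toy vocabulary
vocab = ["the", "a", "cat", "dog", "runs", "sat", "on", "mat",
         "quick", "brown", "fox", "jumps", "over", "lazy", "sun", "moon"]
tokens = ["the"]
for _ in range(60):
    tokens.append(watermark_choose(tokens[-1], vocab))
```

Because nearly every transition in the generated sequence is green, its z-score is far above what unwatermarked text would produce — but only because the generator cooperated at write time, which is exactly the catch discussed below.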
The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked.
One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.