In 1957, Yang and Tsung-Dao Lee, a fellow Chinese graduate of the University of Chicago, won the Nobel Prize for proposing that when some elementary particles decay, they do so in a way that distinguishes left from right. They were the first Chinese laureates. Speaking at the Nobel banquet, Yang noted that the prize had first been awarded in 1901, the same year as the Boxer Protocol. “As I stand here today and tell you about these, I am heavy with an awareness of the fact that I am in more than one sense a product of both the Chinese and Western cultures, in harmony and in conflict,” he said.
Yang became a US citizen in 1964 and moved to Stony Brook University on Long Island in 1966 as the founding director of its Institute for Theoretical Physics, which was later named after him. As the relationship between the US and China began to thaw, Yang visited his homeland in 1971—his first trip in a quarter of a century. A lot had changed. His father’s health was failing. The Cultural Revolution was raging, and both Western science and Chinese tradition had been deemed heresy. Many of Yang’s former colleagues, including Huang and Deng, were persecuted and forced to perform hard labor. The Nobel laureate, on the other hand, was received like a foreign dignitary. He met with officials at the highest levels of the Chinese government and advocated for the importance of basic research.
In the years that followed, Yang visited China regularly. At first, his trips drew attention from the FBI, which saw exchanges with Chinese scientists as suspect. But by the late 1970s, hostilities had waned. Mao Zedong was dead. The Cultural Revolution was over. Beijing adopted reforms and opening-up policies. Chinese students could go abroad for study. Yang helped raise funding for Chinese scholars to come to the US and for international experts to travel to conferences in China, where he also helped establish new research centers. When Deng Jiaxian died in 1986, Yang wrote an emotional eulogy for his friend, who had devoted his life to China’s nuclear defense. It concluded with a song from 1906, one of his father’s favorites: “[T]he sons of China, they hold the sky aloft with a single hand … The crimson never fades from their blood spilled in the sand.”
Yang retired from Stony Brook in 1999 and moved back to China a few years later to teach freshman physics at Tsinghua. In 2015, he renounced his US citizenship and became a citizen of the People’s Republic of China. In an essay remembering his father, Yang recounted his earlier decision to emigrate. He wrote, “I know that until his final days, in a corner of his heart, my father never forgave me for abandoning my homeland.”
In 2007, when he was 85 years old, Yang stopped by our hometown on an autumn day and gave a talk at my university. My roommates and I waited outside the venue for hours in advance, earning precious seats in the packed auditorium. He took the stage to thunderous applause and delivered a presentation in English about his Nobel-winning work. I was a little perplexed by his choice of language. One of my roommates muttered that perhaps Yang thought himself too good to speak in his mother tongue. We listened attentively nevertheless, grateful to be in the same room as the great scientist.
A college junior and physics major, I was preparing to apply to graduate school in the US. I’d been raised with the notion that the best of China would leave China. Two years after hearing Yang in person, I too enrolled at the University of Chicago. I received my PhD in 2015 and stayed in the US for postdoctoral research.
Months before I bid farewell to my homeland, the central government launched its flagship overseas recruitment program, the Thousand Talents Plan, encouraging scientists and tech entrepreneurs to move to China with the promise of generous personal compensation and robust research funding. In the decade since, scores of similar programs have sprung up. Some, like Thousand Talents, are supported by the central government. Others are financed by local municipalities.
Beijing’s aggressive pursuit of foreign-trained talent is an indicator of the country’s new wealth and technological ambition. Though most of these programs are not exclusive to people of Chinese origin, the promotional materials routinely appeal to sentiments of national belonging, calling on the Chinese diaspora to come home. Bold red Chinese characters headlined the web page for the Thousand Talents Plan: “The motherland needs you. The motherland welcomes you. The motherland places her hope in you.”
These days, though, the website isn’t accessible. Since 2020, mentions of the Thousand Talents Plan have largely disappeared from the Chinese internet. Though the program continues, its name is censored on search engines and forbidden in official documents in China. Since the final years of the Obama administration, the Chinese government’s overseas recruitment has come under intensifying scrutiny from US law enforcement. In 2018, the Justice Department launched the China Initiative, a program intended to combat economic espionage, with a focus on academic exchange between the two countries. The US government has also placed various restrictions on Chinese students, shortening their visas and denying access to facilities in disciplines deemed “sensitive.”
There are real problems of illicit behavior in Chinese talent programs. Earlier this year, a chemist associated with Thousand Talents was convicted in Tennessee of stealing trade secrets for BPA-free beverage can liners. A hospital researcher in Ohio pleaded guilty to stealing designs for exosome isolation used in medical diagnosis. Some US-based scientists failed to disclose additional income from China in federal grant proposals or on tax returns. All of these are cases of individual greed or negligence. Yet the FBI considers them part of a “China threat” that demands a “whole-of-society” response.
The Biden administration is reportedly considering changes to the China Initiative, which many science associations and civil rights groups have criticized as “racial profiling.” But no official announcements have been made. New cases have opened under Biden; restrictions on Chinese students remain in effect.
Seen from China, the sanctions, prosecutions, and export controls imposed by the US look like continuations of foreign “bullying.” What has changed in the past 120 years is China’s status. It is now not a crumbling empire but a rising superpower. Policymakers in both countries use similar techno-nationalistic language to describe science as a tool of national greatness and scientists as strategic assets in geopolitics. Both governments are pursuing military use of technologies like quantum computing and artificial intelligence.
“We do not seek conflict, but we welcome stiff competition,” National Security Advisor Jake Sullivan said at the Alaska summit. Yang Jiechi responded by arguing that past confrontations between the two countries had only damaged the US, while China pulled through.
Much of the Chinese public relishes the prospect of competing against the US. Take a popular saying of Mao’s: “Those who fall behind will get beaten up!” The expression originated from a speech by Joseph Stalin, who stressed the importance of industrialization for the Soviet Union. For the Chinese public, largely unaware of its origins, it evokes the recent past, when a weak China was plundered by foreigners. When I was little, my mother often repeated the expression at home, distilling a century of national humiliation into a personal motivation for excellence. It was only later, in adulthood, that I began to question the underlying logic: Is a competition between nations meaningful? By what metric, and to what end?
The original startup behind Stable Diffusion has launched a generative AI for video
Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.
In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon.
But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.
Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway cofounder and CEO Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”
Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.
Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.
“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”
When my dad was sick, I started Googling grief. Then I couldn’t escape it.
I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless.
I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?
I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss.
I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us?
I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable.
In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.”
Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web.
The Download: trapped by grief algorithms, and image AI privacy issues
—Tate Ryan-Mosley, senior tech policy reporter
I’ve always been a super-Googler, coping with uncertainty by trying to learn as much as I can about whatever might be coming. That included my father’s throat cancer.
I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, intentionally and unintentionally consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself from what the algorithms were serving me. I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? Read the full story.
AI models spit out photos of real people and copyrighted images
The news: Image generation models can be prompted to produce identifiable photos of real people, medical images, and copyrighted work by artists, according to new research.
How they did it: Researchers prompted Stable Diffusion and Google’s Imagen with captions for images, such as a person’s name, many times over. Then they analyzed whether any of the generated images matched originals in the models’ training data. The group managed to extract over 100 replicas of images from the AI’s training set.
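The matching step can be illustrated with a minimal sketch: treat each image as a flat list of grayscale pixel values and flag a generated image as a replica when its distance to some training image falls below a threshold. All names, the toy data, and the root-mean-square distance measure here are assumptions for illustration; the actual study used far larger images and a more sophisticated similarity metric.

```python
import math

def distance(img_a, img_b):
    """Root-mean-square pixel difference between two equal-size images."""
    assert len(img_a) == len(img_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a))

def find_replicas(generated, training_set, threshold=10.0):
    """Return indices of generated images that near-duplicate any training image."""
    replicas = []
    for i, gen in enumerate(generated):
        if any(distance(gen, train) <= threshold for train in training_set):
            replicas.append(i)
    return replicas

# Toy data: one "memorized" training image and one unrelated one.
train = [[10, 20, 30, 40], [200, 210, 220, 230]]
gen = [[11, 19, 31, 39],    # close to train[0] -> flagged as a replica
       [90, 95, 100, 105]]  # not close to anything
print(find_replicas(gen, train))  # [0]
```

Running the same comparison over thousands of generations per caption, as the researchers did, is what surfaces the handful of memorized training images.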
Why it matters: The finding could strengthen the case for artists who are currently suing AI companies for copyright violations, and could threaten the privacy of the people whose photos appear in training data. It could also have implications for startups wanting to use generative AI models in health care, as it shows that these systems risk leaking sensitive private information. Read the full story.