To test this possibility, the researchers trained a deep-learning model to predict patients’ self-reported pain levels from their knee x-rays. If the resulting model had terrible accuracy, this would suggest that self-reported pain is rather arbitrary. But if the model had really good accuracy, this would provide evidence that self-reported pain is in fact correlated with radiographic markers in the x-ray.
After running several experiments, including some designed to rule out confounding factors, the researchers found that the model was much more accurate than KLG at predicting self-reported pain levels for both white and Black patients, but especially for Black patients. It reduced the racial disparity at each pain level by nearly half.
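The disparity being measured here can be sketched in a few lines: among patients assigned the same severity score, how much more pain does one group report than the other? The sketch below is a hypothetical illustration with made-up numbers, not the study’s actual data, code, or results; the `disparity` function and the toy `patients` records are assumptions for demonstration only.

```python
from statistics import mean

def disparity(patients, score_key):
    """Average gap in mean self-reported pain between Black and white
    patients who were assigned the same severity score."""
    gaps = []
    for level in sorted({p[score_key] for p in patients}):
        group = [p for p in patients if p[score_key] == level]
        black = [p["pain"] for p in group if p["race"] == "Black"]
        white = [p["pain"] for p in group if p["race"] == "white"]
        if black and white:  # need both groups at this level to compare
            gaps.append(mean(black) - mean(white))
    return mean(gaps)

# Hypothetical toy records (not the study's data): "pain" is the
# self-reported score, "klg" and "model" are the severity grades
# assigned by each method.
patients = [
    {"race": "white", "pain": 30, "klg": 1, "model": 1},
    {"race": "Black", "pain": 45, "klg": 1, "model": 2},
    {"race": "white", "pain": 50, "klg": 2, "model": 2},
    {"race": "Black", "pain": 60, "klg": 2, "model": 2},
    {"race": "white", "pain": 70, "klg": 3, "model": 3},
    {"race": "Black", "pain": 80, "klg": 3, "model": 3},
]

klg_gap = disparity(patients, "klg")      # unexplained pain gap under KLG
model_gap = disparity(patients, "model")  # smaller gap under the model
```

In this toy setup the learned scores absorb more of the pain that KLG leaves unexplained, so the between-group gap at matched severity levels shrinks — the shape of the result the study reported, not its numbers.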
The goal isn’t necessarily to start using this algorithm in a clinical setting. But by outperforming the KLG methodology, it revealed that the standard way of measuring pain is flawed, at a much greater cost to Black patients. This should prompt the medical community to investigate which radiographic markers the algorithm might be seeing, and to update its scoring methodology.
“It actually highlights a really exciting part of where these kinds of algorithms can fit into the process of medical discovery,” says Obermeyer. “It tells us if there’s something here that’s worth looking at that we don’t understand. It sets the stage for humans to then step in and, using these algorithms as tools, try to figure out what’s going on.”
“The cool thing about this paper is it is thinking about things from a completely different perspective,” says Irene Chen, a researcher at MIT who studies how machine learning can reduce health care inequities and was not involved in the paper. Instead of training the algorithm on well-established expert knowledge, she says, the researchers chose to treat the patient’s self-assessment as the ground truth. In doing so, they uncovered important gaps in what the medical field usually considers the more “objective” pain measure.
“That was exactly the secret,” agrees Obermeyer. If algorithms are only ever trained to match expert performance, he says, they will simply perpetuate existing gaps and inequities. “This study is a glimpse of a more general pipeline that we are increasingly able to use in medicine for generating new knowledge.”
The Advanced Research Projects Agency–Energy (ARPA-E) funds high-risk, high-reward energy research projects, and each year the agency hosts a summit where funding recipients and other energy researchers and companies can gather to talk about what’s new in the field.
As I listened to presentations, met with researchers, and—especially—wandered around the showcase, I often had a vague feeling of whiplash. Standing at one booth trying to wrap my head around how we might measure carbon stored by plants, I would look over and see another group focused on making nuclear fusion a more practical way to power the world.
There are plenty of tried-and-true solutions that can begin to address climate change right now: wind and solar power are being deployed at massive scales, electric vehicles are coming to the mainstream, and new technologies are helping companies make even fossil-fuel production less polluting. But as we knock out the easy wins, we’ll also need to get creative to tackle harder-to-solve sectors and reach net-zero emissions. Here are a few intriguing projects from the ARPA-E showcase that caught my eye.
Vaporized rocks
“I heard you have rocks here!” I exclaimed as I approached the Quaise Energy station.
Quaise’s booth featured a screen flashing through some fast facts and demonstration videos. And sure enough, laid out on the table were two slabs of rock. They looked a bit worse for wear, each sporting a hole about the size of a quarter in the middle, singed around the edges.
These rocks earned their scorch marks in service of a big goal: making geothermal power possible anywhere. Today, the high temperatures needed to generate electricity using heat from the Earth are only accessible close to the surface in certain places on the planet, like Iceland or the western US.
Geothermal power could in theory be deployed anywhere, if we could drill deep enough. Getting there won’t be easy, though, and could require drilling 20 kilometers (12 miles) beneath the surface. That’s deeper than any oil and gas drilling done today.
Rather than grinding through layers of granite with conventional drilling technology, Quaise plans to get through the more obstinate parts of the Earth’s crust by using high-powered millimeter waves to vaporize rock. (It’s sort of like lasers, but not quite.)
Annika Hauptvogel, head of technology and innovation management at Siemens, describes the industrial metaverse as “immersive, making users feel as if they’re in a real environment; collaborative in real time; open enough for different applications to seamlessly interact; and trusted by the individuals and businesses that participate”—far more than simply a digital world.
The industrial metaverse will revolutionize the way work is done, but it will also unlock significant new value for businesses and societies. By allowing businesses to model, prototype, and test dozens, hundreds, or even millions of design iterations in real time, in an immersive, physics-based environment, before committing physical and human resources to a project, industrial metaverse tools will usher in a new era of solving real-world problems digitally.
“The real world is very messy, noisy, and sometimes hard to really understand,” says Danny Lange, senior vice president of artificial intelligence at Unity Technologies, a leading platform for creating and growing real-time 3-D content. “The idea of the industrial metaverse is to create a cleaner connection between the real world and the virtual world, because the virtual world is so much easier and cheaper to work with.”
While real-life applications of the consumer metaverse are still developing, industrial metaverse use cases are purpose-driven, well aligned with real-world problems and business imperatives. The resource efficiencies enabled by industrial metaverse solutions may increase business competitiveness while also continually driving progress toward the sustainability, resilience, decarbonization, and dematerialization goals that are essential to human flourishing.
This report explores what it will take to create the industrial metaverse, its potential impacts on business and society, the challenges ahead, and innovative use cases that will shape the future. Its key findings are as follows:
• The industrial metaverse will bring together the digital and real worlds. It will enable a constant exchange of information, data, and decisions and empower industries to solve extraordinarily complex real-world problems digitally, changing how organizations operate and unlocking significant societal benefits.
• The digital twin is a core metaverse building block. These virtual models simulate real-world objects in detail. The next generation of digital twins will be photorealistic, physics-based, AI-enabled, and linked in metaverse ecosystems.
• The industrial metaverse will transform every industry. Digital twins that exist today already illustrate the power and potential of the industrial metaverse to revolutionize design and engineering, testing, operations, and training.
Across social media, a number of creators are generating nostalgic photographs of China with the help of AI. Even though these images get some details wrong, they are realistic enough to trick and impress many of their followers.
The pictures look sophisticated in terms of definition, sharpness, saturation, and color tone. Their realism is partly down to a major mid-March update to Midjourney, the image-making artificial-intelligence program, which is better not only at generating human hands but also at simulating various photography styles.
It’s still relatively easy, even for untrained eyes, to tell that the photos are generated by an AI. But for some creators, their experiments are more about evoking a specific era than about tricking their audience. Read the full story.
—Zeyi Yang
Zeyi’s story is from China Report, his weekly newsletter giving you the inside track on tech in China. Sign up to receive it in your inbox every Tuesday.
Read more of our reporting on AI-generated images:
+ These new tools let you see for yourself how biased AI image models are. Bias and stereotyping are still huge problems for systems like DALL-E 2 and Stable Diffusion, despite companies’ attempts to fix them. Read the full story.