Around the world, researchers like Howes are investigating how nonvisual information defines the character of a city and affects its livability. Using methods ranging from low-tech sound walks and smell maps to data scraping, wearables, and virtual reality, they’re fighting what they see as a limiting visual bias in urban planning.
“Just being able to close your eyes for 10 minutes gives you a totally different feeling about a place,” says Oğuz Öner, an academic and musician.
Öner has spent years organizing sound walks in Istanbul where blindfolded participants describe what they hear at different spots. His research has identified locations where vegetation could be planted to dampen traffic noise or where a wave organ could be constructed to amplify the soothing sounds of the sea, something he was surprised to realize people could hardly hear, even along the waterfront.
Local officials have expressed interest in his findings, Öner says, but have not yet incorporated them into urban plans. This kind of individual feedback about the sensory environment is, however, already being put to use in Berlin, where quiet areas identified by citizens using a free mobile app have been included in the city’s latest noise action plan. Under EU law, the city is now obligated to protect these spaces against an increase in noise.
“The way quiet areas are identified is usually very top-down, either based on land use or high-level parameters like distance from highways,” explains Francesco Aletta, a research associate at University College London. “This is the first example I’m aware of where something perception-driven has become policy.”
As a member of the EU-funded Soundscape Indices project, Aletta is helping create prediction models for how people will respond to various acoustic environments by compiling recorded soundscapes, both vibrant and tranquil, into a database and then testing the neural and physiological reactions they elicit. Experts say tools like these are needed to create a practical framework for ensuring that multisensory elements are included in design criteria and planning processes for cities.
The best way to determine how people react to different sensory environments is a subject of some debate within the field. Howes and his colleagues are taking a more ethnographic approach, using observation and interviews to develop a set of best practices for good sensory design in public spaces. Other researchers are going more high-tech, using wearables to track biometric data like heart-rate variability as a proxy for emotional responses to different sensory experiences. The EU-funded GoGreen Routes project is looking to that approach as it studies how nature can be integrated into urban spaces in a way that improves both human and environmental health.
The Advanced Research Projects Agency for Energy (ARPA-E) funds high-risk, high-reward energy research projects, and each year the agency hosts a summit where funding recipients and other researchers and companies in energy can gather to talk about what’s new in the field.
As I listened to presentations, met with researchers, and—especially—wandered around the showcase, I often had a vague feeling of whiplash. Standing at one booth trying to wrap my head around how we might measure carbon stored by plants, I would look over and see another group focused on making nuclear fusion a more practical way to power the world.
There are plenty of tried-and-true solutions that can begin to address climate change right now: wind and solar power are being deployed at massive scales, electric vehicles are coming to the mainstream, and new technologies are helping companies make even fossil-fuel production less polluting. But as we knock out the easy wins, we’ll also need to get creative to tackle harder-to-solve sectors and reach net-zero emissions. Here are a few intriguing projects from the ARPA-E showcase that caught my eye.
Vaporized rocks
“I heard you have rocks here!” I exclaimed as I approached the Quaise Energy station.
Quaise’s booth featured a screen flashing through some fast facts and demonstration videos. And sure enough, laid out on the table were two slabs of rock. They looked a bit worse for wear, each sporting a hole about the size of a quarter in the middle, singed around the edges.
These rocks earned their scorch marks in service of a big goal: making geothermal power possible anywhere. Today, the high temperatures needed to generate electricity using heat from the Earth are only accessible close to the surface in certain places on the planet, like Iceland or the western US.
Geothermal power could in theory be deployed anywhere, if we could drill deep enough. Getting there won’t be easy, though, and could require drilling 20 kilometers (12 miles) beneath the surface. That’s deeper than any oil and gas drilling done today.
Rather than grinding through layers of granite with conventional drilling technology, Quaise plans to get through the more obstinate parts of the Earth’s crust by using high-powered millimeter waves to vaporize rock. (It’s sort of like lasers, but not quite.)
Annika Hauptvogel, head of technology and innovation management at Siemens, describes the industrial metaverse as “immersive, making users feel as if they’re in a real environment; collaborative in real time; open enough for different applications to seamlessly interact; and trusted by the individuals and businesses that participate”—far more than simply a digital world.
The industrial metaverse will revolutionize the way work is done, but it will also unlock significant new value for businesses and societies. By allowing businesses to model, prototype, and test dozens, hundreds, or even millions of design iterations in real time, in an immersive, physics-based environment, before committing physical and human resources to a project, industrial metaverse tools will usher in a new era of solving real-world problems digitally.
“The real world is very messy, noisy, and sometimes hard to really understand,” says Danny Lange, senior vice president of artificial intelligence at Unity Technologies, a leading platform for creating and growing real-time 3-D content. “The idea of the industrial metaverse is to create a cleaner connection between the real world and the virtual world, because the virtual world is so much easier and cheaper to work with.”
While real-life applications of the consumer metaverse are still developing, industrial metaverse use cases are purpose-driven, well aligned with real-world problems and business imperatives. The resource efficiencies enabled by industrial metaverse solutions may increase business competitiveness while also continually driving progress toward the sustainability, resilience, decarbonization, and dematerialization goals that are essential to human flourishing.
This report explores what it will take to create the industrial metaverse, its potential impacts on business and society, the challenges ahead, and innovative use cases that will shape the future. Its key findings are as follows:
• The industrial metaverse will bring together the digital and real worlds. It will enable a constant exchange of information, data, and decisions and empower industries to solve extraordinarily complex real-world problems digitally, changing how organizations operate and unlocking significant societal benefits.
• The digital twin is a core metaverse building block. These virtual models simulate real-world objects in detail. The next generation of digital twins will be photorealistic, physics-based, AI-enabled, and linked in metaverse ecosystems.
• The industrial metaverse will transform every industry. Digital twins that already exist today illustrate the power and potential of the industrial metaverse to revolutionize design and engineering, testing, operations, and training.
Across social media, a number of creators are generating nostalgic photographs of China with the help of AI. Even though these images get some details wrong, they are realistic enough to trick and impress many of their followers.
The pictures look sophisticated in terms of definition, sharpness, saturation, and color tone. Their realism is partly down to a major update to the image-making AI program Midjourney, released in mid-March, which is better not only at generating human hands but also at simulating various photography styles.
On closer inspection, it’s still relatively easy, even for untrained eyes, to tell that the photos were generated by an AI. But for some creators, these experiments are more about trying to recall a specific era than about trying to trick their audience. Read the full story.
—Zeyi Yang
Zeyi’s story is from China Report, his weekly newsletter giving you the inside track on tech in China. Sign up to receive it in your inbox every Tuesday.
Read more of our reporting on AI-generated images:
+ These new tools let you see for yourself how biased AI image models are. Bias and stereotyping are still huge problems for systems like DALL-E 2 and Stable Diffusion, despite companies’ attempts to fix them. Read the full story.