Power beaming comes of age

Global demand for power to support ubiquitous connectivity through 5G, 6G, and smart infrastructure is rising. This report examines the prospects of power beaming; its economic, human, and environmental implications; and the challenges of making the technology reliable, effective, wide-ranging, and secure.

The following are the report’s key findings:

Lasers and microwaves offer distinct approaches to power beaming, each with benefits and drawbacks. Microwave-based power beaming has a longer track record, thanks in part to lower equipment costs, but laser-based approaches are showing promise, backed by a growing number of successful trials and pilots. Laser-based beaming has high-impact prospects for powering equipment at remote sites, in the low-earth-orbit economy, in electric transportation, and underwater. Lasers’ chief advantage is their narrowly concentrated beams, which allow smaller transmitter and receiver installations. Their main drawback is susceptibility to disruption from atmospheric conditions and human interruption of the beam, although efforts to address these weaknesses are ongoing.

Power beaming could accelerate energy decarbonization, boost internet connectivity, and enable post-disaster response. Climate change is spurring investment in power beaming, which can support more radical approaches to the energy transition. Because sunlight is available in space almost continuously, beaming solar power directly from orbit to Earth can deliver more energy over time than land-based solar panels. Electric transportation, from trains to planes and drones, benefits from power beaming by avoiding the disruption and cost of cabling, wiring, or landing to recharge.

Beaming could also transfer power from remote renewables sites such as offshore wind farms. Other areas where power beaming could revolutionize energy solutions include refueling space missions and satellites, 5G provision, and post-disaster humanitarian response in remote regions or areas where networks have collapsed due to extreme weather events, whose frequency will increase with climate change. In the short term, as efficiencies continue to improve, power beaming could reduce the number of wasted batteries, especially in low-power, across-the-room applications.

Public engagement and education are crucial to support the uptake of power beaming. Lasers and microwaves may conjure images of death rays and unanticipated health risks. The public backlash against 5G shows the importance of education and information about the safety of new, “invisible” technologies. Decades of research have shown power beaming via both microwaves and lasers to be safe. The public is already comfortable living amid invisible forces like Wi-Fi and wireless data transfer; power beaming is simply the newest chapter.

Commercial investment in power beaming remains muted due to a combination of historical skepticism and uncertain time horizons. While private investment in futuristic sectors like nuclear fusion energy and satellites booms, the power-beaming sector has attracted little investment and venture capital relative to the scale of the opportunity. Experts believe this is partly a “first-mover” problem, as capital allocators await signs of momentum. It may also be a hangover of past decisions to abandon beaming as too costly and impractical, even though that reticence was based on earlier technologies that have since been surpassed. Power beaming also tends to fall between two R&D comfort zones for large corporations: it does not deliver short-term financial gain, but it is also not long-term enough to justify a steady financing stream.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.


Uber’s facial recognition is locking Indian drivers out of their accounts 

Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.

But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says. 

More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad. 

Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues. 

Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check. 

Adnan Taqi holds up his phone in the driver’s seat of his car. Variations in lighting and facial hair have likely caused him to lose access to the app. Photo: Selvaprakash Lakshmanan

“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when lighting is low or the phone’s camera doesn’t capture good contrast. “This makes it difficult for the computer to detect edges,” he explains.
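
To illustrate the kind of comparison Kavka describes, here is a minimal sketch of encoding-based face verification. It uses the open-source face_recognition library rather than the Microsoft Face API that powers Real-Time ID Check, and the image paths and the 0.6 threshold are placeholders, not values used by Uber.

```python
# Minimal sketch of encoding-based face verification (illustrative only).
# Uses the open-source `face_recognition` library, not Microsoft's Face API;
# the image paths and the 0.6 threshold below are placeholders.
import face_recognition

# The photo the platform has on file and the newly captured selfie.
reference_image = face_recognition.load_image_file("driver_on_file.jpg")
selfie_image = face_recognition.load_image_file("selfie_at_dusk.jpg")

# Each call locates a face and reduces it to a 128-number encoding.
# Heavy facial hair, low light, or poor contrast can make detection fail,
# in which case the list comes back empty and the check fails outright.
reference_encodings = face_recognition.face_encodings(reference_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not reference_encodings or not selfie_encodings:
    print("No face detected: the check fails before any comparison happens.")
else:
    # Smaller distance means more similar faces; 0.6 is the library's
    # conventional default tolerance, used here only as an example.
    distance = face_recognition.face_distance(
        [reference_encodings[0]], selfie_encodings[0]
    )[0]
    print("match" if distance < 0.6 else "no match", f"(distance={distance:.2f})")
```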

But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper auditing the performance of four commercial facial processing tools on Indian faces: Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed even to detect the presence of a face in more than 1,000 images, an error rate of more than 3% and the worst among the four.

It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she explained in writing. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.” 


The Download: Uber’s flawed facial recognition, and police drones

One evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad. He pointed the phone at his face to take a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.

Srikanth suspected it was because he had recently shaved his head. After further attempts to log in were rejected, Uber informed him that his account had been blocked. He is not alone. In a survey conducted by MIT Technology Review of 150 Uber drivers in the country, almost half had been either temporarily or permanently locked out of their accounts because of problems with their selfie.

Hundreds of thousands of India’s gig economy workers are at the mercy of facial recognition technology, with few legal, policy or regulatory protections. For workers like Srikanth, getting blocked from or kicked off a platform can have devastating consequences. Read the full story.

—Varsha Bansal

I met a police drone in VR—and hated it

Police departments across the world are embracing drones, deploying them for everything from surveillance and intelligence gathering to even chasing criminals. Yet none of them seem to be trying to find out how encounters with drones leave people feeling—or whether the technology will help or hinder policing work.

A team from University College London and the London School of Economics is filling in the gaps, studying how people react when meeting police drones in virtual reality, and whether they come away feeling more or less trusting of the police. 

MIT Technology Review’s Melissa Heikkilä came away from her encounter with a VR police drone feeling unnerved. If others feel the same way, the big question is whether these drones are effective tools for policing in the first place. Read the full story.

Melissa’s story is from The Algorithm, her weekly newsletter covering AI and its effects on society. Sign up to receive it in your inbox every Monday.


I met a police drone in VR—and hated it

The research is important because police departments are racing way ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.

Last week, San Francisco approved the use of robots, including drones that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, most police drones have thermal cameras that can be used to detect how many people are inside houses, says Pósch. This has been used for all sorts of things: catching human traffickers or rogue landlords, and even targeting people holding suspected parties during covid-19 lockdowns.

Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Pósch says.

Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones did not improve, even though I’d met a supposedly polite, human-operated one (there are even more aggressive modes for the experiment, which I did not experience).

Ultimately, it may not make much difference whether drones are “polite” or “rude,” says Christian Enemark, a professor at the University of Southampton, who specializes in the ethics of war and drones and is not involved in the research. That’s because the use of drones itself is a “reminder that the police are not here, whether they’re not bothering to be here or they’re too afraid to be here,” he says.

“So maybe there’s something fundamentally disrespectful about any encounter.”

Deeper Learning

GPT-4 is coming, but OpenAI is still fixing GPT-3

The internet is abuzz with excitement about AI lab OpenAI’s latest iteration of its famous large language model, GPT-3. The latest demo, ChatGPT, answers people’s questions via back-and-forth dialogue. Since its launch last Wednesday, the demo has attracted more than 1 million users. Read Will Douglas Heaven’s story here.

GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed a lot of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests. It even refuses to answer some questions, such as how to be evil, or how to break into someone’s house. 
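
For context, GPT-3 is available to developers through OpenAI’s API, while ChatGPT launched as a web-only demo. Below is a minimal sketch of querying a GPT-3-era model with the OpenAI Python client as it existed at the time; the API key and prompt are placeholders.

```python
# Minimal sketch of a GPT-3 query via OpenAI's Python client (pre-1.0 API,
# Completion endpoint); ChatGPT itself had no public API at launch.
# The API key and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3-family completion model of the time
    prompt="Explain in two sentences why a language model can sound confident while being wrong.",
    max_tokens=120,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```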



