Sundar: In my experience, having been an architect in the past and having managed and provided consulting for a lot of my customers, data governance was primarily looked at as a way to serve regulatory requirements. It used to be a standalone process, but effective data governance should be a holistic process. It should be done right from the source of the data all the way to the consumption of the data. That is one of the key best practices we recommend to all of our customers. Also, data governance is a continuous process. It is not that, "Okay, I looked at the requirements of the data today," whether it is a regulatory requirement or a consumption requirement, "and I devised a plan for that, and now I can rest." No.
So data governance is a continuous process. The requirements for data continuously change. The usage of the data continuously changes. Regulations are continuously changing. So revisiting the data governance process is also very important, along with a complete understanding of what is happening, what has changed, why it changed, and when it changed, and keeping a record of all of that. That's why the data governance framework should be a holistic process, not a siloed process, and it should be continuously revisited and continuously tracked as well.
Laurel: And as you mentioned earlier, people are definitely part of this process and strategy as well. How do you think about data literacy as a critical skill that everyone needs across the organization, outside of the tech teams? How should executives start thinking about preparing and ensuring everyone has the right skills to consume data?
Sundar: So, data is the "new oil" that is being fed everywhere. If data is the new oil, then understanding how to use it and where to use it becomes very, very crucial. How to use it and where to use it form the major part of data literacy in any organization. Also, if we have to use any given data, then we should also know where that data is available. So data literacy is addressed at two levels. The first is providing information on what data is available, how good that data is, how to access it, and how to process it. The second is that, especially in today's world, data also comes with many constraints. It is very critical, and it carries a lot of sensitive information. The line between sensitive information and data that can be consumed freely is very thin in today's world.
If that is the case, then literacy about what data we are processing, how sensitive it is, and what we intend to do with it is also very critical. So when executives plan data literacy programs in their organizations, it is important to cover not only how to use the data, but also why the data is being used and what the outcome of using it will be. That's why data literacy, and investing in data literacy for people, becomes very critical. At the end of the day, people are the ones who design and develop the systems that consume the data, so the right investment in literacy is paramount in that respect.
Laurel: So, those are very important points about data literacy, especially across the entire organization, but we've also seen that another part of digital transformation is streamlining and maximizing investments in operations across business units. For example, years ago, tech teams did this by combining software development and operations to create devOps, which allowed for more agile and data-focused ways of working. The research firm Gartner argues that this philosophy can also be applied to other areas of the business, including artificial intelligence and machine learning to create MLOps, data to create dataOps, and finance to create finOps, that is, finance and operations. As a whole, these can be bundled into a single term: XOps. It's an interesting way to take various parts of the business and bring them all together under an umbrella of operations. What value can XOps bring to an organization as a whole?
Sundar: Yes, as you rightly said, Laurel, XOps is an umbrella that brings together the various operations that drive innovation through technology, addressing business requirements and taking the business to the next level. Having said that, across all the operations you mentioned, whether it is devOps, dataOps, MLOps, or even the fourth one, finOps, the common denominator is operations, and the requirement for those operations is to deliver value in the most efficient way.
So what we learned from devOps is how to combine managing a product with developing it, and how to extract efficiency from that combination. The same principles carry over into machine learning operations and data operations. Again, from the technology perspective, the common factor is automation and continuous reusability of processes to make the entire operation efficient. That's why Gartner has combined them and calls it XOps: you can look at it like a Venn diagram of the different operations, pivoted around automation and reusability with agility.
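To make that common denominator concrete, here is a minimal, purely illustrative Python sketch of the pattern Sundar describes: one automated, reusable pipeline runner serving a dataOps job and an MLOps job alike. All names below are invented for illustration; XOps is a concept from Gartner, not a library or an API.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: the point is that devOps, dataOps, and MLOps all
# reduce to automated, reusable steps run by shared machinery.

@dataclass
class Step:
    name: str
    run: Callable[[], bool]  # returns True on success

def run_pipeline(steps: list[Step]) -> bool:
    """Run steps in order, stopping at the first failure (the shared
    automation pattern behind devOps, dataOps, and MLOps pipelines)."""
    for step in steps:
        ok = step.run()
        print(f"{step.name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

# The same reusable runner serves different "Ops" (stubbed steps):
data_ops = [Step("validate_schema", lambda: True),
            Step("check_freshness", lambda: True)]
ml_ops = [Step("retrain_model", lambda: True),
          Step("evaluate_model", lambda: True)]

run_pipeline(data_ops)
run_pipeline(ml_ops)
```

The design point is that the shared runner, not each team's bespoke scripts, owns sequencing, logging, and failure handling, which is where the reusability and efficiency come from.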
Uber’s facial recognition is locking Indian drivers out of their accounts
Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.
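Mechanically, a selfie check like this typically chains a face-detection call with a face-verification call. Below is a hedged sketch against Microsoft's Face REST API, which, as noted later in this story, is what powers Real-Time ID Check; the endpoint, key, and confidence threshold are placeholder assumptions, not Uber's actual configuration.

```python
import requests

# Placeholder values for illustration -- not Uber's real configuration.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
KEY = "<subscription-key>"

def detect_face_id(image_bytes: bytes) -> str | None:
    """Detect a single face and return its transient faceId, or None."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    faces = resp.json()
    return faces[0]["faceId"] if faces else None

def selfie_matches_file_photo(selfie: bytes, file_photo: bytes,
                              threshold: float = 0.5) -> bool:
    """Compare a fresh selfie against the photo on file."""
    selfie_id = detect_face_id(selfie)
    file_id = detect_face_id(file_photo)
    if selfie_id is None or file_id is None:
        # Detection itself can fail (low light, occlusion), which is
        # one way a driver can end up locked out.
        return False
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"faceId1": selfie_id, "faceId2": file_id},
    )
    resp.raise_for_status()
    result = resp.json()
    return result["isIdentical"] and result["confidence"] >= threshold
```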
But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says.
More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad.
Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues.
Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check.
“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
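For a concrete sense of those "points": the same Face API can return the landmark coordinates it finds, and when detection fails outright it simply returns no face at all. This is a hedged sketch; the query parameter and response fields come from Microsoft's public Face detect API, but the endpoint and key are placeholders.

```python
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
KEY = "<subscription-key>"  # placeholder

def face_landmarks(image_bytes: bytes):
    """Return the landmark points (pupils, nose tip, mouth corners, ...)
    that the Face API extracts, or None if no face is detected."""
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceLandmarks": "true"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    faces = resp.json()
    if not faces:
        return None  # heavy facial hair or low light can mean no face found
    return faces[0]["faceLandmarks"]  # dict of named points with x/y coords
```

If those returned points shift or disappear, as Kavka describes with facial hair or poor contrast, any verification built on top of them becomes unreliable.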
But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper that audited four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images, throwing an error rate of more than 3%—the worst among the four.
It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she explained in writing. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.”
The Download: Uber’s flawed facial recognition, and police drones
One evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad. He pointed the phone at his face to take a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.
Srikanth suspected it was because he had recently shaved his head. After further attempts to log in were rejected, Uber informed him that his account had been blocked. He is not alone. In a survey conducted by MIT Technology Review of 150 Uber drivers in the country, almost half had been either temporarily or permanently locked out of their accounts because of problems with their selfie.
Hundreds of thousands of India’s gig economy workers are at the mercy of facial recognition technology, with few legal, policy or regulatory protections. For workers like Srikanth, getting blocked from or kicked off a platform can have devastating consequences. Read the full story.
I met a police drone in VR—and hated it
Police departments across the world are embracing drones, deploying them for everything from surveillance and intelligence gathering to even chasing criminals. Yet none of them seem to be trying to find out how encounters with drones leave people feeling—or whether the technology will help or hinder policing work.
A team from University College London and the London School of Economics is filling in the gaps, studying how people react when meeting police drones in virtual reality, and whether they come away feeling more or less trusting of the police.
MIT Technology Review’s Melissa Heikkilä came away from her encounter with a VR police drone feeling unnerved. If others feel the same way, the big question is whether these drones are effective tools for policing in the first place. Read the full story.
Melissa’s story is from The Algorithm, her weekly newsletter covering AI and its effects on society. Sign up to receive it in your inbox every Monday.
I met a police drone in VR—and hated it
It’s important because police departments are racing way ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.
Last week, San Francisco approved the use of robots, including drones that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, most police drones have thermal cameras that can be used to detect how many people are inside houses, says Pósch. This has been used for all sorts of things: catching human traffickers or rogue landlords, and even targeting people suspected of holding parties during covid-19 lockdowns.
Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Pósch says.
Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones did not improve, even though I’d met a supposedly polite, human-operated one (there are even more aggressive modes for the experiment, which I did not experience).
Ultimately, it may not make much difference whether drones are “polite” or “rude,” says Christian Enemark, a professor at the University of Southampton, who specializes in the ethics of war and drones and is not involved in the research. That’s because the use of drones itself is a “reminder that the police are not here, whether they’re not bothering to be here or they’re too afraid to be here,” he says.
“So maybe there’s something fundamentally disrespectful about any encounter.”
GPT-4 is coming, but OpenAI is still fixing GPT-3
The internet is abuzz with excitement about AI lab OpenAI’s latest iteration of its famous large language model, GPT-3. The latest demo, ChatGPT, answers people’s questions via back-and-forth dialogue. Since its launch last Wednesday, the demo has crossed 1 million users. Read Will Douglas Heaven’s story here.
GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed a lot of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests. It even refuses to answer some questions, such as how to be evil, or how to break into someone’s house.