Podcast: The story of AI, as told by the people who invented it

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable face recognition system.

Credits:

This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens with help from Lindsay Muscato. It’s edited by Michael Reilly and Mat Honan. It’s mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full transcript:

[TR ID]

Jennifer: I’m Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on for a little while behind the scenes here. 

It’s called I Was There When.

It’s an oral history project featuring the stories of how breakthroughs in artificial intelligence and computing happened… as told by the people who witnessed them.

Joseph Atick: And as I entered the room, it spotted my face, extracted it from the background and it pronounced: “I see Joseph” and that was the moment where the hair on the back… I felt like something had happened. We were a witness. 

Jennifer: We’re kicking things off with a man who helped create the first facial recognition system that was commercially viable… back in the ‘90s…

[IMWT ID]

I am Joseph Atick. Today, I’m the executive chairman of ID for Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my PhD in mathematics, I, together with my collaborators, made some fundamental breakthroughs, which led to the first commercially viable face recognition system. That’s why people refer to me as a founding father of face recognition and the biometric industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But that was far from having an idea of how you would implement such a thing. 

It was a long period of months of programming and failure and programming and failure. And one night, early morning actually, we had just finalized a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room, the source code had been compiled by the machine and had returned. And usually after you compile, it runs automatically, and as I entered the room, it spotted a human moving into the room and it spotted my face, extracted it from the background and it pronounced: “I see Joseph.” And that was the moment where the hair on the back—I felt like something had happened. We were a witness. And I started to call on the other people who were still in the lab, and each one of them would come into the room.

And it would say, “I see Norman. I see Paul, I see Joseph.” And we would sort of take turns running around the room just to see how many it could spot in the room. It was, it was a moment of truth where I would say several years of work finally led to a breakthrough, even though theoretically, there wasn’t any additional breakthrough required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very rewarding and satisfying. We had developed a team which was more of a development team, not a research team, which was focused on putting all of those capabilities into a PC platform. And that was the birth, really the birth of commercial face recognition, I would put it, in 1994. 

My concern started very quickly. I saw a future where there was no place to hide, with the proliferation of cameras everywhere and the commoditization of computers and the processing abilities of computers becoming better and better. And so in 1998, I lobbied the industry and I said, we need to put together principles for responsible use. And I felt good for a while, because I felt we had gotten it right. I felt we had put in place a responsible use code to be followed by whatever the implementation was. However, that code did not stand the test of time. And the reason behind it is we did not anticipate the emergence of social media. Basically, at the time when we established the code in 1998, we said the most important element in a face recognition system was the tagged database of known people. We said, if I’m not in the database, the system will be blind.

And it was difficult to build the database. At most we could build a thousand, 10,000, 15,000, 20,000, because each image had to be scanned and had to be entered by hand. In the world that we live in today, we are now in a regime where we have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. Um, we are now in a world where any hope of controlling and requiring everybody to be responsible in their use of face recognition is difficult. And at the same time, there is no shortage of known faces on the internet, because you can just scrape them, as some companies have done recently. And so I began to panic in 2011, and I wrote an op-ed article saying it is time to press the panic button, because the world is heading in a direction where face recognition is going to be omnipresent and faces are going to be everywhere available in databases.

And at the time people said I was an alarmist, but today they’re realizing that this is exactly what’s happening. And so where do we go from here? I’ve been lobbying for legislation. I’ve been lobbying for legal frameworks that make it a liability for you to use somebody’s face without their consent. And so it’s no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we think is acceptable. 

The issue of consent continues to be one of the most difficult and challenging matters when it comes to technology. Just giving somebody notice does not mean that it’s enough. To me, consent has to be informed. They have to understand the consequences of what it means. And not just to say, well, we put a sign up and this was enough. We told people, and if they did not want to, they could have gone elsewhere.

And I also find that it is so easy to get seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we recognize that we’ve given up something that was too precious. And by that point in time, we have desensitized the population and we get to a point where we cannot pull back. That’s what I’m worried about. I’m worried about the fact that face recognition, through the work of Facebook and Apple and others, is now everywhere. I’m not saying all of it is illegitimate. A lot of it is legitimate.

We’ve arrived at a point where the general public may have become blasé and may become desensitized because they see it everywhere. And maybe in 20 years, you step out of your house and you will no longer have the expectation that you won’t be recognized by dozens of people you cross along the way. I think at that point in time the public will be very alarmed, because the media will start reporting on cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and kidnapped. I think that’s a lot of responsibility on our hands. 

And so I think the question of consent will continue to haunt the industry. And until that question is resolved, and maybe it won’t be resolved, I think we need to establish limitations on what can be done with this technology.

My career also has taught me that being too far ahead is not a good thing, because face recognition, as we know it today, was actually invented in 1994. But most people think that it was invented by Facebook and the machine learning algorithms which are now proliferating all over the world. Basically, at some point in time, I had to step down as a public CEO because I was curtailing the use of a technology that my company was going to be promoting, because of the fear of negative consequences to humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, you should go full force, make more breakthroughs, but we should also be honest with ourselves and basically alert the world and the policymakers that this breakthrough has pluses and has minuses. And therefore, in using this technology, we need some sort of guidance and frameworks to make sure it’s channeled for positive applications and not negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing. 

Do you have a story to tell? Know someone who does? Drop us an email at podcasts@technologyreview.com.

[MIDROLL]

[CREDITS]

Jennifer: This episode was taped in New York City in December of 2020 and produced by me with help from Anthony Green and Emma Cillekens. We’re edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski. 

Thanks for listening, I’m Jennifer Strong. 

[TR ID]

A pro-China online influence campaign is targeting the rare-earths industry

China has come to dominate the rare-earth market in recent years, and by 2017 the country produced over 80% of the world’s supply. Beijing achieved this by pouring resources into the study and mining of rare-earth elements for decades, building up six big state-owned firms and relaxing environmental regulations to enable low-cost and high-pollution methods. The country then rapidly increased rare-earth exports in the 1990s, a sudden rush that bankrupted international rivals. Further development of rare-earth industries is a strategic goal under Beijing’s Made in China 2025 strategy.

The country has demonstrated its dominance several times, most notably by stopping all shipments of the resources to Japan in 2010 during a maritime dispute. State media have warned that China could do the same to the United States.

The US and other Western nations have seen this monopoly as a critical weakness for their side. As a result, they have spent billions in recent years to get better at finding, mining, and processing the minerals. 

In early June 2022, the Canadian mining company Appia announced it had found new resources in Saskatchewan. Within weeks, the American firm USA Rare Earth announced a new processing facility in Oklahoma. 

Dragonbridge, the pro-China influence campaign, engaged in similar activity in 2021, soon after the American military signed an agreement with the Australian mining firm Lynas, the largest rare-earths company outside China, to build a processing plant in Texas. 

The U.S. only has 60,000 charging stations for EVs. Here’s where they all are.

The infrastructure bill that passed in November 2021 earmarked $7.5 billion for President Biden’s goal of having 500,000 chargers (individual plugs, not stations) around the nation. In the best case, Michalek envisions a public-private collaboration to build a robust national charging network. The Biden administration has pledged to install plugs throughout rural areas, while companies constructing charging stations across America will have a strong incentive to fill in the country’s biggest cities and most popular thoroughfares. After all, companies like Electrify America, EVgo, and ChargePoint charge customers per kilowatt-hour of energy they use, much like utilities.

Most new electric vehicles promise at least 250 miles on a full charge, and that number should keep ticking up. The farther cars can go without charging, the fewer anxious drivers will be stuck in lines waiting for a charging space to open. But make no mistake, Michalek says: an electric-car country needs a plethora of plugs, and soon.

We need smarter cities, not “smart cities”

The term “smart cities” originated as a marketing strategy for large IT vendors. It has now become synonymous with urban uses of technology, particularly advanced and emerging technologies. But cities are more than 5G, big data, driverless vehicles, and AI. They are crucial drivers of opportunity, prosperity, and progress. They support those displaced by war and crisis and generate 80% of global GDP. More than 68% of the world’s population will live in cities by 2050—2.5 billion more people than do now. And with over 90% of urban areas located on coasts, cities are on the front lines of climate change.

A focus on building “smart cities” risks turning cities into technology projects. We talk about “users” rather than people. Monthly and “daily active” numbers instead of residents. Stakeholders and subscribers instead of citizens. This also risks a transactional—and limiting—approach to city improvement, focusing on immediate returns on investment or achievements that can be distilled into KPIs. 

Truly smart cities recognize the ambiguity of lives and livelihoods, and they are driven by outcomes beyond the implementation of “solutions.” They are defined by their residents’ talents, relationships, and sense of ownership—not by the technology that is deployed there. 

This more expansive concept of what a smart city is encompasses a wide range of urban innovations. Singapore, which is exploring high-tech approaches such as drone deliveries and virtual-reality modeling, is one type of smart city. Curitiba, Brazil—a pioneer of the bus rapid transit system—is another. Harare, the capital of Zimbabwe, with its passively cooled shopping center designed in 1996, is a smart city, as are the “sponge cities” across China that use nature-based solutions to manage rainfall and floodwater.

Where technology can play a role, it must be applied thoughtfully and holistically—taking into account the needs, realities, and aspirations of city residents. Guatemala City, in collaboration with our country office team at the UN Development Programme, is using this approach to improve how city infrastructure—including parks and lighting—is managed. The city is standardizing materials and designs to reduce costs and labor, and streamlining approval and allocation processes to increase the speed and quality of repairs and maintenance. Everything is driven by the needs of its citizens. Elsewhere in Latin America, cities are going beyond quantitative variables to take into account well-being and other nuanced outcomes. 

In her 1961 book The Death and Life of Great American Cities, Jane Jacobs, the pioneering American urbanist, discussed the importance of sidewalks. In the context of the city, they are conduits for adventure, social interaction, and unexpected encounters—what Jacobs termed the “sidewalk ballet.” Just as literal sidewalks are crucial to the urban experience, so is the larger idea of connection between elements.

However, too often we see “smart cities” focus on discrete deployments of technology rather than this connective tissue. We end up with cities defined by “use cases” or “platforms.” Practically speaking, the vision of a tech-centric city is conceptually, financially, and logistically out of reach for many places. This can lead officials and innovators to dismiss the city’s real and substantial potential to reduce poverty while enhancing inclusion and sustainability.

In our work at the UN Development Programme, we focus on the interplay between different components of a truly smart city—the community, the local government, and the private sector. We also explore the different assets made available by this broader definition: high-tech innovations, yes, but also low-cost, low-tech innovations and nature-based solutions. Big data, but also the qualitative, richer detail behind the data points. The connections and “sidewalks”—not just the use cases or pilot programs. We see our work as an attempt to start redefining smart cities and increasing the size, scope, and usefulness of our urban development tool kit.

We continue to explore how digital technology might enhance cities—for example, we are collaborating with major e-commerce platforms across Africa that are transforming urban service delivery. But we are also shaping this broader tool kit to tackle the urban impacts of climate change, biodiversity loss, and pollution. 

The UrbanShift initiative, led by the UN Environment Programme in partnership with UNDP and many others, is working with cities to promote nature-based solutions, low-carbon public transport, low-emission zones, integrated waste management, and more. This approach focuses not just on implementation, but also on policies and guiderails. The UNDP Smart Urban Innovations Handbook aims to help policymakers and urban innovators explore how they might embed “smartness” in any city.

Our work at the United Nations is driven by the Sustainable Development Goals: 17 essential, ambitious, and urgent global targets that aim to shape a better world by 2030. Truly smart cities would play a role in meeting all 17 SDGs, from tackling poverty and inequality to protecting and improving biodiversity. 

Coordinating and implementing the complex efforts required to reach these goals is far more difficult than deploying the latest app or installing another piece of smart street furniture. But we must move beyond the sales pitches and explore how our cities can be true platforms—not just technological ones—for inclusive and sustainable development. The well-being of the billions who call the world’s cities home depends on it.

Riad Meddeb is interim director of the UNDP Global Centre for Technology, Innovation, and Sustainable Development. Calum Handforth is an advisor for digitalization, digital health, and smart cities at the UNDP Global Centre.
