In some stores, sophisticated systems are tracking customers in almost every imaginable way, from recognizing their faces to gauging their age, their mood, and virtually gussying them up with makeup. The systems rarely ask for people’s permission, and for the most part they don’t have to. In our season 1 finale, we look at the explosion of AI and face recognition technologies in retail spaces, and what it means for the future of shopping.
- RetailNext CTO Arun Nair
- L’Oreal’s Technology Incubator Global VP Guive Balooch
- Modiface CEO Parham Aarabi
- Biometrics pioneer and Chairman of ID4Africa Joseph Atick
This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield.
Strong: Retailers have been using face recognition and AI tracking technologies for years.
[Audio from Face First: What if you could stop retail crime before it happens by knowing the moment a shoplifter enters your store? And what if you could know about the presence of violent criminals before they act? With Face First you can stop crime before it starts.]
Strong: That’s one of the largest providers of this tech to retail stores. It detects faces, voices, objects and claims it can analyze behavior. But face recognition systems have a well-documented history of misidentifying women and people of color.
[Sound from 2019 Congressional hearing on facial recognition (Ocasio-Cortez): We have a technology that was created and designed by one demographic that is only mostly effective on that one demographic. And they’re trying to sell it and impose it on the entirety of the country?]
Strong: This is Representative Alexandria Ocasio-Cortez at a 2019 congressional hearing on facial recognition. Photo technologies work better on lighter skin. And datasets used by companies to train facial analysis systems are largely based on faces collected from the internet where content tends to skew white, male and western.
[Sound from 2019 Congressional hearing on facial recognition (Ocasio-Cortez): And do you think that this could exacerbate the already egregious, uh, inequalities in our, in our criminal justice system?]
[Sound from 2019 Congressional hearing on facial recognition (Buolamwini): And it already is.]
Strong: Joy Buolamwini is an activist and computer scientist.
[Sound from 2019 Congressional hearing on facial recognition (Buolamwini): So, there’s a case with Mr. Bah, an 18-year-old African American man who was misidentified in Apple stores as a thief. And in fact, he was falsely arrested multiple times because of this kind of misidentification.]
Strong: As awareness of these issues grows, more places are looking to restrict the technology’s use, such as Portland, Oregon, which recently passed the most sweeping ban on face recognition in the US.
[Sound from store in Portland, Oregon: please look into the camera for entry]
Strong: The ban takes effect in January, and when it does, that voice and camera will go away from places like this food store, where the tech unlocks the door for late-night shoppers. But elsewhere, the technology is moving well beyond fighting crime and starting to play other retail roles, like remembering your past orders and payment details.
Miller: These face-based technologies, uhh artificial intelligence, machine vision allow us to see our customer in the offline world like Amazon sees its customer in the online world. That allows us to create tailored experiences for the customer and also allows us to directly target that customer in new ways when they come back to the restaurant.
Strong: That’s John Miller, chairman of Cali Group. Its fast-food restaurant Caliburger tries out technologies it later markets to the entire industry. Other retailers use face recognition to know when VIP shoppers or celebrities are in their stores, not unlike this scene from the film Minority Report where, as Tom Cruise strolls through a mall, his eyes are scanned and the ads address his character by name.
[Sound from Minority Report where voices address John Anderton in person]
Strong: The face measurements powering these applications can also be used for many other things besides just identifying someone. For example, some shopping malls use it to help set their store rents by counting how many people walk by, and using face data to gauge gender, age, and other demographics. Sometimes face recognition cameras are even hidden inside mall directories. And inside stores, retailers use it to better understand what shoppers are interested in. It’s also embedded within shopping apps and store mirrors that let people virtually try on anything from eyeglasses to makeup.
Strong: I’m Jennifer Strong and this episode, we wrap up our first season (and our latest miniseries on face recognition) with a look at how it’s used to watch, understand and influence your shopping habits.
Strong: So I’m out front of what used to be the largest store in the world. This is Macy’s on 34th Street in Manhattan. The building fills an entire city block and in some ways it’s kind of the center of gravity for the holiday shopping season here as, among other things, the inspiration for one of New York’s most famous Christmas films, Miracle on 34th Street.
But the company may also have a history of using face recognition, and a lawsuit was filed about that in Illinois, which has a biometric privacy law requiring companies to get permission before using it on customers. That suit alleges Macy’s is a client of Clearview AI. We’ve had its founder, Hoan Ton-That, on this show. His product works by matching images, in this case of shoppers or shoplifters, against a database of perhaps billions of photos taken from social media, posted by people who haven’t changed their settings to make the photos visible only to their friends.
Now, New York City’s councilmembers just passed a biometrics measure here that, if signed by the mayor, will require retailers to tell shoppers that face recognition is being used and perhaps what’s happening with that data. But you know, it’s too soon to say what that might look like. I mean, does walking past a wall plaque that says face recognition is present, as part of a big crowd of shoppers, equal being informed, let alone giving consent? But I’m going to go inside with my producer, Anthony Green, and see if we can find totally different applications of face mapping to show you.
Several of these beauty counters have iPads that double as mirrors with augmented reality. We tried out three of them; just one, though, asked for consent to analyze our faces. Two of the systems saw us just fine through our masks. The other didn’t recognize our faces at all.
I walked up to a mirror and it says my lighting is okay. Come closer until your face fills a circle. Apparently I have dark circles, uneven texture, irritation and redness, and eye lines. At least we’re on the less side? I don’t know. Woah. Hey Anthony, you should see this. I wasn’t sure it was doing anything and now look in the mirror.
Strong: I don’t really have words for describing this, but it is so funny seeing myself this made up.
Green: Just kind of like glammed up.
Strong: Yeah. I’m like super glammed up. And literally all I was doing was looking in this mirror and then I looked down on an iPad and Holy, wow.
Green: This is working with your mask on.
Strong: This is with my mask on. And if I pull my mask down, I am made up everywhere.
Green: Oh yea.
Strong: Like glossed and all. Oh, look at you.
Strong: Okay, so Anthony just took a step over towards me and now he’s made up to the nines. Okay. These experiences are among the many, many ways that face mapping can be applied.
But because they’re so controversial, most brands simply don’t want to talk about it. And mostly, they don’t have to. There’s no national requirement that companies disclose how they gather or use our biometric data, even though we can imagine a not-so-distant future when that data becomes more important than any document we have. This personal data is likely to replace all of them, proving who we are and what we own.
Most of what we know about the use of face recognition by retailers dates to 2013, when it became public that identity company NEC had about a dozen brands and hotels as clients, which were using its face-reading technology to identify celebrities and other VIPs as they walked through their doors.
The following year Facebook announced it applied neural networks to face ID for the first time, making it work significantly better. And retailers, including Walmart, began testing it as a way to identify people caught shoplifting.
By 2016 fast food companies were experimenting with other use cases. One partnership, between KFC and the Chinese tech giant Baidu, recommended menu items to customers based on their age and mood as deemed by face scanning. These days it’s also possible to pay with your face, though so far, these applications haven’t really caught on. And so, wherever you shop, it’s reasonable to assume you might encounter some aspect of this technology and it could be combined with any number of other trackers. But it’s equally true that much of the tracking that’s done in retail stores using computer vision involves no facial recognition at all.
Nair: If you build a website today, there are a lot of tools available that you can use to give you data, like how many people visited your website, who they were, how they navigated your website and so on and for e-commerce sites the eventual purchase activity as well. And you can use all of this data to understand visitor behavior and optimize your site. We do the exact same thing, but for physical spaces. My name is Arun Nair. I’m the CTO and co-founder of RetailNext.
Strong: Their tracking software is deployed in offices, museums, even bowling alleys, but their primary market is retail. Ceiling cameras equipped with computer vision track customers as they journey through the store. The system can guess basic demographic information like gender, flag who’s an employee (based on whether they go behind the register), and even log interactions between employees and customers.
Nair: We even have a prediction algorithm that will tell you based on historical information when your store is going to be busy later in the day, later in the week. And it is extremely helpful for staffing. So making sure that when you do expect a peak, that there are people there to assist shoppers and they’re not standing in queue and so on as well as you’re not always staffed when no one needs to be there.
Strong: He says the company is capable of determining what you’re looking at, but it doesn’t track eye gaze, expressions, or faces. And they don’t individually identify anyone.
Nair: We do not know who they are as individuals, and we specifically try not to as well. And in actually a lot of cases, once we get that information, we throw away the video or we blur the video.
Strong: When it comes to privacy, he believes systems using face recognition for identity should be opt-in.
Nair: Consent is not just about like, Oh, I put my data out there so you can do what you want. I think consent is also about you know, we want you to do this so that we can do this in return for you. Are you okay with that?
Strong: But he admits that’s easier said than done.
Nair: It’s not easy to opt out of those things. And even if you opt out, the challenge is that, let’s say you say, Hey, I want to opt out of my face. As a technology company, I still have to store a digitized version of your face to make sure I don’t track you again in the future, cause next time I see your face, I need something to map against to say that, Oh, I should be dropping this person’s face. But then again, you know, in a weird way, I’m now storing a digitized version of your face, which, again, it’s not really your face, but it is a representation of it.
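The paradox Nair describes can be made concrete with a small sketch. This is not RetailNext’s implementation; the class, the vectors, and the 0.4 distance cutoff below are all hypothetical. It simply shows why honoring an opt-out still means storing a numeric representation (an "embedding") of the opted-out face, so future detections can be matched and discarded:

```python
import math

def _normalize(v):
    """Scale a vector to unit length so cosine similarity is a dot product."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

class OptOutList:
    def __init__(self, cutoff=0.4):
        self.embeddings = []   # stored "faceprints" of opted-out people
        self.cutoff = cutoff   # max cosine distance to count as a match

    def add(self, embedding):
        """Register an opted-out face by its embedding vector."""
        self.embeddings.append(_normalize(embedding))

    def should_drop(self, embedding):
        """True if a newly detected face matches an opted-out one."""
        e = _normalize(embedding)
        return any(
            1 - sum(a * b for a, b in zip(e, s)) < self.cutoff
            for s in self.embeddings
        )

opt_outs = OptOutList()
opt_outs.add([0.9, 0.1, 0.2])                    # someone opts out
print(opt_outs.should_drop([0.88, 0.12, 0.2]))   # near-identical face: True
print(opt_outs.should_drop([0.0, 1.0, 0.1]))     # unrelated face: False
```

The design tension is exactly the one Nair raises: the `embeddings` list is itself a biometric database, held precisely so that its entries can be ignored.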
Strong: And these challenges aren’t going away. Most tracking technologies aren’t regulated, and we simply don’t know how often things like face data get captured. What is clear: the retail industry is shifting to a world that’s centered around real-time analysis of customer experiences.
Nair: I think they’re going to see more and more of that moving forward, where there’s fewer purchases actually happening in these locations, but that’s kind of how you’re learning about the brand. Almost like advertising, as well as kind of building a brand loyalty.
Strong: Tracking customers and their interaction with the store doesn’t just help retailers know what’s selling. It also gives them insight into what customers want.
Nair: You introduce a new product. And you want to make sure that people are seeing that product. Our algorithms will tell you if people actually go into an area of the store and interact with a product and actually make a purchase afterwards.
Balooch: I think that it’s a combination of AI with physical objects that creates really an exciting moment in time. You know, you could never really try a trend and then actually dispense it. That wasn’t possible ever. But now because of AI, we are able to really go through trends really quickly. We’re able to curate trends, we’re able to give people what they desire. My name is Guive Balooch and I run the global technology incubator at L’Oreal. I’ve been at the company for 15 years and my job is to find the intersection between beauty and technology.
Strong: L’Oreal is the world’s largest cosmetics company, with Maybelline, Garnier, Lancôme and countless other consumer brands under its corporate umbrella.
Balooch: We started about eight years ago with an augmented reality app called makeup genius. That was the world’s first virtual try-on. And since then we’ve launched projects around personalized beauty like skincare personalization, foundation personalization. We’ve launched a UV sensor at the Apple store that’s a wearable that has no battery and can measure your UV exposure. And now we’re, we’re moving more and more towards mass personalization and finding ways to combine technologies like AR and AI to create new physical objects that can be magical for beauty consumers and hopefully delight our users.
Strong: And this is harder than it might sound. Designing experiences that let customers try on makeup in augmented reality presents huge technical challenges for face detection.
Balooch: You need to detect where the eye is and where the eyebrow is. And it has to be at a level of accuracy that when the product’s on there, it doesn’t look like it’s not exactly on your lip. And it’s, it’s funny because I come from an academic background with a PhD. So I didn’t realize how complicated that specific part of this technology is. I thought, “Oh, it’s okay. We’ll just get the software. It will be easy. We’ll just make it work.” But it turns out no, it’s really complicated because people’s lips can vary in shape, the color between your skin tone and your lip can also be very different. And so you need to have an algorithm that can detect it and make sure it works on people from very light to very dark skin.
Strong: And he says one of the largest impacts of AI in the beauty market could be more inclusivity—something the industry has long struggled with.
Balooch: I’m under this, you know, strong belief that inclusivity is the future of beauty and inclusivity means that every human being has the right to have a product that is what they need for themselves and to showcase to the world how they want to be showcased. And I think that only through things like AI and tech, will we be able to reach that level of personal relationship with people’s desires for their beauty habits.
Strong: Those habits are shaped around our skin. And skin tone has historically been one of the hardest technical and cultural challenges.
Balooch: We launched this project, which is this foundation blender. And when I first started this project, I thought it was going to be very simple, because when I went to Home Depot, umm, I’m not really a handyman, but I went with my dad a lot to Home Depot and he would buy paint. He would match the paint and they would just make the paint right there. And I said, okay, it’s that easy? So when we first started the project, we figured, okay, you know, you just take a skin tone from a piece of paper and you can just match the foundation. And I realized later that our skin is not like a wall, it’s biological tissue that changes depending on what kind of skin tone you have.
Strong: In short, the algorithm didn’t work.
Balooch: And so we had to stop and spend another six months to improve it. First we did that with a little device that kind of measures your skin tone, using a physical object, because your skin tone is hard to measure if you don’t actually touch the skin cause the light can change the color of your skin. And so depending on if you’re outside or if you’re inside, you could have a big difference in the measurement. But not anymore. Thanks to AI, I think more and more with AI, we’re going to be able to get accurate measurements. We have to test them and make sure that they work as well as objects. But once we get to a point, when we think we’re getting close to that, then you can solve some really, really big challenges. And in foundation, 50% of women can’t find the right shade of foundation. And there’s no way that the number of products on the shelf will ever solve that because you will always have more skin tones in the world than products you can put on the shelf.
Strong: And the future could open up a whole new class of personalized beauty tools.
Balooch: We can make objects that are, you know, not huge–handheld–and can do incredible things. Like in the future, you could imagine that you can dispense eyeshadow on your eyelid automatically just through detecting the face and being able to have an object that could dispense it.
Strong: To build that future, L’Oreal acquired a company called Modiface which makes augmented reality tools for more than 70 of the world’s top beauty brands.
Aarabi: One big step that happened a few years ago was going from photos to live video simulation. Really hard feat technologically, but really impactful on the consumer experience. Instead of having to take a photo and upload it, they could see a live video.
Strong: Parham Aarabi is the Founder and CEO of Modiface.
Aarabi: The next big step that I see that I’m really excited about is a combination of AI understanding of the face, along with our simulation. So not only telling you, okay, so you choose a lipstick and this is what it looks like, but saying, because you chose this lipstick and because your, you know, you have blue eyes, we believe this eye shadow might match it the best.
Strong: His background is in face and lip tracking.
Aarabi: And so we had created this sample demo where you could track someone’s lips and swap the lips with a celebrity, for example. My co-founder had the idea that before we do this, we should actually apply some changes on the, on the skin. And so it was really the combination of these two ideas that became the foundation of Modiface.
Strong: The beauty industry thrives on the in-person shopping experience. And even though e-commerce sales have long been on the rise, this sector has been a lot slower than others. For context, the top e-commerce seller in beauty in 2018 was shampoo. But the pandemic is speeding things up. Online sales at beauty giant Sephora jumped 30 percent in the U.S. this year. And Sephora has also partnered with Modiface to develop an app that acts as a virtual store, complete with product tutorials and an augmented reality beauty counter.
Aarabi: You see a try-on button, you press that, and a window opens up. You see your own video in that window, but with different virtual products being shown.
Strong: And building consumer trust in these simulated products means engineering an experience as seamless as looking in a mirror.
Aarabi: If someone actually tries on a lipstick and a hair color and then videotapes themselves versus using our technology and then having a virtual simulation of those products, the two should be indistinguishable. The lag, within the simulation being applied versus when you’re looking at your face and you’re seeing movements needs to be not apparent to the user. And so these are huge challenges. One is of realism. You don’t want the eyeliner to be flickering on someone’s eyes and the second is to do it so fast that on a website in live video, you don’t notice any lag. So these are major, major challenges.
Strong: And it’s more than just cosmetics. Elements of face detection are increasingly used in medicine to diagnose disease. And he believes that in the future their products will detect all kinds of skin disorders.
Aarabi: So we’ve been pushing on this skin assessment, um, direction by looking at someone’s image, and based on that, knowing what skincare products are best for them. And the more we do this and the better we train our AI systems, we find that their accuracy is approaching that of dermatologists. And I think if you follow that line, this is AI that can actually not replace dermatologists, but really help them, as an objective tool that can look at someone’s face and make recommendations.
Strong: It feels like there’s more awareness of face recognition, of its risks, immaturities and biases, but also of its increased presence in our lives and just raw potential. To me, it seems like we’ve just scratched the surface in this messy digital race to something different and big. And it got me wondering: how might one of its inventors feel about all this?
Atick: I started working on the human brain about a year after I graduated and, together with my collaborators, made some fundamental breakthroughs, which led to the creation of a field called the biometric industry and the first commercially viable face recognition. That’s why people refer to me as a founding father of face recognition and the biometric industry.
Strong: That’s Dr. Joseph Atick. He developed one of the first face recognition algorithms back in 1994.
Atick: The algorithm for how a human brain would recognize familiar faces became clear while we were doing mathematical research at the Institute for Advanced Study in Princeton.
Strong: But the technology needed to capture those faces wasn’t yet in everyone’s pockets.
Atick: At the time, computers did not have cameras. Phones that had cameras did not exist. We had to build the eyes for the brain. We had a brain, we thought we knew how the brain would analyze signals, but we did not have the eyes that would get the information and the visual signal to the brain.
Strong: Webcams came along in the 90s and computers with video capabilities arrived on the market a few years after.
Atick: And that was an exciting time because all of a sudden the brain that we had built had finally the pair of eyes that would be necessary to, to see.
Strong: This was the breakthrough he and his team needed to bring their concept to life. So they started coding.
Atick: it was a long period of months of programming and failure and programming and failure
Strong: But eventually…
Atick: And one night, early morning, actually, we had just finalized, um, a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room it spotted my face, extracted it from the background and it pronounced “I see Joseph”. And that was the moment where the hair on the back–I felt like something had happened. We were a witness. And I started, um, to call the other people who were still in the lab, and as each one of them came into the room, it would say, I see Norman, I see Paul, I see Joseph. And we would sort of take turns running around the room just to see how many it could spot.
Strong: They had built something that had never been built before. Months of math and coding and long nights seemed to be paying off. But within a few years that excitement turned to concern.
Atick: My, my concern about the technology that I helped create and invent started very quickly after I had invented it. I saw a future where our privacy would be at jeopardy if we did not put in place protection measures to prevent the abuse of this powerful technology.
Strong: And he wanted to do something about it.
Atick: So in 1998, I lobbied the industry and I said, we need to put together principles for responsible use. And this is where an organization called IBIA was born in 1998 as an industry association to promote responsible use. Um, and so I was the founder of that, that organization. And I felt good for a while because I felt we have gotten it right. I felt we’ve invented the technology, but then we put in place a responsible use code to be followed by whatever is the implementation. However, that code did not live the test of time. And the reason behind it is we did not anticipate the emergence of social media.
Strong: Face recognition relies on a database of images. The size, quality, and privacy conditions of this database are largely what determine how safe or intrusive the technology is. In 1998, Atick built his databases by manually scanning thousands of pictures and tagging them with names. It was tedious and limited in size.
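At its core, recognizing someone against such a tagged database is a nearest-neighbor search over feature vectors. Here is a deliberately simplified sketch, not Atick’s actual system; the names, vectors, and 0.5 distance cutoff are invented for illustration:

```python
import math

# Each enrolled photo has been reduced to a feature vector and tagged
# with a name. A new face is identified by its nearest enrolled neighbor.
gallery = {
    "Joseph": [0.2, 0.8, 0.1],
    "Norman": [0.9, 0.1, 0.3],
}

def identify(face, cutoff=0.5):
    """Return the closest enrolled name, or 'unknown' if nothing is near."""
    best_name, best_dist = None, float("inf")
    for name, vec in gallery.items():
        dist = math.dist(face, vec)   # Euclidean distance in feature space
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < cutoff else "unknown"

print(identify([0.25, 0.75, 0.1]))   # close to Joseph's vector: "Joseph"
print(identify([5.0, 5.0, 5.0]))     # far from everyone: "unknown"
```

The point the sketch makes is the one Atick raises next: the algorithm itself is simple, so the real lever of safety or intrusion is who is enrolled in the gallery, and how those images got there.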
Atick: We have allowed the beast out of the bag by feeding it billions of faces and helping it by tagging ourselves. We are now in a world where machine learning is now allowing for the emergence of over 400 different algorithms of face recognition in the world. Therefore, any hope of controlling and requiring everybody to be, to be responsible in their use of face recognition is difficult.
Strong: And this is made worse by scraping, where a database is created by scanning the entire internet for public photos.
Atick: And so I began to panic in 2011, and I wrote an op-ed article saying it is time to press the panic button because the world is heading in a direction where face recognition is going to be omnipresent and faces are going to be everywhere available in, in, in databases. Computing power is becoming very, very massive to the point that we could potentially recognize billions of people. And at the time people said I was an alarmist, but they’re realizing that it’s exactly what’s happening today.
Strong: So in a way, he’s kind of lobbying against his own invention, even though he still uses biometrics to help build things he believes might benefit the greater good, like digital ID for people in developing nations.
Atick: The chilling effect is something that is unforgivable. If I cannot go outside in the street, because I believe somebody using an iPhone could take a picture of me and connect me to my online profile, this online and offline connection is, is a dangerous thing. And it’s happening right now.
Strong: And he thinks we urgently need some legal ground rules.
Atick: And so it’s no longer a technological issue. We cannot contain this powerful technology through technology. There has to be some sort of legal frameworks.
Strong: The way he sees it, the technological edge will keep pushing forward—with AI at the forefront. But the people building and using it? They’re at the center.
Atick: I believe there has to be some harmony between what technology can do for us and helps us live with dignity and have easier lives and connect with the people we love, but at the same time, it has to be within what our morals and our expectations as human beings allow it to be.
Strong: In other words, once again… it seems up to us. This episode was reported and produced by me, Anthony Green, Emma Cillekens, Tate Ryan-Mosley and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield. Thanks too to Kate Kaye of the Banned in PDX podcast. That’s it for season one. Thanks so much for choosing to spend your time with us. We’ll meet you back here in the new year. Until then, happy holidays and… thanks for listening. I’m Jennifer Strong.
How Twitter’s “Teacher Li” became the central hub of China protest information
It’s hard to describe the feeling that came after. It’s like everyone is coming to you and all kinds of information from all over the world is converging toward you and [people are] telling you: Hey, what’s happening here; hey, what’s happening there; do you know, this is what’s happening in Guangzhou; I’m in Wuhan, Wuhan is doing this; I’m in Beijing, and I’m following the big group and walking together. Suddenly all the real-time information is being submitted to me, and I don’t know how to describe that feeling. But there was also no time to think about it.
My heart was beating very fast, and my hands and my brain were constantly switching between several software programs—because you know, you can’t save a video with Twitter’s web version. So I was constantly switching software, editing the video, exporting it, and then posting it on Twitter. [Editor’s note: Li adds subtitles, blocks out account information, and compiles shorter videos into one.] By the end, there was no time to edit the videos anymore. If someone shot and sent over a 12-second WeChat video, I would just use it as is. That’s it.
I got the largest amount of [private messages] around 6:00 p.m. on Sunday night. At that time, there were many people on the street in five major cities in China: Beijing, Shanghai, Chengdu, Wuhan, and Guangzhou. So I basically was receiving a dozen private messages every second. In the end, I couldn’t even screen the information anymore. I saw it, I clicked on it, and if it was worth posting, I posted it.
People all over the country are telling me about their real-time situations. In order for more people not to be in danger, they went to the [protest] sites themselves and sent me what was going on there. Like, some followers were riding bikes near the presidential palace in Nanjing, taking pictures, and telling me about the situation in the city. And then they asked me to inform everyone to be cautious. I think that’s a really moving thing.
It’s like I have gradually become an anchor sitting in a TV studio, getting endless information from reporters on the scene all over the country. For example, on Monday in Hangzhou, there were five or six people updating me on the latest news simultaneously. But there was a break because all of them were fleeing when the police cleared the venue.
On the importance of staying objective
There are a lot of tweets that embellish the truth. From their point of view, they think it’s the right thing to do. They think you have to maximize the outrage so that there can be a revolt. But for me, I think we need reliable information. We need to know what’s really going on, and that’s the most important thing. If we were doing it for the emotion, then in the end I really would have been part of the “foreign influence,” right?
But if there is a news account outside China that can record what’s happening objectively, in real time, and accurately, then people inside the Great Firewall won’t have doubts anymore. At this moment, in this quite extreme situation of a continuous news blackout, to be able to have an account that can keep posting news from all over the country at a speed of almost one tweet every few seconds is actually a morale boost for everyone.
Chinese people grow up with patriotism, so they become shy or don’t dare to say something directly or oppose something directly. That’s why the crowd was singing the national anthem and waving the red flag, the national flag [during protests]. You have to understand that the Chinese people are patriotic. Even when they are demanding things [from the government], they do it with that sentiment.
Your microbiome ages as you do—and that’s a problem
These ecosystems appear to change as we age—and these changes can potentially put us at increased risk of age-related diseases. So how can we best look after them as we get old? And could an A-grade ecosystem help fend off diseases and help us lead longer, healthier lives?
It’s a question I’ve been pondering this week, partly because I know a few people who have been put on antibiotics for winter infections. These drugs—lifesaving though they can be—can cause mass destruction of gut microbes, wiping out the good along with the bad. How might people who take them best restore a healthy ecosystem afterwards?
I also came across a recent study in which scientists looked at thousands of samples of people’s gut microbe populations to see how they change with age. The standard approach to working out what microbes are living in a person’s gut is to look at feces. The idea is that when we have a bowel movement, we shed plenty of gut bacteria, so scientists can identify which species and strains are present in a stool sample to estimate what’s living in the intestines.
In this study, a team based at University College Cork in Ireland analyzed data that had already been collected from 21,000 samples of human feces. These had come from people all over the world, including Europe, North and South America, Asia, and Africa. Nineteen nationalities were represented. The samples were all from adults between 18 and 100.
The authors of this study wanted to get a better handle on what makes for a “good” microbiome, especially as we get older. It has been difficult for microbiologists to work this out. We do know that some bacteria can produce compounds that are good for our guts. Some seem to aid digestion, for example, while others lower inflammation.
But when it comes to the ecosystem as a whole, things get more complicated. At the moment, the accepted wisdom is that variety seems to be a good thing—the more microbial diversity, the better. Some scientists believe that unique microbiomes also have benefits, and that a collection of microbes that differs from the norm can keep you healthy.
The team looked at how the microbiomes of younger people compared with those of older people, and how they appeared to change with age. The scientists also looked at how the microbial ecosystems varied with signs of unhealthy aging, such as cognitive decline, frailty, and inflammation.
They found that the microbiome does seem to change with age, and that, on the whole, the ecosystems in our guts do tend to become more unique—it looks as though we lose aspects of a general “core” microbiome and stray toward a more individual one.
But this isn’t necessarily a good thing. In fact, this uniqueness seems to be linked to unhealthy aging and the development of those age-related symptoms listed above, which we’d all rather stave off for as long as possible. And measuring diversity alone doesn’t tell us much about whether the bugs in our guts are helpful or not in this regard.
The findings back up what these researchers and others have seen before, challenging the notion that uniqueness is a good thing. Another team has come up with a good analogy, which is known as the Anna Karenina principle of the microbiome: “All happy microbiomes look alike; each unhappy microbiome is unhappy in its own way.”
Of course, the big question is: What can we do to maintain a happy microbiome? And will it actually help us stave off age-related diseases?
There’s plenty of evidence to suggest that, on the whole, a diet with plenty of fruit, vegetables, and fiber is good for the gut. A couple of years ago, researchers found that after 12 months on a Mediterranean diet—one rich in olive oil, nuts, legumes, and fish, as well as fruit and veg—older people saw changes in their microbiomes that might benefit their health. These changes have been linked to a lowered risk of developing frailty and cognitive decline.
But at the individual level, we can’t really be sure of the impact that changes to our diets will have. Probiotics are a good example; you can chug down millions of microbes, but that doesn’t mean that they’ll survive the journey to your gut. Even if they do get there, we don’t know if they’ll be able to form niches in the existing ecosystem, or if they might cause some kind of unwelcome disruption. Some microbial ecosystems might respond really well to fermented foods like sauerkraut and kimchi, while others might not.
I personally love kimchi and sauerkraut. If they do turn out to support my microbiome in a way that protects me against age-related diseases, then that’s just the icing on the less-microbiome-friendly cake.
To read more, check out these stories from the Tech Review archive:
At-home microbiome tests can tell you which bugs are in your poo, but not much more than that, as Emily Mullin found.
Industrial-scale fermentation is one of the technologies transforming the way we produce and prepare our food, according to these experts.
Can restricting your calorie intake help you live longer? It seems to work for monkeys, as Katherine Bourzac wrote in 2009.
Adam Piore bravely tried caloric restriction himself to find out if it might help people, too. Teaser: even if you live longer on the diet, you will be miserable doing so.
From around the web:
Would you pay $15,000 to save your cat’s life? More people are turning to expensive surgery to extend the lives of their pets. (The Atlantic)
The World Health Organization will now start using the term “mpox” in place of “monkeypox,” which will be phased out over the next year. (WHO)
After three years in prison, He Jiankui—the scientist behind the infamous “CRISPR babies”—is attempting a comeback. (STAT)
Tech that allows scientists to listen in on the natural world is revealing some truly amazing discoveries. Who knew that Amazonian sea turtles make more than 200 distinct sounds? And that they start making sounds before they even hatch? (The Guardian)
These recordings provide plenty of inspiration for musicians. Whale song is particularly popular. (The New Yorker)
Scientists are using tiny worms to diagnose pancreatic cancer. The test, launched in Japan, could be available in the US next year. (Reuters)
The Download: circumventing China’s firewall, and using AI to invent new drugs
As protests against rigid covid control measures in China engulfed social media in the past week, one Twitter account has emerged as the central source of information: @李老师不是你老师 (“Teacher Li Is Not Your Teacher”).
People everywhere in China have sent protest footage and real-time updates to the account through private messages, and it has posted them, with the sender’s identity hidden, on their behalf.
The man behind the account, Li, is a Chinese painter based in Italy, who requested to be identified only by his last name in light of the security risks. He’s been tirelessly posting footage around the clock to help people within China get information, and also to inform the wider world.
The work has been taking its toll—he’s received death threats, and police have visited his family back in China. But it also comes with a sense of liberation, Li told Zeyi Yang, our China reporter. Read the full story.
Biotech labs are using AI inspired by DALL-E to invent new drugs
The news: Text-to-image AI models like OpenAI’s DALL-E 2—programs trained to generate pictures of almost anything you ask for—have sent ripples through the creative industries. Now, two biotech labs are using this type of generative AI, known as a diffusion model, to conjure up designs for new types of protein never seen in nature.
Why it matters: Proteins are the fundamental building blocks of living systems. These protein generators can be directed to produce designs for proteins with specific properties, such as shape, size, or function. In effect, this makes it possible to come up with new proteins to do particular jobs on demand. Researchers hope that this will eventually lead to the development of new and more effective drugs. Read the full story.