Podcast: In the AI of the Beholder


Ideas about what constitutes “beauty” are complex, subjective, and by no means limited to physical appearances. Elusive though it is, everyone wants more of it. That means big business and increasingly, people harnessing algorithms to create their ideal selves in the digital and, sometimes, physical worlds. In this episode, we explore the popularity of beauty filters, and sit down with someone who’s convinced his software will show you just how to nip and tuck your way to a better life.

We meet:

  • Shafee Hassan, Qoves Studio founder 
  • Lauren Rhue, Assistant Professor of Information Systems at the Robert H. Smith School of Business

Credits

This episode was reported by Tate Ryan-Mosley, and produced by Jennifer Strong, Emma Cillekens, Karen Hao and Anthony Green. We’re edited by Michael Reilly and Bobbie Johnson.

Transcript

[TR ID]

[Montage of songs about beauty]

Strong: Beauty has always been one of society’s greatest obsessions. And for as long as we’ve worshipped it… we’ve also found ways to change and enhance it. From makeup and clothes… to airbrushing photos… or a surgical nip and tuck. And now? AI.

[Montage of news coverage about beauty filters] 

[Sound from an Apple keynote featuring photo augmentation where women are made to smile more. Audience cheers]

Strong: You may not realize it…but this technology is right at your fingertips. In the beauty filters on your phone and social media. The tech has gotten so good at detecting where your eyes, nose, and jawline are, it’s easier than ever to adjust those features. With a simple swipe, you can tweak the arch of your eyebrow, or tune the curve of your lips and construct your ‘ideal image’.
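For a sense of how these filters locate features before warping them, here is a minimal landmark-detection sketch in Python. It uses Google's MediaPipe FaceMesh, the kind of face-geometry model consumer filters rely on; the file name is a placeholder, and real filter pipelines layer proprietary warping and rendering on top of this first step.

```python
# Minimal sketch: locate facial landmarks, the first step of any beauty filter.
# Assumes `pip install mediapipe opencv-python`; "selfie.jpg" is a placeholder.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

image = cv2.imread("selfie.jpg")
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    # 468 normalized (x, y, z) points covering eyes, nose, lips, and jawline.
    landmarks = results.multi_face_landmarks[0].landmark
    print(f"Found {len(landmarks)} landmarks")
    # A filter would now warp the mesh, e.g. enlarging eyes or narrowing the jaw.
```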

It’s possible there’ll be 45 billion cameras in the world by next year… along with ever more ways to use AI to parse, tag, edit and prioritize those images. Companies like Microsoft, NVIDIA and Face++… have all publicly released products meant to gauge beauty in some way. There are even AI-driven systems that promise to look at images of your face to tell you how beautiful you are (or aren’t) and what you can do about it.

Hassan: So we’re showing you what the algorithm is looking for. And if you so wish to change it, you can, you know, using these, these surgeries.

Strong: But can anyone, or any thing, be truly objective about beauty?

Rhue: Let’s just say I’ve never seen a culturally sensitive beauty AI.

Strong: And will this new wave of beauty enhancement leave our next generation with more insecurities than ever? 

Veronica: There’s like a way that the filters are kind of like detrimental to people’s like mental health and can be really crippling for some people because they’re comparing themselves to that. 

Strong: I’m Jennifer Strong and this episode we look at the role of machines in shaping our standards of beauty and how those standards shape us.

[SHOW ID]

Veronica: When I’m going to use a face filter it’s because there are certain things that I want to look differently. So if I’m not wearing makeup or if I think I don’t necessarily look my best, the beauty filter sort of changes certain things about your appearance. 

Veronica: Hi, I’m Veronica. I am 19 years old and I’m from Minnesota.

Sophia: I’m Sophia, I’m 15. And I’m also from Minnesota.

Strong: They’re sisters… and avid users of social media. They use beauty filters to enhance how they look in photos. They’re showing my producer Tate Ryan-Mosley some of their favorites. 

Sophia: Do I look like that? No. Not one bit.

Tate: Describe what makes you look different in that picture?

Sophia: It has these massive lashes that make my eyes look beautiful. My lips triple the size and my nose tinier.

Veronica: My ideal filter. It is called the Naomi filter on Snapchat. It clears your skin and then makes your eyes huge. 

Tate: When did you start using them? Do you remember?

Veronica: Fifth grade? I dunno. It was more like funny at first. Like it was kind of like a joke, like people weren’t trying to look good when they used the filters.

Sophia: I definitely was. Like 12 year old girls, like having access to something that makes you not look like you’re 12. Like that’s like the coolest thing ever. 

Strong: Filters are explosively popular. Some are funny… like the one that puts puppy ears and a fake nose on your face. Others are branded, geotagged, and there are artsy ones too. But hands down the most common kind are beauty filters, which change the appearance of someone in a photo in an effort to make them more attractive—often by reshaping and recoloring their features.

And the biggest fans are young girls.

For years now… these sisters have used filters almost every day… But they still aren’t sure how they feel about them.

Veronica: With social media in general. It’s impossible not to compare yourself to people. But I think that when people do use filters like that and they don’t disclose it. I feel like that can cause people to become more insecure or more affected by it than they would on just a regular photo, because you’re less appreciating, like, their natural beauty compared to the beauty that was like kind of formulated to make them look perfect.

Sophia: That’s not normal. That’s not a normal body. // We feel so pretty in them. And it’s like, why.. 

Veronica: There’s this somewhat of a validation when you’re meeting that standard? Even if it’s only for like a picture…

[Sound from TEDx talk: Epidemic of Beauty Sickness] 

Engeln: About 15 years ago, I was an eager young graduate student. And I spent a lot of time teaching.  

Strong: This is Renee Engeln, a professor of psychology at Northwestern University, giving a TEDx talk.

Engeln: And the more I listened to my female students, the more I picked up on something troubling. These bright, talented young women were spending alarming amounts of time thinking about, talking about, trying to modify their physical appearance. 

Now our perceptions of beauty are complicated. They have deep evolutionary roots. From a scientific perspective, beauty is not just desirable, but also rare. 

Strong: She went on to study this problem, interviewing women on how they were affected by constantly seeing images of unrealistic beauty standards… and what she found was… unexpected.

Engeln: Women know that the women they see in these images, aren’t representative of the general population of women. They are very aware that in the real world, nobody, nobody actually looks like this…It doesn’t seem to matter. Knowing better isn’t enough. The same woman who said this, for example, this body type is unrealistic skinny and her ribs are showing and you’re kind of like, yeah, right on. She followed it up with, I feel like I want to be like that.

Strong: Engeln gave her talk in 2013… well before AI beauty filters. And these days we’re not just seeing Photoshopped models in magazines, but photos of ourselves and our friends that have been retouched by algorithms.

And… it’s fueling an entirely new industry… 

Hassan: We realized that there’s a demand for learning how to correctly edit faces. And from that we realized there’s also a demand in assessing faces to understand what makes a face attractive or to better understand what changes will make a face look better, essentially.

Strong: Shafee Hassan is the founder of Qoves Studio. It’s just one of a number of new companies using neural networks to recognize things in people’s faces that could be deemed unattractive. He’s a structural engineer by training… which he says informs his work.  

Hassan: And these flaws show up time and time again. And they’re very common in certain ethnicities and less common in others and a computer can detect that really accurately because the pixel values, the color values are very similar regardless of where you’re looking at it or what section of the face it’s from.

Strong: Researchers believe social media giants like Facebook, Instagram and TikTok all use algorithms that measure the attractiveness of a face.

Hassan: …determine or predetermine if a piece of content is going to be successful or not, and then further push that content to a greater population of users.

Strong: To date, none have confirmed this. What we do know (from reporting by The Intercept) is that TikTok asked its content moderators to suppress videos with people they deemed unattractive, poor, or disabled. A TikTok spokesperson said those rules were an “early, blunt attempt at preventing bullying” and are no longer in place.

And this is where companies like Hassan’s come in. From his perspective, arguing about whether it’s right or wrong to promote and suppress images of people based on their looks?… is kind of beside the point. He says this system is the reality and facial features impact social status, professional prospects and income. But he thinks his company can make that process more transparent.

Hassan: So we’re showing you what the algorithm is looking for. And if you so wish to change it, you can, you know, using these, these surgeries. And that’s also something we provide as well. We provide ways, solutions, and it doesn’t even have to be cosmetic. Sleep can improve your under eye contours, which a beauty algorithm may penalize you by like 0.5 of a mark.

Strong: Uh-huh. You heard that right. Surgeries to help people embody what they think machines are looking for. His YouTube channel focuses on just that—with videos that get more than a million views. Like this one:

[Sound from YouTube Video featuring Hassan]

Hassan: Welcome to the first episode of defining beauty… Where I attempt to explore what makes a face attractive in the most objective way possible.

Strong: And they offer detailed reports about these perceived flaws.

Hassan: Ideally human eyes should be one eye width apart… here’s an article written about a 2008 experiment on specifically interpupillary distance between the eyes and how they influence attractiveness.

Strong:  He sees surgery as a bigger part of our future, especially as the importance of our online image grows.

Hassan: The whole point is we want to clear how people see surgery into being a more positive tool of social mobility, because your looks influence the way you’re treated, the amount of money you earn, how your socioeconomic status can move up or down. If you have a deformed jaw, I’m not going to tell you that you’re beautiful, just the way you are. And I think you should get correction on that because research has shown that a Jaw cervical angle deformity of like say 130 degrees or greater is very stringently rated as very unattractive by like the mass majority of lay-person raters. So, so like the, the idea of this political correct way of beauty, beauty is something that I kind of want to take on, even though it’s controversial. I feel like a lot of people do agree with what I’m saying. And that’s obviously why I have a platform.

Strong: I asked Hassan if he’s received much criticism for this work.  

Hassan: Funnily enough, the most harsh criticism I received were from my friends and family when I started off and never criticism from anywhere in the greater internet, uh, people were very curious as to the technology. It does raise some concerns about privacy, but obviously we do our best to keep everything as secure as possible. It does raise some concerns about, um, I suppose, an overarching sense of control, you know, telling people this is wrong with your face, blah, blah, blah 

Strong: But beauty algorithms have come under severe criticism for perpetuating racism and ageism. For example, in 2016, Microsoft and NVIDIA hosted a beauty pageant with an AI judge. And out of 6,000 entries, almost all of the 44 winners were white.

Hassan: Well, one of the big issues with beauty algorithms is that they typically trend with Caucasian faces. And so they penalize, uh, faces with non-Eurocentric features very harshly because they’re not trained with that kind of feature. Now, one of the things, when we were developing our algorithm, is train it with as many different faces as possible. I’ve always believed that attractive people are a race of their own. And so their attractive features kind of transcend a Eurocentric or a Caucasian or an Afro-centric or whatever centric you want to look at. Sharp jaws, sharp cheekbones, lean facial fat, like this isn’t a Eurocentric thing. This is just a biology thing.

Strong: And Hassan takes his ‘inspiration’ from the deeply dystopian ’90s film Gattaca.

[Sounds from the theatrical trailer for Gattaca]

Hassan: So Gattaca is very impactful because a lot of people aren’t born the most genetically gifted. And this goes back to the idea of the celebrities at the top, the, the good-looking attractive people at the top being there just because they’re genetically gifted. I don’t entirely believe that’s how they got there. I think a lot of it has to do with a bit of help from surgery, a bit of help from diet, a bit of help from world-class trainers. These are things that they will never speak about, but it’s, it’s part of the illusion of being unreachable and being exalted from the everyday man. So Gattaca is the best epitome, the best representation of basically what our company is about.

Strong: While reporting this story… my producer Tate decided to try out his facial assessment tool. And watching what unfolds next makes me extremely uncomfortable. 

I had this experience at a trade show a few years back… and though I knew it was a gimmick… it still planted fears in my head. And now on this Zoom screen? It goes beyond scare tactics and overpriced face cream… this tool recommends needles and knives…

Hassan: So, we’re on the website. And so far so good, we scroll down. So this is your, um, image. We can upload it. I’m not a robot… Here. Here. Right. Uh, and these are the flaws that the computer detects.

Hassan: Deepened nasolabial folds. These are these lines here, and that’s because you’re smiling… Under-eye contour depression, which is definitely here… the region just instantly sinks, and then it goes back up as it comes towards the cheekbones. So generally for attractive faces, the contour is in line, it’s flush with the eyes. So slight, slight dark circles. Puffy lower eyelid, which I do agree with. This eyelid is definitely really puffy for whatever reason, but this one is not. So that’s what it’s picked up, at 0.5 or 0.58, which is decently strong. A nasojugal fat pad, uh, that’s this pad here, it’s very minor, and so that’s at 0.3, which is, I think, accurate. It’s not something I’d worry highly about. The computer thinks that you have an epicanthic fold, which is an Asian monolid as they call it… and that’s probably because your upper eyelid fat covers up a lot of your upper eyelid. So it basically sees it as the whole thing being one eyelid.

Strong: Let’s hit the pause button here for some context… however weird it is for me to describe my friend and colleague this way…. you can’t see Tate. So, with her permission… here we go: she’s tall, blond, has these big blue eyes, strong cheekbones, and a giant smile… she’s young too, as in double digits younger than I am… and as far as those genetics go? She’s the daughter of a pro athlete. 

But we’re hearing recommendations on what she can do to fix her supposed flaws… including different types of plastic surgery… and I can’t help but think how harshly this tool might judge the rest of us… especially someone who isn’t young and white.

Strong: We’re going to take a short break, but first… Our friends over at the Financial Times have relaunched their podcast, Tech Tonic. Find out how a device like your Fitbit might be the first to know you’ve got covid… or what antitrust laws mean for a smoked fish specialist… innovation editor John Thornhill takes us into emergency rooms, cruise ships and classrooms to explore how tech has reshaped our world… and what that means for us. 

All five episodes are available now wherever you get your podcasts… just search Tech Tonic. 

We’ll be back … right after this.

[MIDROLL]

Strong: What does it mean to take already flawed standards of beauty… largely imposed upon us by ourselves… and instead? Hand this mess off to algorithms that are even more flawed, littered with bias, and that further reinforce eurocentric features as the definition of what’s beautiful…

Whether that’s an Instagram filter making eyes larger… skin smoother and jawlines sharper… Or software pointing out how your features miss the standardized mark…  

…and so we called up a researcher who investigates how technology impacts the choices we make.

Rhue: And I was looking at the facial recognition tools that were out there to try to better understand the pictures. And that’s when I realized that there were scoring algorithms for beauty.   

Strong: Lauren Rhue is a professor at the University of Maryland School of Business.

Rhue: And I thought that seems impossible. Beauty is completely in the eye of the beholder. There’s all these different cultural standards that have to do with beauty. How can you train an algorithm to determine whether or not someone is beautiful? 

Strong: This type of scoring is different from what Hassan does… but both apply the same technology.

Rhue: Well, you upload a picture and they, on a score of zero to 100, it’ll tell you how beautiful this person is. They actually, the paper that I’m writing, it’s looking at Face Plus Plus, and they divide it into a male score and a female score. So women think this person is beautiful, 85 out of a hundred, whereas men think maybe she’s 90 out of a hundred.

Strong: It’s mostly unclear which companies use beauty scoring algorithms… but for those that want to, they’re readily for sale. For example, Face++, one of the largest players in this space and owned by Chinese tech unicorn Megvii, offers its beauty scoring feature as part of its face recognition system. Instagram and Facebook have denied using such algorithms. TikTok and Snapchat declined to comment… but Rhue says recommendation algorithms themselves often end up gauging attractiveness… regardless of whether they’re intended to. 
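To make the mechanics concrete, here is a minimal sketch of what querying such a scoring service can look like, based on Face++’s publicly documented Detect API. The endpoint, parameters, and response fields reflect that documentation as we understand it; the credentials and image URL are placeholders.

```python
# Minimal sketch: request beauty scores for a face image from Face++'s Detect API.
# API key/secret and the image URL are placeholders; consult Face++'s docs for details.
import requests

resp = requests.post(
    "https://api-us.faceplusplus.com/facepp/v3/detect",
    data={
        "api_key": "YOUR_API_KEY",
        "api_secret": "YOUR_API_SECRET",
        "image_url": "https://example.com/portrait.jpg",
        "return_attributes": "beauty",  # ask for the beauty-scoring attribute
    },
)
resp.raise_for_status()

for face in resp.json().get("faces", []):
    beauty = face["attributes"]["beauty"]
    # The service returns two 0-100 scores, one per rater gender, as Rhue describes.
    print(f"male_score={beauty['male_score']}, female_score={beauty['female_score']}")
```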

Rhue: Well, if you look at what Instagram wants it’s going to be essentially models, right? You’re not going to see a lot of different types of facial features and expressions. And, and that’s going to perpetuate this idea of, of beauty because, um, because of the lack of diversity in what you see in Instagram, and what’s extremely popular on Instagram

Strong: In other words, the pictures judged to be most beautiful by users get the most likes… and that’s what gets recommended to others… 

Rhue: We’re narrowing the type of pictures that are available to everybody.

Strong: When you combine that with people pervasively applying those beauty filters to their photos… it’s led to something termed “Instagram Face”… a particular aesthetic that’s prioritized and rewarded on social media. It’s created a new idealized look that dominates the platform.

Rhue: I understand it’s more of an entertainment value as to why we have beauty filters, but our choice of beauty filters is definitely informed by the culture, right? Informed by what the beauty standards are. And a lot of times there are Eurocentric beauty standards, and you can see that with some of the facial recognition issues that have continued to crop up. So the fact that on Zoom, people with very dark skin can, literally, their skin gets lost. For Asian faces, their eyes weren’t originally seen by cameras. Right? And so the fact that a lot of the beauty filters are there to make your eyes look larger. And part of it’s that that’s what people want. And that’s where I think the chicken and the egg comes in. How are you going to expand the idea of beauty away from just Eurocentric standards of beauty if we see these beauty filters that perpetuate certain characteristics as more attractive than others?

Strong: Social media is well-known to be exclusionary, as is the beauty industry. But so is AI.

Rhue: I became interested of course, to see if you could see these cultural biases in the algorithms. And of course you can. Let’s just say I’ve never seen a culturally sensitive beauty AI.

Strong: Rhue’s research found that women with lighter skin and hair were consistently rated as more attractive than women with darker skin and hair. And filters too, which use facial detection, are likely to have some racial bias built in. And the consequences go well beyond the digital world.

Rhue: I think we should be very careful when we think about choice in the digital space. I mean, there have been extensive studies that have shown the order in which you recommend something to somebody changes their actual preferences. So as we have all of these, uh, recommendation algorithms and these decision support tools that are helping us figure out what to buy or how to position ourselves in social media it’s changing what we think we want.

Strong: And she believes the applications of AI in beauty are largely being overlooked by the tech community.

Rhue: It’s just not something that we’re really talking about. And I think that speaks to the importance of diversity in this space. A lot of people say, Oh, well, beauty is just not important because we’re tech people and we’re objective. But of course, I mean, beauty is this huge industry… it has such an impact on people. And the idea that there isn’t more research is, is really interesting to me.

Strong: Next episode… we look to the future of digital payments.

Omar Farooq: We believe that there’s a path forward where money can be smarter itself. So you can actually program the coin and it can control who it goes to. So, that is not really possible in today’s centralized systems. That can only be done in a decentralized, smart money enabled system.  

Strong: This episode was reported by Tate Ryan-Mosley, and produced by me, Emma Cillekens, Karen Hao and Anthony Green. We’re edited by Michael Reilly and Bobbie Johnson.

Thanks for listening, I’m Jennifer Strong. 

[TR ID]

People are already using ChatGPT to create workout plans


Hitting the gym

Despite the variable quality of ChatGPT’s fitness tips, some people have actually been following its advice in the gym. 

John Yu, a TikTok content creator based in the US, filmed himself following a six-day full-body training program courtesy of ChatGPT. He instructed it to give him a sample workout plan each day, tailored to which bit of his body he wanted to work (his arms, legs, etc), and then did the workout it gave him. 

The exercises it came up with were perfectly fine and easy enough to follow. However, Yu found that the moves lacked variety. “Strictly following what ChatGPT gives me is something I’m not really interested in,” he says. 

Lee Lem, a bodybuilding content creator based in Australia, had a similar experience. He asked ChatGPT to create an “optimal leg day” program. It suggested the right sorts of exercises—squats, lunges, deadlifts, and so on—but the rest times between them were far too brief. “It’s hard!” Lem says, laughing. “It’s very unrealistic to only rest 30 seconds between squat sets.”

Lem hit on the core problem with ChatGPT’s suggestions: they fail to consider human bodies. As both he and Yu found out, repetitive movements quickly leave us bored or tired. Human coaches know to mix their suggestions up. ChatGPT has to be explicitly told.

For some, though, the appeal of an AI-produced workout is still irresistible—and something they’re even willing to pay for. Ahmed Mire, a software engineer based in London, is selling ChatGPT-produced plans for $15 each. People give him their workout goals and specifications, and he runs them through ChatGPT. He says he’s already signed up customers since launching the service last month and is considering adding the option to create diet plans too. ChatGPT is free, but he says people pay for the convenience. 
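Mire presumably works through the chat interface, but the same idea is easy to automate. Here is a rough sketch using OpenAI’s Python client; the model name, prompts, and customer details are illustrative, not his actual setup.

```python
# Rough sketch: turn a customer's goals into a workout plan via OpenAI's API.
# Model name and prompts are illustrative; requires `pip install openai` and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

customer_spec = "Goal: build leg strength. 3 sessions/week, 45 minutes, home gym, bad knees."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a cautious personal trainer. "
         "Include realistic rest times and flag exercises a doctor should clear."},
        {"role": "user", "content": f"Write a one-week workout plan. {customer_spec}"},
    ],
)

print(response.choices[0].message.content)
```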

What united everyone I spoke to was their decision to treat ChatGPT’s training suggestions as entertaining experiments rather than serious athletic guidance. They all had a good enough understanding of fitness, and what does and doesn’t work for their bodies, to be able to spot the model’s weaknesses. They all knew they needed to treat its answers skeptically. People who are newer to working out might be more inclined to take them at face value.

The future of fitness?

This doesn’t mean AI models can’t or shouldn’t play a role in developing fitness plans. But it does underline that they can’t necessarily be trusted. ChatGPT will improve and could learn to ask its own questions. For example, it might ask users if there are any exercises they hate, or inquire about any niggling injuries. But essentially, it can’t come up with original suggestions, and it has no fundamental understanding of the concepts it is regurgitating.

How Roomba tester’s private images ended up on Facebook


A Roomba recorded a woman on the toilet. How did screenshots end up on social media?

This episode we go behind the scenes of an MIT Technology Review investigation that uncovered how sensitive photos taken by an AI-powered vacuum were leaked and landed on the internet.

Reporting:

  • A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
  • Roomba testers feel misled after intimate images ended up on Facebook

We meet:

  • Eileen Guo, MIT Technology Review
  • Albert Fox Cahn, Surveillance Technology Oversight Project

Credits:

This episode was reported by Eileen Guo and produced by Emma Cillekens and Anthony Green. It was hosted by Jennifer Strong and edited by Amanda Silverman and Mat Honan. This show is mixed by Garret Lang with original music from Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.

Full transcript:

[TR ID]

Jennifer: As more and more companies put artificial intelligence into their products, they need data to train their systems.

And we don’t typically know where that data comes from. 

But sometimes just by using a product, a company takes that as consent to use our data to improve its products and services. 

Consider a device in the home, where setting it up involves just one person consenting on behalf of every person who enters… and anyone living there, or just visiting, might be unknowingly recorded.

I’m Jennifer Strong and this episode we bring you a Tech Review investigation of training data… that was leaked from inside homes around the world. 

[SHOW ID] 

Jennifer: Last year someone reached out to a reporter I work with… and flagged some pretty concerning photos that were floating around the internet. 

Eileen Guo: They were essentially pictures from inside people’s homes that were captured from low angles, sometimes had people and animals in them that didn’t appear to know that they were being recorded in most cases.

Jennifer: This is investigative reporter Eileen Guo.

And based on what she saw… she thought the photos might have been taken by an AI-powered vacuum. 

Eileen Guo: They looked like, you know, they were taken from ground level and pointing up so that you could see whole rooms, the ceilings, whoever happened to be in them…

Jennifer: So she set to work investigating. It took months.  

Eileen Guo: So first we had to confirm whether or not they came from robot vacuums, as we suspected. And from there, we also had to then whittle down which robot vacuum it came from. And what we found was that they came from the largest manufacturer, by the number of sales of any robot vacuum, which is iRobot, which produces the Roomba.

Jennifer: It raised questions about whether or not these photos had been taken with consent… and how they wound up on the internet. 

In one of them, a woman is sitting on a toilet.

So our colleague looked into it, and she found the images weren’t of customers… they were of Roomba employees… and people the company calls ‘paid data collectors’.

In other words, the people in the photos were beta testers… and they’d agreed to participate in this process… although it wasn’t totally clear what that meant. 

Eileen Guo: They’re really not as clear as you would think about what the data is ultimately being used for, who it’s being shared with and what other protocols or procedures are going to be keeping them safe—other than a broad statement that this data will be safe.

Jennifer: She doesn’t believe the people who gave permission to be recorded, really knew what they agreed to. 

Eileen Guo: They understood that the robot vacuums would be taking videos from inside their houses, but they didn’t understand that, you know, they would then be labeled and viewed by humans or they didn’t understand that they would be shared with third parties outside of the country. And no one understood that there was a possibility at all that these images could end up on Facebook and Discord, which is how they ultimately got to us.

Jennifer: The investigation found these images were leaked by some data labelers in the gig economy.

At the time they were working for a data labeling company (hired by iRobot) called Scale AI.

Eileen Guo: It’s essentially very low-paid workers that are being asked to label images to teach artificial intelligence how to recognize what it is that they’re seeing. And so the fact that these images were shared on the internet was just incredibly surprising, given how sensitive they were.

Jennifer: Labeling these images with relevant tags is called data annotation. 

The process makes it easier for computers to understand and interpret the data in the form of images, text, audio, or video.

And it’s used in everything from flagging inappropriate content on social media to helping robot vacuums recognize what’s around them. 

Eileen Guo: The most useful datasets to train algorithms is the most realistic, meaning that it’s sourced from real environments. But to make all of that data useful for machine learning, you actually need a person to go through and look at whatever it is, or listen to whatever it is, and categorize and label and otherwise just add context to each bit of data. You know, for self driving cars, it’s, it’s an image of a street and saying, this is a stoplight that is turning yellow, this is a stoplight that is green. This is a stop sign. 
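As a concrete illustration of what a labeler might produce, here is a minimal, hypothetical annotation record in Python. The schema is invented for illustration; it is not iRobot’s or Scale AI’s actual format.

```python
# Hypothetical sketch of a single annotation record a data labeler might produce.
# The schema is invented for illustration, not Scale AI's or iRobot's real format.
import json

record = {
    "image": "frame_00142.jpg",          # raw frame captured by the device
    "source": "robot_vacuum_camera",
    "annotations": [
        {"label": "power_cord", "bbox": [312, 410, 388, 447]},  # x1, y1, x2, y2
        {"label": "stray_sock", "bbox": [120, 455, 180, 490]},
        {"label": "person",     "bbox": [0, 60, 240, 480]},     # sensitive content
    ],
}

# Records like this, in bulk, become the training set for an obstacle-recognition model.
print(json.dumps(record, indent=2))
```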

Jennifer: But there’s more than one way to label data. 

Eileen Guo: If iRobot chose to, they could have gone with other models in which the data would have been safer. They could have gone with outsourcing companies that may be outsourced, but people are still working out of an office instead of on their own computers. And so their work process would be a little bit more controlled. Or they could have actually done the data annotation in house. But for whatever reason, iRobot chose not to go either of those routes.

Jennifer: When Tech Review got in contact with the company—which makes the Roomba—they confirmed the 15 images we’ve been talking about did come from their devices, but from pre-production devices. Meaning these machines weren’t released to consumers.

Eileen Guo: They said that they started an investigation into how these images leaked. They terminated their contract with Scale AI, and also said that they were going to take measures to prevent anything like this from happening in the future. But they really wouldn’t tell us what that meant.  

Jennifer: These days, the most advanced robot vacuums can efficiently move around the room while also making maps of areas being cleaned. 

Plus, they recognize certain objects on the floor and avoid them. 

It’s why these machines no longer drive through certain kinds of messes… like dog poop for example.

But what’s different about these leaked training images is the camera isn’t pointed at the floor…  

Eileen Guo: Why do these cameras point diagonally upwards? Why do they know what’s on the walls or the ceilings? How does that help them navigate around the pet waste, or the phone cords or the stray sock or whatever it is. And that has to do with some of the broader goals that iRobot has and other robot vacuum companies has for the future, which is to be able to recognize what room it’s in, based on what you have in the home. And all of that is ultimately going to serve the broader goals of these companies which is create more robots for the home and all of this data is going to ultimately help them reach those goals.

Jennifer: In other words… This data collection might be about building new products altogether.

Eileen Guo: These images are not just about iRobot. They’re not just about test users. It’s this whole data supply chain, and this whole new point where personal information can leak out that consumers aren’t really thinking of or aware of. And the thing that’s also scary about this is that as more companies adopt artificial intelligence, they need more data to train that artificial intelligence. And where is that data coming from? Is.. is a really big question.

Jennifer: Because in the US, companies aren’t required to disclose that…and privacy policies usually have some version of a line that allows consumer data to be used to improve products and services… Which includes training AI. Often, we opt in simply by using the product.

Eileen Guo: So it’s a matter of not even knowing that this is another place where we need to be worried about privacy, whether it’s robot vacuums, or Zoom or anything else that might be gathering data from us.

Jennifer: One option we expect to see more of in the future… is the use of synthetic data… or data that doesn’t come directly from real people. 

And she says companies like Dyson are starting to use it.

Eileen Guo: There’s a lot of hope that synthetic data is the future. It is more privacy protecting because you don’t need real world data. There have been early research that suggests that it is just as accurate if not more so. But most of the experts that I’ve spoken to say that that is anywhere from like 10 years to multiple decades out.

Jennifer: You can find links to our reporting in the show notes… and you can support our journalism by going to tech review dot com slash subscribe.

We’ll be back… right after this.

[MIDROLL]

Albert Fox Cahn: I think this is yet another wake up call that regulators and legislators are way behind in actually enacting the sort of privacy protections we need.

Albert Fox Cahn: My name’s Albert Fox Cahn. I’m the Executive Director of the Surveillance Technology Oversight Project.  

Albert Fox Cahn: Right now it’s the Wild West and companies are kind of making up their own policies as they go along for what counts as a ethical policy for this type of research and development, and, you know, quite frankly, they should not be trusted to set their own ground rules and we see exactly why with this sort of debacle, because here you have a company getting its own employees to sign these ludicrous consent agreements that are just completely lopsided. Are, to my view, almost so bad that they could be unenforceable all while the government is basically taking a hands off approach on what sort of privacy protection should be in place. 

Jennifer: He’s an anti-surveillance lawyer… a fellow at Yale and with Harvard’s Kennedy School.

And he describes his work as constantly fighting back against the new ways people’s data gets taken or used against them.

Albert Fox Cahn: What we see in here are terms that are designed to protect the privacy of the product, that are designed to protect the intellectual property of iRobot, but actually have no protections at all for the people who have these devices in their home. One of the things that’s really just infuriating for me about this is you have people who are using these devices in homes where it’s almost certain that a third party is going to be videotaped and there’s no provision for consent from that third party. One person is signing off for every single person who lives in that home, who visits that home, whose images might be recorded from within the home. And additionally, you have all these legal fictions in here like, oh, I guarantee that no minor will be recorded as part of this. Even though as far as we know, there’s no actual provision to make sure that people aren’t using these in houses where there are children.

Jennifer: And in the US, it’s anyone’s guess how this data will be handled.

Albert Fox Cahn: When you compare this to the situation we have in Europe where you actually have, you know, comprehensive privacy legislation where you have, you know, active enforcement agencies and regulators that are constantly pushing back at the way companies are behaving. And you have active trade unions that would prevent this sort of a testing regime with a employee most likely. You know, it’s night and day. 

Jennifer: He says having employees work as beta testers is problematic… because they might not feel like they have a choice.

Albert Fox Cahn: The reality is that when you’re an employee, oftentimes you don’t have the ability to meaningfully consent. You oftentimes can’t say no. And so instead of volunteering, you’re being voluntold to bring this product into your home, to collect your data. And so you’ll have this coercive dynamic where I just don’t think, you know, at, at, from a philosophical perspective, from an ethics perspective, that you can have meaningful consent for this sort of an invasive testing program by someone who is in an employment arrangement with the person who’s, you know, making the product.

Jennifer: Our devices already monitor our data… from smartphones to washing machines. 

And that’s only going to get more common as AI gets integrated into more and more products and services.

Albert Fox Cahn: We see evermore money being spent on evermore invasive tools that are capturing data from parts of our lives that we once thought were sacrosanct. I do think that there is just a growing political backlash against this sort of technological power, this surveillance capitalism, this sort of, you know, corporate consolidation.  

Jennifer: And he thinks that pressure is going to lead to new data privacy laws in the US. Partly because this problem is going to get worse.

Albert Fox Cahn: And when we think about the sort of data labeling that goes on, the sorts of, you know, armies of human beings that have to pore over these recordings in order to transform them into the sorts of material that we need to train machine learning systems. There then is an army of people who can potentially take that information, record it, screenshot it, and turn it into something that goes public. And, and so, you know, I, I just don’t ever believe companies when they claim that they have this magic way of keeping safe all of the data we hand them. There’s this constant potential harm when we’re, especially when we’re dealing with any product that’s in its early training and design phase.

[CREDITS]

Jennifer: This episode was reported by Eileen Guo, produced by Emma Cillekens and Anthony Green, edited by Amanda Silverman and Mat Honan. And it’s mixed by Garret Lang, with original music from Garret Lang and Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.

The Download: ChatGPT workout plans, and cleaning up aviation


When I opened the email telling me I’d been accepted to run the London Marathon, I felt elated. And then terrified. Barely six months on from my last marathon, I knew how dedicated I’d have to be to keep running day after day, week after week, month after month, through rain, cold, tiredness, grumpiness, and hangovers.

The marathon is the easy part. It’s the constant grind of the training that kills you—and finding ways to keep it fresh and interesting is part of the challenge. Some exercise nuts think they’ve found a way to liven their routines up: by using the AI chatbot ChatGPT as a sort of proxy personal trainer.

Its appeal is obvious. ChatGPT answers questions in seconds, saving the need to sift through tons of information, and asking follow-up questions will give you a more detailed and personalized answer. But is ChatGPT really the future of how we work out? Or is it just a confident bullshitter? Read the full story.

—Rhiannon Williams

How new technologies could clean up air travel

Aviation is a notorious “hard-to-decarbonize” sector. It makes up about 3% of the world’s greenhouse-gas emissions, and airline traffic could more than double from today’s levels by 2050. 

When it comes to flying, the technical challenge of cutting emissions is especially steep. Fuels for planes need to be especially light and compact, so planes can make it into the sky and still have room for people or cargo. But the industry has some promising ideas for cleaning up its act—and some of them are already taking off. Read the full story.
