When AI becomes child’s play

Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for them to easily understand or navigate. But adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects. But kids are notoriously inconsistent. In this episode, we examine the relationship AI has with kids. 

We Meet:

  • Judith Danovitch, associate professor of psychological and brain sciences at the University of Louisville. 
  • Lisa Anthony, associate professor of computer science at the University of Florida.
  • Tanya Basu, senior reporter at MIT Technology Review.

Credits:

This episode was reported and produced by Jennifer Strong, Anthony Green, and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Jennifer: It wasn’t long ago that playing hopscotch, board games or hosting tea parties with dolls was the norm for kids….

Some TV here and there… a day at the park… bikes.

But… we’ve seen hopscotch turn to TikTok… board games become video games… and dolls at tea parties… do more than just talk back.

[Upsot: Barbie ad “Barbie… This is my digital makeover… I insert my own iPad and open my app… and the mirror lights up… I do my eyeshadow, lipstick and blush… How amazing is that?”]

Jennifer: Kids are exposed to devices almost from birth, and often know how to use a touchscreen before they can walk. 

Thing is… these systems aren’t really designed for kids.

So… what does it mean to invite Alexa to the party? 

[Upsot.. 1’30-1’40 “Hi there and welcome to Amazon storytime. You can choose anything from pirates to princesses. Fancy that!”]

Jennifer: And… what happens when toys are connected to the internet and kids can ask them anything… and they’ll not only answer back… but also learn from your kids and collect their data?

Jennifer: I’m Jennifer Strong and this episode, we explore the relationship AI has with kids. 

Judith: My name is Judith Danovitch. I’m an associate professor of psychological and brain sciences at the University of Louisville. So, I’m interested in how children think, and specifically, I’m interested in how children think about information sources. For example, when they have a question about something, how do they go about figuring out where to find the answer and which answers to trust. 

Jennifer: So, when she found her son sitting alone talking to Siri one afternoon… It sparked her interest right away. She says he was four years old when he started asking it questions.

Judith: Like, what’s my name? And it seemed like he was kind of testing her to see what she would say in response. Like, did she actually, you know, know these things about him? The funny part was that the device belonged to my husband, whose name is Nick. And so when he said, what’s my name? She said, Nick. And he said, no, this is David. So, you know, it was plausible. It wasn’t even that she just said, I don’t know, she actually said something, but it was wrong. 

Jennifer: Then… he started asking questions that weren’t just about himself…

Judith: Which was really interesting because it seemed like he was really trying to figure out, is this device somehow watching me and can it see me right now? And then he moved on to asking what I can only describe as a really broad range of questions. Some of which I recognized as topics that we had talked about. So he asked her, for example, do eagles eat snakes? And I guess he and my husband had been talking about eagles and snakes recently, but then he also asked her some really kind of profound questions that he hadn’t really asked us. So at one point he asked, why do things die? Which, you know, is a pretty heavy thing for a four-year-old to be asking Siri.

Jennifer: And as this went on… she started secretly taping him.

David: How do you get out of Egypt? 

Is buttface a bad word?

… And why do things die?

Judith: Later on that day after I stopped recording him and he had kind of lost interest in this activity, I asked him a bit more and he told me that he thought there really was a tiny person inside there. That’s who Siri was. She was a tiny person inside the iPad. And that’s who was answering his questions. He didn’t have as good of an insight into where she got her answers from. So he wasn’t able to say, Oh, they’re coming from the internet. And that’s one of the things that I’ve become very interested in is, well, when kids hear these devices, what, where do they think this information is coming from? Is it a tiny person or is it, you know, something else. And, and that ties into questions of, do you believe it? Right? So, should you trust what the device tells you in response to your question?

Jennifer: It’s the kind of trust that little kids place in their parents and teachers.

Judith: Anecdotally I think parents think like, oh, kids are gullible and they’ll trust everything they see on the internet. But actually what we’ve found both with research in the United States and with research with children in China is that young children in preschool ages about four to six are actually very skeptical of the internet and given the choice they’d rather consult a person.

Jennifer: But she says that could change as voice activated devices become more and more commonplace.

Judith: And we’ve been trying to find out if kids have similar kinds of intuitions about the devices as they do about the internet in general, but we are seeing similar patterns with young children, where again, young children given the choice are saying, I would rather go ask a person for information, at least when the information has to do with facts. Like, you know, where does something live, where do these things come from? And most of our research has focused on facts.

Jennifer: She does see a shift around 7 or 8, when kids start to trust the internet and voice assistants more. But she wants to be clear: this is early research… And it’s not that kids believe devices a hundred percent of the time as they get older; they just believe them more often.

But why are the youngest kids… the ones with big imaginations… more skeptical than older kids? 

Judith: Younger kids I think are skeptical, frankly, because these are quite strange objects, right? In some ways they’re like people, right? It talks to you. It kind of has a back and forth conversation. But in other ways, these devices are nothing like a person at all, right? It’s this strange box. It doesn’t have a face or eyes or anything like that to suggest it’s an animate object, and it isn’t, of course, right? So, you know, for young kids, that should actually be quite difficult to figure out, and it makes sense that they might be cautious about it, that they might say, well, yeah, maybe you can do some things, but I’m going to go with the safe bet, which is to ask my mom or my teacher over this sort of strange unknown thing.

Jennifer: Alexa has more than 80,000 skills in the U.S.… and there’s developer support for skills aimed at kids.

[School bell rings. Sounds of children chattering as mumble under next VO.]

Jennifer: Sesame Street designed one for kids to play hide-and-seek with Elmo… 

Elmo: Time to guess where Elmo is hiding!  

Jennifer: Nickelodeon has a way to put them under the sea with Spongebob Squarepants… 

[Spongebob Squarepants theme music]

Jennifer: And then there’s a game called Kids Court. It’s an app developed by Pretzel Labs that lets Alexa be the judge of children’s squabbles.

Alexa: Welcome to Kids Court, where everything unfair is finally brought to justice. All rise for the honorable Judge Lexi. Do you want to settle a case or hear about previous trials?

Child: Settle a case. 

Alexa: Prosecutor, please tell the court what happened.

Child: Brianna read my diary! 

Alexa: Kids Court was created for cases like this. Is the defendant present or absent?

Child: Present.

Alexa: Defendant, you’re in trouble. What do you plead? Say guilty if you did it, or not guilty if you didn’t.

Brianna: Not guilty!

[Music transition ]

Jennifer: There are a lot of companies offering connected experiences designed for kids. Among them: voice games aimed at parents wary of screen time. One system is called Yoto, but it’s got much more in common with a cassette player than a smart speaker. New content like music or stories can be added to the system via cards that, when inserted into the speaker, trigger a download from Yoto’s servers.

There’s not much to it. There’s no voice assistant, no camera, no microphone… and its pixelated display is really only meant to show the time or a cartoonish image related to what’s playing.

Kate Daniels: The best part about it is it’s just so simple. I mean, our youngest turned two yesterday and he’s known how to use it for the last year. You know? I don’t think it needs to be all fancy.  

Jennifer: Kate and Brian Daniels just made the move from New York City to Boston with their three kids in tow—who are all avid users of Yoto. 

Parker Daniels: A song album my dad put on is Hamilton. Um, I really like it.

Jennifer: That’s their 6 year old son Parker. He’s going through a binder filled with cards… which are used to operate the device. 

Parker Daniels: Um, and I’m now… I’m looking for the rest and I have like a whole, like book.  

Charlotte Daniels: And on some cards, there’s lots of songs and some there’s lots of stories, but different chapters. 

Jennifer: And that’s his younger sister, Charlotte. 

Brian Daniels: So we’re, we’re also able to, uh, record stories and put them on, uh, custom cards so that the kids can play the stories that I come up with. And they love when I tell them stories, but I’m not always available, you know, working from home and being busy. So this allows them to play those stories at any time. 

Jennifer: Screenless entertainment options are key for this family… who, apart from Friday night pizza and a movie, don’t spend much time gathered around the TV. But beyond limiting screen time (while they still can), Mom and Dad say they also enjoy the peace of mind that the kids don’t have a direct line to Google.

Kate Daniels: We have complete control over what they have access to, which is another great thing. We had an Alexa for a while that someone had given us, and it didn’t work well for us, because they could say, Alexa, tell us about, and they could pick whatever they wanted, and we didn’t know what was going to come back. So we can really curate what they’re allowed to listen to and experience.

Jennifer: Still, they admit they haven’t quite figured out how to navigate introducing more advanced technology when the time comes.

Kate Daniels: I think that’s a really hard question. You know, we, as parents, we want to really curate everything that they’re exposed to, but ultimately we’re not going to be able to do that. Even with all of the software out there to Big Brother their own phones and watch every text message and everything they’re surfing. I don’t… it’s a big question and I don’t think we have the answer yet.

Tanya: So another reason why these voice games are becoming more popular is that they’re screen-free, which is really interesting and important. Given the fact that kids are usually recommended not to have more than two hours of screen time per day. And that’s when they’re about four or five. 

Hi my name is Tanya Basu, I’m a senior reporter at MIT Technology Review and I cover humans and technology. 

Younger kids, especially, should not be exposed to as much screen time. And audio based entertainment often seems healthier to parents because it gives them that ability to be entertained, to be educated, to think about things in a different way that doesn’t require basically a screen in front of their face and potentially, creating problems later down the road that we just don’t know about right now.

Jennifer: But designing these systems… isn’t without complications. 

Tanya:  A lot of it is that kids are learning how to speak, you know, you and I are having this conversation right now, we have an understanding of what a dialogue is in a way that children don’t. So there’s obviously that. There’s also the fact that kids don’t really sit still. So, you know, one might be far away or screaming or saying a word differently. And that obviously affects the way developers might be creating these games. And one big thing that a lot of people I talked to mentioned was the fact that kids are not a universal audience. And I think a lot of people forget that, especially ones who are developing these games… 

Jennifer: Still, she says the ability for kids to understand complexity shouldn’t be underestimated. 

Tanya: I’m honestly surprised that there aren’t more games for kids. And I’m surprised mostly that the games that are out there tend to be story kind of games and not, you know, a board game or something that is visually representative. We see with Roblox and a lot of the more popular video games that came out during the pandemic how complex they are, and the fact that kids can handle complex storylines, complex gaming, complex movement. But a lot of these voice games are so simple. And a lot of that is because the technology is just not there. But I am surprised that the imagination in terms of seeing where these games are going is quite limited thus far. So I’m really curious to see how these games develop over the next few years.

Jennifer: We’ll be back, right after this.

[MIDROLL]

Lisa: There’s always this challenge of throwing technology at kids and just sort of expecting them to adapt. And I think it’s a two way street. 

Jennifer: Lisa Anthony is an associate professor of computer science at the University of Florida. Her research focuses on developing interactive technologies designed to be used by children. 

Lisa: We don’t necessarily want systems that just prevent growth. You know, we do want children to continue to grow and develop and not necessarily use the AI as a crutch for all of that process, but we do want the AI to maybe help. It could act as a better support along the way. If we consider children’s developmental needs, expectations and abilities as we design these systems.

Jennifer: She works with kids to understand how they behave differently with devices than adults. 

Lisa: So, when they touch the touch screen or when they draw on the touch screen, what does that look like from a software point of view, so that we can then adapt our algorithms to recognize and interpret those interactions more accurately. Some of the challenges that you see are really understanding kids’ needs, expectations and abilities with respect to technology, and it’s all going to be driven a lot by their motor skills, the progress of development, you know, their cognitive skills, socio-emotional skills, and how they interact with the world is all going to be transitively applied to how they might interact with technology.

Jennifer: For example, most kids simply lack the level of dexterity and motor control needed to tap a small button on a touchscreen—despite their small fingers. 

Lisa: So an adult might put their finger to the touchscreen, draw a square in one smooth stroke, all four sides, and lift it up. A kid, especially a young kid, let’s say five or six years old, is probably going to be picking up their finger at every corner, maybe even in the middle of a stroke, and then putting it down again to correct themselves and finish. And those types of small variances in how they make that shape can actually have a big impact on whether the system can recognize that shape, if that type of data wasn’t ever used as part of the training process.

Jennifer: Programming this into AI models is critical, because handwriting recognition and intelligent tutoring systems are increasingly turning up in classrooms.

Most recognition algorithms look for patterns and consistency to identify objects. And kids… are notoriously inconsistent. If you were to task a child with drawing five squares in a row, each one is going to look different to an algorithm.
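To see why those variances matter, here is a toy sketch of template-based stroke matching in the spirit of classic gesture recognizers. The shapes and numbers are illustrative, and real recognizers (such as the $1 and $P family Lisa Anthony’s field builds on) also normalize for scale, rotation, and stroke order, which this deliberately omits:

```python
import math

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points, so pauses and pen
    lifts (common in children's drawings) don't dominate the comparison."""
    path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if path_len == 0:
        return [points[0]] * n
    step = path_len / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def distance(stroke_a, stroke_b, n=32):
    """Mean point-to-point distance between two resampled strokes:
    the score a template matcher would threshold to accept a shape."""
    a, b = resample(stroke_a, n), resample(stroke_b, n)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n
```

A clean adult square matched against itself scores zero; the more a child’s wobbles and corrections deform the stroke, the larger the mismatch, and past some threshold a matcher trained only on adult strokes simply fails to recognize the shape.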

The needs of kids are changing as they grow… that means algorithms need to change too. 

So, researchers are looking to incorporate lessons learned from kids’ shows… like how children establish social attachments to animated characters that look like people.

Lisa: That means they’re likely to ascribe social expectations to their interactions with that character. They feel warmly towards the character. They feel that the character is going to respond in predictable social ways. And this can be a benefit if your system is ready to handle that, but it can also be a challenge. If your system is not ready to handle that, it comes across as wooden. It comes across as unnatural. The children are going to be turned off by that. 

Jennifer: She says her research has also shown kids respond to AI systems that are transparent and can solve problems together with the child.

Lisa: So kids wanted the system to be able to recognize when it didn’t know the answer to their question, or didn’t know enough information to answer it, and just say, I don’t know, or, tell me this information that will help me answer. And I think what we were seeing, well, we still tend to see actually, is a design trend for AI systems where the AI system tries to gracefully recover from errors or lack of information without, quote unquote, bothering the user, right? Without really getting them involved or interrupting them, trying to sort of gracefully exist in the background. Kids were much more tolerant of error and wanted to treat it like a collaborative problem-solving experience.

Jennifer: Still, she admits there’s a long road ahead in developing systems with contextual awareness about interacting with children. 

Lisa: Often Google Home returns sort of an excerpt from the Google search results, and it could be anything that comes back, right? And the kids have to then somehow listen to this long and sort of obscure paragraph and then figure out if their answer was ever contained in that paragraph anywhere. And they would have to get their parents’ help to interpret the information. And a theme that you see a lot in this type of work, and generally with kids and technology: they want to be able to do it themselves. They don’t really want to have to ask their parents for help, because they want to be independent and engage with the world on their own.

Jennifer: But how much we allow AI to play a part in developing that independence… is up to us. 

Lisa: Do we want AI to go in the direction of cars, for example, where for the most part, many of us own a car, have no idea how it works under the hood, how we can fix it, how we can improve it, what the implications are of this design decision or that design decision? Or do we want AI to be something where people are really empowered, and they have the potential to understand these big differences, these big decisions? So, I think that’s why, for me, kids and AI education is really important, because we want to make sure that they feel like this is not just a black box mystery element of technology in their lives, but something that they can really understand, think critically about, effect change in, and perhaps contribute to building as well.

[CREDITS]

Jennifer: This episode was reported and produced by me, Anthony Green and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Thanks for listening, I’m Jennifer Strong.

This startup’s AI is smart enough to drive different types of vehicles


Jay Gierak at Ghost, which is based in Mountain View, California, is impressed by Wayve’s demonstrations and agrees with the company’s overall viewpoint. “The robotics approach is not the right way to do this,” says Gierak.

But he’s not sold on Wayve’s total commitment to deep learning. Instead of a single large model, Ghost trains many hundreds of smaller models, each with a specialism. It then hand codes simple rules that tell the self-driving system which models to use in which situations. (Ghost’s approach is similar to that taken by another AV2.0 firm, Autobrains, based in Israel. But Autobrains uses yet another layer of neural networks to learn the rules.)
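That division of labor can be pictured with a sketch (the situations, thresholds, and model stand-ins here are invented for illustration, not Ghost’s actual code): a hand-coded routing layer dispatches each moment of driving to one small specialist.

```python
def lane_keeping(obs):
    # Stand-in for a small model trained only to hold the lane.
    return {"steer": -0.1 * obs.get("lane_offset", 0.0), "brake": 0.0}

def pedestrian_stop(obs):
    # Stand-in for a specialist whose only job is an emergency stop.
    return {"steer": 0.0, "brake": 1.0}

def car_following(obs):
    # Stand-in for a specialist that keeps distance to the car ahead.
    return {"steer": 0.0, "brake": 0.5 if obs["gap_m"] < 10 else 0.0}

def route(obs):
    """Hand-coded rules pick the specialist for the current situation.
    The auditable 'if there's a person in front of you, brake' logic
    lives here, in plain code, instead of inside one large network."""
    if obs.get("pedestrian_ahead"):
        return pedestrian_stop(obs)
    if obs.get("gap_m", float("inf")) < 30:
        return car_following(obs)
    return lane_keeping(obs)
```

The point of Uhlig’s argument is visible in the first branch of `route`: a judge or safety auditor can be shown the braking rule directly, even though each specialist behind it may still be a learned model.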

According to Volkmar Uhlig, Ghost’s co-founder and CTO, splitting the AI into many smaller pieces, each with specific functions, makes it easier to establish that an autonomous vehicle is safe. “At some point, something will happen,” he says. “And a judge will ask you to point to the code that says: ‘If there’s a person in front of you, you have to brake.’ That piece of code needs to exist.” The code can still be learned, but in a large model like Wayve’s it would be hard to find, says Uhlig.

Still, the two companies are chasing complementary goals: Ghost wants to make consumer vehicles that can drive themselves on freeways; Wayve wants to be the first company to put driverless cars in 100 cities. Wayve is now working with UK grocery giants Asda and Ocado, collecting data from their urban delivery vehicles.

Yet, by many measures, both firms are far behind the market leaders. Cruise and Waymo have racked up hundreds of hours of driving without a human in their cars and already offer robotaxi services to the public in a small number of locations.

“I don’t want to diminish the scale of the challenge ahead of us,” says Hawke. “The AV industry teaches you humility.”

Russia’s battle to convince people to join its war is being waged on Telegram


Just minutes after Putin announced conscription, the administrators of the anti-Kremlin Rospartizan group announced their own “mobilization,” gearing up supporters to bomb military enlistment offices and the Ministry of Defense with Molotov cocktails. “Ordinary Russians are invited to die for nothing in a foreign land,” they wrote. “Agitate, incite, spread the truth, but do not be the ones who legitimize the Russian government.”

The Rospartizan Telegram group—which has more than 28,000 subscribers—has posted photos and videos purporting to show early action against the military mobilization, including burned-out offices and broken windows at local government buildings. 

Other Telegram channels are offering citizens opportunities for less direct, though far more self-interested, action—namely, how to flee the country even as the government has instituted a nationwide ban on selling plane tickets to men aged 18 to 65. Groups advising Russians on how to escape into neighboring countries sprang up almost as soon as Putin finished talking, and some groups already on the platform adjusted their message.

One group, which offers advice and tips on how to cross from Russia to Georgia, is rapidly closing in on 100,000 members. The group dates back to at least November 2020, according to previously pinned messages; since then, it has offered information for potential travelers about how to book spots on minibuses crossing the border and how to travel with pets. 

After Putin’s declaration, the channel was co-opted by young men giving supposed firsthand accounts of crossing the border this week. Users are sharing their age, when and where they crossed the border, and what resistance they encountered from border guards, if any. 

For those who haven’t decided to escape Russia, there are still other messages about how to duck army call-ups. Another channel, set up shortly after Putin’s conscription drive, crowdsources information about where police and other authorities in Moscow are signing up men of military age. It gained 52,000 subscribers in just two days, and they are keeping track of photos, videos, and maps showing where people are being handed conscription orders. The group is one of many: another Moscow-based Telegram channel doing the same thing has more than 115,000 subscribers. Half that audience joined in 18 hours overnight on September 22. 

“You will not see many calls or advice on established media on how to avoid mobilization,” says Golovchenko. “You will see this on Telegram.”

The Kremlin is trying hard to gain supremacy on Telegram because of its current position as a rich seam of subterfuge for those opposed to Putin and his regime, Golovchenko adds. “What is at stake is the extent to which Telegram can amplify the idea that war is now part of Russia’s everyday life,” he says. “If Russians begin to realize their neighbors and friends and fathers are being killed en masse, that will be crucial.”

The Download: YouTube’s deadly crafts, and DeepMind’s new chatbot

The YouTube baker fighting back against deadly “craft hacks”


Ann Reardon is probably the last person whose content you’d expect to be banned from YouTube. A former Australian youth worker and a mother of three, she’s been teaching millions of loyal subscribers how to bake since 2011. But the removal email was referring to a video that was not Reardon’s typical sugar-paste fare.

Since 2018, Reardon has used her platform to warn viewers about dangerous new “craft hacks” that are sweeping YouTube, tackling unsafe activities such as poaching eggs in a microwave, bleaching strawberries, and using a Coke can and a flame to pop popcorn.

The most serious is “fractal wood burning”, which involves shooting a high-voltage electrical current across dampened wood to burn a twisting, turning branch-like pattern in its surface. The practice has killed at least 33 people since 2016.

On this occasion, Reardon had been caught up in the inconsistent and messy moderation policies that have long plagued the platform and in doing so, exposed a failing in the system: How can a warning about harmful hacks be deemed dangerous when the hack videos themselves are not? Read the full story.

—Amelia Tait

DeepMind’s new chatbot uses Google searches plus humans to give better answers

The news: The trick to making a good AI-powered chatbot might be to have humans tell it how to behave—and force the model to back up its claims using the internet, according to a new paper by Alphabet-owned AI lab DeepMind. 

How it works: The chatbot, named Sparrow, is trained on DeepMind’s large language model Chinchilla. It’s designed to talk with humans and answer questions, using a live Google search or information to inform those answers. Based on how useful people find those answers, it’s then trained using a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective. Read the full story.

—Melissa Heikkilä
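The trial-and-error step can be pictured with a toy stand-in. This is a simple bandit over canned answers with invented labels and ratings, not Sparrow’s actual training setup, but it shows the core loop: pick an answer, get a usefulness rating, and shift future choices toward what people rated highly.

```python
import random

def train_from_feedback(candidates, rate, rounds=2000, epsilon=0.1, seed=0):
    """Toy reinforcement loop: choose an answer, receive a rating,
    and keep a running mean rating per answer. Occasionally explore
    at random; otherwise exploit the best-rated answer so far."""
    rng = random.Random(seed)
    value = {c: 0.0 for c in candidates}   # running mean rating
    count = {c: 0 for c in candidates}
    for _ in range(rounds):
        if rng.random() < epsilon:
            choice = rng.choice(candidates)          # explore
        else:
            choice = max(candidates, key=value.get)  # exploit
        reward = rate(choice)
        count[choice] += 1
        value[choice] += (reward - value[choice]) / count[choice]
    return max(candidates, key=value.get)

# Deterministic stand-in for raters who prefer answers backed by a search result.
ratings = {"unsupported claim": 0.2, "answer with search citation": 1.0}
best = train_from_feedback(list(ratings), ratings.get)
```

After enough rounds, the loop settles on the answer style raters reward, which is the same objective, in miniature, that Sparrow’s reinforcement learning pursues at scale.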

Sign up for MIT Technology Review’s latest newsletters

MIT Technology Review is launching four new newsletters over the next few weeks. They’re all brilliant, engaging and will get you up to speed on the biggest topics, arguments and stories in technology today. Monday is The Algorithm (all about AI), Tuesday is China Report (China tech and policy), Wednesday is The Spark (clean energy and climate), and Thursday is The Checkup (health and biotech).


Copyright © 2021 Seminole Press.