Tech

When AI becomes child’s play

Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for children to easily understand or navigate. But adapting algorithms to interact with a child isn’t without complications, because no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects, and kids are notoriously inconsistent. In this episode, we examine the relationship AI has with kids.

We Meet:

  • Judith Danovitch, associate professor of psychological and brain sciences at the University of Louisville. 
  • Lisa Anthony, associate professor of computer science at the University of Florida.
  • Tanya Basu, senior reporter at MIT Technology Review.

Credits:

This episode was reported and produced by Jennifer Strong, Anthony Green, and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Jennifer: It wasn’t long ago that playing hopscotch, board games or hosting tea parties with dolls was the norm for kids….

Some TV here and there… a day at the park… bikes.

But… we’ve seen hopscotch turn to TikTok… board games become video games… and dolls at tea parties… do more than just talk back.

[Upsot: Barbie ad “Barbie… This is my digital makeover… I insert my own iPad and open my app… and the mirror lights up… I do my eyeshadow, lipstick, and blush… How amazing is that?”]

Jennifer: Kids are exposed to devices almost from birth, and often know how to use a touchscreen before they can walk. 

Thing is… these systems aren’t really designed for kids.

So… what does it mean to invite Alexa to the party? 

[Upsot.. 1’30-1’40 “Hi there and welcome to Amazon storytime. You can choose anything from pirates to princesses. Fancy that!”]

Jennifer: And… what happens when toys are connected to the internet and kids can ask them anything… and they’ll not only answer back… but also learn from your kids and collect their data?

Jennifer: I’m Jennifer Strong and this episode, we explore the relationship AI has with kids. 

Judith: My name is Judith Danovitch. I’m an associate professor of psychological and brain sciences at the University of Louisville. So, I’m interested in how children think, and specifically, I’m interested in how children think about information sources. For example, when they have a question about something, how do they go about figuring out where to find the answer and which answers to trust. 

Jennifer: So, when she found her son sitting alone talking to Siri one afternoon… It sparked her interest right away. She says he was four years old when he started asking it questions.

Judith: Like, what’s my name? And it seemed like he was kind of testing her to see what she would say in response. Like, did she actually, you know, know these things about him? The funny part was that the device belonged to my husband, whose name is Nick. And so when he said, what’s my name? She said, Nick. And he said, no, this is David. So, you know, it was plausible. It wasn’t even that she just said, I don’t know, she actually said something, but it was wrong. 

Jennifer: Then… he started asking questions that weren’t just about himself…

Judith: Which was really interesting because it seemed like he was really trying to figure out, is this device somehow watching me and can it see me right now? And then he moved on to asking what I can only describe as a really broad range of questions. Some of which I recognize as topics that we had talked about. So he asked her, for example, do eagles eat snakes? And I guess he and my husband had been talking about eagles and snakes recently, but then he also asked her some really kind of profound questions that he hadn’t really asked us. So at one point he asked, why do things die? Which, you know, is a pretty heavy thing for a four-year-old to be asking Siri.

Jennifer: And as this went on… she started secretly taping him.

David: How do you get out of Egypt? 

Is buttface a bad word?

… And why do things die?

Judith: Later on that day after I stopped recording him and he had kind of lost interest in this activity, I asked him a bit more and he told me that he thought there really was a tiny person inside there. That’s who Siri was. She was a tiny person inside the iPad. And that’s who was answering his questions. He didn’t have as good of an insight into where she got her answers from. So he wasn’t able to say, oh, they’re coming from the internet. And that’s one of the things that I’ve become very interested in: when kids hear these devices, where do they think this information is coming from? Is it a tiny person or is it, you know, something else? And that ties into questions of, do you believe it? Right? So, should you trust what the device tells you in response to your question?

Jennifer: It’s the kind of trust that little kids place in their parents and teachers.

Judith: Anecdotally I think parents think like, oh, kids are gullible and they’ll trust everything they see on the internet. But actually what we’ve found both with research in the United States and with research with children in China is that young children in preschool ages about four to six are actually very skeptical of the internet and given the choice they’d rather consult a person.

Jennifer: But she says that could change as voice activated devices become more and more commonplace.

Judith: And we’ve been trying to find out if kids have similar kinds of intuitions about the devices as they do about the internet in general, but we are seeing similar patterns with young children where, again, young children given the choice are saying, I would rather go ask a person for information, at least when the information has to do with facts. Like, you know, where does something live, where do these things come from? And most of our research has focused on facts.

Jennifer: She does see a shift around 7 or 8, when kids start to trust the internet and voice assistants more. But she wants to be clear: this is early research. And it’s not that kids believe devices a hundred percent of the time as they get older; they just believe them more often.

But why are the youngest kids… the ones with big imaginations… more skeptical than older kids? 

Judith: Younger kids, I think, are skeptical frankly because these are quite strange objects, right? In some ways they’re like people, right? It talks to you. It kind of has a back and forth conversation. But in other ways, these devices are nothing like a person at all, right? It’s this strange box. It doesn’t have a face or eyes or anything like that that suggests it’s an animate object, and it isn’t, of course, right? So, you know, for young kids, that should actually be quite difficult to figure out, and it makes sense that they might be cautious about it, that they might say, well, yeah, maybe you can do some things, but I’m going to go with the safe bet, which is to ask my mom or my teacher over this sort of strange unknown thing.

Jennifer: Alexa has more than 80,000 skills in the U.S.… and there’s developer support for skills aimed at kids.

[School bell rings. Sounds of children chattering as mumble under next VO.]

Jennifer: Sesame Street designed one for kids to play hide-and-seek with Elmo… 

Elmo: Time to guess where Elmo is hiding!  

Jennifer: Nickelodeon has a way to put them under the sea with Spongebob Squarepants… 

[Spongebob Squarepants theme music]

Jennifer: And then there’s a game called Kids Court. It’s an app developed by Pretzel Labs for parents that lets Alexa be the judge of their children’s squabbles.

Alexa: Welcome to kids court where everything unfair is finally brought to justice. All rise for the honorable judge, Lexi. Do you want to settle a case or hear about previous trials?

Child: Settle a case. 

Alexa: Prosecutor, please tell the court what happened.

Child: Brianna read my diary! 

Alexa: Kids court was created for cases like this. Is the defendant present or absent?

Child: Present.

Alexa: Defendant, you’re in trouble. What do you plead? Say guilty if you did it, or not guilty if you didn’t.

Brianna: Not guilty!

[Music transition ]

Jennifer: There are a lot of companies offering connected experiences designed for kids. Among them, voice games aimed at parents wary of screen time. One system is called Yoto, but it’s got much more in common with a cassette player than a smart speaker. New content like music or stories can be added to the system via cards that, when inserted into the speaker, trigger a download from Yoto’s servers.

There’s not much to it. There’s no voice assistant, no camera, no microphone… and its pixelated display is really only meant to show the time or a cartoonish image related to what’s playing.

Kate Daniels: The best part about it is it’s just so simple. I mean, our youngest turned two yesterday and he’s known how to use it for the last year. You know? I don’t think it needs to be all fancy.  

Jennifer: Kate and Brian Daniels just made the move from New York City to Boston with their three kids in tow—who are all avid users of Yoto. 

Parker Daniels: A song album my dad put on is Hamilton. Um, I really like it.

Jennifer: That’s their 6 year old son Parker. He’s going through a binder filled with cards… which are used to operate the device. 

Parker Daniels: Um, and I’m now… I’m looking for the rest and I have like a whole, like book.  

Charlotte Daniels: And on some cards, there’s lots of songs and some there’s lots of stories, but different chapters. 

Jennifer: And that’s his younger sister, Charlotte. 

Brian Daniels: So we’re, we’re also able to, uh, record stories and put them on, uh, custom cards so that the kids can play the stories that I come up with. And they love when I tell them stories, but I’m not always available, you know, working from home and being busy. So this allows them to play those stories at any time. 

Jennifer: Screenless entertainment options are key for this family… which… apart from Friday night pizza and a movie… doesn’t spend much time gathered around the TV. But beyond limiting screen time (while they still can), Mom and Dad say they also enjoy the peace of mind that the kids don’t have a direct line to Google.

Kate Daniels: We have complete control over what they have access to, which is another great thing. We had an Alexa for a while that someone had given us, and it didn’t work well for us, because they could say, Alexa, tell us about… and they could pick whatever they wanted, and we didn’t know what was going to come back. So we can really curate what they’re allowed to listen to and experience.

Jennifer: Still, they admit they haven’t quite figured out how to navigate introducing more advanced technology when the time comes.

Kate Daniels: I think that’s a really hard question. You know, we, as parents, we want to really curate everything that they’re exposed to, but ultimately we’re not going to be able to do that. Even with all of the software out there to Big Brother their own phones and watch every text message and everything they’re surfing. I don’t… it’s a big question, and I don’t think we have the answer yet.

Tanya: So another reason why these voice games are becoming more popular is that they’re screen-free, which is really interesting and important. Given the fact that kids are usually recommended not to have more than two hours of screen time per day. And that’s when they’re about four or five. 

Hi my name is Tanya Basu, I’m a senior reporter at MIT Technology Review and I cover humans and technology. 

Younger kids, especially, should not be exposed to as much screen time. And audio-based entertainment often seems healthier to parents because it gives them that ability to be entertained, to be educated, to think about things in a different way that doesn’t require basically a screen in front of their face, potentially creating problems later down the road that we just don’t know about right now.

Jennifer: But designing these systems… isn’t without complications. 

Tanya:  A lot of it is that kids are learning how to speak, you know, you and I are having this conversation right now, we have an understanding of what a dialogue is in a way that children don’t. So there’s obviously that. There’s also the fact that kids don’t really sit still. So, you know, one might be far away or screaming or saying a word differently. And that obviously affects the way developers might be creating these games. And one big thing that a lot of people I talked to mentioned was the fact that kids are not a universal audience. And I think a lot of people forget that, especially ones who are developing these games… 

Jennifer: Still, she says the ability for kids to understand complexity shouldn’t be underestimated. 

Tanya: I’m honestly surprised that there aren’t more games for kids. And I’m surprised mostly that the games that are out there tend to be story kind of games and not, you know, a board game or something that is visually representative. We see with Roblox and a lot of the more popular video games that came out during the pandemic how complex they are, and the fact that kids can handle complex storylines, complex gaming, complex movement. But a lot of these voice games are so simple. And a lot of that is because the technology is just not there. But I am surprised that the imagination in terms of seeing where these games are going is quite limited thus far. So I’m really curious to see how these games develop over the next few years.

Jennifer: We’ll be back, right after this.

[MIDROLL]

Lisa: There’s always this challenge of throwing technology at kids and just sort of expecting them to adapt. And I think it’s a two way street. 

Jennifer: Lisa Anthony is an associate professor of computer science at the University of Florida. Her research focuses on developing interactive technologies designed to be used by children. 

Lisa: We don’t necessarily want systems that just prevent growth. You know, we do want children to continue to grow and develop and not necessarily use the AI as a crutch for all of that process, but we do want the AI to maybe help. It could act as a better support along the way if we consider children’s developmental needs, expectations, and abilities as we design these systems.

Jennifer: She works with kids to understand how they behave differently with devices than adults. 

Lisa: So, when they touch the touchscreen, or when they draw on the touchscreen, what does that look like from a software point of view, so that we can then adapt our algorithms to recognize and interpret those interactions more accurately? … So some of the challenges that you see are really understanding kids’ needs, expectations, and abilities with respect to technology, and it’s all going to be driven a lot by their motor skills, the progress of development, you know, their cognitive skills, socio-emotional skills. And how they interact with the world is all going to be transitively applied to how they might interact with technology.

Jennifer: For example, most kids simply lack the level of dexterity and motor control needed to tap a small button on a touchscreen—despite their small fingers. 

Lisa: So an adult might put their finger to the touchscreen, draw a square in one smooth stroke, all four sides, and lift it up. A kid, especially a young kid, let’s say five or six years old, is probably going to be picking up their finger at every corner, maybe even in the middle of a stroke, and then putting it down again to correct themselves and finish. And those types of small variances in how they make that shape can actually have a big impact on whether the system can recognize that shape, if that type of data wasn’t ever used as part of the training process.

Jennifer: Programming this into AI models is critical, because handwriting recognition and intelligent tutoring systems are increasingly turning up in classrooms.

Most recognition algorithms look for patterns and consistency to identify objects. And kids… are notoriously inconsistent. If you were to task a child with drawing five squares in a row, each one is going to look different to an algorithm.
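The square example above maps onto how template-based stroke recognizers (in the spirit of the $-family of gesture recognizers that Lisa Anthony’s field is known for) score a drawing: resample the stroke to a fixed number of points, normalize its position and scale, and measure its distance to a stored template. This is a toy sketch with invented stroke data, not code from her lab; it just illustrates why a wobbly, kid-style square scores worse against a clean template than a smooth adult one:

```python
import math

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points along its path."""
    pts = list(points)
    length = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = length / (n - 1)
    out, acc = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # keep measuring from the newly added point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point drift
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Center a stroke on its centroid and scale it to a unit box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    s = max(max(x for x, _ in pts) - min(x for x, _ in pts),
            max(y for _, y in pts) - min(y for _, y in pts)) or 1.0
    return [(x / s, y / s) for x, y in pts]

def distance(a, b):
    """Mean point-to-point distance between two processed strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

# A clean template square, a smooth adult-like square, and an invented
# "kid" square with wobbly corners and overshoots (illustrative data only).
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
adult = [(0, 0), (0.5, 0), (1, 0), (1, 0.5), (1, 1),
         (0.5, 1), (0, 1), (0, 0.5), (0, 0)]
kid = [(0, 0.05), (0.4, -0.1), (1.1, 0), (0.9, 0.6), (1, 1.1),
       (0.4, 0.9), (-0.1, 1), (0.1, 0.4), (0, 0)]

template = normalize(resample(square))
d_adult = distance(normalize(resample(adult)), template)
d_kid = distance(normalize(resample(kid)), template)
print(d_adult)  # near zero: same shape, drawn smoothly
print(d_kid)    # noticeably larger: same intent, messier stroke
```

If the training templates only ever saw adult-style strokes, the kid’s perfectly intentional square lands far enough from every template to be misread, which is exactly the variance problem described above.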

The needs of kids are changing as they grow… that means algorithms need to change too. 

So, researchers are looking to incorporate lessons learned from kids’ shows… like how children establish social attachments to animated characters that look like people.

Lisa: That means they’re likely to ascribe social expectations to their interactions with that character. They feel warmly towards the character. They feel that the character is going to respond in predictable social ways. And this can be a benefit if your system is ready to handle that, but it can also be a challenge. If your system is not ready to handle that, it comes across as wooden. It comes across as unnatural. The children are going to be turned off by that. 

Jennifer: She says her research has also shown kids respond to AI systems that are transparent and can solve problems together with the child.

Lisa: So kids wanted the system to be able to recognize it didn’t know the answer to their question, or it didn’t know enough information to answer, and just say, I don’t know, or, tell me this information that will help me answer. And I think what we were seeing, well, we still tend to see actually, is a design trend for AI systems where the AI system tries to gracefully recover from errors or lack of information without, quote unquote, bothering the user, right? Without really getting them involved or interrupting them, trying to sort of gracefully exist in the background. Kids were much more tolerant of error and wanted to treat it like a collaborative problem-solving experience.
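The design trend Anthony describes boils down to a one-branch response policy: below some confidence threshold, surface the uncertainty and invite the child in rather than quietly papering over the gap. A minimal sketch, where the threshold value and the wording are invented for illustration and not taken from her research:

```python
def respond(confidence: float, answer: str, threshold: float = 0.6) -> str:
    """Surface uncertainty instead of silently 'gracefully recovering'."""
    if confidence >= threshold:
        return answer
    # Low confidence: admit it and ask the child for help, treating the
    # exchange as collaborative problem-solving rather than an error to hide.
    return ("I'm not sure yet. Can you tell me a bit more, "
            "so we can figure it out together?")

print(respond(0.9, "Yes, eagles do eat snakes."))
print(respond(0.2, "Yes, eagles do eat snakes."))
```

The design choice is simply which side of the threshold the system optimizes for: adult-oriented assistants tend to minimize interruptions, while the research above suggests kids prefer the interruption.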

Jennifer: Still, she admits there’s a long road ahead in developing systems with contextual awareness about interacting with children. 

Lisa: Often Google Home returns sort of like an excerpt from the Google search results, and it could be anything that comes back, right? And the kids have to then somehow listen to this long and sort of obscure paragraph and then figure out if their answer was ever contained in that paragraph anywhere. And they would have to get their parents’ help to interpret the information. A theme that you see a lot in this type of work, and generally with kids and technology, is they want to be able to do it themselves. They don’t really want to have to ask their parents for help, because they want to be independent and engaged with the world on their own.

Jennifer: But how much we allow AI to play a part in developing that independence… is up to us. 

Lisa: Do we want AI to go in the direction of cars, for example, where for the most part, many of us own a car, have no idea how it works under the hood, how we can fix it, how we can improve it. What are the implications of this design decision or that design decision? Or do we want AI to be something where people… they’re really empowered and they have a potential to understand these big differences, these big decisions. So, I think that’s why, for me, kids and AI education is really important, because we want to make sure that they feel like this is not just a black box mystery element of technology in their lives, but something that they can really understand, think critically about, effect change, and perhaps contribute to building as well.

[CREDITS]

Jennifer: This episode was reported and produced by me, Anthony Green and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Thanks for listening, I’m Jennifer Strong.
