When AI becomes child’s play

Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for them to easily understand or navigate. But adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects. But kids are notoriously inconsistent. In this episode, we examine the relationship AI has with kids. 

We Meet:

  • Judith Danovitch, associate professor of psychological and brain sciences at the University of Louisville. 
  • Lisa Anthony, associate professor of computer science at the University of Florida.
  • Tanya Basu, senior reporter at MIT Technology Review.

Credits:

This episode was reported and produced by Jennifer Strong, Anthony Green, and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Jennifer: It wasn’t long ago that playing hopscotch or board games, or hosting tea parties with dolls, was the norm for kids….

Some TV here and there… a day at the park… bikes.

But… we’ve seen hopscotch turn to TikTok… board games become video games… and dolls at tea parties… do more than just talk back…

[Upsot: Barbie ad “Barbie… This is my digital makeover… I insert my own iPad and open my app… and the mirror lights up… I do my eyeshadow, lipstick and blush… How amazing is that?”]

Jennifer: Kids are exposed to devices almost from birth, and often know how to use a touchscreen before they can walk. 

Thing is… these systems aren’t really designed for kids.

So… what does it mean to invite Alexa to the party? 

[Upsot.. 1’30-1’40 “Hi there and welcome to Amazon storytime. You can choose anything from pirates to princesses. Fancy that!”]

Jennifer: And… what happens when toys are connected to the internet and kids can ask them anything… and they’ll not only answer back… but also learn from your kids and collect their data?

Jennifer: I’m Jennifer Strong and this episode, we explore the relationship AI has with kids. 

Judith: My name is Judith Danovitch. I’m an associate professor of psychological and brain sciences at the University of Louisville. So, I’m interested in how children think, and specifically, I’m interested in how children think about information sources. For example, when they have a question about something, how do they go about figuring out where to find the answer and which answers to trust. 

Jennifer: So, when she found her son sitting alone talking to Siri one afternoon… it sparked her interest right away. She says he was four years old when he started asking it questions.

Judith: Like, what’s my name? And it seemed like he was kind of testing her to see what she would say in response. Like, did she actually, you know, know these things about him? The funny part was that the device belonged to my husband, whose name is Nick. And so when he said, what’s my name? She said, Nick. And he said, no, this is David. So, you know, it was plausible. It wasn’t even that she just said, I don’t know, she actually said something, but it was wrong. 

Jennifer: Then… he started asking questions that weren’t just about himself…

Judith: Which was really interesting because it seemed like he was really trying to figure out, is this device somehow watching me and can it see me right now? And then he moved on to asking what I can only describe as a really broad range of questions. Some of which I recognize as topics that we had talked about. So he asked her, for example, do eagles eat snakes? And I guess he and my husband had been talking about eagles and snakes recently, but then he also asked her some really kind of profound questions that he hadn’t really asked us. So at one point he asked why do things die? Which, you know, is a pretty heavy thing for a four year old to be asking Siri.

Jennifer: And as this went on… she started secretly taping him.

David: How do you get out of Egypt? 

Is buttface a bad word?

… And why do things die?

Judith: Later on that day after I stopped recording him and he had kind of lost interest in this activity, I asked him a bit more and he told me that he thought there really was a tiny person inside there. That’s who Siri was. She was a tiny person inside the iPad. And that’s who was answering his questions. He didn’t have as good of an insight into where she got her answers from. So he wasn’t able to say, Oh, they’re coming from the internet. And that’s one of the things that I’ve become very interested in is, well, when kids hear these devices, what, where do they think this information is coming from? Is it a tiny person or is it, you know, something else. And, and that ties into questions of, do you believe it? Right? So, should you trust what the device tells you in response to your question?

Jennifer: It’s the kind of trust that little kids place in their parents and teachers.

Judith: Anecdotally I think parents think like, oh, kids are gullible and they’ll trust everything they see on the internet. But actually what we’ve found, both with research in the United States and with research with children in China, is that young children in preschool, ages about four to six, are actually very skeptical of the internet, and given the choice they’d rather consult a person.

Jennifer: But she says that could change as voice-activated devices become more and more commonplace.

Judith: And we’ve been trying to find out if kids have similar kinds of intuitions about the devices as they do about the internet in general, but we are seeing similar patterns with young children where again, young children given the choice are saying, I would rather go ask a person for information, at least when the information has to do with facts. Like, you know, where does something live, where, where do these things come from? And most of our research has focused on facts.

Jennifer: She does see a shift around 7 or 8, when kids start to trust the internet and voice assistants more. But she wants to be clear – this is early research… And it’s not that kids believe devices a hundred percent of the time as they get older; they just believe them more often.

But why are the youngest kids… the ones with big imaginations… more skeptical than older kids? 

Judith: Younger kids I think are skeptical frankly, because these are quite strange objects, right? In some ways they’re like people, right? It talks to you. It kind of has a back and forth conversation. But in other ways, these devices are nothing like a person at all, right? It’s this strange box. It doesn’t have a face or eyes or anything like that, that it suggests that it’s an animate object and it isn’t, of course, right? So, you know, for young kids, that should actually be quite difficult to figure out, and it makes sense that they might be cautious about it, that they might say, well, yeah, maybe you can do some things, but I’m going to go with the safe bet is to ask my mom or my teacher over this sort of strange unknown thing.

Jennifer: Alexa has more than 80,000 skills in the U.S.… and there’s developer support for skills aimed at kids.

[School bell rings. Sounds of children chattering under next VO.]

Jennifer: Sesame Street designed one for kids to play hide-and-seek with Elmo… 

Elmo: Time to guess where Elmo is hiding!  

Jennifer: Nickelodeon has a way to put them under the sea with SpongeBob SquarePants…

[SpongeBob SquarePants theme music]

Jennifer: And then there’s a game called Kids Court. It’s an app developed by Pretzel Labs for parents that lets Alexa be the judge of their children’s squabbles.

Alexa: Welcome to kids court where everything unfair is finally brought to justice. All rise for the honorable judge, Lexi. Do you want to settle a case or hear about previous trials?

Child: Settle a case. 

Alexa: Prosecutor, please tell the court what happened.

Child: Brianna read my diary! 

Alexa: Kids court was created for cases like this. Is the defendant present or absent?

Child: Present.

Alexa: Defendant, you’re in trouble. What do you plead? Say guilty if you did it, or not guilty if you didn’t.

Brianna: Not guilty!

[Music transition ]

Jennifer: There’s a lot of companies offering connected experiences designed for kids. Among them, voice games aimed at parents wary of screen time. One system is called Yoto, but it’s got much more in common with a cassette player than a smart speaker. New content like music or stories can be added to the system via cards that, when inserted into the speaker, trigger a download from Yoto’s servers. 

There’s not much to it. There’s no voice assistant, no camera, no microphone… and its pixelated display is really only meant to show the time or a cartoonish image related to what’s playing.

Kate Daniels: The best part about it is it’s just so simple. I mean, our youngest turned two yesterday and he’s known how to use it for the last year. You know? I don’t think it needs to be all fancy.  

Jennifer: Kate and Brian Daniels just made the move from New York City to Boston with their three kids in tow—who are all avid users of Yoto. 

Parker Daniels: A song album my dad put on is Hamilton. Um, I really like it.

Jennifer: That’s their 6 year old son Parker. He’s going through a binder filled with cards… which are used to operate the device. 

Parker Daniels: Um, and I’m now… I’m looking for the rest and I have like a whole, like book.  

Charlotte Daniels: And on some cards, there’s lots of songs and some there’s lots of stories, but different chapters. 

Jennifer: And that’s his younger sister, Charlotte. 

Brian Daniels: So we’re, we’re also able to, uh, record stories and put them on, uh, custom cards so that the kids can play the stories that I come up with. And they love when I tell them stories, but I’m not always available, you know, working from home and being busy. So this allows them to play those stories at any time. 

Jennifer: Screenless entertainment options are key for this family… which… apart from Friday night pizza and a movie… doesn’t spend much time gathered around the TV. But beyond limiting screen time (while they still can) Mom and Dad say they also enjoy the peace of mind that the kids don’t have a direct line to Google.

Kate Daniels: We have complete control over what they have access to, which is another great thing. We had an Alexa for a while that someone had given us, and it didn’t work well for us because they could say, Alexa, tell us about… and they could pick whatever they wanted, and we didn’t know what was going to come back. So we can really curate what they’re allowed to listen to and experience.

Jennifer: Still, they admit they haven’t quite figured out how to navigate introducing more advanced technology when the time comes.

Kate Daniels: I think that’s a really hard question. You know, we, as parents, we want to really curate everything that they’re exposed to, but ultimately we’re not going to be able to do that. Even with all of the software out there to Big Brother their own phones and watch every text message and everything they’re surfing. I don’t… it’s a big question and I don’t think we have the answer yet.

Tanya: So another reason why these voice games are becoming more popular is that they’re screen-free, which is really interesting and important. Given the fact that kids are usually recommended not to have more than two hours of screen time per day. And that’s when they’re about four or five. 

Hi my name is Tanya Basu, I’m a senior reporter at MIT Technology Review and I cover humans and technology. 

Younger kids, especially, should not be exposed to as much screen time. And audio-based entertainment often seems healthier to parents because it gives them that ability to be entertained, to be educated, to think about things in a different way that doesn’t require basically a screen in front of their face and potentially creating problems later down the road that we just don’t know about right now.

Jennifer: But designing these systems… isn’t without complications. 

Tanya:  A lot of it is that kids are learning how to speak, you know, you and I are having this conversation right now, we have an understanding of what a dialogue is in a way that children don’t. So there’s obviously that. There’s also the fact that kids don’t really sit still. So, you know, one might be far away or screaming or saying a word differently. And that obviously affects the way developers might be creating these games. And one big thing that a lot of people I talked to mentioned was the fact that kids are not a universal audience. And I think a lot of people forget that, especially ones who are developing these games… 

Jennifer: Still, she says the ability for kids to understand complexity shouldn’t be underestimated. 

Tanya: I’m honestly surprised that there aren’t more games for kids. And I’m surprised mostly that the games that are out there tend to be story kind of games and not, you know, a board game or something that is visually representative. We see with Roblox and a lot of the more popular video games that came out during the pandemic how complex they are, and the fact that kids can handle complex storylines, complex gaming, complex movement. But a lot of these voice games are so simple. And a lot of that is because the technology is just not there. But I am surprised that the imagination in terms of seeing where these games are going is quite limited thus far. So I’m really curious to see how these games develop over the next few years.

Jennifer: We’ll be back, right after this.

[MIDROLL]

Lisa: There’s always this challenge of throwing technology at kids and just sort of expecting them to adapt. And I think it’s a two way street. 

Jennifer: Lisa Anthony is an associate professor of computer science at the University of Florida. Her research focuses on developing interactive technologies designed to be used by children. 

Lisa: We don’t necessarily want systems that just prevent growth. You know, we do want children to continue to grow and develop and not necessarily use the AI as a crutch for all of that process, but we do want the AI to maybe help. It could act as a better support along the way. If we consider children’s developmental needs, expectations and abilities as we design these systems.

Jennifer: She works with kids to understand how they behave differently with devices than adults. 

Lisa: So, when they touch the touchscreen or when they draw on the touchscreen, what does that look like from a software point of view, so that we can then adapt our algorithms to recognize and interpret those interactions more accurately? Some of the challenges that you see are really understanding kids’ needs, expectations and abilities with respect to technology, and it’s all going to be driven a lot by their motor skills, the progress of development, you know, their cognitive skills, socio-emotional skills. And how they interact with the world is all going to be transitively applied to how they might interact with technology.

Jennifer: For example, most kids simply lack the level of dexterity and motor control needed to tap a small button on a touchscreen—despite their small fingers. 

Lisa: So an adult might put their finger to the touchscreen, draw a square in one smooth stroke, all four sides, and lift it up. A kid, especially a young kid, let’s say five, six years old, is probably going to be picking up their finger at every corner. Maybe even in the middle of a stroke, and then putting it down again to correct themselves and finish. And those types of small variances in how they make that shape can actually have a big impact on whether the system can recognize that shape, if that type of data wasn’t ever used as part of the training process.

Jennifer: Programming this into AI models is critical, because handwriting recognition and intelligent tutoring systems are increasingly turning up in classrooms.

Most recognition algorithms look for patterns and consistency to identify objects. And kids… are notoriously inconsistent. If you were to task a child with drawing five squares in a row, each one is going to look different to an algorithm.
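To make that concrete, here is a minimal sketch of template-style shape recognition. It is not the software used by the researchers in this episode, and the shapes, point counts, and template names are invented for illustration: a drawing is resampled into evenly spaced points and compared against clean templates, so the stops, restarts, and wobbles typical of a child’s strokes push the distance score up unless similar drawings were part of the training data.

```python
# A minimal sketch of template-based shape recognition (illustrative only, not the
# recognizers described in this episode). Shapes and numbers below are invented.
import math

def path_length(points):
    """Total length of a stroke given as a list of (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def resample(points, n=32):
    """Resample a stroke into n evenly spaced points along its path."""
    interval = path_length(points) / (n - 1)
    if interval == 0:
        return [points[0]] * n
    pts, out, d_accum = list(points), [points[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d_accum + d >= interval:
            t = (interval - d_accum) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # the interpolated point becomes the new "previous" point
            d_accum = 0.0
        else:
            d_accum += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall at the end
        out.append(pts[-1])
    return out

def score(stroke, template):
    """Average point-to-point distance between two resampled strokes (lower is better)."""
    a, b = resample(stroke), resample(template)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(stroke, templates):
    """Return the label of the closest template."""
    return min(templates, key=lambda label: score(stroke, templates[label]))

# Templates drawn by an adult: one smooth stroke around all the corners.
templates = {
    "square":   [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)],
    "triangle": [(0, 0), (10, 0), (5, 9), (0, 0)],
}

# A child's square: finger lifted at corners, a mid-stroke correction, uneven sides.
# Naively concatenating the separate strokes distorts the path the recognizer sees,
# which is the kind of variance that hurts accuracy if it isn't in the training data.
child_square = [(0, 0), (9, 1), (9, 1), (11, 9), (10, 10), (1, 11), (1, 10), (0, 0)]

print(classify(child_square, templates), score(child_square, templates["square"]))
```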

The needs of kids are changing as they grow… that means algorithms need to change too. 

So, researchers are looking to incorporate lessons learned from kids shows… like how children establish social attachments to animated characters that look like people. 

Lisa: That means they’re likely to ascribe social expectations to their interactions with that character. They feel warmly towards the character. They feel that the character is going to respond in predictable social ways. And this can be a benefit if your system is ready to handle that, but it can also be a challenge. If your system is not ready to handle that, it comes across as wooden. It comes across as unnatural. The children are going to be turned off by that. 

Jennifer: She says her research has also shown kids respond to AI systems that are transparent and can solve problems together with the child.

Lisa: So kids wanted the system to be able to recognize it didn’t know the answer to their question, or it didn’t have enough information to answer the question or complete an interaction, and just say, I don’t know, or, tell me this information that will help me answer. And I think what we were seeing, well, we still tend to see actually, is a design trend for AI systems where the AI system tries to gracefully recover from errors or lack of information without, quote unquote, bothering the user, right. Without really getting them involved or interrupting them, trying to sort of gracefully exist in the background. Kids were much more tolerant of error and wanted to treat it like a collaborative problem-solving experience.

Jennifer: Still, she admits there’s a long road ahead in developing systems with contextual awareness about interacting with children. 

Lisa: Often Google Home returns sort of like an excerpt from the Google search results, and it could be anything that comes back, right. And the kids have to then somehow listen to this long and sort of obscure paragraph and then figure out if their answer was ever contained in that paragraph anywhere. And they would have to get their parents’ help to interpret the information. And a theme that you see a lot in this type of work, and generally with kids and technology, is that they want to be able to do it themselves. They don’t really want to have to ask their parents for help, because they want to be independent and engaged with the world on their own.

Jennifer: But how much we allow AI to play a part in developing that independence… is up to us. 

Lisa: Do we want AI to go in the direction of cars, for example, where for the most part, many of us own a car, have no idea how it works under the hood, how we can fix it, how we can improve it. What are the implications of this design decision or that design decision? Or do we want AI to be something where people… they’re really empowered and they have a potential to understand these big differences, these big decisions. So, I think that’s why, for me, kids and AI education is really important, because we want to make sure that they feel like this is not just a black box mystery element of technology in their lives, but something that they can really understand, think critically about, effect change, and perhaps contribute to building as well.

[CREDITS]

Jennifer: This episode was reported and produced by me, Anthony Green and Tanya Basu with Emma Cillekens. We’re edited by Michael Reilly.

Thanks for listening, I’m Jennifer Strong.


The Download: Algorithms’ shame trap, and London’s safer road crossings

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How algorithms trap us in a cycle of shame

Working in finance at the beginning of the 2008 financial crisis, mathematician Cathy O’Neil got a firsthand look at how much people trusted algorithms—and how much destruction they were causing. Disheartened, she moved to the tech industry, but encountered the same blind faith. After leaving, she wrote a book in 2016 that dismantled the idea that algorithms are objective. 

O’Neil showed how algorithms are trained on historical data to recognize patterns, and how they break down in damaging ways. Algorithms designed to predict the chance of re-arrest, for example, can unfairly burden people, typically people of color, who are poor, live in the wrong neighborhood, or have untreated mental-health problems or addictions.

Over time, she came to realize another significant factor that was reinforcing these inequities: shame. Society has been shaming people for things they have no choice or voice in, such as weight or addiction problems, and weaponizing that humiliation. The next step, O’Neil recognized, was fighting back. Read the full story.

—Allison Arieff

London is experimenting with traffic lights that put pedestrians first

The news: For pedestrians, walking in a city can be like navigating an obstacle course. Transport for London, the public body behind transport services in the British capital, has been testing a new type of crossing designed to make getting around the busy streets safer and easier.

How does it work? Instead of waiting for the “green man” as a signal to cross the road, pedestrians will encounter green as the default setting when they approach one of 18 crossings around the city. The light changes to red only when the sensor detects an approaching vehicle—a first in the UK.
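In code, that default-green behavior is a very small rule. The sketch below is a toy model, not Transport for London’s actual signal controller, and the sensor readings are invented; it only illustrates that the pedestrian phase drops to red while vehicle traffic is detected and returns to green otherwise.

```python
# Toy model of a pedestrian-priority crossing: pedestrians get green by default,
# and the signal switches to red only while an approaching vehicle is detected.
# This is an invented sketch, not TfL's real control logic.

def pedestrian_light(vehicle_detected: bool) -> str:
    """Return the pedestrian signal state for the current sensor reading."""
    return "red" if vehicle_detected else "green"

# Simulated sensor readings over a few seconds: traffic approaches mid-sequence.
readings = [False, False, True, True, False]
print([pedestrian_light(r) for r in readings])
# ['green', 'green', 'red', 'red', 'green']
```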

How’s it been received? After a trial of nine months, the data is encouraging: there is virtually no impact on traffic, it saves pedestrians time, and it makes them 13% more likely to comply with traffic signals. Read the full story.

—Rachael Revesz

Check out these stories from our new Urbanism issue. You can read the full magazine for yourself and subscribe to get future editions delivered to your door for just $120 a year.

– How social media filters are helping people to explore their gender identity.
– The limitations of tree-planting as a way to mitigate climate change.

Podcast: Who watches the AI that watches students?

A boy wrote about his suicide attempt. He didn’t realize his school’s software was watching. While schools commonly use AI to sift through students’ digital lives and flag keywords that may be considered concerning, critics ask: at what cost to privacy? We delve into this story, and the wider world of school surveillance, in the latest episode of our award-winning podcast, In Machines We Trust.

Check it out here.

ICYMI: Our TR35 list of innovators for 2022

In case you missed it yesterday, our annual TR35 list of the most exciting young minds aged 35 and under is now out! Read it online here or subscribe to read about them in the print edition of our new Urbanism issue here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 There’s now a crazy patchwork of abortion laws in the US
Overturning Roe has triggered a legal quagmire—including some abortion laws that contradict others within the same state. (FT $)
+ Protestors are doxxing the Supreme Court on TikTok. (Motherboard)
+ Planned Parenthood’s abortion scheduling tool could share data. (WP $)
+ Here’s the kind of data state authorities could try to use to prosecute. (WSJ $)
+ Tech firms need to be transparent about what they’re asked to share. (WP $)
+ Here’s what people in the trigger states are Googling. (Vox)

2 Chinese students were lured into spying for Beijing
The recent graduates were tasked with translating hacked documents. (FT $)
+ The FBI accused him of spying for China. It ruined his life. (MIT Technology Review)

3 Why it’s time to adjust our expectations of AI
Researchers are getting fed up with the hype. (WSJ $)
+ Meta still wants to build intelligent machines that learn like humans, though. (Spectrum IEEE)
+ Yann LeCun has a bold new vision for the future of AI. (MIT Technology Review)
+ Understanding how the brain’s neurons really work will aid better AI models. (Economist $)

4 Bitcoin is facing its biggest drop in more than 10 years
The age of freewheeling growth really is coming to an end. (Bloomberg $)
+ The crash is a threat to funds worth millions stolen by North Korea. (Reuters)
+ The cryptoapocalypse could worsen before it levels out. (The Guardian)
+ The EU is one step closer towards regulating crypto. (Reuters)

5 Singapore’s new online safety laws are a thinly-veiled power grab
Empowering its authoritarian government to exert even greater control over civilians. (Rest of World)

6 Recommendation algorithms require effort to work properly
Telling them what you like makes it more likely it’ll present you with decent suggestions. (The Verge)

7 China’s on a mission to find an Earth-like planet
But what they’ll find is anyone’s guess. (Motherboard)
+ The ESA’s Gaia probe is shining a light on what’s floating in the Milky Way. (Wired $) 

8 Inside YouTube’s meta world of video critique
Video creators analyzing other video creators makes for compelling watching. (NYT $)
+ Long-form videos are helping creators to stave off creative burnout. (NBC)

9 Time-pressed daters are vetting potential suitors over video chat
To get the lay of the land before committing to an IRL meet-up. (The Atlantic $)

10 How fandoms shaped the internet
For better—and for worse. (New Yorker $)

Quote of the day

“This is no mere monkey business.”

—A lawsuit filed by Yuga Labs, the creators of the Bored Ape NFT collection, against conceptual artist Ryder Ripps claims Ripps copied their distinctive simian artwork, Gizmodo reports.

The big story

This restaurant duo want a zero-carbon food system. Can it happen?

September 2020

When Karen Leibowitz and Anthony Myint opened The Perennial, the most ambitious and expensive restaurant of their careers, they had a grand vision: they wanted it to be completely carbon-neutral. Their “laboratory of environmentalism in the food world” opened in San Francisco in January 2016, and its pièce de résistance was serving meat with a dramatically lower carbon footprint than normal. 

Myint and Leibowitz realized they were on to something much bigger—and that the easiest, most practical way to tackle global warming might be through food. But they also realized that what has been called the “country’s most sustainable restaurant” couldn’t fix the broken system by itself. So in early 2019, they dared themselves to do something else that nobody expected. They shut The Perennial down. Read the full story.

—Clint Rainey

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ A look inside the UK’s blossoming trainspotting scene (don’t worry, it’s nothing to do with the Irvine Welsh novel of the same name.)
+ This is the very definition of a burn.
+ A solid science joke.
+ This amusing Twitter account compiles some of the strangest public Spotify playlists out there (Shout out to Rappers With Memory Problems)
+ Have you been lucky enough to see any of these weird and wonderful buildings in person?



The US Supreme Court just gutted the EPA’s power to regulate emissions

What was the ruling?

The decision states that the EPA’s actions in a 2015 rule, which included caps on emissions from power plants, overstepped the agency’s authority.

“Capping carbon dioxide emissions at a level that will force a nationwide transition away from the use of coal to generate electricity may be a sensible ‘solution to the crisis of the day,’” the decision reads. “But it is not plausible that Congress gave EPA the authority to adopt on its own such a regulatory scheme.”

Only Congress has the power to make “a decision of such magnitude and consequence,” it continues. 

This decision is likely to have “broad implications,” says Deborah Sivas, an environmental law professor at Stanford University. The court is not only constraining what the EPA can do on climate policy going forward, she adds; this opinion “seems to be a major blow for agency deference,” meaning that other agencies could face limitations in the future as well.

The ruling, which is the latest in a string of bombshell cases from the court, fell largely along ideological lines. Chief Justice John Roberts authored the majority opinion, and he was joined by his fellow conservatives: Justices Samuel Alito, Amy Coney Barrett, Neil Gorsuch, Brett Kavanaugh, and Clarence Thomas. Justices Stephen Breyer, Elena Kagan, and Sonia Sotomayor dissented.

What is the decision all about?

The main question in the case was how much power the EPA should have to regulate carbon emissions and what it should be allowed to do to accomplish that job. That question was occasioned by a 2015 EPA rule called the Clean Power Plan.

The Clean Power Plan targeted greenhouse-gas emissions from power plants, requiring each state to make a plan to cut emissions and submit it to the federal government.

Several states and private groups immediately challenged the Clean Power Plan when it was released, calling it an overreach on the part of the agency, and the Supreme Court put it on hold in 2016. After a repeal of the plan during Donald Trump’s presidency and some legal back-and-forth, a Washington, DC, district court ruled in January 2021 that the Clean Power Plan did fall within the EPA’s authority.

How to track your period safely post-Roe

3. After you delete your app, ask the app provider to delete your data. Just because you removed the app from your phone does not mean the company has gotten rid of your records. In fact, California is the only state where they are legally required to delete your data. Still, many companies are willing to delete it upon request. Here’s a helpful guide from the Washington Post that walks you through how you can do this.

Here’s how to safely track your period without an app.

1. Use a spreadsheet. It’s relatively easy to re-create the functions of a period tracker in a spreadsheet by listing out the dates of your past periods and figuring out the average length of time from the first day of one to the first day of the next (a minimal version of that arithmetic is sketched in code after this list). You can turn to one of the many templates already available online, like the period tracker created by Aufrichtig and the Menstrual Cycle Calendar and Period Tracker created by Laura Cutler. If you enjoy the science-y aspect of period apps, templates offer the ability to send yourself reminders about upcoming periods, record symptoms, and track blood flow.

2. Use a digital calendar. If spreadsheets make you dizzy and your entire life is on a digital calendar already, try making your period a recurring event, suggests Emory University student Alexa Mohsenzadeh, who made a TikTok video demonstrating the process.

Mohsenzadeh says that she doesn’t miss apps. “I can tailor this to my needs and add notes about how I’m feeling and see if it’s correlated to my period,” she says. “You just have to input it once.” 

3. Go analog and use a notebook or paper planner. We’re a technology publication, but the fact is that the safest way to keep your menstrual data from being accessible to others is to take it offline. You can invest in a paper planner or just use a notebook to keep track of your period and how you’re feeling. 

If that sounds like too much work, and you’re looking for a simple, no-nonsense template, try the free, printable Menstrual Cycle Diary available from the University of British Columbia’s Centre for Menstrual Cycle and Ovulation Research.

4. If your state is unlikely to ban abortion, you might still be able to safely use a period-tracking app. The crucial thing will be to choose one that has clear privacy settings and has publicly promised not to share user data with authorities. Quintin says Clue is a good option because it’s beholden to EU privacy laws and has gone on the record with its promise not to share information with authorities. 
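For anyone curious what the spreadsheet approach in step 1 boils down to, here is a minimal sketch of the same arithmetic in code; the dates are made up for illustration, not real data.

```python
# Average the gaps between past period start dates and project the next one.
# The dates below are invented examples.
from datetime import date, timedelta

starts = [date(2022, 3, 2), date(2022, 3, 30), date(2022, 4, 28), date(2022, 5, 26)]

gaps = [(later - earlier).days for earlier, later in zip(starts, starts[1:])]
avg_cycle = sum(gaps) / len(gaps)                     # average cycle length in days
next_start = starts[-1] + timedelta(days=round(avg_cycle))

print(f"Average cycle: {avg_cycle:.1f} days; next period expected around {next_start}")
```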
