Podcast: Beating the AI hiring machines

When it comes to hiring, it’s increasingly becoming an AI’s world—we’re just working in it. In this, the final episode of Season 2 of our AI podcast “In Machines We Trust” and the conclusion of our series on AI and hiring, we take a look at how AI-based systems are increasingly playing gatekeeper in the hiring process—screening out applicants by the millions, based on little more than what they see in your résumé. But we aren’t powerless against the machines. In fact, an increasing number of people and services are designed to help you play by—and in some cases bend—their rules to give you an edge.

We Meet: 

  • Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
  • Ian Siegel, CEO, ZipRecruiter
  • Sami Mäkeläinen, Head of Strategic Foresight, Telstra
  • Salil Pande, CEO, VMock
  • Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University

We Talked To: 

  • Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
  • Students and Teachers from The HOPE Program in Brooklyn, NY
  • Jonathan Kestenbaum, Co-founder & Managing Director of Talent Tech Labs
  • Josh Bersin, Global Industry Analyst
  • Brian Kropp, Vice President Research, Gartner
  • Ian Siegel, CEO, ZipRecruiter
  • Sami Mäkeläinen, Head of Strategic Foresight, Telstra
  • Salil Pande, CEO, VMock
  • Kiran Pande, Co-Founder, VMock
  • Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University

Credits

  • This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

Transcript

Synthetic Jennifer: Hey everyone! This is NOT Jennifer Strong.  

It’s actually a deepfake version of her voice. 

To wrap up our hiring series, the two of us took turns doing the same job interview, because she was curious if the automated interviewer would notice. And, how it would grade each of us.

[beat / music]

So, human Jennifer beat me as a better match for the job posting, but just by a little bit.    

This deepfake? It got better personality scores. Because, according to this hiring software, this fake voice is more spontaneous.

It also got ranked as more innovative and strategic, while Jennifer is more passionate, and she’s better at working with others.

[Beat/ Music transition]

Jennifer: Artificial intelligence is increasingly used in the hiring process. 

(And this is the real Jennifer. Just, by the way.)

And these days algorithms decide whether a resume gets seen by a human, gauge personalities based on how people talk or play video games, and might even interview you. 

In a world where you no longer prepare for those interviews by putting your best foot forward—what does it mean to present your best digital self? 

SOT: YouTube clips montage: Vlogger 1: Want to know three easy hacks to significantly improve your performance on video interviews like HireVue, Spark Hire, or VidCruiter? Vlogger 2: Please do make sure you watch this from beginning to end, because I want to help you to pass your interview. Vlogger 3: And if you understand the key concepts, you can beat that algorithm and get the job. So let’s get started.

Jennifer: We look at just how far job seekers are willing to go to beat these tools.

Gracy Sarkissian: So there are all sorts of crazy stories about what students have done in the past to get their resume past the applicant tracking system. But what we do is we make sure that students know what to expect and are prepared to be successful. 

Jennifer: That success is measured by algorithms across a whole host of variables, from automated resume screeners attempting to predict an applicant’s job performance, to one-way video interviews,  where everything from a candidate’s word choice to their facial expressions might be analyzed. 

Ian Siegel: Literally this is one of those instances where conventional wisdom will kill you in your search for a job. And it’s such a shame because I think even many of the experts don’t realize how the industry is actually working today.

Jennifer: You can’t dress to impress an algorithm. So, what does it look like to game an automated system?  

Sami Mäkeläinen: What if you just had the AI interview an AI, could that be done? Could it be done now? Could it be done in the future? I mean—it’s fairly clear that in the not too distant future, you will have this kind of a much more common ability to develop artificial entities that look pretty much exactly like humans and act very much like humans. Or could we use one of these things to do the interviews for us? 

Jennifer: And in the absence of meaningful rules and regulation, where do we draw the line?

I’m Jennifer Strong, and in this final episode of a four-part series on AI and hiring we explore how we’re adapting to the automated process of finding a job.

[SHOW ID]

[TITLES]

Anonymous Jobseeker: These AIs or artificial intelligent robots are reading resumes through a parser. So if your resume is not up to par, it won’t go through to the next steps. 

Jennifer: That’s the job seeker we’ve followed throughout this series. She asked us to call her Sally but that’s not her real name. She’s critiquing the hiring practices of potential employers… and she fears it could impact her career. 

In a previous episode, she told us how she applied for close to 150 jobs before landing one and how she encountered AI at several points in the process.

Like Sally, the first time you might see AI during a job search is with a resume parser, or screener. It sorts and chooses which ones get passed along to the next stage of the hiring process. 

She suspected her resume wasn’t getting through.

And she did some further research, after she got her hands on some of this technology.

Anonymous Jobseeker: So right now, when I put my resume through, it reads me as a software engineer, with a hint of data analysis, which is my field. So that’s fine. 

Jennifer: A friend of hers is also working on this problem. He’s testing a different tool that puts a percentage match on how qualified it judges each resume to be for a given job.

Anonymous Jobseeker: He has another parser where it gives you your percentage. So he’s been asking other people who are data scientists and already far in the field for their resume and theirs go through 80% to 90%.  

Jennifer: They’re even testing templates they find online, just to see what happens and if that formatting helps.

But so far, when they fill out those templates they’ve all received a low match score—under 40-percent qualified.

Anonymous Jobseeker: If you just Google resume templates, if you need help with your resume, we tested those whatever popped up. And we realized the templates aren’t good. So, when you put the templates inside the parser, no matter what job you are, you’re still at that 40 or under 40. So, there’s a problem with the machine reading it. 

Jennifer: Sally is a programmer. She knows how to go about finding and testing this type of software. But, most of us don’t. We’re unlikely to know if these algorithms are reading our resume in the way we intended, and extracting the ‘right’ skills.

Anonymous Jobseeker: If you fill out a job application online and it says convert resume. And if, once you convert your resume, if the boxes aren’t filled in to what your resume is stating, then you know, your percentage is low. And that makes a lot of sense because when I was applying to like Goldman Sachs or Capital One, like bank industries and stuff when I pick, take the, um, information from my resume, it was never correct. And I always had to fill in the rest of the stuff to match with my resume.

Jennifer: She says when she made this discovery, it finally clicked.

And she wishes she understood how this worked before she started applying for jobs, because it would have helped with her imposter syndrome.

Anonymous Jobseeker: So everybody that doesn’t know about this doesn’t have a chance, ‘cause they don’t even know.
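Since no vendor publishes its scoring logic, here is a minimal sketch of how a keyword-overlap match score like the one Sally describes could be computed. It is an illustration only: the tokenizer, the equal weighting, and the percentage are assumptions, not how any particular parser actually works.

```python
# A minimal sketch, assuming a screener works by keyword overlap between a
# resume and a job description. The token pattern, the equal weighting, and
# the percentage itself are illustrative assumptions, not any vendor's method.
import re

def extract_terms(text):
    """Lowercase the text and pull out simple word-like tokens."""
    return set(re.findall(r"[a-z][a-z+#.-]*", text.lower()))

def match_score(resume_text, job_description):
    """Return the share (0-100) of job-description terms also found in the resume."""
    job_terms = extract_terms(job_description)
    resume_terms = extract_terms(resume_text)
    if not job_terms:
        return 0.0
    return 100.0 * len(job_terms & resume_terms) / len(job_terms)

if __name__ == "__main__":
    job = "Data analyst with Python, SQL, and dashboard experience"
    resume = "Software engineer; built Python data pipelines and SQL reports"
    print(f"Match: {match_score(resume, job):.0f}%")  # prints a rough, toy percentage
```

Even a toy score like this shows why formatting matters: if the parser fails to extract the right terms from a template, the overlap—and the percentage—drops, no matter how qualified the applicant is.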

Jennifer: Over the course of this reporting we found a number of different groups trying to get under the hood of these systems. Whether to help themselves, or others, adapt and engage with these tools.

And, we visited a workforce readiness program in New York City called The Hope Program. Many of its participants have dealt with homelessness, substance abuse and long-term unemployment. 

Jamaal Eggleston: You see all the hoops, these students have to jump through just to land the job, where I hate to say another segment of the population might not have to go through as many hoops. So, I think it’s up to us to put on our armor and to combat it, because these are good people we’re talking about here. So it’s really become my life’s journey to help them. And we have to fight back. Too many good people were being left to the wayside.  

Jennifer: Jamaal Eggleston is known to his students as Mr. E. And, he says they’re struggling with the growing use of personality testing and other forms of automation in hiring.

Jamaal Eggleston: They come back frustrated. There’s a really big issue of not hearing back at all. It’s almost as if you do an application and your application goes into the matrix and it’s gone forever. Or you will get the automatic reply which is not very personable, and it gives no information. 

Jennifer: To him, it represents an uphill battle for students already at a disadvantage. 

Jamaal Eggleston: When it comes to their personality tests, they feel as if they’re being tricked, because it’ll be the same question, but phrased three different types of ways. It’s coming from creators, who do not share a cultural background at all with some of the applicants. 

Jennifer: So, he says he downloads examples of these personality tests, analyzes them, then uses what he finds to help train his students.

Jamaal Eggleston: So I’ll give them the three different phrasings of that question. So they’ll know what to look out for. If you’ve ever been in this situation, how would you handle it? And they know instantly that I taught them once a question is phrased that way. It’s going to be a behavioral question. So it’s something that they should look out for in a personality test and to take their time.

Jennifer: And they take these tests as part of their job training. Their results are projected onto a whiteboard during class and discussed as a group. 

Jamaal Eggleston: If these companies only knew, you know, all the great people that they excluded because of these practices. And they would have been a great breath of fresh air. They would have been hard capable workers, but because of these biases, whether it’s from the person who programmed the algorithms, or the algorithms themselves, that excluded these people, if they only knew, they would be kicking themselves, you know, wow, okay the person doesn’t have the same color skin as mine. They might talk with a different dialect or accent, but you know what, they came here and they worked their tail off.

[Musical transition]

Ian Siegel: If there are job seekers out there in the world who love searching for work—I have never met them. And if there are employers who feel like they are experts at recruiting—I have also never met them. Neither side is trained in the activity that they are engaging in.

Ian Siegel: My name is Ian Siegel. I am the CEO and co-founder of ZipRecruiter. 

Jennifer: It’s an AI powered marketplace where companies post jobs and people look for work.

Ian Siegel: Millions of businesses post jobs on our site every month. And tens of millions of job seekers look for work on our site every month. And we used AI to play the role of active matchmaker between them.

Jennifer: When we spoke to him at the start of this series, he told us the vast majority of resumes are now screened by a machine first, before a human enters the process.

And he believes anyone using traditional advice to create a resume is at risk of not making it through to the next round of the hiring process, because the audience for resumes is now algorithms.

Ian Siegel: All that advice you got about how to write a resume, is wrong. It’s no longer write something that stands out, use a beautiful design printed on vellum, use extraordinary prose to try to dress up your accomplishments, forget all that. You want to write like a caveman in the shortest, crispest words you can. You want to be declarative and quantitative, because software is trying to figure out who you are to decide whether you will be put in front of a human. And that’s the majority of jobs in America right now today.

Jennifer: Like others, he found problems with these tools that extract information from resumes.

So, the company built its own.

And he has some advice on getting a resume through.

Ian Siegel: Be explicit, and then if you have a skill, declare it. Ideally declare how you learned it. So I learned the skill by going through this certification process, here is my certification or my license number to validate that I have this skill. Because there are multiple industries, like if you’re a nurse, as long as you have a nursing license, you’re hired. There’s a desperate need for more nurses in America right now. If you’re a truck driver, if you have a truck driver’s license number you’re hired. So like your whole resume could be that one piece of information, ‘cause the rest really doesn’t matter to the employer. So, just make sure that you list all your skills as concretely and with as much evidence to support your expertise as possible.

Jennifer: And longer term, he sees a new way of recruiting becoming the norm.

Ian Siegel: There is a sensible way for this to all work, and that is the employer should go first. The employer should look at active job seekers in-market, and pick the ones that they would like to see apply. Invite them to apply or directly recruit them. That’s a great experience. Job seekers hate applying to jobs, but guess what? They love getting recruited, and who wouldn’t? It’s literally like getting picked up at a bar. It’s being told you’re desirable and special. It just makes sense and puts everybody in the right headspace. Then the employer is winning because by recruiting, they’re going first, they’re expressing interest, which means they’re increasing the odds that they are going to get a positive response, because that person’s going to be so flattered by the fact that the employer went first. So it’s just a better, more efficient way for this to work. 

[Musical transition]

Jennifer: As part of this investigation we’ve been learning about a bunch of tools meant to help job seekers maximize their chances of success.

Hilke Schellmann is a reporting partner on this series. She’s also a professor of journalism who reports on this topic.

So, Hilke, what did you find about the tricks people are using to try and get an edge?

Hilke: So, one of the things I found is a whole niche industry of folks sharing ‘assessment secrets’ with one another online. 

SOT: YouTube clips montage 2: Speaker 1: In this video today, we’re going to be talking about how you can pass your psychometric test, first time round. Speaker 2: Look into the camera, not look at the screen. Speaker 3: Be expressive when you talk and change your voice tone when you speak, remember the AI will look for inconsistencies in what you say and how you behave. Speaker 2: And you then reveal the results of your actions and the results should always be positive. So whenever you get asked a question that says, tell me about a time when you. Or describe a situation you were in. See, it’s a behavioral type interview question and you have to give a specific situation.

Hilke: So, there are also the usual Quora discussions and subreddits talking about the questions job seekers have encountered in video interviews, or how to beat these games. And then, there are some hiring vendors that offer candidates a chance to do AI mock interviews before the big day.

Jennifer: Candidates can practice alone in a room, by talking into the camera and trying to convince someone, or a machine, that they’re the best candidate for the job?

Hilke: Yeah. Job seekers can also see their personality profiles. But there is a limit to how helpful this is, since most candidates won’t know what questions they will be asked. Like, for example, I found one company that listed the seven-stage hiring process at Amazon and very clearly explained what candidates had to do. That company has also built AI games similar to what job seekers are being asked to play in the real world. So, the job seekers can train on those games ahead of time (for a fee, of course).

Jennifer: And you looked into a lot of companies that do this, did you find anything interesting? 

Hilke: So apparently some job candidates who don’t have all the skills the job description asks for, they put the skills they lack in white on the resume. So it’s invisible to a human, but a computer would recognize the skills. Job seekers hope to get on the yes pile by doing this, and recruiters get frustrated by this.

Jennifer: Alright, might this be a way of leveling the playing field for job applicants who have less power now against AI? Or is it kind of cheating and giving some applicants an edge over others?

Hilke: Well, some people who practice these assessments do get an edge over others, because they know what to expect now. But it’s not because they have practiced and practiced to work out how to get the high score (like in a video game), because that’s not how these assessments work.

These games are trying to assess your personality, and ‘to win,’ essentially, the algorithm compares your traits to the traits of employees who already work at that firm. If you have similar personality traits, you advance to the next round in the hiring process. But the catch is, no one knows what those traits are. So I don’t know if you can call it cheating, when you don’t even really know the rules of the game you are playing.

Jennifer: And we don’t know exactly how AI scores job seekers, so, the people giving this advice, they might not know either.

Hilke: Yeah, and if that advice is inaccurate, it might even backfire for job seekers. But, I understand the anxiety people have around these new tools and their desire to understand how this works. And obviously that bit of practice might calm them on the big day… 

Jennifer: But like any other cat and mouse game, it’s only a matter of time before people use automation to fight back against this automation.

Hilke: That’s exactly what I was thinking. 
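To make that comparison a little more concrete, here is a minimal, hypothetical sketch of a trait-similarity check of the kind Hilke describes. The trait names, the numbers, the similarity measure, and the threshold are all invented for illustration; no assessment vendor has confirmed working this way.

```python
# A minimal, hypothetical sketch of the comparison described above: score a
# candidate's traits against the average profile of current employees. The
# trait names, numbers, similarity measure, and threshold are all invented
# for illustration; real assessment vendors do not disclose their methods.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors of trait scores."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def advances(candidate, incumbents, threshold=0.9):
    """Advance the candidate if their profile resembles the average incumbent profile."""
    avg = [sum(col) / len(incumbents) for col in zip(*incumbents)]
    return cosine_similarity(candidate, avg) >= threshold

if __name__ == "__main__":
    # Hypothetical trait order: spontaneity, innovation, consistency, sociability
    current_employees = [[0.7, 0.8, 0.6, 0.9], [0.6, 0.9, 0.5, 0.8]]
    print(advances([0.65, 0.85, 0.55, 0.85], current_employees))  # True for this toy data
```

The sketch also shows why "practicing" only helps so much: without knowing which traits are measured or what the reference profile looks like, a candidate can't really optimize for the score.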

[Musical transition]

Jennifer: So you tested this out in a video interview, using just plain text-to-speech software to respond to the questions asked. 

Hilke: Yeah, I used a deepfake computer generated audio file to see if I could trick the interview software into believing that the deepfake is a human. 

[SOT: Hilke speaking]: And so the first question is, please introduce yourself. Please introduce yourself, deepfake. 

Computer-generated audio: My name is Hilke Schellmann. I am an Emmy award-winning reporter and journalism professor at New York University. I have been a journalist for over a decade. 

Jennifer: Ok and the deep fake voice doesn’t have a face, so there’s no video here, and the system still gives it a score. 

Hilke: Yeah. The deepfake scored a 79% match score with the job. That’s actually pretty high. It also got a personality analysis, which told me that the deepfake is very innovative and not very consistent. It’s pretty social and not very reserved. 

Jennifer: Right.

Hilke: Yeah and the weirdest part was that I then tested it again, this time reading the same text with my actual voice.

Jennifer: And, what happened?

Hilke: Ahh, well. The computer-generated voice actually scored higher than me reading the same text! 

Jennifer:  Wow. Sounds like you might want to consider taking your audio avatar on the road. 

Hilke: I guess so. 

[Musical transition] 

Jennifer: But we aren’t the only ones with this idea.

Sami Mäkeläinen: What if you just had the AI interview an AI?

Jennifer: Sami Mäkeläinen is an executive at Telstra, which is an Australian telecom company. 

Sami Mäkeläinen: Could that be done? Could it be done now? Could it be done in the future? I mean it’s fairly clear that in the not too distant future, you will have this kind of a, much more common ability to develop artificial entities that look pretty much exactly like humans, and act very much like humans. I thought that, well, could we use one of these things to do the interviews for us? 

Jennifer: He has a background in software engineering and his job is to study the implications of future tech trends.

Just out of curiosity, he and a few colleagues decided to test whether AI interviewers would recognize the difference between interviewing a human or another machine.

So they took a well-known AI interview system which uses video (he didn’t want to reveal which), and he paired it up with an avatar.

Sami Mäkeläinen: We just had an AI interview system. And we deployed an AI digital human, digital avatar, digital twin (if you want to call it that), to sort of act as the mouthpiece for the human being interviewed. So you know the words that the avatar spoke came from humans, it was not a language model, or AI behind that part.

Jennifer: In other words, they wrote a script and it was performed by a deepfake. 

So, a fake voice on a fake video answered the questions posed by an AI interviewer.

And after about a dozen tests, how did this AI job candidate do?

Sami Mäkeläinen: Well, did it flunk the interview? No, it didn’t. It was fine from the AI interviewer perspective. It was as if it was interviewing anybody else.

Jennifer: They tested the same words, two ways. One spoken by a human, and one spoken by the avatar. And he says the outcome was similar for both. 

And, he has thoughts on what might happen next.

Sami Mäkeläinen: So say a few years from now, you’ll be able to have a very realistic looking digital twin of yourself, audio visual representation of you essentially. You can imagine a whole range of use cases for that. You could have it sit in, you know, a boring, large meeting for you that could uh and umm at the right intervals. You could use it in, you know, virtual gaming or gaming and virtual presence kind of an environment. Or you could use it for taking interviews for you. 

Jennifer: Though he’s not aware of others testing this technology with digital humans just yet.  And, if Hollywood movies can’t easily pull this off, he feels like there’s little danger the rest of us are going to be deploying avatars to do our bidding any time soon.  

But the fact the hiring tool couldn’t recognize it was interviewing a machine is a problem. And it means the software still has a way to go. 

Sami Mäkeläinen: So I suppose, ideally when you have a system that ostensibly is interviewing a human, you would kind of want to make sure that it’s the human that you think you’re interviewing at the other end. Otherwise you would just hire a friend to do the AI interview for you, and it’d probably be far more convincing than an AI would be currently. There’s a whole range of things that these systems could do to verify that, you know, they are talking to who they think they are talking, but how exactly that will be developed is again, something that is to be determined. 

Jennifer: He says they don’t have any plans to test further, but if they did, he has thoughts about what they might try.

Sami Mäkeläinen: We didn’t dig deeper into can we possibly tweak the scores by optimizing facial expressions, or tone of voice or, you know, emotion or things like that? That’s not something that we delved into. And, it was just, it was just a very simple, kind of a proof of concept. 

Jennifer:  And he thinks we also have to remember some of this isn’t new.

Sami Mäkeläinen: We’ve sort of been gaming the interviews forever. Like when you have a human interview, you have even courses on how to behave there, what to say, what to do, what to wear. We will increasingly be utilizing, ‘quote unquote’ intelligent agents to do our bidding for us.

Jennifer:  But he says it’s important to realize hiring was never perfect to begin with.

Sami Mäkeläinen: It’s easy to sort of start blaming the AI and the use of AI for many of these situations. And in many cases it’s warranted, right? I don’t think anybody can say that it was a perfect process to begin with and, you know, then we come to like, how do we deploy these systems? How do we use them, how much responsibility do we give to them? The devil is always in the details. So on one level, I would want to completely agree that the cost of getting hiring wrong is too high. But on the other hand, we’ve essentially gotten it wrong as a society for decades. 

Jennifer:  In a moment, we look at some of what’s being done at the university level, to help students get ready to engage with these systems, when we come back.

[Midroll]

Jennifer: This new era in hiring can feel a little overwhelming for people looking for a job, who don’t always know how and when they’re being tested, or what exactly they’re being tested for. 

People are looking for ways to better prepare to engage with these AI systems, and it’s moved beyond individual curiosity and grassroots organizing. AI companies are also in this space, providing tools and training for job seekers. 

One of them is a company called VMock, which has business deals with hundreds of colleges and universities. Its AI-based software corrects hundreds of resumes to be more easily read by machines, and gives feedback on video interviews. 

Salil Pande: And in that first glance, if you actually went to the no pile, then the story is over. You might be the smartest kid that is coming out of your undergraduate program. You’re gone, you’re not going to get the second chance. The world has moved on to a very fast cycle, and it’s a blip and you’re either yes or no.

Jennifer: Salil Pande is one of the company’s founders.

He says even just a few years ago, every step in the hiring process was done by a human. That’s no longer the case, especially for companies that hire a lot of recent college graduates and people with less professional experience, because that makes it harder for hiring managers to know who is the best person for the job.  

Salil Pande: Eventually when there is a high probability of success, that’s when human to human time interaction is happening, which means that early part, which was the rejection part has already been given to technology that, Hey, technology filter me the right resume, filter me the right, uh, LinkedIn profile, filter me the good pitches and also do some psychometric tests and everything put it all together for me. And then once all of this is done go schedule an interview for me, and that’s when I’m going to go, boom, one hour interview, I’m done.

Jennifer: VMock’s mission is to prepare students for a hiring field where their resumes and video interviews have to appeal to AI first.

Salil Pande: If you have not optimized your resume for that job description, the applicant tracking system that actually is kind of like working around that job description may not filter you into the yes pile. You may be in the no pile or a maybe pile. So, you have to think about how you’re going to just go through this early process where you’re going to deal with applicant tracking system. You’re going to deal with ah artificial intelligence system that is going to recognize your, your interviews, and everything else. What’s a good pitch? How do you highlight your top skills? What skills recruiters are looking for? What skills do you currently have? How do you present your skills when you don’t have the skill, but you have something else that could be taken as an example of that other skill, and you can actually present.

Jennifer: Pande says that career centers at universities are outmatched by the technology now employed by many large companies. That’s where he says VMock’s AI can help students beat the AI they’re encountering when they look for their first job. 

And one school using it is New York University.

Gracy Sarkissian: So students are encountering these systems early, earlier and earlier on. And I would say, you know, career centers are trying to keep up with these changes so that we can prepare our students more effectively when they don’t know what to expect. I think it’s this big unknown to students. And so our job is to demystify it a little bit. 

Jennifer: Gracy Sarkissian leads the Career Center at NYU. 

She says she brought in VMock to make the time career coaches have with students more efficient. 

Gracy Sarkissian: And once you integrate that feedback, you’ll see the score go up. So it just gives students some practice at not only getting feedback, but also seeing how a system might react or respond to their resume.

Jennifer: And she has some advice for job seekers trying to impress both AI and humans.

Gracy Sarkissian: Some students tell me, you know, I did what you guys told me to do. I made sure that my resume was filled with keywords. And now it sounds like, kind of like a cheesy marketing document. And so what I say, I understand, I hear you. Have two versions of your resume. Have the one that you’re going to apply to when you go through systems and have one that you are going to hand to someone, if you meet with someone and you want to impress them. And so that has helped students kind of say, okay, I get it. This is something that I have to do so that my resume gets picked up. 

Jennifer: Her team also prepares students for one way video interviews. 

Gracy Sarkissian: We don’t realize how much input we get when we’re having a one-on-one conversation with someone, or you’re, even if it’s a group or panel interview. You are looking at people in the eye, you are getting positive feedback. You might get negative feedback that might make you adjust your question. If you were nervous, there’s a good chance that you’ll feel a little empathy from someone in the room. Whereas when we’re interviewing with AI, it feels like a stranger, right? It feels like a stranger without a face. It’s a blank screen. And oftentimes you’re staring at yourself and so it can be a lonely process I think, um, for some of our students. 

Jennifer: It’s one of the reasons why she believes, in a tight labor market, employers might want to rethink some of these strategies, especially if they want to attract top talent.

Gracy Sarkissian: You know, we know Gen-Z students are, are a values driven generation, right? They want to make sure that they can connect with the culture of the organization. That the mission and values of the organization are, are in line with those. And that’s something that’s difficult to assess when you were interviewing in a virtual way. When you’re not meeting people, when you’re not speaking to people at an interview, when you’re not walking through an office and just kind of seeing work happen.  

Jennifer: But in a world where millions of companies receive millions of applications, tailoring to individuals isn’t something that scales.

And that lands us back in a position we’ve been in before: black-box decision-making, applied to everyone, leading to unintended consequences.

As we wrap up the second season of this podcast—and our four-part investigation of how AI is being used to make hiring decisions—we see the promise of using algorithms. But the reporting makes clear this is an emerging industry with many moving parts, and at least a few tools that just aren’t there yet. And in some cases, might actually do the opposite of what they intend. 

We’ve seen systems with bias against women, and people with disabilities, even a tool that predicts people named Jared will be successful on the job. Other tools rated candidates highly on their English language skills, though the recordings didn’t contain one word of English. We also uploaded recordings that had nothing to do with the interview questions asked, but were rated as a match for the skills required to do the job.

With little oversight, there’s also little transparency about what goes on inside the black box, and why the software makes the decisions it makes. Companies that build these tools aren’t required to tell anyone how their systems work, or why they should be trusted.

The good news? In many ways, we’re still at the beginning. And there’s opportunity to build better systems, if we’re honest about what’s not working, where the machines are coming up short, and if we make a decision not to value scale, efficiency, or speed above all.

[CREDITS]

Jennifer:  This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.

That’s it for Season Two, we’re going to take a break and see you back here in the Fall.

Thanks so much for listening. I’m Jennifer Strong.

How the idea of a “transgender contagion” went viral—and caused untold harm

The results were in line with what one might expect given those sources: 76.5% of parents surveyed “believed their child was incorrect in their belief of being transgender.” More than 85% said their child had increased their internet use and/or had trans friends before identifying as trans. The youths themselves had no say in the study, and there’s no telling if they had simply kept their parents in the dark for months or years before coming out. (Littman acknowledges that “parent-child conflict may also explain some of the findings.”) 

Arjee Restar, now an assistant professor of epidemiology at the University of Washington, didn’t mince words in her 2020 methodological critique of the paper. Restar noted that Littman chose to describe the “social and peer contagion” hypothesis in the consent document she shared with parents, opening the door for biases in who chose to respond to the survey and how they did so. She also highlighted that Littman asked parents to offer “diagnoses” of their child’s gender dysphoria, which they were unqualified to do without professional training. It’s even possible that Littman’s data could contain multiple responses from the same parent, Restar wrote. Littman told MIT Technology Review that “targeted recruitment [to studies] is a really common practice.” She also called attention to the corrected ROGD paper, which notes that a pro-gender-affirming parents’ Facebook group with 8,000 members posted the study’s recruitment information on its page—although Littman’s study was not designed to be able to discern whether any of them responded.

But politics is blind to nuances in methodology. And the paper was quickly seized by those who were already pushing back against increasing acceptance of trans people. In 2014, a few years before Littman published her ROGD paper, Time magazine had put Laverne Cox, the trans actress from Orange Is the New Black, on its cover and declared a “transgender tipping point.” By 2016, bills across the country that aimed to bar trans people from bathrooms that fit their gender identity failed, and one that succeeded, in North Carolina, cost its Republican governor, Pat McCrory, his job.  

Yet by 2018 a renewed backlash was well underway—one that zeroed in on trans youth. The debate about trans youth competing in sports went national, as did a heavily publicized Texas custody battle between a mother who supported her trans child and a father who didn’t. Groups working to further marginalize trans people, like the Alliance Defending Freedom and the Family Research Council, began “printing off bills and introducing them to state legislators,” says Gillian Branstetter, a communications strategist at the American Civil Liberties Union.

The ROGD paper was not funded by anti-trans zealots. But it arrived at exactly the time people with bad intentions were looking for science to buoy their opinions. The paper “laundered what had previously been the rantings of online conspiracy theorists and gave it the resemblance of serious scientific study,” Branstetter says. She believes that if Littman’s paper had not been published, a similar argument would have been made by someone else. Despite its limitations, it has become a crucial weapon in the fight against trans people, largely through online dissemination. “It is astonishing that such a blatantly bad-faith effort has been taken so seriously,” Branstetter says.

Littman plainly rejects that characterization, saying her goal was simply to “find out what’s going on.” “This was a very good-faith attempt,” she says. “As a person I am liberal; I’m pro-LGBT. I saw a phenomenon with my own eyes and I investigated, found that it was different than what was in the scientific literature.” 

One reason for the success of Littman’s paper is that it validates the idea that trans kids are new. But Jules Gill-Peterson, an associate professor of history at Johns Hopkins and author of Histories of the Transgender Child, says that is “empirically untrue.” Trans children have only recently started to be discussed in mainstream media, so people assume they weren’t around before, she says, but “there have been children transitioning for as long as there has been transition-related medical technology,” and children were socially transitioning—living as a different gender without any medical or legal interventions—long before that.

Many trans people are young children when they first observe a dissonance between how they are identified and how they identify. The process of transitioning is never simple, but the explanation of their identity might be.

Inside the software that will become the next battle front in US-China chip war

EDA software is a small but mighty part of the semiconductor supply chain, and it’s mostly controlled by three Western companies. That gives the US a powerful point of leverage, similar to the way it wanted to restrict access to lithography machines—another crucial tool for chipmaking—last month. So how has the industry become so American-centric, and why can’t China just develop its own alternative software? 

What is EDA?

Electronic design automation (also known as electronic computer-aided design, or ECAD) is the specialized software used in chipmaking. It’s like the CAD software that architects use, except it’s more sophisticated, since it deals with billions of minuscule transistors on an integrated circuit.

Screenshot of KiCad, a free EDA software.

JON NEAL/WIKIMEDIA COMMONS

There’s no single dominant software program that represents the best in the industry. Instead, a series of software modules are often used throughout the whole design flow: logic design, debugging, component placement, wire routing, optimization of time and power consumption, verification, and more. Because modern-day chips are so complex, each step requires a different software tool. 

How important is EDA to chipmaking?

Although the global EDA market was valued at only around $10 billion in 2021, making it a small fraction of the $595 billion semiconductor market, it’s of unique importance to the entire supply chain.

The semiconductor ecosystem today can be seen as a triangle, says Mike Demler, a consultant who has been in the chip design and EDA industry for over 40 years. On one corner are the foundries, or chip manufacturers like TSMC; on another corner are intellectual-property companies like ARM, which make and sell reusable design units or layouts; and on the third corner are the EDA tools. All three together make sure the supply chain moves smoothly.

From the name, it may sound as if EDA tools are only important to chip design firms, but they are also used by chip manufacturers to verify that a design is feasible before production. There’s no way for a foundry to make a single chip as a prototype; it has to invest in months of time and production, and each time, hundreds of chips are fabricated on the same semiconductor base. It would be an enormous waste if they were found to have design flaws. Therefore, manufacturers rely on a special type of EDA tool to do their own validation. 

What are the leading companies in the EDA industry?

There are only a few companies that sell software for each step of the chipmaking process, and they have dominated this market for decades. The top three companies—Cadence (American), Synopsys (American), and Mentor Graphics (American but acquired by the German company Siemens in 2017)—control about 70% of the global EDA market. Their dominance is so strong that many EDA startups specialize in one niche use and then sell themselves to one of these three companies, further cementing the oligopoly. 

What is the US government doing to restrict EDA exports to China?

US companies’ outsize influence on the EDA industry makes it easy for the US government to squeeze China’s access. In its latest announcement, it pledged to add certain EDA tools to its list of technologies banned from export. The US will coordinate with 41 other countries, including Germany, to implement these restrictions. 

Bright LEDs could spell the end of dark skies

A global view of Earth assembled from data acquired by the Suomi National Polar-orbiting Partnership (NPP) satellite.

NASA

Specifications in the current proposal provide a starting point for planning, including a color temperature cutoff of 3,000 K in line with Pittsburgh’s dark-sky ordinance, which passed last fall. However, Martinez says that is the maximum, and as they look for consultants, they’ll be taking into account which ones show dark-sky expertise. The city is also considering—budget and infrastructure permitting—a “network lighting management system,” a kind of “smart” lighting that would allow them to control lighting levels and know when there is an outage. 

Martinez says there will be citywide engagement and updates on the status as critical milestones are reached. “We’re in the evaluation period right now,” she says, adding that the next milestone is authorization of a new contract. She acknowledges there is some “passionate interest in street lighting,” and that she too is anxious to see the project come to fruition: “Just because things seem to go quiet doesn’t mean work is not being done.”

While they aren’t meeting with light pollution experts right now, Martinez says the ones they met with during the last proposal round—Stephen Quick and Diane Turnshek of CMU—were “instrumental” in adopting the dark-sky ordinance.


In recent months, Zielinska-Dabkowska says, her “baby” has been the first Responsible Outdoor Light at Night Conference, an international gathering of more than 300 lighting professionals and light pollution researchers held virtually in May. Barentine was among the speakers. “It’s a sign that all of this is really coming along, both as a research subject but also something that attracts the interest of practitioners in outdoor lighting,” he says of the conference.

There is more work to be done, though. The IDA recently released a report summarizing the current state of light pollution research. The 18-page report includes a list of knowledge gaps to be addressed in several areas, including the overall effectiveness of government policies on light pollution. Another is how much light pollution comes from sources other than city streetlights, which a 2020 study found accounted for only 13% of Tucson’s light pollution. It is not clear what makes up the rest, but Barentine suspects the next biggest source in the US and Europe is commercial lighting, such as flashy outdoor LED signs and parking lot lighting. 

Working with companies to reduce light emissions can be challenging, says Clayton Trevillyan, Tucson’s chief building officer. “If there is a source of light inside the building, technically it’s not regulated by the outdoor lighting code, even if it is emitting light outside,” Trevillyan says. In some cases, he says, in order to get around the city’s restrictions, businesses have suspended illuminated signs inside buildings but aimed them outside. 

For cities trying to implement a lighting ordinance, Trevillyan says, the biggest roadblocks they’ll face are “irrelevant” arguments, specifically claims that reducing the brightness of outdoor lighting will cut down on advertising revenue and make the city more vulnerable to crime. The key to successfully enforcing the dark-sky rules, he says, is to educate the public and refuse to give in to people seeking exceptions or exploiting loopholes. 

Light pollution experts generally say there is no substantial evidence that more light amounts to greater safety. In Tucson, for example, Barentine says, neither traffic accidents nor crime appeared to increase after the city started dimming its streetlights at night and restricting outdoor lighting in 2017. Last year, researchers at the University of Pennsylvania analyzed crime rates alongside 300,000 streetlight outages over an eight-year period. They concluded there is “little evidence” of any impact on crime rates on the affected streets—in fact, perpetrators seemed to seek out better-lit adjacent streets. Barentine says there is some evidence that “strategically placed lighting” can help decrease traffic collisions. “Beyond that, things get murky pretty quickly,” he says.
