

Podcast: Hired by an algorithm




If you’ve applied for a job lately, it’s all but guaranteed that your application was reviewed by software—in most cases, before a human ever laid eyes on it. In this episode, the first in a four-part investigation into automated hiring practices, we speak with the CEOs of ZipRecruiter and CareerBuilder, and one of the architects of LinkedIn’s algorithmic job-matching system, to explore how AI is increasingly playing matchmaker between job searchers and employers. But while software helps speed up the process of sifting through the job market, algorithms have a history of biasing the opportunities they present to people by gender, race…and in at least one case, whether you played lacrosse in high school.

We Meet:

  • Mark Girouard, Attorney, Nilan Johnson Lewis
  • Ian Siegel, CEO, ZipRecruiter
  • John Jersin, former Vice President of Product Management, LinkedIn
  • Irina Novoselsky, CEO, CareerBuilder 


This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, and Anthony Green with special thanks to Karen Hao. We’re edited by Michael Reilly.



Jennifer: Searching for a job can be incredibly stressful, especially when you’ve been at it for a while. 

Anonymous Jobseeker: At that moment in time I wanted to give up, and I was like, all right, maybe this, this industry isn’t for me or maybe I’m just dumb. And I was just like, really beating myself up. I did go into the imposter syndrome, when I felt like this is not where I belong.

Jennifer: And this woman, who we’ll call Sally, knows the struggle all too well. She’s a black woman with a unique name trying to break into the tech industry. Since she’s criticizing the hiring methods of potential employers, she’s asked us not to use her real name.

Anonymous Jobseeker: So, I use Glassdoor, I use LinkedIn, going to the website specifically, as well as other people in my networks to see, hey, are they hiring? Are they not hiring? And yeah,  I think in total I applied to 146 jobs. 

Jennifer: And she knows that exact number, because she put every application in a spreadsheet.

Anonymous Jobseeker: I have a tracker in Excel. So every time I apply for a job, I use a tracker. After I apply, I look up recruiters on LinkedIn, I shoot them a quick message. Sometimes I got a reply, sometimes I didn’t.

Jennifer: Tech companies are scrambling to hire more women and people of color. She’s both, and she started to wonder why she wasn’t getting more traction with her job search. 

Anonymous Jobseeker: I’m a military veteran. I was four years active, four years reserve, and I went on two deployments. I’m from the Bronx. I’m a project baby. I completed my bachelor’s degree in information technology where there’s rarely any black people or any black women in general. 

Jennifer:  And, a few weeks ago, she graduated again. Now, she also has a master’s degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design. 

For many of the software developer jobs she applied to, Sally was assessed not by a human but by artificial intelligence—in the form of services like resume screeners or video interviews. 

Anonymous Jobseeker: I’ve been involved in many HireVues, many Cognify gaming interviews, and playing with my resume so that the AI could pick up my resume. Because being a black woman, you remain a little on the unknown side, so playing with resumes just to get picked up.

Jennifer: Using AI in the hiring process got a huge push during the pandemic, because these tools make it easy to hire candidates without in-person contact. 

Anonymous Jobseeker: But it was just weird not having human interaction because it’s like, okay, so who’s picking me, is this robot thing picking me or is a human being picking me? Am I going to be working with robots? Or am I going to be working with humans?

Jennifer: These interactions are almost always one-sided, and she says that added to her doubts. 

Anonymous Jobseeker: For me, being a military veteran, being able to take tests and quizzes or being under pressure is nothing for me. But I don’t know why the cognitive tests gave me anxiety, but I think it’s because I knew that it had nothing to do with software engineering—that’s what really got me. But yeah, so basically you would have to solve each puzzle within a timeframe and if you didn’t get it, that’s where you lose points. So even though I got each one right, because I was a bit slower, it was like, no—reject, reject, reject, reject.

Jennifer: The first place you might find AI in a hiring process is a tool that extracts information from resumes. It tries to predict the most successful applicants, and sorts those resumes into a pile. 

Anonymous Jobseeker: So yeah, it wasn’t later, until maybe about 130 applications, where I met other people who were like 200 applications in, or 50 applications in. And we all were just like, what is this? 

Jennifer: And it’s only the tip of the iceberg. There’s also chatbots, AI-based video games, social media checks, and then come the automated interviews. 

These are one-way video interviews where an algorithm analyzes a job candidate’s word choice, voice, and sometimes—even their facial expressions.  

Anonymous Jobseeker: It’s the tech industry. I don’t understand how the tech industry makes it difficult to get in, but then they complain that they don’t have enough people to hire.

Jennifer: At this point Sally is discouraged after loads of rejection.

But then, she has a realization.

Anonymous Jobseeker: And I was just like, all right, so it’s not me—it’s the AI. And then that’s when I got my confidence back and then I started reapplying to other things. 

Jennifer: It can be hard, or even impossible, to know how or why AI-systems make the decisions they do. 

But Sally wonders if one reason she wasn’t selected is that Black women, and college students who get a later start, are rarely represented in the training data used for these algorithms. 

Anonymous Jobseeker: Cause if this is me being a non-traditional student, I wonder other people, like if there was others, if they get affected by this. And then it’s like, do you email the company to let them know? Or it’s just like, because they told you no, forget them, like, no! Like, I don’t know, it’s like, like, how do you make something better without, I guess, being defensive.

Jennifer: I’m Jennifer Strong and with most people applying for jobs now screened by an automated system—we’re launching an investigation into what happens when algorithms try to predict how successful an applicant will be.

In a four-part series we’ll lift the curtain on how these machines work, dig into why we haven’t seen any meaningful regulation, and test some of these tools ourselves.


Today’s job hunts are a far cry from the past, when the process started by dressing up to go make your best first impression.


Man: This looks alright. Tell me, why are you interested in this job?

Young Man: I need a steady job Mr. Wiley, with the chance to go places. 

[music up]

Jennifer: These days, many people start the process having to get past a machine.

System: I will pass you to our AI interviewer now. Please wait a second. Hello. I am Kristine. Let’s do a quick test run to get you familiar with the experience. Good luck with your interview.  Just remember, please relax and treat this as a normal conversation.

Hilke: So, I first heard all about this new world of machines in hiring while chatting with a cab driver. 

Jennifer: Hilke Schellmann is an Emmy award-winning reporter writing a book about AI and hiring, and she’s been investigating this topic with us.

Hilke: So this was in late 2017. I was at a conference in Washington DC and needed a ride to the train station. And I always ask how the drivers are doing. But, this driver’s reaction was a bit different. He hesitated for a second and then shared with me that he had had a weird day because he had been interviewed by a robot. That got me interested, and I asked him something like: “Wait, a job interview by a robot? What?” He told me that he had applied for a baggage handler position at an airport, and instead of a human being, a robot had called him that afternoon and asked him three questions. I had never heard of job interviews conducted by robots and made a mental note to look into it. 

Jennifer:  Ok, you’ve spent months digging into this. So, what have you learned?

Hilke: Hiring is profoundly changing from human hiring to hiring by machines. At the time, little did I know that phone interviews with machines were just the beginning. When I started to dig in, I learned that there are AI tools that analyze job applicants’ facial expressions and their voices, and try to gauge your personality from your social media accounts. It feels pretty all-encompassing. A couple of times I actually had to think for a minute about whether I was comfortable running my own information through these systems.

Jennifer:  And who’s using these systems?

Hilke: Well, at this point most of the Fortune 500 companies use some kind of AI technology to screen job applicants. Unilever, Hilton, McDonald’s, IBM, and many, many other large companies use AI in their hiring practices. 

To give you an idea of just how widespread this is—I attended an HR Tech conference a few months ago, and it felt like all of the tools for sale now have AI built in. 

Vendors I have been speaking to say that their tools make hiring more efficient and faster, save companies money, and pick the best candidates without any discrimination. 

Jennifer: Right, because the computer is supposed to be making objective hiring decisions and not potentially biased ones, like humans do. 

Hilke: Yes. As we know, humans struggle to make objective hiring decisions. We love small talk, and finding connections to the people we’re trying to hire, like where they’re from. We often like it if folks went to the same schools we did. And all of that’s not relevant to whether someone can do a job. 

Jennifer:  And what do we know at this point about which tools work and which don’t?

Hilke: We don’t really know which work, and which don’t, because these tools don’t have to be licensed or tested in the United States. Jen—you and I could build an AI hiring tool and sell it. Most vendors claim that their algorithms are proprietary black boxes, but they assure us that their tools are tested for bias. That’s mandated by the federal government, but so far as I can tell there isn’t much third-party checking happening. 

Jennifer:  So, no one gets to see inside these tools?

Hilke: Only a few get access, like external auditors after an algorithm is already in use. And then there are lawyers and management psychologists who often are hired by the company that wants to potentially buy a tool—they have the financial power to strong-arm a vendor into opening up the black box. 

So, for example, I spoke with Mark Girouard. He’s an employment lawyer based in Minneapolis and one of the few people who’s ever gotten access. A few years back, he examined a resume screener that was trained on resumes of successful employees. It looked at what the resumes of high performers in this job have in common, and here’s what he found. 

Mark Girouard: Two of the biggest predictors of performance were having played high school lacrosse or being named Jared. Just based on the training data it was trained with, those correlated with performance. You know, that was probably a very simple tool where the data set it was fed was, here’s a bunch of resumes, and here are individuals who are strong performers and here are their resumes, and the tool just finds those correlations and says, these must be predictors of performance.

Hilke: So could somebody say, Oh, playing lacrosse in high school, maybe you’re very good at teamwork. Teamwork is something that’s job relevant here.

Mark Girouard: Right, or why not field hockey? And I would say it really was, you know, at some degree it was a lack of human oversight. There’s not a person opening the hood and seeing like what’s the machine actually doing.
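The failure mode Girouard describes is easy to reproduce. Here is a toy sketch on made-up data, not the tool he audited: a naive screener that scores resume tokens purely by how their presence correlates with the "high performer" label will surface irrelevant tokens like a sport or a first name whenever the training resumes skew that way.

```python
# Toy illustration of a naive resume screener: rank tokens by how strongly
# their presence correlates with the "high performer" label in training data.
# The resumes and labels below are invented for the example.

from collections import defaultdict

def token_correlations(resumes, labels):
    """For each token: P(high performer | token present) - P(high performer)."""
    base_rate = sum(labels) / len(labels)
    counts = defaultdict(lambda: [0, 0])  # token -> [appearances, high-performer appearances]
    for text, label in zip(resumes, labels):
        for tok in set(text.lower().split()):
            counts[tok][0] += 1
            counts[tok][1] += label
    return {tok: hits / n - base_rate for tok, (n, hits) in counts.items()}

resumes = [
    "jared lacrosse captain python developer",
    "jared lacrosse java developer",
    "python developer data pipelines",
    "java developer field hockey",
]
labels = [1, 1, 1, 0]  # 1 = rated a high performer

scores = token_correlations(resumes, labels)
# "lacrosse" and "jared" score as positive "predictors of performance,"
# while the genuinely job-relevant token "developer" scores zero, because
# it appears on every resume and so carries no signal about the label.
print(round(scores["lacrosse"], 2), round(scores["developer"], 2))  # 0.25 0.0
```

This is exactly the "lack of human oversight" problem: nothing in the math distinguishes a skill from an accident of who happened to be in the training set.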

Jennifer:  Yeah and that’s why we decided to test some of these systems and see what we’d find. 

Hilke: So, in this test I answered every question by reading the Wikipedia text of the psychometrics entry in German. I’d assumed I’d just get back error messages saying, “hey, we couldn’t score your interview,” but actually what happened was kind of interesting. It assessed me speaking German but gave me a competency score for English.

Jennifer: But we begin with a closer look at job sites like LinkedIn and ZipRecruiter, because they’re trying to match millions of people to millions of jobs… and in a weird twist, these platforms are partially responsible for why companies need AI tools to weed through applications in the first place. 

They made it possible for job seekers to apply to hundreds of jobs with a click of a button. And now companies are drowning in millions of applications a year, and need a solution that scales. 

Ian Siegel: Oh, it’s, it’s dwarfing humans. I mean, I, I don’t like to be Terminator-ish in my marketing of AI, but look, the dawn of robot recruiting has come and went, and people just haven’t caught up to the realization yet.

Ian Siegel: My name is Ian Siegel. I’m the CEO and co-founder of ZipRecruiter.

Jennifer:  It’s a jobs platform that runs on AI.

Ian Siegel: Forget AI, ask yourself what percentage of people who apply to a job today will have their resume read by a human. Somewhere between 75 and a hundred percent are going to be read by software. A fraction of that is going to be read by a human after the software is done with it. 

Jennifer: It fundamentally changes the way a resume needs to be written in order to get noticed, and we’ll get into that later in the series. 

But Siegel says something else is accelerating the shift in how we hire: employers only want to review a handful of candidates.  

Ian Siegel: There’s effectively this incredible premium put on efficiency and certainty, where employers are willing to pay up to 25% of the first year of a person’s salary in order to get a handful of quality candidates that are ready to interview. And so, I think, that we’re going to see adoption of, whether it’s machine learning or deep learning or whatever you want to call it, as the norm and the like table stakes to be in the recruiting field, in the literal like next 18 months. Not, I’m not talking five years out, I’m not talking the future of work, I’m talking about the now of work. 

Jennifer: Here’s how he describes his platform.

Ian Siegel: So, an employer posts a job, and we say other employers who have posted a job like this have liked candidates who look like that. And then we also start to learn the custom preferences of every employer who uses our service. So as they start to engage with candidates, we say, oh, okay, there’s tons of quality signal that they’re giving us from how they engage with these candidates. Like, do they look at a resume more than once? Do they give a thumbs up to a candidate? And then we can just start doing a, let’s go find more candidates who look like this candidate exercise, which is another thing that these algorithms are extremely good at. 

Jennifer: In other words, he thinks AI brings organization and structure to the hiring chaos. 

Ian Siegel: You end up with a job market that no longer relies on random chance, the right person happening upon the right job description or the right employer happening upon the right job seeker. But rather you have software that is thrusting them together, rapidly making introductions, and then further facilitating information to both sides along the way that encourages them to go faster, or stay engaged.

Jennifer: For example, job seekers get notified when someone reads their resume.

Ian Siegel: They get a feeling like there is momentum, something happening, so that everybody has as much information as possible to make the best decisions and take the best actions they can to get the result they’re looking for. 

Jennifer:  The AI also notifies employers if a candidate they like is being considered by another company. 

Ian Siegel: And if you’re wondering like, how good is it? I mean, go to YouTube, pick a video you like, and then look at the right rail, like, look at how good they are at finding more stuff that you are likely to like. That is the wisdom of the crowd. That is the power of AI. We’re doing the exact same thing inside of the job category for both employers and for job seekers. 

Jennifer: Like YouTube, their algorithm is a deep neural network.

And like all neural networks, it’s not always clear to humans why an algorithm makes certain decisions. 

Ian Siegel: It’s a black box. The way you measure it is you look at things like satisfaction metrics, speed by which jobs are filled, speed at which job seekers find work. You don’t know why it’s doing what it’s doing, but you can see patterns in what it’s doing.  

Jennifer:  Like, the algorithm learned that job seekers in New York’s tech industry, who applied to positions in LA, were often hired. 

Ian Siegel: We’ve encountered a number of sort of like astute observations or insights that the algorithm was able to derive just by the training data that we fed it. We wouldn’t have said like any job posting in LA, post in LA and post in New York. Like that’s just not something you would think to do. It’s a level of optimization beyond what humans would think to go to.

Jennifer: And he says satisfaction has jumped more than a third among hiring managers since introducing these deep neural networks.

Ian Siegel: So, like you’re getting into a realm of accomplishment and satisfaction that was literally unimaginable five years ago, like this is bleeding edge technology and the awareness of society has not caught up to it. 

Jennifer: But bias in algorithmic systems is something people are becoming more aware of. Going back to that YouTube analogy: the platform got in trouble for not realizing its algorithm served more and more radical content to certain users.

Ian Siegel: It is a fundamental problem that affects the job category. And we take it deadly seriously at ZipRecruiter. We’ve been thinking about it since we first introduced these algorithms. We were aware of the potential for the bias to permeate our algorithms. You could be, theoretically, perfecting bias, you know, by giving people exactly what they want. You give them, I don’t know, more and more old white men maybe, for example, whatever the bias would spit out.

Jennifer: That’s because the AI learns as it goes, based on feedback loops. Their solution is not to let the AI analyze specific demographic information, like names, addresses, or gendered terms like “waitress.” 

Ian Siegel: So, we strip a bunch of information from the algorithms, and I believe we are as close to a merit based assessment of people as can currently be done.
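ZipRecruiter hasn’t published its pipeline, so here is only a rough sketch of the stripping step Siegel describes: drop identity fields and neutralize gendered job titles before a profile reaches the matching model. The field names and word list are assumptions for the example.

```python
# Illustrative redaction pass (not ZipRecruiter's actual code): remove fields
# that identify a person directly, and swap gendered job titles for neutral
# ones, so the matcher never sees those signals.

import re

GENDERED_TERMS = {"waitress": "server", "waiter": "server",
                  "salesman": "salesperson", "chairman": "chair"}
DROPPED_FIELDS = {"name", "address", "photo_url", "date_of_birth"}

def redact(profile):
    cleaned = {k: v for k, v in profile.items() if k not in DROPPED_FIELDS}

    def neutralize(text):
        for term, neutral in GENDERED_TERMS.items():
            text = re.sub(rf"\b{term}\b", neutral, text, flags=re.IGNORECASE)
        return text

    return {k: neutralize(v) if isinstance(v, str) else v
            for k, v in cleaned.items()}

profile = {"name": "Jane Roe", "address": "12 Main St",
           "experience": "Waitress, then head waitress at a bistro"}
print(redact(profile))  # {'experience': 'server, then head server at a bistro'}
```

As the rest of the episode shows, stripping explicit fields like this still leaves behavioral signals in the data.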

Jennifer:  But how can ZipRecruiter and other job sites know for sure if there’s bias on their platforms, without knowing why the algorithm matches specific people to jobs? 

One person asking this question is John Jersin. He’s the former Vice President of Product at LinkedIn. And, a few years back, he found some unsettling trends when he took a closer look at the data it gathers on its users.

And he says it all starts with what the AI is programmed to predict.

John Jersin: What AI does in its most basic form is try to optimize something. So, it depends a lot on what that AI is trying to optimize, and then also on whether there are any constraints on that optimization that have been placed on the AI. Most platforms are trying to optimize something like the number of applications per job, or how likely someone is to respond to a message. Some platforms, and this was a key focus at LinkedIn, try to go deeper than that and try to optimize for the number of hires. So not just more people applying, but also the right people applying.

Jennifer: The largest platforms rely heavily on three types of data they collect. That data gets used to make decisions about which opportunities job seekers see, and which resumes recruiters see. 

John Jersin: The three types of data are the explicit data. What’s on your profile, the things that you can actually read, the implicit data, which is things that you can infer from that data. So, for example, if you wrote down on your profile, you’re a software engineer, and you worked at this particular company, we might be able to infer that you know certain kinds of technologies. That you know how to code, for example, is a pretty obvious one, but it gets a lot more sophisticated than that. The third type of data is behavioral data. What actions you’re taking on the platform can tell us a lot about what kinds of jobs you think are fit for you, or which kinds of recruiters reaching out about opportunities are more relevant to you.

Jennifer:  This all looks great on paper. The algorithm doesn’t include the gender or names of applicants, their photos or pronouns. So, in theory there shouldn’t be any gender or racial bias. Right? But there are differences in the data. 

John Jersin: So we found, for example, that men tend to be a little bit more verbose. They tend to be a little bit more willing to identify skills that they have, maybe at a slightly lower level than women who have those same skills, who would be a little less willing to identify those skills as something that, that, they want to be viewed as having. So, you end up with a profile disparity that might mean there’s slightly less data available for women, or women might put data on their profile that indicates a slightly higher level of skill or higher level of experience for the same statement, versus what a man might put on their profile.

Jennifer: In other words, the algorithm doesn’t get told who’s a man and who’s a woman, but the data gives it away: Many women only add skills to their resumes once they’ve mastered them, but many men add skills much earlier. So, in an automated world, it often appears that men have more skills than women, based on their profiles.  

Women, on average, understating their skills, and men, on average, exaggerating theirs, is of course also a problem in traditional hiring. But Jersin found other signals in the data that the AI picks up on as well. 

John Jersin: How often have you responded to messages like this? How aggressive are you when you’re applying to jobs? How many keywords did you put on your profile, whether or not they were fully justified by your experience. And so the algorithm will make these decisions based on something that you can’t hide from the recruiter—you can’t turn off. And to some extent, that’s the algorithm working exactly as it was intended to work. It’s trying to find any difference it can to get this job in front of somebody who’s more likely to apply or to get this person in front of a company who’s more likely to reach out to them. And they’re going to respond as a result. But what happens is these behavioral differences, which can be linked to your cultural identity, to your gender identity, what have you, they drive the difference. So, the bias is a part of the system. It’s built in.

Jennifer: So different genders behave differently on the platform, the algorithm picks up on that, and it has consequences. 

John Jersin: Part of what happens with these algorithms is they don’t know who’s who. They just know, hey, this person is more likely to apply for a job. And so they want to show that job to that person, because that’ll get an apply, that’ll score a point for the algorithm. It’s doing what it’s trying to do. One thing that you might start realizing is that, oh, well, if this group applies to a job a little bit more often than this other group, or this group’s willing to apply to a job that they’re not quite qualified for, it might be more of a step up for them than this other group, then the AI might make the decision to start showing certain jobs to one group versus the other. 

Jennifer: It means the AI may start recommending more men than women for a job, because men, on average, go after job opportunities more aggressively than women, and the AI may be optimized not just to recommend qualified people for a given job, but to recommend people who are also likely to apply for it. 

And on the other side of the marketplace, the same thing is probably happening as well. The AI may show less senior roles to qualified women and more senior roles to qualified men, just because men are more likely to apply to those jobs. 

John Jersin: Because of your gender, because of your cultural background, if that entails a certain behavioral difference, you’re going to receive different opportunities that other groups will not receive. Or worse, you might not be receiving opportunities that other groups are receiving simply because you behave a little bit differently on their platform. And we don’t really want our systems to work that way. We certainly shouldn’t want our systems to work that way to pick up on these potentially minor behavioral differences and then drive this radical difference in terms of opportunity and outcome as a result. But that’s what happens in AI.

Jennifer: Before he left LinkedIn, Jersin and his team built another AI to combat these tendencies. It tries to catch the bias before the other AI releases matches to recruiters. 

John Jersin: What representative results can do is rearrange the results so that it actually maintains that composition of people across those two different groups. So instead of, for example, the AI trying to optimize the people in that group and shifting more towards men, showing 70 percent men and 30 percent women, it’ll make sure that it continues to show 50 percent of each.
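LinkedIn hasn’t detailed the production system, but the idea Jersin describes can be sketched as a greedy re-ranker that keeps every prefix of a relevance-ordered list close to a target group composition. The 50/50 target and the group labels below are illustrative assumptions.

```python
# Simplified sketch of "representative results": re-rank a best-first
# candidate list so each prefix roughly preserves a target composition,
# instead of letting the relevance ordering drift toward one group.
# Not LinkedIn's actual algorithm; an illustration of the idea.

def rerank(candidates, target={"M": 0.5, "F": 0.5}):
    """candidates: list of (id, group), best-first by relevance score."""
    queues = {g: [c for c in candidates if c[1] == g] for g in target}
    out, shown = [], {g: 0 for g in target}
    while any(queues.values()):
        k = len(out) + 1
        # pick the non-empty group furthest below its target share
        g = min((g for g in target if queues[g]),
                key=lambda g: shown[g] / k - target[g])
        out.append(queues[g].pop(0))
        shown[g] += 1
    return out

ranked = [("a", "M"), ("b", "M"), ("c", "M"), ("d", "F"), ("e", "M"), ("f", "F")]
print([g for _, g in rerank(ranked)])  # ['M', 'F', 'M', 'F', 'M', 'M']
```

Within each group the original relevance order is preserved; only the interleaving between groups changes.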

Jennifer:  Basically, he built AI to fight existing AI, to try to make sure everyone has a fair chance to get a job. 

And he says examples like the problem Amazon faced when testing their in-house resume sorter helped pave the way for developers to understand how unintentional bias can creep into the most well-intentioned products. 

John Jersin: What they did was they built an AI, that worked in recruiting and basically tried to solve this matching problem. And the data set that they were using was from people’s resumes. And so, they would parse through those resumes and they would find certain words that were more correlated with being a fit for a particular job.

Jennifer: The tech industry is predominantly male… and since the algorithm was trained on these mostly male resumes, the AI picked up those preferences.

This led Amazon’s algorithm to downgrade resumes with words that suggested the applicants were female. 

John Jersin: Unfortunately, some of those words were things like she or her or him, which identified something that has absolutely nothing to do with qualification for a job and obviously identified something about gender.

Jennifer: Amazon fixed the program to be neutral to those particular words, but that was no guarantee against bias elsewhere in the tool. So executives decided it was best to scrap it. 

John Jersin: We’re talking about people’s economic opportunities, their careers, their ability to earn income, and support their families. And we’re talking about these people not necessarily getting the same opportunities presented to them because, they’re in a certain gender group, because they’re in a certain cultural group. 

Jennifer: We called other job platforms too to ask about how they’re dealing with this problem, and we’ll get to that in just a moment. 


Jennifer: To understand what job platforms are doing to combat the problem John Jersin described tackling during his days at LinkedIn, we reached out to other companies to ask about this gender drift. 

Indeed didn’t provide us with details. LinkedIn confirms it still uses representative results. And—Monster’s head of product management says he believes they’re not using biased input data, but the company isn’t testing for this problem specifically either.

Then we spoke to CareerBuilder, and they told us they aren’t seeing  the same problems LinkedIn found because their AI tries to match people to jobs in a very different way. 

They revamped their algorithm a couple of years back, because of a problem unrelated to bias. 

Irina Novoselsky: We really saw that there’s this big gap in the workforce. That companies today aren’t going to have the needs from the current workforce.

Jennifer: Irina Novoselsky is the Chief Executive of CareerBuilder.

Irina Novoselsky: It means that high-paying jobs are going to continue to increase in salary. Low-paying jobs are going to increase too, but it’s going to hollow out the middle class. 

Jennifer: She says that’s because supply and demand for these roles will continue to be an issue. And, the company uncovered the problem when analyzing 25 years of data from connecting candidates with jobs. 

Irina Novoselsky: And we used all of that information, that data, and leveraged our AI to create a skills based search. What does that mean? That means that you are matched and you look for jobs based on your skillset, on your transferable skill set. 

Jennifer: She says thinking about the workforce this way could help move employees from troubled sectors, where there’s too many people and not enough jobs, to ones that really need workers.

Irina Novoselsky: When COVID happened, the whole airline industry got massively impacted. And when you look at it, flight attendants were out of a job for a significant period of time. But one of the things that our data and our algorithms suggested was that they had a 95% match to customer service roles, which happened to be one of the highest sought-after roles and the biggest supply and demand imbalance, meaning that for every person looking there were over 10 jobs. And so when you match based on their skills, because they are dealing with problems, their communication skills, they’re logistics handlers, they’re project managers, and so when you look at that high customer satisfaction and customer interaction skill set, they were a perfect match.
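CareerBuilder’s model isn’t public. As a minimal sketch of what skills-based matching could look like, one could score a candidate by how much of a role’s required skill set their transferable skills cover. The skill lists and the coverage metric here are assumptions for the example, not CareerBuilder’s data.

```python
# Illustrative skills-based match: score = fraction of the role's required
# skills that the candidate's transferable skills cover. The skill lists
# are invented for the example.

def skill_match(candidate_skills, role_skills):
    cand, role = set(candidate_skills), set(role_skills)
    return len(cand & role) / len(role)

flight_attendant = ["communication", "problem solving", "logistics",
                    "customer interaction", "safety procedures"]
customer_service = ["communication", "problem solving", "customer interaction"]

print(skill_match(flight_attendant, customer_service))  # 1.0: full coverage
```

Note the asymmetry: the flight attendant covers every customer-service skill, but not vice versa, which is why the matching direction matters.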

Jennifer: But some skill matches are more surprising than others. 

Irina Novoselsky: Prison guards, when you look at their underlying skillset are a huge match for veterinary technicians: Empathy, communication, strength, being able to, to manage difficult situations. The by-product of this is increased diversity, because if you think about it, you’re now not looking for the same type of person that you’ve been looking for that has that experience. You’re widening your net and you’re able to get a very different type of person into that role, and we have seen that play out where our clients have been able to get a much more diverse skill set using our tools. 

Jennifer:  Her team also found differences when they took a closer look at the gender data. It turns out that a long list of required skills in a job description keeps many women away. And how it’s written also matters a great deal.

Irina Novoselsky: Women are more likely to respond to the words on a job description. And so if that job description isn’t written in gender-neutral tones, you’re not going to get the same number of men and women to apply.

Jennifer: CareerBuilder also has AI that suggests gender-neutral words in job descriptions, to avoid language like “coding ninja” or “rockstar,” which may deter some women from applying. 

The company also found women and people of color, on average, apply to fewer jobs overall. And they built an AI to fix that too.  

Irina Novoselsky: And so this is where we really believe that shift towards skills is so disruptive. Not only because it helps solve this gap, that we just don’t have enough supply for the demand that’s out there, but it’s opening up this net of people that normally wouldn’t have applied. We’re pushing the jobs to them. We’re telling this candidate, we’re applying on your behalf, you don’t have to do anything. 

Jennifer:  But how good are these measures at avoiding unintentional bias? 

Honestly it’s hard to know. More auditing is needed, and it’s incredibly hard to do from the outside. In part, because researchers only ever get to see a tiny fraction of the data that these algorithms are built on.

And making sure men and women get served the same opportunities is also a problem on social media. 

Facebook got in trouble for discriminatory job ads a few years back. It settled several lawsuits alleging the company and its advertisers were discriminating against older workers, by allowing companies to show job ads only to people of a certain age, and in that case excluding potential job applicants who are older. 

Facebook vowed to fix the problem of direct discrimination in ad targeting. But while it did so in theory, three scientists from the University of Southern California recently showed that, in practice, the unintentional discrimination Jersin found at LinkedIn is still present on Facebook. The researchers didn’t find the problem on LinkedIn.

It remains to be seen how regulators will deal with this problem. In the U-S that’s handled by the Equal Employment Opportunity Commission. 

It has recently taken a closer look at this industry but has yet to issue any guidelines. 

Meanwhile, if you’re wondering how Sally is doing, the woman searching for a job at the start of this episode: after 146 applications she’s accepted a job, but she was hired the old-fashioned way. 

Anonymous Jobseeker: I went straight for the interview, old fashioned style face-to-face and that’s how I got it. They basically hired me off of my projects and what I already did, which is what I like. ‘Cause it’s like, I’m showing you I can do the job. 


Jennifer:  Next episode, the rise of AI job interviews, and machines scoring people on the words they use, their tone of voice—sometimes even their facial expressions.

Join us as we test some of these systems. 

Hilke: So… I was scored six out of nine… and my skill level in English is competent. What’s really interesting about this is I actually didn’t speak English. 


Jennifer:  This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, and Anthony Green with special thanks to Karen Hao. 

We’re edited by Michael Reilly.

Thanks for listening… I’m Jennifer Strong.


AI and data fuel innovation in clinical trials and beyond




Laurel: So mentioning the pandemic, it really has shown us how critical and fraught the race is to provide new treatments and vaccines to patients. Could you explain what evidence generation is and then how it fits into drug development?

Arnaub: Sure. So as a concept, generating evidence in drug development is nothing new. It’s the art of putting together data and analyses that successfully demonstrate the safety and the efficacy and the value of your product to a bunch of different stakeholders, regulators, payers, providers, and ultimately, and most importantly, patients. And to date, I’d say evidence generation consists of not only the trial readout itself, but there are now different types of studies that pharmaceutical or medical device companies conduct, and these could be studies like literature reviews or observational data studies or analyses that demonstrate the burden of illness or even treatment patterns. And if you look at how most companies are designed, clinical development teams focus on designing a protocol, executing the trial, and they’re responsible for a successful readout in the trial. And most of that work happens within clinical dev. But as a drug gets closer to launch, health economics, outcomes research, epidemiology teams are the ones that are helping paint what is the value and how do we understand the disease more effectively?

So I think we’re at a pretty interesting inflection point in the industry right now. Generating evidence is a multi-year activity, both during the trial and in many cases long after the trial. And we saw this as especially true for vaccine trials, but also for oncology or other therapeutic areas. In covid, the vaccine companies put together their evidence packages in record time, and it was an incredible effort. And now I think what’s happening is the FDA’s navigating a tricky balance where they want to promote the innovation that we were talking about, the advancements of new therapies to patients. They’ve built in vehicles to expedite therapies such as accelerated approvals, but we need confirmatory trials or long-term follow up to really understand the evidence and to understand the safety and the efficacy of these drugs. And that’s why that concept that we’re talking about today is so important, is how do we do this more expeditiously?

Laurel: It’s certainly important when you’re talking about something that is life-saving innovations, but as you mentioned earlier, with the coming together of both the rapid pace of technology innovation as well as the data being generated and reviewed, we’re at a special inflection point here. So, how has data and evidence generation evolved in the last couple years, and then how different would this ability to create a vaccine and all the evidence packets now be possible five or 10 years ago?

Arnaub: It’s important to set the distinction here between clinical trial data and what’s called real-world data. The randomized controlled trial is, and has remained, the gold standard for evidence generation and submission. And we know within clinical trials, we have a really tightly controlled set of parameters and a focus on a subset of patients. And there’s a lot of specificity and granularity in what’s being captured. There’s a regular interval of assessment, but we also know the trial environment is not necessarily representative of how patients end up performing in the real world. And that term, “real world,” is kind of a wild west of a bunch of different things. It’s claims data or billing records from insurance companies. It’s electronic medical records that emerge out of providers and hospital systems and labs, and even increasingly new forms of data that you might see from devices or even patient-reported data. And RWD, or real-world data, is a large and diverse set of different sources that can capture patient performance as patients go in and out of different healthcare systems and environments.

Ten years ago, when I was first working in this space, the term “real-world data” didn’t even exist. It was like a swear word, and it was basically one that was created in recent years by the pharmaceutical and the regulatory sectors. So, I think what we’re seeing now, the other important piece or dimension is that the regulatory agencies, through very important pieces of legislation like the 21st Century Cures Act, have jump-started and propelled how real-world data can be used and incorporated to augment our understanding of treatments and of disease. So, there’s a lot of momentum here. Real-world data is used in 85%, 90% of FDA-approved new drug applications. So, this is a world we have to navigate.

How do we keep the rigor of the clinical trial and tell the entire story, and then how do we bring in the real-world data to kind of complete that picture? It’s a problem we’ve been focusing on for the last two years, and we’ve even built a solution around this during covid called Medidata Link that actually ties together patient-level data in the clinical trial to all the non-trial data that exists in the world for the individual patient. And as you can imagine, the reason this made a lot of sense during covid, and we actually started this with a covid vaccine manufacturer, was so that we could study long-term outcomes, so that we could tie together that trial data to what we’re seeing post-trial. And does the vaccine make sense over the long term? Is it safe? Is it efficacious? And this is, I think, something that’s going to emerge and has been a big part of our evolution over the last couple years in terms of how we collect data.

Laurel: That collecting data story is certainly part of maybe the challenges in generating this high-quality evidence. What are some other gaps in the industry that you have seen?

Arnaub: I think the elephant in the room for development in the pharmaceutical industry is that despite all the data and all of the advances in analytics, the probability of technical success, or regulatory success as it’s called for drugs, moving forward is still really low. The overall likelihood of approval from phase one consistently sits under 10% for a number of different therapeutic areas. It’s sub 5% in cardiovascular, it’s a little bit over 5% in oncology and neurology, and I think what underlies these failures is a lack of data to demonstrate efficacy. It’s where a lot of companies submit or include what the regulatory bodies call a flawed study design, an inappropriate statistical endpoint, or in many cases, trials are underpowered, meaning the sample size was too small to reject the null hypothesis. So what that means is you’re grappling with a number of key decisions if you look at just the trial itself and some of the gaps where data should be more involved and more influential in decision making.

So, when you’re designing a trial, you’re evaluating, “What are my primary and my secondary endpoints? What inclusion or exclusion criteria do I select? What’s my comparator? What’s my use of a biomarker? And then how do I understand outcomes? How do I understand the mechanism of action?” It’s a myriad of different choices and a permutation of different decisions that have to be made in parallel, all of this data and information coming from the real world; we talked about the momentum in how valuable an electronic health record could be. But the gap here, the problem is, how is the data collected? How do you verify where it came from? Can it be trusted?

So, while volume is good, the gaps compound, and there’s a significant chance of bias in a variety of different areas. Selection bias, meaning there are differences in the types of patients who you select for treatment. There’s performance bias, detection bias, a number of issues with the data itself. So, I think what we’re trying to navigate here is how can you do this in a robust way where you’re putting these data sets together, addressing some of those key issues around drug failure that I was referencing earlier? Our personal approach has been using a curated historical clinical trial data set that sits on our platform and using that to contextualize what we’re seeing in the real world and to better understand how patients are responding to therapy. And that should, in theory, and it’s what we’ve seen with our work, help clinical development teams use data in a novel way to design a trial protocol, or to improve some of the statistical analysis work that they do.



Power beaming comes of age




The global need for power to provide ubiquitous connectivity through 5G, 6G, and smart infrastructure is rising. This report explains the prospects of power beaming; its economic, human, and environmental implications; and the challenges of making the technology reliable, effective, wide-ranging, and secure.

The following are the report’s key findings:

Lasers and microwaves offer distinct approaches to power beaming, each with benefits and drawbacks. While microwave-based power beaming has a more established track record thanks to lower cost of equipment, laser-based approaches are showing promise, backed by an increasing flurry of successful trials and pilots. Laser-based beaming has high-impact prospects for powering equipment in remote sites, the low-earth orbit economy, electric transportation, and underwater applications. Lasers’ chief advantage is the narrow concentration of beams, which enables smaller transmission and receiver installations. On the other hand, their disadvantage is the disturbance caused by atmospheric conditions and human interruption, although there are ongoing efforts to tackle these deficits.

Power beaming could quicken energy decarbonization, boost internet connectivity, and enable post-disaster response. Climate change is spurring investment in power beaming, which can support more radical approaches to energy transition. Due to solar energy’s continuous availability, beaming it directly from space to Earth offers superior conversion compared to land-based solar panels when averaged over time. Electric transportation—from trains to planes or drones—benefits from power beaming by avoiding the disruption and costs caused by cabling, wiring, or recharge landings.

Beaming could also transfer power from remote renewables sites such as offshore wind farms. Other areas where power beaming could revolutionize energy solutions include refueling space missions and satellites, 5G provision, and post-disaster humanitarian response in remote regions or areas where networks have collapsed due to extreme weather events, whose frequency will be increased by climate change. In the short term, as efficiencies continue to improve, power beaming has the capacity to reduce the number of wasted batteries, especially in low-power, across-the-room applications.

Public engagement and education are crucial to support the uptake of power beaming. Lasers and microwaves may conjure images of death rays and unanticipated health risks. Public backlash against 5G shows the importance of education and information about the safety of new, “invisible” technologies. Based on decades of research, power beaming via both microwaves and lasers has been shown to be safe. The public is comfortable living amidst invisible forces like wi-fi and wireless data transfer; power beaming is simply the newest chapter.

Commercial investment in power beaming remains muted due to a combination of historical skepticism and uncertain time horizons. While private investment in futuristic sectors like nuclear fusion energy and satellites booms, the power-beaming sector has received relatively little investment and venture capital relative to the scale of the opportunity. Experts believe this is partly a “first-mover” problem as capital allocators await signs of momentum. It may be a hangover of past decisions to abandon beaming due to high costs and impracticality, even though such reticence was based on earlier technologies that have now been surpassed. Power beaming also tends to fall between two R&D comfort zones for large corporations: it does not deliver short-term financial gain, but it is also not long term enough to justify a steady financing stream.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.



The porcelain challenge didn’t need to be real to get views




“I’ve dabbled in the past with trying to make fake news that is transparent about being fake but spreads nonetheless,” Durfee said. (He once, with a surprising amount of success, got a false rumor started that longtime YouTuber Hank Green had been arrested as a teenager for trying to steal a lemur from a zoo.)

On Sunday, Durfee and his friends watched as #PorcelainChallenge gained traction, and they celebrated when it generated its first media headline (“TikTok’s porcelain challenge is not real but it’s not something to joke about either”). A steady parade of other headlines, some more credulous than others, followed. 

But reflex-dependent viral content has a short life span. When Durfee and I chatted three days after he posted his first video about the porcelain challenge, he already could tell that it wasn’t going to catch as widely as he’d hoped. RIP. 

Nevertheless, viral moments can be reanimated with just the slightest touch of attention, becoming an undead trend ambling through Facebook news feeds and panicked parent groups. Stripping away their original context can only make them more powerful. And dubious claims about viral teen challenges are often these sorts of zombies—sometimes giving them a second life that’s much bigger (and arguably more dangerous) than the first.

For every “cinnamon challenge” (a real early-2010s viral challenge that made the YouTube rounds and put participants at risk for some nasty health complications), there are even more dumb ideas on the internet that do not trend until someone with a large audience of parents freaks out about them. 

Just a couple of weeks ago, for instance, the US Food and Drug Administration issued a warning about boiling chicken in NyQuil, prompting a panic over a craze that would endanger Gen Z lives in the name of views. Instead, as BuzzFeed News reported, the warning itself was the most viral thing about NyQuil chicken, spiking interest in a “trend” that was not trending.

And in 2018, there was the “condom challenge,” which gained widespread media coverage as the latest life-threatening thing teens were doing online for attention—“uncovered” because a local news station sat in on a presentation at a Texas school on the dangers teens face. In reality, the condom challenge had a few minor blips of interest online in 2007 and 2013, but videos of people actually trying to snort a condom up their nose were sparse. In each case, the fear of teens flocking en masse to take part in a dangerous challenge did more to amplify it to a much larger audience than the challenge was able to do on its own. 

The porcelain challenge has all the elements of future zombie content. Its catchy name stands out like a bite on the arm. The posts and videos seeded across social media by Durfee’s followers—and the secondary audience coming across the work of those Durfee deputized—are plausible and context-free. 


Copyright © 2021 Seminole Press.