When it comes to hiring, it’s increasingly becoming an AI’s world—we’re just working in it. In this, the final episode of Season 2 of our AI podcast “In Machines We Trust” and the conclusion of our series on AI and hiring, we take a look at how AI-based systems are increasingly playing gatekeeper in the hiring process—screening out applicants by the millions, based on little more than what they see in your résumé. But we aren’t powerless against the machines. In fact, an increasing number of people and services are designed to help you play by—and in some cases bend—their rules to give you an edge.
We Talked To:
- Jamaal Eggleston, Work Readiness Instructor, The HOPE Program
- Students and Teachers from The HOPE Program in Brooklyn, NY
- Jonathan Kestenbaum, Co-founder & Managing Director of Talent Tech Labs
- Josh Bersin, Global Industry Analyst
- Brian Kropp, Vice President Research, Gartner
- Ian Siegel, CEO, ZipRecruiter
- Sami Mäkeläinen, Head of Strategic Foresight, Telstra
- Salil Pande, CEO, VMock
- Kiran Pande, Co-Founder, VMock
- Gracy Sarkissian, Interim Executive Director, Wasserman Center for Career Development, New York University
Synthetic Jennifer: Hey everyone! This is NOT Jennifer Strong.
It’s actually a deepfake version of her voice.
To wrap up our hiring series, the two of us took turns doing the same job interview, because she was curious if the automated interviewer would notice. And, how it would grade each of us.
[beat / music]
So, human Jennifer beat me as a better match for the job posting, but just by a little bit.
This deepfake? It got better personality scores. Because, according to this hiring software, this fake voice is more spontaneous.
It also got ranked as more innovative and strategic, while Jennifer is more passionate, and she’s better at working with others.
[Beat/ Music transition]
Jennifer: Artificial intelligence is increasingly used in the hiring process.
(And this is the real Jennifer. Just, by the way.)
And these days algorithms decide whether a resume gets seen by a human, gauge personalities based on how people talk or play video games, and might even interview you.
In a world where you no longer prepare for those interviews by putting your best foot forward—what does it mean to present your best digital self?
Sot: Youtube clips montage: Vlogger 1: Want to know three easy hacks to significantly improve your performance on video interviews like HireVue, Spark Hire, or VidCruiter? Vlogger 2: Please do make sure you watch this from beginning to end, because I want to help you to pass your interview. Vlogger 3: And if you understand the key concepts, you can beat that algorithm and get the job. So let’s get started.
Jennifer: We look at just how far job seekers are willing to go to beat these tools.
Gracy Sarkissian: So there are all sorts of crazy stories about what students have done in the past to get their resume past the applicant tracking system. But what we do is we make sure that students know what to expect and are prepared to be successful.
Jennifer: That success is measured by algorithms across a whole host of variables, from automated resume screeners attempting to predict an applicant’s job performance, to one-way video interviews, where everything from a candidate’s word choice to their facial expressions might be analyzed.
Ian Siegel: Literally this is one of those instances where conventional wisdom will kill you in your search for a job. And it’s such a shame because I think even many of the experts don’t realize how the industry is actually working today.
Jennifer: You can’t dress to impress an algorithm. So, what does it look like to game an automated system?
Sami Makelainen: What if you just had the AI interview an AI, could that be done? Could it be done now? Could it be done in the future? I mean—it’s fairly clear that in the not too distant future, you will have this kind of a much more common ability to develop artificial entities that look pretty much exactly like humans and act very much like humans. Or could we use one of these things to do the interviews for us?
Jennifer: And in the absence of meaningful rules and regulation, where do we draw the line?
I’m Jennifer Strong, and in this final episode of a four-part series on AI and hiring we explore how we’re adapting to the automated process of finding a job.
Anonymous Jobseeker: These AIs or artificial intelligent robots are reading resumes through a parser. So if your resume is not up to par, it won’t go through to the next steps.
Jennifer: That’s the job seeker we’ve followed throughout this series. She asked us to call her Sally but that’s not her real name. She’s critiquing the hiring practices of potential employers… and she fears it could impact her career.
In a previous episode, she told us how she applied for close to 150 jobs before landing one and how she encountered AI at several points in the process.
Like Sally, many job seekers first encounter AI during a job search in the form of a resume parser, or screener. It sorts resumes and chooses which ones get passed along to the next stage of the hiring process.
She suspected her resume wasn’t getting through.
And she did some further research, after she got her hands on some of this technology.
Anonymous Jobseeker: So right now, when I put my resume through, it reads me as a software engineer, with a hint of data analysis, which is my field. So that’s fine.
Jennifer: A friend of hers is also working on this problem. He’s testing a different tool that puts a percentage match on how qualified it judges each resume to be for a given job.
Anonymous Jobseeker: He has another parser where it gives you your percentage. So he’s been asking other people who are data scientists and already far in the field for their resume and theirs go through 80% to 90%.
Jennifer: They’re even testing templates they find online, just to see what happens and if that formatting helps.
But so far, when they fill out those templates they’ve all received a low match score—under 40-percent qualified.
Anonymous Jobseeker: If you just Google resume templates, if you need help with your resume, we tested those whatever popped up. And we realized the templates aren’t good. So, when you put the templates inside the parser, no matter what job you are, you’re still at that 40 or under 40. So, there’s a problem with the machine reading it.
Jennifer: Sally is a programmer. She knows how to go about finding and testing this type of software. But, most of us don’t. We’re unlikely to know if these algorithms are reading our resume in the way we intended, and extracting the ‘right’ skills.
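To picture why a resume might score "under 40 percent," it helps to know that many parsers ultimately compare the terms they extract from a resume against the terms in a job description. The Python sketch below is purely illustrative: the tokenizer, the sample texts, and the scoring rule are invented for demonstration and are far simpler than anything a commercial parser does, but the failure mode is similar. If the wording doesn't overlap, the score drops.

```python
import re

def extract_terms(text):
    """Lowercase a document and pull out its unique word tokens."""
    return set(re.findall(r"[a-z+#]+", text.lower()))

def match_score(resume, job_description):
    """Score a resume as the fraction of job-description terms it contains.

    A deliberately naive stand-in for commercial parsers: real systems
    weight terms, parse sections, and infer skills, but they still
    depend on recognizing the words on the page.
    """
    job_terms = extract_terms(job_description)
    resume_terms = extract_terms(resume)
    if not job_terms:
        return 0.0
    return len(job_terms & resume_terms) / len(job_terms)

# Hypothetical job posting and two hypothetical resumes:
job = "python sql data analysis machine learning pipelines"
resume_a = "Built machine learning pipelines in Python; SQL data analysis."
resume_b = "Crafted elegant analytical solutions for business stakeholders."

print(match_score(resume_a, job))  # high: the job's terms all appear
print(match_score(resume_b, job))  # low: same skills, different wording
```

Resume B might describe an equally qualified candidate, but because it paraphrases instead of echoing the posting's keywords, a matcher like this never sees the overlap.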
Anonymous Jobseeker: If you fill out a job application online and it says convert resume. And if, once you convert your resume, if the boxes aren’t filled in to what your resume is stating, then you know, your percentage is low. And that makes a lot of sense because when I was applying to like Goldman Sachs or Capital One, like bank industries and stuff when I pick, take the, um, information from my resume, it was never correct. And I always had to fill in the rest of the stuff to match with my resume.
Jennifer: She says when she made this discovery, it finally clicked.
And she wishes she understood how this worked before she started applying for jobs, because it would have helped with her imposter syndrome.
Anonymous Jobseeker: So everybody that doesn’t know about this doesn’t have a chance, ‘cause they don’t even know.
Jennifer: Over the course of this reporting we found a number of different groups trying to get under the hood of these systems, whether to help themselves or others adapt to and engage with these tools.
And, we visited a workforce readiness program in New York City called The Hope Program. Many of its participants have dealt with homelessness, substance abuse and long-term unemployment.
Jamaal Eggleston: You see all the hoops, these students have to jump through just to land the job, where I hate to say another segment of the population might not have to go through as many hoops. So, I think it’s up to us to put on our armor and to combat it, because these are good people we’re talking about here. So it’s really become my life’s journey to help them. And we have to fight back. Too many good people were being left to the wayside.
Jennifer: Jamaal Eggleston is known to his students as Mr. E. And, he says they’re struggling with the growing use of personality testing and other forms of automation in hiring.
Jamaal Eggleston: They come back frustrated. There’s a really big issue of not hearing back at all. It’s almost as if you do an application and your application goes into the matrix and it’s gone forever. Or you will get the automatic reply which is not very personable, and it gives no information.
Jennifer: To him, it represents an uphill battle for students already at a disadvantage.
Jamaal Eggleston: When it comes to their personality tests, they feel as if they’re being tricked, because it’ll be the same question, but phrased three different types of ways. It’s coming from creators, who do not share a cultural background at all with some of the applicants.
Jennifer: So, he says he downloads examples of these personality tests, analyzes them, then uses what he finds to help train his students.
Jamaal Eggleston: So I’ll give them the three different phrasings of that question. So they’ll know what to look out for. If you’ve ever been in this situation, how would you handle it? And they know instantly that I taught them once a question is phrased that way. It’s going to be a behavioral question. So it’s something that they should look out for in a personality test and to take their time.
Jennifer: And they take these tests as part of their job training. Their results are projected onto a whiteboard during class and discussed as a group.
Jamaal Eggleston: If these companies only knew, you know, all the great people that they excluded because of these practices. And they would have been a great breath of fresh air. They would have been hard capable workers, but because of these biases, whether it’s from the person who programmed the algorithms, or the algorithms themselves, that excluded these people, if they only knew, they would be kicking themselves, you know, wow, okay the person doesn’t have the same color skin as mine. They might talk with a different dialect or accent, but you know what, they came here and they worked their tail off.
Ian Siegel: If there are job seekers out there in the world who love searching for work—I have never met them. And if there are employers who feel like they are experts at recruiting—I have also never met them. Neither side is trained in the activity that they are engaging in.
Ian Siegel: My name is Ian Siegel. I am the CEO and co-founder of ZipRecruiter.
Jennifer: It’s an AI powered marketplace where companies post jobs and people look for work.
Ian Siegel: Millions of businesses post jobs on our site every month. And tens of millions of job seekers look for work on our site every month. And we used AI to play the role of active matchmaker between them.
Jennifer: When we spoke to him at the start of this series, he told us the vast majority of resumes are now screened by a machine first, before a human enters the process.
And he believes anyone using traditional advice to create a resume is at risk of not making it through to the next round of the hiring process, because the audience for resumes is now algorithms.
Ian Siegel: All that advice you got about how to write a resume, is wrong. It’s no longer write something that stands out, use a beautiful design printed on vellum, use extraordinary prose to try to dress up your accomplishments, forget all that. You want to write like a caveman in the shortest, crispest words you can. You want to be declarative and quantitative, because software is trying to figure out who you are to decide whether you will be put in front of a human. And that’s the majority of jobs in America right now today.
Jennifer: Like others, he found problems with these tools that extract information from resumes.
So, the company built its own.
And he has some advice on getting a resume through.
Ian Siegel: Be explicit, and then if you have a skill, declare it. Ideally declare how you learned it. So I learned the skill by going through this certification process, here is my certification or my license number to validate that I have this skill. Because there are multiple industries, like if you’re a nurse, as long as you have a nursing license, you’re hired. There’s a desperate need for more nurses in America right now. If you’re a truck driver, if you have a truck driver’s license number you’re hired. So like your whole resume could be that one piece of information, ‘cause the rest really doesn’t matter to the employer. So, just make sure that you list all your skills as concretely and with as much evidence to support your expertise as possible.
Jennifer: And longer term, he sees a new way of recruiting becoming the norm.
Ian Siegel: There is a sensible way for this to all work, and that is the employer should go first. The employer should look at active job seekers in market, and pick the ones that they would like to see apply. Invite them to apply or directly recruit them. That’s a great experience. Job seekers hate applying to jobs, but guess what? They love getting recruited, and who wouldn’t? It’s literally like getting picked up at a bar. It’s being told you’re desirable and special. It just makes sense and puts everybody in the right headspace. Then the employer is winning because by recruiting, they’re going first, they’re expressing interest, which means they’re increasing the odds that they are going to get a positive response, because that person’s going to be so flattered by the fact that the employer went first. So it’s just a better, more efficient way for this to work.
Jennifer: As part of this investigation we’ve been learning about a bunch of tools meant to help job seekers maximize their chances of success.
Hilke Schellmann is a reporting partner on this series. She’s also a professor of journalism who reports on this topic.
So, Hilke, what did you find about the tricks people are using to try and get an edge?
Hilke: So, one of the things I found is a whole niche industry of folks sharing ‘assessment secrets’ with one another online.
Sot: Youtube clips montage2: Speaker 1: In this video today, we’re going to be talking about how you can pass your psychometric test, first time round. Speaker 2: Look into the camera, not look at the screen. Speaker 3: Be expressive when you talk and change your voice tone when you speak, remember the AI will look for inconsistencies in what you say and how you behave. Speaker 2: And you then reveal the results of your actions and the results should always be positive. So whenever you get asked a question that says, tell me about a time when you. Or describe a situation you were in. See, it’s a behavioral type interview question and you have to give a specific situation.
Hilke: So, there are also the usual Quora discussions and subreddits talking about the questions job seekers have encountered in video interviews, or how to beat these games. And then, there are some hiring vendors that offer candidates a chance to do AI mock interviews, before the big day.
Jennifer: Candidates can practice alone in a room, talking into the camera and trying to convince someone, or a machine, that they’re the best candidate for the job?
Hilke: Yeah. Job seekers can also see their personality profiles. But there is a limit to how helpful this is, since most candidates won’t know what questions they will be asked. For example, I found one company that listed the seven-stage hiring process at Amazon and very clearly explained what candidates had to do. That company has also built AI games similar to what job seekers are being asked to play in the real world. So, the job seekers can train on those games ahead of time (for a fee, of course).
Jennifer: And you looked into a lot of companies that do this, did you find anything interesting?
Hilke: So apparently some job candidates who don’t have all the skills the job description asks for, they put the skills they lack in white on the resume. So it’s invisible to a human, but a computer would recognize the skills. Job seekers hope to get on the yes pile by doing this, and recruiters get frustrated by this.
Jennifer: Alright, might this be a way of leveling the playing field for job applicants who have less power now against AI. Or, is it kind of cheating and giving some applicants an edge over others?
Hilke: Well, some people who practice these assessments do get an edge over others, because they know what to expect now. But it’s not because they have practiced and practiced to work out how to get the high score (like in a video game), because that’s not how these assessments work.
These games are trying to assess your personality and ‘to win’ essentially, the algorithm compares your traits, to the traits of employees who already work at that firm. If you have similar personality traits, you advance to the next round in the hiring process. But the catch is, no one knows what those traits are. So I don’t know if you can call it cheating, when you don’t even really know the rules of the game you are playing.
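That comparison step can be pictured as a similarity computation: represent a candidate and the firm's incumbent employees as vectors of trait scores, and advance candidates whose vectors sit close to the benchmark. Everything in this Python sketch is hypothetical; the trait names, numbers, and cutoff are invented, and vendors do not disclose how they actually score.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical trait scores on a 0-1 scale:
# (spontaneity, innovation, consistency, sociability)
benchmark = [0.7, 0.8, 0.5, 0.6]   # average of current employees
candidate = [0.6, 0.9, 0.4, 0.7]

# A vendor might advance candidates above some cutoff. The catch, as
# Hilke notes, is that applicants never see the benchmark or the cutoff.
ADVANCE_THRESHOLD = 0.9
score = cosine_similarity(candidate, benchmark)
print(score >= ADVANCE_THRESHOLD)
```

The point of the sketch is the information asymmetry: a candidate can see their own answers, but the benchmark vector and the threshold live inside the black box, so there is nothing concrete to "practice" toward.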
Jennifer: And we don’t know exactly how AI scores job seekers, so, the people giving this advice, they might not know either.
Hilke: Yeah, and if that advice is inaccurate, it might even backfire for job seekers. But, I understand the anxiety people have around these new tools and their desire to understand how this works. And obviously that bit of practice might calm them on the big day…
Jennifer: But like any other cat and mouse game, it’s only a matter of time before people use automation to fight back against this automation.
Hilke: That’s exactly what I was thinking.
Jennifer: So you tested this out in a video interview, using just plain text to speech software to respond to the questions asked.
Hilke: Yeah, I used a deepfake computer generated audio file to see if I could trick the interview software into believing that the deepfake is a human.
[SOT: Hilke speaking]: And so the first question is, please introduce yourself. Please introduce yourself, deepfake.
Computer-generated audio: My name is Hilke Schellmann. I am an Emmy award-winning reporter and journalism professor at New York University. I have been a journalist for over a decade.
Jennifer: Ok and the deep fake voice doesn’t have a face, so there’s no video here, and the system still gives it a score.
Hilke: Yeah. The deepfake scored a 79% match score with the job. That’s actually pretty high. It also got a personality analysis, which told me that the deepfake is very innovative and not very consistent. It’s pretty social and not very reserved.
Hilke: Yeah and the weirdest part was that I then tested it again, this time reading the same text with my actual voice.
Jennifer: And, what happened?
Hilke: Ahh, well. The computer-generated voice actually scored higher than me reading the same text!
Jennifer: Wow. Sounds like you might want to consider taking your audio avatar on the road.
Hilke: I guess so.
Jennifer: But we aren’t the only ones with this idea.
Sami Mäkeläinen: What if you just had the AI interview an AI?
Jennifer: Sami Mäkeläinen is an executive at Telstra, which is an Australian telecom company.
Sami Mäkeläinen: Could that be done? Could it be done now? Could it be done in the future? I mean it’s fairly clear that in the not too distant future, you will have this kind of a much more common ability to develop artificial entities that look pretty much exactly like humans, and act very much like humans. I thought that, well, could we use one of these things to do the interviews for us?
Jennifer: He has a background in software engineering and his job is to study the implications of future tech trends.
Just out of curiosity, he and a few colleagues decided to test whether AI interviewers would recognize the difference between interviewing a human or another machine.
So they took a well-known AI interview system which uses video (he didn’t want to reveal which), and he paired it up with an avatar.
Sami Mäkeläinen: We just had an AI interview system. And we deployed an AI digital human, digital avatar, digital twin, (if you want to call it that), to sort of act as the mouthpiece for the human being interviewed. So you know the words that the avatar spoke came from humans, it was not a language model, or AI behind that part.
Jennifer: In other words, they wrote a script and it was performed by a deepfake.
So, a fake voice on a fake video answered the questions posed by an AI interviewer.
And after about a dozen tests, how did this AI job candidate do?
Sami Mäkeläinen: Well, did it flunk the interview? No, it didn’t. It was fine from the AI interviewer perspective. It was as if it was interviewing anybody else.
Jennifer: They tested the same words, two ways. One spoken by a human, and one spoken by the avatar. And he says the outcome was similar for both.
And, he has thoughts on what might happen next.
Sami Mäkeläinen: So say a few years from now, you’ll be able to have a very realistic looking digital twin of yourself, audio visual representation of you essentially. You can imagine a whole range of use cases for that. You could have it sit in, you know, a boring, large meeting for you that could uh and umm at the right intervals. You could use it in, you know, virtual gaming or gaming and virtual presence kind of an environment. Or you could use it for taking interviews for you.
Jennifer: Though he’s not aware of others testing this technology with digital humans just yet. And, if Hollywood movies can’t easily pull this off, he feels like there’s little danger the rest of us are going to be deploying avatars to do our bidding any time soon.
But the fact the hiring tool couldn’t recognize it was interviewing a machine is a problem. And it means the software still has a way to go.
Sami Mäkeläinen: So I suppose, ideally when you have a system that ostensibly is interviewing a human, you would kind of want to make sure that it’s the human that you think you’re interviewing at the other end. Otherwise you would just hire a friend to do the AI interview for you, and it’d probably be far more convincing than an AI would be currently. There’s a whole range of things that these systems could do to verify that, you know, they are talking to who they think they are talking, but how exactly that will be developed is again, something that is to be determined.
Jennifer: He says they don’t have any plans to test further, but if they did, he has thoughts about what they might try.
Sami Mäkeläinen: We didn’t dig deeper into can we possibly tweak the scores by optimizing facial expressions, or tone of voice or, you know, emotion or things like that? That’s not something that we delved into. It was just a very simple, kind of a proof of concept.
Jennifer: And he thinks we also have to remember some of this isn’t new.
Sami Mäkeläinen: We’ve sort of been gaming the interviews forever. Like when you have a human interview, you have even courses on how to behave there, what to say, what to do, what to wear. We will increasingly be utilizing, ‘quote unquote’ intelligent agents to do our bidding for us.
Jennifer: But he says it’s important to realize hiring was never perfect to begin with.
Sami Mäkeläinen: It’s easy to sort of start blaming the AI and the use of AI for many of these situations. And in many cases it’s warranted, right? I don’t think anybody can say that it was a perfect process to begin with and, you know, then we come to like, how do we deploy these systems? How do we use them, how much responsibility do we give to them? The devil is always in the details. So on one level, I would want to completely agree that the cost of getting hiring wrong is too high. But on the other hand, we’ve essentially gotten it wrong as a society for decades.
Jennifer: In a moment, we look at some of what’s being done at the university level, to help students get ready to engage with these systems, when we come back.
Jennifer: This new era in hiring can feel a little overwhelming for people looking for a job, who don’t always know how and when they’re being tested, or what exactly they’re being tested for.
People are looking for ways to better prepare to engage with these AI systems, and it’s moved beyond individual curiosity and grassroots organizing. AI companies are also in this space, providing tools and training for job seekers.
One of them is a company called VMock, which has business deals with hundreds of colleges and universities. Its AI-based software revises resumes so they can be more easily read by machines, and gives feedback on video interviews.
Salil Pande: And in that first glance, if you actually went to the no pile, then the story is over. You might be the smartest kid that is coming out of your undergraduate program. You’re gone, you’re not going to get the second chance. The world has moved on to a very fast cycle, and it’s a blip and you’re either a yes or a no.
Jennifer: Salil Pande is one of the company’s founders.
He says even just a few years ago, every step in the hiring process was done by a human. That’s no longer the case, especially at companies that hire a lot of recent college graduates and people with less professional experience, whose thin work histories make it harder for hiring managers to know who is the best person for the job.
Salil Pande: Eventually when there is a high probability of success, that’s when human to human time interaction is happening, which means that early part, which was the rejection part has already been given to technology that, Hey, technology filter me the right resume, filter me the right, uh, LinkedIn profile, filter me the good pitches and also do some psychometric tests and everything put it all together for me. And then once all of this is done go schedule an interview for me, and that’s when I’m going to go, boom, one hour interview, I’m done.
Jennifer: VMock’s mission is to prepare students for a hiring field where their resumes and video interviews have to appeal to AI first.
Salil Pande: If you have not optimized your resume for that job description, the applicant tracking system that actually is kind of like working around that job description may not filter you into the yes pile. You may be in the no pile or a maybe pile. So, you have to think about how you’re going to just go through this early process where you’re going to deal with applicant tracking system. You’re going to deal with ah artificial intelligence system that is going to recognize your, your interviews, and everything else. What’s a good pitch? How do you highlight your top skills? What skills recruiters are looking for? What skills do you currently have? How do you present your skills when you don’t have the skill, but you have something else that could be taken as an example of that other skill, and you can actually present.
Jennifer: Pande says that career centers at universities are outmatched by the technology now employed by many large companies. That’s where he says VMock’s AI can help students beat the AI they’re encountering when they look for their first job.
And one school using it is New York University.
Gracy Sarkissian: So students are encountering these systems earlier and earlier on. And I would say, you know, career centers are trying to keep up with these changes so that we can prepare our students more effectively when they don’t know what to expect. I think it’s this big unknown to students. And so our job is to demystify it a little bit.
Jennifer: Gracy Sarkissian leads the Career Center at NYU.
She says she brought in VMock to make the time career coaches have with students more efficient.
Gracy Sarkissian: And once you integrate that feedback, you’ll see the score go up. So it just gives students some practice at not only getting feedback, but also seeing how a system might react or respond to their resume.
Jennifer: And she has some advice for job seekers trying to impress both AI and humans.
Gracy Sarkissian: Some students tell me, you know, I did what you guys told me to do. I made sure that my resume was filled with keywords. And now it sounds like, kind of like a cheesy marketing document. And so what I say, I understand, I hear you. Have two versions of your resume. Have the one that you’re going to apply to when you go through systems and have one that you are going to hand to someone, if you meet with someone and you want to impress them. And so that has helped students kind of say, okay, I get it. This is something that I have to do so that my resume gets picked up.
Jennifer: Her team also prepares students for one way video interviews.
Gracy Sarkissian: We don’t realize how much input we get when we’re having a one-on-one conversation with someone, or you’re, even if it’s a group or panel interview. You are looking at people in the eye, you are getting positive feedback. You might get negative feedback that might make you adjust your question. If you were nervous, there’s a good chance that you’ll feel a little empathy from someone in the room. Whereas when we’re interviewing with AI, it feels like a stranger, right? It feels like a stranger without a face. It’s a blank screen. And oftentimes you’re staring at yourself and so it can be a lonely process I think, um, for some of our students.
Jennifer: It’s one of the reasons why she believes, in a tight labor market, employers might want to rethink some of these strategies, especially if they want to attract top talent.
Gracy Sarkissian: You know, we know Gen-Z students are, are a values driven generation, right? They want to make sure that they can connect with the culture of the organization. That the mission and values of the organization are, are in line with those. And that’s something that’s difficult to assess when you were interviewing in a virtual way. When you’re not meeting people, when you’re not speaking to people at an interview, when you’re not walking through an office and just kind of seeing work happen.
Jennifer: But in a world where millions of companies receive millions of applications, tailoring to individuals isn’t something that scales.
And that lands us back in a position we’ve been in before: black-box decision-making, applied to everyone, leading to unintended consequences.
As we wrap up the second season of this podcast—and our four-part investigation of how AI is being used to make hiring decisions—we see the promise of using algorithms. But the reporting makes clear this is an emerging industry with many moving parts, and at least a few tools that just aren’t there yet. And in some cases, might actually do the opposite of what they intend.
We’ve seen systems with bias against women, and people with disabilities, even a tool that predicts people named Jared will be successful on the job. Other tools rated candidates highly on their English language skills, though the recordings didn’t contain one word of English. We also uploaded recordings that had nothing to do with the interview questions asked, but were rated as a match for the skills required to do the job.
With little oversight, there’s also little transparency about what goes on inside the black box, and why the software makes the decisions it makes. Companies that build these tools aren’t required to tell anyone how their systems work, or why they should be trusted.
The good news? In many ways, we’re still at the beginning. There’s an opportunity to build better systems, if we’re honest about what’s not working, where the machines are coming up short, and if we decide not to value scale, efficiency, or speed above all else.
Jennifer: This miniseries on hiring was reported by Hilke Schellmann and produced by me, Emma Cillekens, Anthony Green, and Karen Hao. We’re edited by Michael Reilly.
That’s it for Season Two. We’re going to take a break and see you back here in the fall.
Thanks so much for listening. I’m Jennifer Strong.
These robots know when to ask for help
A new training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
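The decision logic described above can be sketched in a few lines of Python. This is a simplified illustration, not the actual KnowNo implementation (which uses conformal prediction to calibrate its thresholds); the option strings, scores, and the fixed `threshold` value here are all hypothetical stand-ins for the LLM's real outputs.

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_step(options, scores, threshold=0.4):
    """Return ("act", option) when one choice is clearly best,
    or ("ask", plausible_options) when the instruction is ambiguous.

    `options` are candidate next actions proposed by the language
    model; `scores` are hypothetical raw model scores for each one.
    """
    probs = softmax(scores)
    # Keep every option whose probability clears the threshold.
    plausible = [o for o, p in zip(options, probs) if p >= threshold]
    if len(plausible) == 1:
        # One clear winner: execute it without bothering the human.
        return ("act", plausible[0])
    # Zero or several plausible choices: ask the human to clarify.
    return ("ask", plausible or options)

# A confident case: one option dominates, so the robot just acts.
print(next_step(["put the glass bowl in the microwave",
                 "put the metal bowl in the microwave"], [2.0, 0.0]))

# An ambiguous case: near-equal scores, so the robot asks for help.
print(next_step(["put the glass bowl in the microwave",
                 "put the metal bowl in the microwave"], [1.0, 1.0]))
```

The key design point is that the robot only interrupts the human when its confidence scores fail to single out one action, which is how KnowNo keeps clarification requests to a minimum.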
The Download: inside the first CRISPR treatment, and smarter robots
The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been proved to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must receive regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.