
Podcast: Can AI fix your credit?



Credit scores have been used for decades to assess consumer creditworthiness, but their scope is far greater now that they are powered by algorithms. Not only do they consider vastly more data, in both volume and type, but they increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. In this second of a series on automation and our wallets, we explore just how much the machines that determine our creditworthiness have come to affect far more than our financial lives.

We Meet:

  • Chi Chi Wu, staff attorney at the National Consumer Law Center
  • Michele Gilman, professor of law at the University of Baltimore
  • Mike de Vere, CEO of Zest AI

Credits:

This episode was produced by Jennifer Strong, Karen Hao, Emma Cillekens and Anthony Green. We’re edited by Michael Reilly.

Transcript:

[TECH REVIEW ID] 

Miriam: It was not uncommon to be locked out of our hotel room or to have a key not work and him have to go down to the front desk and handle it. And it was not uncommon to pay a bill at a restaurant and then have the check come back. 

Jennifer: We’re going to call this woman Miriam to protect her privacy. She was 21 when she met the man she would marry… and who… within a few short years… would turn her life… and her financial position… upside down.

Miriam: But he always had a reason and it was always someone else’s fault.

Jennifer: When they first met, Miriam was working two jobs, she was writing budgets on a whiteboard, and she was making a dent in her student debt.

Her credit was clean.

Miriam: He took me out to dinner and he took me on little trips, you know, two or three night vacation deals to the beach or, you know, local stuff. And he always paid for everything and I just thought that was so fun.

Miriam: And then he started asking if he could use my empty credit cards for one of his businesses. And he would charge to the full amount, about 5,000 and then pay it off within, I mean, two or three days every time. And he just called it flipping. That happened for a while. And during that, that just became a normal thing. And so I kind of stopped paying attention to it. 

Jennifer: Until one day…her entire world came crashing down.

Miriam: I had, let’s see a six year old, a two year old and a four year old and it’s Halloween morning and we’re in the dining room getting ready to take her to preschool. And, um, the FBI came and arrested my husband and like, it’s just like the movies, you know, they go through all your stuff and they send a bunch of men with muddy boots and guns into your house. 

Jennifer: A federal judge convicted her husband of committing a quarter million dollars of wire fraud… and Miriam discovered tens of thousands of dollars of debt in her name. 

She was left to pick up the pieces… and the finances.

Miriam: I mean my credit score was below 500 at one point. I mean, it just plummeted and that takes a long time to dig out of, but I have learned that it’s sort of a little by little thing… which I had to educate myself on.  I mean, since this whole debacle here, um, I’ve never missed anything. It’s like… more important to me than most things… is keeping my credit score golden.

Jennifer: She’s a survivor of what’s known as “coerced debt.” It’s a form of economic abuse… usually by a partner or family member.

Miriam: There’s no physical wounds. Right. And there’s, this, isn’t something you can just like call the police on somebody. And, and also it’s not usually a hostile situation. It’s usually pretty, it’s a calm conversation where he works his way in and then gets what he wants.

Jennifer: Economic abuse isn’t new… but like identity theft, it’s become a whole lot easier in a digital world of online forms and automated decisions.

Miriam: I know what an algorithm is. I get that. But like, what do you mean my credit algorithm? 

Jennifer: She got back on her feet… but many don’t… and as algorithms continue to take over our financial credit system…some argue this could get a lot worse.

Gilman: We have a system that makes people who are experiencing hardship out of their control look like deadbeats, which in turn impacts their ability to gain the opportunities necessary to escape poverty and gain economic stability. 

Jennifer: But others argue the right credit-scoring algorithms… could be the gateway to a better future… where biases can be eradicated… and the system made fairer. 

De Vere: So from my perspective, credit equals opportunity. It’s really important as a society that we get that right. We believe there can be a 2.0 version of that, leveraging machine learning. 

Jennifer: I’m Jennifer Strong and in this second of a series on automation and our wallets… we explore just how much the machines that determine our creditworthiness… have come to affect far more than our financial lives. 

[IMWT ID]

Jennifer: It used to be when someone wanted a loan…they formed relationships with people at a bank or credit union who made decisions about how safe, or risky, that investment seemed.

Like this scene from the 1940’s Christmas classic, It’s a Wonderful Life… where the film’s main character decides to loan his own money to customers to keep his business afloat…. after an attempted run on the bank.

George: I got $2,000! Here’s $2,000. This will tide us over until the bank reopens. All right, Tom, how much do you need?

Tom: $242.

George: Oh Tom. Just enough to tide you over until the bank reop—.

Tom: I’ll take $242!

George: There you are. 

Tom: That’ll close my account. 

George: Your account is still here. That’s a loan!

Jennifer: These days banks make loans without ever meeting many of their customers… Often, these decisions are automated… based on data from your credit report… which tracks things like credit card balances, car loans, student debt… and includes a mix of other personal data…   

In the 1950s the industry wanted a way to standardize these reports… so data scientists figured out a way to take that information… run it through a computer model and spit out a number…. 

That’s your credit score… and it’s not just banks who use them to make decisions. Depending on where you live, all sorts of groups refer to this number… including landlords…insurance companies… even, employers.

Wu: Consumers are not the customers for credit bureaus. We are, or our data is the commodity. We’re not the customers, we’re the chicken. We, we’re the thing that gets sold….

Jennifer: Chi Chi Wu is a consumer advocate and attorney at the National Consumer Law Center. 

Wu: And so, as a result, the incentives in this market are kind of messed up. The incentives are to serve the needs of creditors and other users of reports and not consumers.

Jennifer: When it comes to credit reports, there are three keepers of the keys… Equifax, Experian, and TransUnion. 

But these reports are far from comprehensive… and they can be inaccurate. 

Wu: There are unacceptably high levels of errors in credit reports. Um, now the data from the definitive study by the Federal Trade Commission found that, uh, one in five consumers had a verified error on their credit report. And one in 20 or 5% had an error so serious it would cause them to be denied for credit, or they would have to pay more. 

Jennifer: Complaints to the federal government about these reports have exploded in recent years…  and last year during the pandemic? Complaints about errors doubled.

These make up more than half of all complaints filed with the C-F-P-B — or the Consumer Financial Protection Bureau of the U-S government.

But Wu believes even without any errors, the way credit scores are used… is a problem. 

Wu: So the problem is employers… landlords. They start looking at credit reports and credit scores as some sort of reflection of a person’s underlying responsibility, their value as a person, their character. And that’s just completely wrong. What we see is people end up with negative information on their credit report because they’ve struggled financially because something bad has happened to them. So people who’ve lost their jobs, who’ve gotten sick. Um, they can’t pay their bills. And this pandemic is the perfect illustration of that and you can really see this in the racial disparities in credit scoring. The credit scores for Black communities are much lower than for white communities and for Latinx communities, it’s somewhere in between. And has nothing to do with character. It has everything to do with inequality.

Jennifer: And as the industry replaces older credit-scoring methods with machine learning…she worries this could entrench the problem. 

Wu: And if left unchecked, if there is no intentional control for this, if we are not wary of this, the same thing will happen to those algorithms that happened to credit scoring, which will be, they will impede the progress of the historically marginalized communities.

Jennifer: She especially worries about companies that promise their credit-scoring algorithms are fairer because they use alternative data… data that’s supposedly less prone to racial bias…

Wu: Like your cell phone bill, or your rent, um, to the more funky fringy, big data. What’s in your social media feed for the first type of alternative data that is sort of conventional or financial, um, my mantra has been the devil’s in the detail. Some of that data looks promising. Other types of that data can be very risky. So that’s my concern about artificial intelligence and machine learning. Not that we should never use them. You just, you have to use them, right? You have to use them with intentionality. They could be the solution. If they’re told one of your goals is to minimize disparities for marginalized groups. You know your goal is to be as predictive or more predictive with less disparities.

Jennifer: Congress is considering restricting employers’ use of credit reports… and some states have moved to ban them in setting insurance rates… or access to affordable housing.

But awareness is also an issue.

Gilman: There are a lot of credit reporting harms that are impacting people without their knowledge. And if you don’t know that you’ve been harmed, you can’t get assistance or remedies.

Jennifer: Michele Gilman is a clinical law professor at the University of Baltimore…

Gilman: I wasn’t taught about algorithmic decision-making in law school and most law students still aren’t. And they can be very intimidated by the thought of having to challenge an algorithm.

Jennifer: She’s not sure when she first noticed that algorithms were making decisions for her clients. But one case stands out… of an elderly and disabled client whose home health care hours under the Medicaid program were drastically cut… even though the client was getting sicker…

Gilman: And it wasn’t until we were before an administrative law judge in a contested hearing that it became clear the cut in hours was due to an algorithm. And yet the witness for the state who was a nurse, couldn’t explain anything about the algorithm. She just kept repeating over and over that it was internationally and statistically validated, but she couldn’t tell us how it worked, what data was fed into it, what factors it weighed, how the factors were weighed. And so my student attorney looks at me and we’re looking at each other thinking, how do we cross examine an algorithm?

Jennifer: She connected with other lawyers around the country who were experiencing the same thing. And she realized the problem was far bigger …

Gilman: And when it comes to algorithms, they are operating across almost every aspect of our client’s lives.

Jennifer: And credit reporting algorithms are the most pervasive.

Her firm sees victims who get saddled with unexpected debt…sometimes due to hardship…other times from medical bills…or… because of identity theft, where someone else takes loans in your name… 

But the impact is the same…it weighs down credit scores… and even when the debt is cleared, it can have long-term effects.

Gilman: As a good consumer lawyer, we need to know that sometimes just resolving the actual litigation in front of you, isn’t enough. You have to also go out and clean up the ripple effects of these algorithmic systems. A lot of poverty lawyers share the same biases that the general population does in terms of seeing a computer generated outcome and thinking it’s neutral, it’s objective, it’s correct. It’s somehow magic. It’s like a calculator. And none of those assumptions are true, but we need the training and the resources to understand how these systems operate. And then we need as a community to develop better tools so that we can interrogate those systems so that we can challenge these systems.

<music transition> 

Jennifer: After the break… We look at the effort to automate fairness in credit reporting.

[midroll]

De Vere: AI helps in two ways: it’s more data and better math. And so if you think of limitations on current math, you know, they can pull in a couple of dozen variables. And, uh, if I tried to describe to you Jennifer, uh, with two dozen variables, you know, I could probably get to a fairly good description, but imagine if I could pull in more data and I was describing you with 300 to a thousand variables that signal and resolution results in a far more accurate prediction of your credit worthiness as a borrower.

Jennifer: Mike de Vere is the CEO of Zest AI. It’s one of several companies seeking to add transparency to the credit and loan approval process… with software designed to account for some of the current issues with credit scores… including racial, gender, and other potential biases.

To understand how it works…we first need a little context. In the U-S it’s illegal for lenders (other than mortgage lenders) to gather data on race. This was originally meant to prevent discrimination.

But a person’s race has a strong correlation with their name… where they live… where they went to school… and how much they’re paid. That means… even without race data… a machine learning algorithm can learn to discriminate anyway… simply because race is baked into the other variables.

So, lenders try to check for this and weed out the discrimination in their lending models. The only problem? To verify how you’re doing, you kind of need to know the borrowers’ race… and without that… lenders are forced to make an educated guess. 

De Vere: So the accepted approach is an acronym BISG and it basically uses two variables, your zip code and your last name. And so my name is Mike De Vere and the part of California I’m from, with a name like that I would come out as Hispanic or Latin X, but yet I’m Irish.

Jennifer: In other words…the industry standard for how to do this is often flat out wrong. So his company takes a different approach.

De Vere: We believe there can be a 2.0 version of that—leveraging machine learning. 

Jennifer: Rather than predicting race from only two variables… it uses many more… like the person’s first and middle names… and other geographic data – like their census tract… or school board district.

He says in a recent test in Florida, this method outperformed the standard model by 60 percent.

De Vere: Why does that matter? That matters because it’s your yardstick to how you’re doing.
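To make the proxy idea concrete, here is a minimal sketch of how a BISG-style estimate works. BISG (Bayesian Improved Surname Geocoding) combines what a surname suggests about race with what a neighborhood’s demographics suggest. Everything below is illustrative: the probability tables are invented, a real system would pull them from census surname lists and geographic data and would also correct for population base rates, and Zest’s richer proxy follows the same logic with many more variables.

```python
# Illustrative BISG-style race proxy. The tables are made up; a real
# implementation would use census surname statistics and ZIP-code demographics.

P_RACE_GIVEN_SURNAME = {
    "de vere": {"white": 0.55, "black": 0.05, "hispanic": 0.35, "asian": 0.05},
}

P_RACE_GIVEN_ZIP = {
    "90011": {"white": 0.10, "black": 0.15, "hispanic": 0.70, "asian": 0.05},
}

def bisg_proxy(surname: str, zip_code: str) -> dict:
    """Combine the two tables with a naive product and renormalize.
    (The published method also divides by population base rates.)"""
    s = P_RACE_GIVEN_SURNAME[surname.lower()]
    g = P_RACE_GIVEN_ZIP[zip_code]
    joint = {race: s[race] * g[race] for race in s}
    total = sum(joint.values())
    return {race: round(p / total, 3) for race, p in joint.items()}

print(bisg_proxy("de vere", "90011"))
# With these invented numbers the proxy leans heavily Hispanic -- exactly the
# kind of wrong guess de Vere describes for his own name.
```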

Jennifer: Then, he takes an approach called adversarial de-biasing.

The basic idea is this. The company starts with one machine learning model that’s trained to predict how risky a given borrower is.

De Vere: Let’s say it has 300 to 500 data points to assign risk for an individual.

Jennifer: It then has a second machine learning model that tries to guess the race of that borrower… (based on the findings of the first one). 

If that second model can accurately guess the borrower’s race from the first model’s risk scores… he says it means the system is encoding bias… and should be adjusted… by tweaking how much it weighs each of the data points.

De Vere: So those 300 to 500 signals we can tune up or tune down if it becomes a proxy for race. And so what you end up with is not only a performant model that delivers good economics, but at the same time, you have a model that is nearly colorblind in that process. 
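Here is a minimal sketch of that adversarial loop, assuming PyTorch. It is not Zest AI’s implementation: the network sizes, the penalty weight, and the use of a race proxy (such as the BISG-style estimate above) as the adversary’s target are all assumptions made for illustration. The risk model is rewarded for predicting defaults and penalized whenever the adversary can recover the protected attribute from its score.

```python
# Minimal sketch of adversarial de-biasing, assuming PyTorch.
# Not Zest AI's code: layer sizes, penalty weight, and the race-proxy target
# are illustrative choices.
import torch
import torch.nn as nn

n_features = 300   # stand-in for the "300 to 500 signals" mentioned above

risk_model = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_risk = torch.optim.Adam(risk_model.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0          # how heavily to penalize race signal leaking into the score

def train_step(x, default_label, race_proxy):
    """x: borrower features; default_label: 1.0 if the loan went bad;
    race_proxy: estimated protected-group membership (all batched tensors)."""
    # 1) The adversary learns to guess the race proxy from the risk score alone.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(risk_model(x).detach()), race_proxy)
    adv_loss.backward()
    opt_adv.step()

    # 2) The risk model learns to predict defaults while confusing the
    #    adversary, so its score carries as little race signal as possible.
    opt_risk.zero_grad()
    score = risk_model(x)
    task_loss = bce(score, default_label)
    leak_loss = bce(adversary(score), race_proxy)
    (task_loss - lam * leak_loss).backward()
    opt_risk.step()
```

In practice, inspecting which of those signals drive the leak term is one way to decide what to “tune up or tune down,” as de Vere puts it.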

Jennifer: He says it’s led to more inclusive lending practices.

De Vere: We work with one of the largest credit unions in the U-S out of Florida. And so what that means for our credit union is more yeses for more of their members. But what they were really excited about is it was a 26% increase in approval for women. Twenty-five percent increase in approval for members of color.

Jennifer: While it’s encouraging… anyone claiming to have a fix for decades of harm caused by algorithmic decision-making… will have a lot to overcome to win people’s trust.

It’s a task made even harder when the proposed fix to a bad algorithm… is another algorithm.

The Treasury Department recently issued guidance – highlighting the use of AI credit underwriting as a key risk for banking… warning of the costs that come with their opaque nature… and adding a note that, quote, “Bank management… should be able to explain and defend underwriting and modeling decisions.” 

Which… even with the most transparent tools… still feels like a tall order. 

And without modern regulation, it’s also unclear just who monitors these credit-scoring monitors… and who decides whether things like phone data or information from social media are fair play.

Especially while the end results continue to be used for non-credit purposes… like employment or insurance.

 [CREDITS]

This episode was produced by me, Karen Hao, Emma Cillekens and Anthony Green. We’re edited by Michael Reilly.

Thanks for listening, I’m Jennifer Strong. 

[TECH REVIEW ID]

This startup’s AI is smart enough to drive different types of vehicles


Jay Gierak at Ghost, which is based in Mountain View, California, is impressed by Wayve’s demonstrations and agrees with the company’s overall viewpoint. “The robotics approach is not the right way to do this,” says Gierak.

But he’s not sold on Wayve’s total commitment to deep learning. Instead of a single large model, Ghost trains many hundreds of smaller models, each with a specialism. It then hand-codes simple rules that tell the self-driving system which models to use in which situations. (Ghost’s approach is similar to that taken by another AV2.0 firm, Autobrains, based in Israel. But Autobrains uses yet another layer of neural networks to learn the rules.)

According to Volkmar Uhlig, Ghost’s co-founder and CTO, splitting the AI into many smaller pieces, each with specific functions, makes it easier to establish that an autonomous vehicle is safe. “At some point, something will happen,” he says. “And a judge will ask you to point to the code that says: ‘If there’s a person in front of you, you have to brake.’ That piece of code needs to exist.” The code can still be learned, but in a large model like Wayve’s it would be hard to find, says Uhlig.
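As a toy illustration of what such explicit, auditable rules might look like, here is a short sketch: a few specialist models selected by hand-written conditions. The model names, scene fields, and rules are invented; Ghost has not published its code, so this only shows the general shape of the approach.

```python
# Toy sketch of "many small specialist models plus hand-coded rules".
# All names, scene fields, and rules here are invented for illustration.

def lane_keeping_model(scene): ...        # placeholder specialist models
def pedestrian_braking_model(scene): ...
def highway_merge_model(scene): ...

def choose_models(scene: dict) -> list:
    """Hand-written rules decide which specialists handle this situation.
    Each rule is a plain, auditable line of code."""
    active = [lane_keeping_model]              # always keep the lane
    if scene.get("pedestrian_ahead"):
        active.append(pedestrian_braking_model)
    if scene.get("on_ramp"):
        active.append(highway_merge_model)
    return active

scene = {"pedestrian_ahead": True, "on_ramp": False}
print([m.__name__ for m in choose_models(scene)])
# ['lane_keeping_model', 'pedestrian_braking_model']
```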

Still, the two companies are chasing complementary goals: Ghost wants to make consumer vehicles that can drive themselves on freeways; Wayve wants to be the first company to put driverless cars in 100 cities. Wayve is now working with UK grocery giants Asda and Ocado, collecting data from their urban delivery vehicles.

Yet, by many measures, both firms are far behind the market leaders. Cruise and Waymo have racked up hundreds of hours of driving without a human in their cars and already offer robotaxi services to the public in a small number of locations.

“I don’t want to diminish the scale of the challenge ahead of us,” says Hawke. “The AV industry teaches you humility.”

Russia’s battle to convince people to join its war is being waged on Telegram


Just minutes after Putin announced conscription, the administrators of the anti-Kremlin Rospartizan group announced their own “mobilization,” gearing up the group’s supporters to bomb military enlistment offices and the Ministry of Defense with Molotov cocktails. “Ordinary Russians are invited to die for nothing in a foreign land,” they wrote. “Agitate, incite, spread the truth, but do not be the ones who legitimize the Russian government.”

The Rospartizan Telegram group—which has more than 28,000 subscribers—has posted photos and videos purporting to show early action against the military mobilization, including burned-out offices and broken windows at local government buildings. 

Other Telegram channels are offering citizens opportunities for less direct, though far more self-interested, action—namely, how to flee the country even as the government has instituted a nationwide ban on selling plane tickets to men aged 18 to 65. Groups advising Russians on how to escape into neighboring countries sprang up almost as soon as Putin finished talking, and some groups already on the platform adjusted their message. 

One group, which offers advice and tips on how to cross from Russia to Georgia, is rapidly closing in on 100,000 members. The group dates back to at least November 2020, according to previously pinned messages; since then, it has offered information for potential travelers about how to book spots on minibuses crossing the border and how to travel with pets. 

After Putin’s declaration, the channel was co-opted by young men giving supposed firsthand accounts of crossing the border this week. Users are sharing their age, when and where they crossed the border, and what resistance they encountered from border guards, if any. 

For those who haven’t decided to escape Russia, there are still other messages about how to duck army call-ups. Another channel, set up shortly after Putin’s conscription drive, crowdsources information about where police and other authorities in Moscow are signing up men of military age. It gained 52,000 subscribers in just two days, and they are keeping track of photos, videos, and maps showing where people are being handed conscription orders. The group is one of many: another Moscow-based Telegram channel doing the same thing has more than 115,000 subscribers. Half that audience joined in 18 hours overnight on September 22. 

“You will not see many calls or advice on established media on how to avoid mobilization,” says Golovchenko. “You will see this on Telegram.”

The Kremlin is trying hard to gain supremacy on Telegram because of its current position as a rich seam of subterfuge for those opposed to Putin and his regime, Golovchenko adds. “What is at stake is the extent to which Telegram can amplify the idea that war is now part of Russia’s everyday life,” he says. “If Russians begin to realize their neighbors and friends and fathers are being killed en masse, that will be crucial.”

The Download: YouTube’s deadly crafts, and DeepMind’s new chatbot

The YouTube baker fighting back against deadly “craft hacks”


Ann Reardon is probably the last person whose content you’d expect to be banned from YouTube. A former Australian youth worker and a mother of three, she’s been teaching millions of loyal subscribers how to bake since 2011. But the removal email was referring to a video that was not Reardon’s typical sugar-paste fare.

Since 2018, Reardon has used her platform to warn viewers about dangerous new “craft hacks” that are sweeping YouTube, tackling unsafe activities such as poaching eggs in a microwave, bleaching strawberries, and using a Coke can and a flame to pop popcorn.

The most serious is “fractal wood burning”, which involves shooting a high-voltage electrical current across dampened wood to burn a twisting, turning branch-like pattern in its surface. The practice has killed at least 33 people since 2016.

On this occasion, Reardon had been caught up in the inconsistent and messy moderation policies that have long plagued the platform and in doing so, exposed a failing in the system: How can a warning about harmful hacks be deemed dangerous when the hack videos themselves are not? Read the full story.

—Amelia Tait

DeepMind’s new chatbot uses Google searches plus humans to give better answers

The news: The trick to making a good AI-powered chatbot might be to have humans tell it how to behave—and force the model to back up its claims using the internet, according to a new paper by Alphabet-owned AI lab DeepMind. 

How it works: The chatbot, named Sparrow, is trained on DeepMind’s large language model Chinchilla. It’s designed to talk with humans and answer questions, using a live Google search for information to inform those answers. Based on how useful people find those answers, it’s then trained using a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective. Read the full story.

—Melissa Heikkilä

Sign up for MIT Technology Review’s latest newsletters

MIT Technology Review is launching four new newsletters over the next few weeks. They’re all brilliant, engaging and will get you up to speed on the biggest topics, arguments and stories in technology today. Monday is The Algorithm (all about AI), Tuesday is China Report (China tech and policy), Wednesday is The Spark (clean energy and climate), and Thursday is The Checkup (health and biotech).

