Cathy O’Neil discusses the current lack of fairness in artificial intelligence and much more.

[This article was first published on DataCamp Community - r programming, and kindly contributed to R-bloggers.]

Hugo Bowne-Anderson, the host of DataFramed, the DataCamp podcast, recently interviewed Cathy O’Neil, author of the blog mathbabe.org and several books on data science, including Weapons of Math Destruction.

Here is the podcast link.

Introducing Cathy O’Neil

Hugo: Hi there, Cathy, and welcome to DataFramed.

Cathy: Thank you. I’m glad to be here.

Hugo: It’s such a great pleasure to have you on the show and I’m really excited to be here to talk about a lot of different things surrounding data science, and ethics, and algorithmic bias, and all of these types of things. But before we get into the nitty-gritty, I’d love to know a bit about you. Perhaps you can start off by telling us what you do and what you’re known for in the data community.

Cathy: Well, I’m a data science consultant and I started a company to audit algorithms, but I guess I’ve been a data scientist for almost as long as there’s been that title. And actually, I would argue that I was a data scientist before that title existed, because I worked as a quant in finance starting in 2007. And I think of that as a data science job, even though other people might not agree. I mean, and the reason I’m waffling is because when I entered data science maybe I would say in 2011, a large question in my mind was, “To what extent is this a thing?” I wrote a book actually called Doing Data Science just to explore that question. I co-authored it with Rachel Schutt. The idea of that book was, what is data science? Is it a thing? Is it new? Is it important? Is it powerful? Is it too powerful? Things like that. So what am I known for? I think I’m known for being a gadfly, for sort of calling out bullshit, for possibly … People think of me as overly negative about the field. I think of myself as the antidote to Kool-Aid.

Hugo: Well yeah, I think of a lot of the work you’ve done as a kind of restorative force against a lot of what’s happening at blinding speed in tech, in data science, and in how algorithms are becoming more and more a part of our daily lives.

Cathy: Yeah. That sounds nice. Thank you.

Hugo: You’re welcome. And I like the way you motivated the book you co-wrote, Doing Data Science, in terms of exploring what data science is, because you actually have a nice working definition in there, which is something along the lines of a data-savvy, quantitatively minded, coding-literate problem solver. And that’s how I like to think of data science work in general as well.

Cathy: Yeah. You know, I’ve actually kind of discarded that definition in preference for a new definition, which I just came up with a couple months ago, but I’m into it. And this is a way of distinguishing the new part of data science from the old part of data science. So it’s not really a definition of data science per se, but it is a definition of what I worry about in data science, if you will, which is that data science doesn’t just predict the future, it causes the future. So that distinguishes it from astronomers. Astronomers use a lot of quantitative techniques. They use tons of data. They’re not new, so are they data scientists? In the first definition that you just told me from my book, probably yes, but in the second, no, because the point is that they can tell us when Halley’s Comet is coming back, but they’re not going to affect when Halley’s Comet is coming back. And that’s the thing that data science, or I should say data scientists, need to understand about what they’re doing. For the most part, they are not just predicting, they’re causing. And that’s where you get these society-wide feedback loops that we need to start worrying about.

Doing Data Science

Hugo: Agreed completely. And I look forward to delving into these feedback loops. And this idea of a feedback loop in data science work in algorithms and modeling is one of the key ingredients of what you call a Weapon of Math Destruction, which I really look forward to getting back to. But I like the idea that you’ve moved on from a quasi-definition you had in your book Doing Data Science. Because the question, I mean, Doing Data Science was written five or six years ago now. Is that right?

Cathy: Yeah. 2012 I think.

Hugo: Yeah. Right. So I’m wondering, looking back on that, if you were to rewrite it or do it again, what do you think is worth talking about now that you couldn’t see then?

Cathy: Well, to be clear, each chapter was a different lecture in a class at Columbia. It was taken from those lectures, including a lecture I gave about finance, and a lecture where we had the data scientists from Square speak. We had people who were probably considered statisticians rather than data scientists speaking, et cetera. So it was a grab bag, and in that way it was actually really cool, because it was all over the place and broad, and we could see how these various techniques were broadly applicable, and we could also go into networks in one chapter and time series in another. And that was neat because we could have a survey, if you will, of stuff. But it wasn’t meant to be a deep dive in any given direction.

Cathy: If I rewrote it now and kept with that survey approach, I would be surveying a totally different world, because we have very different kinds of things going on now. I guess we also have some through lines, like some things that are still happening that were happening then, that we’d emphasize more. I think in particular I would spend a lot more time on recommendation engines. We do have a chapter on recommendation engines, from the former CEO of Hunch, I believe, about trying to understand a person through 20 questions and then recommending what kind of iPhone they should buy or something like that. But nowadays, I’d spend a lot more time exploring things like: to what extent do YouTube recommendations radicalize our youth?

What is a weapon of math destruction?

Hugo: That’s interesting because I think what that does is it puts data science and data science work, as we’ve been discussing already, into a broader societal context, and assesses and communicates around the impact of all the work that happens in data science. So I think that provides a nice segue into a lot of the work you’ve done, which culminated in your book, Weapons of Math Destruction, that I’d like to spend a bit of time on. So could you tell me kind of the basic ingredients of what a weapon of math destruction actually is?

Cathy: Sure. A weapon of math destruction is a kind of algorithm that I feel we’re not worrying enough about. It’s important, and it’s secret, and it’s destructive. Those are the three characteristics. By important, I mean it’s widespread, it’s scaled, it’s used on a lot of people for important decisions. I usually think of the categories of decisions as the following: financial decisions, so it could be a credit card or insurance or housing. Or livelihood decisions: do you get a job? Do you keep your job? Are you good at your job? Do you get a raise? Or liberty: how are you policed, how are you sentenced to prison, how are you given parole? Your actual liberty. And then the fourth category would be information. So how are you fed information? How is your environment online, in particular, informed through algorithms, and what kind of long-term effects are those having on different parts of the population? So those are the four categories. They’re important. One of the things that I absolutely insist on when we talk about weapons of math destruction or algorithms or regulation in particular is that we really focus in on important algorithms. There are just too many algorithms to worry about. So we have to triage and think about which ones actually matter to people.

Cathy: And then the second thing is that they’re secret. Almost all of these are secret. People don’t even know that they exist, never mind understand how they work. And then finally, they’re making important secret decisions about people’s lives and they fuck up. They make mistakes, and it’s destructive for the individual who doesn’t get the opportunity or the job or the credit card or the housing opportunity, or who gets imprisoned too long. So it’s destructive for them. But as an observation, and this goes back to the feedback loop thing, it’s not just destructive for an individual, it actually undermines the original goal of the algorithm and creates a destructive feedback loop on the level of society.

What are the most harmful weapons of math destruction?

Hugo: Yeah. And a point you make in your book, which we may get to, is that they can also feed into each other and exacerbate conditions that already exist in society such as being unfair on already underrepresented groups. So before we get there though, could you provide, I mean you’ve provided a nice kind of framework of the different buckets of these algorithms and WMDs and where they fall, but could you provide a few concrete examples of what you consider to be the most harmful WMDs?

Cathy: Yeah. So I’ll give you a few. And I choose these in part because they’re horrible, but also because they all fail in totally different ways. I want to make the point that there’s not one solution to this problem. So the first one comes from the world of teaching, public school teaching. There was this thing called the Value Added Model for teachers, which was used to fire a bunch of teachers, and unfairly, because it turned out it was not much better than a random number generator. It didn’t contain a lot of information about a specific teacher. And in instances where it did seem to be an extreme value, it was manipulated by previous teachers cheating. So you couldn’t really control your numbers, but if the previous teacher cheated, then your number would go down. So it was like this crazy system.

Hugo: Yeah. Because if I remember correctly, the baseline is set by where your students were in the previous year and who taught them, right?

Cathy: Yeah. The idea was: how well did your students do relative to their expected performance on a standardized test? And it was a very noisy question in terms of statistics. Unless the previous teacher in the previous year had cheated on those kids’ tests, and those kids did extremely well relative to what they actually understood, which would force them of course to do extremely badly the next year, even if you’re a good teacher. So it would look really bad for you. But long story short, when there wasn’t cheating involved, it was normally just a terrible, statistically non-robust model. And yet it was being used to fire people. So that’s the first example. The next example is from hiring, and it’s the story of Kyle Behm, a young man who failed a personality test he had to take to get a job, and who noticed that some of the questions were exactly the same questions he had been given in a mental health assessment when he was being treated for bipolar disorder. So there was an embedded mental health assessment, which is illegal under the Americans with Disabilities Act: it makes it illegal for any kind of health exam, including a mental health exam, to be administered as part of a hiring process. So that’s another example, and I should add that it wasn’t just one job. Kyle ended up taking the same exact test seven different times when he applied to seven different chain stores, all of them in the Atlanta, Georgia area. So he wasn’t just precluded from that one job, he was precluded from almost any minimum wage work in the area.

Cathy: And it wasn’t just him, it was anybody who would have failed that mental health assessment, which is to say a vast community of people with mental health conditions. So that’s a great example of the feedback loop I was mentioning. Because of the scale of this Kronos test, it wasn’t just destructive for the individual, it was undermining the exact goal of personality tests and also undermining the overall goal of the ADA, which is to avoid the systematic filtering out of subpopulations. And so that’s the second example. And the third example I would give is what we call recidivism risk algorithms in the criminal justice system, where you basically have questionnaires that end up with a score for recidivism risk, and that score is handed to a judge, who is told that this is an objective, scientific measurement of somebody’s risk of recidivism, recidivism being the likelihood of being arrested after leaving prison.

Cathy: And the problem with that, well, there are lots of problems with that, but the very immediate problem is that the questions on the questionnaire are almost entirely proxies for race and class. So they ask questions like, “Did you grow up in a high-crime neighborhood?” I mean, you grew up in a high-crime neighborhood if you’re a poor black person, fact. That’s almost the definition of a high-crime neighborhood. That’s where the police are sent to arrest people, historically, from the broken windows theory of policing to the present day. And by the way, I should add, in part that has been propagated by another algorithm, which is predictive policing. So you’re being asked all these proxies for poverty, proxies for race and class. Other questions are like, “Are you a member of a gang? Do you have mental health problems? Do you have addiction problems?”

Cathy: A lot of this kind of information is only held against you if you are poor; richer people, white people, get treated, they don’t get punished for this kind of thing. So long story short, it’s basically a test to see how poor you are and how minority you are. And then if your score is higher, which it is if you are poor and if you’re black, then you get sent to prison for longer. Now I should say, as toxic as that algorithm is, and as obvious as it is that it creates negative feedback loops, one of the things the jury is still out on is whether that is actually that different from what we have already. We already have a racist and classist system, not to mention judges. And we have evidence for that.

Cathy: And the idea was, we’re going to get better. We’re going to be more scientific, we’re going to be more objective. It’s not at all clear that kind of scoring system would do so. Nor is it clear, by the way, because there’s been some amount of testing since my book came out about how judges actually use these scoring systems. It’s not clear that they use them the way they’re intended. And there’s all sorts of evidence now that judges either ignore them, or they ignore them in certain cases but listen to them in other cases. For example, they ignore them in black courtrooms and they use them in white courtrooms. So especially when the scores are being used for pretrial detention considerations, judges will let white people out of incarceration pretrial, but then they’ll ignore the scores in urban districts, where they keep black people incarcerated before trial.

Cathy: Long story short, there are also a lot of questions around how they’re actually being used, but it’s a great example of a weapon of math destruction created as if the mere nature of algorithms would make things more fair. I guess, going to your earlier point, no algorithm is perfect, and we couldn’t expect that one to be perfect. But the reason these society-wide destructive feedback loops get created by these algorithms isn’t just because they’re imperfect. It’s because of how they’re being used, as I said in that example, but more broadly, they’re funneling people of different classes, different genders or races, different mental health or disability statuses. They’re funneling them onto a path which they were sort of ‘already on’ depending on their demographics.

Biases and Algorithms

Hugo: Yeah, and speaking to your point that these algorithms might not be creating new biases, I mean they may be as well, but that they’re encoding societal biases and keeping people on a path they may have been on already, I think something distinct from that is that they’re actually scalable as well, right?

Cathy: Yeah, right. So we shouldn’t be surprised, of course, now that we say it out loud: they’re propagating past practices, they’re automating the status quo, they’re just doing what was done in the past and acting like, “Since this happened in the past, in a pattern, we should predict that it will happen in the future.” But the way they’re actually being utilized means not just that we predict it will happen, but that we’re going to cause it to happen. If you are more likely to pay back a loan, you’re more likely to get a loan. So the people who are deemed less likely are going to be cut out of the system. They’re not going to be offered a credit card. And since all the algorithms work in concert and similarly to each other, this becomes a rule, and it’s highly scaled, even if it’s not the exact same algorithm, which it was in the case of Kyle Behm with the Kronos algorithm, the same exact algorithm being used. But even if it isn’t, the fact is data scientists do their job similarly across different companies in the same industry, so online credit decisioning is going to be based on similar kinds of demographic questions.
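To make that feedback loop concrete, here is a toy sketch in Python (not from the interview or the book; the group names, rates, and the simple updating rule are all invented for illustration). It shows how a lender that only learns from the applicants it approves can freeze a group out on the basis of a bad initial estimate, because rejected applicants never generate the data that would correct it:

```python
# Toy sketch of the feedback loop described above: a lender only observes
# outcomes for approved applicants, so a group shut out by a poor initial
# estimate never gets the chance to correct that estimate.
import random

random.seed(42)

TRUE_REPAY_RATE = 0.80   # both hypothetical groups actually repay at the same rate
THRESHOLD = 0.75         # a group is offered credit only if its estimated rate >= this

# Historical estimates: group_b's is low purely because of a small, biased sample.
estimated_rate = {"group_a": 0.82, "group_b": 0.60}
observations = {"group_a": 500, "group_b": 50}

for year in range(1, 6):
    for group in estimated_rate:
        if estimated_rate[group] >= THRESHOLD:
            # Approved applicants generate fresh outcome data ...
            new_loans = 200
            repaid = sum(random.random() < TRUE_REPAY_RATE for _ in range(new_loans))
            total_obs = observations[group] + new_loans
            estimated_rate[group] = (
                estimated_rate[group] * observations[group] + repaid
            ) / total_obs
            observations[group] = total_obs
        # ... rejected applicants generate none, so the estimate never moves.
    print(f"year {year}:", {g: round(r, 2) for g, r in estimated_rate.items()})
```

In this toy run, group_a’s estimate hovers near the true rate of 0.80, while group_b stays frozen at 0.60 and is never offered credit again: the model keeps causing the future it predicts.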

Hugo: I also think, I mean, there are a lot of different avenues we can take here, and for people who want more after this conversation, I highly recommend Cathy’s book Weapons of Math Destruction. Something I’d like to focus on is that all of these models, the value added model for teaching, the hiring model, these models to predict recidivism risk, share one really important aspect: they’re not interpretable. We can’t tell why they make the predictions they do. They’re black boxes in that sense. A teacher can’t go and ask, “Why have you given me this rating?” They’re just pointed to the algorithm. And this inability to interpret them, combined with their scalability, really creates a lack of accountability and a lack of fairness en masse, correct?

Cathy: Yeah. I mean, that’s exactly right. And I talked about that as, in fact, a characteristic of a weapon of math destruction: that it’s secret. And that’s a really important part of it, because when you have something that’s important and secret, it’s almost always going to be destructive. A good data science model has feedback loops and it incorporates its mistakes, but there’s no reason for its mistakes to be incorporated when we don’t alert people to them. So this lack of accountability is a real problem for the model. But it’s also obviously a real problem for the people who are scored incorrectly, because they have no appeal. There’s no due process. And to that point, there were six teachers in Houston that won a lawsuit. They were fired based on their value added model scores. They sued and won, and the judge found that their due process rights had been violated, and I’m sort of sitting around waiting for that to happen in every other example that I have mentioned, but also in lots and lots of other examples that are similar, where you have this secret, important decision made about you. Why is that okay?

Hugo: So this is a retroactive, I suppose, I don’t want to use the word solution, but a way of dealing with what has happened. I agree that action needs to be taken across the board. I’m wondering what some viable solutions are to stop this happening in future. And I love the fact that we opened this conversation with you telling us that you work in consulting now, in particular, in algorithmic audits, and I’m wondering if that will be a part of the solution going forward and what else we can do as a data science community to make sure that we’re accountable?

Cathy: I mean, yes. So there are two different approaches: one of them is transparency and one of them is auditability. And honestly, I think we need to consider both very carefully. We have to think about what it means for something to be transparent. Certainly it wouldn’t be very useful to hand over the source code to teachers and tell them, “This is how you’re being evaluated, here are the coefficients that we’ve trained on this data.” No, that would not be useful, so we need to understand what we mean by transparency. And I sort of worked out a kind of idea that I think is worth a try. It’s kind of a sensitivity analysis. I mean, that’s the technical term, but really what it looks like is: first, confirm that the data that you have about me is correct. Next, what if something had changed a little bit? What if this kid had gotten a slightly better score? What if that kid hadn’t been in my class? What if I had another kid? What if I’d been teaching at a different school? What if I’d been teaching in a different classroom in my school? What if I’d had 30 kids instead of 20? How would my score change?

Cathy: And it’s not going to prove everything. It would catch obvious errors, it would catch obvious instabilities, which actually that algorithm in particular had. So if you found out that your score would go from bad to good based on one small change, then you would know that this is a bogus algorithm. So that’s one idea at the level of transparency, but I would insist on suggesting that you really don’t know whether an algorithm is fair, just knowing how your own score works. Even if you really, really understood your own score, you wouldn’t know if it’s fair.
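As a rough illustration of that kind of sensitivity check, the sketch below perturbs a hypothetical record one “what if” at a time and reports how the score moves. The scoring function, feature names, and coefficients are invented stand-ins, since the real value added model is secret; the point is only the shape of the exercise:

```python
# Minimal sketch of the sensitivity check described above: perturb one input
# at a time and see how much the score moves. `score_teacher` is a made-up
# stand-in for whatever opaque model produced the rating.
import copy

def sensitivity_report(score_fn, record, perturbations):
    """For each proposed 'what if', report how the score would change."""
    baseline = score_fn(record)
    rows = []
    for description, tweak in perturbations:
        tweaked = tweak(copy.deepcopy(record))
        rows.append((description, score_fn(tweaked) - baseline))
    return baseline, rows

# --- hypothetical example usage -------------------------------------------
def score_teacher(record):
    # stand-in for the real (secret) scoring model; coefficients are invented
    return 50 + 2.0 * (record["avg_student_gain"] - record["expected_gain"]) \
              - 0.1 * record["class_size"]

record = {"avg_student_gain": 3.0, "expected_gain": 4.0, "class_size": 25}

perturbations = [
    ("one student scores slightly higher",
     lambda r: {**r, "avg_student_gain": r["avg_student_gain"] + 0.2}),
    ("ten more kids in the class",
     lambda r: {**r, "class_size": r["class_size"] + 10}),
]

baseline, rows = sensitivity_report(score_teacher, record, perturbations)
print("baseline score:", baseline)
for description, delta in rows:
    print(f"{description}: score changes by {delta:+.2f}")
# Large swings from tiny, plausible changes are a sign the score is not robust.
```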

Cathy: Fairness is a statistical concept. It’s a notion that we need to understand at an aggregate level. So I am pushing for the idea of auditing as being just as important as transparency, really: to ask questions along the lines of, for whom does this algorithm fail? Does it fail more often for black people than for white people? Does it fail more often for women than for men? Et cetera. And that’s a question you cannot get to just by understanding your own score or whether your own data is correct or incorrect. It’s a question that has to be asked at a much higher level, with much more access. Now, to your point that I myself have an algorithmic auditing company, I do. But guess what, it doesn’t have that many customers, sadly. And that’s a result of the fact that algorithms essentially don’t get that much scrutiny. There’s not much leverage to convince somebody to audit their algorithms.
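One way such an aggregate-level audit could look in code, assuming you had access to the full decision log (the column names and toy records below are hypothetical):

```python
# Minimal sketch of the audit question "for whom does this algorithm fail?":
# given predictions, true outcomes, and a group label, compare error rates
# across groups. Field names and records are hypothetical placeholders.
from collections import defaultdict

def error_rates_by_group(records):
    counts = defaultdict(lambda: {"errors": 0, "total": 0, "false_pos": 0, "neg": 0})
    for r in records:
        c = counts[r["group"]]
        c["total"] += 1
        if r["predicted"] != r["actual"]:
            c["errors"] += 1
        if r["actual"] == 0:                 # person did not have the predicted outcome
            c["neg"] += 1
            if r["predicted"] == 1:          # ... but the model flagged them anyway
                c["false_pos"] += 1
    report = {}
    for group, c in counts.items():
        report[group] = {
            "error_rate": c["errors"] / c["total"],
            "false_positive_rate": c["false_pos"] / c["neg"] if c["neg"] else None,
        }
    return report

# hypothetical decision log: predicted = model flagged as "high risk",
# actual = whether the flagged outcome actually happened
records = [
    {"group": "A", "predicted": 1, "actual": 0},
    {"group": "A", "predicted": 0, "actual": 0},
    {"group": "B", "predicted": 1, "actual": 1},
    {"group": "B", "predicted": 0, "actual": 0},
    # in a real audit this would be the complete decision log
]

for group, stats in error_rates_by_group(records).items():
    print(group, stats)
# A large gap in false-positive rates between groups is the kind of disparity
# ProPublica highlighted in its reporting on recidivism scores.
```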

Cathy: I have some clients, and those clients are great and I love them. They are clients who really do want to know whether their algorithm is working as intended, and they want to know either for their own sake, because money’s on the line or their reputation’s on the line, or on behalf of some third party, like the investors or their customers or the public at large. They want to know whether it’s working. What I really started my company for, though, is to audit the algorithms that are, generally speaking, the algorithms that companies don’t want to have audited, if you see where I’m going with this. It’s those algorithms that are profiting from racism or profiting from bypassing the Americans with Disabilities Act. Those are the very algorithms that I want to be auditing, but I don’t have those clients yet, and I don’t have them because we’re still living in a sort of plausible deniability situation with respect to algorithms.

What are the incentives for algorithmic audits?

Hugo: So it may not currently be in these companies’ interests to be audited, right? So where do you see these incentives coming from? I can imagine the endgame could be legislators catching up with technology. Another thing we currently have is that data scientists, and the community as a whole, are in a relative position of being able to make requests of their own companies. So you could imagine, we’re having this conversation now around checklists versus oaths versus codes of conduct within the data science community as a whole, and you could imagine algorithmic audits becoming part of a checklist or an oath or a code of conduct. So I’m wondering where you see the incentives for companies in late-stage capitalism coming from?

Cathy: Yeah, I mean, I know there are a lot of really ethical data scientists out there and I love them all, but I don’t expect their power to be sufficient to get the companies they work for to start worrying about this, in general. So I think it has to come from fear, honestly, and that’s either fear of federal regulators, and I’m not holding my breath for that to happen, or fear of litigation, so that essentially their compliance officer says, you have to do this or else we’re taking on too much risk and we’re going to get screwed. Just in this example, Kyle Behm was applying to work at Kroger’s grocery store when he got red-lighted by that Kronos algorithm. So Kroger was licensing the Kronos algorithm under a license agreement that said they wouldn’t understand the algorithm that Kronos had built, but they had this indemnification clause, an extra contract on top of their licensing agreement, that said if there’s any problem with this algorithm, Kronos would pay for the problem. So they would take on the risk.

Cathy: But Kronos is not a very big company. It was taking on the risk for seven huge companies just in the Atlanta, Georgia area, which is stupid, because honestly, under the fair hiring law, the ADA, the onus is on the large company, not on some small data vendor. So when Kyle’s father, who’s a lawyer, sued, he filed seven class action lawsuits, one against every one of those large companies. Those large companies are on the hook for the settlement, if it ends up as a settlement. And Kronos is going to go bankrupt very, very quickly if that ends up being settled for lots of money.

Cathy: So it’s just one example, but I think it’s a very important example, because it demonstrates something about the companies using these algorithms for HR or what have you. That’s often the setup: some small company builds the algorithm and then licenses it to some large company, or government agency in the case of predictive policing, recidivism or, for that matter, teacher evaluation. And they can’t just offshore the risk, because it’s those large companies that are going to be on the hook for the lawsuits. Right now, the state of the world is that those large companies do not see the risk. They do not acknowledge the risk. And so far they’ve gotten away with it.

Hugo: Your discussion of Kronos there reminded me of something that really surprised me when reading Weapons of Math Destruction: I knew about a lot of these cases, but I’d heard of hardly any of the data vendors and small companies that build these models. That kind of shocked me, with respect to how much impact they are having, and can have in the future, on society.

Cathy: Yeah. You know, it’s this kind of funny thing where we as a society are waking up, and that’s a very important thing. The public itself is starting to say, “Hey wait, algorithms aren’t necessarily fair.” But how do we know that? It’s because we use Google search and we use Facebook, and we see these, I would say, consumer-facing algorithms, one by one, on a daily basis. And so we see the flaws of those things, and we see the longer-term societal effects of being outraged by the news we see on Facebook every day. Those happen to be obvious examples of problematic algorithms, but they also happen to be some of the hardest, biggest, most complex algorithms out there. I would not actually know how to go about auditing them.

Cathy: Let me put it this way: there’d be a thousand different ways to audit them, and you’d have to think really hard about how to set up a test for each one. Whereas just asking whether a specific personality test, or an application filter, an algorithm that filters out job applications, which is also used, is legal, is a much more finite, doable question. But because of the nature of those algorithms, we may send in an application for a job without even knowing our application is being filtered by an algorithm, so how is the public going to find out it’s wrong, or that their application was wrongly classified? It’s completely hidden from our view. And I would say that most of the algorithms that are having strong effects on our lives, and college admissions offices all use algorithms too, we don’t know about. We can’t complain if they go wrong because we’re never made aware of them. And yet those are the ones that desperately need to be audited.

Where to read more on what’s happening now?

Hugo: And so in terms of where people can find out more about these types of algorithms and the challenges we’re facing as a society, I know, for example, that ProPublica has done a lot of great work on recidivism. I follow Data & Society and the AI Now Institute, but I’m wondering, do you have any suggestions for where people can read more widely about what’s happening now?

Cathy: I mean the good news is that there’s lots and lots of people thinking about this. The bad news is ProPublica, AI Now, any kind of sort of outside group, even with the best intentions, doesn’t have access to these algorithms. That’s a large part of why I did not go that route. I’m not an academic. I don’t have the goal of having a sort of think tank that audits algorithms from the outside because you can’t. You literally can’t audit algorithms that are HR algorithms from the outside. You have to be invited in. So that’s why I started a company that theoretically anyway could be invited in to audit an algorithm. But then the problem I still have, in spite of the fact that I’m willing to sign a non-disclosure agreement, is that nobody wants my services because of this plausible deniability issue. Literally there are people that I have talked to that want my services, but then their corporate lawyers come on the phone and they say, “What if you find a problem with our algorithm that we don’t know how to fix? And then later on when somebody sues us, in discovery it’s found that we knew there was a problem with this algorithm? That’s no good. We can’t use your services. Goodbye.”

Challenges Facing Data Privacy

Hugo: So I’d like to move slightly and just think about the broader context of data science and the data revolution, and I’m wondering what other important challenges you think we are facing with respect to the amount of data there is, data privacy, and all the work that’s happening.

Cathy: I mean, I’d say the biggest problem is that we live in a putatively free society, and we’re having a lot of problems thinking about how to deal with this, in large part because it doesn’t give rise to that many individual stories. I’ve found a few stories, like the Kyle Behm story, et cetera, and found some teachers who were fired unfairly by the value added model, but the way our policy making works in this country is that they need to find victims, and then people get outraged, and then they complain, and then the policymakers pass laws. And the nature of this statistical harm is different, and it’s harder to measure, and so it’s harder to imagine laws being passed. And that’s the best case scenario, when you live in a society that actually cares. I guess the best best case scenario might be happening in Europe, where they actually do pass laws, although I think it’s much more focused on privacy and less focused on this kind of algorithmic discrimination. But in terms of what I worry about the most, I’m looking at places like China with their social credit score, which is intrinsically not trying to be fair. They are just explicitly trying to nudge people, or really strong-arm people, into behaving well. They’re social control mechanisms, and they’re going to be very, very successful.

Hugo: So what lessons do you think we can take from history to help approach this issue? And I mean in particular from previous technological revolutions such as the industrial revolution, but there may be others.

Cathy: Well, I mean, there are lots of different answers to that. One of them is that it took us a while to catch up with pollution, because it was like, who in particular is harmed by a polluted river? So it’s kind of an external, what is it called? Externality. And we have externalities here which we are not keeping track of in the same kind of way. And it took us a while to actually care enough about our environment to worry about how chemicals change it. But we ended up doing that, in large part because of the book Silent Spring, but other things as well. And then another example I like to give is, if you think about the exciting new invention called the car, people were super excited about the car, but it was also really dangerous. And over time, we have kept track of car-related deaths, and we have lowered them quite a bit because of inventions like the seatbelt and crash test dummies, et cetera.

Cathy: And we started paying attention to what makes something safer. Not to say that cars are totally safe, because they’re not, they’re still not. But we have traded the convenience for the risk. I feel like, best case scenario, in our future interactions with algorithms we’re going to be making a similar kind of trade, where we need algorithms, they’re so efficient and convenient, but we have to be aware of the risk. And the first step of that is to measure the deaths. We measured car-related deaths. We need to measure algorithm-related harm, and that goes back to the point I’ve made at least twice already, which is that we are not currently aware of the harm because it’s invisible to us. And so when I talk to policy makers, which I do, I beg them not to regulate algorithms by saying, “Here’s how you have to make an algorithm,” because I think that would possibly be too restrictive, but to regulate algorithms by saying, “Tell us how this is going wrong, measure your harm, show us who’s getting harmed.” That’s the very first step toward understanding how to make things safer.

Hugo: And I think this also speaks to a greater, general cultural challenge we’re facing, in that the number of deaths incurred in a society forms a fundamental part of the political cycle, in a lot of respects, in America and other countries, whereas the amount of unfairness and poverty isn’t necessarily something that’s discussed in the same framework. Right?

Cathy: Can you say that again?

Hugo: Yeah. So deaths are something that is immediately quantifiable and able to be brought to legislators and politicians as part of the political cycle, whereas the amount of poverty isn’t necessarily something that is as interesting in the news cycle and the political cycle.

Cathy: Yeah, that’s a good point. It’s harder to quantify inequality than it is to quantify deaths. And that goes back to the question of what our political system responds to, if anything. I mean, right now it’s just a complete shit show, but even in the best of times it responds better to stories of cruelty and death than it does to silent mistakes that nevertheless cost people real opportunities. So it’s hard to measure what an opportunity loss costs: not getting a particular job, or, here’s another one that’s even relevant to current lawsuits going on with Facebook, not being shown an ad for a job that you might have wanted. Facebook’s getting in trouble for showing ads only to young people, so it’s an age issue, or only to men, so women don’t get to see STEM-related job ads. And so how much harm is that for a given person? It’s not obviously the most harmful thing that’s ever happened to someone. So it’s not as exciting at a policy level, but it happens systemically, which it does, and that makes it a problem for society.

Hugo: Yeah. And that speaks to another really important point in terms of accountability and transparency that you can be shown stuff in your online experience and I’m shown something totally different and legislators are shown something entirely different. And this type of targeting is a relatively new phenomenon.

Cathy: That’s right. I mean, that’s one of the reasons it’s so hard to pin down. It goes to my earlier point: you get to see what you get to see, but that’s not a statistical statement about what people in general get to see. Flipping that in the other direction, it is an amazing tool for predatory industries like payday lending or for-profit colleges; it’s like they can’t believe how lucky they got. They used to have a lot of trouble locating their victims, desperate poor people, but now they couldn’t be happier, because they’ve got this system, and it’s called the internet, that finds them for them, cheaply, en masse, and in a way that’s exceedingly easy to scale. So it’s a fantasy come true for those kinds of bad actors. But then the question becomes, how do we even keep track of that, if they are actually going after people who are, in a very real way, voiceless, and don’t have the political capital to make their problems a priority?

What should future data scientists practice that isn’t happening yet?

Hugo: So I’ve got one final question for you, Cathy. We’re in the business of data science education here at DataCamp. Because of that, a lot of our listeners will be the data analysts and data scientists of the future. And I’d like to know what you’d like to see them do in their practice that isn’t happening yet.

Cathy: I just wrote a paper, it’s not out yet, but it will be out pretty soon, about ethics and artificial intelligence, with a philosopher named Hanna Gunn. It’s called The Ethical Matrix. I actually don’t know what it’s called, but I think it’s something along the lines of The Ethical Matrix. At least, it introduces this concept of an ethical matrix, and it’s a very simple idea. The idea is to broaden our definition of what it means for an algorithm to work. So when you ask somebody, “Does this algorithm work?” they always say yes, and then you say, “What do you mean?” And then it’s like, “Oh, it’s efficient.” And so you’re like, “But beyond that, does it work?” And that’s when it becomes, what do you mean? And then, even if they want to go there, they’re like, is that an infinitely complicated question that I don’t know how to attack?

Cathy: So the idea of this ethical matrix is to give a rubric for addressing this question, and it’s something that I claim we should do before we start building an algorithm, and that we should do with everyone who is involved. And so, to that point, the first step in building an ethical matrix is to understand who the stakeholders are, to get those stakeholders involved in the construction of the matrix, and to embed the values of the stakeholders in a balanced way relative to their concerns. So the rows are the stakeholders, the columns are the concerns, and then you go through each cell of the matrix and try to decide whether these stakeholders are at high risk of this concern going very, very wrong.

Cathy: It’s basically as simple as that. But our theory is that if this becomes part of the yoga of building a data-driven algorithm, then it will, theoretically at least, help us consider much more broadly what it means for an algorithm to work, what it means for it to have long-term negative consequences, what to monitor to make sure it isn’t going wrong, et cetera. And it will take us from the narrow point of view of “it’s working because it’s working for me and I’m making money”, which is the one-by-one ethical matrix, where the only stakeholder is me or my company and the only concern is profit, and broaden that out to look at all the people we’re affecting, including maybe the environment, and all of the concerns they might have: fairness, transparency, false positives, false negatives. And consider all of these and balance their concerns explicitly.
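To give a sense of how simple the rubric is, here is a tiny, entirely hypothetical sketch of an ethical matrix as a data structure, with made-up stakeholders, concerns, and risk ratings; in practice, the whole point is that the stakeholders themselves would help fill in the cells:

```python
# Minimal sketch of the ethical matrix described above: rows are stakeholders,
# columns are concerns, and each cell records how badly things could go wrong
# for that stakeholder on that concern. All entries here are placeholders.

stakeholders = ["job applicants", "hiring company", "data vendor", "public"]
concerns = ["fairness", "transparency", "false positives", "profit"]

# risk levels agreed on with stakeholders before the algorithm is built
matrix = {
    ("job applicants", "false positives"): "high",   # wrongly filtered out of work
    ("job applicants", "fairness"): "high",
    ("job applicants", "transparency"): "medium",
    ("hiring company", "profit"): "low",
    ("public", "fairness"): "medium",
    # unfilled cells default to "unassessed"
}

def review(matrix, stakeholders, concerns):
    """Print the full matrix and collect every high-risk cell for explicit mitigation."""
    flagged = []
    for s in stakeholders:
        for c in concerns:
            level = matrix.get((s, c), "unassessed")
            if level == "high":
                flagged.append((s, c))
            print(f"{s:>15} | {c:<15} | {level}")
    return flagged

for stakeholder, concern in review(matrix, stakeholders, concerns):
    print(f"HIGH RISK: decide how to monitor '{concern}' for '{stakeholder}' "
          "before the algorithm ships.")
```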

Hugo: Well, I for one am really excited about reading this paper and we’ll include a link in the show notes to it as well. When it’s out.

Cathy: When it’s out. Great, cool.

Hugo: Fantastic. Look, Cathy, thank you so much for coming on the show. I’ve enjoyed this conversation so much.

Cathy: Great. Thank you for having me. Thanks Hugo.
