4 ways machine learning perpetuates injustice


In effect, ML models embody and implement the policies that control access to opportunities and resources like credit, employment, housing, and even freedom itself, in the case of arrest-prediction models that inform parole and sentencing decisions.

Insurance risk models determine how much each policyholder should pay, and targeted marketing determines who gets discounts, special deals, and even awareness of specific financial products.

When ML acts as a gatekeeper to these opportunities, it can perpetuate or amplify social injustice, disproportionately denying access to already disadvantaged groups. Below are four examples.

How does machine learning perpetuate bias?

  1. Machine learning models can use race and national origin as inputs, allowing them to make decisions based on protected class status.
  2. Models can deny access or opportunities to some groups over others.
  3. Lack of representativeness in model training means that the model will perform poorly for under-represented groups.
  4. Models can infer sensitive information about people, which can then be used to discriminate against those groups directly.


1. Discriminatory Model

A discriminatory model takes a protected class, such as race or national origin, as an input and makes decisions based directly on that class. These models discriminate openly; their discriminatory behavior is at least more visible and detectable than that of decision-makers who discriminate but keep the basis for their decisions hidden.

For example, such a model could penalize Black people simply for being Black. Although this is illegal in some circumstances and relatively uncommon so far, some prominent experts in ML ethics have been vocal advocates for allowing protected classes as inputs to models.

2. Machine Bias

Inequity in false positive rates between groups means the model wrongly denies approval or access to an opportunity more frequently for some groups than for others. This can happen even when the model is not explicitly discriminatory (as defined above), because the model can use other, unprotected input variables as proxies for protected classes.

For example, ProPublica famously exposed a rearrest-prediction model that falsely flags Black defendants as high risk more often than white defendants, contributing to their wrongful detention.
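To make this concrete, here is a minimal sketch of how a team might quantify this kind of disparity, assuming a pandas DataFrame with illustrative column names ("group", "actual", "predicted") and toy data; it is an illustration, not a prescribed audit procedure.

```python
# Minimal sketch: measure machine bias by comparing false positive rates across groups.
# Column names ("group", "actual", "predicted") and the toy data are illustrative.
import pandas as pd


def false_positive_rate(actual: pd.Series, predicted: pd.Series) -> float:
    """Share of truly negative cases that the model wrongly flags as positive."""
    negatives = actual == 0
    if negatives.sum() == 0:
        return float("nan")
    return float(((predicted == 1) & negatives).sum() / negatives.sum())


def fpr_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """False positive rate computed separately for each group."""
    return df.groupby(group_col).apply(
        lambda g: false_positive_rate(g["actual"], g["predicted"])
    )


# Toy example: a large gap between groups is the inequity described above.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0,   0,   1,   0,   0,   0,   1,   0],
    "predicted": [1,   1,   1,   0,   0,   1,   1,   0],
})
print(fpr_by_group(df))  # group A ≈ 0.67, group B ≈ 0.33
```

A sizable gap on this metric across groups is precisely the kind of inequity ProPublica measured.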

3. The Coded Gaze

If a group is underrepresented in the training data, the resulting model will perform poorly for members of that group, creating exclusionary experiences. For example, facial recognition systems fail more often for Black people than for people of other races. This phenomenon, also known as representation bias, shows up in voice recognition as well.
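One simple check is to compare each group's share of the training data with its share of the relevant population. The sketch below assumes made-up group labels and benchmark shares purely for illustration:

```python
# Minimal sketch: flag groups that are underrepresented in the training data
# relative to a reference population. Group labels and shares are made up.
from collections import Counter


def representation_gaps(training_groups, benchmark_shares, tolerance=0.8):
    """Return groups whose share of the training data is below
    `tolerance` times their share of the reference population."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            gaps[group] = {"in_training_data": observed, "in_population": expected}
    return gaps


# Toy example: group "C" is 20% of the population but only 4% of the training data.
training_groups = ["A"] * 60 + ["B"] * 36 + ["C"] * 4
benchmark_shares = {"A": 0.5, "B": 0.3, "C": 0.2}
print(representation_gaps(training_groups, benchmark_shares))
# {'C': {'in_training_data': 0.04, 'in_population': 0.2}}
```

Representation in the data is only a starting point; measuring the model's error rates separately for each group, as in the earlier sketch, is what confirms whether the gap translates into worse performance.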

4. Inferring Sensitive Attributes

A model's predictions can reveal group membership and other sensitive attributes: sexual orientation, whether someone is pregnant, whether they are likely to quit their job, or even whether they are likely to die.

In cases like these, otherwise harmless data can be used to extract sensitive information about people, from researchers showing it may be possible to predict someone's race from their Facebook likes, to Chinese authorities using facial recognition to identify and track the Uighur minority, an ethnic group systematically oppressed by the government.

Define standards and take a stand

“The question you should always ask is, ‘Who is this failing for?’” says Cathy O'Neil, author of Weapons of Math Destruction and one of the most visible activists in the field of ML ethics. This fundamental question encompasses the four issues above and many more. It is an impassioned call to action, reminding us to pursue ethical consideration as an exercise in empathy.

These ethical challenges can only be addressed by proactive leaders. Instead, companies using ML most often settle for corporate PR whitewashing, which amounts to little more than posturing about wanting their ML deployments to be “fair, equitable, accountable and responsible.”

These are vague platitudes that cannot lead to concrete action. They amount to ethics theater, more concerned with protecting a company's public image than with protecting the public. For example, it is rare for a company to clearly take a position on any of the four issues I listed above.

O'Neil has responded to indifference toward these and other issues with another weapon: shame. She advocates shaming as a way to combat companies that deploy analytics irresponsibly. Her most recent book, The Shame Machine, takes on “predatory corporations” and criticizes shame that punches down rather than up.

Fear of embarrassment also drives clients to her model-auditing consulting business. “People hire me to look at algorithms,” O'Neil says. “To be honest, most of the time they do it because they've gotten into trouble, they're embarrassed, or they're like, ‘I don't want to be accused of this, and I think this is risky.’”

But I also urge you to think about higher ideals: do good instead of avoiding bad; work to improve equality instead of avoiding shame; and work to set ethical ML standards as a form of social activism.

To do this, we need to define clear standards to take a position on, not just communicate vague platitudes. To start with, I would advocate the following standards, which I believe are necessary but not sufficient: ban discriminatory models, balance false positive rates across protected groups, enable an individual right to demand explanations for algorithmic decisions (at least in the public sector), and diversify analytics teams.


Fighting injustice with machine learning

Your role is important: As a stakeholder in the ML adoption effort, you have a powerful and influential voice – possibly much more powerful than you realize.

You are one of a relatively small number of people who shape and set the trajectory of a system that automatically determines the rights and resources that large numbers of consumers and citizens have access to.

“The decisions made by an organization's analytical models are decisions made by that organization's senior management team,” said Alan Sammy, director of data science and audit analytics at Canada Post.

ML can help, not harm. Widespread adoption of ML creates unprecedented new opportunities to proactively fight injustice rather than perpetuate it. When a model can be shown to risk a disproportionately negative impact on protected groups, quantifying that risk puts the issue on the agenda and draws attention to it.

The AI Playbook cover image courtesy of MIT Press.

Analytics provide quantitative options to address inequities by adjusting for them, and the very same operational framework for using ML to automate or support decision-making can be leveraged to deploy models tailored to improve social justice.
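For instance, one such quantitative option is to choose a separate decision threshold per group so that each group's false positive rate lands near the same target. The sketch below is an illustration under assumed scores, labels, and a 5 percent target, not a recommendation; whether this kind of adjustment is appropriate or lawful depends on the context.

```python
# Minimal sketch: choose a per-group score threshold so each group's
# false positive rate lands near the same target. Scores, labels, and the
# 5% target are illustrative assumptions, not a recommendation.
import numpy as np


def threshold_for_target_fpr(scores, labels, target_fpr=0.05):
    """Threshold above which roughly `target_fpr` of true negatives are flagged."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    negative_scores = scores[labels == 0]
    # Flagging scores above the (1 - target_fpr) quantile of the negatives
    # yields a false positive rate of roughly target_fpr.
    return float(np.quantile(negative_scores, 1.0 - target_fpr))


# Toy data: group B's scores run higher overall, so it gets a higher cutoff,
# which equalizes the two groups' false positive rates.
rng = np.random.default_rng(0)
groups = {
    "A": (rng.normal(0.4, 0.1, 500), rng.integers(0, 2, 500)),
    "B": (rng.normal(0.6, 0.1, 500), rng.integers(0, 2, 500)),
}
thresholds = {
    name: threshold_for_target_fpr(scores, labels)
    for name, (scores, labels) in groups.items()
}
print(thresholds)
```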

As you work to adopt ML successfully, make sure you are leveraging this powerful technology for good. Optimizing for only one objective, such as increasing profit, will lead to negative or even disastrous results.

But if we embrace human goals, science can help us achieve them. O'Neil recognizes this: “In theory, we can make things fairer. We can choose the values we aspire to and embed them in our code. We can do that. I think that's the most exciting thing about the future of data science.”

Over the past decade, I've devoted a significant portion of my work to ML ethics. For more detailed information, including a visual explanation of machine bias, arguments against explicitly discriminatory models, and details about the standards I propose, check out my writing and videos here.

This article is an excerpt from the book The AI Playbook: Mastering the Rare Art of Machine Learning Deployment, reprinted with permission from the publisher, MIT Press.



