I have been wanting to write this part for a while now; let's get started!
So, lostlovefairy told me that people have started relying on AI so heavily that patients will walk into a doctor's office and ask for a specific procedure because ChatGPT, or a similar AI model, recommended it based on the symptoms they typed in.
This isn't just the medical field. In every field, even science, AI is taking over and seemingly doing better than humans in the same scenarios, and even when that might not look like a problem, it certainly is.
If you've never heard of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), it's a tool developed and owned by Northpointe (now Equivant) and used to assess the likelihood of a defendant becoming a recidivist.
Sources to look for more in-depth research: Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel (October 17, 2016). "A computer program used for bail and sentencing decisions was labeled biased against blacks. It's actually not that clear." The Washington Post. Retrieved January 1, 2018.
Aaron M. Bornstein (December 21, 2017). "Are Algorithms Building the New Infrastructure of Racism?". Nautilus, No. 55. Retrieved January 2, 2018.
The term recidivist refers to a person who reoffends, that is, someone who commits another crime after already having been convicted of one.
In other words, COMPAS is software that uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."
Wikipedia source link here →
I started with clinical diagnosis... so why did we end up discussing COMPAS?
There's a reason; let me explain. As you can see, COMPAS was an assessment tool used to score how likely someone was to reoffend. And it seemed effective enough, until courts, juries, and police started to use it everywhere. You see where I am going with this? A person could be scaled by numbers to decide whether or not they would reoffend.
And while not everything is decided by numbers yet... the way things are measured and scored keeps getting flattened and simplified, and that's a problem, because if you try to simplify everything on the basis of a black box, what you lose is transparency and accuracy, the two things that matter most.
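To make "scaled by numbers" a bit more concrete, here is a minimal sketch in Python of what a risk scale does at its core. The feature names, weights, and the toy_risk_decile function are entirely invented for illustration; this is not COMPAS's actual formula, which Northpointe does not publish. The only point is that a person's answers get collapsed into a single number by weights they never get to see.

```python
# A toy sketch of a numeric "risk scale". The features and weights below are
# made up for illustration; they are NOT COMPAS's real inputs or formula.

def toy_risk_decile(answers: dict) -> int:
    """Collapse a person's questionnaire answers into a single 1-10 score."""
    # Hidden weights: the person being scored never sees these.
    weights = {
        "prior_arrests":    0.35,   # each prior arrest pushes the score up
        "age_at_screening": -0.04,  # being older pulls the score down
        "unemployed":       0.50,   # yes/no answers encoded as 1/0
        "unstable_housing": 0.60,
    }
    raw = sum(weights[k] * float(answers.get(k, 0)) for k in weights)
    # Clamp into a 1-10 decile, the format risk scales usually report.
    return max(1, min(10, int(round(raw)) + 1))

# All the judge or officer sees is the final number, not how it was produced.
print(toy_risk_decile({"prior_arrests": 4, "age_at_screening": 22,
                       "unemployed": 1, "unstable_housing": 1}))  # prints 3
```

That opacity is exactly the black-box problem: if the weights are hidden, nobody outside the vendor can check whether the number is fair or even accurate.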
Here are some key points for COMPAS as summarized by ChatGPT (there was too much context for me to summarize right away, hence the help):
COMPAS was praised for offering an objective, data-driven way to predict recidivism. It was designed to assess the risk of offenders re-offending, aiding judges, parole boards, and law enforcement in making more informed decisions about sentencing, parole, and rehabilitation. In theory, this promised a more consistent and unbiased approach than relying solely on human judgment.
The tool provided a quick, standardized assessment across various cases, potentially reducing judicial workload and saving time in overburdened court systems. It allowed for streamlined decision-making in complex criminal cases, offering quantitative risk scores based on numerous factors.
COMPAS was initially heralded for its perceived objectivity—the idea being that algorithms, unlike humans, would not be swayed by emotions, personal biases, or inconsistent reasoning. It was marketed as a way to remove subjective biases from decision-making and promote fairness.
Major consequences as a result of it:
- Racial bias and injustice
- Lack of transparency ("black box" nature)