How AI can be used to make life and death decisions


By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how it was designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical professionals that their algorithm was biased against older people like him. The algorithm had been designed so that donated kidneys would add the maximum possible number of years of life, an approach that favors younger, wealthier, and white patients, Grawe and other patients argued.

Such bias is common in algorithms. What is less common is for the designers of those algorithms to admit that there is a problem. After years of consulting with ordinary people like Grawe, the designers found a less biased way to maximize the number of years saved: among other things, the algorithm now considers overall health in addition to age. One important change is that donors, who are often people who died young, are no longer matched only with recipients in the same age bracket. Some of those kidneys can now go to older people, as long as they are otherwise healthy. Like the Scribner Committee, the algorithm still doesn’t make decisions that everyone agrees with. But the process by which it was developed is hard to fault.
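The article gives only a high-level description of the change, so here is a minimal sketch in Python of the idea as described: replacing a hard age-bracket matching rule with a benefit score that considers overall health as well as age. Everything in it (the class fields, the 90-year horizon, the scoring weights) is a hypothetical illustration, not the actual US kidney allocation formula.

```python
# Toy illustration of the design change described above; none of these
# numbers or fields come from the real allocation system.
from dataclasses import dataclass

@dataclass
class Donor:
    age: int
    kidney_quality: float   # hypothetical 0-1 estimate of graft longevity

@dataclass
class Candidate:
    age: int
    health_score: float     # hypothetical 0-1 summary of overall health

def old_style_match(donor: Donor, candidate: Candidate) -> bool:
    # Old approach (simplified): a donor kidney is offered only to
    # recipients in roughly the same age bracket.
    return abs(donor.age - candidate.age) <= 15

def expected_benefit_years(donor: Donor, candidate: Candidate) -> float:
    # Revised approach (simplified): score the match by expected years
    # of benefit, using overall health as well as age, so a healthy
    # older candidate can still score well for a long-lasting kidney.
    remaining_life = max(0.0, (90 - candidate.age) * candidate.health_score)
    graft_years = 30 * donor.kidney_quality
    return min(remaining_life, graft_years)

# A healthy 68-year-old is no longer ruled out by a young donor's age
# bracket; the pairing is scored on expected benefit instead.
donor = Donor(age=24, kidney_quality=0.9)
older = Candidate(age=68, health_score=0.85)
print(old_style_match(donor, older))         # False: age bracket excludes them
print(expected_benefit_years(donor, older))  # ~18.7 expected years of benefit
```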

“I didn’t want to sit there and give an injection. If you want it, you press the button.”

Philip Nitschke

Philip Nitschke is also asking hard questions.

A former doctor who burned his medical license after a years-long legal battle with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when Australia's Northern Territory brought in a law legalizing euthanasia, and March 1997, when the Australian federal government overturned it, Nitschke helped four of his patients end their lives.

The first was Bob Dent, a 66-year-old carpenter who had suffered from prostate cancer for five years. He explained his decision in an open letter: “If I were to keep a pet animal in the same condition I am in, I would be prosecuted.”

Nitschke wanted to support his patients’ decisions. Even so, he was uncomfortable with the role they were asking him to play. So he built a machine to take his place. “I didn’t want to sit there and give an injection,” he said. “If you want it, you press the button.”

The machine wasn’t much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which was later acquired by the Science Museum in London. Nitschke now believes the next step will be an algorithm that can carry out a psychiatric assessment.

But those hopes may well be dashed. Designing a program that can assess a person’s mental health is an unsolved problem, and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. “You can get a dozen different answers from a dozen different psychiatrists,” he says. In other words, there is no common ground on which an algorithm could even be built.


