We have seen artificial intelligence at work in the operating room, records section, consulting room, laboratory, and diagnostics center. In medicine, AI can outperform humans when it comes to arriving at highly researched and accurate results. Emerging technology is completely redefining the industry. The days of crude tools and giant machines have given way to phone applications and the adoption of wearable technology that can provide amazing user experiences while easing physician burnout.
As AI in medicine becomes more and more widely accepted, we must consider some of the potential dangers that come along with allowing these machines to take the lead while we all follow. Remember that AI is still in its formative years. Information gathered from recent happenings shows that we should keep that in mind at all times and that adequate care must be taken to understand the working principles behind these systems. Along with that, we ought to be aware of the threats and risks they may bring to the table.
The dangers of living with a machine
No matter how human-like it may appear, a robot or AI system remains an invention prone to technical breakdown. These systems are not as perfect as we think they are. Artificial intelligence should not be viewed as the end of human error; we should expect it to have flaws of its own.
Within these flaws lie the threats.
Harm inflicted by AI can be categorized as either material or immaterial. The former encompasses loss of life, damage to property, and exposure to further health and safety risks. The latter concerns intangible harms such as limits on freedom of expression, loss of privacy, and discrimination. As the risks associated with the adoption of AI grow, so does interest in creating and instituting a regulatory framework that can minimize them.
To better understand how safe, or how dangerous, living with machines can be, here are some of the most hotly debated risks:
a. The disruption of fundamental human rights
This is a major concern. As medicine deals with a glut of sensitive data, permission should always be gained from patients before it is accessed. Unfortunately, that is not always the case when it comes to AI-based systems. Also, discrimination can arise — the type of training data used can introduce several biases in the results provided.
For example, Irwin et al. (2009) built a natural language processing system designed to help dentists record charts with voice commands. Handling variations in language and intonation proved to be one of its most challenging flaws. The researchers found that their NLP model performed better with American English than with other languages, a direct result of the training data. Because such a system works best with intonations and accents similar to those in its training data, it may reliably recognize voice commands only from speakers of certain races.
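To see how skewed training data can bake in this kind of bias, here is a minimal, hypothetical sketch (not the Irwin et al. system): a toy nearest-centroid "voice command" classifier trained on acoustic features from only one accent group. The feature vectors and group names are invented for illustration.

```python
# Toy sketch of training-data bias: a nearest-centroid classifier for
# two hypothetical voice commands, trained on accent group A only.

def centroid(vectors):
    # Mean of a list of equal-length feature vectors.
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(x, centroids):
    # Assign x to the label whose centroid is nearest (squared distance).
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Training data: invented "acoustic embeddings" from accent group A only.
train = {
    "open":  [(1.0, 0.0), (0.9, 0.1)],
    "close": [(0.0, 1.0), (0.1, 0.9)],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

# Group A test samples resemble the training data; group B pronounces
# "open" differently, so its embedding lands nearer the "close" centroid.
group_a = [((0.95, 0.05), "open"), ((0.05, 0.95), "close")]
group_b = [((0.35, 0.65), "open"), ((0.10, 0.90), "close")]

def accuracy(samples):
    return sum(classify(x, centroids) == y for x, y in samples) / len(samples)

print(f"accuracy for group A: {accuracy(group_a):.2f}")  # 1.00
print(f"accuracy for group B: {accuracy(group_b):.2f}")  # 0.50
```

The classifier is not "racist" by design; it simply never saw group B's pronunciation during training, so its errors fall disproportionately on that group.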
These issues threaten basic privacy protection and nondiscrimination rights, which is a direct contradiction of the values upon which the EU and other unions and countries around the world are founded.
b. Machine-based decisions are accurate but may not always be optimal
It is unclear how much trust we should place in machines when they must make life-or-death decisions. Often, getting them to “show their work” simply isn't possible, and for decisions of such gravity, that is troubling. Yes, machines can be decisive. But what they ultimately decide depends on how well they are built and then trained.
For instance, a self-driving car might choose to obey a traffic rule over the life of a pedestrian. A robo-doc might prescribe the wrong medical intervention because it is easier to carry out. The list could go on.
You think I’m exaggerating?
Well, in 2016 Microsoft released Tay, an AI chatbot designed to interact with others online like a teenage girl. In less than 24 hours, it was spewing racist rhetoric on Twitter.
c. Increased monitoring of individuals
What apps do you have on your phone? Even the basic use of AI is increasing the tracking and recording of our daily activities. While this may serve to extract useful data for research purposes, certain machines may be doing so without consent. The more we rely on automated systems to take care of our needs, the more we will lose our grasp on personal data.
d. Safety risks and liability
Error begets risk. As illustrated above, technology is not immune. For instance, a flaw in image recognition technology could lead to a wrong diagnosis or possibly even an operation on an incorrect body part.
To learn more about liability and accountability, check out our previous post here.
Medicine requires a high level of proactive thinking, and physicians must always be on their toes. While they can use machines to gain insight into a function of interest, they should never leave a robot entirely to its own devices.
Artificial intelligence presents a vast array of benefits and potential uses in medicine. However, as we move forward, we must account for their flaws and use our resourcefulness to combat them.
- European Commission (Feb 2020), Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics
Check out our other posts:
- Wearable Devices: The Key To Unlock Your Productivity
- AI: The Key to Fertility Treatment?
- “Paging Doctor Internet”
- 3 Powerful Discoveries Inspired by COVID-19
- Google’s AI System Bests Radiologists at Detecting Breast Cancer