Ethics Issues Linked to Artificial Intelligence, Part 1

Date: October 4, 2023
Time: 2:00-3:30 PM
Place: USCB Hilton Head - One Sand Shark Road
Moderators: Neil Funnell & Paul Weismantel

AI is the “next big thing” and will affect all of our lives in some way, Neil Funnell said.

There have been calls to slow down the development and use of AI, as well as recommendations for regulating it, but the movement is in fast forward, said Paul Weismantel.

Neil then raised the question of ethical issues in the use of AI for medical diagnosis, suggesting that a robot might be 95% accurate in reading a mammogram and that it might be used more frequently for low-income women. “Is that fair?” he asked.

One audience member pointed out that the robot and the radiologist may be equally accurate. Another pointed out that the robot may be more accurate than the doctor. Another said that the income of the patient should in no way determine how medical professionals get the data they need for medical care.

Paul suggested that transparency is essential: doctors should disclose that they are using AI in diagnosing medical conditions. AI chatbots, he said, are being “trained” to measure a patient’s feelings.

However, said someone from the audience, in psychiatry, the doctor’s personal interaction with the patient is crucial to diagnosis and therapy.

“Photo recognition by AI has improved over time. Should police use it? Should we limit use of AI by police? Can AI help police with shootings in classrooms, for example?” Paul asked. “There are varied kinds of police responses to live situations. Suppose an AI robot was sent into the shooting area. Could it make a valid decision about the best course of action?”

From the audience: “If the bad guy is certain to have on a ski mask, maybe the robot could make a good judgment. If not, who is the bad guy and who is the good guy? Sometimes the cops can’t tell. Can the robot? Should we give the robot a gun?”

Neil said that AI industry leaders recently told the president that AI needs regulating, but Congress is reluctant to do it. Paul said that, in the early days of the Internet, Congress was afraid to legislate how and when it could be used.

Several audience members called for restraint from some source when AI leads to bad consequences. Somehow, they said, we need ways to make intelligent decisions on when and how to use AI.

Paul asked the audience to think about basic cruise control on cars and then to consider “generative” versions that are leading to driverless vehicles. Cruise control has been taught to “think,” and that process is happening in more and more fields, he said. Those fields include content creation, design and art, software development, language translation (a great application), health care, gaming and finance. There is great promise, he added, in the use of AI in health care, education, agriculture and business activity. There is no program, however, to remove emotion from the stock market.

Disadvantages that AI brings to society include job cuts, overall costs, atrophy of human skills, bias in training, lack of creativity, lack of transparency, and the need for huge servers that consume huge amounts of electricity.

An audience member asked if AI offers a step up from Google search.
Paul said he uses Claude; there are also OpenAI and Google Bard. He suggested using these tools in addition to search, since search presents options as to sources, whereas these AI tools do not typically reveal where a response came from.

The question of Hyena (a Stanford University research project) was raised; it represents a different approach to AI that could save energy and increase performance. Paul pointed out that it may have a place in some use cases but relies on a “shortcut” in filtering input that is unproven in delivering topical accuracy.
