Ethics Issues Raised by Artificial Intelligence, Part 2

Date: November 2, 2022
Time: 2:00-3:30 PM
Place: USCB Hilton Head - One Sand Shark Road
Moderators: Neil Funnell & Paul Weismantel

This was the second of a two-part series of discussions about Artificial Intelligence (AI). The program explored more of the many dilemmas created as AI has an ever-growing impact on our society.

Neil Funnell provided three questions to trigger the group’s thinking:

  • How should we use AI to improve traffic management?

  • What does AI say about mandatory drug testing in schools?

  • Can AI give us insight into the effect of banning certain books from school libraries?

Paul Weismantel distributed copies of an AI-generated letter about local traffic problems and potential solutions to them. The letter contained general information without attribution, opinions, and recommendations, with no sources to verify their validity. The question, he said, is whether anyone should sign such a letter and present it for publication.

One member of the audience said he would never sign and submit a letter he had not personally written. Former school principal Hank Noble said he had signed many letters translated from his own English versions into Spanish because he believed communication with Spanish-speaking parents was so important.

One member of the audience suggested that ChatGPT, Claude, and Bard, three AI systems, could show how smart they are by presenting solutions to society’s problems instead of simply describing them.

Paul responded that AI, when asked for solutions, is likely to say, “On the one hand this and on the other hand that.”

An audience member asked the rhetorical question of whether AI makes people dumber or smarter. Another asked how AI distinguishes between fact and fiction, and how it handles misinformation and disinformation.

Paul’s response was that users of AI must bring their own education and judgment to whatever AI offers, and that AI systems depend on their “trainers” to label material “false” when appropriate.

Each AI bot consists of a library, its gathered information, plus its “training,” the software controlling how the information is presented. Our ethical issues, said a member of the audience, revolve around the questions of where AI gets its information, how it processes it, and how we use it.

Pieces of AI are really dangerous, said Paul. “Deep fakes” occur when photos of people are processed to show them doing things they never did. With three seconds of a recorded voice, AI can create a video showing a person saying things he or she never said. “A small amount of people can do a lot of damage,” he said.

AI developers are beginning to “watermark” material to defeat these deceptive processes, he said.

“Is medical information in AI tightly controlled?” someone asked.

“Some of it is and some of it is not,” Paul answered.

Fortunately, he said, some AI is beginning to offer, through a “click,” information on the sources of what it provides; in that case users can judge the reliability of the sources. Notice, he reminded the audience, that Wikipedia offers at the bottom of its articles a list of references for the sources of its information.

Someone then said, “I believe AI needs regulation. It’s scary.” Another asked, “Who is providing guardrails and who is policing them?”

Consensus was that AI is moving far faster than the governments trying to keep up with it. President Biden recently put forth several guidelines for the use of AI in government services, which is as far as he can go by executive order; Congress has gathered information but has not enacted any AI legislation. The European Union is drafting rules for AI. Australia has put in place some regulations in the interest of child protection. Great Britain has established seven principles for the ethical use of AI. The United States generally is depending on “voluntary commitments” and transparency from the AI bots.
