Ethical Aspects of Artificial Intelligence

Date: October 6, 2021
Venue: USCB Hilton Head and Zoom


Neil Funnell, an Ethics Society board member, introduced Paul Weismantel, a digital electronics and social media expert, and raised a few questions for the audience:

  • How many of you know what artificial intelligence (AI) is? 

  • How many of you use it? 

  • What ethical issues emerge with the displacement of people by machines, as when Morgan Stanley laid off 200 of 600 employees? 

  • Who is liable when AI is wrong?  

  • Who benefits from technology?  

  • How should benefits from technology be distributed?


Paul began by saying that in the month-long preparation for his talk, he had found 90 articles on ethical issues emanating from technology. He compressed the sprawling subject and provided varied information before asking the audience to consider major critical ethical questions.

By simulating human intelligence, he said, the computer is imitating human behavior, and we are obligated to realize that two things in life are infinite: "the universe and human stupidity." Four components are involved when AI makes decisions: 

  • Perception based on data collected;  

  • Comprehension based on storage of knowledge and understanding;  

  • Action based on the comprehension processed and analyzed;  

  • Learning based on experience. 

Machines, he said, cannot really learn as we think of the word; instead, they process data and produce output.
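
To make the four components concrete, here is a minimal sketch in Python (not anything Paul presented; the spam-filter scenario and every name in it are hypothetical) of a perceive-comprehend-act-learn loop in which the machine merely processes data and produces output:

    # Illustrative sketch of the perception/comprehension/action/learning
    # cycle described above. Hypothetical example; not production AI code.
    knowledge = {"spam_words": {"winner", "free", "prize"}}  # stored understanding

    def perceive(message: str) -> list[str]:
        """Perception: collect raw data (here, the words of a message)."""
        return message.lower().split()

    def comprehend(words: list[str]) -> float:
        """Comprehension: interpret the data against stored knowledge."""
        hits = sum(1 for w in words if w in knowledge["spam_words"])
        return hits / max(len(words), 1)  # fraction of suspicious words

    def act(score: float) -> str:
        """Action: decide based on the processed analysis."""
        return "flag" if score > 0.2 else "allow"

    def learn(words: list[str], was_spam: bool) -> None:
        """'Learning': adjust stored knowledge from feedback - still
        just data processing, not understanding."""
        if was_spam:
            knowledge["spam_words"].update(words)

    words = perceive("You are a winner of a free prize")
    decision = act(comprehend(words))
    print(decision)  # -> "flag"
    learn(words, was_spam=True)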

Medical professionals, for example, depend on MRI equipment to compare many images before offering a diagnosis. Tesla, in its quest for a practical self-driving car, is using 24 cameras to feed data into neural engines.

Ethical and practical challenges arise when AI is used for investigation into crimes, military threats, missing children, and potential terrorism, and for resume evaluation and AI-conducted interviews.  

As examples: AI cannot, he pointed out, "reframe a problem," think differently, or judge a result independently. Machines do not know whether a "glitch" is major or minor. As valuable as AI can be, human judgment continues to be required.

Facebook was off-line for 6.5 hours recently, allegedly because of a "glitch." Paul said that the "glitch" had to have been caused by human error and emphasized that it is humans who must be held accountable. Currently moving into five new countries every year, Facebook builds its addictive appeal by using algorithms to define the content presented to users on the basis of material to which they have already responded. Whether that content is accurate or not, destructive or not, is immaterial. 
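
To illustrate that mechanism (a minimal sketch, not Facebook's actual algorithm; the posts, topics, and names here are all hypothetical), such a feed might rank content purely by overlap with what a user has already responded to, with no term for accuracy or harm:

    # Engagement-based ranking sketch: items similar to past engagement rise
    # to the top; whether content is accurate or destructive never enters.
    posts = [
        {"id": 1, "topics": {"politics", "outrage"}},
        {"id": 2, "topics": {"gardening"}},
        {"id": 3, "topics": {"politics", "conspiracy"}},
    ]

    # Topics this user has previously clicked on or reacted to.
    engaged_topics = {"politics", "outrage"}

    def score(post: dict) -> int:
        """Rank solely by overlap with past engagement."""
        return len(post["topics"] & engaged_topics)

    feed = sorted(posts, key=score, reverse=True)
    print([p["id"] for p in feed])  # -> [1, 3, 2]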

College students are turning in research papers written by machines, Paul said, and the challenges to professors continue to grow. The drive to gather ever-increasing amounts of data continues, but volume does not guarantee quality.

So Paul asked: what regulation exists, and what regulation is needed?

From the audience:  

How does China shut down Internet use when Chinese leaders don't like the content their people are getting? 
Answer: The Chinese government owns and controls the systems there, but much of the population knows how to use a VPN (virtual private network) to circumvent government action.

How can social media be legally and ethically regulated? 
Answer: In the United States, the Federal Communications Commission (FCC) technically has the authority to regulate the airwaves, and the Securities and Exchange Commission (SEC) has vast authority to regulate businesses. Unfortunately, the FCC has for many years not been willing to regulate in the common good, and the SEC has not figured out how to handle the ethical issues AI raises.

If there is going to be regulation, or censorship, how can it be contained? 
Answer: It depends on how the regulations are written and how judgment of the public good plays into them.

How would it work out to sue Facebook for harm it has caused?
Answer: A lawsuit for harm caused by words puts a heavy burden on the plaintiff.

The second question Paul raised was "Should all use require transparency?"
Answer: The problem seemed to be who would do the requiring and how it would be enforced.

An audience member asked what should happen when workers are laid off because of technology.  
Answers: Several comments came from the audience:  
The loss of human capital sometimes means the loss of a lot of smarts inside a business operation. The shareholders and customers may benefit, while the displaced employees may suffer. They may also be retrained for other work, and some believe it should be the obligation of government and business to help them.

Someone commented that documentary maker Ken Burns is troubled by AI. 
Answer: Paul responded that his worries are justified but that Ken Burns himself will not be replaced by machines. TV news anchors, Paul said, could be replaced by AI because they simply read content; gathering news and commenting on it, however, requires human judgment.

Gordon Haist, Director and Board Chair, raised the question of the interaction between business ethics and AI ethics, asking: How can a CEO run a business ethically, given the almost mandatory use of AI? He also asked how long it will take for AI to develop patterns of decision-making based on morals. 

Special thanks to Paul Weismantel for his excellent technical explanation of AI, Neil Funnell for moderating and asking thought-provoking questions, Betsy Doughtie for Zooming our meeting, Andrea Sisino for USCB technical support, and Fran Bollin for her always magnificent summary.  

We look forward to our next presentation: Ethics of Gene Modification, presented by Dr. Collin Moseley on November 3, 2021, at USCB.
