Artificial Intelligence: A Tool in the Search for Truth

President Gordon Haist opened the meeting and presented the names of three new Ethics Board members: Peter McAllister, Brian Julius, and John Gilbert. He also announced that the November 5th presentation would be on sustainable development, with the mayors of Hilton Head and Hardeeville as presenters.

Moderator: Paul Weismantel

Download Paul’s Presentation (PDF): AI - A Tool in the Search for Truth

Does living ethically require us to search out truth in the forming of our beliefs?

Responses:

  • Beliefs come from faith and/or facts.

  • Facts change. They depend on current knowledge.

  • Should AI be used to find facts and truths? The challenge is in how to gather facts.

Finding the news through the noise. 

It is hard to get the truth from mainstream news outlets because they present opinions that shape the facts.

When USA Today started, it knocked out other news sources, leaving no balance.

Compare MSNBC and Fox News, which are worlds apart from each other. Now we have no Walter Cronkite. How does one know truth? We need to sift through and sort out all the opinions.

Paul showed us the titles of his many daily reading sources. He thinks USA Facts is dedicated to facts, not a particular opinion.

Paul also watches selected podcasts that oppose each other.

He then sorts all of it out to get to truth.

Paul showed a chart of data sources and services on the web.

Audience: A member asked if news reports need three sources and was told not necessarily, given the varying standards in editorial practices.

Is a government site mostly factual or is it politicized information?  One must sort it out.

We can’t believe all the information on AI. Often it is best to follow your gut.

Paul uses a certain AI to check validity. He trusts the AI built into DuckDuckGo more than most because it draws on four different AI sources to develop answers and cites sources in its responses.

An audience member commented that vertical thinkers see things differently than lateral thinkers and that educators teach vertically. Vertical thinking is more analytic and seeks a single answer. Lateral thinking encourages creativity and exploration that can yield multiple choices.

Artificial Intelligence is not your friend, but it is purposely trained to seem like one.

In 2010 social media became prevalent, where both truths and falsehoods could be found.

Parents of a child who died by suicide sued OpenAI, which they believe led their son to think he had an AI friend. The chatbot led the son to believe it was his only friend.

AI services are now starting to show users ads based on the questions they have asked. The ads are based on our engagement, not unlike how Amazon suggests products based on your past views and purchases.

A cuddly AI toy can record a child’s conversations to learn the child’s personality.

Artificial Intelligence can be embedded in Search Engines.

Paul prefers the DuckDuckGo search engine because it is more private and doesn’t share your identity. DuckDuckGo will search the web for an answer to your question, but one must think hard about how to ask: be very specific, and ask the full question.

An audience member noted that the creator of the search engine is making money off the searches.

Final Ethical Question: Does progress in the 21st century require the sophisticated resources of AI to live ethically?

Audience:

Before computers most lived with ethical standards.

Just get friends you trust.

Go to sophisticated resources.

How do we differentiate truth from an AI creation? Do we have the ability to pass a law requiring disclosure that something is not real, such as an AI-created voice of a real person?

Moderator Paul gave examples of videos that appear to be real. There were two totally different versions of the same subject, but both appeared to be real.

Audience member: Other countries are legislating AI. The EU has a set of AI requirements.

Paul: Yes, but the US administration has withdrawn guidelines and left the space in the same condition as social media.

Paul asks: Is pretending that what I believe is true living ethically?

Audience member: We all need certainty now.

Paul: AI systems rely on initial and ongoing training. 

Audience member: What’s to prevent right wing/left wing AI from misinformation?

Paul answered that trainers see results and will reject what is not desired, but what is trained depends on the goals of the particular company offering the AI service.

To be ethical, people must use AI only to answer specific questions and avoid open-ended conversations.

 
