jdgator
Senior Member
AI-generated speech and graphic representations of people are so realistic that many people do not realize they are not observing a human. This is problematic for vulnerable groups such as children and senior citizens.
I am of the opinion that companies using AI to communicate with the public should be forced to disclose their use of the AI and how it is specifically being used.
For instance, a video ad featuring an AI-generated actor should be required to disclose, both verbally and in on-screen text before the ad runs, that neither the likeness nor the voice is real.
As another example, an AI chatbot providing telephone-based customer support should be required to disclose at the start of the call that the caller is not speaking with a human.
I think the FTC could easily extend its truth-in-advertising rules to cover AI. Something like this could maintain public trust without impairing commerce. Thoughts?