Overview:
As artificial intelligence and machine learning generate deep fakes for marketing and advertising purposes, affected people may turn to lawsuits over false representation and the right of publicity. How might parties, or regulators, use surveys to protect consumers?
What Are Deep Fakes?
Deep fakes are video and audio that appear to show a natural person, but are actually created by artificial intelligence based on human inputs. A recent example of deep fakes is the use of David Beckham’s voice in the Malaria Must Die global campaign. With Beckham’s cooperation, advertisers imitated his voice using computers to create an anti-malaria advertisement in nine different languages. Although Beckham supported the use in this case, the technology may also be used to create clips in which public figures appear to be saying things they did not say, or support initiatives they do not support. Deep fakes have also been used to create music allegedly inspired by the voices of famous musicians, who had nothing to do with the production and did not authorize the use of their voices. What threats do deep fakes pose to consumers and celebrities, and how might consumer surveys be used to help regulators provide appropriate remedies?
Regulating Generative AI Content
The European Union was the first to pass comprehensive AI legislation, which categorizes artificial intelligence and its uses by risk level. In the United States, there is no comprehensive federal legislation regulating AI, although AI's developers, users, operators, and deployers will be subject to existing laws. But how can existing laws regulate new technology?
Two answers come from intellectual property law:
1. The Lanham Act: AI "deep fakes" may be deemed false advertising that is material to consumer purchasing decisions.
2. The right of publicity: Various states have legislation protecting any natural person's right to control their own identity, including likeness, voice, signature, and photograph.
Deep fakes, and their use to sell products or to unfairly influence consumer behavior, may soon be at the center of legal disputes. Consumer surveys can help plaintiffs and defendants make their cases.
Consumer Surveys and AI
Regulatory disputes alleging false advertising or violation of the right of publicity often require evidence of consumer deception or confusion. A false advertising survey could measure whether consumers mistake deep fakes for real content. A likelihood of confusion survey could measure whether consumers believe a celebrity actually made the statement, or gave permission or approval for the fake work. Consumer surveys could also be used to determine whether consumers made purchasing decisions based on the seeming affiliation between the product or service and a celebrity deep fake.
In the case of musicians whose likenesses and voices are used to create AI-generated music, a likelihood of confusion survey could be used to measure consumer confusion as to affiliation and authorship. Similarly, surveys could be used to measure whether consumers believe that a deep fake clip is an authentic video of a public figure, like a politician or a celebrity, and whether their beliefs about that figure changed their voting or purchasing behavior.
As technology continues to advance, and in the absence of comprehensive legislation, surveys offered as evidence in litigation can be a mechanism for measuring the influence of AI. If you require a consumer survey for an AI-related intellectual property matter, contact MMR Strategy Group.
To read more about AI and likelihood of confusion, click here.