
Martin Adams Explores the Ethics of Synthetic Content

Round Editorial

From a young Elvis crooning beside Simon Cowell on America’s Got Talent to TikTok’s mesmerizing and humorous deepfake Tom Cruise (labeled as a parody account), the tech firm Metaphysic specializes in creating hyper-realistic, AI-generated synthetic content. As Elon Musk and over 1,000 other tech leaders and researchers called for a moratorium on AI development through the “Future of Life” letter, Metaphysic co-founder Martin Adams joined Round to discuss the ethical implications of advancements at the razor’s edge of AI tech. Here are some key takeaways:

1. The Future of Life letter will positively influence the conversation on artificial intelligence, whether or not it leads to an AI pause

“The rate of agility of technologists is faster than regulators,” Adams said, acknowledging the perils in the rapidly evolving AI space. A clear example is the explosion of ChatGPT development over the past six months. Because innovation will always outpace regulation, Adams believes tech firms should engage policymakers to help develop a reasonable and informed legal framework for AI. The Future of Life Institute’s (FLI) open letter most likely will not lead to a pause on AI research, but that doesn’t mean its impact will be trivial. Companies need goodwill to be sustainable. By sparking a more robust conversation around ethics and AI, the FLI letter will help push firms to adopt a balanced and ethical approach to development.

2. Ethically creating hyper-realistic synthetic content begins with a commitment to consent

Copyright laws were not written for a world where people can use technology to create fully convincing recreations of other people. Adams believes this has to change. To encourage firms to pursue a business model that prioritizes consent, copyright laws should be updated to enhance and expand identity protections online. Doing this will create a natural brake on development because synthetic content will only proliferate as fast as people can understand and consent to it. New copyright laws around online identity could also lay a foundation for other safeguards, such as authentication using biometric data or blockchain technology.

3. High-quality deepfakes are undetectable to the general public, so authentication and labeling will be vital

In a world already experiencing the corrosive effects of misinformation, software that allows people to swap faces, voices and other characteristics poses a serious threat to democracies worldwide. As deepfake technology advances, it will become increasingly important to set parameters for how to label and regulate synthetic content. The event Q&A with Adams generated many open questions about this: As the internet mediates more and more of our experience, how can we prepare for and safeguard against identity theft and viral misinformation? How will we authenticate synthetic content, and what technologies and regulations could help us distinguish synthetic from real content? Simple labeling would be a start. Adams also believes the use of biometric data and emerging blockchain technology will play a role in identity authentication. At present, companies developing synthetic media can also avoid contributing to the global misinformation problem by setting clear limits on who can access their technology. For example, Metaphysic has a firm policy against doing work in the political sphere and labels all AI-generated content.
