From a young Elvis crooning beside Simon Cowell on America’s Got Talent to TikTok’s mesmerizing and humorous deepfake Tom Cruise (labeled as a parody account), the tech firm Metaphysic specializes in creating synthetic, AI-generated content. As Elon Musk and more than 1,000 other tech leaders and researchers called for a moratorium on AI development through the “Future of Life” letter, Metaphysic co-founder Martin Adams joined Round to discuss the ethical implications of advancements at the razor’s edge of AI. Here are some key takeaways:
“The rate of agility of technologists is faster than regulators,” Adams said, acknowledging the perils of the rapidly evolving AI space. A clear example is the explosive growth of ChatGPT over the past six months. Because innovation will always outpace regulation, Adams believes tech firms should engage policymakers to help develop a reasonable and informed legal framework for AI. The Future of Life Institute’s (FLI) open letter will most likely not lead to a pause on AI research, but that doesn’t mean its impact will be trivial. Companies need goodwill to be sustainable, and by sparking a more robust conversation around ethics and AI, the FLI letter will help push firms to adopt a balanced and ethical approach to development.
Copyright laws were not written for a world where people can use technology to create fully convincing recreations of other people. Adams believes this has to change. To encourage firms to pursue a business model that prioritizes consent, copyright laws should be updated to enhance and expand identity protections online. Doing this will create a natural brake on development because synthetic content will only proliferate as fast as people can understand and consent to it. New copyright laws around online identity could also lay a foundation for other safeguards, such as authentication using biometric data or blockchain technology.
In a world already experiencing the corrosive effects of misinformation, software that allows people to swap faces, voices and other characteristics poses a serious threat to democracies worldwide. As deepfake technology advances, it will become increasingly important to set parameters for how synthetic content is labeled and regulated. The event Q&A with Adams generated many open questions about this: As the internet mediates more and more of our experience, how can we prepare for and safeguard against identity theft and viral misinformation? How will we authenticate synthetic content, and what technologies and regulations could help us distinguish synthetic content from real? Simple labeling would be a start. Adams also believes the use of biometric data and emerging blockchain technology will play a role in identity authentication. In the meantime, companies developing synthetic media can avoid contributing to the global misinformation problem by setting clear limits on who can access their technology. For example, Metaphysic has a firm policy against doing work in the political sphere and labels all of its AI-generated content.