What happens to humans when computers can make art or write books? Should we be worried? Greg Hochmuth explains the recent explosion in AI creativity.
Lately, we’ve all seen experiments in AI creativity crowding our feeds — from highly stylized, AI-enhanced selfies to convincing college-level essays written by ChatGPT. But what does this technology mean for the future of human work?
While it’s easy to worry about machines replacing us, Greg Hochmuth is optimistic that generative AI will make our jobs easier, faster, and more fun — not obsolete. He joined Round to provide our members with a primer on this technology and its potential applications. Here are four key takeaways.
How can new AI models understand what looks good to the human eye or interpret the tone of a particular text? Hochmuth explains that combining new machine learning algorithms with massive data sets has allowed these technologies to explode. That’s why the most powerful AI models today work with text, code, and images — the data sets most readily available to mine in large quantities. To create GPT-3, a large language model capable of understanding and finding patterns in human language, developers at OpenAI trained the program on 45TB of text, including books, Wikipedia, Reddit, and much of the English-language internet. Stable Diffusion trained on 5 billion (!) internet-based images to create an AI model that ‘sees’ and processes images much like a human eye. In the future, Hochmuth anticipates the rise of AI models built on growing data sets of video, music, and other media.
When an app like Lensa AI turns a selfie into something that looks hand-drawn, it’s drawing on its massive database of reference images to create original artwork. But you can also use AI models to synthesize existing work. For example, AI can summarize legal briefs and find bugs in Python. If you feed an entire season of The White Lotus to an AI, the program will search the text for connections and make plot predictions to write the next episode (granted, with varying results). “These models have incredibly canny, incredibly powerful insight into what they were trained on,” says Hochmuth. Take GitHub Copilot, which Hochmuth calls “auto-complete” on steroids for writing code. “I use it every day now. It’s eerie — it feels like it’s reading your mind.” He predicts we’ll soon see similar AI assistant technology in other fields, from customer service support to graphic design.
Hochmuth calls the AI models available today “relatively low level and not that smart, compared to where we’re heading.” In the future, he anticipates we’ll train models on higher quality data, including private, customized data, to supercharge their processing power. We’ll also develop new machine-learning algorithms that allow us to analyze quantitative data from scientific or mathematical research at the same level we can process written text. Of course, these advancements have consequences — imagine AI-enhanced spam or phishing scams. And as generative AI models become more sophisticated, it will be harder to distinguish human from computer-made work. But Hochmuth doesn’t fear this fate because “as soon as something becomes ubiquitous and easy, we as humans find that boring.” He thinks AI will merely raise the bar and change our interests, pushing us all to do better work.