In 1970, a Monty Python sketch lodged the word "spam" in the collective consciousness, and it went on to describe the flood of unwanted emails and pesky messages of the internet age. Now we have "slop."
We’ve lived through a unique time in human history. As keenly observed by one of my professors, Prof. Ranjay Krishna, we had the opportunity to experience a period of our civilization in which any knowledge we consumed carried the implicit assumption that what we were observing reflected universal truth. Video, audio clips, photos: all served as simple, irrefutable proof that events happened, that people held the opinions they voiced, that plain facts were facts.
Early human civilizations could not fully trust the oral stories passed down to them; one only needs to look at urban myths and legends to see this. Similarly, future generations will not be able to place implicit trust in the swaths of videos and images that can be generated with a handful of GPUs. For a brief window in between, though, we could believe and trust the stories we found on the Great Wide Web. Not anymore.
Generating sentences that sell products, or regurgitating canned opinions a few tokens at a time through a large language model, is trivial now, and you only need to look ahead to see the ripples this will send through how content is created and how knowledge is consumed on the internet. Like ever-growing, uncontrollable weeds, such slop is invading our digital town squares, kept in check only by the gardeners of each walled stronghold, with moderation tools and bot-detection algorithms as their weapons of choice.
Maggie Appleton, in her essay "The Expanding Dark Forest and Generative AI", describes a world of dark forests in which humanity has to hide in digital gardens. As language models and diffusion image models permeate and become widely available, the content landscape has shifted dramatically: the cost of producing content keeps shrinking, and more and more of its production is automated.
Take a moment to think: as of right now, what probability would you assign to someone you’ve only ever interacted with on a social platform such as X being truly human?
We can no longer read a message on any forum under the assumption we’ve always carried: that the person on the other side of the keyboard is human. This is a dramatic change in how we use and consume media on the internet, and it will push power and human attention toward the walled gardens, the platforms that control and moderate human interaction online.
This isn’t necessarily a dystopian future. It’s just different. And difference creates opportunity.
Second-order effects will push platforms that were once relaxed about moderation toward far more stringent policies. Curated, selective digital rooms, like invite-only Discord servers and group chats, will grow rapidly, with people waitlisted until they can offer verifiable proof of who they are. New and exciting paradigms of interaction may yet emerge, leveraging such close-knit connections. Moderation tools that are easily pluggable into budding forums and platforms will also see wider adoption.
The key question to ask as a builder is whether the platform’s value proposition centers on knowledge or on connection. Knowledge can be transmitted through model-generated content, but if users crave community and social connection, the current capabilities of models are unlikely to serve that need.
I look forward to the new landscape of products that find the sweet spot: using models to amplify human potential rather than replace it. They’ll create spaces where AI-generated and human-generated content coexist, each enhancing the other. We have to learn how to swim through the generated content, using it to propel ourselves toward more creativity and more authentic connection.
Originally published on Haecceity