Brainrot

Is slop a dead end or a temporary passage?

With the release of Sora 2 from OpenAI, people have been talking more about whether our collective effort is going toward helpful or unhelpful technology.

In 1999, a much smaller Nvidia released the GeForce 256, which it marketed as the world's first GPU (Graphics Processing Unit). The card was designed specifically to enhance the graphics performance of video games. Today, GPUs are used across most industries in one way or another, thanks to their superior parallel processing capabilities. You may only be familiar with GPUs because they're mentioned in the same breath as generative AI. Here are some other use cases:

  1. Medical Imaging and Healthcare: GPUs aid in the math of computational imagery, enabling greater image quality with less radiation. They also enable rapid genome sequencing and molecular simulations for drug discovery and personalized medicine.
  2. Weather Forecasting and Disaster Modeling: GPUs enable more accurate severe-storm forecasting, modeling of evacuation routes, and simulation of resource staging.
  3. Robotics and Autonomous Vehicles: Real-time perception, sensor fusion, path planning, and control systems in self-driving cars, drones, and industrial robotics use GPUs for computer vision and decision-making.

Note: Self-driving vehicles are already proving to be somewhere between 80% and 91% safer than human driving, and a large majority of the crashes in that remaining 9-20% are caused by humans or by mechanical failure of the car itself (e.g., a wheel falling off). Worldwide, 1.2 million people die in traffic accidents annually. Approximately 40,000 of those deaths are in the United States; spread over twelve months, that's more than 3,300 deaths per month, a greater toll than the roughly 3,000 killed on 9/11, every month, in the United States alone. For a closer look, see the essay linked at the bottom of this post, Please Let The Robots Have This One.

The through line is that the massively parallel architecture of a GPU makes it ideal for simulation, which underpins many of our most advanced life-saving and quality-of-life-enhancing technologies.
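To make that point concrete, here is a minimal, illustrative sketch of the kind of workload GPUs are built for: a toy heat-diffusion simulation in which every grid cell can be updated independently, and therefore in parallel. It assumes CuPy (a NumPy-compatible GPU array library) and a CUDA-capable GPU; the grid size and constants are arbitrary, but the pattern of applying one simple rule to millions of elements at once is the same one behind the imaging, weather, and robotics workloads above.

```python
# Illustrative sketch only: a toy 2D heat-diffusion simulation.
# Assumes CuPy (https://cupy.dev) and a CUDA-capable GPU are available.
import cupy as cp

def diffusion_step(temp, alpha=0.1):
    """One explicit finite-difference step of 2D heat diffusion.

    Each cell's new value depends only on its four neighbors, so every
    cell can be updated independently, which is exactly the kind of work
    a GPU spreads across thousands of cores at once.
    """
    up    = cp.roll(temp,  1, axis=0)   # periodic boundaries, for simplicity
    down  = cp.roll(temp, -1, axis=0)
    left  = cp.roll(temp,  1, axis=1)
    right = cp.roll(temp, -1, axis=1)
    return temp + alpha * (up + down + left + right - 4 * temp)

# A 4096 x 4096 grid: roughly 16.7 million cells updated per step.
grid = cp.random.random((4096, 4096)).astype(cp.float32)
for _ in range(100):
    grid = diffusion_step(grid)

print(float(grid.mean()))  # copy one summary number back to the CPU
```

The same code written against NumPy would run serially-vectorized on a CPU; swapping in CuPy pushes every cell update onto the GPU's many cores at once, which is exactly why simulation workloads migrated to this hardware.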

While the above examples are pretty uncontroversially "good" developments, it's important to emphasize that they were all directly downstream of consumer entertainment. In a very straightforward way, the demand for higher-fidelity video games created a mass market for parallel hardware, which researchers then repurposed to model and simulate the real world. It took time to shake out, but it did shake out, because there are always people looking for solutions to problems, and there are always solutions to find in unexpected places.

This phenomenon of cross-pollinating technology is not unique to GPUs. Many technologies were first invented for purpose x before morphing into something useful for y.

  1. GPS was invented for the military and is now considered a necessity for modern life.
  2. F1 pit stop techniques were used to demonstrably improve the efficacy of ICU patient handoffs in hospital settings.
  3. Xbox 360 Kinect sensors became commonplace in research labs across many industries as an ultra low-cost motion capture device, despite being marketed and sold as a gaming device for families and children.

There always seems to be a high likelihood that a new technology will find its home, or at least an unrelated usefulness, somewhere far away from its initial implementation. With video models such as Sora 2, that "home away from home" is anticipated to be their utility as world models: AI models that can accurately simulate the physical world.

It is up for debate whether video models are even on the right track. They're obviously getting much better at creating realistic video, but it would be presumptuous to assume that this means they will be able to accurately or consistently simulate physics or other aspects of the physical world in the future. The question is hotly contested, and it has a similar atmosphere to the "Is AI intelligent, or is it just parroting intelligent-sounding words?" argument.

To be clear, we are not yet at the point where we have anything close to a stable, robust world model. But when thinking about the future, it's best not to mistake the initial commercial application of a technology for its final use case.

Rate of Change

What will be the current-day version of this screenshot?

I recently overheard a conversation where someone exclaimed that a video wasn't AI-generated because the "hands weren't messed up."

The messed-up-AI-hands meme marked a notable point in time when both AI skeptics and enthusiasts agreed that AI could not reliably generate realistic images of hands. It took only a few months for that claim to become outdated. The skeptics ran (and some continue to run) with it, while those who kept testing new image generation models updated their beliefs.

With the rate of change happening in AI, it is not safe to base your beliefs on current capabilities alone; the observable trajectory of those capabilities matters more. Maybe that ascending line eventually plateaus, but for now it's still moving up and to the right.

A systemic failure (or lack of an attempt) to educate people and update their understanding of AI capability will leave large portions of the population more vulnerable to disinformation, scams, and manipulation, and more likely to fall behind economically. Unfortunately, there are not many robust solutions to these problems yet. All the more reason to take personal initiative and layer skepticism with a healthy amount of curiosity about the rate of technological progress.

For now, we are being presented with an imperfect new technology in a hyper-commercialized form factor and told that it will change the world. There is much to criticize in this approach, and it's fair to assume that OpenAI is jumping the gun with its messaging, considering that most consumers will see the productization of the technology (the Sora app) and automatically adopt the default 21st-century skepticism toward corporations that makes modern life tolerable.

Whether video models will become useful world simulators remains an open question. What we can say with more confidence is that judging an emerging technology by its initial implementation has historically been a mistake. Somewhere right now, an engineer is halfway through building the next thing, and whatever that ends up being, it will be objectively more powerful, more capable, or more useful than what we have today.

For those who want more on the topic, I've linked some recent writing that you might enjoy below:

  1. Please Let The Robots Have This One, an essay on the safety profile of self-driving vehicles
  2. Against Tech Inevitability
  3. Global Call for AI Red Lines, presented at the 80th UN General Assembly
  4. Terence Tao documented his process of using GPT-5 to solve an open MathOverflow question
  5. A piece from Vox to back up the sentiment of AI video being unbridled, useless slop, if that's how you're feeling about it
  6. A great post from a biomedical scientist involved in cancer research that mirrors the thesis of cross-industry tech pollination

If you enjoyed this piece, you can sign up for updates, leave a comment below, or read related essays.

COL