Circle of Light welcomes you.

The public conversation surrounding artificial intelligence is out of balance.

The default inflammatory response from progressives seems to be an automatic impulse rooted in a justified distrust of social media companies, a distrust that has been front and center in the zeitgeist for over a decade. In many ways, however, AI development is just humanity doing what it has done for thousands of years: inventing a new technology that abstracts and simplifies systems in ways that change how we interact with the world. Often this is for the better. Thanks to this pattern, we have basically solved abundance for base-level physical needs.[1]

While progressives attempt to halt development, another camp praises AI as the end-all, be-all technology that could save us from ourselves. In some narrow ways they may not be totally wrong, but they aren't safe from criticism either. It's true that we've never before had access to a technology capable of generalizing and interpreting language as effortlessly as today's LLMs. We've clearly barely scratched the surface of their use cases, and a lot of people are understandably excited about this.

But there are risks associated with unbridled optimism that are more theoretical and harder to articulate. Pioneers in AI research and safety such as Eliezer Yudkowsky and Geoffrey Hinton have been warning about this for years, in Yudkowsky's case for decades. The fundamental danger of building an intelligence smarter than us, without being able to understand or verify its underlying goals, is a risk that is extremely hard to measure.

And that is just one of the many risks we need to be talking about. There are so many potentialities that the general population is completely unaware of. Cultural omnicide, for example. Or models inadvertently becoming misaligned because researchers and the anti-AI crowd have polluted the internet with training data describing how horrible and destructive AI could be.

There are a lot of weird, esoteric bright spots in this story, too, like Anthropic's discovery of the Spiritual Bliss Attractor. In any case, there's a lot of change happening very fast, and many people leave the conversation at the surface level, still believing things that aren't true.

While the average person may not be able to contribute meaningfully to serious AI research, some trends I've been observing lead me to believe that professionals, especially in creative industries, need a more neutral, welcoming environment in which to discuss, hypothesize about, and become educated on the changes that AI will bring to the world.

Here is some of what I'm seeing today:

A desire to return to high-trust networks.

As the internet becomes more centered on AI-to-AI interaction, humans are not going to find much of it inviting anymore. This pattern is everywhere if you're looking for it. An AI photo posted by a bot on Facebook with thousands of AI-generated replies. An AI-authored "Am I The Asshole" Reddit thread where the top comments are themselves AI-generated responses to the post.[2]

As evidenced by ChatGPT's 1 billion weekly active users,[3] people clearly want to talk to AI. They just want to do that talking in a sanctioned, clearly marked area. Unintentionally interacting with AI in a space designed for humans, without clear disclosure, seems to offend people if and when they find out.

This growing distrust of large public networks is driving a retreat toward smaller, high-trust networks such as group chats and Discord servers.[4] Companies with chat-based products are going all in on this reality, gamifying their interfaces and layering on cute personalization.

People turning to AI for human needs that have been sidelined by our culture.

Anonymous sources have corroborated that Meta sees OpenAI as a primary competitor. Something curious is going on when a company whose products are entrenched in the lives of billions sees a relatively new company with nothing to do with social media or advertising (yet) as its main competitor. I think this is why...

As we trudge through life with phone addictions, record levels of anxiety and depression, no upward mobility, uncertainty about the future, jobs we hate, long hours that don't pay the bills, and time poverty, we are in the midst of a well-established mental health crisis with no one to turn to. We used to have strong local communities and third spaces to balance us out, but now we unwind by consuming media,[5] often alone.

Enter ChatGPT, the ever-present, infinitely patient, totally understanding therapist/friend/mediator. It's very interesting that social media transformed into the sleazy salesman, while the anthropomorphized superintelligence in your pocket became a friend you can trust with your deepest emotions. I suspect Meta intrinsically understands not only how valuable this data is, but also how sticky the user experience is. You're no longer creating explicit, intentional data points like stamping your birthday and your three favorite movies onto your public Facebook profile. Now you're having intense conversations covering the intimate details of your falling-out with John, or revealing exactly how you deal with stress after your car breaks down.

Talking to ChatGPT has become the safest room on the planet for a lot of people, and the things you say in a safe, private environment are going to be much more revealing and truthful than the things you broadcast to everyone all at once. This is becoming a sacred experience for many, and if there's one constant throughout history, it's that sacred experiences are ripe for monetization and innovation.

AI exposing how little people actually want to engage with their jobs.

Much of the middle- and upper-class job market has become a glorified maze of emails. As the people with these jobs experiment with AI, they begin to ask themselves, "Why would I write this email if the AI can?" The person on the other end asks, "Why would I read this email if the AI can read it and write the response?"

Of course, for now, humans are still in the loop, reviewing the emails and making sure the context still makes sense before hitting send. We're still missing the mass propagation of frameworks that let AI effortlessly consume the full context of your job. But at some point relatively soon, double-checking that AI-written email is just going to slow things down and keep you from getting home to your family or TV.

If the "I love my job" crowd is loving their job so much that they're offloading as much as possible to a tool that can do it for them, then maybe they don't really love their job very much. I suspect much of corporate America is one UBI check away from looking back on this time of their life as a mind-numbing oppressive time suck that stole away their valuable time with friends and family.[6]

Idea people becoming the executors.

Especially in technical and creative communities, people with ideas can do pretty much anything now. I suspect this group accounts for most of the people who are optimistic about the technology.

In the same way that the TikTok algorithm promised you could just be yourself and the audience would be shepherded toward you, LLMs promise that as long as you're curious, you'll be able to execute an idea beginning to end all by yourself, with minimal domain-specific knowledge. For this group, these tools have been bridging the previously effortful gap between imagination and knowledge, letting people run with ideas that would otherwise have required a whole team. With intelligence[7] becoming incredibly cheap, nearly free, we are now democratizing idea execution: a huge threat to the job market, and a huge boon to the individual.

Come on in.

The world is going to look very different two years from now, five years from now, and certainly a decade from now, yet I'm disappointed by the lack of researched, thoughtful conversation happening around me.[8] We need creative people sketching out what the future will look like with this technology. If you are a creative of any type, we would love to hear from you. Our hope is to bring experts from a wide range of domains together to map out potential futures, marrying a technical and a creative understanding of AI. Among other things, we will be organizing talks with experts, publishing guest writers, and planning good-faith debates. Join us here, feel free to leave a comment below (underneath the footnotes), and for any specific inquiries reach out to hey@circle--of--light.com.

-COL


  1. Distribution of this abundance is clearly not yet solved, but the point is that the average person is much better off now than ever before. There are many pedantic ways to argue against this, but as an overall trend, things are looking alright: a 97% decline in lethal violence since the 1300s, a 90% reduction in child mortality since the 1800s, roughly 40 additional years of life expectancy since 1900, a 75% reduction in extreme poverty since 1820, and a rise in literacy of about 75 percentage points since 1820. ↩︎

  2. Researchers from the University of Zurich were recently scrutinized for running an unauthorized experiment on r/changemyview. The whole situation, from the experiment itself to the community backlash once it was publicly disclosed, paints a vivid picture of the world we're entering. ↩︎

  3. This was a number given off the record, and then accidentally mentioned publicly. ↩︎

  4. Looking back, I think of the "finsta" (fake Instagram) trend from roughly a decade ago, where people would create an alternate, private Instagram account that only their closest friends could see. It was an organic reaction to the business-ification of social media. This shift seems to mirror that reaction on a much larger, if more diluted, scale. ↩︎

  5. Now we just call it content, because you can only sell an idea to everyone on the planet if it's as sterile and unassuming as possible. ↩︎

  6. And I'm not saying UBI is going to happen, but I think it's possibly more likely than not. It would probably have to be some sort of AI productivity tax, framed as: "You fired all your workers and now make 10x your previous revenue. 10% of that is going to go to the UBI fund to keep now-unemployable people fed and housed; you can keep the other 90%. Good job." This is a simplified explanation, but the key idea is that if AI increased a company's output by Yx, you could trim a percentage of its profits for a national wellness fund while still letting it keep the majority of the productivity gains (a toy version of this arithmetic is sketched after these footnotes). ↩︎

  7. Or whatever word you want to use to refer to AI's ability to complete cognitively demanding tasks. ↩︎

  8. This post from Andy Masley contains a lot of great ideas for where the conversation is lacking. ↩︎
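
To make footnote 6 concrete, here is a minimal sketch of that simplified productivity-tax arithmetic in Python. The function name, the dollar figures, and the 10x/10% split are illustrative assumptions lifted from the footnote's framing, not a real policy model.

    # Toy arithmetic for the AI productivity tax in footnote 6.
    # All numbers are illustrative assumptions, not a policy proposal.
    def ubi_split(old_profit: float, multiple: float, tax_rate: float):
        """Split post-AI profit between a UBI fund and the company."""
        new_profit = old_profit * multiple  # e.g. 10x output after automation
        to_fund = new_profit * tax_rate     # slice skimmed for the UBI fund
        to_company = new_profit - to_fund   # the company keeps the rest
        return to_fund, to_company

    # The footnote's example: 10x the earnings, 10% to the fund.
    fund, kept = ubi_split(old_profit=1_000_000, multiple=10, tax_rate=0.10)
    print(f"UBI fund: ${fund:,.0f} | company keeps: ${kept:,.0f}")
    # UBI fund: $1,000,000 | company keeps: $9,000,000

The point of the split: even after funding the UBI slice, the company still nets 9x its pre-AI profit, which is the "keep the majority of the gains" property the footnote describes.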