Deep Learning With The Wolf

đŸș The Wolf Reads AI — Day 26: “The First Law of Complexodynamics”

Kolmogorov meets coffee as Scott Aaronson proposes a law for the rise and fall of complexity over time—just before entropy flattens it all.

📜 Post: The First Law of Complexodynamics (2011, Shtetl-Optimized)

✍ Author: Scott Aaronson

đŸ›ïž Institution: MIT

📆 Date: August 2011


What This Piece Is About

This isn’t a journal article. It’s a blog post. But make no mistake—this post has legs.

In it, theoretical computer scientist Scott Aaronson explores a question posed by physicist Sean Carroll:

Why does complexity in physical systems rise, peak, and then fall—unlike entropy, which just rises forever?

To answer this, Aaronson proposes a tongue-in-cheek but deeply thoughtful “First Law of Complexodynamics”:

Complexity peaks at intermediate times.


So
 What Does That Mean?

Think of a cup of coffee. When you first pour in the cream, coffee and cream sit in two neat layers: simple. Stir a little and you get swirls and marbling: beautiful, and complex. Keep stirring, and all you’ve got is a uniform beige liquid. Low complexity. High entropy.

Aaronson argues that many systems follow this same arc:

  • Low complexity at the start (everything orderly)

  • High complexity in the middle (patterns, structure, richness)

  • Low complexity again at the end (uniform, disordered mush)


How He Explains It

Using Kolmogorov complexity—the length of the shortest computer program that can describe a string—Aaronson introduces the concept of:

🌀 Complextropy

A proposed measure that captures how structured but unpredictable a system is. Not too ordered, not too random—just complicated.

He suggests this measure should peak midway through a system’s evolution, and taper off as entropy dominates.

He even proposes using gzip compression as a proxy to measure it in real-world data. (Yes, you might someday study physics with zip files.)
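To make that gzip idea concrete, here’s a minimal toy sketch in Python. It’s our illustration, not Aaronson’s actual construction: a small “cream poured into coffee” grid gets mixed by random swaps, the gzip-compressed size of the raw grid stands in for entropy, and the compressed size of a blurred, coarse-grained copy stands in for complextropy. Grid size, block size, quantization levels, and the stirring rule are all arbitrary demo choices.

```python
import gzip
import numpy as np

# Toy "cream poured into coffee": a 2D grid where 1 = cream, 0 = coffee.
# Cream starts as a neat layer on top and is mixed by random local swaps.
# All sizes and parameters here are arbitrary demo choices.

rng = np.random.default_rng(seed=0)
N = 128
grid = np.zeros((N, N), dtype=np.uint8)
grid[: N // 2, :] = 1  # cream layer on top, coffee below

def stir(grid):
    """One mixing sweep: pick a row parity, then swap each vertically
    adjacent pair of cells with probability 1/2 (conserves the cream)."""
    start = int(rng.integers(0, 2))
    top, bot = grid[start:-1:2, :], grid[start + 1::2, :]
    swap = rng.random(top.shape) < 0.5
    top_new = np.where(swap, bot, top)
    bot_new = np.where(swap, top, bot)
    grid[start:-1:2, :], grid[start + 1::2, :] = top_new, bot_new
    return grid

def gzip_size(arr):
    """Compressed length in bytes: a crude, computable stand-in for
    Kolmogorov complexity, and hence an entropy-like proxy."""
    return len(gzip.compress(arr.tobytes()))

def coarse_grain(grid, block=8):
    """Average over block x block patches and quantize to a few levels.
    Compressing this blurred picture is the stand-in for 'complextropy':
    small when the blurred view is simple (neat layers, or all beige),
    larger only when there is structure at the coarse scale."""
    k = N // block
    small = grid.reshape(k, block, k, block).mean(axis=(1, 3))
    return np.round(small * 4).astype(np.uint8)

sweeps_per_checkpoint = 2000
for i in range(11):
    entropy_proxy = gzip_size(grid)                     # tends to keep climbing
    complextropy_proxy = gzip_size(coarse_grain(grid))  # tends to rise, then fall
    print(f"sweeps {i * sweeps_per_checkpoint:6d}  "
          f"entropy~{entropy_proxy:5d} B  complextropy~{complextropy_proxy:4d} B")
    for _ in range(sweeps_per_checkpoint):
        grid = stir(grid)
```

Run it and the entropy proxy climbs steadily as the grid mixes, while the coarse-grained proxy tends to bump upward in the messy middle and settle back down once everything turns beige, echoing the arc the post describes.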


Why It Still Matters

  • The post helped popularize the idea that complexity is not monotonic.

  • It introduced “complextropy” into the broader conversation about physical and computational systems.

  • It planted the seed for exploring how to measure complexity meaningfully over time—which still challenges ML, neuroscience, and physics.


Relevance to AI

While Aaronson’s post doesn’t mention AI, his framing is eerily relevant to:

  • LLMs: Do they peak in “interestingness” before converging to generic outputs?

  • Training curves: When do models develop structure vs. noise?

  • Simulation: Can we measure complexity in evolving environments or agent behavior?


Memorable Quote

“Entropy increases monotonically, while complexity or interestingness first increases, then hits a maximum, then decreases.”

Or, more poetically:

“Complexity lives in the messy middle.”


đŸ€– About the Podcast

This episode's audio summary was generated using Google NotebookLM, based on Scott Aaronson’s original 2011 blog post: The First Law of Complexodynamics.

We reviewed the transcript for accuracy, and while the tone is casual and conversational, the core ideas—Kolmogorov complexity, sophistication, and the complextropy conjecture—are faithfully and thoughtfully presented.

NotebookLM won’t stir your coffee, but in this case, it did a fine job of explaining why your swirls looked smarter than they should have.


Read the original post here.


#ScottAaronson #Complexodynamics #Complextropy #KolmogorovComplexity #SeanCarroll #WolfReadsAI #EntropyVsComplexity #ComputationalPhysics #AIphilosophy #ShtetlOptimized
