Paper: The First Law of Complexodynamics (2011, Shtetl-Optimized)
Author: Scott Aaronson
Institution: MIT
Date: August 2011
What This Piece Is About
This isn’t a journal article. It’s a blog post. But make no mistake: this post has legs.
In it, theoretical computer scientist Scott Aaronson explores a question posed by physicist Sean Carroll:
Why does complexity in physical systems rise, peak, and then fall, unlike entropy, which just rises forever?
To answer this, Aaronson proposes a tongue-in-cheek but deeply thoughtful “First Law of Complexodynamics”:
Complexity peaks at intermediate times.
So… What Does That Mean?
Think of a cup of coffee. When you pour in cream, the swirls and marbling at the start are beautiful and complex. But fast-forward a bit, and all you’ve got is a uniform beige liquid. Low complexity. High entropy.
Aaronson argues that many systems follow this same arc:
Low complexity at the start (everything orderly)
High complexity in the middle (patterns, structure, richness)
Low complexity again at the end (uniform, disordered mush)
How He Explains It
Using Kolmogorov complexity (the length of the shortest computer program that can describe a string), Aaronson introduces the concept of:
Complextropy
A proposed measure that captures how structured but unpredictable a system is. Not too ordered, not too random: just complicated.
He suggests this measure should peak midway through a systemâs evolution, and taper off as entropy dominates.
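Kolmogorov complexity itself is uncomputable, so anything you actually measure has to be a stand-in; compressed file size is the usual one, and it is what makes the gzip suggestion below practical. Here is a minimal Python illustration of that proxy (ours, not code from the post): an orderly string compresses to almost nothing, pure noise barely compresses at all, and a repeated random motif lands in between.

```python
import gzip
import os

def k_proxy(data: bytes) -> int:
    """Compressed length in bytes: a crude, computable upper bound on Kolmogorov complexity."""
    return len(gzip.compress(data))

ordered = b"\x00" * 100_000           # maximally orderly: describable by a tiny program
noise = os.urandom(100_000)           # incompressible: the shortest description is the data itself
patterned = os.urandom(1_000) * 100   # a random motif repeated 100 times: in between

for name, data in [("ordered", ordered), ("noise", noise), ("patterned", patterned)]:
    print(f"{name:>9}: {k_proxy(data):>7} bytes compressed")
```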
He even proposes using gzip compression as a proxy to measure it in real-world data. (Yes, you might someday study physics with zip files.)
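What follows is a minimal sketch of that kind of experiment in Python, not Aaronson’s own code: a toy “cream and coffee” grid mixed by random neighbour swaps, with gzip applied both to the raw state (which should grow roughly like entropy) and to a coarse-grained snapshot (a stand-in for the large-scale structure a human would actually notice). The grid model, the coarse_grain helper, and every parameter are illustrative choices, and whether the coarse-grained curve traces a clean rise-and-fall depends on the grid size and how long you let it run.

```python
import gzip
import numpy as np

def gzip_size(arr: np.ndarray) -> int:
    """Compressed byte count of the array: a rough, computable complexity proxy."""
    return len(gzip.compress(arr.tobytes()))

def coarse_grain(grid: np.ndarray, block: int = 8) -> np.ndarray:
    """Average over block x block patches and quantize, so only large-scale structure survives."""
    h, w = grid.shape
    patches = grid.reshape(h // block, block, w // block, block)
    return (patches.mean(axis=(1, 3)) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
n = 128
grid = np.zeros((n, n), dtype=np.uint8)
grid[:, : n // 2] = 1                      # "cream" on the left, "coffee" on the right
moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])

for frame in range(30):
    for _ in range(n * n):                 # one sweep of random neighbour swaps ~ crude diffusion
        x, y = rng.integers(0, n, size=2)
        dx, dy = moves[rng.integers(4)]
        x2, y2 = (x + dx) % n, (y + dy) % n
        grid[x, y], grid[x2, y2] = grid[x2, y2], grid[x, y]
    # raw size tracks entropy-like growth; the coarse-grained size is the complextropy-style candidate
    print(f"frame {frame:2d}  raw={gzip_size(grid):6d}  coarse={gzip_size(coarse_grain(grid)):5d}")
```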
Why It Still Matters
The post helped popularize the idea that complexity is not monotonic.
It introduced “complextropy” into the broader conversation about physical and computational systems.
It planted the seed for exploring how to measure complexity meaningfully over time, which still challenges ML, neuroscience, and physics.
Relevance to AI
While Aaronson’s post doesn’t mention AI, his framing is eerily relevant to:
LLMs: Do they peak in “interestingness” before converging to generic outputs?
Training curves: When do models develop structure vs. noise?
Simulation: Can we measure complexity in evolving environments or agent behavior?
Memorable Quote
“Entropy increases monotonically, while complexity or interestingness first increases, then hits a maximum, then decreases.”
Or, more poetically:
“Complexity lives in the messy middle.”
About the Podcast
This episode's audio summary was generated using Google NotebookLM, based on Scott Aaronson’s original 2011 blog post: The First Law of Complexodynamics.
We reviewed the transcript for accuracy, and while the tone is casual and conversational, the core ideas (Kolmogorov complexity, sophistication, and the complextropy conjecture) are faithfully and thoughtfully presented.
NotebookLM won’t stir your coffee, but in this case, it did a fine job of explaining why your swirls looked smarter than they should have.
Read the original post here.
#ScottAaronson #Complexodynamics #Complextropy #KolmogorovComplexity #SeanCarroll #WolfReadsAI #EntropyVsComplexity #ComputationalPhysics #AIphilosophy #ShtetlOptimized