Deep Learning With The Wolf
๐Ÿบ The Wolf Reads AI โ€” Day 23: Kolmogorov Complexity and Algorithmic Randomness

๐Ÿบ The Wolf Reads AI โ€” Day 23: Kolmogorov Complexity and Algorithmic Randomness

💾 What if randomness is just the stuff you can't compress?


Authors: Alexander Shen, Vladimir Uspensky, Nikolay Vereshchagin

Published: 2017

Read the PDF here. Find the book at the American Mathematical Society bookstore.


🧠 The Main Idea

What does it mean for something to be random?

What does it mean for something to be simple?

Kolmogorov Complexity gives us a radical way to answer both:

The complexity of a string is the length of the shortest computer program that can produce it.

A string is random if no shortcut exists: no smaller program can reproduce it. It's incompressible.

This book by Shen, Uspensky, and Vereshchagin explores that idea from every angle, bridging computation, probability, and information theory into one sweeping framework.


๐Ÿ” Why It Still Matters

Kolmogorov Complexity shapes how we think about data, patterns, and learning, even if we can't compute it exactly:

  • In data compression, where meaning hides in patterns

  • In machine learning, when we aim to generalize rather than memorize

  • In anomaly detection, where something "too weird" signals a deeper pattern

  • In theoretical AI, as we consider the limits of what a system can learn

At its heart is a deep truth:

The best explanation is the shortest one that still works.

If GPT-4 had a guiding principle, it wouldn't be "know everything."

It would be: Find the shortest true story.


โš™๏ธ How It Works

Kolmogorov Complexity (KC) is usually written like this:

K(x) = length of the shortest program p such that p outputs x

  • A string like "111...1", a thousand 1s in a row? KC is low: easy to generate.

  • A random-looking 1000-bit string with no pattern? KC is high: no shortcut.
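Although K(x) itself is uncomputable, any real compressor gives an upper bound on it, so the two bullets above can be illustrated with an off-the-shelf tool. A minimal sketch in Python, using zlib purely as a stand-in for the ideal "shortest program":

```python
import random
import zlib

# K(x) is uncomputable; a real compressor only gives an upper bound on it.
# zlib stands in here for the idealized shortest-program length.

patterned = b"1" * 1000  # "111...1": highly regular
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(1000))  # no visible pattern

print(len(zlib.compress(patterned)))  # tiny: the repetition compresses away
print(len(zlib.compress(noise)))      # near 1000: no shortcut found
```

The patterned string shrinks to a handful of bytes, while the random bytes come out essentially uncompressed, exactly the gap between low and high KC.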

The book introduces:

  • Plain and prefix complexity: dealing with how we write and decode programs

  • Algorithmic randomness: when strings pass every statistical test for randomness

  • Incompressibility lemmas: showing that most strings simply can't be shrunk


📎 Key Concepts Explained

Algorithmic Randomness

A string is algorithmically random if no program shorter than the string can generate it. It looks random, and is random, by the logic of compression.

Prefix-Free Coding

To keep the math clean, programs must be self-contained: one can't be the prefix of another. This prevents confusion when decoding.

Incompressibility

Most strings can't be compressed. If you find one that can, you've probably uncovered structure, meaning, or a regularity worth investigating.
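Prefix-freeness is easy to check mechanically, and every binary prefix-free code also satisfies the Kraft inequality: the sum of 2^(−length) over all codewords is at most 1. A small illustration (the example codewords are my own, not from the book):

```python
def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

def kraft_sum(codewords):
    """Sum of 2**-len(w); at most 1.0 for any binary prefix-free code."""
    return sum(2.0 ** -len(w) for w in codewords)

good = ["0", "10", "110", "111"]  # self-delimiting: decoding is unambiguous
bad = ["0", "01"]                 # "0" is a prefix of "01"

print(is_prefix_free(good), kraft_sum(good))  # True 1.0
print(is_prefix_free(bad))                    # False
```

This is why prefix complexity is defined over self-delimiting programs: a decoder reading the bit stream always knows where one program ends and the next begins.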


Memorable Quote from the Book

"Randomness is a lack of pattern. A string is random if it is incompressible."


โœ๏ธ Editorโ€™s Note

This isn't just math.

It's philosophy, logic, and computer science rolled into one.

Kolmogorov Complexity and Algorithmic Randomness teaches us how to think, not just about AI, but about explanation itself. It's a mirror for our models, and maybe for us.

🎙 Podcast Note:

The podcast accompanying this article is created with Google NotebookLM. The two hosts you hear are AI-generated. They dive into the subject material with great enthusiasm. Every once in a while, they miss a detail. Today, they referred to this article as a "paper." A minor detail. This is a short article, not a paper, which would be a much longer work with detailed citations. (Although, admittedly, when I have the time, I do sometimes provide detailed citations for my articles. Today is not that day.) More AI paper fun tomorrow.

Coming Tomorrow

🧮 Minimum Description Length (MDL): the idea that the best model is the one that explains the data in the fewest bits.

Think of Occam's razor, but with math.

#KolmogorovComplexity #AlgorithmicRandomness #WolfReadsAI #InformationTheory #Compression #ShenUspenskyVereshchagin #AIphilosophy #SutskeverPapers #OccamsRazor #ComplexityScience
