Gell-Mann checks

by Cleo Scrolls 🌱

27th Sep 2024

tl;dr: "Gell-Mann amnesia" is a cognitive bias—an observation of human failure. The mechanism behind it can be instrumentalized. Call it "Gell-Mann checks".

Importantly, Gell-Mann checks are only valid insofar as the truth-finding mechanism being judged is good at generalizing.

That is, you can only judge a whole mind by its part if it treats every part the same way.

I used to nod my head at most of what my philosophy teacher said—all seemed coherent. Then he talked about nuclear power, and he just didn't get it. To avoid Gell-Mann amnesia, I updated to "everything my philosophy teacher says and has said might be bullshit."[1]

Is this fair? My teacher is supposed to be a specialist, after all. I don't have high priors on a given philosophy teacher grokking nuclear power from first principles. Fine.

But you know who do claim to be generalists? Newspapers.

When The New York Times covers nuclear power, they claim to approach the subject with as much rigor as they do politics. So if they clearly don't get nuclear power, that's evidence against them getting politics.

This was Crichton's initial observation about amnesia: it wasn't about individuals, who often hide behind the guise of specialization, but newspapers, who are supposed to be generalists through and through.[2]

But surely, my philosophy teacher isn't an entirely different person when it comes to different subjects![3]

There's got to be some coherency.

Some interconnected web I can draw evidence of trustworthiness from.

Nuclear power and philosophy are two windows onto the same mind: why wouldn't evidence correlate?

Well.

Generalizing is hard

If you don't have even a tentative grasp of the subject at hand, you're kind of doomed from the beginning.

The NYT could always decide they need a new AI branch, but what then? As a journalist, you've spent years perfecting the art of meta-level writing, hopping from subject to subject and relying on "experts" (those with citations and decade-long degrees) for factual accuracy. You've had no incentive to go deep on a subject yourself and build a gears-level model of it.

Now they put you in charge of the AI branch. You're tasked with understanding the AI world in a sufficiently detailed and accurate way that literally millions of readers won't be deceived.

Who do you turn to? Yann LeCun and Geoffrey Hinton both have a lot of citations; how are you supposed to differentiate?

I suspect this is made worse by the fact that as a journalist, you tend to see the political aspect of things first. This seems to be what journalism does: suck in all subjects, no matter how disparate, and shove them into the political realm. And in politics, almost everything operates on highly abstract simulacrum levels.

Which works fine in politics! These days, political success seems to be only loosely tied to object-level issues.[4] That's not the case in AI. So you're diving into a completely unknown world with few trusted sources, and on top of that you have to retrain the way your brain thinks (to operate on lower simulacrum levels).

All subjects are not equal

The NYT or Time clearly not getting AI is evidence against their trustworthiness.

But because we can expect this field to be difficult to dip one's toes into, it isn't as strong evidence against them as it would be if they clearly didn't get economics, say.
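The "strength of evidence" claim can be made concrete with Bayes' rule. A toy sketch — every number below is made up purely for illustration, not drawn from anything in this post:

```python
# Toy Bayesian update: how much should "they clearly botched subject X"
# lower your credence that an outlet is generally trustworthy?
# All probabilities here are invented for illustration.

def posterior(prior, p_botch_if_trustworthy, p_botch_if_not):
    """P(trustworthy | botched coverage), via Bayes' rule."""
    num = p_botch_if_trustworthy * prior
    denom = num + p_botch_if_not * (1 - prior)
    return num / denom

prior = 0.7  # starting credence that the outlet is generally trustworthy

# AI is hard to dip into: even a trustworthy outlet botches it fairly often,
# so the likelihood ratio is weak and the update is small.
after_ai = posterior(prior, p_botch_if_trustworthy=0.4, p_botch_if_not=0.8)

# Economics is closer to a generalist outlet's wheelhouse: a botch there
# is rarer for a trustworthy outlet, so it's far more diagnostic.
after_econ = posterior(prior, p_botch_if_trustworthy=0.1, p_botch_if_not=0.8)

print(f"after botched AI coverage:   {after_ai:.2f}")    # ~0.54
print(f"after botched econ coverage: {after_econ:.2f}")  # ~0.23
```

Same prior, same observation ("they clearly don't get it"), very different posteriors — the difference is entirely in how likely a genuinely trustworthy outlet is to fumble that particular subject.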

Most people aren't actually generalists

Nor do they strive to be! My philosophy teacher seems fine with the idea that he only understands a fraction of the world.

He's not the type to try understanding nuclear power from first principles; he was content to use it to argue his point, then ditch it. Its philosophical side is all he needs to know about the technology!

It's like he isn't even trying to be a generalist. No remorse felt for all the fields he can't learn about.

Meanwhile some of us try to become generalists and fail. Careers by definition restrict the domains we're comfortable operating in. And so society doesn't make it easy to build the kind of generalized truth-finding mechanism that would make Gell-Mann checks logically infallible.

(The Sequences are an attempt to fight against that trend.)

Gell-Mann checks are still useful

Regardless of how limited anyone's understanding of Bayes is, everyone has a confusiometer. If they pick up a new subject and don't notice their own confusion before delivering their perspective to a class of gullible young minds, that's evidence they're not generally great at epistemics.

So if one day they're knee-deep in Hegel and don't understand a word, they might be more liable than most to push past their confusion and deliver their lessons with their usual confidence. Noticing my teacher's inadequacy in physics should update me at least a little in favor of the "yah that's just BS" hypothesis.

Gell-Mann checks work. But they have limits.

My (empty) ~blog: croissantology.com

[1] This probably generalizes to all philosophy teachers. Cough.

[2] The wiki is here.

I'm not ideal for this, but if nobody else does it, I'll create a Wikipedia page for "Gell-Mann amnesia", because for some reason that doesn't exist yet.

(Use the Low-hanging Fruit, Luke!).

[3] Though with Elon Musk and engineering vs. politics that seems to be the case.

[4] They still haven't repealed The Dread Dredging Act!