On intuition’s relationship to rationality via language

A three-year-old draft blog post I just found. It feels worth publishing – the improvements in AI since then, if anything, make it clearer and justify all the “right now” caveats.

It’s better to start by thinking of us as pattern-matching devices.

Not simple ones, such as modern deep learning AI, which essentially applies layered functions to measure correlation. Complex ones that model causality in a sophisticated way we don’t remotely understand yet.
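(For concreteness – this sketch is mine, not part of the original post – “layered functions” means something like the toy example below: each layer is just a function applied to the previous layer’s output, with the weights normally tuned to track correlations in data. The weights here are random, purely to show the shape of the idea.)

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer: an affine transform followed by a nonlinearity (ReLU)."""
    return np.maximum(0, weights @ x + bias)

x = rng.normal(size=4)                                # an input vector
w1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # layer 1 parameters
w2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)  # layer 2 parameters

# "Deep learning" in miniature: functions stacked in layers.
output = layer(layer(x, w1, b1), w2, b2)
print(output)
```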

That’s intuition. An odd word for it – one you’d only invent if you were overly focussed on language, not on what we actually are.

So to language. One function of language is to attempt to describe what we’ve pattern-matched – our model of the world – to others. To influence them, to train them, to explain ourselves to them.

If you’ve learnt a foreign language, or even just lots of varied words for similar concepts in your native language, you’ll know this is hazardous and inaccurate at best. No surprise – a few thousand words, even in combination, can’t convey the exact logic of hundreds of trillions of synapses.

Rationality and science are attempts to improve our thoroughness and our willpower to agree on truth, and to make our working explicit.

Sometimes this goes reasonably well – academic maths rarely ends up with persistent mistakes. But it only manages this by intensely training people with a very specific ability, and by picking the easy use cases. By definition, maths is about the cases where logic prevails – and even then there’s Gödel’s incompleteness theorem to confound that simplistic view.

But generally, this process is going to be inaccurate.

You can’t work out everything with data or current AI. You can check whether you’ve made mistakes with it. You can feed your brain insights with it. If you’ve got loads of data and lots of resources and you do it really carefully, you can run randomised controlled trials (A/B testing) and at least be sure about the causal direction of what you learnt.
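(As a concrete illustration – again mine, not from the original post – this is the kind of check an A/B test boils down to: because variants are randomly assigned, a significance test on the difference in outcomes speaks to causation, not just correlation. A minimal sketch in Python, with made-up numbers.)

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """z-statistic and two-sided p-value comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up data: variant A converted 200 of 2000 visitors, variant B 260 of 2000.
z, p = two_proportion_z_test(200, 2000, 260, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value well below 0.05 suggests the
                                    # difference is unlikely to be chance
```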

However hard you try, this won’t produce a model as sophisticated as a human mind’s intuition is right now. Intuition is difficult to use, though, because our minds can just as easily be wrong. Good cultural practices of training and validation can hone our minds and insight.

Once we’ve gained our truth, there is only the limited bandwidth of language to try to help others gain it too.
