You don't get an AI by making a better nerd

Your mind isn't a mini-scientist whipping up theories from observations

They say it works like this:

Information flows in from the senses.

The algorithm processes the data.

A solution to the problem spits out the other end, in words or behaviors.

You think you can build a mind like that?

Try again.


There are a lot of financial incentives, not to mention nerd fantasies, resting on the belief that you can fire up the ol' code editor and whip up a mind out of Python scripts.

Silicon Valley is riddled with such true believers, from the lowliest coders to the VC angels that fund them.

They dress it up in all kinds of tech-speak, sure. But the basic pattern hasn't changed since the 17th century.

Input -> Process -> Output

That's the stuff of a mind.
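Just for illustration, here's what that picture looks like as a toy Python sketch. Every name in it is made up; it's a cartoon of the claim, not anybody's real system.

def perceive(sensory_input):
    # "Information flows in from the senses."
    return list(sensory_input)

def process(data):
    # "The algorithm processes the data."
    return sorted(data)

def act(result):
    # "A solution spits out the other end, in words or behaviors."
    return f"My answer: {result}"

def mind(sensory_input):
    # Input -> Process -> Output, allegedly the whole story.
    return act(process(perceive(sensory_input)))

print(mind([3, 1, 2]))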

Except it isn't so.

Here are three things to think about.


1 - The problem of induction

The "information processing" theory explains minds as "natural-born scientists".

From infancy, all animals with complex nervous systems are busy testing hypotheses. Humans are theory-building animals, constructing inner models of the world from observations of events around us.

This is all supposed to be a primitive version of what the labcoated professionals do at work. Scientists whose paychecks depend on science funding explain everything as science. Fancy that.

It sounds reasonable enough – and there's the problem.

Reasonable.

What makes a belief into a reasonable belief?

You've either got to test it against observations, or it's got to be an analytic truth – which only means that it's true according to its meaning, or to its conceptual coherence with other concepts. (The go-to example: "All bachelors are unmarried men.")

That's fine and all – but where does that belief about reasonable beliefs come from?

"You only know a truth if you know it by observation or if it is self-evidently true."

Right – so then what makes that assertion into a true statement? It's not an observable truth... and it's not self-evidently true.

Ouch.

If you build a machine that only "knows" what it can explain from its observations, you don't have a mind at all. You have a machine that might be clever like a fox... you might even call it smart in certain ways... but it won't be a mind that can think for itself.


2 - The problem of complex systems

Real-life genius John von Neumann once argued that the simplest description of a complex system is the behavior of the complex system.

Any explanation you offer up, in mathematics or in words, ends up at least as complicated as the system itself.

This is why climate predictions and economic forecasts fail so reliably. They try to use mathematical formulas and verbal descriptions to predict the future behavior of complex systems with many interacting parts.

You can do this with a simple machine. You can take a watch apart and understand its basic working principles more easily than you can follow the intricacies of the actual mechanism.

Not so with complex things. Like living organisms, or their parts – like the human brain.

There's a reason that AI is turning more and more to "machine learning" methods, which loosely mimic the structure of the human brain.

You get more intelligent machines that are better problem-solvers.

The price? The people designing these things have no idea how they work. There's no way to reverse-engineer the operating principles into a simple set of rules.
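Here's a toy demonstration of the point, assuming you have plain NumPy handy: train a tiny two-layer network on XOR, then look at what it "learned". The answers come out right, but the weights read as a grid of numbers, not as rules you can state.

import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a single linear rule can't capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights and biases, initialized at random.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on squared error.
lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    out = sigmoid(h @ W2 + b2)            # network output
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(preds, 2))  # typically lands near [0, 1, 1, 0]
print(W1)                  # the "explanation": a grid of numbers, not a readable rule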

Any system complex enough to be intelligent is too complex to explain with rules and theories.


3 - The problem of care

Machines don't care about anything.

They just pull in the data, run the algorithm, and spit out the results.

Modern-day intellectuals seduced by the promise of science to explain everything will tell you that data-crunching is responsible for everything you think, believe, wish, hope, desire, and dream.

Everything human is really the product of astronomically complex computations going on in your brain.

Caring is an illusion spit out by your hacked-together mammalian brain.

If you view the world as nothing but a bunch of atoms jiggling around in empty space, you might take a dim view of ideas like love, courage, justice, and wisdom.

Then again – how'd you get to that atomic world-view, anyway?

Somebody had to think about it. Build it up as a picture of the world in their own imagination. Convince others that it had something going for it.

They had to care about this little pet world-picture as something important, and persuade others that it was worth caring about, too.

Big brains as far apart as Aristotle and Martin Heidegger understood that you can't talk about facts and truths (with a lowercase "t") unless things show up for somebody.


It might seem easy enough to code up an AI from scratch. Truth is, the idea of an "intelligence" in the abstract is not much more than nerd theology.

They've got a highly abstract idea of what intelligence is... and an unshakable faith in the power of computer code and bad studies in the cog-sci lab to flesh out the details.

The only thing stopping them is the fact that minds don't work like that.


Like this article? You'll get to read all the member-only posts if you join us.

Want to leave a comment? You'll need to join us inside the private rogue planet community.

Members can discuss this article over at the rogue planet zone on SocialLair.