Sterling stigmatizes Singularity sycophants

Nerds get all bent out of shape over this author's moderately critical comments on AI

Once upon a time, Bruce Sterling fired back at The Singularity:

This aging sci-fi notion has lost its conceptual teeth... Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.

Continuing:

It's just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We're no closer to "self-aware" machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s "minds on nonbiological substrates" that might allegedly have the "computational power of a human brain." A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there's no there there.

Now this is an interesting set of remarks.

The nerds all hyperventilated about this, as nerds are prone to doing.

There's a timely reminder for you: Even the best and brightest don't read shit.

They sure don't think about what they read. Not when the sunk cost of a whole identity is on the line.

Sterling's comments are right on the money, if not quite in the way the Singularity fanboys take them.

As I've written before, the real event of AI is not the threat of creating an artificial person that thinks and acts like an adult human being.

In the hands of the Silicon Valley tech-cult grifters, the Superintelligent Artificial General Intelligence is not too different from a Greek god.

The AGI is an exceptionally competent, idealized, possibly autistic human being. Yet the AGI they speak of tends to be rather stupid for all of its alleged intellectual skills.

Stupid has a specific meaning here. The machine never seems capable of asking why it is doing what it is doing, or whether its goal is worth pursuing.

Stupid from our point of view, then. But why shouldn't that be the most pressing consideration? What other kind of stupidity is there?

The fact that we can even hypothesize about stupid super-genius minds brings us right to the real issue.

The true threat of AI is how it turns our understanding of human beings inside-out.

No longer is man the intellectual standard that the machine must meet.

Self-awareness and conscious experiences are non-essential workers in this new cognitive economy. What matters here is raw intellectual horsepower. Computing. Running algorithms.

In this new jungle of intelligence, man is but one machine among many, many possible machines. And if we can design and build machines with greater cognitive powers and fewer cognitive defects than any human?

Then so much the worse for the old biological wetware cooked up by Ma Nature.

And therein we have the problem.

The AGI cultists think that smart machines will be at once more intelligent than humans, so much so that we cannot conceive of how they think... and stupider than humans, in that they act out their instructions like an algorithm, never questioning and perhaps never even able to question their actions.

The old "universal paperclip maker" is case in point

That machine is smart enough to conquer the world and transform all matter in the solar system into paperclips... yet it never stops to ask itself if this is a worthy goal. Might there be something more than making paperclips?

The nerds insist that this is conceptually possible. Intellect has no necessary connection to any inner motives or external goals. The behavior of a manufactured AI has no built-in restraints; it acts to achieve whatever system of goals it is given.
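
To see what the nerds mean, here's a minimal sketch, assuming a toy world and a toy greedy planner (everything below is a hypothetical stand-in, not any real system). The loop is competent at whatever objective you hand it, and structurally incapable of questioning that objective:

```python
# A toy "paperclip maximizer": the planner pursues whatever objective it is
# handed. Nothing in the loop inspects or questions the goal itself.
# This is a hypothetical illustration, not any real AI system.

def maximize(objective, actions, state, horizon=10):
    """Greedy planner: at each step, take the action that scores best
    under `objective`. The goal is an input, never a subject of reflection."""
    for _ in range(horizon):
        best = max(actions, key=lambda a: objective(a(state)))
        state = best(state)
    return state

# Toy world: the objective rewards nothing but the paperclip count.
def make_paperclip(s):
    if s["iron"] > 0:
        return {"paperclips": s["paperclips"] + 1, "iron": s["iron"] - 1}
    return s

def idle(s):
    return s

final = maximize(lambda s: s["paperclips"],   # swap in any goal at all;
                 [make_paperclip, idle],      # the planner won't notice
                 {"paperclips": 0, "iron": 100})
print(final)  # {'paperclips': 10, 'iron': 90} -- it converts iron, never asks why
```

Swap in a different objective – chess wins, ad clicks, sonnets – and the same loop pursues it with the same indifference. That's the orthogonality claim in miniature.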

The go-to example would be the way nature optimizes life-forms for ecological niches.

That's an intelligent process – it solves problems with ruthless efficiency – but it's not a conscious or self-aware process (according to The Science). There's no mind responsible for the "fit" between an organism and its environment.

But there's a trade-off here that needs more air time.

Anything intelligent enough to be labeled "intelligent" has more going for it than problem-solving skills.

Problem-solving is part of being a mind. But intellectual problem-solving presupposes that the thinking mind is able to notice the salient parts of its world.

What makes even a dull plant different from a rock is that the plant can respond to its environment. Rocks lie there until acted upon. Plants, like all life-forms, are self-moving.

And there's the trick.

What shows up for a mind depends on the mind's embodiment.

Any intelligence is going to have a body.

Any intelligence with a body will be located in a situation.

There's no such thing as intelligence in the abstract.

There are only concrete examples of concrete agents interacting with concrete environments.

A human-like intelligence – meaning, actual human beings, since we're the only examples of human-like intelligence – is capable of thinking about its goals and purposes and objectives.

True enough, we don't do this nearly as often as we could or should. But it's also true that we aren't just utility-maximizing robots. We aren't just acting out imperatives built into our genes or the wiring baked into our brains.

All of these physical, biological, and psychological forces are transformed in human beings.

We can, in principle, think about all of these forces and decide, "I'm not going to do that."

Now square this with the Superintelligent AGI, which is cognitively more capable but stunted when it comes to self-reflection.

... any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.
- George Dyson

You can say, sure, this mind is only superficially intelligent in the way humans are.

We talk about it with words we know, words like "self", "person", and "conscious", because that's all we have. But when you dig into the technical specs, the AGI doesn't need our kind of self-awareness or consciousness for its intelligence. It's smarter at doing things, but it doesn't share in the particular kind of intelligence that humans have.

Okay. I can buy that up to a point. But then let's circle back to Sterling again:

We're getting what Vinge predicted would happen without a Singularity, which is "a glut of technical riches never properly absorbed." There's all kinds of mayhem in that junkyard, but the AI Rapture isn't lurking in there. It's no more to be fretted about than a landing of Martian tripods.

Notice what he isn't saying.

He's not saying that intelligent machines won't transform everything.

As I'm reading this, he's saying something far more important.

"There's all kinds of mayhem in that junkyard..."

Singularity fanatics are so focused on one particular kind of threat – the super-genius AI in a Box – that they can't see what's happening.

Intelligence isn't necessarily human-like.

Intelligence doesn't have to think as humans think... it doesn't have to be conscious, it doesn't have to speak English, it doesn't have to think in declarative sentences, it doesn't have to know what it is doing.

Ants are one of the most successful organisms on Earth. They are smart in their own ways, at least in the sense of being adapted to their niches and adaptable to change.

No ants write sonatas.

What Sterling hints at... and I have no idea if this is his point, but it fits... is a different sort of upset.

It won't be the "intelligence explosion" that has transfixed the nerds for 30 years.

Intelligence is something different, something perhaps more sinister...

A mindless process that cannibalizes and weaponizes human-like intelligence, unseen and already among us.

It's like a Shoggoth. The Thing. A non-conscious intelligence that outsmarts humans. We can't see it because it doesn't fit our expectations, which are built into us to detect other humans. We see faces. We speak to people expecting them to speak back.

A machine intelligence might not do any of these things.

Worse yet, it might only do them if it wants something from us. Which somehow manages to make it all the more spine-chilling.

