Will Elon Musk's Neuralink brain chip turn you into a stupid robot? (Part 2)
What words mean is more than "word games"
Last time on rogue planet
- Why the mind isn't a machine
- How "the science" got its own explanations bass-ackwards
- What's really happening when scientists try to explain the mind
Here in part 2, we'll look at the origins of the myth of mind as a machine... how philosophy of language can be a real-life problem for Elon's brain chip... and a hint of how language really works (if this is right, you'll want to take a hard pass on that brain implant...)
Let's pick up where we left off, with the problem of meaning.
Humans are more than noise-making primates
By the 18th century many of the smart boys believed that they could explain what words mean by treating the word as a symbol "pointing at" the objects that it symbolizes.
The word "tree" means tree.
Which tree? That tree over there? The picture of a tree that pops into your mind when you read the word "tree"?
Different folks had different ideas on this. Thomas Hobbes and John Locke, who made a guest appearance back in part 1, argued that words designated thoughts in the speaker's mind.
It's like there's a glowing arrow that points from "tree" to that tree right over there.
Later on, you had thinkers like Condillac who developed the Hobbes-Locke view. Words are just "tags" for thoughts in the speaker's mind. How does this happen? Simple.
Human language is just like the cry of an animal. Our ancestors went around grunting and pointing and, eventually, somebody figured out that when Og grunted "rock" at Grog, he meant that lump of hard stuff over there.
If you want to explain language, you just need to cook up a theory about how those animal sounds developed into signs that transmit the information in Og's head over to Grog.
That's a seductive viewpoint... and too easy by far.
Here's a story that helps show why.
I once tormented a poor psychology grad student over drinks
I asked him to explain how he could account for the meaning of an abstract concept like a number with a materialist theory of mind.
He proceeded to arrange a group of six empty beer glasses into a rough pyramid shape.
"See?" he told me with pride, "All you have to do is put them together and count."
"Is that the number six?" I asked him, pointing at the pyramid of glasses.
"... No. It's six glasses."
"Then what makes these objects into a symbol for the number six unless you've already got the concept of '6' in your head?"
"But you just count the glasses."
"There's six people standing over there in that group. Lots of groups of six people, in fact. And what about those six chairs over there. Those six paintings on the wall. What makes all of those things into six?"
"But you just count them."
"Count what? These are glasses. Those are paintings. This is a chair. People over there. Where do you get the number 6 out of those things?"
Poor guy, I don't think he ever came around to the point:
Putting six things into a pile doesn't make it six unless you're showing it to somebody who already groks the concept "six".
Elsewise you've just got a bunch of dirty glasses on a table.
If you say that one and one make two... you can have the one and the other. It's the "and" you've got to work for.
He had it bass-ackwards. You don't get to the concept of a number by counting... you can count because you understand the concept of a number.
I'll grant that he might have had an okay (if crude) theory of how a person learns to count.
But he didn't say anything about what it means to get it right when you count. What makes it true that these six glasses, those six people, those six chairs, and the six paintings on the wall are all "six" items?
Hint: it isn't the matter that makes up those objects.
That time when philosophy of language mattered in your life
That anecdote brings us around to a serious problem in the philosophy of language.
How is it that a word becomes meaningful to you?
There was a time in your life when you couldn't read the words on this page.
People who don't read English can't read them right now. If you can't read the words in a language, they're a jumble of lines on a screen.
What happens when the squiggles and scribbles transform into words that you can read fluently?
Fans of the Hobbes-Locke-Condillac theory philosophize that words mean something because they stand for thoughts in your head.
Today's science fanboy grifters translate "thoughts" into "stuff happening in the brain" without losing a step. The magic is nothing so magical after all. When you learn a language, it's only a matter of training yourself to connect the sign to the thought.
To understand the trouble with this point of view, you need only look back at the story of the number six.
Associating symbols with objects outside of language is not easy or obvious
What's going on with that "stands for"?
What's happening when a word relates to the object it tags?
Is it really like a glowing arrow pointing between the word and the thing?
That's not very scientific.
But this is exactly what the tough-minded materialists who don't need no philosophy believe.
Even though it's not at all clear how a materialistic science could even begin to explain the linkage between words and "stuff".
The technical term for this link is the reference of a word. The word "Venus" refers to the second planet orbiting our sun, the planet Venus.
The reference between the word/symbol and the symbolized object is absolutely vital for any materialist theory.
They have to talk about stuff in the world which isn't just language or mathematical equations. The equations have to accurately describe reality outside the bare mathematics.
Let me see the theory in physics that explains this.
Find me the differential equations that explain this feature of words.
Show it to me with particles and math.
You can't explain reference without a lot of very non-materialistic arguments that come from outside of physics and outside of all science.
Some try to explain reference through plain cause-and-effect. If you can set up a pattern of correlations between information in your brain and information out in the environment, then you've got your meanings.
Causal theories of reference are thin and unconvincing. They don't say anything about the fact that there are signs, or that some of those signs stand for things that aren't in a language or a mind.
Maybe you can explain simple animal behaviors with this kind of causal or informational theory.
What you don't get is more complicated kinds of experiences and thoughts in human persons. You don't get the shared context of a culture, or the kinds of mental happenings that require language.
The crude theorizing behind Musk's brain chip assumes a causal theory like this. If you believe that language is nothing but a tool for transmitting thoughts, then there's no obstacle to the Neuralink program.
This is a serious problem that today's vulgar materialists don't pay nearly enough attention to. Philosophy isn't science, and it's science that wears the pants around here now, didn't you get the memo?
The only problem is, it leads them to say stupid things.
When scientists turn up their nose at philosophy, they don't stop doing philosophy
They do philosophy badly.
Badly, like cribbing their theories of mind and language from 17th century Christian monarchists. Not even a clue.
How seriously are you going to take people who don't know the history of their own discipline and aren't curious enough to look into it? I don't trust incurious minds.
The difference between explaining knowledge by its causes and explaining how humans have knowledge at all is not a difference you can gloss over by shouting "science!" over and over again.
A grazing sheep knows where to find the green grass. But the sheep doesn't know that the green grass is green. Sheep don't have the concept of grass. Sheep just eat.
Human beings looking at the same pasture see the color green and we know that color as green.
That's the difference between moving around on instinct and having a concept.
That's the part that needs explaining.
Stay tuned for the conclusion in part 3.
You can't turn people into machines without losing everything that makes us people
Like this article? You'll get to read all the member-only posts if you join us.
Want to leave a comment? You'll need to join us inside the private rogue planet community.
Members can discuss this article over at the rogue planet zone on SocialLair.