In the last video I said that the challenges of natural sciences like physics and chemistry are trivial compared to the challenges of a science of the mind.
Why? Because in none of these fields do you have to worry about how the dynamics of a physical system — how the system evolves and changes over time — can be influenced by its mental states, and in particular by the content of its mental states.
But what do I mean by the “content” of mental states? Is the content of a belief different from the belief itself? And why does it matter to the scientific project of constructing a theory of the mind and intelligent behavior?
That’s what I’m going to talk about in this video.
But before we get into talking about beliefs and mental states, I need to introduce a couple of useful concepts borrowed from the philosophy of language and mind. We’ll start with the type/token distinction.
You see this sentence on the screen:
“Your cat is bigger than my cat.”
Question: How many words are in that sentence?
If you think about it for a second, you see that the question is ambiguous.
If you count each occurrence of a word you get 7. But the word “cat” appears twice, and that’s the same word.
So in another sense there are only 6 distinct words in the sentence.
The way we remove the ambiguity is to specify whether we’re asking for how many word tokens there are in the sentence, or how many word types there are.
Here, each of the individual words is a token, an occurrence, of that particular word.
So there are two tokens of the word “cat” here, but only one word TYPE.
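Since this is a course on brains, minds and computers, it may help to see that the token/type distinction is something a computer can track directly. Here is a minimal Python sketch of the word-counting example (the sentence and the counts are from the example above; the variable names are just illustrative):

```python
# Token vs. type counts for the example sentence.
sentence = "Your cat is bigger than my cat."

# Tokens: every occurrence of a word counts separately.
tokens = sentence.lower().rstrip(".").split()
print(len(tokens))  # 7 word tokens

# Types: duplicate occurrences collapse into one.
types = set(tokens)
print(len(types))   # 6 word types ("cat" appears twice)
```

The list preserves each occurrence; the set collapses repeated occurrences into a single entry, which is exactly the token/type contrast.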
The type/token distinction isn’t anything esoteric; we use it all the time.
If I’ve got two US nickels and three US quarters in my pocket, I’ve got 5 coins in total. That’s 5 coin tokens. How many types of coins? Just two, nickels and quarters. In this case, if I’m adding up my money, I care about how many tokens I have, because I’m adding each individual token.
But sometimes what we care about is the type. This car is a 2016 Nissan Sentra. If you and I both bought the same make and model of car, and I bump into you, I’ll say “what a coincidence, we both bought the same car”.
That’s not a confusing statement, because we implicitly understand that what I’m referring to is the car TYPE — we both bought Nissan Sentras — not the car TOKEN. If I were referring to the car TOKEN, then I’d be saying that there is one physical car that we both bought, but that’s clearly not what I’m saying.
Similarly, if two women show up to a party in the same dress, that may be embarrassing, but not because they’re literally wearing the SAME dress — there are two distinct dress tokens at the party. But they’re instances of the same dress TYPE.
The type/token distinction is also, in some sense, a metaphysical distinction. Tokens can be concrete individuals, like this nickel in my pocket, this car I’m driving, this dress that this particular woman is wearing.
Types are different. If we choose to think of them as entities, they’re abstract entities. The nickel is in my pocket, but where is the type, the US five-cent coin, located? If we think of types as categories, or classes, or concepts, there’s still an issue of what exactly such things are.
But these issues don’t prevent us from using the type-token distinction, or recognizing it when it’s being used.
I’ll leave these metaphysical questions aside for now, because the topic of this video is the nature of mental states, and in particular, the content of mental states, and I want to get back to that.
However, before we can do that, we need another quick digression, another application of the type-token distinction.
Snow is white. Snow is white. Snow is white.
Question: How many sentences are on the screen?
By now the answer should be easy. There are three sentence tokens here, but only one sentence type.
How about this one?
Snow is white. La neige est blanche. La nieve es blanca.
Most of you probably recognize that these are just the same sentence written in different languages. The second is French, the third is Spanish.
Three sentence tokens. But they mean the same thing.
Here’s another way of saying this. We have three different sentences that assert the same proposition. This is the standard way that the term “proposition” is used in linguistics and the philosophy of language.
It’s the thing that all of these different sentence tokens have in common, even across languages. In that sense, the proposition plays the role of the type, and each sentence that asserts it is a token of that type.
But in this case, it’s also connected to a fundamental concept of linguistics and semantics, the concept of MEANING. The proposition expresses the meaning shared by each of the sentences. One could even say that the proposition IS the meaning of the sentence.
This is a very natural way of talking, once you reflect on examples like this. The meaning of a sentence can’t be identified with any particular sentence token in any particular language, because you can say the same thing in any number of different languages, using different vocabulary and different grammar.
Here is “snow is white” in German.
Here it is in Finnish.
And here it is in … can you guess? Morse code. Dots and dashes.
All of these sentences mean the same thing, to anyone who understands the language. They assert the same proposition. But the proposition that is asserted isn’t itself a sentence in any particular language. It’s something different. Something above and beyond any particular language.
We have another word to describe what all these sentences have in common.
They have the same content.
This is how I’m using the word “content” when I talk about the content of a belief or a mental state. If I believe that snow is white, then the content of that belief is just the proposition being asserted: the proposition that snow is white.
So, if Suzie believes that snow is white, the content of her belief is the proposition that snow, that stuff that falls from the clouds when it’s cold, is white.
Now, I’ve drawn Suzie in an outdoor scene because I want to emphasize that this semantic concept of “meaning” isn’t something completely internal to language. When we use language to talk about things in the world, meaning connects language to what is outside of language. At least part of the meaning of the words and sentences we use is constituted by the fact that they can point to, or refer to, actual or possible states of affairs in the world.
Here’s Suzie with a different belief. She believes that Jupiter is a gas giant.
What is the content of this belief? It’s the claim that Jupiter is a gas giant. Her belief isn’t about her or her mental states, it’s about the planet Jupiter.
Now, the content of a belief doesn’t have to be a true proposition, of course. If Suzie believed that Jupiter is a small rocky planet, that would be the content of her belief, but it would be a false belief.
What makes it false? The same thing that makes any proposition about the world false. It’s false because what it asserts doesn’t correspond to the facts.
This is important. Beliefs are the sorts of things that can be true or false. But notice that this is another feature that propositions have. By definition, a proposition is the sort of thing that can be true or false.
So what are we saying here? We’re saying that a belief is a mental state that is about something, it has a certain content. That content can be expressed by a proposition, which asserts that such-and-such is the case.
In the example depicted here, it’s the proposition that the snowman is wearing a red scarf.
Now let’s ask another question. Carl comes along and sees the same snowman. He also forms the belief that the snowman is wearing a red scarf.
Question: How many beliefs are depicted on the screen?
Here’s one answer. Carl has his belief, and Suzie has her belief, so there are two beliefs here.
Another answer is that there is only one belief depicted here. Carl and Suzie both believe the same thing — that the snowman is wearing a red scarf.
So, using our type-token distinction, we can say that there are two belief tokens depicted, but only one belief type.
What is the belief type? It’s the type expressed by the propositional content of the belief.
And notice: the propositional content of the belief is something that can be shared. When we’re focusing on this content of the belief, we can say that Carl and Suzie share the same belief. Not the belief token, the belief type.
How is it shared? It’s shared in exactly the same way that different sentence tokens can share the same meaning. They share the same meaning because they assert the same thing about the world.
So, in a course on brains, minds and computers, why do we need to talk about this? Because a science of the mind, the brain and intelligent behavior has to account for these facts.
Mental states have content. They have meaning. They make claims about the world. They’re the sort of thing that can be true or false. And they can be shared, in the same way that words and sentences can share meaning.
In cognitive science, any scientific story of how intelligent behavior arises needs to grapple with the semantic features of mental states and how they’re coupled to states of the physical brain and body in such a way that changes in the content of our mental states result in changes in our behavior.
That’s the task that I claim is harder than anything else we’ve attempted in modern science. But as we’ll see, the computational model was attractive to early pioneers in cognitive science precisely because it suggested a strategy for dealing with these issues.
Now, before we move on, I want to address an objection that I know some people will have to one of the moves I made earlier. Let’s go back to Carl and Suzie and the question of whether there are two beliefs here or just one.
The objection is that Carl and Suzie have different beliefs, and will always have different beliefs. Why? Because brains are immensely complex and the state of Carl’s brain will never exactly match the state of Suzie’s brain. And the emotional content of the belief may be different. Maybe Carl has a childhood fear of snowmen and Suzie just loves them, so the feeling quality of their respective beliefs won’t match up. And maybe there are other qualitative features of their experience of the snowman that are very different. So how can we say with any certainty that Carl and Suzie share the same belief?
There are actually two objections here, but there are standard responses to both of them. Let’s start with the first one.
Let’s grant that there are differences in the physical state of Carl’s brain and Suzie’s brain when they’re each entertaining the belief that the snowman is wearing a red scarf.
But remember this example. The configuration of symbols is very different in each of the sentence tokens on the left. In fact, they have almost nothing in common, apart from the fact that they express the same proposition, that snow is white.
Similarly, maybe the semantic properties of mental states don’t depend on specific configurations of the physical brain. Why should we think they would? Why couldn’t different brain state configurations all be instances of the same mental state type?
That’s the standard reply to this objection.
But now we can bring up the other part of the objection. People can experience beliefs in different ways. Their experience of the world is unique to them. Even if the content is the same, wouldn’t this entail that Carl and Suzie actually hold two different beliefs?
There’s something right about this. Here’s a linguistic example. Look at these two sentences. My dog Phoebe died. My dear Phoebe passed away.
Do these sentences mean the same thing or not?
In one sense they do, in another sense they don’t. What they have in common is generally called “cognitive meaning”. They convey the same objective fact, or the same information — they assert that my dog died. They have the same truth-conditions. If one is true, the other is true. If one is false, the other is false. Cognitive meaning is that aspect of meaning that affects the truth or falsity of sentences.
Where they differ is in the emotional tone or coloring. They convey different subjective attitudes or feelings about my dog dying.
This difference in meaning is sometimes called “expressive meaning”. The first sentence is relatively flat and colorless in comparison with the second.
Picking up on expressive meaning is often very important in interpersonal communication. But if our primary concern is with the way that beliefs represent the world, then we’re interested in cognitive meaning. And cognitive meaning can be shared, even when expressive meaning is not shared.
So, Carl and Suzie may well have different beliefs regarding the snowman, in the sense that their beliefs carry different expressive meaning for each of them.
But the cognitive meaning of their belief — the objective facts asserted by the belief, that can be judged as either true or false — will still be the same for Carl and Suzie. And in that sense, they can share the same belief.
To close, I want to introduce a final term, used in the philosophy of mind and language to refer to mental states that have cognitive or propositional content.
Here’s Carl. He’s contemplating Tom, and Tom’s being late for their meeting.
There are lots of mental states that we can attribute to Carl that are different from one another, but that each take the same proposition as their object.
Carl believes that Tom will be late. Carl hopes that Tom will be late. Carl is angry that Tom will be late. Carl doubts that Tom will be late.
Believing that P, hoping that P, being angry that P, doubting that P — these are called propositional attitudes. They express a psychological attitude toward a proposition.
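The list of examples above suggests a simple way to picture the structure of a propositional attitude: an attitude paired with a proposition P. Here is a hypothetical encoding in Python (the pairing, and all the names, are my illustration, not a claim about how minds actually store attitudes):

```python
# A propositional attitude as an (attitude, proposition) pair:
# different psychological attitudes, one and the same content P.
P = "Tom will be late"

carls_states = [
    ("believes", P),
    ("hopes", P),
    ("is angry", P),
    ("doubts", P),
]

attitudes = {attitude for attitude, _ in carls_states}
contents = {prop for _, prop in carls_states}

print(len(attitudes))  # 4 distinct attitudes
print(len(contents))   # 1 shared proposition
```

Four mental states, four different attitudes, but only one proposition serving as the object of all of them.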
Propositional attitudes are a type of mental state. Not all mental states have propositional content, so they’re only a subset of the broader category of mental states.
But they’re the most important mental states to understand if we’re talking about intelligent human behavior and human reasoning.
Why? Because conscious, deliberative reasoning involves the propositional attitudes.
Remember this example from the previous video. I believe that dark clouds carry rain. I believe that rain will make me wet and uncomfortable if I’m not protected from it. I want to avoid this. I want more information about the likelihood that it will rain today. These mental states are what lead me to do the things I do to avoid getting wet.
So if we’re going to build robots that can do this, and do it in anything like the way that human beings do it, we need to understand how mental states acquire propositional content and how that content plays a role in guiding behavior.
At least, that’s how most people who work in cognitive science see it. My goal with these first few videos has been to set up a problem for which the computational model of the brain can be seen as at least a partial solution. This discussion about mental content is part of that setup.
In the next section of the course I’m going to try to unpack and explain the computer model in a way that highlights how and why it was viewed as a solution to the problem of understanding the relationship between mental processes and intelligent behavior.