Quoting Terry Tao, albeit with small modifications,

There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).

For the purposes of this logic puzzle, “highly logical” means that any conclusion that can be logically deduced from the information and observations available to an islander will automatically be known to that islander.

Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).

One day, a blue-eyed foreigner visits the island and wins the complete trust of the tribe.

One evening, he addresses the entire tribe to thank them for their hospitality.

However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.

What effect, if anything, does this faux pas have on the tribe?

Although it might not be obvious at first, this is a problem about thinking about thinking about thinking about thinking, and so on. As before, I will proceed with diagrams. One argument, which is inductive, proceeds in this way: if there were only one blue-eyed villager, he would not know his eyes were blue, although everyone else would. Because the stranger pointed out that there was an individual with blue eyes, an individual our sole blue-eyed villager had never seen, he would immediately know it was him and kill himself the next day.

Now consider the case of two blue-eyed people. Let us assume they tend to hope for the best, holding in their minds the optimistic conjecture that they do not have blue eyes, unless facts prove this conjecture wrong. Each blue-eyed person has seen exactly one blue-eyed person in his life. He assumes the best and thinks that person is the only blue-eyed person; in other words, he thinks the situation in the first diagram is true. However, in the first diagram the blue-eyed individual knows immediately that he is the only blue-eyed person and kills himself the next day. Because there are in fact two blue-eyed people, each one is waiting for the other to kill himself. That is to say, there are two blue-eyed people, each thinking that there is only one blue-eyed person, who (he imagines) must think he is the only blue-eyed person. When the first day goes by and no one kills himself, it becomes obvious to everyone that there are actually two blue-eyed people.

More complicated still is the case of three blue-eyed people. Each blue-eyed person hopes for the best: that he does not have blue eyes. He cannot know, because, as we said before, he has never seen himself. In this case, he sees two blue-eyed people. His mental model of reality is that there are two blue-eyed people, and, further, he can build mental models of how those blue-eyed people think. He finds himself reasoning that they must think like the people in the previous diagram. These hypothetical blues don’t know their own color; they assume the best, namely that they are brown-eyed and that the other blue is the only blue, and so on.

It’s intriguing, isn’t it? The nested models of thinking. Each of the three constructs a mental model of the other two, and in those mental models, each of the two thinks about the remaining one.

For five, six, seven, however many blues there are, it goes on like this: one blue kills himself in one day, two blues kill themselves in two days, three blues in three days, and in general n blues kill themselves in n days. All the while, there is thinking about thinking about thinking.
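The day-counting argument can be sketched as a tiny simulation. This is my own simplification, not part of the original puzzle: it assumes the stranger’s remark establishes a common-knowledge lower bound of one blue-eyed islander, and that each uneventful day raises that bound by one.

```python
def day_of_suicide(num_blue):
    """Day, counting from the stranger's announcement, on which all
    blue-eyed islanders deduce their own eye color.

    After the announcement it is common knowledge that at least one
    islander has blue eyes. Each day that passes with no suicides
    raises that common lower bound by one: "if there were only d
    blues, they would have acted on day d."
    """
    common_lower_bound = 1  # established by the stranger's remark
    day = 1
    while common_lower_bound <= num_blue - 1:
        # Each blue-eyed islander sees num_blue - 1 blues, which is
        # still consistent with his own eyes being brown: everyone waits.
        common_lower_bound += 1
        day += 1
    # The bound now exceeds what any blue-eyed islander can see,
    # so each concludes that the extra blue must be himself.
    return day

for n in (1, 2, 3, 100):
    print(n, day_of_suicide(n))  # prints n alongside day n
```

The loop always terminates with `day == num_blue`, which is exactly the pattern above: n blues, n days.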

If the browns know that there are only two possible eye colors, then on the day after the blues all die, the browns die too.

Such fearful symmetry.

Hi Kareem,

This is a very interesting problem, and I think your solution is excellent. The thought bubbles really help in visualizing how the problem works.

There’s one thing that’s bugging me about this, though. If there are two eye colors in the population, and at least 4 people of each color, then everyone can see someone of the opposite eye color, whom they know can also see someone of the opposite eye color (relative to that person), whom that person in turn knows can see someone of the opposite color, and so on.

In this case, it seems like the population should be inherently unstable. Logically, if you know everyone can see what eye colors exist, then everyone should already be counting for n days (where n is the number of people with eye color X, let’s say blue, that each person can see) to see when they should kill themselves.

It’s a confusing problem, and maybe the solution is more sociological (for instance, how long ago the religion started, or how good adherence is) rather than mathematical. But I’d be interested in hearing your thoughts.


Hi Epidemos,

I think I have addressed your concerns in my second article.

The core issue is that the stranger’s statement contradicts a particular set of mental models, which in turn makes other interdependent mental models impossible.

Before the stranger says what he says, these mental models do not contradict any observable evidence.

I don’t think it quite works, however, because the tribespeople are still uncertain of their own eye color. The outsider has said that there are people with blue eyes, which is readily apparent. To assume that he was talking specifically about you is illogical. Everyone in the tribe knows that there are people in the tribe with blue eyes, and so your own eye colour cannot be inferred in any way from the statement of the outsider.

Unless I completely missed something.

Hi Dave,

You can interpret the stranger’s statement as meaning “Blue-eyed tribespeople exist on this island” or “There is at least one tribesperson on this island who has blue eyes”.

The special thing about him saying this (which we can assume people already knew themselves) is that everybody heard him say it, and everybody knows that everybody else heard him say it. I describe what happens in the case of one blue-eyed person: he has never seen any blue-eyed people, and so he realizes that it must be him. Then I go on to describe what happens in the case of two blue-eyed people.

I hope these comments have helped to contextualize the article for you.

Hi Kareem,

Ah, I see what you’re saying. The population is initially stable because the inhabitants can include the statement “everyone knows that everyone knows that everyone… knows that there are at least 0 blue-eyed people” in their mental model before the stranger makes his comment. After that, this statement is no longer tenable, since it is now common knowledge that there is at least 1 blue-eyed person. And that common knowledge is the catalyst for the instability in the population: the issue at hand is not what is known but what is known to be known.
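The distinction between mutual and common knowledge can be made concrete with a small sketch (my own illustration, not part of the original discussion). Each level of “X thinks that Y thinks that…” can strip away at most one blue-eyed person, because each hypothesised observer may assume his own eyes are brown.

```python
def min_blues_conceivable(num_blue, depth):
    """Smallest number of blue-eyed islanders in any world still
    considered possible after `depth` levels of "A thinks that B
    thinks that C thinks ...".  Each level of hypothesising can
    remove at most one blue (the hypothesiser's own unseen eyes)."""
    return max(num_blue - depth, 0)

# With 100 blues, nesting "everyone knows" 99 times still leaves a
# world with 1 blue conceivable, but at depth 100 a zero-blue world
# appears.  So "at least one blue" is mutual knowledge to depth 99,
# yet it is not common knowledge until the stranger announces it.
```

This is exactly why the population is stable before the announcement and unstable after it.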

It’s really interesting to see that there is a logical/mathematical basis for the initial population stability, as I was almost convinced that the stability had to be due to sociological variables and was therefore unanswerable as the problem is written: i.e. that you had to take on faith that the population could ever get to that point.

This is such a fascinating problem, thanks for posting on it!

The logic is all fine, but you left out a sociological component: Dunbar’s number. So your theory works for up to 150 people; with anything larger than that, nobody will leave the island. Would love to hear your thoughts on that. :)

You are right. This argument does require people to remember quite a lot of things and so constraints on memory should come into effect.

I look at this puzzle as a complete abstraction. I would not use it to predict a similar situation in real life.

I believe your logic is quite incorrect.

The simplest way to show that nothing will happen is to look at four cases, and here it is a bit easier to consider the symmetric case of 100 blue-eyed and 100 brown-eyed islanders.

1. The stranger remarks on seeing blue eyes.

2. The stranger remarks on seeing brown eyes.

3. The stranger remarks on seeing both blue and brown eyes.

4. The stranger asks everyone to deduce his/her own eye color without making any remark about what he sees. Assume there is a taboo about thinking about one’s own eye color until he arrives or some such detail.

Clearly one can’t deduce both that in case 1 all the blue-eyed people commit suicide on the 100th day and that in case 2 the brown-eyed people commit suicide on the 100th day. That’s just silly. In both cases no new information is given by the stranger, so people can’t behave differently based on his words.

If that doesn’t give one pause then consider case 3. How does that work out?

And finally, consider case 4 where, since the earlier situations didn’t involve new information but only a trigger to start thinking, we only ask of the stranger that he start people thinking about their eye color. The marvelously logical islanders should act the same in this case as in the other cases and here clearly people have absolutely no reason to believe anything about their own eye color.

So where does the inductive argument fail? On an island with one blue eyed person and some brown eyed people clearly something happens if the foreigner claims to see blue eyes. Likewise an island with two blue eyed people.

However, nothing at all happens on an island with three blue-eyed people. This is because everyone deduces, regardless of his own eye color, that nothing will happen on the first day following. Hence no new information will be gleaned from observing events on that day. Since no new information is acquired by waiting one day, how can there be any information acquired on any subsequent day?

If you don’t like the inductive argument blowing up that fast, consider an island with four blue-eyed people, where not only does everyone know that nothing will happen on the first day, but everyone knows that everyone else knows it as well. So what is everyone waiting for? They can even discuss this openly without giving away anyone’s eye color, and all can agree that there is just no telling what one’s eye color is.

For the induction to get going, there must be in some person’s mind the possibility that another person might commit suicide the first night. That is to say, it must at least be possible for someone to think he is the only blue-eyed person. On an island with lots of blue-eyed people, no one is wondering at all about islands with one blue-eyed person, hence there is no induction at all.

It is amazing what legs this problem has, including a Wikipedia mention of it as a good example of common knowledge. Maybe it’s the allure of ritual suicide. A more prosaic formulation would have a small number of people, say four with blue hats and four with brown hats, trying to deduce their own hat color, each being obligated to announce his color, if he knows it, at one-minute intervals. They can even discuss their common knowledge. If they are logical, they will all give it up in well under a minute.

Jim

I’m with Jim: for a truly rigorous mathematical proof it’s not sufficient to demonstrate that there is one line of proof/thought that the logicians/islanders can follow. If there is any other proof or line of thought that they might follow, then the inductive step fails, since it requires the common knowledge that everyone will construct the same partial proof each day.

Alternatively, you might claim that the proposition “All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth)” is untenable, given Gödel’s theorem…

Actually, perhaps it is easy to put Jim’s mind at rest by really considering the case N=3 more carefully…

One of the blue-eyed islanders sees two others. She must assume that both of them might be considering suicide (if she herself is not blue).

Hence, clearly, at least for N=3 there is “in some person’s mind the possibility that another person might commit suicide the first night.” The blue-eyed islanders only know that N=2 or N=3, and if N=2, two hypothetical people are thinking about suicide.

Now, to try stretching this to N=4: what might a hypothetical person be thinking that a hypothetical person is thinking…?
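The chain of nested hypotheticals can be followed mechanically. Here is a sketch of my own (not from the original thread): at each level of “I think that she thinks that he thinks…”, the hypothesised islander may assume her own eyes are brown, removing one blue from the world under consideration, until the chain bottoms out at a hypothetical lone blue.

```python
def deepest_hypothesis(num_blue, chain=()):
    """Follow the nesting "I think that she thinks that he thinks..."
    all the way down.  Each hypothesised islander may assume her own
    eyes are brown, so each level removes one blue from the imagined
    world.  Returns the chain of world sizes, ending at zero."""
    if num_blue == 0:
        return chain + (0,)
    return deepest_hypothesis(num_blue - 1, chain + (num_blue,))

# For N = 4 the chain is (4, 3, 2, 1, 0): four levels of nesting
# down, someone is hypothetically contemplating a lone blue who
# would act on the very first night.
print(deepest_hypothesis(4))
```

So even with many blues, the possibility of a first-night suicide does exist, just buried inside nested hypothetical worlds rather than in anyone’s direct beliefs.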


I guess what I’m missing is how they know that only two eye colours are possible. It’s probable but not 100% certain that there are only two colours; a third colour is possible, and since they do not possess a method of determining their own eye colour for certain, the possibility of hazel or green eyes must prevent mass suicide.

While I do appreciate the nested logic, I can’t see anyone committing suicide once three blue-eyed people exist.