Newcomb’s Problem, Neuroscience and Free Will

By Greg Hickey

___

Imagine that you have agreed to participate in a neuroscientific research experiment on predicting human behavior. You undergo a functional magnetic resonance imaging (fMRI) scan of your brain while you watch a video. The video opens with an image of two boxes labeled A and B, and a narrator asks you to choose the contents of one of the boxes and state your choice on the count of three. The narrator counts, and as he says “three” a circle appears around Box A just before you can say “A.” The scenario is repeated with Boxes C and D, and once again a circle appears around Box D just before you can say “D.” A similar choice is presented ten times, and each time the video correctly predicts your choice of boxes.

The video narrator continues:

Our research uses fMRI to scan a subject’s brain and predict his or her choices based on data from that scan. It builds upon previous studies in neuroscience, human psychology and MRI technology, which have demonstrated that the outcome of an individual’s decision may be encoded in the prefrontal and parietal cortices of the brain. fMRI scans can detect this information well before the individual consciously makes a decision, and researchers can use this data to predict a subject’s behavior with greater accuracy than the subject’s own expectations about his or her behavior. You have just witnessed a small sample of these predictive capabilities. Using the same predictive MRI technology, we will now move to a more advanced scenario.

The narrator goes on to explain this new scenario as the video illustrates the experimental design. He tells you that you may choose between the contents of two closed, opaque boxes labeled A and B. Box A will contain $1,000. Box B will contain either nothing or $1,000,000. You will have two options: (1) take only the contents of Box B, or (2) take the contents of both boxes. While you watch the video, the fMRI scans your brain and inputs the data into an algorithm used to predict your decision. As you have observed, this technology has already been used to predict your decisions with great accuracy. Furthermore, the narrator assures you, it has predicted the decisions of many previous subjects presented with the same scenario with remarkable accuracy. In short, you should have no reason to believe the fMRI algorithm will fail to predict your choice. If the algorithm predicts you will take only Box B, a researcher will put $1,000,000 in Box B. If it predicts you will take both boxes, the researcher will put nothing in Box B. If it predicts you will randomize your choice, such as by flipping a coin, the researcher will put nothing in Box B. By the time you complete the scan, the algorithm will have made its prediction and the researcher will have put $1,000,000 or nothing in Box B. You can make your choice of boxes at any time after the researcher acts.

The scenario I have just described is an adaptation of Newcomb’s problem, a thought experiment first proposed in 1960 by the theoretical physicist William Newcomb following his meditations on another well-known thought experiment, the prisoner’s dilemma. Newcomb’s problem attracted attention in philosophical circles following Robert Nozick’s 1969 paper “Newcomb’s Problem and Two Principles of Choice” and a discussion in the March 1973 issue of Scientific American. In place of the research experiment and fMRI brain scan, Nozick describes a sentient Predictor: “a being in whose power to predict your choices you have enormous confidence.” Later descriptions of this Predictor include “a being from another planet, with an advanced technology and science, who you know to be friendly,” “God,” “a superior intelligence from another planet,” “a super computer capable of probing your brain” and “a graduate student from another planet, checking a theory of terrestrial psychology.”

Compelling arguments exist for both options in Newcomb’s problem. On one hand, you should take both boxes. At the time you make your decision, there is already either nothing or $1,000,000 in Box B. If there is nothing in Box B, you should take both boxes so you win $1,000 instead of nothing. If there is $1,000,000 in Box B, you should take both boxes so you win $1,001,000 instead of $1,000,000. Unless you imagine some enigmatic instance of backward causation in which your decision causes the Predictor to act differently at an earlier point in time, there is no reason to modify your decision based on what the Predictor is likely to have done. To further elucidate this argument, imagine your best friend accompanies you when you make your choice and the back sides of both boxes are transparent. Your friend can see what is in each box, even though you cannot. Unless the money exists in some sort of quantum state which resolves only when you open Box B, this modification does not change the terms of the scenario. And no matter what is in the boxes, your friend will advise you to take both boxes.

On the other hand, you should take Box B. You know the Predictor has been extremely accurate in the past. So everyone (or nearly everyone) who took both boxes ended up with $1,000, while (nearly) everyone who took Box B ended up with $1,000,000. Let’s say the Predictor is merely 90% accurate. If you were to play the prediction game over and over, you would expect to win an average of $101,000 (0.9 x $1,000 + 0.1 x $1,001,000) by taking both boxes, compared to an average of $900,000 (0.9 x $1,000,000 + 0.1 x $0) by taking only Box B. In fact, as long as the Predictor is better than 50.05% accurate, you can expect to win more money over the average of repeated Newcomb’s problem scenarios by taking Box B.
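
For readers who want to see this arithmetic run rather than asserted, here is a minimal Python sketch that simulates playing the prediction game over and over against a 90%-accurate Predictor and checks the breakeven accuracy. The payoffs come from the scenario above; the accuracy figure, the number of trials and the function names are only illustrative assumptions.

```python
# A rough simulation of repeated plays of Newcomb's problem.
# Payoffs come from the scenario above; the 90% accuracy and 100,000 trials
# are illustrative assumptions, not part of the original problem.
import random

SMALL = 1_000        # Box A always holds $1,000
LARGE = 1_000_000    # Box B holds $1,000,000 only if one-boxing is predicted

def average_winnings(strategy, accuracy=0.9, trials=100_000):
    """strategy is 'one' (take only Box B) or 'both' (take both boxes)."""
    total = 0
    for _ in range(trials):
        correct = random.random() < accuracy
        predicted = strategy if correct else ("one" if strategy == "both" else "both")
        box_b = LARGE if predicted == "one" else 0
        total += box_b + (SMALL if strategy == "both" else 0)
    return total / trials

print(average_winnings("both"))   # roughly $101,000 on average
print(average_winnings("one"))    # roughly $900,000 on average

# Breakeven accuracy p for one-boxing: p * LARGE = p * SMALL + (1 - p) * (SMALL + LARGE)
print((SMALL + LARGE) / (2 * LARGE))   # 0.5005, i.e. 50.05%
```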

Hence the dilemma. In Nozick’s words, “To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem with large numbers thinking that the opposing half is just being silly.” I have also put this problem to numerous people and observed a similar split.

In an attempt to clarify the dilemma, Nozick describes two principles of choice:

Expected Utility Principle: Given the choice of possible actions, an agent should perform the action with maximal expected utility.

Dominance Principle: An agent should perform action A over action B if, for every state of the world, the agent either prefers the consequences of A to those of B or is indifferent between them, and there is at least one state in which the agent prefers the consequences of A.

To see how to apply these principles, let us represent Newcomb’s problem using the following matrix:

                        Predictor guesses both boxes    Predictor guesses one box
You choose both boxes   Win $1,000                      Win $1,001,000
You choose one box      Win $0                          Win $1,000,000

As I previously suggested, choosing both boxes dominates choosing one box. Whichever state obtains, you win $1,000 more by taking both boxes than you do by taking one box. However, if you take both boxes, it is very likely the Predictor guessed your choice and put nothing in Box B. And if you take one box, it is very likely the Predictor put $1,000,000 in Box B. Thus, the expected utility of choosing both boxes is very close to $1,000, and the expected utility of choosing one box is very close to $1,000,000 (again, if the Predictor is 90% accurate, the expected utilities for choosing both boxes and choosing one box are $101,000 and $900,000, respectively). So the dominance principle recommends taking both boxes while the expected utility principle recommends taking one. How do we decide which principle to apply?
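
To make the clash between the two principles concrete, the following sketch applies each of them to the matrix above. The payoffs and the 90% accuracy are the article’s illustrative numbers; the string labels and variable names are my own.

```python
# Applying the dominance and expected utility principles to the matrix above.
# Payoffs and the 90% accuracy are the article's numbers; the labels are
# assumptions made for this sketch.

payoff = {
    ("both", "guess both"): 1_000,  ("both", "guess one"): 1_001_000,
    ("one",  "guess both"): 0,      ("one",  "guess one"): 1_000_000,
}

# Dominance principle: compare the two actions state by state.
states = ("guess both", "guess one")
print(all(payoff[("both", s)] > payoff[("one", s)] for s in states))   # True: two-boxing dominates

# Expected utility principle: the Predictor's guess tracks your choice with accuracy p.
p = 0.9
eu_both = p * payoff[("both", "guess both")] + (1 - p) * payoff[("both", "guess one")]
eu_one  = p * payoff[("one",  "guess one")]  + (1 - p) * payoff[("one",  "guess both")]
print(eu_both, eu_one)   # 101000.0 900000.0: one-boxing has the higher expected utility
```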

Let’s look at the following example. Suppose you want to bet on the upcoming Los Angeles-New York basketball game. You know of a gambling website that offers new users site credit if they lose their first bet as a way to attract new customers. If you bet on a favorite and lose, you receive $21 in site credit. If you bet on an underdog and lose, you receive $19 in site credit. Los Angeles is favored to win the game, so a $10 bet on Los Angeles pays $8 if Los Angeles wins while a $10 bet on New York pays $12 if New York wins. If your team loses, you lose your $10 bet but win $21 in site credit if you bet on Los Angeles and $19 if you bet on New York for a net gain of $11 or $9, respectively. We thus have the following matrix:

                         Los Angeles wins   New York wins
You bet on Los Angeles   Win $8             Win $11
You bet on New York      Win $9             Win $12

According to the dominance principle, you should bet on New York. If Los Angeles wins, you lose your $10 bet but earn $19 for a net gain of $9 (better than the $8 you could have won by betting on Los Angeles). And if New York wins, you win $12 (better than the $11 net you could have made by betting on Los Angeles). Whatever the outcome of the game, you stand to win more money by betting on New York.

However, suppose you have a trustworthy friend who works for the gambling website. He tells you the website plans to bribe the referee to influence the outcome of the game. If more people bet on Los Angeles, the website will bet heavily on New York through a separate sportsbook and pay the referee to favor New York, expecting to make enough to cover their money-back promotion and turn a sizable profit to boot. If more people bet on New York, the website will bet heavily on Los Angeles and pay the referee to favor Los Angeles. Moreover, your friend tells you bets are evenly split between Los Angeles and New York and betting is about to close. Your bet will decide how the website instructs the referee to influence the game. If you bet on Los Angeles, the referee will officiate the game so as to make it very likely New York will win. If you bet on New York, the referee will make it very likely that Los Angeles wins.

In this instance, the expected utility principle takes hold. If you bet on Los Angeles, you will likely win $11. If you bet on New York, you will likely win $9. Therefore, you should bet on Los Angeles. In this case, the expected utility principle supersedes the dominance principle because your action (which team you bet on) influences which state obtains (which team wins). The states are not probabilistically independent of the actions. So it seems legitimate to use the dominance principle if and only if the states are probabilistically independent of the actions and to use the expected utility principle if the states and actions are not probabilistically independent.
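
A short sketch makes the reversal explicit. The payoffs are taken from the betting matrix above; the assumption that the bribed referee delivers the fixed result 90% of the time is mine, introduced only to put a number on “very likely.”

```python
# The bribed-referee version of the betting example: which team wins is no
# longer independent of which team you bet on. Payoffs are the article's;
# the 0.9 probability that the fix succeeds is an illustrative assumption.

payoff = {
    ("LA", "LA wins"): 8,  ("LA", "NY wins"): 11,
    ("NY", "LA wins"): 9,  ("NY", "NY wins"): 12,
}

def expected_winnings(bet, p_fix=0.9):
    # If you bet on Los Angeles, the referee pushes the game toward New York, and vice versa.
    p_la_wins = 1 - p_fix if bet == "LA" else p_fix
    return p_la_wins * payoff[(bet, "LA wins")] + (1 - p_la_wins) * payoff[(bet, "NY wins")]

print(expected_winnings("LA"))   # 10.7 -- close to the $11 you "will likely win"
print(expected_winnings("NY"))   # 9.3  -- close to the $9
```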

Nozick offers another example in which the states are not probabilistically independent of actions. Sam knows either Dave or Frank is his father. Dave died of an excruciatingly painful heritable disease, and Frank did not. The disease is genetically dominant, Sam’s mother does not have it, and Dave had two dominant alleles. If Dave is Sam’s father, Sam will suffer and die from the disease; if Frank is his father, he will not contract the disease. Furthermore, let us assume the tendency to follow an intellectual life is genetically inherited. Again, this tendency is genetically dominant, Sam’s mother does not have it, and Dave had two dominant alleles.

So Dave was DD for the disease and II for the intellectual tendency, while Sam’s mother and Frank are both dd and ii. Sam has just graduated from college with a Bachelor’s degree in philosophy and is expected to be a top pick in the upcoming NBA Draft. He must decide whether to pursue a Ph.D. in philosophy or a career as a professional basketball player. He would prefer to get his Ph.D. rather than play professional basketball, but he would much prefer to play basketball and not die of the terrible disease than to get his Ph.D. and develop the disease. We can represent Sam’s decision with the following matrix:

                    Dave is father   Frank is father
Sam chooses Ph.D.   -20              100
Sam chooses NBA     -25              95

Pursuing a Ph.D. dominates playing in the NBA. However, if Sam chooses an intellectual life, then it is very likely he has the inherited tendency and his father is Dave. If he does not choose an intellectual life, then it is very likely he does not have the tendency and his father is Frank. So the states (what inherited traits Sam has) are not probabilistically independent of his actions (his choice of career path). The most likely outcomes are 1) Sam chooses to pursue a Ph.D., Dave is his father, and Sam contracts the disease, or 2) Sam chooses to play in the NBA, Frank is his father, and Sam does not contract the disease. Therefore, Sam applies the expected utility principle and chooses the NBA.
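
The same calculation can be run for Sam. The utilities below are the ones in the matrix above; the assumption that Sam’s choice of career makes the corresponding father 90% likely is my own illustrative figure.

```python
# Sam's choice: dominance favors the Ph.D., expected utility (with the states
# correlated with the action) favors the NBA. Utilities are the article's;
# the 0.9 correlation is an illustrative assumption.

utility = {
    ("PhD", "Dave"): -20, ("PhD", "Frank"): 100,
    ("NBA", "Dave"): -25, ("NBA", "Frank"): 95,
}

# Dominance: the Ph.D. is better whoever the father turns out to be.
print(all(utility[("PhD", f)] > utility[("NBA", f)] for f in ("Dave", "Frank")))   # True

# Expected utility, assuming the chosen career makes the matching father 90% likely.
p = 0.9
eu_phd = p * utility[("PhD", "Dave")] + (1 - p) * utility[("PhD", "Frank")]
eu_nba = p * utility[("NBA", "Frank")] + (1 - p) * utility[("NBA", "Dave")]
print(eu_phd, eu_nba)   # -8.0 83.0: the expected utility principle picks the NBA
```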

But this line of reasoning is absurd! It makes no sense for Sam to say, “I am choosing to play basketball because in doing so I will be less likely to die of a dread disease.” Yes, if Sam chooses the NBA, it is likely he did so because Dave is not his father, meaning he will not die of a terrible disease. But Sam’s career choice does not dictate whether or not he has the intellectual tendency, who his father is, or whether he will contract the disease. All of those facts were fixed and determined long ago.

Nozick goes on to ask what distinguishes the example of Sam and his father from Newcomb’s problem. Both scenarios present choices in which the states are not probabilistically independent of the actions but the states (who Sam’s father is, what is in the second box) are already fixed and determined. Why do we think Sam would be insane to choose to play basketball in order to prevent himself from having already inherited a terrible disease but find it plausible to choose one box instead of two in Newcomb’s problem?

Ultimately, Nozick struggles to find a satisfactory answer to this question. He advocates taking two boxes but realizes his examples will fail to convince most one-box proponents. In contrast, Newcomb reportedly supported taking one box. Half a century later, neither side has put forth a convincing argument for its position. The confounding dilemma of Newcomb’s problem continues.

In fact, new technologies have only heightened the tension. As I indicated with my variant of the problem, modern neuroscience has proven capable of predicting some human decisions well in advance of those decisions being made. A 2008 study used fMRI scans of subjects’ brains to show that the outcome of a subject’s decision could be encoded in certain brain areas up to ten seconds before it entered the subject’s awareness (“Unconscious Determinants of Free Decisions in the Human Brain,” Nature Neuroscience). In 2010, researchers performed fMRI scans of subjects’ brains while the subjects watched a video advocating sunscreen use. Following the scan, the fMRI data was used to predict the subjects’ sunscreen use over the subsequent week. The predictions based on the fMRI data were more accurate than the subjects’ own predictions about their behavior (“Predicting Persuasion-Induced Behavior Change from the Brain,” The Journal of Neuroscience).

These studies, combined with other neuroscientific research, demonstrate that some aspects of our decision-making processes occur unconsciously. And this unconscious brain activity can be used to predict our decisions. It is now plausible to imagine performing a Newcomb’s problem scenario without appealing to God or a superintelligent extraterrestrial being.

Moreover, if the Predictor (whether an fMRI machine or an alien) is as accurate as claimed, it raises questions about the existence of free will. If the Predictor were perfect, then once it guessed one box and put $1,000,000 in Box B, it looks like the subject could not take both boxes. Whatever brain activity the Predictor used to make its predictions would appear to dictate the subject’s subsequent decision. The subject’s behavior would be determined by his or her unconscious brain chemistry. Even Nozick admits that if determinism is true and your actions are dictated in advance, then you should take one box. (Again, much modern neuroscientific research claims to show this determinism is true—though other studies dispute that claim.) If you choose to take two boxes, you are holding out hope that the Predictor might be wrong, either because determinism is true but the Predictor still cannot infallibly read your mind, or because you do have free will and can change your mind.

In the words of biochemist and science fiction writer Isaac Asimov:

I would, without hesitation, take both boxes… I am myself a determinist but it is perfectly clear to me that any human being worthy of being considered a human being (including most certainly myself) would prefer free will, if such a thing could exist… Now, then, suppose you take both boxes and it turns out (as it almost certainly will) that God [i.e. the Predictor] has foreseen this and placed nothing in the second box. You will then, at least, have expressed your willingness to gamble on his nonomniscience and on your own free will and will have willingly given up a million dollars for the sake of that willingness—itself a snap of the finger in the face of the Almighty and a vote, however futile, for free will… And of course, if God has muffed and left a million dollars in the box, then not only will you have gained that million but far more important [Asimov’s italics] you will have demonstrated God’s nonomniscience. If you take only the second box, however, you get your damned million and not only are you a slave but also you have demonstrated your willingness to be a slave for that million and you are not someone I recognize as human.

Again, strong words for some, but surely not enough to convince a determinist who recognizes that the rational course of action in a deterministic world is to take one box, no matter how unromantic that action might seem.

There are many reasons Newcomb’s problem continues to be debated fifty years after its inception. Aside from the starkly divergent opinions it engenders, the problem examines the mystery of how our brains work and how we consciously/freely or unconsciously/deterministically make decisions. In doing so, Newcomb’s problem addresses a question philosophers have debated for as long as they have been practicing philosophy. And in the twenty-first century, we are closer than ever to recreating a Newcomb’s problem scenario. As we learn more about how our brains work and how we make decisions, we bring new insights to this fascinating dilemma.

Greg Hickey is a former international professional baseball player and current forensic scientist, endurance athlete and Amazon-bestselling author. His novels include The Friar’s Lantern, a fictional take on Newcomb’s problem. His website is greghickeywrites.com.

3 comments

  1. Look, I don’t have any special knowledge concerning probability or game theory, but the two-box variant of this issue seems to me to be effectively rigged (which might also be what Asimov is responding to). IF the Predictor knows his/her/its success rate is high, then his/her/its ability to manipulate the outcome by adding elements to or removing them from the boxes makes him/her/it an actor, and no longer a neutral determinant. Such motivated staging of the problem is clear in the prisoner’s dilemma itself, and in the betting version, where the staging is part of the play by invested stage managers. But the two-box version is presented as neutral, and it just isn’t, even though the possible motivation on the part of the Predictor is unclear – what is clear is that it is motivated and there must be an investment on the Predictor’s part.

    The problem, as seen by this rank amateur, is that such thought experiments look different in real-world application, and in ways that I’m not sure that neurological science can ever address properly.

  2. That maddeningly simplistic thought experiments such as these are used to draw conclusions about real-world human free will remains the kind of reductionist thinking that has always been and ever will be an embarrassment to philosophy.

    The central fallacy is that human decision making is somehow neurophysiologically linear.

    Human consciousness is less a linear train of nerve impulses and thinking and more a multidimensional whirlwind of interrelated cyclic sensuality and thought. Each moment’s neurophysiology is inherently impinged upon from multiple metacognitive directions by the whole of consciousness past. Consciousness is the product of a complex dynamic mental cycling that has been ongoing since the womb. These mental cycles constitute dynamic relationships of the mind with the environment as well as with itself. Furthermore, all such cyclic neurophysiological mental activity by nature animates substantive human experience, and human motivation does not derive from any one of these nerve impulses or tracts but from the whole of them at once. In order to apprehend the meaning in all of these swirling, interrelated cycles, they must be regarded wholistically. A reductionist analysis of them (i.e. neurophysiological) can’t possibly apprehend their meaning.

    A secondary fallacy is that there is but one decision being studied here.

    From the moment one begins to present the problem to the mind, there are multiple metacognitive decisions being made about conceptualizing the problem as the presentation unfolds, each building on the one before and each dependent on the individual’s life experience and lessons learned from past decisions. This is not to mention the spontaneous, instinctual, semiconscious decisions being made that express the functional architecture of the mind, which records the evolutionarily successful decisions made by our countless ancestors (i.e. the collective unconscious). So the fact that an fMRI can, toward the end of this sequence of metacognitive cyclic decision making, detect a neurophysiological pattern that indicates a decision just before the subject is conscious of making one is akin to predicting where an arrow will hit a target by taking a snapshot of the arrow just before it hits the target.

    Hardly impressive or particularly useful in this pitched, conflicted, warm, lovely, and spectacularly beautiful place we call the real world.

  3. I think the initial experiment is too mentalistic, assuming that a decision is only made when it is conscious. Over a long period of time we create our own character by our actions, which become automatic. We can manage our tendencies and dispositions by occasionally analysing them and adopting different practices. Conscience is a matter of moral skill, and it is arguable that a person who is reflexively good is morally superior to the actor who must ponder. The wisdom traditions recognise this with their concepts of gunas and vasanas in the Vedic tradition and the Sufi Enneagram. The orthopraxis/orthodoxy divide is a false dichotomy.

    My rich brother would have no difficulty with the boxes, as he worked proverbially: ‘You’ll never go broke taking a profit’. ‘Money sticks to money’. ‘If you are prepared to work for nothing you’ll never be idle’. All the conundrums proffered by Nozick and Newcomb are largely generated by mentalism, essentially Cartesianism in action.
