Human Causal Reasoning and the Causal Markov Condition: A Purported Discrepancy

Jessica Wang
Apr 19, 2021

One of the fundamental aims across scientific disciplines is to understand the processes by which variables take on their values, and to estimate what those values would have been had the generating mechanism been subject to manipulation. Whether it is investigating the driving forces of greenhouse gas emissions or identifying the biomarkers that emerge when cancerous cells multiply, a large part of scientific research is dedicated to defining and understanding cause-and-effect relationships.

Scientific research is rooted in the understanding of causal relationships

If we could somehow pinpoint the modifiable causes of unfavorable outcomes, we could transform the world by making the needed adjustments. This raises the question of how we can determine the causes at work in a given system. A widely accepted formalism for modeling causal structure is a Bayesian network: a directed acyclic graph in which the nodes represent the variables and the directed edges represent the causal relations between them (Hitchcock). The causal Markov condition stipulates that in this graphical representation of causal structure, every node is conditionally independent of its non-descendants, given its parents (Hitchcock).
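To make the condition concrete, here is a minimal sketch of my own, not drawn from the cited sources: a three-variable chain X → Y → Z with made-up probabilities, where the causal Markov condition predicts that conditioning on the parent Y screens Z off from its non-descendant X.

```python
import random

random.seed(0)

# Toy chain X -> Y -> Z with assumed (made-up) probabilities. The causal
# Markov condition says that given its parent Y, the variable Z is
# independent of its non-descendant X.
N = 100_000
samples = []
for _ in range(N):
    x = random.random() < 0.5                    # root cause
    y = random.random() < (0.8 if x else 0.2)    # Y depends on X
    z = random.random() < (0.9 if y else 0.1)    # Z depends only on Y
    samples.append((x, y, z))

def p_z_given(y, x):
    """Estimate P(Z=1 | Y=y, X=x) from the samples."""
    sub = [s for s in samples if s[1] == y and s[0] == x]
    return sum(s[2] for s in sub) / len(sub)

# Once Y is fixed, varying X barely moves the estimate of Z:
print(p_z_given(True, True), p_z_given(True, False))    # both ≈ 0.9
print(p_z_given(False, True), p_z_given(False, False))  # both ≈ 0.1
```

The simulation is only an illustration of the independence claim; the variable names and probabilities are arbitrary.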

However, Bob Rehder, a professor of psychology at New York University, conducted a series of studies designed to test people’s intuitions about causal relations and found that many subjects draw conclusions in ways that violate the causal Markov condition (Rehder). This seems to imply that either people’s ability to reason about causal relations is flawed or the causal framework stipulated by the causal Markov condition is invalid. But what if neither of these views is entirely correct? What if the disagreement between human causal reasoning and the causal Markov condition is no reason to discredit the validity of the condition at all? Rather, the conflict simply undermines the condition’s applicability, since one must already have an abundance of knowledge about a system to justifiably assume that it holds.

To make this case, let’s discuss Rehder’s research findings, which suggest that human causal reasoning deviates from the causal Markov condition largely because our understanding of a system is often substantiated by more than statistical data and the laws of probability calculus. We can then explore the idea that purported violations of the causal Markov condition arise from limitations in background knowledge: that is, misrepresenting the scenario or choosing the set of variables incorrectly. We can conclude by suggesting that, given all the right information, the causal Markov condition is indeed valid. The inability of the causal Markov condition to capture all the ‘right’ information therefore renders it inadequate in some scenarios, rather than incorrect.

Rehder’s study suggests that people fail to recognize when variables should be probabilistically independent of one another

The causal Markov condition formalizes the idea that once the state of all of a variable’s direct causes is known, there is nothing more to be learned about it from anything except its descendants. However, one of the biggest takeaways from Rehder’s findings is that people seem to be unaware of the simplicity that the causal Markov condition affords when making causal inferences. Specifically, people “exhibited a systematic bias in which they chose [a response] with more causally related variables 74% of the time” and, when asked to causally infer a common effect, discounted or “chose [the response] in which the alternative cause was absent 24% of the time” (Rehder, 90). In other words, people fail to recognize when variables should be probabilistically independent of one another. However, this stems largely from people’s tendency to bring prior domain knowledge to the table, thereby augmenting and complicating the model of the causal system. And in most cases, this should not be viewed as problematic.

Questions concerning the effects of actions should be decided by causal considerations that extend past the standard rules of probability calculus, if only because probability itself is not as straightforward as the causal Markov condition seems to suggest. Arriving at an answer to the question “What is the probability that advancements in genetic research will bring an end to the shortage of donated organs in the next decade?” is drastically different from arriving at an answer to the question “What is the probability that a fair coin toss will result in heads?”, and human reasoning is advantageous in its ability to distinguish the different challenges that each presents.

Additionally, perhaps the tests designed by Rehder were not truly testing whether or not subjects honor the causal Markov condition in causal inferences. It seems plausible that while people’s metacognitive abilities may allow them to detect erroneous responses that violate the causal Markov condition, their ability to actually create a one-to-one mapping of their causal responses to an analytic reasoning system is much weaker. Once again, while human causal reasoning can contradict the probabilistic independencies set forth by the causal Markov condition, this does not mean that human causal reasoning is necessarily flawed.

Later, we will show that the contradiction also has no significant bearing on the correctness of the causal Markov condition, although it does suggest that for the condition to be useful in application, a great deal of information about the system must already be known. For now, we see that causality can appear to be governed by its own logic, one that extends well beyond the probabilistic constraints imposed by the causal Markov condition, and this is where human causal reasoning is valid and acceptable.

For example, when people observe that the grass is wet and the sprinklers are on, they discount rain as a cause of the wet grass. This causal inference stems from people’s understanding that sprinklers are seldom turned on during a rainy day. This knowledge, derived not from the laws of probability calculus but from personal experience, suggests that the ‘right’ causal inferences need not be rooted in the field of mathematics. While human reasoning may result in incorrect causal inferences, whether through the cherry-picking of evidence or an over-reliance on emotion and intuition in assessing likelihoods, this does not mean human reasoning itself is incorrect in its processes. In this regard, the tendency to violate the causal Markov condition is analogous to adults responding “incorrectly to the well-known question ‘A bat and ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?’ even though they are presumably able to carry out the elementary arithmetic operations needed to compute the correct answer” (Rehder, 93). Obviously, the human capacity for reasoning is constrained by many things, including our intellectual capability, our personal experiences, our natural perception, and much more, but this is not ‘bad’ or ‘wrong.’ It is simply one limited approach to the understanding of a system.
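For concreteness, the discounting pattern itself, often called “explaining away,” can be reproduced by an exact Bayes computation on a small collider model. Every number below is made up for illustration; the only point is that observing the sprinkler lowers the probability assigned to rain.

```python
from itertools import product

# Assumed (made-up) priors and conditional probabilities: rain and the
# sprinkler are independent a priori, and either one tends to wet the grass.
p_rain, p_sprinkler = 0.2, 0.3

def p_wet(rain, sprinkler):
    return {(0, 0): 0.05, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 0.99}[(rain, sprinkler)]

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    pr = p_rain if rain else 1 - p_rain
    ps = p_sprinkler if sprinkler else 1 - p_sprinkler
    pw = p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler)
    return pr * ps * pw

def p_rain_given(wet, sprinkler=None):
    """P(rain | wet[, sprinkler]) by summing over the remaining variables."""
    num = den = 0.0
    for r, s in product((0, 1), repeat=2):
        if sprinkler is not None and s != sprinkler:
            continue
        pj = joint(r, s, wet)
        den += pj
        num += pj if r else 0.0
    return num / den

print(p_rain_given(wet=1))               # P(rain | wet grass)
print(p_rain_given(wet=1, sprinkler=1))  # lower: the sprinkler explains the wetness
```

With these numbers, learning that the sprinkler is on roughly halves the probability of rain, even though no arrow connects rain and sprinkler in the model.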

We can now show that a system needs to be sufficiently detailed in order for the causal Markov condition to hold. We will also show that the purported violations or counterexamples that have been cited against the causal Markov condition typically indicate limitations in the available background knowledge: variables in the causal model may be mistakenly treated as distinct when they actually belong to the same entity, variables may be articulated at the wrong ‘level,’ or relevant variables may be missing from the model altogether.

Consider a simple example that models a cue ball colliding with two other billiard balls. Our causal model has three binary variables: ‘C’ denotes a collision, ‘A’ denotes one billiard ball going into a pocket of the pool table, and ‘B’ denotes the second billiard ball going into a different pocket. From our understanding of physics, momentum, and the conservation of energy, we know that even conditional on whether or not a collision occurs, the state of ‘A’ provides additional information about the state of ‘B.’ In other words, ‘A’ and ‘B’ are clearly not related through a direct causal relation, yet ‘C,’ despite being a common cause of both, fails to screen off ‘A’ from ‘B’ and vice versa. Graphically, this can be viewed as a “fork” where ‘C’ is the only parent of both ‘A’ and ‘B.’ Hence, we seem to have a violation of the Markov condition. However, we can defend the validity of the causal Markov condition by observing that had we let ‘C’ denote the exact momentum of the cue ball upon colliding with the two billiard balls, this more detailed and informative variable would be able to screen off ‘A’ from ‘B’ and vice versa.
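Under toy dynamics of my own invention (each ball pockets with probability proportional to a shared momentum), a quick simulation illustrates both halves of the argument: the binary ‘C’ fails to screen off ‘A’ from ‘B,’ while conditioning (approximately) on the exact momentum closes the gap.

```python
import random

random.seed(1)

# Sketch of the billiard "fork". All dynamics here are assumptions made
# for illustration, not physics: a uniform momentum, a collision whenever
# it exceeds a threshold, and pocketing probabilities equal to the momentum.
N = 200_000
rows = []
for _ in range(N):
    momentum = random.uniform(0, 1)     # cue ball's momentum at impact
    collided = momentum > 0.1           # binary variable C
    a = collided and random.random() < momentum   # ball A pockets
    b = collided and random.random() < momentum   # ball B pockets
    rows.append((collided, momentum, a, b))

def p_a_given_b(b_val, bucket=None):
    """Estimate P(A | C=collision, B=b_val), optionally also holding the
    momentum (rounded to one decimal) fixed at `bucket`."""
    sub = [r for r in rows if r[0] and r[3] == b_val
           and (bucket is None or round(r[1], 1) == bucket)]
    return sum(r[2] for r in sub) / len(sub)

# Given only the binary C, learning B still shifts our estimate of A a lot:
print(p_a_given_b(True), p_a_given_b(False))
# Holding the (approximate) exact momentum fixed, the gap nearly vanishes:
print(p_a_given_b(True, bucket=0.5), p_a_given_b(False, bucket=0.5))
```

Rounding the momentum to one decimal stands in for conditioning on its exact value; a finer bucket would shrink the residual gap further.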

Therefore, the causal Markov condition holds in describing the same causal relations; it merely asks for a redefinition of the variables at a more granular level. While a valid causal model can still be proposed under the causal Markov condition, the condition’s utility can be called into question, because it appears we must first draw on our judgment, relevant domain knowledge, or other understanding of the system in order to supply more causal information to the model than the condition itself can explain to us.

If the causal Markov condition cannot prove its utility in some situations unless the variables are modified, it also seemingly fails to explain non-causal dependencies. What is meant by a non-causal dependency will be shown in the following causal model. Once again, let us consider a simple example involving three variables. Suppose a rich uncle has ten million dollars. He states in his will that his nephew should inherit four million dollars and his son the remaining six million once he dies (Gebharter and Retzlaff). We can denote ‘D’ as the death of the uncle, ‘S’ as the amount of money possessed by the son, and ’N’ as the amount possessed by the nephew. At first glance, this causal model bears a striking resemblance to the earlier example of the two billiard balls and the cue ball: the death of the uncle, ‘D,’ is a common cause of both ‘S’ and ’N,’ so this model is also a ‘fork.’ However, this example differs by introducing a non-causal dependency. To see this, observe that if the uncle died and the son gained a lot of money, this drastically increases the probability that the nephew now also possesses a large sum of money. If the uncle dies, the amounts of money possessed by the nephew and the son are related through a non-causal dependency, since the nephew would gain about four million dollars and the son would take the remaining six million. The causal Markov condition is unable to capture the background story behind the causal model we have built.
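A small sketch makes the failure of screening off visible. To do so I add one assumption of my own that is not part of the original example: the estate’s realized value varies around ten million, while the will still splits it 60/40. Conditioning on the death then leaves the two shares perfectly, but non-causally, dependent.

```python
import random

random.seed(2)

# Sketch of the inheritance "fork". The varying estate value is my own
# assumption for illustration; amounts are in millions of dollars.
N = 50_000
rows = []
for _ in range(N):
    died = random.random() < 0.5                       # D: the uncle's death
    estate = random.uniform(8, 12) if died else 0.0    # estate actually left
    son, nephew = 0.6 * estate, 0.4 * estate           # S and N: the 60/40 split
    rows.append((died, son, nephew))

# Conditioning on D does NOT screen off S from N: among the worlds where
# the uncle died, the son's share exactly determines the nephew's. The tie
# comes from the will, not from any causal arrow between S and N.
after_death = [(s, n) for d, s, n in rows if d]
print(all(abs(n - (2 / 3) * s) < 1e-9 for s, n in after_death))  # True
```

The deterministic relation here is the non-causal dependency in question: no redefinition of ‘D’ at a finer grain would remove it, because it is imposed by the will rather than by the causal structure.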

How straightforward are causal relationships truly?

However, even if we could build such a non-causal dependency into the model, how to do so becomes problematic. A directed arrow between ’N’ and ‘S’ would not make sense, but leaving them as is, with no directed edge between them, also feels insufficient. The causal Markov condition seems to be relevant only for causal scenarios in which much of the information is already known and understood. In the real world, we are hardly ever lucky enough to be given all of this information, which is why it could be argued that the causal Markov condition is useful and correct only when supplied with the ‘right’ kind of causal information.

In conclusion, the failure of human causal reasoning to abide by the causal Markov condition calls for a reconsideration of the status of the causal Markov condition and its correctness in the governance of causal relations and probabilistic dependence. However, human causal reasoning is not intended to capture the causal Markov condition. Relatedly, the causal Markov condition is not intended to capture human causal reasoning. Both approaches are simply attempts at describing a system.

People make remarkably good causal inferences unconsciously, effortlessly, and daily from simply observing the world around them.

Perhaps, from a scientific perspective, human causal reasoning may be a less desirable approach since an ideal model would most likely require as few a priori assumptions as possible, but this does not mean that human causal reasoning is wrong and should be abandoned, if we could even abandon human reasoning. The causal Markov condition is a valuable aid in helping us move from statistical data to causal inferences about the underlying structure. However, the inappropriateness of the causal Markov condition for certain situations frequently renders it unable to produce accurate, satisfactory results. To identify a normative framework for causal reasoning is challenging since the way in which our world operates is of such a rich and diverse nature. Both the causal Markov condition and human causal reasoning construct useful approximations of the causal system at hand and while both are somewhat limited in their capacity, neither should be deemed ‘flawed’ or ‘wrong’ without careful evaluation of the causal scenario.

Gebharter, Alexander, and Nina Retzlaff. “A New Proposal How to Handle Counterexamples to Markov Causation à La Cartwright, or: Fixing the Chemical Factory.” Synthese, 2018, doi:10.1007/s11229-018-02014-7.

Hitchcock, Christopher. “Probabilistic Causation.” Stanford Encyclopedia of Philosophy, Stanford University, 9 Mar. 2018, plato.stanford.edu/entries/causation-probabilistic/#MarkCond.

Rehder, Bob. “Independence and Dependence in Human Causal Reasoning.” Cognitive Psychology, vol. 72, 2014, pp. 54–107, doi:10.1016/j.cogpsych.2014.02.002.
