
Principled reasoning about the identifiability of causal effects from non-experimental data is an important application of graphical causal models. This paper focuses on effects that are identifiable by covariate adjustment, a commonly used estimation approach. We present an algorithmic framework for efficiently testing, constructing, and enumerating $m$-separators in ancestral graphs (AGs), a class of graphical causal models that can represent uncertainty about the presence of latent confounders. Furthermore, we prove a reduction from causal effect identification by covariate adjustment to $m$-separation in a subgraph for directed acyclic graphs (DAGs) and maximal ancestral graphs (MAGs). Jointly, these results yield constructive criteria that characterize all adjustment sets as well as all minimal and minimum adjustment sets for identification of a desired causal effect with multivariate exposures and outcomes in the presence of latent confounding. Our results extend several existing solutions for special cases of these problems. Our efficient algorithms allowed us to empirically quantify the identifiability gap between covariate adjustment and the do-calculus in random DAGs and MAGs, covering a wide range of scenarios. Implementations of our algorithms are provided in the R package dagitty.
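To make the separation tests at the core of this framework concrete, the sketch below checks d-separation in a DAG (the special case of m-separation without bidirected edges) via the classic ancestral-moral-graph reduction: X and Y are d-separated by Z iff they are disconnected after restricting to the ancestors of X ∪ Y ∪ Z, moralizing, and deleting Z. This is a minimal illustration under our own conventions (plain dicts mapping nodes to parent lists; the names `ancestors` and `d_separated` are ours), not the paper's algorithm or the dagitty API, and it does not cover the general ancestral-graph setting treated in the paper.

```python
# Minimal d-separation test for DAGs via the moralized ancestor graph.
# Assumes X, Y, Z are pairwise disjoint sets of nodes.
from collections import deque
from itertools import combinations

def ancestors(parents, nodes):
    """All nodes with a directed path into `nodes`, including `nodes` itself."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, X, Y, Z):
    """True iff X and Y are d-separated given Z in the DAG `parents`."""
    X, Y, Z = set(X), set(Y), set(Z)
    keep = ancestors(parents, X | Y | Z)
    # Moralize the ancestral subgraph: link co-parents, drop directions.
    adj = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for a, b in combinations(ps, 2):
            adj[a].add(b); adj[b].add(a)
    # Delete Z, then test undirected reachability from X to Y.
    frontier = deque(X - Z)
    seen = set(frontier)
    while frontier:
        v = frontier.popleft()
        if v in Y:
            return False
        for w in adj[v] - Z:
            if w not in seen:
                seen.add(w); frontier.append(w)
    return True

# Collider A -> C <- B: marginally separated, opened by conditioning on C.
g = {"A": [], "B": [], "C": ["A", "B"]}
print(d_separated(g, {"A"}, {"B"}, set()))    # True
print(d_separated(g, {"A"}, {"B"}, {"C"}))    # False

# Back-door flavor of the reduction: for Z -> X -> Y with confounder Z -> Y,
# delete the edges leaving X and test whether {Z} separates X from Y.
g_backdoor = {"Z": [], "X": ["Z"], "Y": ["Z"]}
print(d_separated(g_backdoor, {"X"}, {"Y"}, {"Z"}))  # True: {Z} is a valid adjustment set
print(d_separated(g_backdoor, {"X"}, {"Y"}, set()))  # False: adjustment is needed
```

The final example illustrates the spirit of the reduction stated in the abstract, using Pearl's back-door criterion for a single exposure; the paper's criterion is strictly more general, handling multivariate exposures and outcomes and m-separation in ancestral graphs, with implementations in the R package dagitty.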
Comments: 52 pages, 20 figures, 12 tables.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI). ACM classes: I.2.4; I.2.6. FOS: Computer and information sciences. Keywords: causal inference; covariate adjustment; d-separation; m-separation; ancestral graphs; Bayesian networks; probabilistic graphical models; knowledge representation; reasoning under uncertainty; complexity; data science.
