Powered by OpenAIRE graph
Vrije Universiteit A... (Open Access)
https://doi.org/10.5463/thesis...
Doctoral thesis . 2025 . Peer-reviewed
Data sources: Crossref

Logical Challenges in Artificial General Intelligence

Authors: Wang, Ruoding

Abstract

The present thesis pertains to the research area of logic for artificial intelligence (AI), and is motivated by the critical role of automated reasoning in AI, particularly by its capacity to support logical inference, problem-solving, and explainable decision-making. Automated reasoning is a fundamental building block of symbolic AI, and is essential for constructing general-purpose intelligent systems that operate transparently and reliably. Current AI systems, such as Large Language Models (LLMs), rely on statistical correlations learned from vast datasets. This limits their capacity for robust and generalizable reasoning in complex, novel, or counterfactual scenarios, and leaves them without trustworthy explanations of their outputs. To address these limitations, the emerging field of neural-symbolic AI seeks to integrate the statistical strengths of neural models with the structural rigor of symbolic reasoning. Automated reasoning, as a symbolic technique, remains central to these efforts, especially as AI applications increasingly demand higher standards of transparency, responsibility, and generalization. The field is therefore exploring a range of logical systems beyond classical logic, including non-monotonic, probabilistic, and modal logics, to better capture the uncertain, dynamic, and context-sensitive nature of human reasoning. The present thesis aims to advance symbolic formalizations in three key areas where current AI systems continue to struggle: conceptual reasoning, causal reasoning, and defeasible reasoning. These reasoning modes are vital for representing complex human knowledge structures, understanding causal relationships, and making decisions in the presence of incomplete or conflicting information.
In the domain of conceptual reasoning, Formal Concept Analysis (FCA), description logic, and lattice-based logics are used to model conceptual hierarchies and category dynamics, and to investigate how agents can generalize, classify, and compare concepts within dynamic knowledge systems. These tools allow for structured representation of, and reasoning with, concepts. The present thesis investigates the application of LE-ALC, a logical framework that combines FCA and description logic, to ontology-mediated query answering, and illustrates how it can be used to answer different types of queries involving conceptual relationships.

In the area of causal reasoning, the present thesis addresses the need for formal models that go beyond correlation to capture the semantics of interventions and counterfactuals. Building on Pearl's Structural Causal Models (SCMs), it proposes a hybrid framework, causal Kripke models, that incorporates epistemic modalities into causal inference. This framework includes a formal definition of counterfactuals under minimal-change conditions, avoiding erroneous causal attributions and aligning more closely with intuitive human reasoning about cause and effect.

Within the domain of defeasible reasoning, the present thesis extends the KLM framework and defines three types of defeasible consequence relations within lattice-based logic: defeasible object-level entailment (where most members of a category belong to another category), defeasible feature-level entailment (where most attributes of a category apply to another category), and combined entailment over both objects and features. These relations allow for exception-tolerant reasoning across categories and features, supporting more human-like reasoning capabilities.

In conclusion, the present thesis makes technical contributions to the development of symbolic frameworks that enhance the explainability and generalization abilities of AI systems. By formalizing conceptual, causal, and defeasible reasoning within lattice-based and modal logics, it helps to bridge the gap between data-driven learning and human-like inferential structures, contributing to the broader goal of building more transparent, trustworthy, and cognitively plausible AI.
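To make the conceptual-reasoning setting concrete, the following is a minimal sketch of Formal Concept Analysis on a toy context (the objects, attributes, and closure-based enumeration are illustrative assumptions, not material from the thesis): a formal concept is a pair (extent, intent) that is closed under the two derivation operators.

```python
from itertools import combinations

# Toy formal context: objects x attributes (hypothetical example,
# not drawn from the thesis itself).
objects = {
    "sparrow": {"flies", "has_wings", "lays_eggs"},
    "penguin": {"has_wings", "lays_eggs"},
    "bat":     {"flies", "has_wings"},
}
attributes = {"flies", "has_wings", "lays_eggs"}

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in objects.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    shared = set(attributes)
    for o in objs:
        shared &= objects[o]
    return shared

def concepts():
    """Enumerate all formal concepts by closing every attribute subset:
    each closed pair (extent(B), intent(extent(B))) is kept once."""
    seen, result = set(), []
    for r in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), r):
            e = extent(set(combo))
            key = frozenset(e)
            if key not in seen:
                seen.add(key)
                result.append((e, intent(e)))
    return result

for e, i in concepts():
    print(sorted(e), "<->", sorted(i))
```

This toy context yields four concepts, ordered by extent inclusion into the concept lattice that FCA-based frameworks such as LE-ALC reason over.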
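The counterfactual semantics discussed above can be sketched with Pearl's abduction-action-prediction recipe on a tiny deterministic SCM (a hypothetical rain/sprinkler example; the thesis's causal Kripke models extend this style of evaluation with epistemic modalities):

```python
# Minimal-change counterfactual evaluation on a toy structural causal
# model (hypothetical example, not taken from the thesis).

def model(u_rain, sprinkler=None):
    """Structural equations: rain is exogenous; the sprinkler turns on
    when it does not rain (unless we intervene on it); the grass is
    wet if it rains or the sprinkler is on."""
    rain = u_rain
    if sprinkler is None:          # no intervention: follow the equation
        sprinkler = not rain
    wet = rain or sprinkler
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

# 1. Abduction: we observe rain and wet grass, so u_rain = True.
observed = model(u_rain=True)
assert observed["wet"]

# 2. Action: intervene do(sprinkler = False), keeping u_rain fixed --
#    the minimal change alters only the intervened variable.
# 3. Prediction: re-evaluate the remaining equations.
counterfactual = model(u_rain=True, sprinkler=False)
print(counterfactual["wet"])  # -> True
```

The grass would still have been wet had the sprinkler been off, because it rained; the sprinkler is therefore not a cause of the wetness in this world, which is exactly the kind of erroneous causal attribution a minimal-change definition of counterfactuals rules out.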
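Finally, exception-tolerant defeasible entailment in the KLM tradition can be sketched via preferential (ranked-world) semantics: A defeasibly entails B when B holds in all most-normal worlds satisfying A. The worlds and ranks below are an illustrative toy, not the lattice-based relations defined in the thesis.

```python
# KLM-style defeasible entailment over ranked worlds (toy sketch;
# lower rank = more normal).
worlds = [
    ({"bird", "flies"}, 0),             # typical bird
    ({"bird", "penguin"}, 1),           # exceptional: a non-flying penguin
    ({"bird", "penguin", "flies"}, 2),  # highly abnormal: a flying penguin
]

def entails(antecedent, consequent):
    """A |~ B: B holds in every minimally-ranked world satisfying A."""
    sat = [(w, r) for w, r in worlds if antecedent <= w]
    if not sat:
        return True  # vacuously true: no world satisfies A
    min_rank = min(r for _, r in sat)
    return all(consequent <= w for w, r in sat if r == min_rank)

print(entails({"bird"}, {"flies"}))     # -> True:  birds typically fly
print(entails({"penguin"}, {"flies"}))  # -> False: penguins typically don't
```

Note the non-monotonicity: "birds typically fly" survives even though penguins, a subcategory of birds, typically do not, which is the exception tolerance the defeasible consequence relations in the thesis are designed to capture.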

Country
Netherlands