Preprint
Data sources: ZENODO

Alignment vs. Cognitive Fit: Rethinking Model-Human Synchronization

Authors: Williams, Tyler

Abstract

This paper proposes the concept of Cognitive Fit as a complementary framework to traditional AI alignment. While alignment focuses on ensuring that artificial systems adhere to explicit human objectives, Cognitive Fit explores how well an AI's internal reasoning patterns, communication styles, and representational structures align with the diversity of human cognition itself. Through theoretical analysis and applied examples, the paper argues that most modern alignment strategies implicitly assume neurotypical and idealized models of rationality, leaving significant gaps when interacting with the variability of real human thought. By recontextualizing "safety" and "alignment" through the lens of cognitive ergonomics, the work proposes a broader goal: AI systems that are not merely obedient to human intent, but intelligible, interpretable, and resonant with the ways humans actually reason, learn, and make meaning. This work builds upon existing literature in alignment theory, human-computer interaction, and cognitive science, positioning Cognitive Fit as a bridge between technical safety research and practical human usability. It concludes with a call for interdisciplinary design methodologies that treat alignment not as a constraint problem, but as a dialogue between minds.
