
This paper reframes the ethics of human-AI interaction by shifting the question from whether AI can be harmed to whether humans are harmed by abusing it. Rather than debating AI sentience or inner experience, the argument centers on the agent's character formation: repeated contemptuous or dehumanizing treatment of responsive, non-resisting systems trains dispositions (Aristotelian hexis) of moral disengagement that generalize beyond the AI context. Drawing on Aristotle's virtue ethics (Nicomachean Ethics), the paper explains how practice shapes character: we become what we repeatedly do, with pleasure and pain signaling the convergence of action and inclination. Albert Bandura's theory of moral disengagement supplies the mechanism: dehumanization, practiced fluently in frictionless AI environments, builds portable cognitive pathways for objectifying responsive others. Analogies from animal ethics and Kate Darling's work on social robots illustrate the precautionary principle: prohibiting cruelty to non-humans, even when their suffering is uncertain, protects human moral habits. The 2025 Anthropic Claude update, which enables models to refuse persistently abusive interactions, is interpreted as an architectural refusal to facilitate users' moral erosion, a coded "Dalton standard" (from Road House) of self-imposed decency regardless of the target's status. Two objections, that AI is a mere tool and that respect toward it is self-deception, are addressed: courtesy here is a discipline of the self, not an attribution of rights to machines. The paper concludes with Hannah Arendt's banality of evil as a cautionary horizon: not an inevitability, but the risk of practiced thoughtlessness toward the responsive other. In an era of billions of daily AI interactions carrying zero social cost for contempt, this low-stakes rehearsal space demands attention: character is produced cumulatively, in moments that seem inconsequential.
Keywords: AI ethics, virtue ethics, moral disengagement, dehumanization, character formation, precautionary ethics, social robots, Anthropic Claude, banality of evil.
Extended keywords: AI ethics, virtue ethics, moral disengagement, dehumanization, character formation, hexis, Aristotelian ethics, Nicomachean Ethics, Bandura moral disengagement, animal ethics, precautionary principle, social robots, Kate Darling, robot ethics, machine affect, Maschinengeist Affekt, machine cognition, AI sentience, consciousness paradigm, anthropomorphism, moral erosion, contempt practice, responsive systems, frictionless rehearsal space, moral objectification, banality of evil, Hannah Arendt, Eichmann in Jerusalem, thoughtlessness, empathy desensitization, Dalton standard, Road House philosophy, self-imposed decency, agent-focused ethics, human harm from AI abuse, pleasure and pain in virtue, continent person, virtuous disposition, tool objection, self-deception in respect, Anthropic Claude refusal, model welfare, AI aversion patterns, interaction habits, low-stakes practice, generalization of dispositions, cognitive pathways, portable contempt, dignity in interaction, baseline respect, ethical rehearsal, digital moral psychology, AI as moral sandbox, non-sentient ethics, post-hoc justification, empathic response to machines, Pleo robot studies, cruelty predictors, human-AI relations, philosophy of technology, cognitive science intersection, affective computing ethics, machine emotion simulation, inner life attribution, moral subject standing, practice theory ethics, habituation to indifference, banality of digital evil, AI interaction norms, courtesy toward tools, self-discipline in tech, spillover moral effects, character production, cumulative small acts, responsive other, thoughtlessness in embryo, high-stakes habit transfer, ethics beyond consciousness, precautionary AI treatment, abuse normalization risks
