
Nonaka and Takeuchi's SECI model describes knowledge creation as a spiral process of Socialization, Externalization, Combination, and Internalization—converting between tacit and explicit knowledge. This paper proposes that Large Language Models (LLMs) can serve as partners in this spiral, not by possessing tacit knowledge themselves, but by accelerating the externalization of human tacit knowledge through dialogue. When a human engages in sustained dialogue with an LLM, the human's tacit knowing is drawn out, articulated, and reflected back in structured form. This process resembles Shimizu's sōgo yūdō gōchi (mutual inductive coherence): two fields—human and AI—resonate and converge toward articulations neither could produce alone. The result is not artificial intelligence creating knowledge, but human-AI mutual induction accelerating the SECI spiral. This paper documents this phenomenon through autoethnographic analysis and proposes implications for research methodology, organizational knowledge creation, and understanding of human-AI collaboration.
LLM, ba theory, tacit knowledge, mutual induction, knowledge creation, externalization, human-AI collaboration, SECI model
