
doi: 10.25965/trahs.6416
In 2017, Google's “Attention is All You Need” (Vaswani et al.) introduced the Transformer architecture, laying the groundwork for today’s large language models (LLMs) such as GPT, Claude, and Llama. Transformers excel at processing sequences through self-attention, which dynamically weights the relationships between words in a sentence. This approach revolutionized natural language processing, enabling models to capture complex contextual meaning and to generate human-like text. These foundation models are now so advanced that interacting with them often feels human-like. This evolution challenges not only technological norms but also human self-perception, sparking both fascination and fear. Historically, humans have resisted ideas that diminish their unique place in the world, as with the Copernican and Darwinian revolutions. Similarly, today's AI advances evoke concerns about technology surpassing human abilities, including creativity and problem-solving. By processing vast amounts of digital information, Transformers act as material production engines, transforming human-readable inputs into abstract data structures and yielding outputs often more polished than what humans can produce. As AI continues to advance at an unprecedented rate, its integration into society poses ethical and existential questions. Understanding AI as active digital material requires new metaphors, vocabularies, and frameworks, including ethical considerations about creativity, intelligence, and responsibility in a rapidly evolving technological landscape.
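The "dynamic weighting of relationships between words" the abstract describes is scaled dot-product self-attention (Vaswani et al., 2017). The following is a minimal single-head sketch in NumPy, with random projection matrices for illustration only; real models learn these weights, use multiple heads, and add masking and positional information, all omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows of the result sum to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings.
    # Wq, Wk, Wv project X to queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise token affinities
    weights = softmax(scores, axis=-1)  # each token's attention distribution
    return weights @ V, weights         # weighted mix of value vectors

# Toy example: 4 "tokens" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape, weights.shape)  # (4, 8) (4, 4)
```

Each row of `weights` is the attention a token pays to every token in the sequence, including itself, which is what lets the model weigh word-to-word relationships dynamically rather than by fixed position.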
human-AI interaction, Transformer architecture, large language models (LLMs), digital material, self-attention, Social history and conditions. Social problems. Social reform, HN1-995
