Towards an Ethos of Machines: LLMs as Rhetors
DOI: https://doi.org/10.29107/rr2025.4.12

Keywords: machine ethos, human-machine communication, zero persona, rhetoric of artificial intelligence

Abstract
This article argues for the urgent need to deepen our understanding of rhetoric, and of ethos in particular, in light of the emergence of advanced AI language models as rhetorical agents. It emphasizes the role of the human factor in rhetorical interpretation and introduces the concept of the zero persona to denote the creators and stakeholders behind AI tools. Reflection on the ethos of machines is a pressing matter, given that trust and credibility are the public's principal concerns about the use of this technology.
References
Aristotle. 2018. Rhetoric. Translated by C.D.C. Reeve. Indianapolis: Hackett Publishing Company.
Bach, Tita Aissa, Amna Khan, Harry Hallock, Gabriela Beltrão, and Sonia Sousa. 2024. “A systematic literature review of user trust in AI-enabled systems: An HCI perspective.” International Journal of Human–Computer Interaction 40 (5): 1251–1266. https://doi.org/10.1080/10447318.2022.2138826.
Bedor Hiland, Emma. 2024. “The Rhetorical Possibilities of Communicative Time Travel.” Rhetoric Society Quarterly 54 (3): 263–271. https://doi.org/10.1080/02773945.2024.2343267.
Behdadi, Dorna, and Christian Munthe. 2020. “A normative approach to artificial moral agency.” Minds and Machines 30 (2): 195–218. https://doi.org/10.1007/s11023-020-09525-8.
Bender, Emily, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922.
Bitzer, Lloyd. 1968. “The Rhetorical Situation.” Philosophy and Rhetoric 1: 1–14.
Black, Edwin. 1970. “The Second Persona.” Quarterly Journal of Speech 56: 109–119.
Bontridder, Noémi, and Yves Poullet. 2021. “The Role of Artificial Intelligence in Disinformation.” Data & Policy 3: e32. https://doi.org/10.1017/dap.2021.20.
Boyle, Casey, James Brown, and Steph Ceraso. 2018. “The digital: Rhetoric behind and beyond the screen.” Rhetoric Society Quarterly 48 (3): 251–259. https://doi.org/10.1080/02773945.2018.1454187.
Braet, Antoine. 1992. “Ethos, Pathos and Logos in Aristotle's Rhetoric: A Re-Examination.” Argumentation 6: 307–320. https://doi.org/10.1007/BF00154696.
Cappelen, Herman, and Josh Dever. 2021. Making AI Intelligible: Philosophical Foundations. Oxford: Oxford University Press.
Cloud, Dana. 1999. “The null persona: Race and the rhetoric of silence in the uprising of '34.” Rhetoric & Public Affairs 2 (2): 177–209. https://doi.org/10.1353/rap.2010.0014.
Cummings, Louise. 2014. “Informal Fallacies as Cognitive Heuristics in Public Health Reasoning.” Informal Logic 34 (1): 1–37.
Egan, Matt. 2024. “AI could pose ‘extinction-level’ threat to humans and the US must intervene, State Dept.-commissioned report warns.” CNN. Accessed May 22, 2025. https://edition.cnn.com/2024/03/12/business/artificial-intelligence-ai-report-extinction.
European Commission. 2021. “Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence.” European Commission. Accessed May 22, 2025. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682.
Esposito, Elena. 2017. “Artificial communication? The production of contingency by algorithms.” Zeitschrift für Soziologie 46 (4): 249–265. https://doi.org/10.1515/zfsoz-2017-1014.
Ferrario, Andrea, and Michele Loi. 2022. “How explainability contributes to trust in AI.” Proceedings of the 2022 ACM conference on fairness, accountability, and transparency, 1457–1466. https://doi.org/10.1145/3531146.3533202.
Flahive, Gerry. 2018. "The story of a voice: Hal in '2001' wasn't always so eerily calm." The New York Times. Accessed December 1, 2022. https://www.nytimes.com/2018/03/30/movies/hal-2001-a-space-odyssey-voice-douglas-rain.html.
Fonagy, Peter, and Elizabeth Allison. 2023. “Beyond Mentalizing: Epistemic Trust and the Transmission of Culture.” The Psychoanalytic Quarterly 92 (4): 599–640. https://doi.org/10.1080/00332828.2023.2290023.
Fritz, Leslie M. 2018. "Child or product? The rhetoric of social robots." In Human-machine communication: Rethinking communication, technology, and ourselves, ed. Andrea Guzman, 67–82. New York: Peter Lang.
Giles, Howard. 2008. Communication accommodation theory. Thousand Oaks: Sage Publications. https://psycnet.apa.org/doi/10.4135/9781483329529.n12.
Gilovich, Thomas, Dale Griffin, and Daniel Kahneman, eds. 2002. Heuristics and biases: The psychology of intuitive judgment. Cambridge: Cambridge University Press. https://psycnet.apa.org/doi/10.1017/CBO9780511808098.
Goldman, Alvin. 2001. “Experts: Which ones should you trust?” Philosophy and Phenomenological Research 63 (1): 85–110. https://doi.org/10.2307/3071090.
Grice, H. Paul. 1975. “Logic and Conversation.” In Syntax and Semantics, 3: Speech Acts, eds. Peter Cole and Jerry L. Morgan, 41–58. New York: Academic Press.
Guzman, Andrea. 2018. “What is human-machine communication, anyway?” In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves, ed. Andrea Guzman, 1–28. New York: Peter Lang. https://doi.org/10.3726/b14414.
Guzman, Andrea, and Seth Lewis. 2020. “Artificial intelligence and communication: A Human–Machine Communication research agenda.” New Media & Society 22 (1): 70–86. https://doi.org/10.1177/1461444819858691.
Hallsby, Atilla. 2024. “A Copious Void: Rhetoric as Artificial Intelligence 1.0.” Rhetoric Society Quarterly 54 (3): 232–246. https://doi.org/10.1080/02773945.2024.2343265.
Hume, David. 1748/1977. An Enquiry Concerning Human Understanding. Indianapolis: Hackett Publishing Company.
Ji, Jiaming, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O'Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, Wen Gao. 2024. AI Alignment: A Comprehensive Survey. https://arxiv.org/abs/2310.19852v5. https://doi.org/10.48550/arXiv.2310.19852.
Jonker, Alexander, and Alice Gomstyn. 2024. “What is AI alignment?” IBM. Accessed May 22, 2025. https://www.ibm.com/think/topics/ai-alignment.
Koenig, Melissa, and Paul Harris. 2007. “The basis of epistemic trust: Reliable testimony or reliable sources?” Episteme 4 (3): 264–284. http://dx.doi.org/10.3366/E1742360007000081.
Lee, Seungcheol, and Yuhua Liang. 2016. “The role of reciprocity in verbally persuasive robots.” Cyberpsychology, Behavior, and Social Networking 19 (8): 524–527. https://doi.org/10.1089/cyber.2016.0124.
Lee, Seungcheol, and Yuhua Liang. 2019. “Robotic foot-in-the-door: Using sequential-request persuasive strategies in human-robot interaction.” Computers in Human Behavior 90: 351–356. https://doi.org/10.1016/j.chb.2018.08.026.
Lubin, Kem-Laurin, and Randy Harris. 2024. “Sex after Technology: The Rhetoric of Health Monitoring Apps and the Reversal of Roe v. Wade.” Rhetoric Society Quarterly 54 (3): 247–262. https://doi.org/10.1080/02773945.2024.2343266.
Majdik, Zoltan, and S. Scott Graham. 2024. “Rhetoric of/with AI: An Introduction.” Rhetoric Society Quarterly 54 (3): 222–231. https://doi.org/10.1080/02773945.2024.2343264.
Maleki, Negar, Balaj Padmanabhan, and Kaushik Dutta. 2024. “AI hallucinations: a misnomer worth clarifying.” 2024 IEEE Conference on Artificial Intelligence (CAI), 133–138. https://doi.org/10.1109/CAI59869.2024.00033.
Manna, Riya, and Rajakishore Nath. 2021. “The problem of moral agency in artificial intelligence.” 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 1–4. https://doi.org/10.1109/21CW48944.2021.9532549.
Martini, Carlo. 2014. “Experts in science: a view from the trenches.” Synthese 191 (1): 3–15. https://doi.org/10.1007/s11229-013-0321-1.
McCraw, Benjamin. 2015. “The Nature of Epistemic Trust.” Social Epistemology 29 (4): 413–430. https://doi.org/10.1080/02691728.2014.971907.
Mercier, Hugo, and Dan Sperber. 2011. “Why do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34 (2): 57–74.
Miller, Carolyn. 2007. “What Can Automation Tell Us About Agency?” Rhetoric Society Quarterly 37: 137–157. https://doi.org/10.1080/02773940601021197.
Noor, Dharna. 2025. “Inside a plan to use AI to amplify doubts about the dangers of pollutants.” The Guardian. Accessed August 22, 2025. https://www.theguardian.com/technology/ng-interactive/2025/jun/27/tony-cox-epidemiology-risk-assessment-chatgpt-ai.
Paglieri, Fabio. 2014. “Trust, argumentation and technology.” Argument and Computation 5 (2-3): 119–122. https://doi.org/10.1080/19462166.2014.913262.
Payne, Kay, Joe Downing, and John Fleming. 2000. “Speaking Ebonics in a professional context: The role of ethos/source credibility and perceived sociability of the speaker.” Journal of technical writing and communication 30 (4): 367–383. https://doi.org/10.2190/93U1-0859-0VC3-F5LK.
Peled, Yael, and Matteo Bonotti. 2019. “Sound reasoning: Why accent bias matters for democratic theory.” The Journal of Politics 81 (2): 411–425.
Peters, John. 1999. Speaking into the Air: A History of the Idea of Communication. Chicago: University of Chicago Press.
Pruś, Jakub, and Andrew Aberdein. 2022. “Is Every Definition Persuasive?” Informal Logic 42 (1): 25–47.
Reinares-Lara, Eva, Josefa Martín-Santana, and Clara Muela-Molina. 2016. “The Effects of Accent, Differentiation, and Stigmatization on Spokesperson Credibility in Radio Advertising.” Journal of Global Marketing 29 (1): 15–28. https://doi.org/10.1080/08911762.2015.1119919.
Sætra, Henrik. 2024. “A Machine's ethos? An inquiry into artificial ethos and trust.” Computers in Human Behavior 153: 108108. https://doi.org/10.1016/j.chb.2023.108108.
Sharf, Zack. 2018. “Douglas Rain, Voice of HAL 9000 in ‘2001: A Space Odyssey,’ Dies at 90—Here’s Why Stanley Kubrick Cast Him.” IndieWire. Accessed May 22, 2025. https://www.indiewire.com/2018/11/douglas-rain-dead-hal-9000-2001-a-space-odyssey-stanley-kubrick-cast-1202019828.
Sherman, Natalie. 2023. “Google's Bard AI bot mistake wipes $100bn off shares.” BBC News. Accessed May 22, 2025. https://www.bbc.com/news/business-64576225.
Sperber, Dan, Fabrice Clément, Christophe Heintz, Oliver Mascaro, Hugo Mercier, Gloria Origgi, and Deirdre Wilson. 2010. “Epistemic vigilance.” Mind & Language 25 (4): 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x.
Sun, Yujie, Dongfang Sheng, Zihan Zhou, and Yeifei Wu. 2024. “AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content.” Humanities and Social Sciences Communications 11 (1): 1–14. https://doi.org/10.1057/s41599-024-03811-x.
Torkington, Simon. 2024. “These are the 3 biggest emerging risks the world is facing.” World Economic Forum. Accessed May 22, 2025. https://www.weforum.org/stories/2024/01/ai-disinformation-global-risks.
Wander, Philip. 1984. “The third persona: An ideological turn in rhetorical theory.” Communication Studies 35 (4): 197–216. https://doi.org/10.1080/10510978409368190.
Wang, Zhaozhe. 2024. “Post-Rhetoric: A Rhetorical Profile of the Generative Artificial Intelligence Chatbot.” Rhetoric Review 43 (3): 155–172. https://doi.org/10.1080/07350198.2024.2351723.
Watson, Alex P. 2024. “Hallucinated Citation Analysis: Delving into Student-Submitted AI-Generated Sources at the University of Mississippi.” The Serials Librarian 85 (5–6): 172–80. https://doi.org/10.1080/0361526X.2024.2433640.
Weiss, Debra Cassens. 2025. “AI-hallucinated cases end up in more court filings, and Butler Snow issues apology for 'inexcusable' lapse.” ABA Journal. Accessed May 22, 2025. https://www.abajournal.com/news/article/ai-hallucinated-cases-end-up-in-more-legal-documents-and-butler-snow-issues-apology-for-inexcusable-lapse.
Weizenbaum, Joseph. 2000. “From computer power and human reason: From judgment to calculation.” In The new media reader, eds. Noah Wardrip-Fruin and Nick Montfort, 368–376. Cambridge, MA: MIT Press.
Wells, Lindsay, and Tomasz Bednarz. 2021. “Explainable AI and reinforcement learning—a systematic review of current approaches and trends.” Frontiers in Artificial Intelligence 4: 550030. https://doi.org/10.3389/frai.2021.550030.
Wolf, Marty J., Keith W. Miller, and Frances S. Grodzinsky. 2017. “Why we should have seen that coming: comments on Microsoft’s Tay ‘experiment’ and wider implications.” ACM SIGCAS Computers and Society 47 (3): 54–64. https://doi.org/10.29297/orbit.v1i2.49.
Zappen, James. 2005. “Digital Rhetoric: Toward an Integrated Theory.” Technical Communication Quarterly 14 (3): 319–325.
Zierau, Naim, Christian Engel, Matthias Söllner, and Jan Marco Leimeister. 2020. “Trust in smart personal assistants: A systematic literature review and development of a research agenda.” In Entwicklungen, Chancen und Herausforderungen der Digitalisierung. Band 1: Proceedings der 15. Internationalen Tagung Wirtschaftsinformatik, eds. Norbert Gronau, Hanna Krasnova, Key Pousttchi, and Moreen Heine, 99–114. Berlin: GITO Verlag. http://dx.doi.org/10.30844/wi_2020_a7-zierau.
License
Copyright (c) 2026 "Res Rhetorica"

Articles are published under the Creative Commons Attribution 4.0 International license (CC BY 4.0). The license text is available here: https://creativecommons.org/licenses/by/4.0/
Articles published under the CC BY license (postprint versions) may be shared by the authors with anyone, on any platform or through any communication channel, provided they are attributed to Res Rhetorica as the original publisher.