Framing and Priming in Dialogue Systems: New Forms of Speech Manipulation Through AI

Konul Azizaga Habibova

Abstract

The article explores the cognitive mechanisms of framing (linguistic framing) and priming (anticipatory suggestion) in human interaction with dialogue systems based on artificial intelligence (AI). The author proposes an interdisciplinary analysis combining cognitive linguistics, psycholinguistics, social psychology and generative AI technologies to identify how such systems can employ speech strategies of persuasion and manipulation. It is shown that dialogue AIs actively apply frames when presenting information, both through lexical choices and through the tone of utterances, which can skew interpretation and steer the user's opinion. In parallel, priming allows systems to shape attitudes even before the main utterance, through tone, the order of presentation, or background information, which is particularly effective in long-term interactions. The author discusses threats to user autonomy, especially for vulnerable groups (children, the elderly, people with mental disabilities), as well as the problem of deception through the anthropomorphisation of AI and the substitution of simulated 'friendship' for real dialogue. Legal and regulatory initiatives are also considered, including the provisions of the European AI Act (2024) that prohibit the use of AI for covert behavioural influence. Possible solutions are suggested, ranging from AI content labelling and algorithmic transparency to improving users' digital literacy. The article thus contributes to the understanding of the speech mechanisms involved in human-AI communication and emphasises the need for a balance between technological development and the preservation of human cognitive freedom.
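
To make the framing mechanism discussed above concrete, the following Python sketch shows how a dialogue system could render one and the same statistic through a 'gain' or a 'loss' lexical frame, the effect classically demonstrated by Tversky and Kahneman (1981). The sketch is purely illustrative: the function name, wording and figures are hypothetical, not taken from the article.

    # A minimal, purely illustrative sketch (not code from the article):
    # the same statistic rendered through a "gain" frame vs. a "loss"
    # frame, in the spirit of the framing effect described by
    # Tversky and Kahneman (1981). Names and figures are hypothetical.

    def frame_statistic(success_rate: float, frame: str) -> str:
        """Render one and the same fact under different lexical frames."""
        if frame == "gain":
            return f"{success_rate:.0%} of users found the assistant helpful."
        if frame == "loss":
            return f"{1 - success_rate:.0%} of users found the assistant unhelpful."
        raise ValueError(f"unknown frame: {frame!r}")

    # Both outputs describe identical underlying data, yet framing
    # research shows the "gain" wording is typically evaluated more
    # favourably by readers.
    print(frame_statistic(0.9, "gain"))  # 90% of users found the assistant helpful.
    print(frame_statistic(0.9, "loss"))  # 10% of users found the assistant unhelpful.
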



Keywords


speech manipulation; AI; framing; priming; cognitive linguistics; verbal influence









Copyright (c) 2025 Konul Azizaga Habibova

This work is licensed under a Creative Commons Attribution 4.0 International License.