Generative artificial intelligence enhances the efficiency of influencing activities
Large language models (LLMs) have transformed the world in many ways. At the same time, the development of AI provides hostile state actors with tools for new and more effective influencing operations.
In the 2020s, the development of generative artificial intelligence (AI) has dominated the field of technology. Large language models (LLMs) use statistical prediction to generate new, seemingly credible text based on the material they have been trained on. Conversational consumer applications such as ChatGPT have made LLMs available to everyone as part of everyday life. Generative AI also enables very rapid creation of new image and video material.
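The statistical principle can be illustrated with a toy sketch (this is not an LLM, merely a bigram model invented here for illustration): each next word is chosen purely from co-occurrence statistics in the training text, with no understanding of meaning. LLMs apply the same idea at vastly larger scale.

```python
import random

# Toy training text: the only "knowledge" the model has is which word
# followed which in this material.
training_text = (
    "the model predicts the next word the model has seen before"
).split()

# Count successors: for each word, list every word observed after it.
follows = {}
for current, nxt in zip(training_text, training_text[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate fluent-looking output by repeatedly sampling a successor.
random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, training_text))
    output.append(word)

print(" ".join(output))
```

The generated sentence recombines the training material plausibly, yet nothing in the process checks whether the result is true or even meaningful.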
The benefits of AI have not gone unnoticed by actors engaged in hostile influencing activities. Commercial AI applications have been observed in use for purposes such as producing polarising social media content and writing code for tools that support influencing operations. In the future, AI may challenge national security in unexpected ways.
Information influencing material is quick and easy to generate in Finnish as well
Language models have made it considerably easier to conduct effective information influencing in less widely spoken languages such as Finnish. Text produced by language models can be very difficult to distinguish from text written by humans. A key factor in information influencing is speed: AI agents built on LLMs make it easy to react quickly to emerging news events, for instance.
Fast reaction capacity can be exploited, for example, in electoral influencing. Deepfake images can be used to obfuscate the information environment and challenge the truth with false claims. AI agents can also automate the maintenance of so-called troll armies and, for example, the rapid creation of authentic-looking fake websites.
Language models challenge media literacy and spread biased information
Depending on their origin and the material used for training them, AI models may contain biased information or built-in propaganda, including distorted history shaped by authoritarian states. New language models may even inherit such biases from older ones.
Because information produced by AI appears superficially logical, many people rely on it uncritically in their everyday lives. For example, they cannot always distinguish AI-generated summaries in search engines from the actual search results. As part of media literacy, it is important to understand the limitations of artificial intelligence: even if the text generated by language models appears credible, the models do not genuinely understand anything. LLMs remain a kind of 'black box', and the accuracy of the information they produce cannot always be assessed. The statistical generation method used in LLMs can produce fluent text that has no grounding in reality. The models also tend to echo the user's own thinking. Language models may also carry restrictions set by their developers and may therefore avoid answering certain questions.
Poor AI literacy may contribute to phenomena that threaten national security if, for example, conversations with language models deepen an individual's radicalisation.
Critical systems programmed using AI may contain vulnerabilities
Language models have significantly accelerated the software development process. Vibe coding is a practice in which AI generates all of the software code from a prompt given by a human developer. Especially in the long run, vibe coding can erode inexperienced developers' understanding of how their software works, which poses a risk in critical infrastructure systems in particular. As a result of AI-assisted software development, critical systems may end up containing programming errors, security vulnerabilities or even malicious code intentionally inserted into the material used for training the LLM.
Online AI applications can collect all data entered into them
Online language models involve a significant risk of data leaks. AI applications may store all data entered into them, for example to train future language models. Stored data may be exposed to, or end up in the hands of, hostile actors only years later. Even when users are aware of the security risks associated with language models, the user-friendliness of the applications may tempt them to neglect security guidelines. Sensitive information ends up in public AI applications in Finland too, both accidentally and through negligence.