Mind your (security) risks in AI

Artificial Intelligence (AI) will transform the work of communication professionals in government. It’s a matter of when, not if. But it is a minefield that needs to be stepped through carefully. AI promises to save time and money, but it also raises many more questions than we currently have answers for.

Take the critical question of the security and veracity of information in government.

Here are four questions you need to ask as you consider how to use AI.

1. Where is the request processed?

AI, like all software, runs somewhere, somehow. Currently, all publicly available AI is web-based: we enter our prompts into a web-based system, and it (usually) spits out something convincing and intelligent.

But for security purposes, we need to ask where our requests are processed and whether our queries are protected. If we scrutinise AI in the same way as other software used by government agencies, will agencies be allowed to use it? From my experience working within Australian Government departments, I’m reluctant to enter sensitive data into any AI. We don’t know where it is processed or how long our queries will be stored.

2. Who has access to our conversations with AI?

We expect our conversations with others to be confidential. But is it realistic to place the same expectation on AI, given that it is software? As software, it needs to be reviewed and upgraded from time to time. AI doesn’t review its own code; someone has to do that. Who are the people performing that role, and do we really want them looking at our ‘private’ conversations?

3. What data was used to train the AI?

AI isn’t programmed to think or string words together coherently, only convincingly. It trawls copious amounts of data to train its algorithms to make choices. We don’t know what information it was fed to arrive at those choices. Where was the information sourced? Who checked that it was unbiased and free from discrimination? Were biases introduced while training the AI? How did the AI come up with that decision?

Government decisions are often scrutinised by the public. If we follow AI’s recommendations and apply its output in our work, do we understand what information they are based on? Can we trust AI to be fair in its choices and decisions? The public wants to know that decisions are made on the best available data. With AI, it’s easy to overlook the reasoning and the data behind a decision. Government communicators using AI will need to ask why the AI made a recommendation, and understand the answer well enough to defend it.

4. Are there any legal and ethical implications when using AI?

The use of AI in government communications can raise legal and ethical concerns, including privacy violations, discrimination, and the potential for AI to replace human decision-making altogether. Governments around the world are examining the implications of AI and how they can regulate it to protect their citizens and economies.

Additionally, because AI models are sometimes trained on proprietary images, artists are calling for protection of their work: AI can copy their ‘signature’ styles in seconds.

If a government communicator decides to use AI for official government work, how will any future legislation affect today’s use of AI for government communication? It’s too early to predict, but never too early to be aware of the possible outcomes.

While AI has immense potential to improve government communications, it also comes with significant risks that government communicators need to be aware of. Don’t get us wrong: we are not saying government communicators should not use AI. AI is a powerful tool, and with the right training and direction, government communicators can harness its power while meeting the government’s security and ethical requirements.

If you are looking for more content for government communicators, you can listen to our GovComms Podcast, where we interview experts and leaders from Australia and around the world. In a recent episode, the Deputy Secretary of Revenue NSW, Scott Johnston, shares how his agency is managing the risks of using AI.
