Feb 06 2024
Security

How State CIOs Can Promote Responsible AI Use

As state and local governments implement artificial intelligence, state CIOs need to think differently about security, both internally and externally.

In December, the National Association of State Chief Information Officers released the State CIO Top 10 Priorities for 2024, the latest representation of state technology leaders’ key areas of focus for the year, as voted on by 49 state and territory CIOs.

The NASCIO report has been published annually since 2007, but 2024 is a landmark year: For the first time, artificial intelligence made the list, debuting at No. 3. Cybersecurity remains at No. 1, as always, though it now shares the top spot with digital government services.

It’s fitting that cybersecurity and AI sit high on the list, as new AI capabilities can create enormous disruption within an enterprise and bring about rapid change. The possibilities of AI seem limitless, but the technology also introduces new vulnerabilities. As AI changes operations, state CIOs and CISOs need to think differently about security, both internally and externally. Ignoring technology such as generative AI models isn’t a viable strategy, because ultimately AI is a productivity multiplier.

The challenge IT leaders are up against is securing the environment without hindering the opportunities for innovation and creativity that come with AI. How do you lock down an enterprise in which everyone has AI in their pocket and is a potential exploit source, yet still reap the benefits of the technology?


Measure Your Organization’s AI Maturity

Secure, responsible AI use requires a level of literacy on the technology. That’s particularly true when using commercial AI models, because shadow AI is prevalent and very likely already happening within your organization. Shadow AI refers to the unauthorized use of a generative AI model outside of IT governance, which, well-meaning or not, creates potentially devastating cybersecurity and data privacy issues. If you’re not careful, it’s easy to feed a commercial AI model sensitive or confidential information that could leak into other models, putting that data at risk. All it takes is one bad actor finding and exploiting a vulnerability in a model such as ChatGPT.
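
One practical safeguard is a pre-submission check that flags obviously sensitive strings before a prompt ever leaves the network. The sketch below is a minimal, hypothetical example; the patterns and the is_safe_to_send() helper are illustrations only, not a substitute for full data loss prevention tooling.

    import re

    # Hypothetical pre-submission check: flag prompts that appear to contain
    # sensitive data before they are sent to a commercial AI model.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. Social Security number format
        re.compile(r"\bCASE-\d{6}\b"),         # hypothetical internal case ID
    ]

    def is_safe_to_send(prompt: str) -> bool:
        """Return False if the prompt appears to contain sensitive data."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    print(is_safe_to_send("Summarize this policy memo."))                 # True
    print(is_safe_to_send("Draft a letter about claimant 123-45-6789."))  # False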

To bolster AI maturity and make sure employees stay secure, organizations should ask themselves:

  • What is our current AI literacy?
  • What is the organization’s maturity curve on AI overall?
  • Have we assessed the potential risks of commercial AI models, including shadow AI?
  • Have we scanned our server logs to see how many people are using commercial AI models regularly? (See the sketch after this list.)
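
On that last question, a rough starting point is to tally requests to well-known generative AI domains in a web proxy log. The sketch below assumes a common access-log layout, a placeholder file path and an example domain list; adapt all three to your own proxy or firewall exports.

    from collections import Counter

    # Assumed list of commercial AI endpoints to look for; extend as needed.
    AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai")

    def count_ai_users(log_path: str) -> Counter:
        """Count log lines per source host that mention a known AI domain.

        Assumes an access-log layout where the source IP is the first
        space-delimited field on each line.
        """
        hits = Counter()
        with open(log_path, encoding="utf-8") as log:
            for line in log:
                if any(domain in line for domain in AI_DOMAINS):
                    hits[line.split()[0]] += 1
        return hits

    usage = count_ai_users("proxy_access.log")  # placeholder path
    print(f"{len(usage)} distinct hosts contacted commercial AI services")

Even a rough count like this gives leaders a baseline for how much shadow AI is already in use.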

Another way to make AI use more secure is not far off: the ability to run your own private instance of a generative AI model such as ChatGPT in your local cloud or, theoretically, on-premises in your server network. Then, when employees work in internal systems, organizations can block access to public AI models or redirect users to the organization’s private model.
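
Here is a minimal sketch of that redirection logic, assuming a forward proxy or API gateway that can rewrite outbound requests; the domain list and the internal endpoint are placeholders.

    # Hypothetical egress policy: requests bound for public generative AI
    # services are rerouted to the organization's private instance.
    PUBLIC_AI_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}
    PRIVATE_AI_ENDPOINT = "https://ai.internal.example.gov"  # assumed private model URL

    def route_request(host: str, path: str) -> str:
        """Return the URL a request should actually be sent to under the policy."""
        if host in PUBLIC_AI_HOSTS:
            return f"{PRIVATE_AI_ENDPOINT}{path}"
        return f"https://{host}{path}"

    print(route_request("chat.openai.com", "/chat"))    # rerouted to the private model
    print(route_request("example.com", "/index.html"))  # passed through unchanged

In practice, the enforcement would live at the network edge (DNS, proxy or firewall rules), but the decision logic is the same.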

READ MORE: Implementing data governance strategies for AI success.

Use AI to Bolster Security

AI can create vulnerabilities, but it can also shore up defenses when used effectively. There are frameworks, such as AIOps, where AI is used to automate and streamline operations, including security functions, at a time when attack surfaces are growing and data collection is increasing exponentially. Organizations can use AI to combat alert fatigue by automating the handling of security alerts — an essential capability for state and local governments, which are generally more vulnerable to cyberattacks because of staff and budgetary constraints. AI tools can analyze alerts as they come in, trigger automatic incident responses and generate risk analyses.
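
As a rough illustration of that triage loop, the sketch below routes alerts by severity. The classify_alert() heuristic stands in for whatever AI or analytics service an agency actually uses, and the alert fields and response actions are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Alert:
        source: str
        message: str

    def classify_alert(alert: Alert) -> str:
        """Placeholder severity scoring; a real pipeline would call an AI model here."""
        text = alert.message.lower()
        if "ransomware" in text or "exfiltration" in text:
            return "critical"
        if "failed login" in text:
            return "medium"
        return "low"

    def triage(alert: Alert) -> None:
        """Route each alert to an automated response, analyst review or the log."""
        severity = classify_alert(alert)
        if severity == "critical":
            print(f"[{alert.source}] auto-response: isolate host, page on-call")
        elif severity == "medium":
            print(f"[{alert.source}] queued for analyst review with a generated risk summary")
        else:
            print(f"[{alert.source}] logged; no action required")

    triage(Alert("edr", "Possible ransomware behavior on host FIN-22"))
    triage(Alert("idp", "Repeated failed login attempts for user jdoe"))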

Soon, we’ll see AI-enhanced security through multi-agent AI models that use personas to turn security automation from a rules-based system into a logical, reasoned and predictive one. Organizations will be able to write specific personas, such as a white-hat hacker or a black-hat hacker, and use them to pressure-test their defenses against multiple decision-making agents with different focuses and areas of expertise. Then, agencies can load those personas into different AI models.
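
A rough sketch of the persona idea follows, with a stubbed-out query_model() call standing in for whichever generative AI model an agency deploys; the persona wording and function names are assumptions.

    # Each persona is a system prompt; the same defense plan is run past
    # every persona so their critiques can be compared side by side.
    PERSONAS = {
        "white_hat": "You are a defensive security engineer. Identify gaps in this plan.",
        "black_hat": "You are an attacker. Describe how you would bypass these controls.",
    }

    def query_model(system_prompt: str, plan: str) -> str:
        """Stub for a call to a generative AI model, local or hosted.

        A real implementation would send the persona prompt and the plan
        to the model and return its critique.
        """
        return f"(model critique of the plan under persona: {system_prompt[:35]}...)"

    def pressure_test(plan: str) -> dict:
        """Collect one critique of the same plan from each persona."""
        return {name: query_model(prompt, plan) for name, prompt in PERSONAS.items()}

    for persona, critique in pressure_test("MFA everywhere; weekly patching; EDR on endpoints").items():
        print(persona, "->", critique)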

This article is part of StateTech’s CITizen blog series. Please join the discussion on X (formerly Twitter).
