How not to get overwhelmed by AI trends in local government

By
Vanja Pantic
May 10, 2023
6 minutes

This article was first published on Apolitical

Table of contents

  • The problem: Public servants are overwhelmed by the number of – and changes to – AI tools.
  • Why it matters: If public servants do not get comfortable with AI tools, the public sector will lag behind on implementing innovative technologies.
  • The solution: Rather than diving head-first into the most popular AI tools available, public servants should first identify their main pain points, and then find tools that can assist them with their unique challenges.

Had you asked a colleague five years ago whether they anticipated using Artificial Intelligence (AI) tools for work, they likely would have laughed off your question. But today, most people are accessing AI support for writing, research, visuals and so much more. As these tools seem to shape-shift each week, AI’s untapped potential within the public sector is becoming more evident.

With the power of AI tools continuing to grow, governments around the world are scrambling to respond. Just last month, the UK government published a white paper on artificial intelligence, marking a first step towards regulating AI around key principles such as safety and fairness; in the US, regulation is so far proceeding state by state; and in Italy, the government issued an order banning the popular chatbot ChatGPT.

The rapid development of AI tools and the shifting regulations around them are two of the factors behind public servants' slow adoption of them. Which tools should you learn to use? What should you prompt them to do? How do you know whether they're safe to use and leading you in the right direction? The overwhelm is clear: according to a March 2023 poll by Apolitical, only 15% of public servant respondents said they often use AI tools. If we want the public sector to innovate at the speed necessary to meet growing challenges across our communities, this problem needs addressing.

The public sector lags behind the growing adoption of AI

According to The Federal Times, an “independent survey of federal government executives revealed that while AI is on respondents’ radar, fewer than one in five are ‘very’ likely to adopt AI in the next year. Further, it indicates that AI readiness is a major barrier to implementation with one-third of respondents stating they do not believe their agency is ready for AI.” And yet, outside of the public sector we have seen a rapid adoption of AI; ChatGPT – OpenAI’s chatbot tool which has taken the world by storm – has averaged around 100 million monthly active users since its launch just a few months ago.

So, how can we bridge the divide between how quickly technology is being embraced by the general public and how bureaucratically governments are responding? And how can we ensure that being overwhelmed by all the options doesn't prevent adoption? By homing in on what is most likely to improve the public sector for the people, both those within it and those affected by it.

As list after list of the latest AI tools continues to be published, governments are scrambling to respond with regulations. In fact, according to the 2023 AI Index from Stanford University, “legislative bodies in 127 countries passed 37 laws that included the words ‘artificial intelligence’ this past year.” While regulations to ensure the safety and privacy of the public are necessary as AI develops, public servants shouldn’t miss out on the use cases for public sector AI, which could change the way governments interact with and support their communities. At their best, AI tools can significantly enhance the efficiency, effectiveness, and overall quality of public services.

How AI can support public servants

Rather than adding a laundry list of new tools to everything public servants must tackle daily, I propose that technologists and public servants work together to identify the unique problems facing governments, and then home in on the specific tools designed to help them push past those roadblocks. Some of the most obvious ways AI can help public servants include:

  1. Improving efficiency. AI tools can help process large amounts of data very quickly. CitizenLab’s community engagement platform uses Natural Language Processing (NLP) technology to support exploratory analysis of the input that public servants using our platform receive from their community members. Among other things, NLP scans for keywords to produce a visual map of content frequency and relevance across different topical tags. Thanks to this AI-assisted data processing, public servants using CitizenLab’s platform spend 55% less time on analysis and reporting. And because AI can process large volumes of input, it helps surface the views of the few as well as those of the many.
  2. Improving inclusivity. The pervasive worry that technology creates more divides than connections is something to contend with. However, while there is more to be said on this, we should not underestimate the potential technology has, when utilized well, to help public servants reach more of their community. In CitizenLab’s case, public servants leading their government’s community engagement see an average 12x increase in participation, including from underheard groups. In fact, on average, 30,000 community members participate in projects on CitizenLab platforms each month. By meeting people where they are (online) and removing traditional barriers to engagement, such as conflicting work hours and transportation availability or costs, governments using AI technology can create more inclusive opportunities for engagement.
  3. Improving equity and responsiveness. Not only can AI tools help public servants reach more people, they can also remove some of the subjectivity and bias that plague decision-making. At CitizenLab we recognize that while technology must be developed to be inclusive, human-generated content also carries a degree of bias. As a result, we built our AI to provide both a top-down and a bottom-up analysis, clustering community input in ways that guard against the bias a single analytical approach could introduce. In AI-powered community engagement, this is most often applied to the creation of policies and programs, but we have also seen this responsiveness extend to electoral decisions: to elect representatives for the Escazú Agreement, the first environmental treaty in Latin America and the Caribbean, CEPAL (the United Nations Economic Commission for Latin America and the Caribbean) developed a new election process using their CitizenLab-powered community engagement platform. Community members were able to learn more about the agreement and vote for candidates directly on the platform. Ultimately, around 2,000 participants cast their votes on the platform to elect their public representatives.
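To make the keyword-scanning idea in point 1 concrete, here is a minimal, illustrative sketch of how input could be counted against topical tags. This is a toy example only, not CitizenLab's actual NLP pipeline; the tags and keyword lists are hypothetical.

```python
from collections import Counter
import re

# Hypothetical topical tags mapped to keyword sets (illustrative only).
TOPIC_KEYWORDS = {
    "transport": {"bus", "bike", "parking", "traffic"},
    "parks": {"park", "playground", "trees", "green"},
}

def tag_frequencies(comments):
    """Count how often each topical tag's keywords appear across community input."""
    counts = Counter()
    for comment in comments:
        # Lowercase and split into words for a crude keyword match.
        words = set(re.findall(r"[a-z]+", comment.lower()))
        for tag, keywords in TOPIC_KEYWORDS.items():
            counts[tag] += len(words & keywords)
    return counts

comments = [
    "More bike lanes and less traffic downtown, please.",
    "The playground in the park needs repairs.",
    "Parking near the park is impossible on weekends.",
]
print(tag_frequencies(comments).most_common())
# → [('transport', 3), ('parks', 3)]
```

A production system would use trained NLP models rather than keyword sets, but the output shape is similar: per-tag frequencies that can feed a visual map of what a community is talking about most.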

The potential of AI tools is expansive. We’re seeing governments innovate by using chatbots for customer service, predictive analytics for public safety planning, machine learning for resource allocation, and more.

When AI in the public sector needs human support

As public servants lean into using AI in their work, it’s important to sustain the human connection, empathy, and values that community work depends on. Technology should supplement governments’ work, not replace all aspects of it. We also need to address rising ethical and privacy concerns, particularly around data quality and the bias pervasive in the data AI models are trained on.

There are many ways to keep a finger on the pulse of these issues. First, to ensure data quality, AI models should be continuously trained on data that is accurate, representative, and free from bias; this helps avoid inaccurate or skewed results. At CitizenLab, we trained our NLP base model on data collected across 15 different public sector-related categories to make it more relevant for local governments.

Secondly, consider giving priority to AI models with transparent development. In the public sector, the risks associated with inaccurate data are significant. Using AI models that are developed transparently – such as those which are open source – can help stakeholders better understand a system’s capabilities, promote trust, and foster accountability for continuous improvement.

Finally, monitor and maintain your data. AI is there to assist in the analysis of input, not to replace human judgment. As a public servant, you should explore the recommendations an AI tool gives you but ultimately still decide for yourself whether the results are right for your context. Make adjustments as needed, and supplement your results with other methods, such as desk research or community consultation.
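The "AI assists, humans decide" principle above can be sketched as a simple triage step: suggestions the model is confident about move ahead, while uncertain ones are routed to a public servant for manual judgment. The threshold, field names, and workflow here are assumptions for illustration, not a real platform's API.

```python
# Illustrative human-in-the-loop triage: the 0.8 cutoff is an assumed value
# you would tune for your own context and risk tolerance.
REVIEW_THRESHOLD = 0.8

def triage(suggestions):
    """Split (comment, suggested_tag, confidence) tuples into auto-accepted
    tags and items flagged for human review."""
    auto, needs_review = [], []
    for comment, tag, confidence in suggestions:
        if confidence >= REVIEW_THRESHOLD:
            auto.append((comment, tag))
        else:
            # Low confidence: a person, not the model, makes the call.
            needs_review.append((comment, tag, confidence))
    return auto, needs_review

suggestions = [
    ("Fix the potholes on Main St", "roads", 0.95),
    ("Something about the budget?", "finance", 0.42),
]
auto, needs_review = triage(suggestions)
```

The design point is that the model never silently finalizes an uncertain decision; everything below the threshold lands in a queue where a human supplements the result with context the model lacks.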

Embracing AI innovation in the public sector

While AI will continue to develop by the minute, public servants shouldn’t wait to implement these tools in their work. Rather than be intimidated by the sheer number of tools available today, public servants should carefully identify their pain points and look for tools specifically tailored to solving their unique challenges. Slow and steady, starting with one tool, they can open the door to true innovation that improves their own efficiency and, as a result, the services their community experiences.

By Vanja Pantic

A passionate storyteller and advocate for social impact, Vanja writes insightful content that emphasizes the power of community engagement and drives systemic change.

Ready to learn more?

Chat with a community engagement expert to see how our online engagement platform can take your participation projects to the next level.

Schedule a demo