The Demand Machine: The Realities of AI-Powered Public Service

Brief
Feb. 11, 2026

Key Findings

  • This brief describes, and provides concrete guidance on, a looming AI challenge that has received little attention: increased demand for government services.
  • By lowering friction, AI makes it easier for residents to interact with government—resulting in more service requests, not fewer.
  • Without planning, this demand surge risks overwhelming already-stretched public systems.
  • Treating AI as a cost-cutting tool is misleading. AI is reshaping capacity, staffing, and trust, and processes will need to change rapidly in response.

Introduction: The New AI Operating Environment

Demand for public services is increasing faster than governments can respond. The idea that technology is purely an efficiency booster is deeply incomplete. New technologies, especially artificial intelligence (AI), add work for government administrators even as they improve efficiency. The reality is that AI uncovers and unleashes unmet needs, increasing demands on governments and communities. These demands aren’t necessarily new, but they are surfacing with a speed and visibility that governments have not yet experienced.

In practice, AI acts less like a labor-saving device and more like a demand machine. It lowers barriers for residents to request services, apply for benefits, file complaints, and seek help, thereby surfacing needs that were previously hidden by friction, time, or bureaucratic complexity. The result is not less work for governments, but more, and often different, work. In the early stages of adoption, earning public trust is essential infrastructure that enables continued use, institutional learning, and eventual efficiency.

Ground Truth: What’s Actually Happening

In the best circumstances, new technologies can indeed create efficiencies: In health care, for example, AI can facilitate more health screenings and better diagnoses. But the flip side is that this also creates a need for more follow-up care, treatments, and system capacity. Instead of simply “saving time,” tech tools amplify citizens’ unmet, or previously unknown, needs. In doing so, tech becomes less a tool of austerity or cost-cutting and more of a demand machine that requires rapid policy and organizational shifts to keep up.

Technology reduces friction for citizens through conversational interfaces, predictive routing, automated eligibility guidance, and triaged service tickets. That friction has often been a limiting factor in how many people file complaints, request inspections, or apply for benefits. As it falls, more people engage with government services, generating a surge in visible demand. In fact, the day before this brief was released, Harvard Business Review posted research showing the same dynamic in the private sector. AI is lowering friction for individual workers, and companies “find themselves surprised by the complex reality that AI tools didn’t reduce work, they consistently intensified it.”
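
To make the intake side of this dynamic concrete, here is a minimal sketch of the kind of triage and routing described above. It assumes a purely illustrative keyword-based router; the categories, keywords, and queue names are hypothetical, not any particular city’s taxonomy.

```python
# Illustrative sketch only: a free-text service request is classified into a
# category and routed to a department queue. Categories, keywords, and queue
# names are hypothetical assumptions for this example.

ROUTING_RULES = {
    "pothole": "public_works",
    "streetlight": "public_works",
    "noise": "code_enforcement",
    "eligible": "social_services",
    "benefits": "social_services",
}

def triage(request_text: str) -> str:
    """Route a request to a department queue; default to human review."""
    text = request_text.lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in text:
            return queue
    return "human_review"  # ambiguous requests still need staff judgment

print(triage("There is a deep pothole on Elm Street"))  # -> public_works
print(triage("Am I eligible for heating assistance?"))  # -> social_services
print(triage("My neighbor's tree fell on my fence"))    # -> human_review
```

Even this toy version shows why volume rises: every request that once died in a phone tree now lands in some queue, including a human-review queue for the ambiguous cases.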

This dynamic is not new. The experience of earlier digital service platforms shows that reducing barriers to reporting often surfaces latent demand rather than reducing workload. AI-enabled systems extend this dynamic by increasing detection, personalization, and accessibility simultaneously, speeding up demand visibility. Tools like SeeClickFix allow people to flag problems to their government, making it easier for residents to report issues like potholes and broken infrastructure. Back in 2016, more than 300 local governments were using SeeClickFix, providing residents with immediate answers about their city’s plans to address submitted concerns. Similar to AI’s promise, these apps purportedly created a new citizen dynamic in which constituents could directly interact with their government. Vendor data showing increases in reporting volume are often interpreted as increased public need, but those increases may instead reflect an increased ability to express demand.

The promise sounded too good to be true, and it was; these apps led to serious unintended consequences. Stephen Goldsmith and an author of this brief, Neil Kleiman, reported on this problem in their book, A New City O/S, where one local administrator in Minnesota said, “My mayor insisted we adopt this response platform, which is great on the front end with residents. But we did nothing to change the back-end—the guts of how we work to fill potholes and meet other requests. So, we had the same slow response time as before and [layered on top of that] new citizens’ requests coming in, so in the end we went backwards in terms of speed.” In Boston, the Chief of Streets at the time, Chris Osgood, reported, “I am a big proponent of the you-call-we-respond. However, if it is the sole way of managing operations and prioritizing investments, it can pull focus away from more long-range and underlying issues—in effect pulling us away from being truly responsive.”

These field reports from the last tech wave show us that while new innovations can improve citizen reporting, many agencies were already struggling to keep up. The result was often a growing backlog and slower responses, both of which had the potential to erode trust. While the argument was made that technology would free up human resources to address more complex problems, there’s little evidence that this reality manifested in the last decade of civic tech work.

In our experience, this results in three types of demand patterns:

  1. Legitimate Unmet Need: If our assumptions about technology are correct, and it lowers friction and eases the ability to use public services, constituents will increase their use. Examples include residents discovering they are eligible for benefits, tenants reporting long-standing housing violations, or neighborhoods flagging infrastructure issues that were tolerated because reporting was too cumbersome. This demand reflects real needs that were likely present all along. It is also the most resource-intensive to meet because it usually requires follow-up, discretion, and sustained service. For example, New York has seen steady increases in its 311 service request system over the course of a decade, including a 2020 surge to over 3 million requests, despite little population growth.
  2. Duplicate and Low-Value Contacts: This could include chat loops and repeat submissions that jam up workflows and increase lag. A single unresolved issue can quickly turn into five or 10 separate contacts (see the sketch after this list for one way such repeats can be folded together). A different project in New York City automated ticketing in bus lanes but resulted in about 3,800 mistaken violations, many of which were for legally parked cars. Even though the error rate fell within a predictable margin, the resulting volume required increased staff time to resolve and correct.
  3. Higher-Complexity Work: Such work includes investigations and case management. As routine tasks are automated, the remaining work becomes more specific and more complex. AI-assisted triage may surface edge cases faster: situations involving overlapping jurisdictions, legal ambiguity, vulnerability, or risk. What looks like efficiency on the front end often translates into harder cases on the back end, requiring skilled staff, cross-department coordination, and judgment calls that cannot be automated away. No work could be more sensitive or complex than predictive analytics applied to child welfare. It has shifted standard case management to “humans in the loop” who must interpret and contextualize predictions to make meaningful decisions about children.
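
As a concrete illustration of pattern 2, here is a minimal, assumption-laden sketch of folding repeat contacts into underlying issues: contacts in the same category at the same location within a short window are treated as one issue. The field names and the seven-day window are hypothetical choices for the example, not a production deduplication policy.

```python
# Hedged sketch: group repeat contacts about the same issue. The data shape,
# fields, and 7-day window are illustrative assumptions for this example.

from datetime import datetime, timedelta

contacts = [
    {"category": "pothole", "location": "5th & Main", "ts": datetime(2026, 2, 1)},
    {"category": "pothole", "location": "5th & Main", "ts": datetime(2026, 2, 2)},
    {"category": "pothole", "location": "5th & Main", "ts": datetime(2026, 2, 4)},
    {"category": "streetlight", "location": "Oak Ave", "ts": datetime(2026, 2, 3)},
]

WINDOW = timedelta(days=7)

def group_issues(contacts):
    """Fold repeat contacts into underlying issues."""
    issues = []  # each issue: {"key": (category, location), "first": ts, "count": n}
    for c in sorted(contacts, key=lambda c: c["ts"]):
        key = (c["category"], c["location"])
        match = next(
            (i for i in issues
             if i["key"] == key and c["ts"] - i["first"] <= WINDOW),
            None,
        )
        if match:
            match["count"] += 1  # a repeat contact, not a new need
        else:
            issues.append({"key": key, "first": c["ts"], "count": 1})
    return issues

for issue in group_issues(contacts):
    print(issue["key"], "contacts:", issue["count"])
# ('pothole', '5th & Main') contacts: 3  -> one issue, three contacts
# ('streetlight', 'Oak Ave') contacts: 1
```

The point of even a crude grouping like this is diagnostic: it separates three contacts about one pothole from three potholes, which changes both the workload estimate and the staffing response.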

Collectively, these are examples of successful tech implementations that institutions failed to adapt around. AI is likely to reproduce the same dynamic on steroids: it can provide near real-time feedback on needs and failure points. While most local governments are in the early stages of implementing AI, it will soon produce clearly defined citizen demands at scale, exposing gaps in staffing, skills, and systems. The “efficiency gains” promised by AI may simply lead to higher workloads, not lower ones.

Of course, AI will not only increase demand but also automate responses. The question is not whether AI or automation will be used, but whether they will be ready when demand accelerates faster than the capacity to respond. Theoretically, automation should allow governments to respond to higher service volumes more quickly. However, demand can be automated faster than trusted responses can be delivered, especially when responses require discretion, policy judgment, or cross-agency coordination.

Thus, the risk is not automation per se, but automation without redesign. At present, many public institutions are pursuing ways to speed up response times, exploring how AI can be used for intake and triage and, increasingly, for service delivery itself. Early pilots optimize for speed, feasibility, and what’s minimally viable—not what’s maximally usable. Bridging the distance between pilot and scale requires a strategic approach to policy, staffing, and bias.

Where Hype Diverges from Reality: Responding to Demands in the AI Era

AI will inevitably produce meaningful gains. We have been in the civic tech movement for decades and have seen these patterns repeat across tools and vendors. Long before algorithms, public service inherited a common belief that if civic intelligence could be centralized and automated, governance would finally become rational, kind, frictionless, and responsive.

The dominant narrative suggests AI will replace workers and streamline government services. However, adoption patterns in public-sector technologies suggest that participation and expectations shift earlier than efficiency gains, creating an interim period where demand grows faster than an institution’s ability to adapt. Before long-term changes take hold, the more likely immediate scenario is the opposite: Human work increases while systems strain under faster, more detailed demand. In this context, “institutional pressure” is the growing gap between public demand and the speed at which governments can deliver quality services.

Governments cannot turn away demand or shed labor without heavy consequences, which means that AI will inevitably cause strain before it delivers efficiency. AI doesn’t simply reallocate existing capacity more efficiently or “free up” time; it expands the volume and complexity of interactions public institutions must handle. The efficiency gains promised by AI often translate into more immediate, higher organizational workloads, not lower ones.

The reality is that we need to move past this austerity framework, in which value is measured in cost or time savings, and shift value assessment toward trust and service quality in order to address rising demand. Labeling AI as an efficiency tool obscures these systemic effects and creates a dangerous operational blind spot.

Why Organizations Often Struggle to Keep Up

Governments are legally, and often ethically, required to absorb public demand; opting out is not an option. Yet many will either be frozen by a skeptical, low-trust public that points to AI-driven errors, or avoid using AI altogether. Several recurring constraints make responding to demand in the AI era challenging.

  • There’s No Shared Strategy for Advancing Public Value: Demand for public services is growing faster than institutions can make sense of it, and federal funding cuts are reducing services just as they’re needed most. Without an approach and a clear position on how this technology will be used to benefit the public, demand becomes noise instead of guidance. There is also a disconnect between top-down government implementation and the painful experiences of constituents who are often excluded from decision-making processes.
  • Metrics Are Misaligned: Drawing from the private sector, government success metrics tend to focus on cost savings and speed. Success measurements are rooted in efficiency and compliance rather than trust and public benefit. Finally, distinguishing long-hidden community needs from net-new demand remains an open challenge.
  • Institutions Are Not Built to Learn in Real Time: AI is not a back-office upgrade; it fundamentally changes the way an organization works. It is simply not possible to just plug in this new technology: culture change and process improvements must accompany the implementation of any new system.

Recommendations: How to Respond to the Pressures of AI-Driven Demand

For public institutions, responding to these increased pressures of AI requires short- and long-range planning. Executable plans are not always a public-sector strength, but the advent of AI demands them. And if local governments focus on building resident trust, much good can come from strategic forethought.

AI creates demand and ultimately institutional pressures on volume, quality expectations, resources, and governance. As a result, we recommend creating an adaptation plan that addresses the following priorities:

  • Forecast Volume: Before launching any new technology services, and particularly AI-enabled services, governments should model multiple scenarios with increased service demand (a minimal forecasting sketch follows this list). What if requests increase by 10 percent? Or by 50 percent? The objective isn’t perfect accuracy; it’s avoiding foreseeable surprise. Lower barriers will surface unmet needs from communities that have historically been excluded or deterred, and public institutions must be equipped to listen and respond to evolving needs.
  • Define Service-Level Targets: Organizations need to define what “good” looks like, particularly around response times, but most importantly, the quality of the service delivery. How might the actions and activities of government staff and constituents shift and change?
  • Adjust Budgeting and Staffing at the Outset: If you treat AI as a cost-cutting tool, you risk underresourcing public agencies—right when demand spikes. Instead, adapt quickly, anticipate increased demand, and budget accordingly. Public institutions will need to rapidly augment their resources, skills, and structures to meet this demand.
  • Establish Where and When Humans Are in the Loop: Human discretion becomes more, not less, important as these technologies are implemented. The most important decision-making junctures need human oversight, and we need to understand those critical touchpoints before any new AI-enabled service is launched.
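
To ground the volume-forecasting recommendation, here is a minimal sketch of the scenario modeling described above: it compares projected weekly request volumes against staff capacity to show where a backlog would begin to grow. Every number in it (baseline volume, resolution rate, headcount, and the 10 and 50 percent uplifts) is a hypothetical placeholder, not real agency data.

```python
# Minimal scenario model: projected demand vs. staff capacity over 12 weeks.
# All figures below are hypothetical placeholders, not real agency data.

BASELINE_WEEKLY_REQUESTS = 2_000   # current intake, assumed
REQUESTS_PER_STAFF_PER_WEEK = 40   # resolution rate, assumed
CURRENT_STAFF = 45                 # headcount, assumed

def scenario(uplift: float, weeks: int = 12) -> int:
    """Return the unresolved backlog after `weeks` under a demand uplift."""
    demand = BASELINE_WEEKLY_REQUESTS * (1 + uplift)
    capacity = CURRENT_STAFF * REQUESTS_PER_STAFF_PER_WEEK
    backlog = 0
    for _ in range(weeks):
        backlog = max(0, backlog + demand - capacity)  # backlog never negative
    return round(backlog)

for uplift in (0.0, 0.10, 0.50):
    print(f"+{uplift:.0%} demand -> backlog after 12 weeks: {scenario(uplift):,}")
# +0% demand -> backlog after 12 weeks: 2,400   (already stretched)
# +10% demand -> backlog after 12 weeks: 4,800
# +50% demand -> backlog after 12 weeks: 14,400
```

Even a spreadsheet-level model like this makes the core point visible: if capacity already trails baseline demand, any AI-driven uplift compounds the backlog week over week, which is exactly the strain the budgeting and staffing recommendation anticipates.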

An adaptation plan responds directly to these pressures and lays out how governments can absorb demand without sacrificing trust. By working through an Adapt-Listen-Trust (ALT) approach, governments can continue to respond to evolving resident experience and institutional learning. AI is often sold as a tool for doing more with less, but that’s the wrong approach. AI doesn’t reduce the need for public service; it reveals its critical and increased importance. Government organizations should take that demand as a signal to invest in public capacity, not run away from it.