From hype to reality: Answering your questions on AI in government

24 September 2025 · 7 mins

Critical questions raised during our recent webinar, ‘AI in New Zealand government: Building trust through data, privacy and strong foundations’, underscore that simply implementing AI is not enough; it must be done in a way that builds and maintains public trust. Read on for our perspective on these key concerns.

Is a "one-size-fits-all" AI strategy enough?

It’s important to differentiate between general AI and Generative AI (GenAI) in your policies and education.

While GenAI is a type of AI, its ability to create brand-new content introduces unique risks that demand specific attention. A predictive model that flags likely fraud carries very different risks from a tool that drafts a public-facing report.

Our controls, language, and training must reflect this. When educating your workforce, go beyond general AI ethics to cover the specific guardrails for GenAI: the risk of hallucinations (plausible-sounding but false information), the need for human review of every output, and the rules around what data can be used as input. To build trust, we must be precise.
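
To make that precision tangible, here’s a minimal sketch of what a GenAI guardrail could look like as code. Everything in it (the data classifications, the blocked-input list, the review flag) is a hypothetical illustration of policy-as-code, not a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical data classifications an agency might bar from GenAI prompts.
BLOCKED_INPUT_CLASSES = {"personal", "restricted", "legally-privileged"}

@dataclass
class GenAIRequest:
    prompt: str
    data_classification: str  # e.g. "public", "personal", "restricted"

def check_guardrails(request: GenAIRequest) -> dict:
    """Illustrative GenAI-specific checks: input rules plus mandatory review."""
    if request.data_classification in BLOCKED_INPUT_CLASSES:
        return {"allowed": False, "reason": "data class not permitted as GenAI input"}
    # Every permitted request still yields only a draft: a human must
    # review the output before it is used or published.
    return {"allowed": True, "requires_human_review": True}

print(check_guardrails(GenAIRequest("Summarise this published report", "public")))
print(check_guardrails(GenAIRequest("Draft a reply about this client", "personal")))
```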

How do we build bias-free AI?

The fear of embedding bias in AI systems is a major concern, and rightly so. The unfortunate truth is that AI models are only as good as the data they're trained on. If that data reflects historical biases, the AI will amplify them.

So, how do you prevent this? It starts at the source. You must actively seek out diverse and representative datasets for training, and then clean and preprocess that data to remove or mitigate any existing biases.

Additionally, it’s important to implement continuous monitoring. Bias isn't a one-time fix. You must regularly audit models, establish clear data governance frameworks, and prioritise human oversight. Explainable AI models can also help, providing a window into how decisions are made, making it easier to spot and correct biases.
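
To show what continuous monitoring can look like in practice, here’s a small sketch that compares selection rates across demographic groups in a model’s decision log. The groups, the data and the 80% threshold are all illustrative assumptions; a real audit would draw on your own governance framework and richer fairness metrics.

```python
from collections import defaultdict

# Hypothetical audit log: (demographic_group, model_decision) pairs,
# where 1 is a positive decision (e.g. application approved).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rates(records):
    """Rate of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Flag the model for review if any group's rate falls below 80% of the
# highest rate (a rule-of-thumb threshold, used purely for illustration).
benchmark = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * benchmark}
print(rates, "-> needs review:" if flagged else "-> within threshold:", flagged)
```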

Is AI's impact overestimated?

There was a great point raised about the gap between high expectations and the current reality of AI adoption in the public sector. Initial progress seems most evident in science and research, not necessarily in day-to-day government operations.

You're likely correct that we tend to overestimate the short-term impact of new technologies and underestimate their long-term transformative power.

In the next two years, the public sector’s pace will be deliberate. We’ll see targeted pilots and proofs of concept, not sweeping systemic change, reflecting the need for rigorous ethical frameworks, the constraints of legacy systems, and strict privacy laws.

Over the next decade, however, AI will transform how government operates. Imagine proactive policy-making where AI predicts social trends, hyper-personalised public services, and optimised city infrastructure. The key is to use this period of measured adoption to build the strong data infrastructure and ethical guidelines needed to unlock that long-term potential.

Data sovereignty: The elephant in the room

As government agencies explore using powerful offshore LLMs, a critical question emerges: what happens to our data? When you feed information into a model hosted overseas, you enter a different legal and jurisdictional landscape.

It’s worth asking: where is your data processed and stored? Could a foreign government compel access to it? And can you guarantee compliance with New Zealand’s Privacy Act 2020?

A sensible approach requires extreme due diligence. Only use vendors with clear commitments to data sovereignty. Where possible, anonymise data before using offshore models or explore on-premise solutions. The bottom line is to never sacrifice a secure, trusted data environment for the convenience of an overseas vendor.
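
As one illustration of anonymising before you send, the sketch below redacts obvious identifiers from a prompt before it leaves your environment. The regex patterns are deliberately simplistic placeholders; production-grade redaction would need coverage of New Zealand-specific identifiers (IRD and NHI numbers, for example) and dedicated PII-detection tooling.

```python
import re

# Simplistic placeholder patterns; real redaction needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?64|0)[\s-]?\d[\d\s-]{6,}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with typed placeholders before any offshore call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane on 021 555 0199 or jane.doe@example.govt.nz about her claim."
print(redact(prompt))
# -> "Contact Jane on [PHONE] or [EMAIL] about her claim."
```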

The environmental impact of AI: An ethical blind spot?

It was inspiring to hear that younger people in your organisations are raising concerns about the environmental impact of AI. This is an aspect that shouldn’t be ignored. Training and running large AI models consume massive amounts of energy and water.

Even if this isn’t a government reporting requirement yet, it’s important to take an ethical approach. Start by prioritising energy-efficient models, advocating for green AI principles, and asking vendors about their sustainability practices to ensure they align with your organisation’s values. Also consider voluntary reporting to track and manage your AI carbon footprint.
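
If voluntary reporting sounds daunting, a first step can be as simple as turning usage logs into rough energy and emissions estimates. In the sketch below, the per-token energy figure and the grid emissions factor are placeholder assumptions, not published benchmarks; substitute figures from your vendors’ disclosures and your electricity supplier.

```python
# Placeholder assumptions, not published benchmarks: swap in figures from
# your vendor's sustainability disclosures and your electricity supplier.
KWH_PER_1K_TOKENS = 0.0003   # assumed energy per 1,000 tokens processed
KG_CO2E_PER_KWH = 0.1        # assumed grid emissions factor

def estimate_footprint(tokens_processed: int) -> dict:
    """Rough energy and emissions estimate for a period of GenAI usage."""
    kwh = tokens_processed / 1000 * KWH_PER_1K_TOKENS
    return {"kwh": round(kwh, 2), "kg_co2e": round(kwh * KG_CO2E_PER_KWH, 2)}

# e.g. 50 million tokens in a month across an agency
print(estimate_footprint(50_000_000))  # {'kwh': 15.0, 'kg_co2e': 1.5}
```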

Building ethical AI means considering its impact on people and the planet.

The myth of effortless time-savings

Finally, let’s address the tension between using AI for time savings and the need to continuously check its output.

The truth is, AI is not a "set it and forget it" tool. For the foreseeable future, AI is an augmentation tool, not a replacement. It can significantly speed up the first draft or initial analysis, but the time saved is then re-invested in a new, critical task: verification and curation.

The human-in-the-loop provides the essential judgment, ethical oversight, and contextual understanding that AI lacks. This shift from "creation" to "critical assessment" is the real productivity gain. The goal isn't to eliminate work but to elevate it. We save time on the mundane so we can spend more on the strategic and the nuanced.

These questions highlight the complexity of the road ahead, but they also show our collective commitment to getting it right. By having these honest conversations, we can build a foundation of trust that will serve government agencies for years to come.


Watch the webinar replay