New Zealand Has An AI Trust Problem. Regulation Might Actually Help.
You’re using AI tools at work. The colleague to your left uses ChatGPT to draft emails and the colleague to your right uses Perplexity for research. Maybe you used Modus to draft fifty new position descriptions. This adoption puts you in the 87% of New Zealand businesses now using AI (which is up from 48% just two years ago). But… Do you trust it? Does your team? What about your customers and clients?
The data says probably not. Only 34% of New Zealanders trust the AI we’re using every day. If your organisation has already adopted AI, your AI journey may already be eroded by mistrust.
We need clearer rules, though we might not admit it.
I've worked with organisations across New Zealand, and now live and work in the EU. With one foot in each country, you could say I’m also waist-deep in AI… and I’m noticing a pattern. Rapid adoption, slow trust-building, and a growing gap between what businesses do with AI and what their people feel comfortable with. I’m reading between the lines and hearing “we need clearer rules”.
Here's what the data says. The majority of New Zealanders actually want AI regulation. Specifically, 89% of Kiwis want laws to combat AI-generated misinformation, and only 23% believe current safeguards are sufficient to make AI use safe.
This isn't just a public poll or an isolated research piece. Just last month, more than twenty AI experts signed an open letter calling on the government to introduce binding, risk-based AI regulation. These are people who understand both the technology and its implications, and their message was clear: regulatory uncertainty is causing AI to be used in areas where it shouldn’t be. Without proper, responsible AI governance, we risk causing real-world harm.
New Zealand was the last OECD country to develop an AI strategy. It’s described as "light-touch" and principles-based. Business-friendly? Maybe. But there’s a paradox staring at us: businesses are rapidly adopting AI while individual consumers don’t trust it. That doesn’t seem like a recipe for sustainable growth.
Building without a blueprint.
It’s a little like building a house without knowing the building code. Some companies over-engineer their safeguards (which is expensive and slow), others wing it (which is risky and inconsistent), and nobody knows if they're getting it right until something goes wrong or until regulations eventually appear and force serious retrofitting (which is costly).
If it’s not “we need clearer rules”, maybe it’s “we need a clearer understanding”. If your people don’t understand AI, or how to use it, some will use it recklessly because they don't understand the risks, and others will refuse to use it at all because they don't understand the safeguards. Either way, nobody will have consistent criteria for using AI appropriately. Decision-making will default back to individual comfort levels, politics, or whoever shouts loudest (and we all know how that goes). You’d get fragmented adoption, inconsistent practices, and… exactly the kind of trust erosion we're seeing in the data.
Clear frameworks, like AI literacy requirements, might not be bureaucratic barriers. They could be a blueprint that lets you and your organisation go forth into the world of AI with confidence.
New Zealand could learn from EU AI regulation.
The EU AI Act has taken a risk-based approach to regulation. As part of the rollout of compliance requirements under the Act, from February 2025, any company deploying AI systems must ensure their staff have "a sufficient level of AI literacy." I quite like this. It’s not forcing everyone to become data scientists or programmers. It's about making sure people understand the AI tools they're using - the opportunities, the risks, and how to use them responsibly.
New Zealand ranks 40th on the Oxford Government AI Readiness Index. Why? Well, partly due to low AI literacy. If regulation required businesses to invest in AI education, we'd be addressing two problems at once: the trust problem (the 34% confidence rate) and the capability problem (that sad 40th ranking).
New Zealand has done this before. When the Privacy Act was updated in 2020, it was deliberately aligned with GDPR principles. Maybe it’s time to follow the same logic. Being last to develop an AI strategy gives some runway to learn from the EU's implementation, observing what works and what doesn’t before committing to a clearer path forward.
“Educate the people!” said the change manager.
If we adopted AI literacy requirements like the EU, our approach to improving AI literacy in New Zealand organisations could look something like this.
Assessment: Map the tools you’re currently using, identify the roles that interact with them, and assess the level of risk. A plumbing business using AI for scheduling is different from a head office using AI for recruiting.
Training: Develop role-based training, focussing on what AI can do well, what it can’t do, what it shouldn’t be used for, and what safeguards are in place (like escalation channels and what to do when something goes wrong).
Application: Develop a decision-making framework. When someone suggests a new use-case for AI, evaluate it against pre-agreed, consistent criteria.
Review: Measure whether it’s working. Are your people using AI more? Do they trust it more?
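To make the Application step concrete: "pre-agreed criteria" can be as simple as a shared checklist that anyone can run a proposed use-case through. Here’s a toy sketch in Python, where the criteria names and the tiering rules are entirely hypothetical (loosely echoing the EU AI Act's risk-based approach), not drawn from any actual regulation:

```python
# Hypothetical sketch: tier a proposed AI use-case by pre-agreed criteria.
# Criteria and thresholds are illustrative only, not from the EU AI Act.

def assess_use_case(handles_personal_data: bool,
                    affects_people_decisions: bool,
                    human_reviews_output: bool) -> str:
    """Return a risk tier for a proposed AI use-case."""
    if affects_people_decisions:
        # e.g. AI-assisted recruiting or performance reviews
        return "high"
    if handles_personal_data and not human_reviews_output:
        return "limited"
    return "minimal"

# The plumbing business scheduling jobs vs. head office screening candidates:
print(assess_use_case(False, False, True))  # scheduling -> minimal
print(assess_use_case(True, True, False))   # recruiting -> high
```

The point isn’t the code; it’s that the criteria are written down once and applied the same way every time, instead of being re-litigated per request.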
AI literacy is a good start for New Zealand organisations.
I think this is one of the answers to closing the trust gap that’s holding back 66% of New Zealanders from embracing AI at work. With public support at 81% and experts calling for regulation, NZ’s “light-touch” AI strategy will need to be followed by real regulation eventually. The question is whether you’ll be ahead of it or scrambling to catch up. Whether you'll build AI capabilities on a foundation of clarity and trust, or retrofit compliance later at a higher cost.
Start with developing AI literacy requirements, implement them proportionately, and maybe we’ll start seeing what most Kiwis are asking for: confident adoption with clear safeguards.