Nexthink Report | Page 14

NEXTHINK

Prioritising responsible AI

Given Nexthink's autonomous agents have the potential to interact with 25 million users across major enterprises, responsible AI must sit at the heart of everything the company does.
The organisation has established a cross-functional AI committee, drawing together representatives from engineering, product, legal, security and field teams. The committee meets weekly, and every AI feature that goes to production must pass through it.
“Enterprise trust is very important and responsible AI is super critical,” continues Moe. “Every feature that needs to go to production, or anything we're doing with AI, goes through our committee.
We validate that we're doing the right thing and that it's compliant with all of the relevant acts. Security and privacy are baked into everything.”
Beyond governance, responsible AI at Nexthink means transparency in AI reasoning, continuous production monitoring and protection against threats such as prompt injection and model theft. Evaluations are run before any deployment, and benchmarks are used to ensure models are grounded and operating as intended.
The committee structure also distributes accountability. Rather than responsible AI being owned solely by a compliance or legal team, it is a shared discipline across the organisation.
14 May 2026