
SYDNEY, AUSTRALIA — A recent report revealed that KPMG Australia has fined one of its partners A$10,000 (US$7,000) for using AI to cheat on an internal training course about the technology itself. The news follows the consultancy’s discovery that more than two dozen of its personnel had misused AI tools in their exams since July.
Like most organizations, we have been grappling with the role and use of AI as it relates to internal training and testing. It is a very hard thing to get on top of, given how quickly society has embraced it.
— Andrew Yates, KPMG Australia Chief Executive
The partner was found to have breached the consultancy’s rules by uploading its reference manual to an AI tool to generate answers for the exam. The misconduct was detected in August using KPMG’s own detection tools.
The detection follows the company’s rollout of improved exam-monitoring tools, built on existing measures adopted after a series of similar cases between 2016 and 2020.
KPMG also plans to report these AI-related cheating cases in its annual results, adding to its existing disclosures, a move expected to raise expectations of transparency across professional services firms.
Why This News Matters
With AI use increasing rapidly, the news serves as a cautionary tale for companies still grappling with effective governance.
It underscores a broader tension: employees are expected to be proficient with AI, yet organizations must ensure that shortcuts are not substituted for genuine learning. The issue is not only that the technology can disrupt internal training; it can also quickly erode an organization’s trust and ethical standards.
Consequently, clients and employees alike must pay attention to how they use such tools in professional contexts, considering not just whether the results are good, but how those results are achieved.
