Generative AI: A Crisis of Trust for Agencies and Enterprises
A recent incident has sent shockwaves through the world of professional communication: a renowned consulting firm refunded part of its fee to the Australian government after a report containing major factual errors was published. The errors stemmed from generative AI output that was incorporated into the final report without adequate verification. Since then, the public, trade media, and industry experts have been engaged in a vigorous debate about how to manage the use of AI in communications going forward. This situation is not an isolated case. In agencies and corporate communications departments, the demand for transparency and security in the use of new technologies is growing. Digital tools and AI-powered systems are becoming ever more integral to daily workflows, but with this increased reliance come higher expectations for control, traceability, and accountability.

New Challenges for Everyday Work in Communications Departments
The debate around generative AI and its impact on professional routines is highly relevant for communications professionals, as it directly affects the quality of reports, analyses, and strategies. Especially where decisions are based on data and recommendations, the integrity of content is paramount. Tools like ChatGPT, Gemini, and similar solutions have become standard practice, but how can organizations ensure that results are reliable, traceable, and accurate?
Communications, branding, and digital agencies often work with complex data, studies, and reports. The incident involving the erroneous report demonstrates that human oversight remains essential. At the same time, there is mounting pressure to work efficiently and manage budgets. This creates a tension between automation and quality assurance that calls for new processes and methodologies.
Incomplete Oversight: Risks Associated with Generative AI
A key risk in using generative AI is the phenomenon of “hallucination,” where AI models generate seemingly plausible but entirely fabricated information. In the case involving the Australian government, a report was submitted containing invented quotes and references. Initially, these errors went unnoticed, only to be uncovered by vigilant academics. The consulting firm's reputation suffered significantly, as did the client's trust in external service providers.
This situation is symptomatic of the increasing complexity in agency-client collaboration. The faster pace of content creation must not come at the expense of substantive accuracy. Particularly in strategic areas such as brand positioning, content strategy, or communications planning, it is crucial to ensure that AI-powered tools are used properly.
Transparency and Disclosure as New Industry Standards
The incident has clearly demonstrated that openness in the use of AI applications must become standard practice. In practical terms, this means that anyone using AI tools for drafting texts, analyses, or presentations should disclose this fact. Industry associations, media, and academia are now calling for clear guidelines and mandatory transparency. Reviewing and disclosing the methods used is becoming a core component of quality standards.
Documenting sources and methodologies is now a hallmark of professional practice. Companies that commission reports, analyses, or strategies from external agencies expect transparent processes and robust outcomes. Disclosing the use of AI, for instance in the methodology or appendix section, not only reduces the risk of errors but also strengthens trust in the partnership.
Critical Sources of Error and Their Impact on Business Operations
Faulty content can have far-reaching consequences—from reputational damage and financial loss to strategic missteps. In the aforementioned case, the consulting firm was required to refund a portion of its fee, and it suffered significant reputational harm. The government is now reviewing whether and how the engagement will continue. Similar errors in the private sector also result in loss of trust, uncertainty, and increased oversight requirements.
The challenge lies in designing AI-assisted processes that can be seamlessly integrated into the daily workflows of communications departments and agencies. This includes content creation, research, analysis, and presentation. When oversight is lacking or errors go undetected, the quality of work suffers—and, consequently, so does the perception of the brand or organization.
Innovative Methods for Risk Mitigation and Quality Assurance
To minimize risks and ensure quality, leading agencies are adopting multi-stage review processes. Generative AI is used as an assistive tool, not as the sole source of content. All AI-generated materials are reviewed, validated, and, if necessary, edited by human experts. Structured checklists, source verification, and peer-review procedures are standard practice.
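To make this concrete, here is a minimal sketch in Python of one automated layer such a review pipeline might include: a script that extracts the URLs cited in a draft and flags any that fail to resolve, so human reviewers can focus on substantive accuracy. The draft file name and the helper itself are hypothetical illustrations, not part of any specific product.

```python
# Minimal sketch of one automated layer in a multi-stage review pipeline:
# extract URLs cited in a draft and flag any that do not resolve.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def find_broken_urls(draft_text: str, timeout: float = 10.0) -> list[str]:
    """Return cited URLs that do not resolve (HTTP errors, timeouts)."""
    broken = []
    for url in sorted(set(URL_PATTERN.findall(draft_text))):
        try:
            # HEAD keeps the check lightweight; some servers reject HEAD,
            # so a production version might fall back to GET.
            request = urllib.request.Request(
                url, method="HEAD", headers={"User-Agent": "cite-check/0.1"}
            )
            urllib.request.urlopen(request, timeout=timeout)
        except Exception:
            broken.append(url)
    return broken

if __name__ == "__main__":
    with open("draft_report.txt", encoding="utf-8") as f:  # hypothetical draft
        draft = f.read()
    for url in find_broken_urls(draft):
        print(f"UNRESOLVED: {url}")
```

A check like this catches only dead links; it cannot tell whether a live source actually supports the claim it is attached to, which is why the human review stage remains decisive.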
Another approach is the development and use of domain-specific AI models. Rather than relying solely on generic systems like GPT-4, specialized models are trained on industry-specific content. This increases accuracy, reduces the risk of hallucinations, and enhances the traceability of results. In sensitive fields such as law, finance, or science, this is already standard practice.
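To illustrate what training on industry-specific content can look like, the following sketch fine-tunes a small open causal language model on a curated domain corpus using the open-source Hugging Face transformers library. The checkpoint name and corpus file are placeholder assumptions; in practice the corpus would be a vetted set of texts already verified by subject matter experts.

```python
# Minimal sketch: fine-tune a small open causal language model on a curated
# domain corpus. "distilgpt2" and "domain_corpus.txt" are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

CHECKPOINT = "distilgpt2"  # any small causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
tokenizer.pad_token = tokenizer.eos_token  # distilgpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)

# One verified domain document per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized["train"],
    # mlm=False configures plain next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Fine-tuning alone does not eliminate hallucination; it steers the model toward vetted material and domain vocabulary, which is why the review processes described above still apply.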
Case Study: The Deloitte Report Errors and Their Consequences
Deloitte Australia published a report for the Department of Employment and Workplace Relations (DEWR) that later came under fire for serious errors. Academics identified up to 20 incorrect or fabricated quotes and references, including a supposed book by Professor Lisa Burton Crawford that does not exist and an entirely fabricated citation of a court ruling. It emerged that the company had used the generative AI model Azure OpenAI GPT-4o in preparing the report (Sources: ABC News, Yabble, Australian Financial Review).
Initially, the AI-generated content was not labeled as such. Only after public criticism and an internal review was the report revised, the erroneous passages removed, and the use of AI disclosed. Deloitte agreed to repay the final installment of its AUD 440,000 fee, and the government is reviewing the ongoing engagement. The incident has sparked a sector-wide debate about new standards for transparency, disclosure, and quality assurance in AI use. Experts are now calling for mandatory review processes, disclosure requirements, and the adoption of specialized AI models.
First Steps Towards Safe and Transparent AI Usage
To avoid the risks described above and maintain the quality of your communications, a structured approach to implementing AI is recommended:
1. Develop Clear Internal Guidelines
- Define which processes may involve generative AI.
- Establish documentation and disclosure requirements for AI usage (a minimal logging sketch follows this list).
2. Ensure Human Oversight
- All AI-generated content must be reviewed by subject matter experts.
- Ultimate responsibility for final results remains with humans.
3. Verify Sources and Citations
- All AI-generated sources and citations must be thoroughly checked.
- Do not include unchecked content in reports or presentations.
4. Provide Transparency to Clients and Stakeholders
- Disclose AI usage in methodology or appendix sections.
- Provide prompt history or AI-specific settings upon request.
5. Implement Ongoing Training and Awareness Programs
- Regularly update team members on new AI trends and associated risks.
- Build awareness of AI hallucinations and common sources of error.
6. Consider the Use of Specialized AI Models
- Use domain-specific AI models for particularly critical tasks.
- Collaborate with technology partners to develop custom solutions.
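To show how steps 1 through 4 can reinforce each other in practice, the sketch below logs each AI-assisted task as an auditable JSON Lines record capturing the model, the full prompt history, and the human sign-off. The field names and log path are illustrative assumptions, not an industry standard.

```python
# Minimal sketch of an AI-usage audit log. Field names and the log path are
# illustrative assumptions, not an established standard.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    task: str               # what the AI assisted with
    model: str              # e.g. "gpt-4o" (step 1: documentation)
    prompts: list[str]      # full prompt history (step 4: disclosure on request)
    reviewed_by: str        # human sign-off (step 2: oversight)
    sources_verified: bool  # citation check completed (step 3)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one record per AI-assisted task to a JSON Lines file."""
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(record)) + "\n")

log_usage(AIUsageRecord(
    task="summarize stakeholder interviews for a draft report",
    model="gpt-4o",
    prompts=["Summarize the key themes in the attached interview notes."],
    reviewed_by="j.doe",
    sources_verified=True,
))
```

Because records are appended rather than overwritten, the file doubles as the prompt history an agency can hand over when a client asks how a deliverable was produced.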
Future-Proof Communication Processes Through Expertise and Methodological Competence
Agencies that combine creative excellence, analytical prowess, and sector-specific expertise are especially well positioned to meet the challenges of AI-driven communications. The ability to clearly explain innovative methods, identify trends, and communicate transparently fosters trust in client relationships. By demystifying how generative AI works and raising awareness of potential risks among non-specialists, agencies can strengthen their brand and support clients on the path to sustainable, secure, and successful communication strategies.
