Trusted AI describes artificial-intelligence systems that are dependable, compliant, and transparently traceable. They are applied across editorial, marketing and product teams, operating within clearly defined parameters for quality, safety, transparency, copyright, and human oversight (Human-in-the-Loop). The goal is to safeguard editorial integrity, prevent misinformation, secure revenue streams, and unlock efficiency gains in a responsible manner. Recent industry studies show that audience scepticism towards AI in journalism remains high – trust can only be achieved through clear standards and robust governance that meet the highest professional and ethical expectations.
Trusted AI means deploying artificial intelligence in ways that are transparent, fair, secure, and subject to human control – strengthening trust in both the technology and the content it produces rather than putting that trust at risk. While Trusted AI describes the result – an AI system that people genuinely trust – Trustworthy AI refers to the attributes and principles that make a system worthy of that trust, such as traceability, ethical standards, data protection, and technical robustness.
Trust has become a defining differentiator for media brands. AI-generated content that is poorly labelled, inaccurate, or biased threatens credibility and weakens audience loyalty. Trusted AI protects brand value by guaranteeing transparency, editorial responsibility, and rigorous quality standards.
Clean data sources, traceable licence chains and transparent content origins not only protect intellectual property but also open up new business models. Organisations can license and monetise content, data or training materials securely and legally – without the risk of infringement or compliance breaches.
Open standards for content provenance ensure technical traceability throughout the entire content lifecycle. This makes it possible to track material back to its source, identify potential errors, and maintain consistent quality – a key building block for trustworthy AI-driven products.
Although AI is already an established part of editorial and production workflows, many organisations face similar obstacles. The crucial task is to identify potential risks early and address them through clear standards and structured processes.
Unlabelled AI Content: A growing volume of content is already created or edited with AI assistance, without users being aware of it. This lack of transparency fosters uncertainty and erodes trust.
Licence Gaps: Many AI models rely on data sources whose legal origin or usage rights are unclear, creating risks around copyright, brand reputation and compliance.
Deepfakes & Manipulation: As generative models become more powerful, the risk of manipulated images, audio and video increases. Without detection mechanisms, this can spread misinformation and damage brand credibility.
Trust Deficits: Even responsibly deployed AI can provoke scepticism if its workings remain opaque. Insufficient communication about function, control and benefit leads to mistrust.
Governance & Accountability
Clearly defined roles, approval processes, and editorial codes of conduct for AI usage – supported by regular audits – form the backbone of responsible AI governance.
Transparency & Labelling
Use Content Credentials (C2PA), clear user notifications, and machine-readable metadata to indicate when and how AI has been used.
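To make this concrete, the following Python sketch assembles a simplified, machine-readable provenance record for an AI-assisted article. The field names and the disclosure label are illustrative assumptions, not part of the C2PA specification or any particular CMS; a production setup would generate and cryptographically sign a full Content Credentials manifest with a C2PA-conformant toolkit.

```python
from datetime import datetime, timezone
import json

def build_provenance_record(asset_id: str, ai_tool: str, human_editor: str) -> dict:
    """Assemble a simplified, C2PA-inspired provenance record.

    Field names are illustrative assumptions; a real Content Credentials
    manifest would be created and signed with a C2PA-conformant toolkit.
    """
    return {
        "asset_id": asset_id,
        "generated": datetime.now(timezone.utc).isoformat(),
        "ai_assistance": {
            "used": True,
            "tool": ai_tool,                      # which model or tool contributed
            "scope": "first_draft_and_summary",   # what the AI actually did
        },
        "human_oversight": {
            "reviewed_by": human_editor,          # Human-in-the-Loop sign-off
            "approved": True,
        },
        "disclosure_label": "AI-assisted, human-reviewed",
    }

record = build_provenance_record("article-2025-0142", "text-generation-model", "j.doe")
# Machine-readable metadata stored alongside the asset and surfaced to
# readers as a clear, plain-language notification.
print(json.dumps(record, indent=2))
```

Such a record can travel with the asset through the publishing pipeline and be rendered to readers as a plain-language notice of when and how AI was used.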
Data & Copyright Compliance
Employ only licensed data and archives, log all data sources, and ensure contractual security with model and tool providers.
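A minimal sketch of what such source logging could look like is shown below; the schema (licence type, rights holder, permitted uses, expiry) and the example entry are assumptions chosen for illustration, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataSourceRecord:
    """One entry in a data-source log; field names are illustrative."""
    source_id: str
    description: str
    rights_holder: str
    licence_type: str                              # e.g. "agency licence", "in-house archive"
    permitted_uses: list[str] = field(default_factory=list)
    licence_expires: date | None = None
    contains_personal_data: bool = False

    def usable_for(self, purpose: str, today: date) -> bool:
        """Check that a purpose is covered and the licence has not expired."""
        if self.licence_expires and today > self.licence_expires:
            return False
        return purpose in self.permitted_uses

# Example: verify that archive material may be used for model training.
archive = DataSourceRecord(
    source_id="archive-001",
    description="In-house photo archive 2010-2020",      # hypothetical source
    rights_holder="Example Publishing House",            # hypothetical rights holder
    licence_type="in-house archive",
    permitted_uses=["editorial", "model_training"],
)
print(archive.usable_for("model_training", date.today()))  # True
```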
Human-in-the-Loop
Maintain editorial review and human oversight at every critical stage of content creation. This includes guidelines for rigorous research, factual accuracy, and forensic verification of visual or audio assets.
Security & Quality
Conduct systematic checks for bias, misinformation, and privacy issues to ensure all outputs meet internal and external quality benchmarks.
Measurability
Define and track AI-specific performance indicators: time saved, automation levels, reader engagement, content-quality scores, hallucination rates, correction ratios, time-to-publish, label coverage, and community trust signals.
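The sketch below shows how a few of these indicators – label coverage, correction ratio, and hallucination rate – could be derived from simple per-article log records. The record structure is an assumption for illustration; in practice these signals would come from the CMS, review tooling, and analytics stack.

```python
def trusted_ai_kpis(articles: list[dict]) -> dict:
    """Compute illustrative Trusted-AI indicators from per-article logs.

    Each record is assumed to carry flags such as 'ai_assisted',
    'ai_label_shown', 'corrected_after_publish' and 'hallucination_found';
    the exact schema would come from the CMS and review tooling.
    """
    ai_articles = [a for a in articles if a.get("ai_assisted")]
    if not ai_articles:
        return {"label_coverage": None, "correction_ratio": None, "hallucination_rate": None}

    n = len(ai_articles)
    return {
        # Share of AI-assisted pieces that actually carried a visible label.
        "label_coverage": sum(a.get("ai_label_shown", False) for a in ai_articles) / n,
        # Share of AI-assisted pieces that needed a correction after publishing.
        "correction_ratio": sum(a.get("corrected_after_publish", False) for a in ai_articles) / n,
        # Share of AI-assisted pieces where a factual hallucination was logged.
        "hallucination_rate": sum(a.get("hallucination_found", False) for a in ai_articles) / n,
    }

sample = [
    {"ai_assisted": True, "ai_label_shown": True, "corrected_after_publish": False, "hallucination_found": False},
    {"ai_assisted": True, "ai_label_shown": False, "corrected_after_publish": True, "hallucination_found": True},
    {"ai_assisted": False},
]
print(trusted_ai_kpis(sample))
# {'label_coverage': 0.5, 'correction_ratio': 0.5, 'hallucination_rate': 0.5}
```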
AI supports topic discovery, fact-checking, and first-draft creation, with all outputs clearly marked as AI-assisted. An editorial review ensures that quality standards are met and that AI-generated content is only published after human validation.
Automated processes – such as tagging, personalisation, audio versions or translations – include traceable metadata. This preserves the integrity of content and allows users to understand which parts were AI-assisted.
AI-driven analytics, creative variant generation, and campaign optimisation are fully documented and auditable. Data privacy and compliance are maintained through privacy-by-design workflows and transparent data handling.
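Across all three scenarios the common safeguard is that AI-generated material only reaches the audience after explicit human sign-off. The publication gate below is a minimal sketch of that rule; the class and status names are illustrative assumptions, not a specific CMS API.

```python
from enum import Enum

class ReviewStatus(Enum):
    DRAFT = "draft"
    APPROVED = "approved"
    REJECTED = "rejected"

class PublicationGate:
    """Illustrative Human-in-the-Loop gate: AI drafts need explicit approval."""

    def __init__(self) -> None:
        self._status: dict[str, ReviewStatus] = {}

    def submit_ai_draft(self, content_id: str) -> None:
        # Every AI-assisted draft enters the queue as unreviewed.
        self._status[content_id] = ReviewStatus.DRAFT

    def record_review(self, content_id: str, approved: bool) -> None:
        # The editor's decision is recorded explicitly.
        self._status[content_id] = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED

    def publish(self, content_id: str) -> bool:
        # Publishing only succeeds once a human has approved the draft.
        return self._status.get(content_id) is ReviewStatus.APPROVED

gate = PublicationGate()
gate.submit_ai_draft("story-42")
print(gate.publish("story-42"))          # False: no human approval yet
gate.record_review("story-42", approved=True)
print(gate.publish("story-42"))          # True: published after validation
```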
Responsible AI adoption requires structure, transparency, and continuous quality assurance. The following ten principles help organisations embed Trusted AI in practice – from governance and technology to cultural change.
Define company-wide rules for AI use across editorial, product, and commercial functions. A public AI policy strengthens transparency, builds trust among employees and audiences, and sets out ethical, editorial, and technical standards.
Create a comprehensive inventory of all data sources – archives, agencies, user-generated content, and training data for AI models. Review licences, rights, and privacy aspects to ensure data is lawful, traceable, and ethically sourced.
Embed C2PA (Coalition for Content Provenance and Authenticity) standards directly into editorial systems and workflows. This allows AI-generated or AI-edited content to be automatically tagged – a key step towards transparency and misinformation prevention.
Integrate Human-in-the-Loop mechanisms throughout content processes – especially for editorial decisions, image selection, and publication. Human professionals retain ultimate control over what is published and how models are adjusted.
Set measurable quality and safety benchmarks – such as indicators for bias, misinformation, algorithmic errors, or content variance. Define how AI outputs are evaluated, corrected, and continuously improved across all channels.
Evaluate each AI use case using predefined criteria such as reach, reputational risk, data protection, copyright, and legal exposure (a minimal scoring sketch follows this list). This helps identify where AI offers tangible value – and where caution, transparency, or additional oversight is necessary.
Regularly audit external providers for technical security, data processing, and server locations. Document the handling of IP addresses, logs, and storage locations to ensure full compliance with privacy and data-protection laws.
Be open about where and how AI is used – through interface notices, FAQs, or opt-in settings. Transparent communication increases acceptance, prevents misunderstanding, and reinforces trust in AI-supported products.
Train staff in editorial, product, and sales teams on AI literacy. Launch change programmes that combine technical expertise, ethical awareness, and organisational adaptation – ensuring that Trusted AI becomes part of the company culture, not just a technical add-on.
Develop routines for regular audits and updates. Monitor labelling systems and similar mechanisms, define SLAs for error correction, and document incident reviews. Only through ongoing evaluation can AI use remain sustainably trustworthy and robust.
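As referenced in the use-case evaluation principle above, a simple scoring routine can make such screening repeatable. The criteria follow the list named there – reach, reputational risk, data protection, copyright, and legal exposure – while the weights, rating scale, and escalation threshold are illustrative assumptions each organisation would calibrate for itself.

```python
# Each criterion is rated 0 (no concern) to 3 (high concern).
# Weights and the escalation threshold are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "reach": 1.0,
    "reputational_risk": 2.0,
    "data_protection": 2.0,
    "copyright": 1.5,
    "legal_exposure": 2.0,
}
ESCALATION_THRESHOLD = 10.0  # above this, require additional human oversight

def screen_use_case(name: str, ratings: dict[str, int]) -> tuple[float, str]:
    """Return a weighted risk score and a recommended handling level."""
    score = sum(CRITERIA_WEIGHTS[c] * ratings.get(c, 0) for c in CRITERIA_WEIGHTS)
    level = "additional oversight required" if score > ESCALATION_THRESHOLD else "standard review"
    return score, level

# Hypothetical examples: routine automation vs. a sensitive research use case.
print(screen_use_case("automated sports recaps",
                      {"reach": 2, "reputational_risk": 1, "data_protection": 0,
                       "copyright": 1, "legal_exposure": 0}))   # (5.5, 'standard review')
print(screen_use_case("investigative research support",
                      {"reach": 2, "reputational_risk": 3, "data_protection": 3,
                       "copyright": 2, "legal_exposure": 3}))   # (23.0, 'additional oversight required')
```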
At Retresco, every AI development follows the Trusted AI principle – meaning systems that are transparent, compliant, and auditable by design. The approach combines efficiency through intelligent automation with human control at every decisive point.
With Human-in-the-Loop, humans remain in charge at all times: editors, product specialists, and marketing teams guide, review, and finalise content. Retresco’s AI solutions provide reliable decision support and automated suggestions – enabling the creation of high-quality, brand-safe, and verifiable content that meets the highest editorial standards.
Reuters Institute – Digital News Report 2025: Executive Summary
Reuters Institute – Digital News Report 2025 (Full Report) [PDF]
C2PA Technical Specification 2.2 (May 2025) [PDF]
C2PA Specifications 2.2 – Overview & Changelog (May 2025)
Thomson Reuters Foundation – Journalism in the AI Era (Jan 2025) [PDF]