Authentic Intelligence should be the aim for any business in 2026

Artificial Intelligence continues to dominate the corporate, social and political agenda, and it shows no signs of slowing down. Even if the AI ‘bubble’ deflates or, worse, pops, the underlying technologies are here for good. The capabilities and outputs are already both tangible and profound.

But it is still early days and major risks remain. In both written and video content, so-called AI ‘slop’ has become a significant issue: banal, useless content churned out en masse to generate noise and traffic. At the end of last year, one study of new YouTube accounts found that many were dominated by low-quality content, with 21% defined as “slop” and 33% as “brainrot”.

Interestingly, while AI can already generate Hollywood-quality videos of almost anything we can imagine, the seemingly simpler art of writing professional, engaging copy still feels a fair way off. Looking back at last year, I noticed not only a great deal of ‘thought leadership’ slop but plenty of sloppy work too: too many people relying too heavily on AI. This has become, I believe, a major reputational risk for firms and individuals alike.

AI-written copy often lacks nuance and depth. Imperfect source material inevitably leads to imperfect output, and professional analysis and personal sentiment tend to be missing. Overall, it can feel plastic: rigid, formulaic, fixed by rules and algorithms.

AI copy often reveals itself through telltale traits. The rule of three, for example, has become a simple but common motif. As AI itself tells me: “The rule of three in AI content is a foundational principle where information is structured into three distinct points, steps, or examples to maximize clarity, memorability, and engagement.” The Wikipedia ‘Signs of AI writing’ page lists many more such indicators and, with them, risks.

As the AI era builds momentum and its capabilities become increasingly evident across all domains, authentic intelligence will only grow in importance and value – particularly when it comes to content and professional insight. The more authentic voices are, the better the impact and the lower the risks. AI-centric tools will continue to help in areas such as research and analysis, among others, but ultimately as enablers of more compelling human voices.

As a major provider of content services to an international, blue-chip client base, CDR continues to embrace the potential benefits of AI while closely monitoring the risks. We’re excited about what might be possible and are positioning ourselves to be at the vanguard of modern communications for the years ahead.

While the hype and fear around AI can sometimes feel overwhelming, we believe that authentic intelligence should be the goal for every business in 2026.

Perhaps ironically, I’ll give the final word to ChatGPT. I asked it to write “a short article about the dangers of using AI to write a short article about the dangers of AI.” Much of what came back was deep-fried waffle, but the following resonated:

“Asking an AI to write about the dangers of AI is a bit like asking a mirror to critique your haircut: it will oblige, but you should question the angle… Outsourcing critique to automation dulls human responsibility. If we let AI do the warning, we may stop doing the work: questioning sources, setting boundaries, and deciding when not to automate. The real danger isn’t that AI will misdescribe its risks—it’s that we’ll accept the description as sufficient, and call vigilance complete.”

Get in touch with us today.