At IBC 2025 in Amsterdam, AI is taking center stage, reshaping creative workflows across the content production pipeline. Industry leaders confirm that AI tools have progressed beyond experimental phases and are now integral to production, automating time-consuming tasks and expanding creative potential. These systems assist creators from initial concept to final delivery.

“AI is transforming content production end-to-end: automating scripting, enriching live production with real-time tagging, and accelerating post with instant highlights, edits, and localization,” said Ross Tanner, senior vice president for EMEA at Magnifi. “For sports and media, it turns days of work into minutes, enabling personalized, platform-ready content at scale while giving creators more time to focus on storytelling.”

The evolution extends beyond mere automation; it's a creative partnership. AI systems provide real-time support during live broadcasts, managing technical aspects without interrupting creative flow. These systems handle automatic camera tracking, audio level optimization, and real-time graphics generation. “Beyond a tool to automate tasks, AI is increasingly a creative partner for live production teams,” said Roberto Musso, technical director at NDI. “The use of AI in workflows has allowed teams to simplify the content production process by enlisting tools that offer functions such as automatic camera tracking, optimizing audio levels, and generating graphics in real-time.”

Agentic workflows represent a significant advancement. AI systems execute complex, multi-step tasks with minimal human intervention while upholding editorial standards. These systems personalize content for diverse audiences, apply real-time metadata, and format content for multiple platforms simultaneously. “AI, especially in the form of agentic workflows, is now accelerating every stage of the content pipeline — from automated story discovery and script generation to multi-platform clipping and post-production,” said Jonas Michaelis, CEO of Qibb. “These systems can quickly tailor content for different audiences, apply real-time metadata, and format it for a range of channels—tasks that typically require large, specialized teams.”

However, human oversight remains crucial for maintaining editorial standards and creative intent, especially in news and compliance-sensitive content. This “human in the loop” approach safeguards editorial quality and compliance. “Even with AI doing more heavy lifting, having a ‘human in the loop’ remains essential to ensuring editorial quality, compliance and creative intent while speeding up time-to-air,” Michaelis said.
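
To make the pattern concrete, the sketch below shows an agentic-style pipeline that drafts, clips, and formats a story but holds publication behind a human review gate. It is a minimal Python illustration with hypothetical stand-in functions, not any vendor's actual API.

```python
# Illustrative sketch only: an agentic-style pipeline with a human review gate.
# Every step function here is a hypothetical placeholder.
from dataclasses import dataclass, field

@dataclass
class Story:
    topic: str
    script: str = ""
    clips: list = field(default_factory=list)
    approved: bool = False

def draft_script(story: Story) -> Story:
    # Placeholder for an LLM call that drafts a platform-neutral script.
    story.script = f"Draft script about {story.topic}"
    return story

def cut_clips(story: Story) -> Story:
    # Placeholder for automated multi-platform clipping and formatting.
    story.clips = [f"{story.topic}_{fmt}.mp4" for fmt in ("16x9", "9x16", "1x1")]
    return story

def human_review(story: Story) -> Story:
    # The "human in the loop": an editor signs off before anything goes out.
    story.approved = input(f"Approve '{story.topic}'? [y/N] ").strip().lower() == "y"
    return story

def publish(story: Story) -> None:
    if not story.approved:
        print("Held for editorial review.")
        return
    for clip in story.clips:
        print(f"Publishing {clip}")

if __name__ == "__main__":
    story = Story(topic="city-marathon-highlights")
    for step in (draft_script, cut_clips, human_review):
        story = step(story)
    publish(story)
```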

AI’s impact is particularly notable in news, addressing the need for speed and accuracy. Automated systems handle transcription, translation, and metadata enrichment, freeing journalists to concentrate on reporting and analysis. This addresses the growing demand for multi-platform content delivery. “AI is moving beyond experimentation into core production processes, with the greatest impact seen in accelerating timelines and reducing hours spent on manually-intensive tasks,” said Craig Wilson, product evangelist at Avid. “In news, this includes automating transcription, translation, and metadata enrichment to support faster story creation and multi-platform delivery.”
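
A minimal illustration of the metadata-enrichment step, using only the Python standard library; the keyword extraction here is a toy stand-in for the entity and topic models a real newsroom system would use.

```python
# Toy metadata enrichment over a transcript: most-frequent non-stopword terms
# become keywords. Real systems would pair speech-to-text with NER/topic models.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "is", "at", "after"}

def enrich(transcript: str, top_n: int = 5) -> dict:
    # Tokenize, drop stopwords, and keep the most frequent terms as keywords.
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {"keywords": [w for w, _ in counts.most_common(top_n)],
            "word_count": len(words)}

print(enrich("The council voted on the new stadium funding plan; "
             "the stadium vote passed after a long debate."))
```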

In post-production, AI applications focus on content discovery and accelerated editing. Systems locate specific clips within large archives, eliminating manual footage review. This extends to performance refinement and localization. “AI enables faster post-production workflows by automating tasks like video indexing and content discovery,” said Frederic Petitpont, CTO and co-founder of Moments Lab. “AI and AI agents significantly reduce the time editors spend, for example, scrubbing through footage to find exact clips. The result is creative teams significantly increasing their video output.”
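
The lookup pattern Petitpont describes can be sketched as a ranked search over an indexed clip library. The tag index and scoring below are illustrative placeholders for the embedding-based indexing that production systems typically use.

```python
# Sketch of content discovery: rank indexed clips against a natural-language
# query by counting overlapping tags. Filenames and tags are hypothetical.
CLIP_INDEX = {  # hypothetical index produced by an earlier AI tagging pass
    "match_cam1_0412.mxf": {"goal", "penalty", "crowd"},
    "match_cam2_0937.mxf": {"interview", "coach", "post-match"},
    "match_cam1_1105.mxf": {"goal", "celebration", "replay"},
}

def score(query: str, tags: set[str]) -> int:
    # Count how many query words appear among a clip's tags.
    return len(set(query.lower().split()) & tags)

query = "find the goal celebration replay"
ranked = sorted(CLIP_INDEX.items(), key=lambda kv: score(query, kv[1]), reverse=True)
for clip, tags in ranked:
    print(clip, score(query, tags))
```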

Effective AI implementation depends on robust technical infrastructure and data workflows. Successful implementation requires centralized content repositories. “There’s a lot of snake oil out there right now when it comes to AI. Buyer beware,” said Derek Barrilleaux, CEO of Projective. “If content is strewn all over the organization, it will be next to impossible to get real usable value from AI. But if you have everything centralized and coherent, now AI tools can truly provide value.”

Technical architecture affects AI performance and cost-effectiveness. Consolidating video compression and AI processing using GPUs yields significant efficiency gains. “Many media companies are using slow, complex, and costly CPU-based processing, where one pipeline handles compression and another handles AI processing,” said Sharon Carmel, CEO of Beamr. “By using GPUs exclusively, video compression and AI enhancements can run together in the same real-time pipeline, with faster, more efficient, and cost-effective video and data processing.”
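
Conceptually, the unified pipeline Carmel describes moves each frame through decode, AI enhancement, and encode in a single pass rather than two separate pipelines with an intermediate hand-off. The stage functions below are placeholders; a real GPU pipeline would keep frames in device memory throughout.

```python
# Conceptual sketch of a unified per-frame pipeline. All stage functions are
# stand-ins; no real codec or GPU library is used here.
def decode(frame_id: int) -> str:
    return f"raw-frame-{frame_id}"

def enhance(frame: str) -> str:
    # Stand-in for an AI step such as denoising or super-resolution.
    return frame + "+enhanced"

def encode(frame: str) -> str:
    # Stand-in for the compression stage.
    return frame + "+compressed"

# Single pass: each frame flows decode -> enhance -> encode with no
# intermediate files or hand-off to a second pipeline.
bitstream = [encode(enhance(decode(i))) for i in range(3)]
print(bitstream)
```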

Data quality is paramount. Organizations must invest in comprehensive content cataloging and indexing to maximize AI tool value. “AI agents are only as good as the quality of the data they’re fed, and large-scale video indexing projects are essential to unlocking the full value of AI workflows,” Petitpont said. Streaming-based approaches eliminate data preparation bottlenecks.

“AI is the hook, automating repetitive tasks while surfacing insights from a single source of truth,” said Peter Thompson, CEO and co-founder of LucidLink. “Stream-don’t-sync workflows eliminate data-prep bottlenecks in Gen-AI pipelines, enabling teams to spend less time wrangling data and more time delivering insights and impact.”

AI applications identify technical issues previously requiring human inspection, including audio-visual synchronization problems and graphical interference. Natural language-driven workflow creation democratizes media operations. “Quality control is also evolving; rather than relying solely on rule-based checks, AI can now identify issues like lip-sync mismatches or graphical interference that traditionally required human review,” said Charlie Dunn, executive vice president of products at Telestream. “Perhaps the most profound shift is the rise of natural language-driven workflow creation, which lowers technical barriers and democratizes media operations.”
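
Natural language-driven workflow creation can be pictured as mapping a plain-English request to an ordered list of pipeline steps. The keyword table and step names below are hypothetical, and production systems would use an LLM planner rather than simple keyword matching.

```python
# Sketch of natural language-driven workflow creation. The keyword-to-step
# mapping is illustrative only.
STEP_KEYWORDS = {
    "transcribe": "speech_to_text",
    "translate": "machine_translation",
    "subtitle": "subtitle_render",
    "qc": "automated_qc",
    "publish": "publish_to_cdn",
}

def build_workflow(request: str) -> list[str]:
    # Keep only the steps whose trigger word appears in the request.
    req = request.lower()
    return [step for kw, step in STEP_KEYWORDS.items() if kw in req]

print(build_workflow("Transcribe the interview, translate it to Spanish, "
                     "then run QC before you publish"))
# -> ['speech_to_text', 'machine_translation', 'automated_qc', 'publish_to_cdn']
```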

AI capabilities in audio processing extend beyond basic loudness control to sophisticated language and speech management. Machine learning systems handle complex multilingual content, identifying speakers, languages, and inconsistencies. “AI functions are increasingly capable of managing language and speech clarity at scale, going far beyond basic loudness control,” said Costa Nikols, executive team strategy advisor for media and entertainment at Telos Alliance. “Machine learning can identify speakers, languages, flag inconsistencies, adapt mixes for intelligibility across devices, and detect profanity in multiple languages and dialects.”
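
A toy version of the dialogue-flagging idea: scan a timed transcript for words on a configurable list and emit review markers. The transcript format and word list are illustrative; the machine-learning systems Nikols describes also handle dialects and context.

```python
# Minimal sketch of dialogue flagging on a timed transcript. Data and word
# list are placeholders for real ASR output and policy-driven lexicons.
TRANSCRIPT = [  # (start_seconds, speaker, text) - hypothetical ASR output
    (12.4, "commentator_1", "what a strike from outside the box"),
    (47.9, "pitch_mic", "that was a bloody awful tackle"),
]
FLAG_WORDS = {"bloody"}

def flag_segments(segments, flag_words):
    # Yield a review marker for every segment containing a flagged word.
    for start, speaker, text in segments:
        hits = flag_words & set(text.lower().split())
        if hits:
            yield {"time": start, "speaker": speaker, "flagged": sorted(hits)}

for marker in flag_segments(TRANSCRIPT, FLAG_WORDS):
    print(marker)
```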

Advanced AI systems analyze multiple content elements simultaneously for comprehensive metadata and sophisticated content manipulation through multimodal analysis. This enables automated highlight generation, trailer creation, and replay sequences. “The technology examines visual, audio, and narrative elements frame by frame to capture the full context of each scene and automatically generate detailed metadata,” said Adam Massaro, senior product marketing manager at Bitmovin, noting it can help deliver hyper-personalized viewing experiences and more effective ad targeting.
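
One way to picture the merge step: per-modality tags are combined into scene-level metadata and used to nominate highlight candidates. The tag values below are placeholders for what vision, audio, and narrative models would emit; only the merge and selection logic is shown.

```python
# Sketch of combining per-modality signals into scene metadata and flagging
# highlight candidates. All tags and cue words are illustrative.
HIGHLIGHT_CUES = {"goal", "penalty"}

scenes = [  # placeholder outputs from vision, audio, and narrative models
    {"start": "00:12:04", "visual": ["wide shot"], "audio": ["crowd roar"],
     "narrative": ["equalizing goal"]},
    {"start": "00:27:40", "visual": ["bench"], "audio": ["commentary"],
     "narrative": ["substitution"]},
]

for scene in scenes:
    keywords = set()
    for modality in ("visual", "audio", "narrative"):
        for tag in scene[modality]:
            keywords.update(tag.split())
    scene["keywords"] = sorted(keywords)
    # A scene becomes a highlight candidate if any cue word appears.
    scene["highlight_candidate"] = bool(keywords & HIGHLIGHT_CUES)
    print(scene["start"], scene["highlight_candidate"], scene["keywords"])
```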

The picture emerging from IBC 2025 is of a broadcast industry carefully assessing AI applications, focusing on practical tools that integrate seamlessly into existing workflows while upholding editorial standards.