Video: Accelerator Kickstart Day: 2026 Quick-Fire Pitches | Duration: 5208s | Summary: Accelerator Kickstart Day: 2026 Quick-Fire Pitches | Chapters: Reliable Broadcast Networks (8.925s), Network API Benefits (274.69s), EcoFlow Project Overview (349.325s), Team Introductions Begin (733.64s), AI-Driven Preproduction Acceleration (782.16s), Project Framework Overview (905.425s), Quantum Network Pitch (1103.425s), Quantum Encryption Innovations (1147.3s), Resilient Pipeline Integration (1328.935s), Digital Catapult Presentation (1508.76s), Virtual Production Decision-Making (1552.585s), AI for Live Media (1894.23s), Speech Intelligibility Pitch (2314.49s), Intelligibility Measurement System (2368.29s), Software-Defined Production Workflows (2632.22s), Knowledge Graph Integration (2830.99s), Immersive Festival Project (3088.79s), Immersive Concert Experiences (3796.58s), Immersive Technology Partnerships (3974.275s), Personalized Content Delivery (4080s), Agentic AI Incubator (4540.01s), Concluding Remarks (4997.015s)
Transcript for "Accelerator Kickstart Day: 2026 Quick-Fire Pitches": cellular devices over public networks for live contribution, especially news gathering and some sports and smaller OBs. Majority of the time, all is well, but we're well aware that's a best effort solution, and it can fail when capacity is contended on public networks or coverage is patchy. The BBC is therefore looked into providing its own exclusive bundled cellular connectivity solutions, including private five gs networks, as mentioned this morning. This four contended events like the Birmingham Commonwealth Games, the Coronation on the Mall, and also when Coverage is Unreliable, the Northwest two hundred motorcycle event on rural public roads in Northern Ireland and the recent WinterWatch deployments where coverage is similarly patchy. What we need as broadcasters and content providers is reliability and reassurance for rapid deployment of our private five g networks and in the future, potential five g slicing of public networks. There we go. As you can see on the right, coronation day, we relied on private networks. The other SIM cards on public networks, not so much. So quality of delivery, quality of service provides standardized standardized and understood it's understood by our suppliers, and network APIs are such an interface for us to interact with the five g stand alone networks, and we see that as a promising way forward. So our core challenge is the shift from best effort networks, public networks, to guaranteed performance for live feeds. While five g stand alone offers massive potential, we must be able to dynamically prioritize resources to protect critical wireless services in congested environments as well. This POC isn't just about public networks, but also implementing network APIs like Quality On Demand across private five gs networks as well, and also within network slices, and even within exploring potential areas across non terrestrial networks to ensure broadcast reliability anywhere. So what's the actual innovation here? At the moment, network control is often locked behind closed priority APIs, which limits flexibility and slows down adoption across different regions and vendors. We're shifting that paradigm by utilizing GSMA's open gateway standards. And by implementing Camara Project's open APIs, we provide a standardized interface to interact directly with five g stand alone networks. This allows us to trigger network APIs like quality sorry, like dynamic quality on demand programmatically. And instead of relying on a best effort connection, a broadcast unit can now use these open APIs to request the specific latency and throughput it needs for a four k feed in real time, regardless of which network operator you use around the world. The move toward interoperability ensures that whether you're in a stadium or a city center, wherever you are in the world, the network becomes a programmable tool for live production. So we've already been discussing this project with quite a few champions in the room. We've already got a lot of champion interest, so broadcasters, Rai, France TV, BBC. We've got support from the BBU the EBU, sorry, five gs MAG and their five gs IMerge project. But we're still looking for content creators, for rights holders, for events coordinators, people to get involved with the project and let us actually try to demonstrate this technology in a live event. So we want to do this live. 
So a lot of the trials for these Quality on Demand APIs, for network slicing, have all been in very controlled environments. We want to do this at a real event, actually make use of network APIs on a congested network to prioritize particular feeds and protect critical links. There are a lot of benefits to this, not just for the broadcast industry, but for industries such as PPDR, that's public protection and disaster relief. There's a lot of interest in being able to have some control over what the network is able to do. It promotes sustainability through reduced infrastructure that you might need to take to an event, if you can use wireless connectivity that's there already. It promotes diversity and equality. And we want you to help us take video like the one on the left, which is a live contribution on a bonded cellular link, and make it look more like the video on the right. Thank you very much. Come and see us at table one. Well, guys, smashed it again, haven't you? Great stuff. That's a really great network API project there from Neutral Wireless and the BBC. And we're looking forward to that with the GSMA and Mobile World Congress; they'll be talking about it next week in Barcelona as well, so hopefully they'll get some mobile networks involved too, which would be fantastic. Next up, we've got our EcoFlow three team, no less, coming back for a third year. What a gang this is. We've got here Tim Davis from ITV. Welcome, Tim. Right. There we go. Welcome, Tim. Ian Nock from the IET, please. How are you doing? Christian is with Humans Not Robots, that's right, isn't it? Yeah. And we've got Francoise here from Accedo. Welcome, team. Have you got the clicker here? There's the clicker. Who's doing the slides? It's that white one there. Okay, just press that to go down and point it at the back of the room. Okay? Chris? Yes. Yeah. We will. Okay. Thanks. So I'm one of the big four of the EcoFlow project, or the steering group, as we call it. And I pressed the button on this. I am pointing it. There. There we go. Okay. So the problems we've been dealing with over multiple generations of our EcoFlow project have been about measuring energy. And surprisingly enough, it's very difficult to measure the energy consumption of end-to-end media workflows, except where you can get at it, mostly within the home. So that was a bit of a challenge, and that's what we discovered in our first year. In our second year, we adapted and looked at how we could use digital twins and more variable-quality data to get a picture of energy consumption across end-to-end workflows, and that was successful to a point. But the key thing with all of those is that we took some key discoveries and insights from it. We recognized that we still had quite a number of difficulties doing that. We also recognized a problem with the way sustainability was looked at within the whole development of media workflows: it was seen as a bolt-on, something you did afterwards. And we recognized very early on that sustainability actually has to be one factor of the whole engineering design concept, added alongside performance, cost and price. And you have to do that right from the beginning. And so we've taken that forward into our concepts of understanding what is good energy usage and what is bad energy usage.
And this is the controversial thing that we're looking at really hard, because we define good energy as where things are efficiently and effectively used, and bad energy as where they're not. And there's a lot of bad energy within every end-to-end workflow. So I'd like to hand over. First time. So one of the main things we've been trying to do over the last year was around data collection. This was one of the main challenges: banging the drum, knocking on the door of the broadcasters, of all the partners. We have a project, how can we get more data? How can we get energy data? Not gonna happen, right? So we had to figure out a different way to emulate some of the data, and this is what we continue doing through a potential observability framework. The second barrier we want to break this year is getting this adopted by the broadcasters in the real world. The first year was about end-user devices. The second year was a digital twin. This year, we want this to be used. The clock is ticking. It's very important that we get to that point. So operationalizing this sort of project is very important to us. And what does success look like for us? Well, essentially being able to break the barriers to adoption: not only being able to prove that, yes, we can measure, we can look at efficiencies, but being able to go to an ITV or all the partners we have on the project and say: there are no barriers to adoption anymore, this is not impacting our roadmap, and yes, this is helping with cost efficiency, performance and so on. This is what we want to get to. Thank you. We need clicker innovation for next year as well, please. So for EcoFlow one, we looked at the consumer device and we tested to understand what experiments we could do to reduce the energy consumption on the device. For EcoFlow two, we looked at the back office and the distribution workflows to understand if we could move towards multicast ABR or a peer-to-peer delivery mechanism and what effect that might have on energy consumption. As part of that, we built modelling as well. We want to build on and extend that for EcoFlow three. And this means looking more closely at unifying what's happening on the client device with the backend. If we can get the client to tell the server how it is dealing with its energy efficiency, we can maybe use content steering to navigate how that device receives content, to further optimize how it utilizes energy. That all relies on a unification of the forecasting and near-casting models in the digital twin. So using that, and a sprinkle of AI, of course, to provide optimizations and, ultimately, to put that out into the market and provide an open observability framework that real people can use to feed back into that. The core group of us have been working together for the last three years. The skills we really need to take that and transform it are things that enable us to understand, to monitor, to measure, and to standardize the reporting and the language we're using; partners who can help us to generate and share data to enable that intelligence; and then partners who can help us to build, to test and to validate, to really help us take it and move it forward.
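As a thought experiment on the client-to-server energy signalling described above, here is an invented sketch of how a client energy report might drive a content steering decision. The report fields, thresholds and pathway names are all hypothetical; the EcoFlow observability framework may define very different schemas.

```python
# Illustrative sketch only: a client-side energy report informing a
# DASH/HLS content-steering decision on the server. Field names and
# thresholds are invented for this example.
from dataclasses import dataclass

@dataclass
class ClientEnergyReport:
    device_id: str
    on_battery: bool
    est_decode_watts: float   # the client's own estimate for its current rendition
    network: str              # "wifi", "cellular", ...

def steering_decision(report: ClientEnergyReport) -> dict:
    """Pick a CDN pathway and a rendition cap that trade a little quality
    for measurable energy savings on constrained devices."""
    if report.on_battery and report.est_decode_watts > 4.0:
        # Cap the ladder: high-bitrate tiers cost real energy on-device
        return {"pathway": "edge-cache", "max_height": 720}
    if report.network == "cellular":
        return {"pathway": "edge-cache", "max_height": 1080}
    return {"pathway": "origin", "max_height": 2160}

print(steering_decision(ClientEnergyReport("tv-01", True, 5.2, "wifi")))
```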
The impacts that we want to be able to measure are genuine, measurable and validated strategies that organizations can adopt, but also the KPIs and metrics that allow sustainability to be communicated and discussed in a similar vein to latency, performance, cost and quality. The vision is the same as we've had for the last three years: it's about driving decarbonization and efficiency across the entire streaming supply chain. Thank you very much. Thank you. Team EcoFlow three, one two three. Wait. That's Spanish. So without further ado, we're going to move things along. Just a reminder for our teams presenting: it's the down sticker on the white button, and you point it at the monitor here. Please give a massive round of applause for, ciao Paola, ciao Roberto, here to present FRAME, the Federated Retrieval and Agentic Media Environment, our esteemed alumni from Rai and the EBU. Over to you. Good luck. This one there, and then point. It's not working. There we go. Cool. So hello again. AI is already being deployed in some parts of TV production and film production, but preproduction is still particularly time consuming. And archives are not fully exploited in a semantic way. Users are overwhelmed by hundreds of generative AI tools for content production. What we want to accelerate with this accelerator is preproduction. And our challenge is to deploy a production environment that is ontology-driven from the very beginning, orchestrated by specialized agents that interact with the archives, because we want to keep the mood, the authenticity in the story, using the archive, and with the human in the loop by definition. So what is the innovation? What we are aiming at, first of all, is automated content tagging using a filmmaking vocabulary, with definitions of camera shots, angles, look and feel, narrative. And then we will map it to the MovieLabs Ontology for Media Creation. And then we want to deploy these specialized agents. These agents will perform specific tasks, including also post-production tasks like, for example, upscaling, generative inpainting or outpainting. And we want to deploy these agents locally. And we also want to add automation to this ecosystem, but it's automation designed around a cooperative human-in-the-loop workflow, not unsupervised automation. Thank you, Paola. The white one. Okay. So as you can see, we have many work streams to break down the work, so it's easy for you to find your spot and join our accelerator. So, do you want to work on the ontology? At least to understand what the hell an ontology is? We have the work stream. Do you want to work on, you know, the backend? We have feature extraction on video from the archive, or maybe the core part, which is the agentic orchestration. And these agents, by the way, have a certain level of autonomy, so risk and ethics is also a fundamental part. And the last work stream comes from our learnings from the past accelerators. I mean, if the creatives don't like what we are doing, our work, that means this is just a useless exercise. So we involve the creative people from the beginning. So this is the team. Rai and the EBU are co-leading this project. We have Clive from ITV. We have Eric from the Entertainment Technology Center, California. We have Raymond from MovieLabs and Tim from XRBB and also John from AMD.
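The tagging-to-ontology step described above can be illustrated with a toy example: a detector emits a filmmaking-vocabulary tag, and we wrap it in an OMC-flavoured record. The property names below are simplified stand-ins, not the normative MovieLabs Ontology for Media Creation schema, and the asset identifier is invented.

```python
# Toy illustration of mapping automated filmmaking tags onto an
# OMC-style structure. Field names are hypothetical simplifications,
# not the real MovieLabs OMC schema.
def tag_to_omc(asset_id: str, shot_type: str, camera_angle: str) -> dict:
    return {
        "entityType": "Shot",  # stand-in for an OMC creative-work entity
        "identifier": [{"identifierScope": "archive",
                        "identifierValue": asset_id}],
        "annotations": [
            {"vocabulary": "filmmaking", "term": "shotType", "value": shot_type},
            {"vocabulary": "filmmaking", "term": "cameraAngle", "value": camera_angle},
        ],
    }

record = tag_to_omc("rai-archive-0042", "close-up", "low-angle")
print(record)
```

Once tags live in a shared ontology like this, the specialized agents can query the archive semantically instead of by filename.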
And what are we looking for? We are looking for people: developers, experts in agentic AI, as I said, creative people, and also people expert in, you know, risk and ethics. And this is just a glimpse of the framework. So the user writes the prompt, for instance, a tour on Route 66. And this year, by the way, is the celebration of one hundred years of this route. And then this triggers the swarm intelligence that searches and retrieves the sources from our archives. But, of course, it's not just search: it returns chunks of the sources that the human in the loop can assemble in a nice way. Now think about the human in the loop as the bartender that mixes the ingredients to make a delicious cocktail. So this is mainly the idea. And then the vision: we are tired of synthetic content. Even if it's photorealistic, you can spot it because it has a certain style. Because of that, we want to use our archive as a living memory to provide authenticity. Then we want to deploy everything on premises, because we don't want to share our content with anyone: my content, my business. And finally, we do want the free lunch. That means we don't want to pay for the tools, so we want to use free tools as much as we can. So join us. Thank you. Thank you, guys. That was fantastic. Thank you very much. Right. Next up, pitch four. Do we have the guys here? Anybody? There they are. Fantastic. Here they are then. This is QStream: quantum-secure, network-adaptive, verifiable live media infrastructure. Is that right? That's the one. Take a deep breath. I'll take a deep breath for that one. Russell Trafford-Jones, everybody knows Russell, don't you, from the IET? Nick Beer from the British Forces Broadcasting Service, BFBS. Fantastic. Great of you to join us. And Esteban Vasquez from Tesla Technologies in Spain, welcome. The floor is all yours. Thank you. Thank you very much. Let's hear it for innovation, everybody. Innovation's in the house now. Yeah, I'm joined by Nick Beer, and I'm representing the IET, the Institution of Engineering and Technology. And together we're the joint champions so far, I'm sure there'll be more later, for this project, along with Esteban from Tesla Technologies. So good afternoon. The headline: we face a trust gap. Standard encryption is obsolete against the quantum threat of tomorrow. What do we mean by that? Data of any kind, whether it's media or any other kind of data, that's encrypted using today's standards is going to be decryptable by quantum computers tomorrow. This is the so-called harvest now, decrypt later threat. Therefore, we need to have quantum-level encryption today to protect for tomorrow. Furthermore, today we've got real-time deepfakes, which can be injected into live feeds. We're all vulnerable. Everyone here should be concerned about this challenge. So the first aspect of our project will be to address the critical issue of trust. That's right. We'll be bringing post-quantum cryptography to live streams, bringing C2PA to live workflows, so that we can address the authenticity and provenance questions: so you know the feed you're picking up is from the person you think it's from, and that it hasn't been tampered with in the meantime. C2PA, of course, provides an open technical standard for publishers and creators to prove that the digital content is intact. But we don't stop there.
Simultaneously, we are operating in scenarios where we have denied, disrupted, intermittent and limited connectivity, whether that's a conflict zone, as it so often is for many of us in the room, or a reporter at a saturated sports stadium. The network is no longer a utility; it's often a hostile environment in the broadest sense of those words. The networks, though, represent significant opportunities for broadcasters. You've heard a lot about that already today, especially 5G. So how can we get greater assurance? The problem is that although things like SRT and RIST are genuinely awesome and work really well, there are also times when network conditions are just so bad that extra retransmissions are not going to cut it. So we are going to understand the congested pipe. We're going to try and avoid the cliff edge that can happen. And in our project, we're going to integrate directly with 5G cores, using the standards there, in order to get the analytics, and use technologies that already exist to be able to know ahead of time what's going wrong and allow graceful management of that. Esteban? Thank you. What is the innovation? We are integrating three technologies into one unified pipeline: the RAN, the AI, and the seal. The RAN: by integrating directly with the 5G core, we predict congestion five seconds before it happens, completely avoiding the reactive failures of other protocols like SRT. Second, the AI: if, for example, the bandwidth drops to 100 kilobits, we prioritize intelligently. We separate the subject from the noise, blurring the background to maintain the face and the voice in high definition. And finally, the seal: we wrap every frame in a quantum-safe chain of trust, embedding post-quantum cryptography directly into the C2PA manifest. We are not alone, okay? We have assembled a world-class team to build this resilient future. Tesla Technologies, my company, is leading the intelligence, integrating the AI and the quantum-secure core, but we are not alone. Also, to bring the solution to market, we have the people from Bogota who are there; their experience transforms complex metadata into operator-friendly interfaces and brings gaming-grade low-latency performance to broadcast. And our greatest asset is our champions, BFBS and the IET, because they are not passive observers. They define demanding scenarios, jamming, jitter and total collapse of the network, to prove our tactical truth holds under fire. And the IET, as our engineering champion, will ensure we align with the global standards. So who are we looking for? Well, we're looking for you. We're going to need a whole load of people to come on board and really make this into what it could be. So anyone who's a telco with access to 5G, publicly or privately, would be very much welcomed. Any broadcasters, news or otherwise, who've got some really harsh environments and want to bring them here, or indeed just want to participate in one of these pillars, which would be brilliant; encoder vendors; and, of course, anyone in academia or with cloud platforms. So in summary, I promise, we don't fix the break. We avoid it. We make sure that we have intelligibility at ultra-low bit rates, and we secure content against deepfakes and quantum decryption.
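The "seal" idea above can be sketched with nothing more than standard-library hashing: chain per-frame digests so that any tampered or injected frame breaks the chain. A real deployment would carry these claims inside a C2PA manifest and sign them with a post-quantum scheme such as ML-DSA via a suitable library; the plain SHA-256 chaining below is only the skeleton of that idea, and the claim fields are invented.

```python
# Minimal sketch of a per-frame chain of trust. In production this would
# live inside a signed C2PA manifest with a post-quantum signature; here
# we only show the hash-chaining skeleton with invented claim fields.
import hashlib
import json
import time

def seal_frame(prev_digest: bytes, frame: bytes, seq: int) -> tuple[bytes, dict]:
    digest = hashlib.sha256(prev_digest + frame).digest()
    claim = {
        "seq": seq,
        "ts": time.time(),
        "frame_sha256": hashlib.sha256(frame).hexdigest(),
        "chain_digest": digest.hex(),
        # "signature": pqc_sign(digest)  # post-quantum signature, library-dependent
    }
    return digest, claim

chain = b"\x00" * 32  # genesis value for the stream
chain, claim = seal_frame(chain, b"...frame bytes...", seq=0)
print(json.dumps(claim, indent=2))
```

A verifier replays the same chaining over the received frames; the first frame whose recomputed digest diverges marks the point of injection or tampering.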
So if you want to build the resilient pipeline of 2026, come and see us at table four. Thank you. Well done, guys. Well done. Thank you so much. That was our QStream team on table four. So next up, please give a big round of applause for some new champions this year. Which I just want to touch on: our very first IBC Accelerator kickstart day happened in 2020 at Soho House at White City, BBC, and Digital Catapult gave a presentation there too. But we have a new Digital Catapult team. This is Nigel McAlpine and Robin Cramp, and they're about to present Voopla, the decision twin: decision-ready sustainable digital twins for broadcast and virtual production studios. Take it away. Please press the bottom button on the clicker there. Thank you. Big round of applause, guys. Thank you. Another alumnus of the BBC from many years ago, as was Nigel, so it's superb to be back in the radio theatre. Big up the BBC. Yes. So, Voopla, the decision twin. Let's give you a little bit of a whistle-stop tour of what the problem space is here. So virtual production decisions lack confidence at green light. Ultimately, virtual production has been around for a number of years, but there's very fragmented understanding of where it meets studios and the complexities of studios all around the globe. So essentially, what we're looking at is global broadcasters and production teams under pressure to deliver more ambitious content with less time, less budget, and growing sustainability expectations. Virtual production should help, but adoption stalls at the moment of commissioning, because teams simply don't have the insights they need early enough. So the preproduction, production and postproduction benefits, the time and money benefits, are what these problems are standing in the way of. So the key is here: teams can't confidently compare studios, test feasibility, or understand sustainability impacts before greenlighting projects, which leads to conservative decisions and costly changes. That's the decision gap, and it is the problem we're looking to solve. So the challenge overview and our POC objective here: the accelerator creates a shared decision layer using production-accurate digital twins of real virtual production studios. This gives commissioners and production teams the ability to explore studios remotely, test layouts and workflows, and understand risk, cost and sustainability before projects are approved. And success means the work in practice culminates in a live demonstration at IBC, where decisions can be explored in real time. So what's the innovation? It's the living decision twin. What we're looking at here isn't just about digital twins; it's about when and how they're used. So we combine spatial capture, real-time production data and sustainability monitoring into a live studio twin that teams can explore from essentially anywhere. Crucially, we shift digital twins upstream, from operational tools used during production to decision tools used before productions begin. And I'll put on the end here: that's the real innovation. So, champions so far. We've got myself and Nigel from Digital Catapult. We've got a couple of virtual production, advanced media production studios; it's all about driving impact and the growth and adoption of these technologies and the things that go on in there.
We also have Tom from Solve Evolve. So Tom is contributing systems thinking, real-time workflows and design, shaping how the decision-ready twins can actually work in practice. We've got Alex from Heliguy. We've got Rich from Heliguy as well. There's a wave. These guys are providing high-fidelity spatial capture, Gaussian splats, LiDAR scanning and drone-enabled data acquisition, essential to creating production-accurate studio twins. And together, this forms a credible base, but what we're ultimately looking for is more around the commissioning-to-broadcaster side of the thinking. So we're looking for broadcasters and productions that want to go through this journey with us. We'd really like to extend this into the wider studio landscape as well; we've mentioned we've got a couple within our own suite as part of this, but ultimately, who is the industry partner we can look to do that with? Sustainability: people working in that space, whether it's albert or Ad Green, let's bring that sustainability level into this piece as well. And cloud compute as well, you know, things that can hold this whole piece together and really give us the opportunity to make this POC shine. And I suppose the ultimate outcome here is simple but quite powerful. It's a world where commissioners and broadcast and production teams can make confident, evidence-based VP decisions before committing time, budget or travel. That means lower risk, lower carbon, better collaboration, and faster global adoption of virtual production. And at IBC, audiences won't just be able to hear about it, but also to see this live experience. So think of it like a VP Zoopla, where you can dynamically set the requirements of what you need from a studio, dial those up and down, and really see what the capabilities of virtual production studios are globally, at scale, in order to make them suitable for your production needs and bring cohesion around those facilities. Thank you very much. I think we'll be on table five out there. Thank you very much, guys. Thank you. Oh, yeah. That's it. Well done, guys. Another big round of applause, please, for Digital Catapult. I'm personally a big fan of their work, and they're respected in the UK industry. So next up is pitch six. This is called the AI for live media platform for sports and beyond, and, I'm gonna just go for it, powered by agentic AI and dynamic content adaptation for end-to-end automation, personalization, and monetization. Thank you. I'm here for another hour. So please welcome to the stage, all the way from Malaysia, our good friends: Nivendran from Astro. Come on, come to the podium. Here we go. And also Lakshmi from Tata Consultancy Services. Welcome. Hi, everyone. We are from Astro, MBC, the Middle East Broadcasting Company, TCS and KEM AI. Today, we're excited to share our vision for an agentic AI platform for live sports and beyond. Across Astro, and across the wider industry, live is everything. And we are seeing an extraordinary opportunity for AI-powered live media to unlock new levels of audience growth, personalization, and monetization. But today, our reality is challenging.
Our live workflows are very heavy, fragmented, and resource intensive. They struggle to keep up with the rising demand for multi-language, multi-platform and real-time experiences. As an industry, we are still working with isolated AI pilots that simply don't scale. That's why we believe we have reached a true inflection point here. Agentic AI now gives us the ability to unify our workflows, blending human creativity with intelligent AI. Finally, we can deliver trusted, localized, and monetizable live experiences at the speed and scale audiences expect. We are excited to move from siloed live AI use cases to unified and repeatable solutions. Across Astro, MBC, and the wider industry, our live sports AI use cases now span the entire media value chain, the entire supply chain, from intelligent production to distribution, to real-time highlights and semantic insight. For MBC, the Middle East Broadcasting Company, sports commentary localization is far more than translation. It's emotion, cultural depth, and poetic delivery that audiences connect with. So our challenge is clear: to explore a unified live media platform powered by agentic AI orchestration and enriched with essential human creativity. A platform that delivers experiences that are deeply contextualized, personalized, and trusted, delivered at speed and scale, in sports and beyond. Through the IBC Accelerator, we will deliver three outcomes. The first is a foundational live agentic platform that is scalable, repeatable, and built to support contextualization across multiple use cases. Second, two to three high-value vertical use cases, such as live localization, compliance, content moderation, or even monetization agents. And third, a demonstration of multi-domain agent orchestration across a single unified platform. In short, we aim to move the industry from isolated POCs to a scalable, repeatable, live AI platform approach. Talking about our innovation: it lies in a unique interplay of four key constructs. Firstly, a foundational live AI platform, leveraging agentic orchestration with a modular, layered architecture, which allows you to configure components to create the desired workflows, which means that once proven, it is scalable to newer use cases, capabilities, and partners. Secondly, seamless human-machine collaboration, by which we mean not replacing essential human creative intelligence, but seamlessly blending it with agentic orchestration, where human and AI agents talk to each other in the flow of work. Thirdly, a bold construct of dynamic content adaptation in real time, where AI agents, we believe, can contextualize up to 70 percent of your content automatically, in real time, in the moment, which means the humans can uplift the rest of the experience. That means you could have localization and compliance relevant to India, China or the Middle East at high speed and productivity. And fourthly, can we deliver all of it with trust and safety? That is where we bring in the responsible AI constructs, trust guardrails, and governance, embedded within these live workflows to make it executable. And in stage one, we build these specialized vertical agents, for example a localization agent, along with the corresponding platform constructs.
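As a toy illustration of this two-stage construct, the sketch below builds two specialized agents and a minimal harness that chains them, with a human-in-the-loop checkpoint when an agent flags a problem. Agent names, interfaces and the compliance rule are invented placeholders, not the project's actual design.

```python
# Invented sketch: specialized vertical agents (stage one) chained into a
# workflow with a human checkpoint (stage two). Purely illustrative.
from typing import Callable

Agent = Callable[[dict], dict]

def localization_agent(job: dict) -> dict:
    # Stand-in for real machine translation / dubbing of live commentary
    job["commentary"] = f"[{job['locale']}] {job['commentary']}"
    return job

def compliance_agent(job: dict) -> dict:
    # Stand-in for a real content-moderation model
    job["compliance_ok"] = "banned-term" not in job["commentary"]
    return job

def run_pipeline(job: dict, agents: list[Agent], human_review: Agent) -> dict:
    for agent in agents:
        job = agent(job)
    # Human-in-the-loop uplift only where the agents are unsure or flag an issue
    return job if job.get("compliance_ok", True) else human_review(job)

job = {"commentary": "What a goal!", "locale": "ar-SA"}
result = run_pipeline(job, [localization_agent, compliance_agent],
                      human_review=lambda j: {**j, "escalated": True})
print(result)
```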
And stage two is where the magic happens, where you connect these specialized agents in a multifunctional workflow to stitch together the desired workflows. Now in terms of champions and participants, we have an interesting set, but we believe this is a broad blueprint with multiple components, and it requires true industry collaboration. And that is why, as champions, to define the strategy and blueprint, we are looking for live media players; if you have news, live entertainment or similar cases, do come and join us. And particularly as participants, if you have domain, LLM, or live technology capabilities, we really think you should come together with us. And similarly, for the industry vision, bringing it all together: level one, we would have a POC of the use cases. Level two, a platform and a blueprint which is scalable and extensible to various use cases. But we believe the real takeoff is for the industry, when this blueprint is established so that you can really unlock multiple use cases around your monetization models, your personalization models, or even multi-content delivery across multiple platforms. And we are not talking just about live, but also about multi-platform delivery across the downstream as well. So we are not upgrading live media here. We are reimagining and rearchitecting it, and we need to do it together. Thank you. Thank you, Lakshmi. Thank you, Nivendran. Next up, we have pitch seven, which is Crystal Clear: boosting speech intelligibility, I had to say that very clearly, boosting speech intelligibility in media. And it's Balash Sari from Channel 4. Welcome. And Andrew Dunn from the BBC. That was easy to say, Andrew. Thank you. Welcome. It's over to you. Press that down one over there and point it that way. Hello everyone. I'm Balash Sari from Channel 4, and I have Andrew Dunn from the BBC with me today. Oh, excellent. Intelligibility remains a long-standing challenge for our industry, a hot topic that flares up from time to time. I'm convinced that great solutions start with measurement and analysis. Our Crystal Clear project wishes to take a step forward by measuring intelligibility. We believe we can create an intelligibility measurement auto-QC system by combining emerging technologies, such as a listening effort meter, alongside various loudness measurements. The set of measurements aims to highlight sections of content where intelligibility may be questionable. Those sections would then be reviewed by a human to score intelligibility and identify potential creative decisions. Such a workflow could be integrated into delivery chains or directly into production processes. Of course, intelligibility problems can arise after production and delivery, due to the playback system or the listener's environment. However, this project focuses specifically on production quality and ensuring that the source content we publish does not carry errors. This diagram may look simple, but achieving it will require research, testing, and collaboration. We will begin by reviewing available tools and measurements. We expect success will come from a new combination of measurements, potentially moving slightly beyond standardized approaches. A key part of the research will involve listeners. Their intelligibility scores will be compared with objective measurement results. The panel will include people with and without hearing loss.
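To make the auto-QC idea concrete, here is a minimal sketch using two real libraries (soundfile and pyloudnorm): measure loudness over short windows and flag spans that sit far below the programme's integrated loudness, as a crude stand-in for the listening-effort and intelligibility metrics the project will actually evaluate. The 3-second window and the -12 LU threshold are invented placeholders, not proposed spec values.

```python
# Minimal loudness-based QC sketch. The windowing and threshold are
# placeholders; real intelligibility measurement would combine richer
# metrics (e.g. listening-effort models) with human review.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("programme.wav")      # assumed input file
meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
programme_lufs = meter.integrated_loudness(data)

window = 3 * rate                          # 3-second analysis windows
flags = []
for start in range(0, len(data) - window, window):
    chunk_lufs = meter.integrated_loudness(data[start:start + window])
    if chunk_lufs < programme_lufs - 12:   # placeholder "questionable" threshold
        flags.append((start / rate, (start + window) / rate))

print("Review these spans for intelligibility:", flags)
```

In the project's workflow, spans flagged like this would go to human reviewers, whose scores then calibrate the objective thresholds.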
Following data collection, we will define thresholds for our measurements and test them. Finally, we will implement our measurement system within Channel 4's supply chain and test it using real-world deliveries for extended analysis. The innovation lies in three key areas. First, a new automatic measurement toolset that highlights content segments with questionable intelligibility. Second, objective intelligibility thresholds that could become part of broadcaster or publisher delivery specifications. And third, a simple and practical human-assessment QC scale for intelligibility. This project is championed by Channel 4 and the BBC. We are also working with Fraunhofer, Nugen Audio, and BBC R&D. Collaboration across the industry is essential. We invite industry participants to support us by providing sample content segments with both good and questionable intelligibility. So, I'm Andrew Dunn from the BBC, but I'm also chair of the EBU's QC group, so I've got a vested interest in defining a QC test. Balash's first slide had an EBU QC test, which was the golden ears test. We'd like to move away from that in the future. But the benefits for the industry, alongside having a new test defined, are that we're going to improve the quality of the experience for the audience. We're going to make things easier for publishers: by defining an intelligibility threshold, they'll know that their content is good for distribution. And we'll also make that tool available to producers, so they'll be able to test at every stage of production and know, when they deliver the content, that it is suitable for purpose. And that is it. Thank you very much. Well done, guys. And you stayed within the time. We're gonna move things swiftly along. We are now on pitch eight. This is the software-defined workflows for interoperable movie and TV production, brought to you by the wonderful, wonderful Raymond Drewry from MovieLabs. Raymond, take my arm, please. You're good. Cedric is on his way. Okay, you're gonna be the official clicker, Cedric, for everything that requires any physical appeal at all. So we're right here, Cedric, with this white button, pointing towards the link. Right. Great. One more round of applause for MovieLabs. Thank you very much. I'm gonna donate the first fifteen seconds or so to the accelerator project in general. MovieLabs was in the very first one in 2020. We did a project on archival storage of computer graphics. That accelerator firmed up some of our assumptions in our research projects and gave us some new ideas, which are now some of the very basic material that's turning into what we're gonna talk to you about today. So accelerators can build on things and then come back five years later to haunt you again. I don't know. So, about the name MovieLabs: we don't just do movies. We do television, ads, any audiovisual content, that's fine. And the main thing we try to do with this project is break down the silos. In almost all media workflows now, there are three common problems. One of them is: where's my stuff? The other one is: what is that stuff? Right? And the third one is knowing how to respond to "I don't know". So, anyway, this project is about breaking down those barriers and answering those existential questions. And the project we're bringing is actually a use case that is very difficult. And those of you who have been through it know: planning a reshoot is actually a complete nightmare.
You have to find the people, you have to find the equipment, you have to find the place, and gather a lot of data that is actually shared in emails and documents. It's a real, real challenge. There's already been a POC, shown at IBC last year, and this was driven by members of the industry forum, the MovieLabs industry forum. And the goal, actually, is this: you have different data stored in different tools, and what we want to achieve is to connect the data while preserving all the security, access, confidentiality and rights management you need. So we had Avid, Adobe, we had Console, Leando. So really, what we want to achieve with this new step is to broaden the range of tools that we can connect and make the model stronger. So one of the things we do to break down these problems is we have a few key components. One of them is common data models. Everybody should talk about things the same way. I dare you to go into a room full of production and postproduction people and ask them what "take" means. Shared storage is important. You can call it the cloud or whatever you want, but with shared storage, you know you're getting the same thing and not a modified copy or whatever. Shared metadata: metadata is as important as the assets in modern media workflows. If you get it wrong, who knows what's going to happen to you later? And the last technical pillar is zero-trust security. Security matters a lot. Nobody wants to talk about it. Everybody should talk about it, and everybody should actually do it. And this entire project we want to be led by vendors, with their products or with their research teams. We want people who are putting this kind of thing into products. We want to accelerate that, as it were. Yes. So the real innovation here is to bring production into a knowledge graph. This is something that we have in the digital world, and it is something that is very interesting for connecting people, data, tools and storage together, because we have a lot of data that is only accessible through human connection: a lot of SMS, WhatsApp, emails. So all of that actually should be readable at some point, so we can make that information browsable and actionable. And something very important: obviously, we've talked a lot about AI. If you want to enable AI in your workflow, you need some structured data. And right now, I mean, it's a real challenge to find that information that sits somewhere in an email, in an SMS. So it's an opportunity to actually put things together. Okay. So as for people working on this, we have officially NBCUniversal as a champion along with MovieLabs, which is great. And we're working on a couple of other studios you've probably heard of, although some of them may have merged with others by the time IBC rolls around. I say that every few years, and it keeps coming back. I don't know why. And for the participants, we have current members of the 2030 Industry Forum; come by the booth and we can tell you all about that and how to participate there. And we're very interested in people who aren't forum members, who are interested in interoperability, and who have their own products and services they want to use, especially people who use AI in interesting ways with structured data. We've seen some very good innovations there, but not a lot of product yet. So we'd like to see some products in that space, for things like tagging and script analysis and so on. Yes.
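A small sketch of that knowledge-graph pillar, using the real rdflib library: production entities (a take, its scene, the storage location of its media) become linked, queryable triples, so "where's my stuff?" turns into a query instead of an email thread. The namespace and terms below are invented for illustration, not the MovieLabs ontology itself.

```python
# Tiny production knowledge graph with rdflib. The example.org vocabulary
# is a hypothetical stand-in for a real shared data model.
from rdflib import Graph, Literal, Namespace

EX = Namespace("https://example.org/prod/")
g = Graph()

take = EX["take/scene12-take3"]
g.add((take, EX.partOfScene, EX["scene/12"]))
g.add((take, EX.recordedOn, Literal("2026-03-14")))
g.add((take, EX.storedAt, Literal("s3://dailies/scene12/take3.mxf")))

# "Where's my stuff?" as a SPARQL query over the graph:
q = """SELECT ?loc WHERE { ?t <https://example.org/prod/storedAt> ?loc }"""
for row in g.query(q):
    print(row.loc)
```

Once dozens of applications write into a shared graph like this, reshoot planning becomes traversal (find the people, equipment and locations linked to scene 12) rather than archaeology across inboxes.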
And MovieLabs will be providing a set of sample data, training on the Ontology for Media Creation (actually, we've seen ontologies several times in here; that's very good to see on the screen), and also existing extensions to connect to. So there's already an ecosystem here; the goal is to expand it. Alright. Okay. So this is what we want to see: a bunch of applications working together, sharing data, making the creative process better for the people doing the creative stuff; making the reshoot easier rather than taking a week; making deciding which shot you want to use easier than it would have been otherwise; all that stuff for which you currently have to be your own little IT person or have an IT department to help you out. And we want to really push on what happens with a knowledge graph. If you put in all this information gathered from tens or dozens of little applications, what can you see out of it? What new things can you imagine and create, and so on? So please come help. Yeah. Thank you. Thank you very much, Cedric. Thank you very much, Raymond and Cedric. Great to have you back. You were involved, as you mentioned, back in 2020. It seems like a long time ago. I think that was partly during COVID, wasn't it? During the pandemic, we had all your guys on Teams from Hollywood. It was a brilliant project and great to see you guys back. Okay. So next up, and I'm excited about all of these pitches, they're all astonishing, but I think the next one is the one we heard a lot about this morning from Sandeep at DAZN. And so it's a real pleasure to welcome Caroline Everton from DAZN. Caroline, come forward. You've been involved before, haven't you? I have. It's good to come back. It's been brilliant to have you here. We're so thrilled. The Delta Protocol: Live Media Reinvented. I think this is really visionary stuff here, so the floor is all yours. If you use the clicker, you press it. Which one is it again? The arrow down. The sticker on the clicker. Thank you. Okay, so I'm Caroline Everton from DAZN. So, Delta Protocol: Live Media Reinvented. For those of you who were in the room, this is almost like an extension of our CTO keynote from this morning. We know that this is a big statement, but it's also a big challenge and an even greater opportunity. So: live video was built to move pixels; the next decade will move intelligence. I want to pause here for a minute to let that sink in a little bit. For the past thirty years, we've done some amazing transformation in our industry. We've compressed pixels, we've optimized compression, we've squeezed things into smaller frames, we've pushed bit rates down. Yet AI is exploding everywhere around us, you heard from Sandeep this morning, but the transmission layer has not really changed. We still transmit frame after frame, even if nothing meaningful has changed. I think we're all aware of that problem. It's the speed at which we respond to it that we are challenging with this project. So we know that it works, right? We've been doing it for several years. But it comes with cost, it comes with latency baked in, it comes with complexity in the infrastructure. So we are inviting you to think about things differently, because more than anything, it's also capping personalization and capping intelligence.
So think about this: we are innovating at the peripheries of the workflows, and we are not really innovating the transport layer. This is what this project is about. So let's think about things a little bit differently. Here's the challenge. What if live video could be not just sent, but actually understood? What if the broadcast chain was actually content-aware, actually knew what it is carrying, and decided in real time what really matters and transmitted that? What if devices and the network could collaborate in real time to create an experience that is actually relevant for the customer and no longer just one-to-many? So for the first time, we strongly believe, and we know that many of you will too, because we've been having these conversations for the past few years in isolation with many of you, we believe that every condition exists to change how live video and data move through the world. The convergence of AI, edge compute, and the demand for personalization has created these conditions for us. So this is no longer just a dream. This is the time to make it happen. So just imagine what live digital could actually become if we had all of these things come together cohesively. The question now is how do we actually get there? And it's definitely not by just doing more compression, right? It's definitely not just optimizing the peripheries and hoping that these things will magically come together. It's actually figuring out how to make this work. And that means redesigning how live video moves. So what are we actually building with this project? We are prototyping the very first AI-native transmission model. So what does this actually mean? What we are proposing is that live video is treated as a sequence of scenes and no longer just a sequence of frames. So there is meaning, there is intent being analyzed and understood in that transmission. Delivery alone is no longer the job. AI will identify the meaningful change in real time, and only what is considered meaningful will move through the network. I'll show a conceptual diagram in a moment, so we can get less theoretical and start discussing what this could actually look like in practice. So at the edge, the experience is reconstructed in real time, contextually. That means all of the data we've been gathering about devices, about the network, about the customer's wants and needs can come together to create an experience that is actually relevant for the user. So, no longer one-to-many. The success criteria are clear. We want to prove that we can separate what we call base continuity from meaningful change. We want to prove that we can reduce bandwidth and compute measurably. We want to maintain broadcast-grade fidelity and sync. I mean, we are a pure sports platform, so for us, being able to keep things in sync and maintain rights and all of those things continues to be important. This is not about throwing those things away. It's about utilising the technology to deliver an experience that is hyper-personalised, but still with the integrity and the high quality that we expect from a sports broadcast. And obviously, we want it to be fully compatible with existing ecosystems. You know, we don't want to have to swap everything out, because that would take decades to do. So staying compatible with things like MPEG and CMAF and all the standards is a must.
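As a purely conceptual sketch of "send the delta, not the frame", the code below compares each frame against the last transmitted one and ships it only when the mean pixel change crosses a threshold. Real scene-level and semantic significance detection would be far richer than this pixel difference, and the frame source and threshold here are invented so the example runs on its own.

```python
# Conceptual sketch only: base continuity vs. meaningful change.
# A real Delta Protocol would use semantic/scene understanding, not a
# raw pixel difference; the threshold and frame source are invented.
import numpy as np

def meaningful_change(prev: np.ndarray, cur: np.ndarray, thresh: float = 8.0) -> bool:
    """Crude stand-in for the intelligence layer's significance detector."""
    return float(np.abs(cur.astype(np.int16) - prev.astype(np.int16)).mean()) > thresh

def stream_of_frames(n=100, shape=(72, 128)):
    """Toy frame source with an occasional real scene change."""
    rng = np.random.default_rng(0)
    frame = np.zeros(shape, dtype=np.uint8)
    for i in range(n):
        if i % 25 == 0:
            frame = rng.integers(0, 255, shape, dtype=np.uint8)
        yield frame

base = np.zeros((72, 128), dtype=np.uint8)   # last transmitted base continuity
sent = 0
for frame in stream_of_frames():
    if meaningful_change(base, frame):
        base = frame                         # "transmit" the delta; update base
        sent += 1
print(f"transmitted {sent} of 100 frames")   # the rest is reconstructed at the edge
```

The bandwidth and compute savings the pitch claims come from exactly this asymmetry: most frames carry no meaningful change and never need to cross the network.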
And we'll also use a live proof of principle at a football match to demonstrate this as we work through the project. So we believe this is all measurable. Again, we started those conversations with many of you in isolation; this is actually about how we bring it all together. But most important, just to make sure that we are all aligned, and I know that I've come to the end of my time: this is not content generation. This is a change in the transmission layer. So, very quickly, this conceptual architecture. This is a shift of where the intelligence lives, to inside the transmission layer itself. Acquisition can stay the same; we still capture the content as is. There will be continuous audio, because that's important: this is football, there will be commentary. So, today we send continuous frames. In the Delta Protocol model, we separate two things: the base continuity from the delta, and the delta is that meaningful change. Our intelligence layer will then be able to detect motion, context and significance, and all of these things will be controlled by a control plane that brings in the policy, the context for the customer, the wants and needs, the device, and also the intent, and provides that feedback to all of the components of this layer. This is then packaged in a delta, standards-aligned layer that is delivered to the edge, delivered to the device, to reconstruct the experience. So nothing breaks, nothing degrades. But what we are able to do is unlock that next layer. You heard from Sandeep today: hyper-personalization is where we are headed, and we believe that this is the transport layer that will get us there. So think about it: all built on the same signal, but reconstructed contextually, built on metadata provided by the customer device. None of the things you're looking at here are new. They are all existing capabilities. I promise I'll be really quick. I can see Mark looking at me. I've exceeded my time. But it's how we bring all these things together. This is where the transformation really starts, right? So this is all about creating a new transport layer that brings all of these things together: perceptual signal extraction, semantic intelligence, the delta, and deterministic reconstruction. We are leading as the champions, but we are officially starting a call for participation today. We are looking for other co-champions. We are looking for all of the vendors and the technology partners you can see listed. We will be on table nine. And just to finalize this piece: this is how we strongly believe we can unlock personalization in the new era of OTT experiences. Thank you, Caroline. You're good. Don't you worry. You just buy me a beer later. That's all good. So thank you, everybody. That was Caroline from DAZN. And, of course, that was pitch nine. So we actually have three more to go. And once again, if you are watching the livestream, remember, you're gonna see some surveys. You can click on the projects that are taking your fancy and join the first call next week. So the next project is called IFL, or the Immersive Festival Live: remote presence at scale. And it is the magnificent return of the immersive, interactive, spatial audio projects that have been evolving over the accelerators for many, many years. And I'd love to welcome to the stage once again Rich Welsh from SMPTE.
Huge round of applause for Rich. And also Luke Farwell from Reality Hack at MIT. Another round of applause, and not only for this wonderful coat outfit you're gonna be wearing, so you'll be easy to find for pitch ten. Take it away. We've got some cool shit, you know. I'm Rich Welsh. And I'm Luke Farwell. And this is our pitch for the Immersive Festival Live: remote presence at scale. And it doesn't work. Oh no. Try it. You have a technical issue. I'm a woman in tech. Oh, it works. Yeah. Okay. You know, MIT, it's really hard with technology. What we present to you tries to address a real problem. We want to lift that guy off the couch. This is not everybody, but certainly there is a part of the audience that feels exactly like that. They are bored to death with the current mode of delivery, which is 2D with limited audio, mono or stereo. There is limited innovation, and it's been going on this way for years and decades. Of course, the mode of delivery has changed to different devices, but essentially it is the same thing. On the other hand, at the far end, we have 3D solutions emerging, like social VR. However, those are really not well integrated with existing live shows and live festivals. So that's a challenge. And standalone solutions cannot provide the same dynamics: you know, we cannot feel as present as we are at a live show. And obviously, there have been different opportunities to experience these types of events in VR, but there isn't really a usable blueprint. Oh, now I need water. Why is this not working? This is meant to be on this mic. Okay. Yeah. It's my voice, not the microphone. I need some water. Yeah. So what we wanna do is create a reusable blueprint that allows us to do this at scale, and prevent the kind of loss of ROI that comes with having to do these things as a first-time experiment every time. Okay. Now I'll speak, because, you know, his voice is out. So we definitely want to put together a lot of different components here, and the whole magic is all about the composition of individual technological components. We have ultra-high-definition live streaming in 180, 360, stereoscopic, and we will see further on how our partners address that. We have spatial audio in sync: again, immersive, binaural, object-based, in sync. That's a major innovation, to have it in sync at incredibly low latency and high bandwidth. And then, combined with two-way presence, that's a completely new component. We want it to be interactive, using the leading edge of AR and VR technologies to deliver that live at low latency. And we wanna make it accessible. We wanna make it a seamless experience for people both in real life at the concert, experiencing the AR component of this, which we'll talk about in a moment, and also for the people experiencing it in VR at home or wherever they're getting their content. Accessible by design, so it's going to give you that kind of seamless experience where you can be either on the stage or backstage or actually there in the audience. With that in mind, what we want to do is have multi-perspective capture, so it will be 180 or 360 VR, preferably stereoscopic, definitely UHD, high dynamic range. It's not just about VR or AR at the consumer end.
It's also about people actually at the concert being able to participate with the kind of ghosts of the audience that aren't there, and even the artists as well being able to see the virtual presence, the people who are participating from home. And of course, that has to be in sync and real time, and there are all the challenges mentioned by the other projects too: you know, high throughput, high bandwidth, low latency, and it all has to work together. And on the far end, we have the emerging, leading-edge immersive technologies. We all know this year has been revolutionary with the uptake of AR and AI glasses, and there will only be more: pretty much all the companies have many models in their pipeline. So 2026 will be transformative in this space, and we wanna jump into it straight away. So the champions are, of course, us, SMPTE, and Reality Hack at MIT. We've also got King's College London and the University of Galway on board. We're in discussion with Coachella, which we're super excited about. It represents a great opportunity for us, because not only is it obviously an incredible live event to bring to a virtual audience, but we also have some participants who are already going to be present at Coachella, for instance in the immersive space and even in the robotics, so capture of the live event in the more traditional way, on stage. That includes some of our other participants. Yeah. And of course, we invite all the partners working in 5G and networking technology infrastructure; these are all essential components for this to work at low latency. And we're also talking to providers of emerging, leading-edge AR/VR technologies like XREAL. We're also talking to Snap and other partners, and it's only growing. So these are our partners, and you're all invited to join this very applied project; pretty much everything fits here. And a shout-out to Shure, of course, one of the sponsors and also involved in this project for the microphones. And we have a few more things. So at the end, what we want to provide: we don't wanna replace live TV. It's not going away. It's there and will be innovated in many of the ways presented today. But we wanna extend this offering to provide a new premium remote ticket, so that people can experience a bit more: you know, 75% of what you feel, delivered digitally. And we want a real use case, a real deployment, something that works in the real world. As I mentioned, Coachella would be one of our goals here. And we want the reusable blueprint, so that you can all go out and do it for yourselves; we'll make all the mistakes for you, so you can do it without fear in the future. And that's basically what we wanna show you. This seems like the future, but it's something we can actually deliver with the technology of today. The technology is out there. So let's put it all together and make it happen. Thank you very much. Thank you. Yeah. Thank you very much, guys. That's really exciting. Coming from a music industry background, I'm really, really excited about that one. Next up, we have got From Broadcast to MeCast: personalized broadcasting for the next generation of media consumption. And that is Nivendran again. Hello, Nivendran, from Astro Malaysia, and Ajay Chandra from, oh, my old friend Ajay, hi there, from TCS, who helped us enormously with the C2PA project last year. Thank you very much. Over to you guys. You've got your five minutes.
There you go. I think you know how the clicker works. Yep, all good. Thank you. Am I okay?

I'm going to talk a bit more about Astro. Astro has been the leading commercial content and entertainment company in Malaysia for a decade. So why am I talking about Astro? Because the media landscape is now changing. The younger generation are now flocking to YouTube, TikTok and social media, or even consuming content from pirate sites, which really pose a cybersecurity risk. So there's a risk in how we're actually going to deliver content to end customers: they're consuming sports, dramas, entertainment, everything on social media. The pay-TV market is also now heavily saturated, and the cost of content licences keeps climbing, so we have to take the opportunity to monetize those content rights. The only way is to deliver personalized content to our customers, to the different segments of customers.

Our answer is to move from traditional broadcast delivery to a MeCast kind of solution. It might simply complement the main broadcast stream. We can do this with AI-driven content processing: we can produce dozens of different versions of the content, for example highlights, behind-the-scenes material, or localized versions in different languages, to reach our different customer segments. By doing that, we're also creating opportunities for revenue generation: we're providing segmented, targeted content that creates more inventory from our content. This lets us compete for the customers who are consuming content on social media or even the pirate sites. In short, Astro feels we can deliver content that feels personal and relevant, on every platform our audiences are on. I'll let Ajay explain in more detail.

Thank you, Niven. This is a common problem we've seen most linear broadcasters and streaming platforms facing. So, as Niven mentioned, we thought of building an agentic AI architecture, or we can call it a platform, which can redefine your entire broadcasting and streaming operation. To simplify, we've highlighted the objectives, but we want to divide them broadly into two themes. One is automate and monetize, which we're calling AM, and the other is personalize and multiply. In the next couple of slides I'll quickly cover what we mean by AM.

As part of an autonomous intelligent content library, we want to create a completely autonomous and sovereign content intelligence platform which continuously takes in audience intelligence data and viewing behavior data, so that it can train itself and provide deeper engagement and rich insights. Why do we need this? Because this is what we want to achieve as part of MeCast: a tailored experience for the audience, an audience of one, which can generate more revenue and create new monetization models. That's the AM.
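As a rough illustration of that AM loop, here is a minimal sketch in which viewing-behavior data drives both audience segmentation and the content variants the pipeline is asked to produce. The event fields and segment rules are invented for illustration, not TCS's design:

```python
from collections import defaultdict

# Invented viewing events; a real platform would ingest these from players/CDNs.
events = [
    {"user": "u1", "genre": "sports", "watched_s": 2400, "device": "tv"},
    {"user": "u2", "genre": "sports", "watched_s": 180,  "device": "phone"},
    {"user": "u3", "genre": "drama",  "watched_s": 3000, "device": "tv"},
]

def segment(event):
    """Toy segmentation rule: depth of engagement plus device implies a variant."""
    if event["watched_s"] > 1800:
        return "deep"        # long sessions: full-match or full-episode cuts
    if event["device"] == "phone":
        return "snackable"   # short mobile sessions: vertical highlight clips
    return "casual"

# Aggregate behavior into per-segment demand, which drives what the
# AI content pipeline produces next (highlights, dubs, localizations...).
demand = defaultdict(int)
for e in events:
    demand[(e["genre"], segment(e))] += 1

for (genre, seg), n in sorted(demand.items()):
    print(f"generate '{seg}' variants for {genre}: {n} viewer(s) in segment")
```

The point of the loop is that the same behavioral signal both refines the segmentation and creates the inventory of targeted variants the pitch describes.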
Coming to the next theme: personalize and, at the same time, multiply. Why personalize? As we discussed extensively this morning, it's about a completely tailored experience for the individual user. What we looked at is creating three different personas. For example, one user, Alex, wants in-depth analysis of the sport; he's more interested in understanding what's happening than in the standard highlights. A different persona, Beth, is the casual fan who just wants to see the normal highlights. And another user is interested only in the key moments: fouls, the key goals, the misses, those kinds of things. So how can we generate that in near real time? We make it very lightweight; we're calling it smart bookmarks. For example, when Alex logs in, he gets those moments based on his personalization. And on top of that, as I mentioned, we're calling it multiply, because we want to reach more geographical users, users based in particular regions or countries, so they can get completely localized content. That's where we're bringing in some of our partners, including the CAM data.

As for the architecture, we won't go through it here, but we want to work with Astro and MBC, the Middle East broadcaster, and we need more champions to work with us to define the strategy and the use cases. At the same time, we're looking for more partners who can help us build the end-to-end media supply chain, create custom LLMs, and integrate the end-to-end architecture as part of this accelerator program. So what is our vision? We want to help democratize and redefine broadcast economics by establishing the blueprint of the agentic AI architecture. At this time, we have two great champions: Niven from Astro, and Middle East Broadcasting. We're looking for more champions, because we know this is a common challenge across the industry, and we need participants as well, who can help us build the end-to-end supply chain, the various LLMs and SLMs, and the end-to-end intelligent platform, with a prototype we can deliver in three to four months. Thank you for giving us this opportunity. Thank you.

Thank you, guys. And we have now come to our last and final pitch. That's right. One whoop? I think we deserve a few more whoops as we introduce to the stage our official 2026 incubator project, duly upgraded from the last few years. And, of course, congratulations on the Project of the Year award 2025, graduated to the incubator. And this year, to really put some of this to the test in an MVP: Story Intelligence, the agentic production ecosystem. So without further ado: I've got the award-winning John Roberts. I've got the award-winning Morag Macintosh. I'm sure you guys have won awards sometime in your day; maybe soon, this year, who knows? All the way from Chicago, Brian Hoff from Associated Press, welcome. A brand-new champion.
And also Alex Bassett from NBCUniversal, originally from Britain but obviously living abroad. So, four champions to present this. Without further ado, welcome our official incubator.

Thank you. We brought the awards up because we're hoping to cash them in for another forty-five seconds on the clock, if that's okay.

So: this is the most profound disruption that our industry has faced. We no longer ask whether AI will transform production. That's already happening. The question now is whether we design that transformation or allow it to define us. Agentic AI is a structural shift in the production stack: systems that reason and act, workflows that adapt in real time, intelligent integrations and automations operating at scale. Meanwhile, as broadcasters, many of us share a common problem. Our tech stacks are fragmented, our data is siloed, our workflows break under pressure. Our tech does not always support our people. But agentic AI is a rearchitecture moment. This is the opportunity to re-envision the stack.

As you've heard a little already today, this project has its roots in three consecutive IBC Accelerators that have broadly explored how software-defined production and AI might reshape live production, through the lens of integration, automation, and the interfaces where workflows meet our skilled people. Each year the picture has become clearer, and what we see now with agentic AI and the frameworks emerging around it is the convergence point. These aren't separate development tracks anymore but one architectural challenge, and that's what this incubator seeks to take on. We've proven that agents can work. Now the task is to make them production-ready.

Building on last year's proof of concept, this incubator would advance the work across the full content life cycle, beyond the control room, from isolated capabilities to an integrated system. The organizing principle here is story context: a shared, persistent understanding of a story that flows end to end, from tip-off through to distribution and into archive, accessible to every agent and every human at every stage. Today, that context fractures at every handoff point. Notes get lost, decisions are obscured, intent gets diluted. This project builds the architecture to preserve it through the chain.

Success will be a full agentic content life cycle: an end-to-end agentic layer coordinating tools, data, and workflows; a live demonstration of breaking news moving from the first wire signal through to multiplatform outputs, carried by coordinated agents throughout; security by design, with guardrails embedded at every stage, full transparency and auditability, and human editorial control maintained throughout; and a reusable, open framework pattern that the wider industry can adopt. This will be the shift from experimentation to infrastructure.

This is what story intelligence looks like as a system. That red banner at the top is the story agent: a single, persistent brain that holds everything about the story and carries it from first alert to final publish. It doesn't hand off. It doesn't reset. It keeps getting smarter. Underneath that, five stages of production, each with its own coordinator and specialist agents. Discovery agents find the story. Planning agents build the angles. Create agents produce assets for every platform. Execute agents run the show. And distribution agents push it out the door. Every one of them reads from and writes back to the same place.
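As a rough sketch of that read-and-write-back pattern (the names and fields here are invented for illustration; this is not the project's actual schema), each stage agent appends to one shared, append-only story record rather than handing off a copy:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class StoryObject:
    """One shared record per story; it only ever grows, so context is never lost."""
    headline: str
    entities: list[str]
    facets: dict[str, Any] = field(default_factory=dict)   # assets, rundowns, SEO...
    audit: list[str] = field(default_factory=list)         # who wrote what, for traceability

    def write(self, agent: str, key: str, value: Any) -> None:
        self.facets[key] = value
        self.audit.append(f"{agent} wrote '{key}'")        # auditability by design

# Stage agents are just functions that read the story and write facets back.
def discovery(story: StoryObject) -> None:
    story.write("discovery", "source_wires", ["wire-alert-001"])

def planning(story: StoryObject) -> None:
    angles = [f"angle for {e}" for e in story.entities]    # reads shared context
    story.write("planning", "angles", angles)

PIPELINE: list[Callable[[StoryObject], None]] = [discovery, planning]

story = StoryObject(headline="Breaking: example event", entities=["place", "person"])
for stage in PIPELINE:
    stage(story)            # the same object flows through every stage
print(story.facets, story.audit, sep="\n")
```

The append-only facets plus the audit trail are the point: handoffs become reads of one shared record, which is exactly where the pitch says context is currently being lost.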
That expanding bar is the key: the story object. It starts as identity, a headline and a few entities. By the time it reaches distribution, it's carrying assets, rundowns, graphics, editorial decisions, playout logs, SEO metadata, twenty-plus skills deep. It only ever grows. Context is never lost. And all three output tracks, broadcast, social, and digital, run simultaneously off that story object: one source of truth, every format. At the bottom, editorial standards are enforced across every stage, and a shared skills layer lets any partner's capability plug in through open protocols. No walled garden, no vendor lock-in. This is one unified system that scales from a single breaking alert to an entire media pipeline.

So this initiative brings together some of the world's leading broadcasters, news companies, and technology innovators to build the future of agentic production. Our champion organizations represent worldwide daily audience reach and have centuries of combined editorial excellence. Together, we're establishing standards that will guide the industry's transformation. We are actively seeking partners who bring deep expertise in cloud-native architectures, multi-agent system design, and production-grade DevOps practices. If your organization has pioneered scalable AI systems or production automation, we would love to hear from you. This is an opportunity, as we say, to help shape foundational standards that we think will define the next generation of broadcast and media technology.

If we get this right, we define the next production standard. We fix the broken threads, and in doing so we enhance how we tell stories. And we keep our people front and centre, using AI to support our teams while human judgement rules. This isn't just news: the system works for sports, entertainment, live events. One framework, many genres. And we bring new methods for safety by design for agentic AI in production, with transparency, traceability, and human accountability built directly into the stack. This is not a product. We're defining a new framework for agentic production, and we're inviting the industry to build it with us. Come and talk to us at Table 12. Thank you.

Thank you, Donald. Thank you. Well, well, well. I'll just keep this little number up here, won't I? Whoop. Whoop. Over to you.

Wow. Thank you so much to all 12 projects that you've just seen. Our minds are blown. We've seen so much work go into all of these pitches, and into the teams that have come together since we started the ideation of some of these projects. But today is the day you have a chance to get stuck in, if you're interested. And remember, you can go meet and greet all the teams at the tables. Just tap your badge on a table if you're interested in that project and want to learn a little more. But by all means, keep networking. The number of each pitch will be at its table, and you'll see it all on the screens outside as well. I think we're going to move out of this room in a minute and go do that, and we want you to go and express all your interest in these amazing projects. I'm blown away by some of that.

So thank you to our sponsors, to Shure, to AMD, HP, and to the EIT, our innovation partner. Thank you to all of the speakers: Sinead Greenaway, the acting CTO of the BBC, this morning.
Sinead was amazing, and so was Sandeep from DAZN, the CTO of DAZN. Just amazing. And then our panelists, our champions panel, wow, you did a brilliant job on that, Mookie. They were great. You weren't so bad yourself. They were great.

So thank you, more importantly, to you, our wonderful audience. It's a testament to the interest in these amazing projects and ideas that just about every seat is still full at the end of the day, isn't it? We have one more very important note. After the networking, we have happy hour, because that's very, very important. And we also want to give a huge special shout-out to our colleague Karen Boyd for hosting the livestream all afternoon. Thank you so much, Karen. Thank you, everybody, for watching. Thank you again to the BBC. And before we go, thank you to the production team in the gallery back there: Mark Diamond and his team, Brendan, the whole crew. They were so great.

That seemed like a pretty good moment to step in. So, does anybody have any questions before we wrap up the livestream? Just to note, as we've been saying in the chat, you need to register your interest in joining the projects via the survey link. The survey link will stay open until close of business on Friday. I'm looking at our marketing person: I think it's Friday. Mitch gave a thumbs-up, so it's Friday. The other important thing: if you can't make the first project call time that we've specified on the survey, we can either send you the recording or connect you to the project group in another way. So please register your interest either way, and hopefully we will see you all soon on a project call. Are there any questions before I go off? Any comments? Oh, thank you, guys. Ernest, I did also see you were into my Coachella idea. Let's do it. Alright, it looks like there are no questions, but you can find me at kBoyd@ibc.org if there's anything else. Otherwise, we hope to see you on the project call. Thanks so much, everybody. It's been really great to host this with you, live from the BBC. We'll see you soon.