Times are changing, and this is good news. Barely 10 years ago, the notion that we'd be collectively working hard as an industry to deliver a KPI-defined quality of experience (QoE) would have been laughable. With service providers struggling to generate revenue from online offerings, there was a view, never voiced publicly, that OTT services were very much a best-efforts value add. Any thought of spending significant dollars on monitoring and development time purely to shave fractions of a second off content delivery times would have evaporated like the remnants in the bottom of a bottle of scotch.
Today, QoE is becoming huge business, and with service providers now generating significant sums from these services, the pressure is high to deliver five-star experiences to consumers who are now paying, often double-digit sums, for the privilege.
When we think about measurements of not just a live sports event—but any deliverable—what are the first things to come to mind? Audience numbers? Average video bitrate? Unique viewer minutes? Most probably all of the above. More importantly, what are the things that don’t come to mind?
At Spicy Mango, our team has spent years building media technology solutions and consulting with clients on everything from platform architecture to quality-of-service implementation. Here are a few of my tips for measuring the success of your content.
Data is key
Data is (or should be!) a key pillar in ensuring that our service meets increasingly demanding consumer expectations. If, as technologists, we are doing our jobs well, we use this data to feed back iteratively into the product and platform, so that each experience we deliver is better than the last. But as an industry, there is no denying that we are still hugely focused on the video playback experience. How we measure and monitor the entire consumer journey, from end to end, is often spared little thought.
In traditional video on demand (VOD) services, we see what we know as peak viewing periods, usually between 6 p.m. and 11 p.m. Across this broad window, the traffic reaching control plane services such as authentication, catalogue, location services, and entitlements is generally evenly distributed: a consistent and steady stream of activity. After all, our VOD assets have no fixed start or end time, so there is nothing driving everyone to view concurrently at any one moment.
In a live sports environment, the polar opposite applies. As consumers rush to access the service in those few moments before the start of the live event, our peak viewing period of hours is compressed to a window of activity of only a few minutes. Requests to key platform functions like authentication, purchase flows, and entitlements all surge, resulting in what we refer to as peak concurrency.
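To make that contrast concrete, here is a minimal sketch (using hypothetical session-start timestamps pulled from access logs) of how you might measure peak per-minute concurrency. Even a simple per-minute count makes the live-event spike obvious against the evenly spread VOD traffic:

```python
from collections import Counter

# Hypothetical session-start times (HH:MM) taken from access logs.
vod_starts = ["18:05", "18:50", "19:12", "20:30", "21:45", "22:10"]
live_starts = ["19:58", "19:59", "19:59", "20:00", "20:00", "20:00"]

def peak_per_minute(start_times):
    """Return the busiest minute and how many sessions began in it."""
    minute, count = Counter(start_times).most_common(1)[0]
    return minute, count

# For VOD, the busiest minute holds a single session start; for the
# live event, half of all sessions land in the minute of kick-off.
```

In production you would run the same aggregation over real log timestamps, but the shape of the result is the same: a narrow spike that your authentication and entitlements tiers must absorb all at once.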
Over the last two years, there have been a number of significant outages in the OTT platforms of major brands worldwide. Surprisingly, it's not the video delivery services that have been to blame, but failures at the control plane. Our favorite CDN providers do not (and should not) front the services that facilitate authentication or entitlements, meaning those huge traffic volumes are directed straight back to the core of the platform. We have been so focused on our video delivery metrics that little thought has been given to how the other major building blocks in our services will hold up when traffic volumes peak.
So, now I think we’re getting close to being able to answer our key question. How do we monitor the success of our live sports event? We need to go back to the beginning of our consumer journey. If our users are unable to log in or pass a simple entitlements check (either at all or within seconds), they have little hope of making it as far as access to the products we’re fundamentally selling—the video assets themselves.
The widespread adoption of the term "microservices" is worth taking note of; within global video enterprise architecture circles, it has been something of a mantra for a long time. The notion of smaller groups of loosely coupled functionality and capabilities grants us the ability to monitor, analyze, and, as a result, scale different subsystems at different rates as traffic volumes vary. How we monitor key functionality in our services, including that provided by the third-party SaaS and PaaS suppliers present in our architectures, is key to how we scale to deliver the quality of experience our consumers expect. Do not be fooled into thinking that your third party has this covered: take responsibility for holding them to account, ensuring response times and interactions behave exactly as they should.
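Holding a supplier to account starts with measuring them yourself. The sketch below, with hypothetical service names and SLA thresholds, shows one way to compare measured p95 response times against the figures agreed with each third party:

```python
# Hypothetical p95 latency thresholds (seconds) agreed with each supplier.
SLA_P95 = {"auth": 0.5, "entitlements": 0.8, "drm_license": 1.0}

def p95(samples):
    """95th-percentile latency from a list of measured response times."""
    ordered = sorted(samples)
    idx = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[idx]

def sla_breaches(latencies_by_service):
    """Return the services whose measured p95 exceeds the agreed SLA."""
    return [svc for svc, samples in latencies_by_service.items()
            if p95(samples) > SLA_P95[svc]]
```

Feeding this from periodic synthetic checks (rather than waiting for real users to hit a slow path) gives you evidence in hand before the event, not an argument after it.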
Breaking our monitoring down into unique groups of functionality helps greatly here. Dashboards and reports that trend user activity across authentication, catalogue, entitlement, DRM, and other systems will quickly and easily identify traffic hot spots, showing where additional capacity should be added or where architectural changes are needed to bring optimizations.
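The hot-spot idea reduces to a simple aggregation. Here is a minimal sketch, with hypothetical subsystem names and log records, that groups requests by subsystem and flags any whose average response time crosses a threshold:

```python
from collections import defaultdict

# Hypothetical access-log records: (subsystem, response_time_ms).
LOG = [
    ("auth", 120), ("auth", 450), ("auth", 480),
    ("catalogue", 90), ("entitlements", 300), ("entitlements", 310),
    ("drm", 150),
]

def hotspots(records, threshold_ms=250):
    """Group requests by subsystem; return those whose average
    response time exceeds the threshold, sorted by name."""
    buckets = defaultdict(list)
    for subsystem, ms in records:
        buckets[subsystem].append(ms)
    return sorted(s for s, times in buckets.items()
                  if sum(times) / len(times) > threshold_ms)
```

A real deployment would do this in a metrics pipeline rather than a script, but the grouping is the point: per-subsystem views tell you where to add capacity; a platform-wide average tells you nothing.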
Measuring how successful our live sporting events are requires a holistic but granular approach to monitoring. It means an end to business reports and statistics that focus solely on video views and average bitrates, and the introduction of metrics that chart every element of the consumer's end-to-end journey: from application load time, to login, to catalogue load and rendering, to player page launch, and beyond, right until the last frames of video are displayed.
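Charting that journey means instrumenting each stage and giving it a latency budget. The sketch below uses hypothetical stage names, timings, and budgets to show the comparison that would sit behind such a report:

```python
# Hypothetical per-stage timings (ms) for one consumer journey,
# as captured by client-side instrumentation.
JOURNEY = {
    "app_load": 900,
    "login": 400,
    "catalogue_render": 700,
    "player_launch": 1200,
    "first_frame": 600,
}

# Hypothetical per-stage latency budgets (ms) set by the product team.
BUDGETS = {
    "app_load": 1000, "login": 500, "catalogue_render": 800,
    "player_launch": 1000, "first_frame": 1000,
}

def over_budget(journey, budgets):
    """Stages that exceeded their latency budget, worst overrun first."""
    late = {s: t - budgets[s] for s, t in journey.items() if t > budgets[s]}
    return sorted(late, key=late.get, reverse=True)
```

Run across every session rather than one, the same comparison tells you which stage of the journey, not just which video metric, is costing you consumers.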
When compared to the traditional viewing volumes of our cable and satellite counterparts, OTT delivery still serves only a fraction of the audience. There is no doubt that internet delivery should be viewed as a viable long-term replacement, but strengthening the core components of our platforms to support those vast levels of scale and performance is undoubtedly the key to success.
Interested in learning more? Catch my entire presentation on live sports on demand at REPLAY.