
Video Delivery Methods: Broadcast, the Cloud, and the Future


The Future of Video
While online video systems were initially viewed only as competitors to traditional broadcast systems, there is a case to be made for converging the two systems. By reviewing the production and distribution workflows of each, we find several possible architectures for this convergence that we can use as a roadmap toward the future of video.
Broadcast Video Delivery Systems
Terrestrial broadcast TV systems were historically the first technologies for delivering video to the masses. Cable and DTH (direct-to-home) satellite technologies came next as natural and highly successful evolutions of those systems. Historically, these systems have been deployed on premises, using purpose-built facilities and hardware, and dedicated networks and links for transmitting video feeds between the different facilities and entities in the distribution chain.
A conceptual diagram of the broadcast distribution system is shown in the figure below.

As shown in this figure, there are two classes of content used as input to broadcast systems. Live content typically arrives in the form of live video feeds from remote and field production. Pre-recorded, or on-demand, content may also be provided in the form of mezzanine files from production studios or video distributors.
Both live and pre-recorded videos are then directed as inputs to the master control or playout systems, responsible for the formation of a set of live TV channels. The composition of video segments inserted from different sources into each channel is called a program. Playout systems are also typically responsible for the insertion of channel logos (“bugs”), lower thirds, slots for ads, captions, metadata, etc. Broadcast playout systems are human-operated and traditionally deployed in dedicated facilities (master control rooms).
After playout, all channel streams are encoded and passed to a multiplexer, which combines them into a multi-program transport stream (also known as TS or MPEG-2 TS) intended for distribution. In addition to the channels’ media content, the final multiplexed TS also carries program and system information (PSIP), SCTE-35 ad markers, and other metadata required for broadcast distribution. All these operations are also typically performed on-prem, using hardware broadcast encoders, multiplexers, modulators, and other purpose-built equipment.
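To make the encode-and-multiplex step a bit more concrete, here is a minimal sketch using ffmpeg's MPEG-TS muxer; the file names, bitrates, and service metadata are illustrative assumptions, and a real broadcast multiplexer combines many programs and inserts complete PSIP tables and SCTE-35 markers, which this single command does not do.

# Encode one channel and wrap it into an MPEG-2 transport stream with basic
# service (program) metadata. Illustrative settings only; not a full multiplex.
ffmpeg -i channel1_feed.mxf \
  -c:v libx264 -b:v 8M -maxrate 8M -bufsize 16M \
  -c:a aac -b:a 192k \
  -mpegts_service_id 1 \
  -metadata service_provider="ExampleNet" \
  -metadata service_name="Channel One" \
  -f mpegts channel1.ts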
As further shown in the above figure, the distribution chain in broadcast systems may have multiple tiers – from the main network center to local stations and multichannel video programming distributors (MVPDs), such as cable or satellite TV companies. At each stage, the media streams corresponding to each channel can be extracted, modified (e.g., by adding local content or ads), re-multiplexed into a new set of channels with new program tables and other metadata inserted, and then sent further down the distribution chain to the next headend.
Cloud-based OTT Video Delivery Platforms
Typical workflows used in today’s cloud-based online video platforms (OVPs) are shown in the figure below.

Similar to broadcast systems, such workflows also ingest videos from a variety of live and pre-recorded content sources. However, most of today’s OVPs don’t include master control or playout functionality, and don’t form or multiplex channels for distribution. They simply encode input content as is and package it in formats suitable for OTT delivery. Most commonly, HTTP-based adaptive streaming protocols such as HLS or DASH are used as the final delivery formats.
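As a rough sketch of this encode-and-package step (with illustrative file names and settings, not a production configuration), a single ffmpeg command can transcode a mezzanine file and emit one HLS rendition with 6-second segments:

# Transcode a mezzanine file into a single 720p HLS rendition (VOD).
# A real OVP produces several such renditions plus a master playlist.
ffmpeg -i mezzanine.mov \
  -c:v libx264 -b:v 3M -s 1280x720 \
  -c:a aac -b:a 128k \
  -f hls -hls_time 6 -hls_playlist_type vod \
  out_720p.m3u8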
All processing steps in cloud-based OVP systems are typically implemented in software and operated on the infrastructure of cloud service providers such as AWS, GCP, or Azure.
The Differences Between Broadcast and Cloud-based OTT Video Delivery Systems
The use of software implementations and cloud-based deployment has a number of well-known advantages. It minimizes investments in hardware, allows pay-as-you-go operation, solves scalability problems, simplifies management and upgrades, and makes the whole design more flexible and future-proof. However, it also brings some unique requirements and forces various components and coordination mechanisms in cloud-based systems to be designed differently than in on-prem systems.
Specific differences include:

Granularity of data processing and delays in cloud-based OTT vs on-prem broadcast systems;
Mechanisms used for enabling redundancy and fault tolerant operation of such systems;
Means of scalability, load-balancing, resource provisioning, and other operations as unique to cloud-based operation;
Mezzanine formats, contribution links and transport protocols used for ingest in cloud vs on-prem broadcast systems;
Broadcast encoders and multiplexers vs encoders and packagers used for OTT delivery;
Implementations of ad-splicing and ad-impression analytics; etc.

The Convergence of Broadcast and Cloud-based Delivery Systems
There are several possible architectures enabling convergence of broadcast and cloud-based OTT systems, as we see them evolving in the future.
The figure below shows the simplest model of such integration, already used in many deployments today.

As the figure shows, this model couples the existing broadcast and cloud-based OTT workflows by adding several extra contribution encoders, which direct fully formed broadcast channels/programs to the cloud-based OTT platform.
Minimal integration effort is needed to launch such a system. However, it requires extra contribution encoders to be installed on-prem, and it offloads only the OTT delivery chain to the cloud. All broadcast-essential operations remain on-prem.
A diagram of a system enabling a much more complete offload of broadcast-related functionality to the cloud is shown in the next figure.

Here, most broadcast-chain operations are effectively implemented in the cloud.
The most critical component enabling this migration is the cloud playout system. It operates entirely in the cloud but allows human control and monitoring from remote terminals in the broadcast center. Functionally, it is fully equivalent to existing broadcast playout systems, enabling frame-accurate switching, editing, ad-break placement, and the other control operations required in broadcast.
Moving playout to the cloud also enables feeds from remote and field production to be collected directly by the cloud platform. This eliminates the need to maintain a farm of receivers/decoders, file servers, and the various other equipment typically used for ingest. It also enables much more effective distributed and, if needed, world-scale operations, since cloud platform data centers are present in most regions.
Finally, once the playout system moves to the cloud, most subsequent operations needed for broadcast distribution – broadcast encoding and multiplexing – can also be easily migrated to the cloud. This further eliminates the need to install and maintain farms of broadcast encoders, multiplexers, splicers, and other equipment in broadcast facilities.
This makes the whole system much simpler to operate and maintain, and it brings all the other benefits of cloud operation: scalability, future-proofing, increased reliability, etc.
Evolution of the Brightcove Video Cloud Platform
At Brightcove, we are always looking at ways of improving our Video Cloud platform and making it a natural choice for broadcasters considering expanding their OTT offerings and/or offloading components of their existing workflows to the cloud.
In recent years, we’ve made considerable efforts in this direction. This includes:

Adding support for broadcast-native ingest protocols: TS over RTP, SMPTE 2022-1, SMPTE 2022-2, SRT, etc.
Improving ingest capabilities of our transcoders: improved handling of interlaced content, telecine, mixed-cadence content, gradual refresh streams, etc.
Improving pre-processing and dynamic profile generation (denoising, high-quality format conversion, context-aware encoding)
Improving our encoders to generate broadcast compliant streams (strict HRD control, CBR operation, pass-through of broadcast specific metadata, etc.)
Improving live redundancy and fault-tolerance capabilities of the system
Adding Cloud Playout capability to the platform.

Our work continues, and we look forward to many other ways of improving our system and to working with broadcast customers, cloud platform providers, and other technology vendors toward making mass-scale broadcast operations in the cloud a reality.
More information can be found in the October 2021 issue of the SMPTE Motion Imaging Journal, in the article “Transitioning Broadcast to Cloud,” written by Brightcove’s own engineering experts.


Architecture Reviews at Brightcove


We believe in continuous improvement at Brightcove. That applies to more than just ourselves and our software: it also applies to our processes. Recently, a group of us got together to talk about how our architecture review process was working for us and decided to give it a refresh. We want an architecture review process that is:

Iterative
Comfortable
Adaptable to different types of projects
Respectful to the expertise of the participants

The end result was the following document that describes our updated process.
What is architecture?
Software architecture is about making fundamental structural choices that are costly to change once implemented. [source]
What is the process?
Designing software is a continuous process. If it were possible to know everything we needed to know to make perfect decisions up front, we wouldn’t need agile. To reflect this, our architecture review process has several steps that can be adapted to the needs of each project.

It may not be necessary to perform all these steps. It may be helpful to repeat steps. Generally, a review and a retrospective should be considered the minimum. Do whatever makes the most sense to achieve the following goals:

Make the best software architecture decisions possible
Produce useful engineering documentation
Learn along the way and adapt your plans to what you learn
Keep your peers informed of what you’ve decided and what you’ve learned

The following sections describe each step in the process.
Architecture Workshop
Maybe you’re planning a new project, but just don’t know how to implement it yet. Maybe you have a challenge that isn’t fully defined yet. Maybe you have three different perspectives on a challenge and are not sure how to pick one. Maybe you have an idea for an architectural change and would like to connect it to a problem statement. Rather than hunkering down and trying to answer these questions in a silo, it’s good to start talking to other experts early. This makes it easier to incorporate feedback and consider big changes before the plan gets too established. It also helps the architecture review run smoothly, since the architecture will be more ready.
Checklist

Invite subject matter experts (e.g., leads of impacted teams)
Invite the architecture reviews Slack channel
Suggest 3-8 attendees

Suggestions

Start by fully providing the context and explaining the problem that needs to be solved. Inviting someone who has no pre-existing knowledge can help hone the problem statement and challenge assumptions.
Architecture workshops can be speculative. Maybe the idea won’t end up moving forward, or it could become a hackweek project instead. No problem! If all you’ve done is get some experts to discuss and learn more about the challenges we face, it was a win.
If there are too many unknowns to move forward, consider some research activities, then try again. Prototypes, research spikes, and reading are useful tools in this phase.
Invite non-experts too. This is a good opportunity to give people more experience in architecture design. The best way to learn how to design good architecture is to either try it yourself and fail a few times or watch other people do it.

Architecture Review
An architecture review is the presentation of your architecture to a broad audience. Here, we want as much participation as possible. You’re both soliciting additional feedback and educating the audience about your project. This should be done before development begins if possible.
Checklist

Invite the main engineering Slack channel
Remind #engineering right before the meeting starts
Create engineering documentation before the architecture review and share it with participants in advance
In the meeting, walk through the documentation, explain it in detail, and solicit questions and feedback.

Suggestions

Start by fully providing the context and explaining the problem that needs to be solved. Remember: your audience probably includes people who are new to Brightcove and people who have no context about your engineering area. Avoid saying “as you already know…” as it can alienate newcomers and reduce participation.
Make engineering documentation that will be updated going forward, not just a one-off document. At the end of the project, you will ideally have up-to-date documentation of what was actually built. This documentation can be used for reference, to share with stakeholders, to ramp-up new team members, and so on.
Be visual in your documentation. Diagrams and photos of whiteboards can really help readers take in your ideas efficiently.
It’s possible to feel like this is a defense rather than a feedback cycle. If you feel like you’re on the defensive, remember you don’t need to have all the answers now. It’s ok to say “Thank you for bringing that up, we’ll get back to you after we’ve had some time to think about it.”
There is no expectation that you do everything someone says you should do. Disagreements are ok; overlooking something is not. As people learn from each other, they will gravitate toward the best ideas.

Documentation
Here are some suggestions of what to include in your documentation. Not all of these will be relevant to every project.

System interactions
Interfaces
Domain modeling
Deployment plan
Technologies used
Where the hosts will live
Monitoring plan
COGS (cost of goods sold)
Billing
User experience
Maintainability
Accepted compromises (technical debt, etc)
Security
Potential patterns of abuse
Secure coding practices
Authentication / authorization
Auditing / logging

Participant Guidelines

Presenting new ideas to a big group can be pretty stressful. Please help the presenter have a good experience!
Avoid advocating for your ideas by raising the stakes, e.g., “If you don’t do this, the project will fail.” Focus on explaining the specific ramifications that you believe are being overlooked.
Avoid saying “Why don’t you just…” This suggests that the presenter is making an obvious oversight. This is usually a matter of perspective, so approaching from a perspective of curiosity and exploration can lead to a more constructive conversation.
If you have a feeling something won’t work but you’re not sure why, it might be better to give it some more thought before bringing it up. Or, if you want to raise a gut feeling, call it what it is, e.g., “I have a gut feeling there is a problem with using this tool that we’re not thinking of, so I want to revisit this again later.”

Architecture Update
For long-running projects, it is useful to get together regularly to talk about the progress of the project. This acts as a retro for the progress so far, a review of the updated architecture, and a chance to change plans according to what was learned.
Checklist

Invite everyone involved in the project
Invite the architecture reviews Slack channel
Update the engineering documentation you created in the review so it reflects the latest state of the software.
Solicit retrospective notes ahead of the meeting. Provide a place for people to write what is working, what isn’t working, and suggested actions.
In the meeting, walk through the updated engineering documentation, call attention to the changes, and solicit questions and feedback
In the meeting, have everyone read their retro notes and encourage discussion

Suggestions

For long-running projects, it may be wise to do this at least once a quarter. It can surface hidden issues or novel ideas, even if the project is going well.
If you feel like it’s time to have this meeting, you’re probably right. Don’t wait until the end of the quarter, end of the project, etc.

Architecture Retro
Have this meeting after the software is shipped. Essentially, this is the same as the architecture update, except it’s the last one and it should include a wider audience.
Checklist

Invite everyone involved in the project
Invite the main engineering Slack channel
Update the engineering documentation you created in the review so it reflects the latest state of the software.
Solicit retrospective notes ahead of the meeting. Provide a place for people to write what worked, what didn’t work and suggested changes.
In the meeting, walk through the updated engineering documentation, call attention to the changes, and solicit questions and feedback
In the meeting, have everyone read their retro notes and encourage discussion

Suggestions

Retros are one meeting where sticking to the agenda is not always best. If people want to discuss a particular challenge, make room for it. You can always schedule more time if needed.


LTN Global & Brightcove Partner Integration Helps FreightWaves Live Stream at Scale


In the massive global freight industry, FreightWaves is the leading provider of global supply chain data and media content to industry leaders and analysts. You could call them the Bloomberg of freight, given the number of subscribers to their Sonar data platform, the viewing audience for FreightWaves TV, and their recent success in virtual events.

Cody Mathis is a broadcast engineer at FreightWaves who implemented the tech stack powering FreightWaves TV, and he decided to leverage Brightcove and Brightcove partner LTN Global in his workflow. LTN’s Schedule product is how Cody’s team pulls together live streams and other content that they produce for the FreightWaves TV channel. Brightcove Live, Brightcove Video Cloud, and the Brightcove Player are how they stream it to their audiences worldwide, and they’ve seen dramatic viewership growth in the US, Canada, Mexico, the UK, and Germany.
Cody reports that his primary goal in vendor selection for FreightWaves TV was the ability to scale in the cloud and deliver a reliable, high-quality playback experience across the many screens, like freight brokerage houses, that keep FreightWaves TV on constantly. An added benefit was what the integration between LTN Global and Brightcove meant for his team’s efficiency. Previously, it took a week of training and two weeks of practice for a new team member to learn to operate the FreightWaves TV workflow. Cody explains, “The mixture of LTN and Brightcove lets us train a new person on the entire system in a day, and they can now completely build up the schedule and hit play, and it just works flawlessly.”

The key to the integration is that the user interface of LTN Schedule offers the option of sending a stream to Brightcove while maintaining all markers and metadata needed for Brightcove to facilitate ad-insertion and analytics.
Cody and his team extended their live streaming expertise into virtual events in 2020. FreightWaves LIVE was supposed to be an in-person event with 2,000 attendees, but the pandemic necessitated going virtual. FreightWaves LIVE @ Home, in the early spring, featured over 5 million minutes of content in 3 days and netted 92,000 unique viewers who streamed over 250,000 sessions. This success inspired the addition of more live-streamed virtual events to the schedule, including FreightWaves LIVE: Global Trade Tech in mid-September.
The ease of operation and high technical performance of LTN Global and Brightcove has enabled FreightWaves to go bigger with confidence. Said Cody of the workflow, “Our entire team is trained in it. So if anyone has to step in, they’re more than confident to know what to do if someone has to push a show longer or cut a show early or anything like that. Everyone on the team knows what to do in those situations because the integration is so simple. So it’s been really great.”


Towards Efficient Multi-Codec Streaming


During the last decade, the majority of video streams sent over the Internet were encoded using the ITU-T H.264 / MPEG-4 AVC video codec. Developed in the early 2000s, this codec has become broadly supported on a variety of devices and computing platforms with a remarkable 97.93% reach.

However, as far as technology is concerned, this codec is getting old. In recent years, two new codecs have been introduced: HEVC, from the ITU-T and MPEG standards groups, and AV1, from the Alliance for Open Media. Both claim at least a 50% gain in compression efficiency over H.264/AVC.
In theory, such gains should lead to a significant reduction in the costs of streaming. However, in practice, these new codecs can only reach particular subsets of the existing devices and web browsers. HEVC, for example, reportedly has a reach of only 18.73%, mostly Apple devices and devices with hardware HEVC support. AV1’s support among web browsers is higher, but notably, it is not supported by Apple devices or most existing set-top box platforms.

This situation raises the question: Given such fragmented support for new codecs across different devices, how do you design a streaming system that reaches all devices with the highest possible efficiency?
In this blog post, we will try to answer this question by introducing the concept of multi-codec streaming and explaining the key elements and technologies that we have engineered in the Brightcove Video Cloud platform to support it.
Adaptive Bitrate Streaming 101
Before we start talking about multi-codec streaming, let’s briefly review the main principles of operation of modern Adaptive Bit-Rate (ABR) streaming systems. We show a conceptual diagram of such a system in the figure below. For simplicity, we will focus on the VOD delivery case.

When a video asset is prepared for ABR streaming, it is typically transcoded into several renditions (or variant streams). Such renditions typically have different bitrates, resolutions, and other codec- and presentation-level parameters.
Once all renditions are generated, they are placed on the origin server. Along with the set of renditions, the origin server also receives a special manifest file, describing the properties of the encoded streams. Such manifests are typically presented in HLS or MPEG DASH formats.
The subsequent delivery of the encoded content to user devices is done over HTTP, using a Content Delivery Network (CDN) to ensure the reliability and scalability of the delivery system.
To play the video content, user devices use special software called a streaming client. In the simplest form, a streaming client can be JavaScript code running in a web browser. It may also be a custom application or a video player supplied by the operating system (OS). But regardless of the implementation, most streaming clients include logic for adaptive selection of streams/renditions during playback.
For example, if the client notices that the observed network bandwidth is too low to support real-time playback of the current stream, it may decide to switch to a lower-bitrate stream. This prevents buffering. Conversely, if there is sufficient bandwidth, the client may switch to a higher-bitrate (and therefore higher-quality) stream, leading to a better quality of experience. This logic is what makes streaming delivery adaptive. It is also the reason why videos are always transcoded into multiple (typically 5-10) streams.
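As a minimal sketch of this adaptation logic (with illustrative numbers, not the algorithm of any particular production player), consider the following Python fragment:

# Pick the highest-bitrate rendition that fits within a safety fraction of
# the measured throughput; fall back to the lowest rendition otherwise.
def select_rendition(renditions, measured_kbps, safety=0.8):
    usable = safety * measured_kbps
    candidates = [r for r in renditions if r[0] <= usable]
    return max(candidates) if candidates else min(renditions)

ladder = [(260, "216p"), (640, "360p"), (1200, "540p"), (2400, "720p"), (4200, "1080p")]
print(select_rendition(ladder, measured_kbps=2000))  # -> (1200, "540p")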
The system depicted in the above diagram has two additional components: the analytics system, which collects playback statistics from CDNs and streaming clients, and the ABR encoding ladder generator, which defines the number and properties of the renditions to create. In the Brightcove Video Cloud system, this block corresponds to our Context-Aware Encoding (CAE) module.
Encoding ladders and quality achievable by the streaming system
Let us now consider an example of an encoding ladder that may be used for streaming. This particular example was created by Brightcove CAE for action movie video content.

As the table shows, the encoding ladder defines five streams, enabling delivery of video at resolutions from 216p to 1080p using roughly 260 to 4200 Kbps of bandwidth. All streams are produced by the H.264/AVC codec.
The last column in this table lists perceived visual quality scores as estimated for playback of these renditions on a PC screen. These values are reported on the Mean Opinion Score (MOS) scale: a MOS of five means excellent quality, while a MOS of one means the quality is bad.
We next plot the (bitrate, quality) points corresponding to the renditions, as well as the best quality achievable by the streaming system as the network bandwidth varies. The latter becomes a step function, shown in blue.

In the above figure, we also include a plot of the so-called quality-rate model function [1-3], describing the best possible quality values that may be achieved by encoding the same content with the same encoder.
This function is shown by a dashed red curve.
With a proper ladder design, the rendition points become a subset of points on the quality-rate model, and the step function describing the quality achievable by streaming becomes an approximation of this model. What influences the quality of the streaming system is the number of renditions in the encoding ladder, as well as the placement of renditions along the bandwidth axis. The closer the resulting step function is to the quality-rate model, the better the quality that the streaming system can deliver.
What this all means is that encoding profiles/ladders for ABR streaming must be carefully designed. This is why most modern streaming systems employ special profile generators that perform this step dynamically, accounting for the properties of the content, networks, and other relevant context.
Additional details about mathematical methods that can be employed for the construction of quality-rate models and the generation of optimal encoding ladders can be found in references [1-5].
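As a hedged illustration of these concepts, the sketch below fits a simple logarithmic quality-rate model, Q(R) = a + b·ln(R), to a set of (bitrate, MOS) rendition points and evaluates the step function of quality achievable at a given bandwidth. The numbers are made up for illustration, and the models and optimization methods in [1-5] are considerably more elaborate.

import math

def fit_log_model(points):
    # Least-squares fit of Q(R) = a + b*ln(R) to (bitrate_kbps, mos) points.
    xs = [math.log(r) for r, _ in points]
    ys = [q for _, q in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # model coefficients (a, b)

def achievable_quality(points, bandwidth_kbps):
    # Step function: best MOS among renditions that fit within the bandwidth.
    feasible = [q for r, q in points if r <= bandwidth_kbps]
    return max(feasible) if feasible else None

# Illustrative (bitrate_kbps, MOS) points, loosely shaped like a CAE ladder.
ladder = [(260, 2.6), (640, 3.3), (1200, 3.9), (2400, 4.4), (4200, 4.8)]
a, b = fit_log_model(ladder)
print(round(a + b * math.log(2000), 2))  # model quality at 2 Mbps
print(achievable_quality(ladder, 2000))  # step-function quality: 3.9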
Multi-codec streaming: main principles
Now that we’ve explained key concepts, we can turn our attention to multi-codec streaming.
To make this more specific, let’s consider an example of an encoding ladder generated using two codecs: H.264/AVC and HEVC. Again, Brightcove CAE was used to produce it.

The plots of the rendition points, the quality-rate models, and the quality achievable by streaming clients decoding H.264 and HEVC streams are presented in the figure below.

As easily observed, the quality-rate model function for HEVC is consistently better than the quality-rate model for H.264/AVC. By the same token, HEVC renditions should also deliver better quality-rate tradeoffs than renditions encoded using the H.264/AVC encoder.
However, considering that there are typically only a few rendition points, and that they may be placed sparsely and in an interleaved pattern, there may be regions of bitrates where an H.264/AVC rendition delivers better quality than the nearest HEVC rendition of smaller or equal bitrate. Such regions are visible in the above figure wherever the step function for H.264/AVC clients rises above the one for HEVC clients.
What does this mean? It means that with a two-codec ladder, decoding only HEVC-encoded streams does not automatically result in the best possible quality! Even better quality can be achieved by clients that selectively and intelligently switch between H.264/AVC and HEVC streams. We illustrate the quality achievable by such “two-codec clients” in the graphic below.

In this example, the two-codec client can make nine adaptation steps instead of just five with the HEVC-only or H.264-only ladders. This enables better utilization of the available network bandwidth and better overall delivered quality.
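The sketch below makes this concrete with made-up rendition points: an HEVC-only client is limited to its own ladder’s step function, while a two-codec client adapts over the union of both ladders and can pick up the interleaved H.264/AVC points.

def best_quality(ladder, bandwidth_kbps):
    # Best MOS reachable at the given bandwidth within one set of renditions.
    feasible = [q for r, q in ladder if r <= bandwidth_kbps]
    return max(feasible, default=None)

# Illustrative interleaved ladders as (bitrate_kbps, MOS); not real data.
hevc = [(300, 3.0), (900, 3.6), (2200, 4.5)]
avc = [(400, 2.7), (1200, 3.8), (3000, 4.3)]

for bw in (600, 1500, 2500):
    print(bw, best_quality(hevc, bw), best_quality(avc + hevc, bw))
# At 1500 kbps the two-codec client reaches MOS 3.8 via the 1200 kbps AVC
# rendition, while an HEVC-only client is stuck at MOS 3.6.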
Multi-codec functionality support in existing streaming clients
As we just saw, the ability of a streaming client not only to decode but also to intelligently and seamlessly switch between H.264/AVC and HEVC streams is extremely important. It leads to better quality and allows fewer streams/renditions to be generated, reducing the costs of streaming.
However, not all existing streaming clients have this capability. The best-known examples of clients doing it well are the native players in recent Apple devices: iPhones, iPads, Mac computers, etc. They can decode and switch between H.264/AVC and HEVC streams seamlessly. Recent versions of the Chrome and Firefox browsers support the so-called changeType method, which technically allows JavaScript-based streaming clients to implement switching between codecs.
Streaming clients on many platforms with hardware decoders, such as smart TVs and set-top boxes, can only decode either H.264/AVC or HEVC streams and won’t switch to the other codec during a streaming session. And naturally, there are plenty of legacy devices that can only decode H.264/AVC-encoded streams.
This fragmented space of streaming clients and their capabilities must be accounted for when generating encoding ladders, defining HLS and DASH manifests, and designing the delivery system for multi-codec streaming. In the next section, we briefly review how we addressed these challenges in the Brightcove Video Cloud platform.
Multi-codec support in the Brightcove Video Cloud platform
Brightcove Video Cloud is an end-to-end online video platform that includes all the building blocks of the ABR streaming system that we’ve reviewed in this blog post.
For example, encoding ladder generation in this system is done using Brightcove CAE technology. To the user/operator of the system, this manifests as several pre-configured CAE ingest profiles enabling H.264-only, HEVC-only, and mixed-codec streaming deployments.

When the Multiplatform Extended HEVC (CAE) mixed-codec ingest profile is selected, the result is a mixed-codec ladder with both H.264 and HEVC streams present. This profile can produce from 3 to 12 output streams across both codecs, covering resolutions from 180p to 1080p and bitrates from 250 Kbps to 4200 Kbps. The CAE profile generator determines everything else automatically, based on the characteristics of the content as well as playback statistics observed for the account.
In Video Cloud, manifests and media segments are produced according to a variety of streaming standards and profiles (e.g., HLS v3, HLS v7, MPEG DASH, Smooth, etc.). These are all generated dynamically, based on the preferences and capabilities of the receiving devices. Furthermore, certain filtering rules (delivery rules) may also be applied. For example, if a playback request comes from a legacy device that only supports H.264/AVC and the HLS v3 streaming format, it will be offered only an HLS v3 manifest with TS-based segments, including only H.264/AVC-encoded streams.
On the other hand, for newer devices capable of decoding both H.264/AVC and HEVC streams, the delivery system may produce a manifest including both H.264/AVC- and HEVC-encoded streams.
The declaration of mixed-codec streams in manifests is done according to the HLS and DASH-IF deployment guidelines. We show conceptual examples of such declarations below.
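A hedged sketch of what these declarations look like (the codec strings, bitrates, and IDs here are illustrative, not from a real deployment):

In HLS, within the master playlist:

#EXT-X-STREAM-INF:BANDWIDTH=2400000,CODECS="avc1.640020,mp4a.40.2",RESOLUTION=1280x720
avc_720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1800000,CODECS="hvc1.1.6.L123.B0,mp4a.40.2",RESOLUTION=1280x720
hevc_720p.m3u8

In MPEG DASH, with one adaptation set per codec, cross-referenced by the switching descriptor:

<AdaptationSet id="1" mimeType="video/mp4" codecs="avc1.640020">
  <SupplementalProperty schemeIdUri="urn:mpeg:dash:adaptation-set-switching:2016" value="2"/>
  <Representation id="avc-720p" bandwidth="2400000" width="1280" height="720"/>
</AdaptationSet>
<AdaptationSet id="2" mimeType="video/mp4" codecs="hvc1.1.6.L123.B0">
  <SupplementalProperty schemeIdUri="urn:mpeg:dash:adaptation-set-switching:2016" value="1"/>
  <Representation id="hevc-720p" bandwidth="1800000" width="1280" height="720"/>
</AdaptationSet>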

As observed, in HLS, mixed-codec renditions can be included in their natural order in the master playlist. In MPEG DASH, however, they must be listed separately, in different adaptation sets, one per codec. To enable switching between mixed-codec renditions in DASH, a special SupplementalProperty descriptor is included in each adaptation set.
To assist clients in deciding when to switch between streams encoded with different codecs, special relative quality attributes can be used. In HLS, this is the SCORE attribute, with higher values indicating better quality. In MPEG DASH, it is the quality ranking attribute, where, conversely, lower values indicate better quality. However, both attributes are optional and supported by only a few existing client devices. To make sure that all devices/clients don’t get confused when switching between multi-codec streams, Brightcove Video Cloud offers a manifest filtering option that leaves only renditions with progressively increasing quality values visible in the final manifests.
The final delivery of multi-codec streams in Video Cloud is handled by a two-level CDN configuration, ensuring high efficiency (low origin traffic) as well as high scale and reliability of stream delivery. More details about the various configurations and optimization techniques employed by Brightcove Video Cloud can be found in our recent paper [3] and in the Brightcove product documentation.
Conclusions
With the combination of all the features and tools described above, enabling and deploying multi-codec streaming with Brightcove Video Cloud can be done in a matter of minutes.
If you are delivering a high volume of streams to Apple devices or other HEVC-capable mobile and set-top devices, enabling HEVC and multi-codec streaming can offer a considerable reduction in CDN traffic and costs without compromising reach to legacy devices.
The tools that we have engineered ensure that such deployments happen with minimal deployment cost while maintaining high quality and reliably reaching all devices.
References
[1] Y. Reznik, K. Lillevold, A. Jagannath, J. Greer, and J. Corley, “Optimal design of encoding profiles for ABR streaming,” Proc. Packet Video Workshop, Amsterdam, The Netherlands, June 12, 2018.
[2] Y. Reznik, X. Li, K. Lillevold, A. Jagannath, and J. Greer, “Optimal Multi-Codec Adaptive Bitrate Streaming,” Proc. IEEE Int. Conf. Multimedia and Expo (ICME), Shanghai, China, July 8-12, 2019.
[3] Y. Reznik, X. Li, K. Lillevold, R. Peck, T. Shutt, and P. Howard, “Optimizing Mass-Scale Multiscreen Video Delivery,” SMPTE Motion Imaging Journal, vol. 129, no. 3, pp. 26-38, 2020.
[4] Y. Reznik, “Average Performance of Adaptive Streaming,” Proc. Data Compression Conference (DCC’21), Snowbird, UT, March 2021.
[5] Y. Reznik, “Efficient Multi-Codec Streaming,” talk at HPA Tech Retreat 2021.


Creating Your First Delivery Rule


Introduction
In 2017, Brightcove launched Dynamic Delivery and began generating our manifests and video segments “just in time,” at the point a player requests them, allowing us to repackage content into multiple different formats and support the widest range of devices possible.
Since then, we’ve increasingly been hearing from customers that they want to take advantage of this flexibility themselves and control, at a fine-grained level, which CDN their content is delivered over, which quality of video renditions should be available, and even the order in which the different quality renditions are presented inside the video manifest. This led to the creation of our Delivery Rules feature, which is now generally available as part of our API.
In this post, we’ll look at how to use this API to solve one of the most common use-cases we see – limiting the maximum quality of video being served to certain devices.
Getting Set Up
To get started, we first need a video to test with, which we’ve ingested as normal via Video Cloud:

Looking under the covers at the HLS manifest being delivered to that player, we can see it contains a number of lines like the following, representing rendition manifests at different resolutions – in this case 480×270, 640×360, 960×540, and 1280×720 (below):
#EXT-X-STREAM-INF:PROGRAM-ID=0,BANDWIDTH=2205500,CODECS="mp4a.40.2,avc1.4d001f",RESOLUTION=1280x720,AUDIO="audio-2",CLOSED-CAPTIONS=NONE
https://manifest.prod.boltdns.net/manifest/v1/hls/v4/aes128/5270290572001/aa7059e5-586b-4caa-ae6f-5533f223a569/a557b391-f20e-4727-9185-459411a63029/10s/rendition.m3u8

If 960×540 is the maximum resolution we want to deliver to our hypothetical device, then we should see the 1280×720 rendition disappear once our rule has been applied.
Creating Our Rule
Delivery Rules are made up of Conditions (when a rule should be invoked) and Actions (how the rule should affect the content).
Since we know that we want the rule to limit the maximum video rendition quality, let’s go ahead and create the Action first with the following HTTP request:
POST /accounts/{accountID}/actions
Content-Type: application/json
Authorization: Bearer {access_token}

and the request body set to:
{
  "properties": {
    "max_video_resolution": "960x540"
  }
}

This will create the new Action and return us something that looks like:
{
  "id": "88b13752-3469-4e46-b4aa-49cd4f1685a6",
  "properties": {
    "max_video_resolution": "960x540"
  }
}

Manually Invoking Actions
Before we dive into conditionally applying our Action, we can manually invoke it and check it does what we want by adding its ID as a parameter to our Playback API request:
https://edge.api.brightcove.com/playback/v1/accounts/5270290572001/videos/6230434222001?config_id=88b13752-3469-4e46-b4aa-49cd4f1685a6

This will now return us the same HLS manifest as before, but with the 1280×720 rendition removed!
As well as being useful for testing, manually invoking rules in this way is a powerful technique for customers that leverage the Brightcove SDKs to build their own apps, allowing them to invoke different rules per-device, per-app or even per-user, just by passing in different Action IDs.
A common usage we see is for our customers to apply an Action that serves a basic tier of content to anonymous users, while anyone who signs up has a manual Action applied that serves them richer content (higher-quality video and audio, modern codecs such as HEVC, a premium CDN provider).
Conditionally Invoking Actions
For customers that don’t want or need the granularity of manually invoking Actions, we provide a set of pre-canned Conditions to control under what circumstances the Brightcove manifest generation services should apply rules. These apply to all requests and so are a great way of targeting certain groups of users, regardless of what type of device they’re viewing your videos on.
The full set of Conditions offered can be found in the API reference here.
For this tutorial, let’s imagine we know that users in the UK have been struggling with buffering at high resolutions because of country-wide network capacity issues, and so we want to strip out the higher-resolution rendition until these are resolved.
PUT /accounts/{accountID}/conditions
Content-Type: application/json
Authorization: Bearer {access_token}

This API call takes an array of Conditions, but for now we’re creating just one by including the following body with our request:
[
  {
    "name": "Cut off high-quality renditions for the UK",
    "if": {
      "request_country": [
        "GB"
      ]
    },
    "then": [
      "88b13752-3469-4e46-b4aa-49cd4f1685a6"
    ]
  }
]

Now, if we make our standard Playback API request from within the UK (notice there is no config_id parameter attached this time), we’ll see the high-quality rendition stripped out!
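For reference, that standard request can be made with a simple curl call. This is a hedged sketch assuming policy-key authentication; substitute your own policy key, and account and video IDs:

curl -H "Authorization: BCOV-Policy YOUR_POLICY_KEY" \
  "https://edge.api.brightcove.com/playback/v1/accounts/5270290572001/videos/6230434222001"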
What’s Next?
Hopefully this has given you a taste of the power and flexibility of Delivery Rules. The full lists of supported Conditions and Actions can be found here, and if there are any others that you’d like us to add in the future, please reach out to Customer Support or your account rep and let us know!
If you’re keen to read more about how our customers are using Delivery Rules, take a look at how Seven West Media Optimises Video Content Delivery With Brightcove’s Delivery Rules.


The Evolution of Low-Latency Video Streaming


In the past few years, the video streaming industry has seen immense interest in low-latency streaming protocols. Most of this interest centers on delivering video with a sub-five-second delay, making it comparable with the delays in live broadcast TV systems. Attaining such low delay is critical for streaming live sports, gaming, online learning, interactive video applications, and more.
Developing the Technology for Low-Latency Streaming
The delays in conventional live OTT streaming technologies such as HLS and DASH are much longer. They’re caused by relatively long segments (4-10 seconds) and a segment-based delivery model that requires complete delivery of each media segment before playback. Combined with the buffering strategies used by HLS or DASH streaming clients, this typically produces delays of 10-30 seconds, or even longer.
To combat these delays, recent evolutions of the HLS and DASH standards, known as Low-Latency HLS (LL-HLS) and Low-Latency DASH (LL-DASH), have introduced two key tools:

Chunked video encoding. This is an encoding strategy that produces video segments structured as sequences of much shorter sub-segments or chunks.
Chunked segment transfer. This is an HTTP transfer mode that enables transmission of shorter video chunks to streaming clients as soon as they are generated.

A streaming client can join a low-latency live stream at any Stream Access Point (SAP), which the live transcoder makes available on segment or chunk boundaries. Once joined, a player only needs to buffer the latest video chunk generated by the encoder before decoding and rendering it. Considering that each segment is typically split into several chunks (4-10), this reduces the delay significantly.
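As one hedged example of how such chunked output can be produced, ffmpeg's DASH muxer can cut 4-second segments into 1-second fragments and write them with chunked transfer in mind; the input name and settings below are illustrative, not our production configuration.

# Produce LL-DASH output: 4 s segments, 1 s chunks, chunked writing enabled.
# (-re paces reading of the file to simulate a live source.)
ffmpeg -re -i studio_feed.mp4 \
  -c:v libx264 -b:v 3M -g 60 -keyint_min 60 \
  -c:a aac -b:a 128k \
  -f dash -ldash 1 -streaming 1 \
  -seg_duration 4 -frag_duration 1 \
  -use_template 1 -use_timeline 0 \
  out/manifest.mpd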
Several open-source and proprietary implementations of low-latency servers and players are currently available on the market, including one by Brightcove. Many of them have demonstrated lower streaming delay when only a single-bitrate stream is used and when they stream over high-speed network connections. However, their performance under more complex and realistic deployment environments has not been well studied.
Testing the Performance of Low-Latency Streaming
In our paper published at ACM Mile-High-Video 2023 (link forthcoming), as well as in a presentation at Facebook@Scale, we reported results from testing the performance of low-latency systems.
First, we developed an evaluation testbed for both LL-HLS and LL-DASH systems. We then used this testbed to evaluate various low-latency players, including Dash.js, HLS.js, Shaka player, and THEO player, as well as several of the latest bitrate adaptation algorithms optimized for low-latency live players. The evaluation was based on a series of live streaming experiments, repeated with identical video content, encoding profiles, and network conditions (emulated using traces of real-world networks).
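To give a flavor of the network emulation involved (our testbed replayed bandwidth traces; the commands below are a hedged, simplified static version using Linux tc/netem, not necessarily the tool used in the study):

# Shape the test interface to one sample of a bandwidth trace.
tc qdisc add dev eth0 root netem rate 3mbit delay 50ms
# A trace replay repeatedly updates the shaping to the next sample:
tc qdisc change dev eth0 root netem rate 1500kbit delay 70ms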
Table 1 shows the live video encoding profiles used to generate the multi-bitrate, low-latency DASH/HLS stream.


| Rendition | Video resolution (pixels) | Video codec and profile | Bitrate (kbps) |
|-----------|---------------------------|-------------------------|----------------|
| low       | 768 x 432                 | H.264 Main              | 949            |
| mid       | 1024 x 576                | H.264 Main              | 1854           |
| high      | 1600 x 900                | H.264 Main              | 3624           |
| top       | 1920 x 1080               | H.264 Main              | 5166           |

Table 1. Live video encoding profiles

Figure 1 shows the workflows of our low-latency DASH and HLS streaming testbeds.

Figure 1. System setup used for evaluation of low-latency players.

Figure 2 shows the available network bandwidth of our emulated LTE mobile network. The download bandwidth of the low-latency players is controlled by our network emulation tool and the network traces.

Figure 2. Available bandwidth of emulated LTE mobile networks (Verizon and T-Mobile)

A variety of system performance metrics (average stream bitrate, amount of downloaded media data, streaming latency) as well as buffering and stream switching statistics were captured and reported in our experiments.
These results have been subsequently used to describe the observed differences in the performance of LL-HLS and LL-DASH players and systems.
A few plots from our study can be seen in the figures below.

Figure 3. Dynamics of bitrate switches reported by LL-HLS and LL-DASH players.

Figure 4. Comparison of variations of latency reported by LL-HLS and LL-DASH players.



| Player/Algorithm | Avg. bitrate [kbps] | Avg. height [pixels] | Avg. latency [secs] | Latency var. [secs] | Speed var. [%] | Number of switches | Buffer events | Buffer ratio [%] | MBs loaded | Objects loaded |
|---|---|---|---|---|---|---|---|---|---|---|
| DASH.js default | 2770 | 726 | 3.06 | 0.21 | 10.4 | 93 | 38 | 7.99 | 352.2 | 256 |
| DASH.js LolP | 3496 | 853 | 5.65 | 4.59 | 22.7 | 70 | 53 | 21.96 | 369.4 | 210 |
| DASH.js L2all | 3699 | 908 | 4.14 | 3.18 | 19.9 | 5 | 19 | 7.99 | 368 | 147 |
| Shaka player (DASH) | 3818 | 916 | 4.92 | 2.06 | 0 | 16 | 5 | 4.66 | 360.3 | 155 |
| THEO player (DASH) | 4594 | 993 | 6.16 | 0.01 | 0 | 27 | 0 | 0 | 418.7 | 152 |
| HLS.js default (2020) | 1763 | 562 | 10.08 | 10.91 | 8.1 | 26 | 2 | 9.8 | 130.7 | 589 |
| HLS.js LolP (2020) | 1756 | 560 | 5.97 | 0.2 | 6.1 | 24 | 0 | 0 | 148.1 | 688 |
| HLS.js L2all (2020) | 1752 | 560 | 6 | 0.23 | 5.9 | 34 | 0 | 0 | 133.1 | 686 |
| HLS.js default (2023) | 3971 | 895 | 8.93 | 1.13 | 0 | 8 | 0 | 0 | 360.8 | 613 |
| Shaka player (HLS) | 3955 | 908 | 7.18 | 2.23 | 0 | 14 | 7 | 3.8 | 230 | 475 |

Table 2. Performance statistics – T-Mobile LTE network

Evaluating the Performance of Low-Latency Streaming
While LL-HLS and LL-DASH worked well in unconstrained network environments, they struggled in low-bandwidth or highly variable networks (which are typical in mobile deployments). Observed effects included highly variable delays, an inability to prevent buffering, and frequent bitrate switches or an inability to use the available network bandwidth. Some players simply switched to non-low-latency streaming under such challenging network conditions.
As promising as LL-HLS or LL-DASH are in theory, there is still some room for these technologies to mature.
At Brightcove, we are continuing to work on best-in-class implementations of algorithms for low-latency streaming clients, as well as on encoder- and server-side optimizations. We intend to make our support for low-latency streaming highly scalable, reliable, and fully ready for prime time.
This blog was originally written by Yuriy Reznik in 2021 and has been updated for accuracy and comprehensiveness.
