In December, YouTube paid nearly two billion dollars for NFL streaming rights, the largest sum ever paid to a major professional league for live streaming rights. The deal exemplifies the dominance of streaming and over-the-top (OTT) platforms.
As large media companies and OTT platforms look to differentiate themselves beyond the video-on-demand (VoD) content that has been the hallmark of digital success, and as events such as Coachella that were never part of traditional linear offerings find their way onto streaming platforms, live streaming has emerged as the next frontier.
Platforms such as Hulu, Paramount+, Peacock, and YouTube TV are ensuring that everything broadcast on network or cable television can also be watched live through their services. It's a market that is expected to keep growing.
However, live streaming carries risks similar to those of the early days of VoD. Experience is what counts, because consumers are fickle, and streaming services have largely capitalized on the benefits of digital to move viewers away from traditional TV. Last August, for the first time, streaming viewership surpassed that of cable TV.
Unlike VoD, live broadcasting is not tested in a static environment. A failed live stream is immediately noticeable. And unlike the largest streaming companies, most organizations cannot scale their testing to cover the volume of live streams produced today.
So how do OTT providers capitalize on the opportunity of live streaming while maintaining the quality experience that viewers expect? Doing it correctly, and on a global scale, requires testers who represent real viewers, a wide range of devices, and access to data that gives providers useful input for improving the experience.
One of the main mistakes organizations make in live testing is assuming that the staging environment will behave the same as actual live production: if it works in staging, it will work in production. The intrinsic challenge with that assumption is that live production involves so many variables, including networks, devices, and regions, that teams struggle to reproduce them internally and test them appropriately.
Take, for example, one of the most-watched events in the world: the Super Bowl. It could be Kansas City taking on Philadelphia in Glendale, Arizona, but the audience is global. This is far more complicated than simply broadcasting the game in the United States, because bandwidth varies from country to country, making network conditions very different. Any latency means packets arrive late or out of order, and as those delays accumulate, the viewer sees buffering and other disruptions.
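To see why those accumulating delays matter, here is a minimal sketch of how latency drains a live player's buffer until the stream stalls. The segment length, buffer target, and transfer times are invented numbers for illustration, not measurements from any real platform:

```python
import random

SEGMENT_SECONDS = 4          # duration of each live video segment (hypothetical)
BUFFER_TARGET = 12.0         # seconds of video the player tries to hold (hypothetical)

def simulate(mean_latency, jitter, segments=100, seed=1):
    """Return the number of rebuffer (stall) events for a given network."""
    random.seed(seed)
    buffer_level = BUFFER_TARGET
    stalls = 0
    for _ in range(segments):
        # Time to fetch the next segment: ideal transfer plus latency noise.
        fetch_time = SEGMENT_SECONDS * 0.5 + max(0.0, random.gauss(mean_latency, jitter))
        # While fetching, playback drains the buffer in real time.
        buffer_level -= fetch_time
        if buffer_level <= 0:
            stalls += 1              # the viewer sees a spinner
            buffer_level = 0.0
        buffer_level = min(buffer_level + SEGMENT_SECONDS, BUFFER_TARGET)
    return stalls

for latency, jitter in [(0.2, 0.1), (1.0, 0.5), (2.5, 1.0)]:
    print(f"mean latency {latency}s, jitter {jitter}s ->",
          simulate(latency, jitter), "stalls per 100 segments")
```

With low latency the buffer refills faster than it drains and the stream never stalls; past a threshold, every segment arrives a little too late and stalls pile up, which is exactly the accumulation effect viewers experience.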
Advertising is another hurdle to consider during live broadcasting.
Server-side and client-side ad insertion can behave differently, and once again there are regional issues that vary not just from country to country but at an even more granular level. For something as big as the Super Bowl, where advertisers pay millions for a slot, they want to be sure their ads are targeted correctly not only by country but by city. Without testers reviewing this in real time, advertisers have no guarantee their ads reached the right audience, and a provider risks alienating them from coming back.
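One way a testing team might spot-check regional targeting is sketched below: fetch the same live manifest through region-specific exit points and compare which ads each region is served. The proxy addresses, manifest URL, and the #EXT-X-AD-ID marker are hypothetical stand-ins, since every platform exposes its ad decisioning differently:

```python
import urllib.request

MANIFEST_URL = "https://example.com/live/event/master.m3u8"   # hypothetical
REGION_PROXIES = {                                            # hypothetical
    "us-east": "http://proxy-us-east.example.com:8080",
    "de":      "http://proxy-de.example.com:8080",
    "jp":      "http://proxy-jp.example.com:8080",
}

def ad_ids_for_region(proxy):
    """Return ad identifiers found in the manifest as seen from one region."""
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    opener = urllib.request.build_opener(handler)
    manifest = opener.open(MANIFEST_URL, timeout=10).read().decode("utf-8")
    # Collect our hypothetical ad-marker tags from the playlist.
    return [line.split(":", 1)[1]
            for line in manifest.splitlines()
            if line.startswith("#EXT-X-AD-ID")]

for region, proxy in REGION_PROXIES.items():
    try:
        print(region, "->", ad_ids_for_region(proxy))
    except OSError as err:
        print(region, "-> request failed:", err)
```

Automated checks like this catch gross mistargeting, but they complement rather than replace real testers watching the ad breaks in each market.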
Devices also pose a problem from country to country. In the United States, Apple TV and Chromecast are common. In Asia, content is more likely to be watched on an Amazon Fire TV Stick or a Xiaomi device, hardware a testing team outside of Asia probably does not have access to. Even if it does, the team still has to replicate the local network conditions, which may look nothing like the 4G, 5G, or high-speed broadband connections it would test on in the United States.
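A team that does have the right hardware could approximate those local conditions with standard Linux traffic shaping. The sketch below assumes a Linux host with root access that the test device's traffic is routed through; the interface name and profile numbers are illustrative, not measured regional data:

```python
import subprocess

IFACE = "eth0"  # the interface the test device's traffic passes through

PROFILES = {
    # delay, jitter, loss, and rate are rough placeholders, not measured data
    "urban-5g":     ["delay", "30ms", "10ms", "loss", "0.1%", "rate", "50mbit"],
    "congested-4g": ["delay", "120ms", "40ms", "loss", "1%", "rate", "5mbit"],
    "rural-dsl":    ["delay", "60ms", "20ms", "loss", "0.5%", "rate", "2mbit"],
}

def apply_profile(name):
    """Replace the current qdisc with the chosen netem profile."""
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
                    *PROFILES[name]], check=True)

def clear_profile():
    """Remove the shaping so the interface returns to normal."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    apply_profile("congested-4g")
    # ... exercise the live stream on the target device here ...
    clear_profile()
```

Shaping like this narrows the gap between the lab and the field, but it is still an approximation: real regional networks misbehave in ways canned profiles cannot fully capture, which is why in-country testers remain valuable.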
There are many aspects to consider when testing live broadcasting, and it's more than most organizations can handle internally.