Over the last decade, most innovation in streaming has happened at the application layer. But one thing that might not have evolved at the same pace is your streaming infrastructure.
And whilst improving your software layer, codecs, streaming protocols and content delivery network (CDN) strategies is vital, it isn't enough on its own. Without a well-optimized infrastructure foundation, challenges like performance degradation and rising infrastructure spend will persist.
Latency is one of the biggest challenges in streaming because of its direct connection to viewer abandonment. Research shows that viewers begin to abandon content that doesn't start within two seconds, and that each additional second of delay increases abandonment by 5.8%.
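The arithmetic above can be sketched as a simple linear model. This is an illustration only, built from the two figures cited in the research (a 2-second startup threshold and roughly 5.8% more abandonment per incremental second); real abandonment curves are not perfectly linear.

```python
def estimated_abandonment(startup_delay_s: float) -> float:
    """Rough linear model: viewers begin abandoning after a ~2 s startup
    threshold, with ~5.8 percentage points more abandonment per additional
    second of delay. Figures come from the research cited above; the
    linearity is a simplifying assumption."""
    ABANDONMENT_THRESHOLD_S = 2.0
    RATE_PER_EXTRA_SECOND = 5.8  # percentage points per incremental second

    excess_delay = max(0.0, startup_delay_s - ABANDONMENT_THRESHOLD_S)
    return excess_delay * RATE_PER_EXTRA_SECOND

# A few sample startup delays and the abandonment the model predicts.
for delay in (1.0, 2.0, 4.0, 10.0):
    print(f"{delay:4.1f} s startup -> ~{estimated_abandonment(delay):.1f}% abandonment")
```

The takeaway is the shape of the curve: below the threshold nothing is lost, but every second of delay beyond it costs a measurable slice of the audience, which is why startup latency is treated as an infrastructure problem rather than a cosmetic one.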
CDNs and player optimizations help, but these problems originate deeper in the stack. If your underlying infrastructure can’t deliver predictable throughput and stable performance, the upper layers can only compensate so much.
The most common hosting mistake streaming platforms make is treating all workloads the same. When workloads are placed on infrastructure solutions that aren’t best optimized for their specific needs, performance issues surface and costs inflate.
The most common example is placing all workloads in hyperscale cloud environments in the belief that this will always be simple and efficient.
But these environments were engineered for general-purpose enterprise workloads, not for throughput-hungry, low-latency streaming. In these setups, your resource-intensive streaming workloads will eventually run into some common challenges:
The virtualization tax adds overhead to your encoding and packaging workloads
Noisy neighbours cause resource contention and inconsistent performance
Multi-tenant networking gets in the way of predictable delivery
Generic routing and caching aren’t tailored to your specific media pipelines
Cost inefficiencies lead to paying for unneeded elasticity
Virtualized environments can't guarantee the deterministic performance that real-time video demands, and live streaming workloads quickly expose every hidden inefficiency.
As workloads grow, virtualization overhead, resource contention, and one-size-fits-all networking start to get in the way of performance predictability. At the same time, inefficient resource usage and opaque billing practices make it hard to keep costs under control, and these trade-offs become more noticeable as your streaming operation scales.
If these challenges sound familiar, it's time to reassess your infrastructure strategy. For many streaming platforms, the solution lies in combining bare metal cloud adoption with hybrid right-sizing strategies, placing each workload on the compute type that best meets its needs.
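A right-sizing strategy like the one described above ultimately comes down to a placement rule: profile each workload, then match it to a compute type. The sketch below is purely illustrative; the workload names, the two-attribute profile, and the placement thresholds are hypothetical stand-ins, not a recommendation from this article.

```python
# Hypothetical workload profiles: (latency_sensitive, sustained_throughput).
# Both the names and the classifications are illustrative assumptions.
WORKLOAD_PROFILES = {
    "live_transcoding": (True, True),
    "origin_packaging": (True, True),
    "vod_batch_encode": (False, True),
    "analytics_etl": (False, False),
}

def place_workload(name: str) -> str:
    """Toy right-sizing rule: latency-sensitive, sustained workloads go to
    single-tenant bare metal; steady batch work goes to reserved capacity;
    bursty work keeps the elasticity it actually uses."""
    latency_sensitive, sustained = WORKLOAD_PROFILES[name]
    if latency_sensitive and sustained:
        return "bare metal"          # predictable, single-tenant performance
    if sustained:
        return "reserved cloud VMs"  # steady load, elasticity adds little
    return "on-demand cloud VMs"     # bursty or low-intensity work

for workload in WORKLOAD_PROFILES:
    print(f"{workload:18s} -> {place_workload(workload)}")
```

The point is not the specific rule but the discipline: deciding placement per workload profile, instead of defaulting everything to one environment, is what keeps both the performance and the cost problems described earlier from compounding.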
Check out part 2 of our "Rebuilding the streaming stack" blog series to learn more about how bare metal cloud can help boost streaming performance and optimize spend.

Frances is proficient in taking complex information and turning it into engaging, digestible content that readers can enjoy. Whether it's a detailed report or a point-of-view piece, she loves using language to inform, entertain and provide value to readers.