Tuxera – The evolution of creative infrastructure
Why storage technology matters more than ever
Ned Pyle, Enterprise Storage Technical Officer, Tuxera
The media and entertainment industry is experiencing unprecedented growth in data demands. With an estimated 250-300% increase in media data over the last five years, and increasing adoption of 4K workflows, traditional approaches to storage and file sharing are struggling to keep pace with modern creative workflows. From post-production houses exchanging terabytes of data with global partners, to broadcasters managing time-critical content for live events, the ability to move and access massive files swiftly and securely has become critical to business success.
As someone who has spent years developing network file sharing protocols, I’ve witnessed firsthand how technical infrastructure can either empower or hinder creative teams. The reality is stark: every minute spent waiting for files to transfer or load is post-production paused and money lost.
The infrastructure challenge
Today’s media workflows demand more from their infrastructure than ever before. Virtual production requires real-time rendering at high resolution and immediate access to multiple versions of scene changes and assets. Post-production teams need to transfer large files between workstations for editing, color correction, and VFX work. Broadcasting operations require rapid ingest and distribution of content for live events. The need to transfer, centralize, and share these collections requires a fabric with high bandwidth, low latency, and considerable throughput.
Real-time rendering necessitates the lowest possible latency and rock-solid reliability to ensure frame rates are maintained. Traditional TCP (Transmission Control Protocol) network solutions simply cannot stand up – they’re too slow, too processor-heavy, too congested. The biggest problem with open-source SMB (Server Message Block) products on Linux is that they have not evolved to keep pace. These huge datasets require a modern solution that can compress data, offload processing, and fully utilize the biggest networks, all while supporting clients running Windows, Mac, or Linux.
SMB has been the primary remote file protocol of macOS for twelve years, part of the core Linux kernel for more than twenty years, and present in Windows since the 1990s. The protocol has evolved from humble workgroup beginnings to SMB 3, a powerful data fabric designed to take advantage of the latest networking and storage innovations, with the latest security and scalability options. When fully and properly implemented, it can run equally well on huge clusters or tiny containers.
Next-generation networking
Time is money. The speed of sharing large datasets is key to productivity so that the valuable time of experts is spent working, not waiting. When infrastructure bottlenecks force teams to wait for files to transfer or load, it doesn’t just waste time – it interrupts creative flow and impacts the entire production pipeline. A modern data fabric using SMB will provide objective, tangible productivity benefits and even improve staff morale – how frustrated are you when you’re just sitting there while your computer appears to do nothing?
SMB operating over RDMA (Remote Direct Memory Access) brings hundreds of gigabits per second of throughput at sub-millisecond latency to each node, ensuring uncompromised performance and consistency for production teams. When using SMB Direct – or even when still on traditional TCP and Ethernet – SMB Multichannel ensures your networks deliver the full throughput they support. You paid for the network; SMB 3 makes sure you fully utilize its potential. When working with the highest fidelity raw media formats, SMB compression ensures that you can maximize that bandwidth for all to share.
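The aggregation effect of SMB Multichannel can be sketched with some simple arithmetic. The figures below are illustrative assumptions, not benchmarks: the point is that a session spreads traffic across every usable NIC, so throughput approaches the sum of the links rather than the speed of any single one.

```python
# Illustrative sketch of SMB Multichannel's aggregation effect.
# NIC names and speeds below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Nic:
    name: str
    gbps: float   # nominal link speed in gigabits per second
    rdma: bool    # RDMA-capable links can additionally use SMB Direct

def aggregate_gbps(nics: list[Nic]) -> float:
    """Best-case aggregate session bandwidth with Multichannel enabled:
    the sum of all usable links, rather than any single link's speed."""
    return sum(n.gbps for n in nics)

# A dual-port editing workstation on ordinary Ethernet...
workstation = [Nic("eth0", 25, rdma=False), Nic("eth1", 25, rdma=False)]
# ...and a render node with two RDMA-capable 100 Gb ports.
render_node = [Nic("rdma0", 100, rdma=True), Nic("rdma1", 100, rdma=True)]

print(aggregate_gbps(workstation))  # 50.0 -- versus 25 on a single link
print(aggregate_gbps(render_node))  # 200.0 -- "hundreds of gigabits"
```

Real-world throughput depends on storage, CPU, and protocol overhead, but the principle holds: Multichannel lets the session use the capacity you already paid for.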
For high-fidelity raw media formats like DPX sequences, OpenEXR files, ProRes masters, and Digital Cinema Packages (DCPs), SMB compression can dramatically reduce transfer times. A 100GB uncompressed video file might cross the wire at half its original size, freeing up bandwidth for already-compressed formats like H.264 or HEVC that don’t benefit from additional compression. This optimization happens in flight, even improving the transfer of container formats like MXF.
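The time saved is easy to estimate with back-of-envelope arithmetic. The sketch below assumes a 10 Gbps link at 80% effective utilization and the roughly 2:1 in-flight ratio suggested by the "half its original size" figure above – both illustrative assumptions, not measurements.

```python
# Back-of-envelope transfer-time estimate for SMB compression.
# Assumed figures (illustrative only): 10 Gbps link, 80% effective
# utilization, ~2:1 in-flight compression on uncompressed video.

def transfer_seconds(size_gb: float, link_gbps: float,
                     utilization: float = 0.8,
                     compression_ratio: float = 1.0) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps link."""
    wire_gb = size_gb / compression_ratio  # data actually on the wire
    gigabits = wire_gb * 8                 # bytes -> bits
    return gigabits / (link_gbps * utilization)

uncompressed = transfer_seconds(100, 10)                         # raw payload as-is
compressed = transfer_seconds(100, 10, compression_ratio=2.0)    # ~2:1 in flight
print(f"without compression: {uncompressed:.0f} s")  # 100 s
print(f"with ~2:1 compression: {compressed:.0f} s")  # 50 s
```

Halving the bytes on the wire halves the transfer time, and every gigabit not spent on raw frames is available to the rest of the facility.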
The closed-loop nature of RDMA networking adds an extra layer of security through its inherent air-gapping, protecting valuable content while maintaining peak performance. This enhanced security is particularly crucial in today’s distributed production environments. When I owned the protocol at Microsoft, one of our key focuses was protecting data both during transfer and at rest, while providing robust management tools to handle the increasing complexity of large-scale deployments. RDMA’s architecture helps address these concerns by creating isolated, high-performance data paths that are inherently more secure than traditional networking approaches.
Looking to the future
I believe that RDMA, containers, security, and compression are the big four technologies shaping our industry’s future. RDMA will become the standard for media and entertainment productions, and traditional networks will become the exception.
Containers and microservices will be common, with containers providing storage and remote file sharing inline, all as part of managed fleets spun up quickly and efficiently for production, then discarded when no longer needed.
The days of lax security and trusting air-gapped networks will diminish, requiring more use of encryption, modern authentication, and other protective measures. The sheer size of datasets and raw production media formats will bring compression back to the forefront, both on the wire and on the storage pools, until the next generations of hardware rise to meet them.
Planning for tomorrow
What should media companies consider when planning their infrastructure? Listen to what your creative teams are saying about application performance. They probably don’t understand the storage or network, but they know their tools and where they’re slow, and from their complaints you can tell when the infrastructure is the bottleneck. Plan for a system with the biggest capacity to store and move data you can forecast, then increase that even more because it probably isn’t enough – especially when you are successful!
The future of content creation depends not just on creative tools, but on the infrastructure that enables them. As global collaboration becomes the norm and file sizes continue to grow, the ability to move and access data efficiently will become even more crucial. By understanding and investing in the right technologies today, media companies can ensure their technical foundation supports rather than constrains their creative ambitions.