Delivering Amazing Things in Parallel with a Pandemic

It isn’t often that we encounter something really remarkable. You’re reading this and saying, ‘Sure, smart guy – COVID-19 has changed the game for us all – what else have you got?’ Conceding that the present pandemic has skewed our collective perception of what is and isn’t remarkable, I’ve recently interacted with something new that is definitely noteworthy, and I want to tell you more about it.

The present pandemic has driven the consumption of streaming video to heights beyond any expectation. On one day, April 4th, Americans watched 27 billion minutes of streaming video. In one day. Early in the nationwide quarantine, a growing number of industry experts wondered whether there was enough bandwidth for everyone who wanted to watch video as they sat isolated with their friends and loved ones. One writer even touted the efficiency of a relatively new technology, HDR, as a quick savior, enabling visually compelling experiences while reducing the bandwidth required to deliver them.

We do live in interesting times, indeed. The rise in relevance of streaming video was already on its way, but the pandemic triggered the proverbial ‘turbo’ button. Despite the litany of challenges we face as a result of COVID-19, the shot in the arm for video, and those who produce it, has been a welcome byproduct of the pandemic. And fear not, we are still moving toward the really remarkable item I sought to call your attention to.

Setting the Stage

In order for your favorite show, college football game or NBA contest to be captured, processed and delivered to your smartphone, living room TV or screen of choice, it begins its journey by passing through an appliance – what most would call a computer of some form or fashion. For those of us in the industry, these ‘pizza box’ encoders have largely looked and performed the same for the better part of the past 15 years. Sure, we’ve seen variations that leverage GPUs to boost processing power, and some platform-specific half-width appliances – but nothing that really changed the game.

For many years, the various architects, product leads and engineers I’ve worked with have asked for more – and less. They want more powerful processing to meet the demands of today’s resolutions and to prepare for tomorrow’s: today that usually means HD to 4K, with a ladder of six to eight output renditions. At the same time, they want to save valuable power and cooling resources in their facilities, which means smaller, more nimble appliances. It wasn’t that long ago that we needed two ‘pizza boxes’ to do this same job – and only in HD. Today it can clearly be done with one unit. Now we get to the remarkable part.
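To make the "6-8 rendition ladder" concrete, here is a minimal sketch of what such an adaptive-bitrate (ABR) ladder might look like as data. The specific resolutions and bitrates below are illustrative assumptions, not Videon specifications – real ladders are tuned per content type and delivery target.

```python
# Illustrative ABR (adaptive bitrate) rendition ladder, HD up to 4K.
# Resolutions and bitrates are hypothetical examples, not vendor specs.
LADDER = [
    {"name": "2160p", "width": 3840, "height": 2160, "kbps": 16000},
    {"name": "1080p", "width": 1920, "height": 1080, "kbps": 6000},
    {"name": "720p",  "width": 1280, "height": 720,  "kbps": 3500},
    {"name": "540p",  "width": 960,  "height": 540,  "kbps": 2000},
    {"name": "360p",  "width": 640,  "height": 360,  "kbps": 1000},
    {"name": "234p",  "width": 416,  "height": 234,  "kbps": 400},
]

def total_encode_kbps(ladder):
    """Aggregate bitrate the encoder must sustain across all renditions."""
    return sum(r["kbps"] for r in ladder)

print(total_encode_kbps(LADDER))  # 28900
```

The aggregate bitrate is one simple proxy for why a single modern appliance replacing two older HD-only boxes is notable: every rendition in the ladder must be encoded simultaneously, in real time.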

No More Compromises

When building video services, one generally had to make decisions that involved compromise. Low latency? 4K? These questions and others would lead to sacrifice and ‘horse trading.’ But what if one could now have the power to process the aforementioned stream ladder, up to and including 4K, while consuming less power in a form factor roughly one-eighth the size of before?

And at a price point under $2K? That is remarkable – the Videon Versastreamer. 4K under $2K. Really worth a look – and a second one, at that. When getting the signal out of the broadcast operations center or television station with pristine quality is the highest priority, this device is a game changer. Need to reach up to three RTMP publishing destinations? Check. Require packet loss recovery using protocols like SRT? Check. These features are remarkable.

As someone who has been working in the proverbial OTT video coal mine for many years, I am really impressed with this innovation. I think you will be, too.


Authored by Matt Smith

Matt Smith is a recognized digital media industry evangelist and thought leader, having spoken at the National Association of Broadcasters (NAB) Show, IBC, and various other events. He’s served in a variety of roles in the industry during his career, with stops at Comcast, Brightcove, Anvato, Envivio and others.

re:Invent 2019 Recap: The Year of AI/ML

By Tristan Avelis, Product Manager

re:Invent may have happened over a month ago, but what better time to touch on Amazon Web Services’ flagship conference and our key takeaways than at the start of the new year… or halfway through the second month of the new year…

re:Invent was an amazing way to see just how far-reaching AWS is in terms of not only available services but also the huge variety of users that they have. Odds are, if you’re doing anything on the internet, AWS is somewhere either at the forefront or behind the scenes. 

I went into re:Invent intentionally seeking out AI/ML sessions. I wanted to understand how AI/ML can be useful to live video streaming, but also to various applications such as satellite imaging and utility surveys. I also wanted to learn a lot about how AWS views live video productions and the approaches that they take to ensuring a successful live event. What I learned has greatly improved my understanding of how to enable our customers to succeed when setting up a live event and how our product can help with that.

  • MediaConnect enables the high-quality distribution of mezzanine-type content. It is a great ingest point: it connects easily to other AWS services while remaining flexible enough to feed a variety of workflows. 
  • Resiliency in live video workflows
    • Resiliency = Redundancy + Failover. Having resiliency allows the viewer to have uninterrupted playback, even if you have failures in your workflow. 
    • Simple Resiliency = Duplication + Manual Failover. 
    • Better Resiliency = Cloud-Native Redundancy with autoscale and Self-Healing + Auto-Failover.
  • Performing audio transcription using machine learning is up and coming, if not already in use at a number of institutions (think of the medical field, where it reduces the charting and data-entry workload on doctors). In support of machine learning, Amazon SageMaker Studio was released, giving anyone a full set of pre-made, easy-to-use tools for developing machine learning applications.
  • Users should think about their live streaming workflow from the ground up and do it right from the start. The pillars of the AWS Well-Architected Framework are Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
  • Amazon Rekognition: AI/ML is used for image and video analysis, letting AWS customers apply their own custom label sets to a huge variety of applications. 
    • The main demonstration used custom labels to find specific moments in video content (e.g., interviews where a golden record was in the frame). Custom label sets that users train the AI/ML systems on greatly reduce the manual effort needed to find and process video content. 
    • Because AI/ML can process content far faster than humans, valuable data is available in near real time. 
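The "Resiliency = Redundancy + Failover" formula above can be sketched in a few lines of Python. This is a minimal illustration of the auto-failover half of the idea, not a cloud-native implementation; the source names and health-check callable are hypothetical.

```python
# Minimal auto-failover sketch: prefer the primary source, switch to a
# hot standby when its health check fails. Names are hypothetical.
def pick_active_source(sources, is_healthy):
    """Return the first healthy source in priority order, else None.

    sources    : list of source IDs, highest priority first
    is_healthy : callable(source_id) -> bool (e.g. a heartbeat check)
    """
    for src in sources:
        if is_healthy(src):
            return src
    return None  # total outage: every redundant path is down

# Usage: the primary is down, so the backup is selected automatically.
health = {"primary": False, "backup": True}
active = pick_active_source(["primary", "backup"], health.get)
print(active)  # backup
```

The "better resiliency" bullet adds what this sketch omits: in a cloud-native design, the failed path is also replaced automatically (self-healing) rather than just routed around.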

Biggest Takeaway: AI/ML is huge and AWS is driving it full-throttle. The number of AI/ML tools and services AWS provides is going to grow drastically over the next two to three years.

Tip for Future Attendees of re:Invent: Take the time to plan your day in advance. I ended up running from place to place trying to see everything rather than planning an efficient route between sessions. There is just so much to see. 


About the Author
Tristan Avelis is Videon’s Product Manager responsible for translating input from the live streaming market, individual users, and tech industry leaders into implemented product features that meet the needs of a wide variety of live streaming applications.