Video Streaming vs. Video Conferencing

Despite uncertain economic times, there are signs of life for the VoIP and video-over-IP markets, as well as interesting applications popping up this week. As we noted on the GIPS blog, the videoconferencing market in China is expected to grow by over 16% this year, and the broadband infrastructure plans in the American Recovery and Reinvestment Act of 2009 should mean more capacity for advanced IP communications.

In addition, something that really caught my eye was the quality of the video stream for CBS' live online coverage of the NCAA Men's Basketball Tournament. I recommend checking it out; I think the video is pretty impressive. They are using the Microsoft Silverlight plugin. John Hermansen brings up the differences between streaming and real-time video, and why Microsoft/CBS is able to achieve such high quality.

The main difference between streaming video and real-time video conferencing becomes apparent when you start watching a streamed video segment (live or recorded): it doesn't start immediately. The reason, of course, is that the video is being buffered to facilitate good quality. This buffering delay, typically tens of seconds long, allows many of the major challenges of video over IP to be handled. Naturally, a buffer leaves time for retransmission of lost packets, something that can rarely be done in a video conferencing scenario. The long buffer also ensures that network jitter (variation in transport time for IP packets) will not cause any quality issues. These two effects of buffering are well known by most people in the industry. However, there are some other, less known benefits of high buffer latency that can be very helpful for delivering high-quality media.
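The jitter-absorbing role of the playout buffer can be sketched in a few lines. This is a minimal illustrative model, not code from any real media stack; the class and method names are hypothetical. Packets are held until a target delay has elapsed, so variation in arrival time (and out-of-order delivery) is hidden from the player:

```python
import heapq

class JitterBuffer:
    """Minimal playout-buffer sketch (hypothetical, for illustration).
    Packets are held for at least `target_delay_ms` after arrival,
    absorbing variation in network transit time."""

    def __init__(self, target_delay_ms):
        self.target_delay_ms = target_delay_ms
        self.heap = []  # (sequence_number, arrival_ms, payload)

    def on_packet(self, seq, arrival_ms, payload):
        # Packets may arrive out of order; the heap reorders by sequence.
        heapq.heappush(self.heap, (seq, arrival_ms, payload))

    def pop_ready(self, now_ms):
        # Release, in sequence order, packets whose buffering time has elapsed.
        out = []
        while self.heap and now_ms - self.heap[0][1] >= self.target_delay_ms:
            seq, _, payload = heapq.heappop(self.heap)
            out.append((seq, payload))
        return out
```

The only real difference between the two scenarios in this model is the target: a streaming player can afford tens of seconds (enough time to also request retransmissions), while a conferencing endpoint must keep it down to tens of milliseconds.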

One of the major issues in video transmission is managing the bitrate of the media to utilize the available bandwidth. If too high a bitrate is used, the picture will freeze and delay will keep building up. If, on the other hand, a lower bitrate than the available bandwidth is used, the best possible quality will not be delivered. The long buffers in streaming make it easier to adapt to the available bandwidth, since there is more room for error and adaptation doesn't need to be as quick as for real-time video conferencing. An additional problem with the two-way nature of a video conference is that both sides will typically be limited by their uplink speed, which is typically an order of magnitude lower than the downlink speed. In streaming scenarios, video flows only in the downlink direction.
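The adaptation itself is often some variant of "back off hard on congestion, probe upward gently otherwise." Here is a hypothetical AIMD-style rate controller, assumed for illustration only (the thresholds and step sizes are made up, and this is not any particular product's algorithm):

```python
def adapt_bitrate(current_kbps, loss_fraction, max_kbps):
    """Hypothetical AIMD-style sender adaptation sketch.
    Back off multiplicatively when packet loss suggests congestion;
    otherwise probe upward additively toward the allowed maximum."""
    if loss_fraction > 0.02:          # illustrative loss threshold
        return max(current_kbps * 0.8, 100)   # multiplicative decrease, 100 kbps floor
    return min(current_kbps + 50, max_kbps)   # additive increase
```

A streaming sender with a deep buffer can run this loop on a leisurely timescale and ride out mistakes; a conferencing endpoint has to react within a round trip or two, because any backlog it creates is immediately visible as added delay.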

Another important factor in video quality is keeping the audio and video synchronized in time. This, too, is a much easier problem to solve when significant buffer delay is available for adjusting the timing of the audio or video stream.
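At its simplest, the timing adjustment amounts to delaying whichever stream is ready first. The helper below is a hypothetical sketch of that idea (real players align presentation timestamps continuously, not just once):

```python
def playout_delays(audio_ready_ms, video_ready_ms):
    """Hypothetical lip-sync sketch: delay the stream that is ready
    earlier so both begin playout at the same instant.
    Returns (audio_delay_ms, video_delay_ms)."""
    start = max(audio_ready_ms, video_ready_ms)
    return start - audio_ready_ms, start - video_ready_ms
```

With tens of seconds of buffer, holding one stream back an extra few hundred milliseconds costs nothing; in a real-time call, every millisecond of added playout delay is felt directly in the conversation.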

Lastly, I want to mention one of the least well known benefits of long delay: its effect on the bitrate of the video (and audio) compression. Since audio and video are data sources that are highly correlated between adjacent points in time, more efficient compression can be achieved if a longer portion of the signal is considered at each coding instance. For real-time communication, coding typically happens every 20 ms to 60 ms, while in streaming it is possible to consider up to several seconds of signal at each coding instance. The corresponding gain in compression efficiency is very significant and results in better quality for the same available bandwidth.
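The principle is easy to demonstrate with a general-purpose compressor standing in for a media codec (zlib is not a media codec, and the numbers are only illustrative, but the mechanism is the same): encoding a correlated signal in many small independent blocks forgoes cross-block history and pays per-block overhead, while one long block exploits the correlation fully.

```python
import zlib

# A slowly varying, highly correlated "signal" (illustrative stand-in for media samples).
signal = bytes((i // 50) % 256 for i in range(12000))

def coded_size(data, chunk):
    """Total compressed size when each chunk is encoded independently,
    as a low-latency coder must (no history shared across chunks)."""
    return sum(len(zlib.compress(data[i:i + chunk]))
               for i in range(0, len(data), chunk))

short_frames = coded_size(signal, 60)           # many small, independent frames
long_frames = coded_size(signal, len(signal))   # one long block, as streaming allows
```

Running this, `short_frames` comes out substantially larger than `long_frames`: the same bits of signal, coded with more lookahead, simply cost less.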
