A Rolling Sound Gathers MOSs

David Byrd : Raven Call
David Byrd is the Founder and Chief Creative Officer for Raven Guru Marketing. Previously, he was the CMO and EVP of Sales for CloudRoute. Prior to CloudRoute, he was CMO at ANPI, CMO & EVP of Sales at Broadvox, VP of Channels and Alliances for Telcordia, and Director of eBusiness Development with i2 Technologies. He has also held executive positions with Planet Hollywood Online, Hewlett-Packard, Tandem Computers, Sprint and Ericsson.
Raven Guru Marketing | http://www.ravenguru.com/

Twice this week, I encountered questions about MOS (Mean Opinion Score). The first asked how ANPI collects MOS data to manage our nationwide IP network; the second asked whether this information is distributed widely throughout our customer base and channels. First, we use the EXFO Brix System to generate MOSs and, second, we make such data available only to large accounts that have an interest in it. Perhaps now would be a good time to explain what MOS is.

A Mean Opinion Score is produced by a subjective test of voice quality in which a variable number of human listeners (50 is optimal) rate five phrases or sentences heard over the same voice circuit. The ITU recommends the following English-language samples:

  • “You will have to be very quiet.”
  • “There was nothing to be seen.”
  • “They worshipped wooden idols.”
  • “I want a minute with the inspector.”
  • “Did he need any money?”
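
To make the arithmetic concrete, here is a minimal sketch (in Python, with invented panel data) of how individual opinion scores, given on the 1-to-5 scale described in the next paragraph, roll up into a single MOS. The listener ratings shown are purely illustrative.

```python
import statistics

# Hypothetical panel: one row per listener (the post notes 50 is optimal),
# one column per ITU sample phrase, each rating on the 1-to-5 scale.
panel_scores = [
    [4, 5, 4, 4, 5],  # listener 1
    [5, 4, 4, 5, 4],  # listener 2
    [4, 4, 5, 4, 4],  # listener 3
    # ... remaining listeners ...
]

# A mean per phrase shows whether any single sample drags the score down.
per_phrase_mos = [statistics.mean(phrase) for phrase in zip(*panel_scores)]

# The Mean Opinion Score is simply the mean of all individual opinion scores.
overall_mos = statistics.mean(score for row in panel_scores for score in row)

print([round(m, 2) for m in per_phrase_mos], round(overall_mos, 2))
```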

Each person is asked to rate the voice quality on a scale from 1 to 5, with 1 equal to “Impossible to Communicate” and 5 equal to “Excellent.” The typical rating for toll-quality voice and for VoIP using the G.711 codec is 4.4. A lower, but quite acceptable, rating of 4.2 is usually the result for compressed voice using the G.729a codec. To put this in perspective, a common cell phone call is rated at 3.8, and such a rating for a voice call over the ANPI network would actually trigger a network event requiring a response by a member of our Network Operations Center. Interestingly, by moving from subjective human ratings to objective automated scoring, MOS information can be obtained regularly and trusted to reflect actual network conditions. However, MOS is not a real-time measure, and most network engineers prefer call-in-progress measurements of jitter, latency and packet loss to manage network Quality of Service (QoS).
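
For readers curious how an automated score is produced, the sketch below estimates a MOS from those same call-in-progress measurements, loosely following the R-factor-to-MOS conversion of the ITU-T G.107 E-model. The simplified impairment terms and codec constants are illustrative assumptions for this sketch, not the method used by the EXFO Brix System or any other particular product.

```python
def estimate_mos(one_way_delay_ms: float, packet_loss_pct: float,
                 jitter_ms: float = 0.0,
                 codec_ie: float = 0.0, codec_bpl: float = 25.1) -> float:
    """Rough objective MOS estimate from call-in-progress measurements.

    Simplified, E-model-style (ITU-T G.107) calculation. The defaults
    (codec_ie=0, codec_bpl=25.1) are commonly quoted figures for G.711 with
    packet-loss concealment; every constant here is an illustrative assumption.
    """
    # Jitter is normally absorbed by the de-jitter buffer, which adds delay;
    # the 2x factor is an arbitrary buffer-sizing assumption for this sketch.
    effective_delay = one_way_delay_ms + 2 * jitter_ms

    # Delay impairment Id: small below ~177 ms one-way, rising quickly above.
    delay_impairment = 0.024 * effective_delay
    if effective_delay > 177.3:
        delay_impairment += 0.11 * (effective_delay - 177.3)

    # Effective equipment impairment Ie-eff: codec baseline plus loss penalty.
    loss_impairment = codec_ie + (95 - codec_ie) * packet_loss_pct / (
        packet_loss_pct + codec_bpl)

    # Transmission rating factor R, starting from the default value of 93.2.
    r = 93.2 - delay_impairment - loss_impairment

    # Standard R-to-MOS conversion.
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6


# A clean G.711 call versus one with high delay and packet loss.
print(round(estimate_mos(20, 0.0), 2))   # ~4.4, the toll-quality figure above
print(round(estimate_mos(150, 5.0), 2))  # ~3.8, near the cell-call figure above
```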

Several studies of the use of MOS data to measure voice quality conclude that automated systems remove the subjectivity associated with polling humans. However, automated systems tend to report a slightly lower value for voice quality than human subjective measurements do. The difference is not enough to influence how networks are managed, but presenting the data to lay personnel who do not understand this difference may lead customers to perceive QoS as lower than the actual quality of experience (QoE). Consequently, while the use of automated MOS software and systems to ensure superior network performance should be encouraged, broadly distributing the results without educating the recipients is questionable. With the continued growth of Hosted Unified Communications in business-critical environments, carriers need to be vigilant in delivering high QoS and superior QoE by incorporating both automated and human MOS data.

 


