For large enterprises, small cells make a lot of sense.
Upwards of 80 percent of all mobile usage now occurs indoors, according to Alcatel-Lucent, and enterprise small cells deliver a flexible and economical way to provide reliable in-building mobile connectivity.
A recent field trial at a large financial institution in Mumbai showed the potential of enterprise small cells. Small cells bathed a 45,000-square-foot, all-glass office space in cellular connectivity that replaced an existing DAS and delivered a call drop rate of only 0.87 percent, a 42 percent increase in average throughput, and an 82 percent boost in peak throughput, according to a recent TechZine posting, Field insights: Deploying enterprise small cells, that detailed the deployment.
Impressively, this was done with only nine small cells.
There were five key takeaways from the field trial that large enterprises should note.
First, don’t forget about macro cell connectivity. It is easy to focus on femto-to-femto handovers and overlook macro cells, but ignoring macro cell connectivity can greatly reduce the effectiveness of an enterprise small cells deployment.
Second, the field trial found that IP/backhaul expertise helped the small cells deployment meet all key performance indicators despite the fact that the core network the financial center was connecting with was more than 1,440 km away in Delhi.
Third, the trial found that proper advance planning made a huge difference.
“In the Mumbai enterprise, an early solution design called for using 12 cells across the 45,000-square-foot office space. But the initial design was then optimized upfront, based on network expertise and Bell Labs tools, which eliminated 3 small cells,” noted the Alcatel-Lucent blog post. That’s significant.
Fourth, scalability needs to be kept in mind when it comes to enterprise small cells. Enterprises often need to expand capacity, and not all small cells configurations can scale to meet extra demand later on. But proper small cells architecture can enable scalability as needed.
Finally, the field trial found that reliability should be a point of focus when designing enterprise small cells configurations.
“The most reliable enterprise small cell solutions avoid single points of failure,” noted the Alcatel-Lucent blog. “Each of the nine cells used in the Mumbai financial institution operates independently. That makes sure that any failure is isolated and does not affect the rest of the network.”
Enterprise small cells deployment makes a lot of sense. But the devil is in the details.
Roughly 90 percent of all EU jobs will require some ICT skills in the near future, yet 39 percent of EU workers have little or no ICT skills as of 2014, according to the European Commission. In the U.S., the digital skills gap between what’s needed of employees and what’s available in the market comes at an estimated cost of $1 trillion per year in lost productivity, according to estimates from Entrepreneur.com. ICT-based employment is growing 7 times faster than overall employment in the EU, too.
The situation is even worse in developing countries, where ICT training is often lacking—especially for girls. While 77 percent of the population in developed countries is online, only 31 percent of people in developing countries have access, according to ITU figures for 2013. And globally, women are 16 percent less likely than men to have Internet access.
Looking to help with this problem, Alcatel-Lucent and World Education developed the ConnectEd program, which helps disadvantaged youth achieve better learning outcomes, become better prepared for the world of work, and engage meaningfully in their communities. Between 2011 and 2015, the program provided training to 25,000 young people in Australia, Brazil, China, India, and Indonesia. Roughly 58 percent of those helped were girls, the group with the greatest need.
The ConnectEd program had a huge impact on the lives of the young people it helped. More than 90 percent of program participants passed ConnectEd digital skills training, and more than 95 percent of the in-school youth remained in school. In Indonesia, 21 ConnectEd students even broke the stereotypes against street children and entered university as a direct result of the program.
“In all countries, what comes out most strongly in terms of ConnectEd’s longer-term impact are the effects of having improved confidence,” noted Estelle Day, director of the ConnectEd program, in a recent blog post.
“It sounds such a small thing, but for excluded youth, it seems to be a key to unlocking their potential,” she added. “Disadvantaged youth, more than anything, need someone who believes in them, respects them, who identifies their strengths and helps build on them. And that is where, I believe, ConnectEd and the inputs of Alcatel-Lucent volunteers have had so much power.”
Most community giveback programs make a difference. But when it comes to helping disadvantaged youth build ICT skills, such programs can make a huge difference in the lives of those they help. ConnectEd is one such program.
Sometimes fiber to the subscriber is the best fit to support broadband services for residential and small and medium businesses. However, existing copper continues to have an amazing ability to be enhanced to meet broadband requirements. Indeed, copper-based technologies such as VDSL2 vectoring, Vplus, and G.fast can support bandwidth rates of 100 Mbps, 300 Mbps, or even 1 Gbps.
To decide which areas are ideal candidates for fiber-to-the-home (FTTH) or business, and which can be more than adequately served with copper-based technologies, Bell Labs Consulting suggests that service providers weigh several factors.
“To do this, service providers need to conduct a thorough access study, including a detailed market analysis of the service area,” Mohamed El-Sayed, consulting manager of the network strategy and technology evolution practice of Bell Labs, writes in an aptly titled recent TechZine article, Study shows ultra-broadband potential of copper. “With this information, the service provider can determine present and near-future bandwidth demand.”
The average bandwidth required for a fixed network in a residential area can vary significantly based on all of the above. Here are a few of the many related data points mentioned in the blog. A study by Alcatel-Lucent suggests that the current upper bound broadband access rate is about 50 Mbps and will be 100 Mbps by 2020. A Bell Labs study for a major operator in Western Europe indicates 40 Mbps is sufficient for triple-play residential services there. And a study by U.K. government regulator Ofcom reports that the average fixed residential broadband subscriber gets 22.9 Mbps, and that broadband with a minimum download speed of 30 Mbps is available to three-fourths of subscribers but has seen only 21 percent penetration.
“For residential and SMB subscribers, high-speed copper technologies can deliver bandwidth in excess of current and anticipated demand,” says El-Sayed.
The bottom line is that extending the life of copper provides two major benefits. First, it is less costly than putting in fiber, particularly in residential or rural areas. Second, it enables service providers to offer ultra-broadband services quickly. In a hotly competitive world with a seemingly insatiable appetite for high-speed services now, this second point is as important as, if not more important than, the first.
Small cells are a boon for mobile network operators, as they easily and cheaply expand wireless network connectivity. However, they also can strain an operator’s evolved packet core (EPC).
“The EPC may be called upon to deliver a significant increase in scale, capacity, and performance beyond that which was required initially to support the macro-cellular network,” noted David Nowoswiat, Sr. Product and Solutions Marketing Manager, Alcatel-Lucent, in a recent TechZine posting, Is your EPC ready for the small cells onslaught? He suggests that operators look at three areas when examining if their EPC is up for the challenge.
First, is the network architecture ready for numerous small cells? Two of the options involve the addition of a small cell gateway to aggregate control and/or user traffic from a group of small cells back to the EPC, while a third option brings direct connectivity from each small cell to the EPC.
Adding a small cell gateway reduces the scaling and capacity requirements of the EPC but increases the network and operations complexity, and connecting the EPC directly to each small cell significantly increases its scalability and performance requirements yet keeps the network flat. Each operator will need to assess what makes sense in their particular case.
Second, does the EPC support the scaling and performance demands of the additional small cell load?
“If it’s directly connected to the small cell network, the biggest impact is on the control plane and the mobility management entity (MME) -- with all of the additional signaling that’s required,” noted Nowoswiat. But the EPC also should support an integrated and operationally simple model.
Third, is the mobile operator able to offload data to take some of the load off the EPC? Local breakout options can be implemented in small cell networks to offload data traffic that brings little value to the mobile operator, thus saving the EPC from added load. In that case, though, the EPC must support the requirements necessary to redirect traffic to the appropriate gateway and packet data network.
Nowoswiat questions whether most EPCs are up to the challenge. Is a virtual EPC a better option and a way to handle the extra load from small cells? While the answer is “it depends,” the whitepaper Evolved Packet Core for Small Cell Networks, which compares architecture options, is a great place to start learning more about EPC and small cell network choices.
Currently, most route reflectors run either on a router that is dedicated to route reflection, or on routers that also perform other IP routing and service functions. Both scenarios have downsides.
Dedicated BGP route reflectors are a waste because route reflection functions require minimal data plane resources. Routers that juggle route reflection with other duties, on the other hand, may not have sufficient resources to support scalable route reflection.
Network virtualization offers a solution. A virtual route reflector, or “vRR” for short, can remove reliance on dedicated hardware and be adjusted up or down as needed through allocation of more or less resources to vRR virtual machines.
However, as Anthony Peres, Marketing Director, IP Routing portfolio, Alcatel-Lucent, noted in a recent Alcatel-Lucent TechZine posting, Virtual route reflector delivers high performance, not all vRR solutions are created equal.
“Virtualizing an RR function is more than just compiling a software image to run on a virtualized x86 server,” noted Peres. “To meet the same level of stability and robustness that is offered today, virtualized network function implementations require a proven and stable software base optimized to operate within an x86 virtualized environment.”
A good vRR will take advantage of the multi-core support and significantly larger memory capacity of x86 servers. This can deliver a significant boost in performance and scalability for vRR.
“An implementation that supports parallel Symmetric Multi Processing helps unleash the power and performance of multi-core processing,” noted the blog. “This multi-threaded software approach offers concurrent scheduling and executes different processes on different processor cores. It significantly reduces route learning and route reflection times (route convergence times).”
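For intuition only, here is a minimal Python sketch of the pattern the quote describes: per-prefix best-path selection fanned out across processor cores. The route fields and cost metric are invented for illustration, and real BGP best-path selection involves many more tie-breakers than this toy decision process.

```python
from multiprocessing import Pool

def best_path(item):
    """Toy best-path selection for one prefix: pick the route with the
    lowest cost (standing in for the full BGP decision process)."""
    prefix, routes = item
    return prefix, min(routes, key=lambda r: r["cost"])

def reflect(rib):
    """Compute best paths for all prefixes in parallel across CPU cores,
    mirroring the multi-threaded approach the quote describes."""
    with Pool() as pool:
        return dict(pool.map(best_path, rib.items()))

if __name__ == "__main__":
    rib = {
        "10.0.0.0/24": [{"next_hop": "192.0.2.1", "cost": 20},
                        {"next_hop": "192.0.2.2", "cost": 10}],
        "10.0.1.0/24": [{"next_hop": "192.0.2.2", "cost": 5}],
    }
    print(reflect(rib))  # best route per prefix, computed in parallel
```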
The usefulness of vRR is not in question. But like many things, the devil is in the details.
The customer experience has always mattered, but its importance has grown in recent years. This has been driven by increased global competition, including the almost instant availability of alternatives, and the rising expectations of fickle and informed consumers. Yet, cable operators have a long way to travel if they want to deliver the customer experience (CX) that consumers demand.
The Temkin Group’s Q3 2014 survey of 10,000 US consumers’ opinions about goods and services registered the lowest average Net Promoter Score (NPS) for pay TV providers, a telling statistic. Internet service providers did almost as poorly, coming in only one position higher.
“As technology innovations drive shifts in consumer behavior and open new service opportunities, operators must start eliminating pain points,” stressed Alcatel-Lucent’s Nicholas Cadwgan in a recent TechZine article, Cable MSOs transform the customer experience. “This includes any obstacles that will impede their ability to launch and provide adequate care and quality assurance for those services.”
Cadwgan lays out four customer experience management (CEM) areas that cable operators should focus on.
“This means operators must have comprehensive control of and visibility into every device and every service delivered to those devices from their networks,” he noted.
2. Basic customer service has to be improved, including help desks, interactive voice response (IVR) systems, and self-help portals.
“To provide CSRs all the information they need to reduce resolution time, the new CEM platform must have powerful analytics capabilities in conjunction with data-gathering that reaches across all devices and systems,” suggested Cadwgan.
Analytics systems need to be able to track and identify issues based on information relevant to whatever access link a particular user is on, and the operator’s CX system must be able to aggregate and present all relevant information from a user-centric viewpoint.
The customer experience matters, and cable operators better start taking it seriously if CX statistics are to be believed.
For more of Cadwgan’s thoughts about the opportunities transformation can afford cable operators, listen to the following podcast on the subject.
From original TechZine article
Metro network transport platforms must be compact, scalable, and agile to conquer the specific challenges of this key portion of the transport network. Growing and shifting traffic in the metro has triggered these challenges.
Today’s cloud-optimized metro network transport platforms “must” be:
Growth in metro networks
Following a long cycle of core network capacity build out, service providers are now challenged by the growth and shift in metro network traffic dynamics.
A recent Bell Labs study reported that the rise of social media and over-the-top video — along with the rapid adoption of mobile broadband — has led to the proliferation of mega data centers. This drives an increase in metro traffic. And it results in more traffic moving within the metro between data centers rather than going out to the backbone.
The study also found that metro traffic will grow almost 2 times faster than backbone traffic by 2017. So not only is traffic in the metro growing dramatically, but that traffic is diverse, dynamic, and flows much differently than it does in the backbone.
So, what’s the takeaway from this study? Today’s metro networks increasingly require metro-optimized transport solutions versus adapted core platforms. That is, metro transport is driven by scale, flexibility, and efficiency versus sheer capacity and reach.
The new metro transport network
It is clear that a metro-optimized transport solution must be compact, scalable, and agile. But, what specific capabilities are required?
A metro-optimized transport solution can help maximize revenue and ROI by accelerating service availability/time-to-market and improving network operational efficiency. The key benefits are:
Does your metro network have what it takes?
A high-capacity, packet-optical transport solution with metro-optimized flexibility, size, and power can help maximize revenue generation and ROI in the cloud era. Look for a multiservice solution that delivers graceful pay-as-you-grow scaling with no-compromise distributed switching and agility in a metro-optimized form factor.
Our recent expansion to the Alcatel-Lucent 1830 Photonic Service Switch can help you meet the challenges of growing and shifting metro traffic demands in the cloud era.
Related Material
Listen to the podcast to learn more.
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
From original TechZine article
Can the virtualized evolved packet core (vEPC) be deployed today in large-scale LTE networks? Mobile network operators (MNOs) are increasingly convinced that the vEPC has become viable both financially and technically. And I think so, too, based upon the advances made over the past year that I’ll discuss in this blog.
Advancements in vEPC scaling and performance
Early in 2014, the vEPC proofs of concept and field trials of virtualized mobility management and gateway products were limited in both scale and performance. But as the year progressed, advancements in design and architecture used network functions virtualization (NFV) tools and capabilities to greatly improve their capacity and performance.
These improvements, together with other software enhancements, such as the Data Plane Development Kit (DPDK), have the vSGW/vPGW approaching the capacity and performance of dedicated hardware platforms.
Converged NMS/VNF manager: The key to seamless vEPC network operations
A lot of progress has been made with enhancements to the ETSI Management and Orchestration (MANO) architecture. However, rather than having separate element management system (EMS) and VNF manager (VNFM) functions, there’s been a move to converge these functions since both are integral to managing the VNFs. (The EMS described by MANO includes both network and element management (NMS/EMS) functions).
By unifying the VNF manager and NMS functions, an MNO can seamlessly manage and orchestrate the vEPC. This makes it easy for an MNO to perform VNF lifecycle management functions from the same NMS that is used on a day-to-day basis for network operations.
When EMS and VNFM are converged:
The traditional NMS Fault, Configuration, Accounting, Performance and Security (FCAPS) management function is now applicable to both the EPC VNFs and the physical network functions (PNF). This enables a common and consistent approach.
This also provides the topology and logical connectivity of the individual VNFs/PNFs and more advanced performance and SLA reporting. A single manager simplifies overall coordination and adaptation for configuration and event reporting between the virtualized infrastructure manager (VIM) and the NMS.
Troubleshooting is simplified because traditional NMS faults/events are correlated with VNF related events/faults. The VNFM provides lifecycle management and automates the self-healing of VNFs. It uses recipes to describe the vEPC VNF, its VNF components (underlying VM instances) and their interdependencies. Each VNF component has its own recipe, which includes a description of how to monitor, self-heal, and scale it.
With coordinated fault management and automated self-healing, the MNO’s operations team will have the visibility and intelligence to understand whether alarms are caused by normal maintenance activities or are indeed an emerging issue that they need to react to quickly. In addition, new advanced NMS approaches to network assurance visualization will speed problem assessment for both VNF and PNFs. These developments will also provide the VNF and network event data to support reporting and analysis.
When the VNFM and the NMS are combined into a single management functional instance, the management and orchestration of the vEPC VNF and the integration of the vEPC into the existing OSS/BSS infrastructure are greatly simplified. This is because the VNFM/NMS has complete knowledge and visibility of VNFs within the physical and virtual EPC network.
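As a rough illustration of the recipe concept mentioned above, the following Python structure sketches what a recipe for one VNF component might capture: how to monitor it, how to heal it, and when to scale it. Every field name and threshold here is invented for illustration; real recipe formats are vendor-specific.

```python
# Hypothetical recipe for a single VNF component (all names and values
# are assumptions, not an actual vendor format).
vmme_control_recipe = {
    "component": "vMME-control",
    "vm_flavor": {"vcpus": 4, "ram_gb": 16, "disk_gb": 40},
    "depends_on": ["vMME-database"],        # interdependency between components
    "monitor": {"metric": "cpu_util", "interval_s": 30},
    "self_heal": {"on_failure": "restart_vm", "max_retries": 3},
    "scale": {
        "metric": "attached_sessions",
        "scale_out_above": 800_000,         # add a VM instance past this load
        "scale_in_below": 300_000,          # remove one below this load
        "step": 1,
    },
}
```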
Is the vEPC ready for commercial deployment?
Based on the progress made in both the scalability and performance of the vEPC VNFs and the advances made in management and orchestration of the vEPC, 2015 will be the year for vEPC deployments to commence at some Tier 1 mobile operators. The momentum and confidence of mobile operators in NFV will make it a reality.
Alcatel-Lucent at Mobile World Congress
Alcatel-Lucent will have a large presence at Mobile World Congress in Barcelona. I will take part in a panel discussion on “Unifying Network IT and Telco IT” on Thursday, March 5th from 11.30 – 13.00.
We will also be demonstrating our vEPC at our booth. There you will be able to see the dynamic scaling of our Virtualized Mobile Gateway and the operational elegance of our NMS/VNFM system. I look forward to seeing you there and discussing how our vEPC solution can meet your NFV evolution plans.
Related Material
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
From original TechZine article
Rural communities and small cities need fast broadband access to prosper in an increasingly globalized and connected world. Municipal governments recognize the socioeconomic benefits that ultra-broadband connections can bring. Many also understand the technical and financial challenges involved in bringing these connections to small communities. Still, most municipalities lack a clear strategy and implementation path for realizing their ultra-broadband vision.
Cities like Opelika, Alabama and Chattanooga, Tennessee have proven that the transformative benefits of ultra-broadband are within reach for smaller population centers. Both cities have successfully deployed fiber networks that deliver gigabit speeds and services to homes and businesses. Their citizens now enjoy ultra-broadband experiences that had previously been unknown outside the world’s elite cities.
So how can your small city or rural community emulate the success of Opelika and Chattanooga? There’s no universal ultra-broadband deployment strategy. But there are fundamental steps you can follow to build a fast network that lets your citizens and businesses thrive.
Building a sound business case
A well-developed business plan is the foundation for ultra-broadband success. One key part of the development process is to choose a viable business model. Two business models are possible for municipalities seeking to build ultra-broadband community networks:
Your resources, regulatory environment and strategic goals will determine your best way forward.
To support your chosen business model, you need to establish a clear financial blueprint and identify reliable funding sources. Your best sources may include municipal bonds, community loans, private investments, and government grants.
You can build momentum and improve your chance for success by encouraging more stakeholders to embrace and invest in your ultra-broadband plan. To this end, it’s important, after assessing the plans of existing telecom service providers, to engage with schools, hospitals, and businesses from an early stage. It’s also important to consult a telecom lawyer who can help you create a corporate entity that can effectively handle funding and operations.
Turning ideas into action
A sound business case is one crucial component of a much broader deployment process. Whatever your vision and starting point, you can secure ultra-broadband success by adopting a process that incorporates 5 steps:
Get started today
Communities around the world have transformed their economic and social future with ultra-broadband networks. Yours can be the next to do so. How can you get started? Talk to peers and engage with prospective partners and investors. Look at different technologies – fiber, wireless, or a combination of both – that can help you define and execute on a new broadband vision. Think about what skills your community needs to reap the benefits of ultra-broadband. There’s no need to wait to join the world’s elite connected communities. You can start building your ultra-broadband future today.
Related Material
Chattanooga case study
Opelika case study
Alcatel-Lucent governments web page
Municipality Rural Ultra-Broadband brochure
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
By: Kevin Landry, Product Marketing Manager, Alcatel-Lucent
From original TechZine article
Assurance visualization can prepare network operations tools to meet the demands of increasingly complex networks. And the limitations of today’s tools are indeed a cause for concern.
As networks evolve to next-generation IP/optical technologies, cloud networking, software defined networking (SDN), and network functions virtualization (NFV), network operations tools need to evolve, too.
The Network Operations Tools Evolution
Innovation is happening and better technologies are emerging. Among them, network assurance visualization is useful for enhancing network monitoring, troubleshooting, and analytics.
NFV and SDN require network management systems (NMS) to meet the new performance monitoring and assurance visualization challenges created by these rapidly changing virtualized network environments. And big data analytics and new web software frameworks have unlocked the potential for new assurance visualization approaches that will radically change network operations tools.
The end result will be a dramatically enhanced, more visually insightful approach for assuring networks and services, one that enables operators to utilize a wealth of real-time and historical data in a meaningful and effective way.
This blog is the first in a series that discusses network assurance visualization in the context of gaining efficiency in addressing various network operations challenges.
Traditional Fault Management Can Impede Efficiency
Even the most seasoned network operators find it challenging to diagnose the problems in a network with high fault volumes. To manually interpret alarms within an alarm list and isolate problems efficiently, you need to identify the relatively smaller number of root causes quickly and focus on investigating the highest priority problems first.
Some of the better fault management and network operations tools in the industry simplify and automate this process through alarm de-duplication, suppression, and correlation. This de-clutters the alarm list by lowering the number of alarms that network and service operators have to filter and sort through.
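As a simplified illustration of the de-duplication step, the Python sketch below collapses repeated alarms from the same source into a single entry with a count. The alarm fields are assumed for illustration; production NMS implementations layer suppression and cross-layer correlation on top of this.

```python
from collections import defaultdict

def deduplicate(alarms):
    """Collapse alarms sharing a (source, type) key into one entry that
    carries an occurrence count, shrinking the list operators must scan."""
    grouped = defaultdict(list)
    for alarm in alarms:
        grouped[(alarm["source"], alarm["type"])].append(alarm)
    return [dict(events[0], count=len(events)) for events in grouped.values()]

alarms = [
    {"source": "port-1/1/3", "type": "LOS", "severity": "critical"},
    {"source": "port-1/1/3", "type": "LOS", "severity": "critical"},
    {"source": "svc-42", "type": "SDP_DOWN", "severity": "major"},
]
print(deduplicate(alarms))  # two de-duplicated entries instead of three alarms
```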
“Service aware” network operations tools go further by performing alarm correlation between services and underlying infrastructure, out-of-the-box. There’s no need to develop custom configurations or scripts to map out the many relationships needed to link the volumes of individual services to all the underlying network infrastructure layers and network resources. Service aware network operations tools are also able to correlate alarms network-wide across end-to-end services composed of multiple service segments – even if they use different service types or span different network technology domains.
Despite some of the latest network operations software technology advancements, operators can still find it difficult to know where to start when there are many active faults – especially when using the more traditional fault management approach that involves navigating alarm lists. It is also not uncommon for operators to sometimes lose their bearings as they open many windows when troubleshooting faults across multiple different views and forms.
It is often difficult and time-consuming to uncover and prioritize all the root problems from symptomatic alarms. This can mean that operators only address the root cause of problems on a best-effort basis. Alarms unrelated to the particular fault being investigated may tend to move operators in the wrong direction when troubleshooting a specific problem – and this increases the mean time to resolution (MTTR).
Service Aware Fault Management with Assurance Visualization
Many traditional network operations challenges can be alleviated by adding assurance visualization to fault management workflows within a service aware NMS.
For the most effective assurance visualization, it is fundamental to use service aware network operations tools that possess a high performing, scalable framework for alarm and event correlation. This will ensure a highly optimized environment for efficient traversal of relationships across the managed network and services model in-memory. And that’s exactly the prerequisite for pinpointing root causes with speed and at scale, while also enabling their isolation from downstream network infrastructure impacts.
With this advanced service aware fault management, issues are automatically isolated down to the root cause to enable the delivery of assurance visualization. Assurance visualization enables network operations to detect emerging problems faster and accelerate the troubleshooting process to reduce MTTR. It achieves this by giving network operators intuitive, holistic views so they can:
RELATED MATERIALS
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
From original Alcatel-Lucent TechZine posting
A Wi-Fi first strategy can help multi-system operators (MSOs) remain competitive in the evolving marketplace. Wi-Fi enabled devices default to using the cable operator’s Wi-Fi network for voice, and cellular equipped devices can switch to cellular when out of Wi-Fi range.
Although nuances in the business drivers for adopting such a strategy vary by region globally, this model turns the traditional cellular voice paradigm on its head.
Just like other communications or media industries, MSOs face a dynamic and extremely competitive market. As a result, in EMEA, they have evolved their end-user offerings to embrace market-leading fixed high speed internet access, Wi-Fi connectivity, and bundled mobile cellular services using mobile virtual network operator (MVNO) partnerships.
As the pace of change continues to accelerate, subscribers have made a widespread move to Wi-Fi enabled smartphones and tablets. A European Commission study stated that 71% of all EU wireless data traffic in 2012 was delivered to smartphones and tablets using Wi-Fi. This is expected to rise to 78% by 2016.
European MSOs have already invested in Wi-Fi and offer data connectivity services in and out of the home. This not only is a customer retention strategy, but also lets MSOs build out further value added services (VAS) and can reduce data costs of their MVNO agreements. So if we now contemplate the delivery of voice to these Wi-Fi enabled devices, how do we get started?
Existing Mobility Assets
MSOs in EMEA already have different types of Wi-Fi hotspot locations:
These Wi-Fi hotspot networks have been mainly used to enhance customer experience by extending broadband access outside the home, and to help provide TV Everywhere services.
Some MSOs have also invested in 4G spectrum and tentatively contemplated this to extend fixed services outside of their hybrid fiber-coaxial (HFC) network footprint. If MSOs decide to take a more traditional approach to 4G and deploy mobile coverage using small cells, their own networks can provide backhaul for this traffic.
In addition, most MSOs in EMEA have – or are building – a full MVNO (F-MVNO) network that enables them to deliver cellular-based mobile services to their customers. The costs of maintaining a mobile data and voice partnership with a mobile network operator (MNO) are high. In response, some MSOs use their own Wi-Fi investments to steer (also known as offload) data connections from the MNO cellular network to improve the MVNO business case as well as improve customer experience.
A new opportunity
Both Android OS and Apple iOS recently added native dialer capabilities to their phones’ operating systems. This development paves the way for MSOs to not only offer new voice over Wi-Fi services to tablet and smartphone users, but also steer their own MVNO voice smartphone traffic to use Wi-Fi. This directly impacts MSOs’ bundled mobility offers and increases competitiveness, while also managing costs.
Most EMEA MSOs now have assets in place to build a sustainable mobility strategy. Some can combine Wi-Fi and 4G small cell networks with F-MVNO agreements to provide both entertainment and communication services to their subscribers at work, at home, and on the move throughout the day.
Being able to control voice communications across multiple wireless assets allows MSOs to adopt a “Wi-Fi first” approach. Subscriber voice calls automatically use MSO Wi-Fi networks. Where the device also has cellular capabilities, calls connect to cellular only when Wi-Fi is unavailable. This concept is also important for converged MNO/MSO operators, who can use all their mobility assets to create a heterogeneous network (HetNet).
Necessary ingredients for a Wi-Fi first approach
1. Quality of Experience
MSOs are already familiar with voice. They deliver fixed services over their HFC networks. Voice, unlike most data services, is a real-time application that requires quality of service to avoid jitter and delay. For MSO Wi-Fi networks to be competitive, the subscribers’ quality of experience using MSO Wi-Fi based voice services must be on par with that of traditional mobile carriers.
Similarly, the end-user experience with the Wi-Fi service mustn’t be any more cumbersome than subscribers are accustomed to. People just want to be able to use their phone without hassles. They don’t want to have to worry about which access technology they are using or perform manual changes as they move in and out of different coverage zones. This means MSO platforms and systems have to be completely automated:
Figure 2 shows a possible high-level Wi-Fi first architecture, including:
Many MSOs are already thinking about deploying IMS capabilities as part of their overall voice renewal plans. Including voice over Wi-Fi and other value-added services such as video calling is a natural fit. Figure 2 also demonstrates that beyond Wi-Fi first schemes, IMS can eventually replace the MVNO operation (2G/3G) as well as the fixed access network.
2. Mobile device manager (MDM)
An MDM system can be used to provision both iOS and Android devices, allowing MSOs to offer cellular, Wi-Fi, and hybrid service plans.
The concept can use embedded MDM clients on user devices that allow operator settings to be installed, including Wi-Fi settings, usernames and passwords, and SIP settings.
In addition, the MDM would enable the MSO’s service to assume control of (or replace) the subscriber devices’ native dialers. The dialer ultimately must be capable of both Wi-Fi and circuit-switched calling, along with handovers between Wi-Fi, LTE and 3G domains to create a seamless user experience.
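To make the idea concrete, the following Python structure sketches the kinds of settings such an MDM push might carry for a Wi-Fi first device. Every field name and value is hypothetical; actual payload formats (for example, Apple configuration profiles) look quite different.

```python
# Hypothetical MDM-provisioned profile for a Wi-Fi first device
# (all names and values invented for illustration).
wifi_first_profile = {
    "wifi": {"ssid": "MSO-Hotspot", "eap_method": "EAP-SIM"},
    "sip": {
        "registrar": "ims.example-mso.net",   # assumed IMS registrar address
        "username": "user@example-mso.net",
        "transport": "TLS",
    },
    "dialer": {
        # Preferred access order for calls: Wi-Fi first, falling back
        # to cellular only when Wi-Fi is unavailable.
        "preferred_access": ["wifi", "lte", "3g"],
    },
}
```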
3. IMS
IMS technology can be used as the call control solution for voice calls. In the Wi-Fi first approach described here, IMS will handle all calls originating from the user device while in the packet-switched domain (4G, Wi-Fi). IMS delivers SMS messages to the device while in the Wi-Fi/LTE/IMS network using an IP short message gateway. It can also allow other IP communication services, such as video calling, to be added easily. IMS is particularly helpful when services are delivered by other access technologies, including 2G/3G, 4G, and fixed access.
Next Steps for Wi-Fi first
Creating a sustainable MSO mobility strategy is complex, and building a Wi-Fi first scheme as part of this strategy will require planning for considerations such as:
Once these questions have been answered, MSOs are well placed to grasp the current market opportunity of offering voice services via Wi-Fi and leveraging a Wi-Fi first strategy to help remain competitive in the evolving marketplace.
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
OpenStack isn’t an as-is solution for telco network functions virtualization (NFV) infrastructures. OpenStack is an open-source cloud management technology that provides many of the capabilities needed in any NFV environment. And this has prompted interest among many telco service providers.
But to realize the full benefits of NFV, service providers need NFV platforms that provide additional capabilities to support distributed clouds, enhanced network control, lifecycle management, and high performance data planes.
The OpenStack/NFV backstory
In 2010, RackSpace® and NASA jointly launched OpenStack®, an open-source cloud computing platform. Since then, the OpenStack community has gained tremendous momentum, with over 200 member companies.
Originally, OpenStack was not designed with carrier requirements in mind. So in 2012, a group of major telecommunication service providers founded an initiative to apply virtualization and cloud principles to the telecommunications domain.
The term network functions virtualization was coined for this initiative. Service providers called for vendors to build virtualized network functions (VNFs) and NFV platforms to help them become more agile in delivering services, and to reduce equipment and operational cost.
To address identified gaps in OpenStack and other relevant open source projects, major industry players established the “Open Platform for NFV” in September 2014 as a Linux™ Foundation Collaborative Project. The intention is to create a carrier-grade, open source reference platform for NFV. Industry peers will build this platform together to evolve NFV and to ensure consistency, performance, and interoperability among multiple open source components.
There are 5 main areas in which OpenStack is currently lacking as a solution for telco NFV environments:
1. Distribution
In the IT world, enterprises want to consolidate their datacenters to reduce costs. But this is not always the best choice for NFV. Many NFV applications require a real-time response with low latency. NFV applications also need to be highly available and survive disasters. Service providers need the flexibility to deploy network functions in a distributed infrastructure — at the network core, metro area, access, and possibly even a customer’s premises.
Figure 1. Distributed NFV infrastructure
OpenStack supports Cells, Regions, and Availability Zones, but these concepts are not sufficient for the needs of NFV. Each OpenStack Region provides separate API endpoints, with no coordination between Regions. Typically, one or more Regions are located in one datacenter. The Cells component provides a single API endpoint that aggregates multiple cells.
With Cells, workload placement (“scheduling”) across cells is by explicit specification or by random selection. The Cells component doesn’t have a placement algorithm that is able to choose the best location based on the needs of the application.
The Horizon GUI is restricted to a single region at a time. There is no GUI able to show an aggregated view of the NFV cloud infrastructure. The OpenStack Glance virtual machine image manager is also limited to a single region. This means that the NFV operator would have to deploy images manually to the regions needed.
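As a sketch of what that per-region manual work looks like, the snippet below uses the openstacksdk client to upload the same image once per region. The cloud name, region names, and image file are assumptions; the point is that each Region exposes its own endpoints, so nothing propagates automatically.

```python
import openstack

# Each OpenStack Region has separate API endpoints, so the operator must
# repeat the image upload for every region that will host the VNF.
for region in ["region-core", "region-metro-1"]:   # assumed region names
    conn = openstack.connect(cloud="nfvcloud", region_name=region)
    conn.image.create_image(
        name="vnf-gateway-image",
        filename="vnf-gateway.qcow2",   # assumed local image file
        disk_format="qcow2",
        container_format="bare",
    )
```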
Bottom line: Service providers need a platform that will deal efficiently with the distributed NFV infrastructure necessary for low signal latencies and disaster resiliency. This infrastructure must also be manageable as a single distributed cloud with global views, statistics, and policies.
2. Networking
VNFs vary widely in their network demands. Because they are distributed throughout an NFV infrastructure, the baseline requirement for an NFV network is connectivity, both within datacenters and across WANs. Security dictates that different network functions should only be connected to each other if they need to exchange data, and the NFV control, data, and management traffic should be separated.
As network functions are decomposed – for example into data plane components and a centralized control plane component – network connectivity between these components needs to remain as highly reliable as traditional integrated architectures. Sufficient network resources should be available to ensure surging traffic from other applications cannot adversely affect NFV applications.
The network should be resilient against equipment failures and force majeure disasters. Latency and jitter requirements vary from hundreds of milliseconds for some control and management systems, to single digit milliseconds for mobile gateways and cloud radio access networks.
NFV networks will typically consist of a semi-static physical infrastructure, along with a much more dynamic overlay network layer to address the needs of VNFs. The overlay layer needs to respond quickly to factors such as changing service demands and new service deployments.
OpenStack Neutron is the OpenStack networking component offering abstractions, such as Layer 2 and Layer 3 networks, subnets, IP addresses, and virtual middleboxes. Neutron has a plugin-based architecture. Networking requests to Neutron are forwarded to the Neutron plugin installed to handle the specifics of the present network. Neutron is limited to a single space of network resources typically associated with an OpenStack region. It is unable to directly federate multiple network domains and manage WAN capabilities.
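For a sense of the abstractions Neutron exposes, here is a hedged openstacksdk sketch that creates an isolated Layer 2 network and an IPv4 subnet for one VNF’s data traffic; the cloud name, network names, and CIDR are assumptions.

```python
import openstack

conn = openstack.connect(cloud="nfvcloud")  # assumed clouds.yaml entry

# A dedicated network for VNF data-plane traffic, in line with the
# guidance that control, data, and management traffic stay separated.
net = conn.network.create_network(name="vnf-data")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="vnf-data-v4",
    ip_version=4,
    cidr="10.10.0.0/24",
)
```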
Bottom line: Service providers need a platform that will set up and manage local- and wide-area network (LAN and WAN) structures needed for carrier applications in a programmable manner
3. Automated lifecycle management
One of the greatest advantages of NFV as a software-based solution is its ability to automate operational processes. This includes the application lifecycle, from deployment to monitoring, scaling, healing and upgrading, all the way to phase out. Studies have shown that this automation will allow service providers to reduce operational expenses (OPEX) by more than 50 percent in some cases.
OpenStack Heat allows users to write templates to describe virtual applications (“stacks”) in terms of their component resources, such as virtual machines, and can include nested stacks. Originally, Heat templates were based on AWS™ CloudFormation™, but more recently Heat Orchestration Templates (HOT) have been introduced that offer additional expressive power. Heat focuses on defining and deploying application stacks but does not explicitly support other lifecycle phases.
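For readers unfamiliar with HOT, a minimal template looks like the sketch below, embedded as a Python string for consistency with the other examples here; in practice it would live in a YAML file handed to Heat. It declares a single virtual machine resource with parameterized image and flavor; the resource and parameter names are illustrative.

```python
# Minimal HOT template (illustrative); Heat deploys the described stack
# but, as noted above, does not manage later lifecycle phases.
hot_template = """
heat_template_version: 2013-05-23
description: One-VM stack for a single VNF component (illustrative)
parameters:
  image:
    type: string
  flavor:
    type: string
resources:
  vnf_vm:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
"""
```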
OpenStack Solum is a new project designed to make cloud services easier to consume and integrate into the development process. It is being designed to provide some of the missing lifecycle automation functions. There is some initial work on auto-scaling by combining the measurement capabilities of OpenStack Ceilometer with Heat. Heat is currently limited to a single OpenStack region.
Bottom line: Service providers need a platform that will automate not only deployment and scaling but also many other lifecycle operations of complex carrier applications with many component functions.
4. NFV infrastructure operations
The distribution of NFV infrastructures across many locations in a service provider’s network – as opposed to a few centralized locations – will pose specific challenges and impact the operational processes and support systems. NFV’s distributed infrastructure means that cloud nodes at different locations are added, upgraded, and/or removed more frequently than in a centralized cloud. These processes should be performed remotely whenever possible to avoid truck rolls across the coverage area.
OpenStack TripleO (OpenStack on OpenStack) is an experimental addition to the OpenStack family. The project aims at automating the installation, upgrade and operation of OpenStack clouds using OpenStack’s own cloud facilities. TripleO uses Heat to deploy an OpenStack instance on top of a bare-metal infrastructure.
Bottom line: Service providers need a platform specifically designed for a distributed NFV infrastructure, one that automates the complex software stack deployment and upgrade procedures.
5. High-performance data plane
Many carrier network functions (e.g., deep packet inspection, media gateways, session border controllers, and mobile core serving gateways and packet data network gateways) are currently implemented on special-purpose hardware to achieve high packet processing and input/output throughput. Running those functions on current off-the-shelf servers with current hypervisors can lead to a 10-fold performance degradation.
The industry is currently working on new technologies that have the potential to improve data plane performance on commercial off-the-shelf servers, in some cases to nearly the levels of special-purpose hardware.
Data plane performance, however, has been a fringe activity in the OpenStack community. Only recently, with the Juno release, has more focus been put on data plane acceleration. Juno offers support for giving virtual machines access to Intel®’s Single Root I/O Virtualization (SR-IOV) technology.
Bottom line: Service providers need a platform that will manage high-performance data plane network functions on commercial off-the-shelf servers.
Beyond OpenStack: What’s needed to make NFV work today?
Most service providers around the globe are looking for an open and multi-vendor NFV platform based on OpenStack. But as discussed, the OpenStack community is not strongly focused on some key NFV requirements. What’s missing is an NFV platform that goes beyond the scope of OpenStack to help customers realize reductions in CAPEX and OPEX, and improved service agility.
OpenStack is still under heavy development in many areas. As it matures, OpenStack will become more stable and richer in functionality, allowing it to better meet NFV requirements in certain areas. However, it is not expected to meet all requirements.
Service providers need a horizontal NFV platform that provides:
This approach will make it possible to break open today’s multiple application silos.
This article is based on the Alcatel-Lucent/Red Hat white paper CloudBand with OpenStack as NFV Platform.
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
Forward-thinking providers are already concerned that the coming wave of unicast traffic generated by popular on-demand video services will affect the delivery network from end to end. Clarifying the potential impact of these services on the network is vital, as the ramifications could be significant.
Growth of unicast
In a traditional cable or IPTV network architecture—broadcast or multicast—traffic is proportional to the number of channels. Beyond a certain range and for a limited channel line-up, adding new subscribers has no traffic impact. Unicast is different. Traffic is directly proportional to the number of devices: more devices beget more traffic.
As illustrated in Figure 1, multicast traffic will flatten as the subscriber base grows, because the likelihood that users are watching all available TV channels increases. Meanwhile, unicast will continue to rise in step with subscriber growth.
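A back-of-the-envelope Python model shows why the two curves diverge. The channel count, per-stream bit rate, and viewing ratio below are assumptions, and channel popularity is treated as uniform for simplicity.

```python
def multicast_gbps(subscribers, channels=200, mbps=5.0, watching=0.3):
    """Multicast: each distinct channel is carried once, and the expected
    number of distinct channels in use saturates at the line-up size."""
    viewers = subscribers * watching
    in_use = channels * (1 - (1 - 1 / channels) ** viewers)
    return in_use * mbps / 1000

def unicast_gbps(subscribers, mbps=5.0, watching=0.3):
    """Unicast: one stream per active device, so traffic grows linearly
    with the subscriber base."""
    return subscribers * watching * mbps / 1000

for subs in (1_000, 10_000, 100_000):
    print(subs, round(multicast_gbps(subs), 2), round(unicast_gbps(subs), 2))
# Multicast flattens near channels * mbps = 1 Gb/s; unicast keeps climbing
# to 150 Gb/s at 100,000 subscribers under these assumptions.
```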
Furthermore, knowing that the proliferation of connected devices is progressing rapidly, service providers don’t have the luxury of time. They need to get started on their transformation strategy now. Indeed, a Bell Labs study shows that metro video traffic will increase 720% by 2017.
Figure 1. Multicast and unicast traffic trends related to the number of subscribers
Key considerations
The paradigm shift from multicast to unicast impacts every aspect of the network—from access to the backbone. Figure 2 maps some of the key considerations to the network elements. Let’s look at how pay TV operators that want to offer personalized cloud TV services can re-imagine their network architecture from end to end.
Figure 2. Considerations for network design to support bandwidth demands of unicast traffic
Assess the situation
Before launching network-based time-shifted TV services, pay TV operators should model their cloud DVR solution. This includes identifying the type of services they will offer. Whether it is catch-up TV, restart TV, or personal recording, operators must understand the impact of these unique services on network transformation.
Here are several service characteristics to consider:
Meet the capacity challenge
Volume — 100s of catch-up TV programs and 100s of hours of personal recording for a large subscriber base — makes for a significant storage capacity challenge. Multiple storage nodes have to be interconnected within a 10 GigE LAN topology to accommodate petabytes of programming.
As a result, designing an appropriate solution for scaling data center networks must consider:
Software-defined networking (SDN) will be a fundamental component in this design. SDN is already being used to automate connectivity within virtualized data center infrastructures and can establish connectivity between cloud DVR nodes upon their creation.
Prioritize the traffic
Once the cloud DVR is built, the next step is to feed the unicast streams into the network through an edge router—BNG/BRAS for an IPTV network, or a video router/CMTS in a cable hub architecture. Traditional edge routers were built to support highly oversubscribed, best-effort Internet connectivity. Today, however, they are becoming a bottleneck for increasing unicast video sessions. Consequently, they need to be upgraded or replaced. As traffic is growing, they are also being further distributed in the network.
At the edge, pay TV operators apply quality of service to pay TV traffic delivered to the set-top box (STB) as opposed to over-the-top (OTT) traffic receiving best-effort treatment. TV service to connected devices is often treated like an OTT service.
It’s time to revisit this practice. From the end-user’s point of view, connected devices are increasingly becoming the primary screen. That means best-effort service is no longer enough. This concern is pushing pay TV operators to reconsider how they mark and prioritize the traffic.
Scale the network
Backbone
Traffic growth on the backbone network can be managed using a content delivery network (CDN). The CDN caches the most popular content at the edge of the backbone. When the same asset is requested by multiple end users, it is served from the CDN cache. This approach significantly reduces bandwidth consumption within the backbone network.
In the event of strong user content demand, this approach also protects the origin server from high peak requests. This ensures that other critical functions, such as ingest, recording, encryption, packaging, and streaming remain unaffected.
A CDN dramatically cuts the cost of the origin server while reducing investment in legacy infrastructure. Investing in additional caches to serve popular content from the edge is more economical than adding capacity to the centralized origin servers.
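A rough Python calculation illustrates that economy; the concurrency, cache hit ratio, and bit rate are assumptions.

```python
def origin_gbps(subscribers, mbps=5.0, concurrency=0.3, cache_hit=0.7):
    """Backbone/origin load with an edge CDN: only cache misses have to
    traverse the backbone to the origin server."""
    streams = subscribers * concurrency
    return streams * (1 - cache_hit) * mbps / 1000

# Under these assumed parameters, a 70% hit ratio cuts backbone load for
# 100,000 subscribers from 150 Gb/s (no CDN) to 45 Gb/s.
print(origin_gbps(100_000, cache_hit=0.0))  # 150.0
print(origin_gbps(100_000))                 # 45.0
```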
Today many operators are growing their CDNs, using them as a unified infrastructure to serve traditional devices, such as STBs, as well as newer connected ones[2]. Typically, the CDN delivers content using HTTP over TCP, while the STB receives it using RTP over UDP, as shown in Figure 3. To receive content from the CDN, the STB connects to an RTSP pump that requests content from the CDN over HTTP. It does this using the industry-standard ATIS C2 interface[3].
Figure 3. Click to see the complete IP video infographic and learn more about the standards, protocols and acronyms.
Metro
According to the findings of a Bell Labs study, distributing the caches further into the metro network can reduce the total traffic by 41%. To optimize their service, pay TV operators must ask:
For the operator, there’s a trade-off here. Bandwidth savings need to be weighed against the extra cost of the caches. Alternatively, significant QoS improvements brought about by this distributed architecture might be sufficient to justify the investment.
Access
Pay TV operators need to evaluate the options to increase throughput per user in the access network. For fixed networks, one approach is to push fiber closer to end users. For example, some IPTV operators are deploying FTTx solutions or flexible micro-nodes with vectoring. Then they install fiber all the way to the home.
For their part, cable operators have several options to increase bandwidth. They can:
On mobile access networks where bandwidth is scarce, service providers are using a combination of techniques to improve user quality of experience (QoE) while reducing transport costs. Some techniques, such as transcoding, transrating, and compression can potentially decrease video resolution by transforming the content or streaming at lower rates. Other content distribution techniques that retain video resolution—buffering, caching, and broadcasting—can also be used to enhance QoE.
Knowledge is power
Introducing a cloud-based DVR service takes serious forethought and planning, and naturally leads to a transformation program. Considering the options requires a deep understanding of the complete video service delivery chain, as well as world-class expertise in IP backbone, metro, and access networks. The contribution of this knowledge and experience should be highly valued in developing a comprehensive cloud DVR service strategy and in selecting the right partners with the appropriate range of consulting and professional services.
Related Material
Footnotes
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
For what seems like ages now, the communications industry has been talking about convergence. We have already gone through many phases as networks move from TDM to being end-to-end Internet Protocol (IP), with voice traffic increasingly being carried on converged networks. Indeed, the popularity of Voice-over-IP (VoIP) and the coming of Voice-over-LTE (VoLTE) on mobile networks point to the future.
That said, convergence is not just about IP. It is also about the transformation of global network infrastructures, in the wired world and with legs into the wireless one as well, through the convergence of IP and optics. And, as Steve Vogelsang, VP Strategy and CTO, IP Routing and Transport Business Division, Alcatel-Lucent, noted in a recent TechZine blog, IP and optics: Time to make nice, “Let’s face it. The future of the communications industry requires a convergence of IP and optics. So maybe it’s time to give each other some overdue respect.”
Vogelsang starts with the acknowledgment that: “Optical networking is very different from IP networking. The base system designs and some of the underlying technology are similar, but the design goals and resulting optimizations are quite different.” He continues by saying the reality is that “IP and Optics are destined to come together.”
Vogelsang then goes on to provide four insightful observations about IP and Optical convergence that are good food for thought. The four are:
Vogelsang proceeds to delve into an interesting discussion as to what needs to be addressed to get to IP and optical convergence. He notes that: “The first problem to solve is automating the optical layer, because much of what happens, even today, involves hands-on setup. ROADMs were a great start, but they only allow automation of the middle of the route, but not the ingress and egress points. Next-generation ROADMs solve a lot of these issues by making them colorless, directionless, contentionless (CDC) and, for networks over 100 Gbit/s, flexible (CDCF). But the key will be getting the routing layer to talk intelligently to the optical control layer and vice versa.”
How the network of the future gets to being ultra-broadband, including for mobile operators, is going to be through IP and optical integration in almost every part of the infrastructure. And, while we are not there yet, as Vogelsang says, “There is a lot of sophisticated and tricky maneuvering happening at the optical layer, which few IP engineers recognize or understand. While that was OK in the past, it is entirely insufficient today.” It is why the title of his posting, about it now being time for IP and optical engineers to “make nice,” is not just an observation but should be construed as a call to action.
Self-service to one degree or another has been present since the rise of the web. However, customers are increasingly choosing self-service because they feel more empowered and it is often perceived to be an easier interaction than dealing with a live person. The rise of the smartphone also has increased the use of self-service.
In fact, as explained by Jessica Verbruggen, Integrated Marketing Assistant at Alcatel-Lucent Motive, in a recent TechZine article, Empowering Autonomous Customer Self-Care, self-service can be a win-win for customers and communications service providers (CSPs).
The voice of the customer supports self-service
Verbruggen cites a recent consumer survey by Nuance Enterprise to illustrate her point. The survey found:
Plus, in terms of what motivates them to use a mobile app:
Benefits to CSPs
As noted, CSPs are finding self-service to be very beneficial. Experience has already proven that customer self-care reduces the cost of interaction with customers, allows them to collect more customer information and helps them deliver a more personalized experience.
“This, in turn, drives higher customer retention, increases revenues, and positions their brand as being a provider of a comprehensive and personalized customer experience,” Verbruggen noted.
One problem that many CSPs have, however, is easily delivering all the functionality that consumers expect and appreciate. That’s why products such as Alcatel-Lucent Motive’s Self-Service Console, part of the company’s Motive customer experience solution, are so well-received.
The Motive Self-Service Console empowers customers to pay their bills, access their accounts and schedule maintenance without having to involve a live agent. A large European operator that uses the tool has reported that 88 percent of customers that used the Motive troubleshooting application were able to avoid a call to the help desk entirely.
That’s huge. And it demonstrates strongly why CSPs are increasingly attracted to customer self-care.
“CSPs are able to cut costs, get a better view of their customers, and provide more personalized service,” explained Verbruggen. “That’s a win-win if I’ve ever seen one.”