There’s a strong case for network functions virtualization, especially when it comes to route reflectors (RRs).
Border gateway protocol (BGP) route reflectors have long been an important network component, reducing the need for a full BGP mesh within an autonomous system. They often run on IP routers either dedicated to route reflection or performing that role in addition to IP routing and service functions.
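The mesh-reduction benefit is easy to quantify: a full iBGP mesh needs n(n-1)/2 sessions, while route reflection needs roughly one session per client. A quick sketch (the formulas are standard; the function names are mine):

```python
def full_mesh_sessions(n):
    """iBGP full mesh: every router peers with every other router."""
    return n * (n - 1) // 2

def rr_sessions(n, rr_count=1):
    """With route reflection: each client peers with every RR, and the
    RRs maintain a full mesh among themselves."""
    clients = n - rr_count
    return clients * rr_count + full_mesh_sessions(rr_count)

print(full_mesh_sessions(100))       # 4950 sessions without RRs
print(rr_sessions(100, rr_count=2))  # 197 sessions with two RRs
```

For a 100-router AS, two redundant RRs cut the session count by more than an order of magnitude, which is why the RR role became so common.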
The case for virtualization is a strong one when it comes to RRs. A dedicated RR is underutilized in the data plane, because route reflection requires minimal data-plane resources. Yet routers that perform this role in addition to other jobs may not have enough CPU and memory for it.
This is where a virtual route reflector (vRR) makes sense. In a recent Alcatel-Lucent blog post, Virtual route reflector delivers high performance, Adam Simpson, Senior Product Manager, and Anthony Peres, Marketing Director, IP Routing portfolio, note that “a vRR offers more flexible deployment options and upgrades for improved scale and performance. Scale and performance levels can be adjusted up or down as needed by flexibly allocating virtual machine (VM) resources to the vRR.”
The trick is that not all vRRs are created equal; operators must pay attention to whether an implementation is based on mature software and whether it takes advantage of the underlying x86 server environment.
A mature vRR will leverage the multicore support and significantly larger memory capacity of x86 hardware.
“An implementation that supports parallel Symmetric Multi Processing helps unleash the power and performance of multi-core processing,” noted the Alcatel-Lucent blog. “This multi-threaded software approach offers concurrent scheduling and executes different processes on different processor cores. It significantly reduces route learning and route reflection times (route convergence times).”
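To illustrate the multi-threaded approach the quote describes, here is a toy sketch that shards a routing table across workers so best-path computation runs concurrently. It is not a real BGP decision process — the path attributes and tie-breakers are deliberately simplified:

```python
from concurrent.futures import ThreadPoolExecutor

def best_path(paths):
    # Toy BGP decision: highest local preference wins, then shortest AS path.
    return max(paths, key=lambda p: (p["local_pref"], -len(p["as_path"])))

def compute_shard(shard):
    # Each worker resolves best paths for its slice of the table.
    return {prefix: best_path(paths) for prefix, paths in shard}

def parallel_best_paths(rib, workers=4):
    """Shard the RIB across workers so route computation runs concurrently,
    mirroring how an SMP-aware vRR spreads work across processor cores."""
    items = list(rib.items())
    shards = [items[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(compute_shard, shards)
    merged = {}
    for shard_result in results:
        merged.update(shard_result)
    return merged
```

A production implementation would pin worker processes to cores and use shared-memory route tables, but the structure — independent shards, concurrent scheduling, a merge step — is the same idea.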
Further, operators should ensure that their vRR of choice supports 64-bit processing, which can address a larger memory space and so support more BGP peers and routing entries.
Virtual route reflectors make a lot of sense, but the devil is in the details.
Voice over LTE (VoLTE) is increasingly viewed as a strategic way for service providers to differentiate their services on Quality of Experience (QoE) from 3G and over-the-top (OTT) voice apps. It is also seen as a competitive advantage because it enables end users to seamlessly move from a voice call to a video call, or shift from one device to another in the middle of a conversation. This is why interest in accelerating VoLTE deployments is so high.
However, network transformations are not easy, and VoLTE deployment and operations are a case in point: they bring unique challenges for service providers related to policy control, charging and Diameter signaling control. Steffen Paulus, Director of Product Marketing at Alcatel-Lucent, has some insights worth sharing on the need for integrated policy, charging and Diameter signaling in a virtualized solution as the path to VoLTE success. This is particularly relevant in light of Alcatel-Lucent’s recent launch of its End-to-End Voice over LTE (E2E VoLTE) solution, an integral part of the Rapport multimedia real-time communications platform, which has been architected specifically to meet service provider and enterprise needs.
Paulus has a few tips and suggestions on how to get VoLTE rollouts optimized in those three critical and interrelated areas of policy, charging and signaling.
The first concerns the value of network analytics and personalized offers as the means for service providers to achieve high adoption rates when launching VoLTE. Network intelligence can be combined with sophisticated analytics to understand which markets and customers will benefit most. This enables service providers to offer self-service capabilities, proactively target specific customer segments based on rich contextual information, and respond faster to changing market conditions.
In fact, the ability to share analytics across lines of business — assuming the network transformation includes an upgrade to more flexible and adaptable underlying rating and charging capabilities — is critical for enabling rapid competitive responses. The reason is obvious but important: creative marketing can only work when new packages and business models are ready for prime time.
Second, creating compelling VoLTE experiences requires getting the policy and signaling plumbing VoLTE-ready to ensure QoE. On this score, Alcatel-Lucent, with deep expertise in VoLTE implementations worldwide, knows that VoLTE can expose significant shortcomings in legacy Policy and Charging Rules Function (PCRF) solutions: limitations in scalability and performance, and in features such as full geo-redundancy, session binding and correlation.
The scalability issue is not just about handling the data traffic expected from VoLTE adoption. A surge in VoLTE subscribers will also significantly increase Diameter-based, IMS-related control plane traffic. As Paulus explained, with this surge comes a need to properly manage that signaling traffic, offer load balancing and enable interworking capabilities: “This area is often referred to as diameter signaling control (DSC), and in combination with the IMS and policy & charging solution is a critical piece of the puzzle.”
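To make the session-binding and load-balancing requirements concrete, here is a minimal sketch of how a DSC front end might pin Diameter sessions to PCRF instances by hashing the Session-Id. The class and node names are invented; a real DSC routes on realm, application-id and live load feedback, not a bare hash:

```python
import hashlib

class DiameterBalancer:
    """Toy DSC front end: each Diameter session is pinned to one PCRF node,
    so every message in the session is correlated on the same node."""

    def __init__(self, pcrf_nodes):
        self.nodes = list(pcrf_nodes)

    def route(self, session_id):
        # Hashing the Session-Id gives a stable, even spread across nodes.
        digest = hashlib.sha256(session_id.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % len(self.nodes)
        return self.nodes[index]

balancer = DiameterBalancer(["pcrf-1", "pcrf-2", "pcrf-3"])
# The same Session-Id always routes to the same PCRF instance.
assert balancer.route("volte;host;1234") == balancer.route("volte;host;1234")
```

The point of the sketch is the invariant, not the mechanism: however the traffic is spread, all messages for one session must land on the same policy node or session binding and correlation break.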
Last but not least are the challenges of VoLTE cloud readiness, along with the value of network functions virtualization (NFV) as a foundation for next-generation service creation and delivery. Part of this rests on the value of moving to software-defined networking (SDN) and NFV capabilities in general as the most cost-effective and agile way to run a network going forward. Just as important, if not more so, is the ability for service providers to be fast to market, fast in the market, and incredibly fast to accommodate changes in market conditions, be they competitor- or user-driven.
As Alcatel-Lucent points out, and recent studies have confirmed, most operators have not yet upgraded their policy engines and moved to NFV to enable a scalable, high-speed data layer that can quickly create and manage differentiated data plans based on real-time information about subscriber preferences, needs and lifestyles. This is critical in a world where the ebb and flow of network and signaling traffic grows ever less predictable, driven by more flexible data plans and features such as real-time subscriber notifications about data usage.
Where all of this leads is that each of these tools for VoLTE success needs to be well orchestrated and integrated to achieve the scalability and agility that assure both service quality and speedy responsiveness. That places a premium on an integrated solution incorporating all the tools necessary for the operational efficiency and effectiveness that optimal VoLTE demands.
As they move into the cloud era, network operators need a service-aware network operations tool to assure virtual network function (VNF) management, and to perform a variety of network operations tasks efficiently.
As described in a vEPC post on converging NMS and VNF manager functions within the ETSI Management and Orchestration (MANO) architecture, operators need to evolve their network operations tools for NFV through tighter coupling of the NMS and VNF manager functions. Specifically for VNF assurance, the blog states: “Troubleshooting is simplified because traditional NMS faults/events are correlated with VNF related events/faults. The VNFM provides lifecycle management and automates the self-healing of VNFs.”
Beyond the ETSI MANO architecture, progress has been made on the ETSI specification defining NFV Service Quality Metrics, which strives to enable better engineering of VNF user service quality, more efficient fault localization and mitigation, and faster identification of the true root cause of service impairment so corrective actions can be taken promptly.
As NFV service quality metrics and traditional network service performance are continuously monitored, a service-aware infrastructure relationship model within a network operations tool will be important: it allows the tool to innately correlate events to the true root cause of service-impacting problems without volumes of custom, pre-configured policy rules and scripts. The model also lets operators perform a more rapid service impact assessment for network events under investigation, and speeds fault isolation and resolution.
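A service-aware relationship model can be pictured as a dependency graph from services down to VNFs, VMs and hosts. The sketch below is illustrative only — the node names and the depth heuristic are mine, not any product’s algorithm:

```python
# Toy relationship model: each entry lists what a node depends on.
DEPENDS_ON = {
    "voip-service": ["vIMS"],
    "vIMS": ["vm-7"],
    "vm-7": ["host-3"],
}

def impacted_services(faulty_node):
    """Walk the graph upward to find everything a fault ripples into."""
    impacted = set()
    def walk(node):
        for parent, deps in DEPENDS_ON.items():
            if node in deps:
                impacted.add(parent)
                walk(parent)
    walk(faulty_node)
    return impacted

def root_cause(active_faults):
    """Among correlated faults, report the one deepest in the dependency
    chain -- its failure explains the faults above it."""
    def depth(node):
        deps = DEPENDS_ON.get(node, [])
        return 1 + max((depth(d) for d in deps), default=0)
    return min(active_faults, key=depth)
```

With such a model in place, a host fault and the VM, VNF and service alarms it triggers collapse into a single correlated incident instead of four independent tickets.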
And to make this more advanced fault management meaningful for network operators, assurance visualization will help by providing intuitive views for easily understanding how a multitude of events and key quality indicators (KQIs) relate to each other, with clear visibility into the root-cause of problems. It will also insightfully give operators an understanding of the time-line for events and state changes in the network to give a better indication of cause and possible effects.
This blog is the second in a series on the evolution of network and service assurance. The first gives a general overview of how network operations tools can be made more efficient.
ASSURING THE EVER-CHANGING STATE OF THE VIRTUAL NETWORK
VNF configurations will be far more dynamic than with physical network elements (PNF), presenting new challenges for network operations tools to keep pace with many events related to highly dynamic network state changes and elastic scaling.
Manual processes that piece together assurance data from disparate views will not keep pace in this highly dynamic NFV environment. And traditional real-time-only monitoring and assurance views will not be effective when a VNF can be here one moment and scaled down and gone the next. Both current and historical events and state information must therefore be intelligently processed with near real-time performance, at large scale.
Consider how much more meaningful it would be for network operators if assurance views could be made more intuitive for easily understanding how all the network events and MANO related KQIs relate to each other. For example, wouldn’t it be more insightful for operators troubleshooting a service performance issue to have a timeline that shows the service impacting threshold crossing alerts (TCAs) as well as whether orchestration or network events occurred in the same general timeframe?
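The timeline idea can be illustrated in a few lines: merge TCA and orchestration event streams into one time-ordered view, then look for orchestration actions near a given alert. Timestamps and event labels here are invented:

```python
# Two event streams keyed by timestamp (seconds): service-impacting
# threshold crossing alerts (TCAs) and orchestration events.
tcas = [(120, "TCA: packet loss > 1% on vFW-2")]
orch = [(118, "scale-in: vFW-2 removed one VM"),
        (300, "scale-out: vFW-2 added one VM")]

def timeline(*streams):
    """Merge event streams into one time-ordered view for the operator."""
    return sorted(event for stream in streams for event in stream)

def correlate(tca_time, events, window=10):
    """Orchestration events within `window` seconds of a TCA -- candidate
    causes worth showing alongside the alert."""
    return [e for e in events if abs(e[0] - tca_time) <= window]
```

Here the packet-loss TCA at t=120 lines up with a scale-in event two seconds earlier — exactly the kind of juxtaposition an operator wants to see without hunting through separate consoles.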
ENHANCING NFV ASSURANCE WITH SERVICE QUALITY METRICS
As VNF deployments increase, network operations tools will need to evolve with new NFV service quality metric definitions and provide intelligence for correlating the multitude of different events coming from the various types of NFV infrastructure and MANO elements. Specifically related to troubleshooting and root-cause analysis that works in coordination with VNF lifecycle management, operators need service aware visibility and traceability to the various possible service quality impacting layers.
For operations to be effective in a highly dynamic environment with network services that depend on both VNFs and PNFs for underlying network infrastructure, there must be a service aware understanding of the relationships between services and these VNFs and PNFs. And equally important, there also must be a mapping of how service quality events triggered by virtual machines, VNFs, and orchestration layers impact or trigger changes in dependent layers.
For example, issues with virtual network provisioning latency, reliability or diversity compliance may trigger actions within the orchestration layer. But the primary concern for network operators is whether such a problem lies within the scope of their own control.
Without a network operations tool that can provide this type of intelligence for assuring VNFs, operators will not have the visibility needed to make that call. This type of information is valuable not only for troubleshooting, but more broadly for clarifying accountability for a localized problem across organizational groups, from IT to the different network domain teams.
Operators require a unified network operations tool that has evolved with the intelligence to meet all of these new NFV related assurance challenges. This tool must possess a service aware model that is unified with NFV lifecycle management. It must scale and perform to keep pace with tracking huge volumes of events that reflect the continual state of flux of change across service quality impacting layers. (For more examples of service quality metrics that provide requirements for assuring virtual networks, please refer to the ETSI specification for defining NFV Service Quality Metrics.)
EVOLVING ASSURANCE WITH ADVANCED FAULT MANAGEMENT
Operators deploying NFV require advanced fault management that provides both current and historical visibility for root-cause analysis, so that active faults can be correlated with past ones as the state of the network changes. This historical fault correlation is essential for pinpointing the root cause of problems in the highly dynamic virtualized network where MANO triggered corrective actions could potentially make intermittently reoccurring customer impacting issues difficult to investigate.
And network and service assurance tools in the cloud /NFV era must scale to track the full history of related service impacting events so network operators can perform both real-time troubleshooting and trend analysis.
Tools also need to have the intelligence to detect reoccurring problems. Specifically, operators require a tool that can help them to assess whether corrective resolutions that were automated are successful, or whether they are failing. And if failing, whether the failures are persistent or intermittent, and whether there is an actionable probable cause against the network infrastructure within the scope of the network operator’s control. And amongst the high volumes of events, there will also be a need to suppress (or filter out) events that do not require an action by the network operations team.
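Detecting recurring problems and suppressing non-actionable events can be sketched simply. The fault keys and severity labels below are hypothetical:

```python
from collections import Counter

def recurring_faults(history, threshold=3):
    """Fault keys seen at least `threshold` times: candidates for a failing
    automated resolution rather than independent one-off events."""
    counts = Counter(key for _, key in history)
    return {key for key, n in counts.items() if n >= threshold}

def actionable(events, suppressed_severities=("info",)):
    """Filter out events that need no action by the operations team."""
    return [e for e in events if e["severity"] not in suppressed_severities]

# A link that keeps dropping despite automated healing stands out
# from a genuine one-off like a transient CPU spike.
history = [(100, "vm-7:link-down"), (160, "vm-7:link-down"),
           (200, "vm-9:cpu-high"), (230, "vm-7:link-down")]
```

In practice the inter-arrival gaps between repeats would also be examined to distinguish persistent failures from intermittent ones, but the core of the capability is exactly this: counting correlated repeats across history instead of treating each alarm as new.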
The following video demo offers a deeper dive into an advanced fault management application from Alcatel-Lucent.
RELATED MATERIALS
Rarely does a video about network functions virtualization (NFV) captivate your attention like the one that Alcatel-Lucent recently uploaded about service innovation and lean operations in the context of NFV. Sometimes NFV can be a challenging concept to get your head around, but the video breaks it down with clear visuals and none of the PowerPoint that usually puts you to sleep.
If you haven’t seen the video, you can watch the embedded version below. But let me also explain what it covers.
Data center network operations are at the heart of telecommunications service delivery, but until recently nimble operations have been stalled by less than nimble infrastructure. New service creation required hardware deployments that both required up-front investment and limited service flexibility due to deployment times of days, weeks or even months.
NFV finally gets the network as lean and nimble as the virtual machines in the data center, allowing both the virtual servers and the network infrastructure to scale and change virtually as services are created or demand changes.
The Alcatel-Lucent video shows how companies can leverage NFV through the use of its CloudBand orchestration platform that manages network deployment and Nuage Networks’ network orchestration layer that does the network spin-up.
Through network service chaining, which links these services and can include third-party functions that run on the platform, operators can launch a new service such as content filtering with just a few clicks, selecting the infrastructure components they want to spin up, such as a WebRTC server.
The demo also shows how this NFV environment handles load variability and hardware failure. When load rises, new virtual machines automatically spin up to meet the extra demand. When demand falls, virtual machines are wound down automatically.
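The scale-out/scale-in behavior in the demo can be sketched as a simple threshold rule. This is a toy model, not how CloudBand actually decides; the target utilization figure is invented:

```python
import math

def scale_decision(load_per_vm, vms, target_load=0.6, min_vms=1):
    """Return the VM count that brings average per-VM load back toward
    the target utilization, scaling out under pressure and in when idle."""
    total_load = load_per_vm * vms
    return max(min_vms, math.ceil(total_load / target_load))

print(scale_decision(0.9, 4))  # overloaded: grow from 4 VMs to 6
print(scale_decision(0.2, 4))  # underused: shrink from 4 VMs to 2
```

Real orchestrators add hysteresis and cool-down timers so the fleet does not oscillate around the threshold, but the principle — track utilization, converge on a target — is the same.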
The system also helps maintain high availability. In the demo video, when the operator needed to shift to a second data center after the first one failed, the platform automatically looked for where to set up a new backup. This search weighed the cost of service creation in various locations, weather factors and other variables that network engineers usually consider when making a new deployment. Alcatel-Lucent calls this smart load placement.
Overall, the video is definitely worth a watch. Even if you already know a lot about NFV, seeing it in action is informative.
From the original TechZine article
Can the virtualized evolved packet core (vEPC) be deployed today in large scale, LTE networks? Mobile network operators (MNOs) are increasingly convinced that the vEPC has become viable both financially and technically. And I think so, too, based upon the advances made over the past year that I’ll discuss in this blog.
Advancements in vEPC scaling and performance
Early in 2014, the vEPC proofs of concept and field trials of virtualized mobility management and gateway products were limited in both scale and performance. But as the year progressed, advancements in the design and architecture used network functions virtualization (NFV) tools and capabilities that greatly improved their capacity and performance.
These improvements, together with other software enhancements, such as the Data Plane Development Kit (DPDK), have the vSGW/vPGW approaching the capacity and performance of dedicated hardware platforms.
Converged NMS/VNF manager: The key to seamless vEPC network operations
A lot of progress has been made with enhancements to the ETSI Management and Orchestration (MANO) architecture. However, rather than having separate element management system (EMS) and VNF manager (VNFM) functions, there’s been a move to converge these functions since both are integral to managing the VNFs. (The EMS described by MANO includes both network and element management (NMS/EMS) functions).
By unifying the VNF manager and NMS functions, an MNO can seamlessly manage and orchestrate the vEPC. This makes it easy for an MNO to perform VNF lifecycle management functions from the same NMS that is used on a day-to-day basis for network operations.
When EMS and VNFM are converged:
The traditional NMS Fault, Configuration, Accounting, Performance and Security (FCAPS) management function is now applicable to both the EPC VNFs and the physical network functions (PNF). This enables a common and consistent approach.
This also provides the topology and logical connectivity of the individual VNFs/PNFs and more advanced performance and SLA reporting. A single manager simplifies overall coordination and adaptation for configuration and event reporting between the virtualized infrastructure manager (VIM) and the NMS.
Troubleshooting is simplified because traditional NMS faults/events are correlated with VNF related events/faults. The VNFM provides lifecycle management and automates the self-healing of VNFs. It uses recipes to describe the vEPC VNF, its VNF components (underlying VM instances) and their interdependencies. Each VNF component has its own recipe, which includes a description of how to monitor, self-heal, and scale it.
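The recipe concept might look roughly like this in code. All field names, KPIs and thresholds below are invented for illustration; actual VNFM recipes are product-specific:

```python
# A hedged sketch of a VNF "recipe": each VNF component declares how it
# is monitored, healed and scaled, and the VNFM interprets the recipe.
VEPC_RECIPE = {
    "vnf": "vEPC",
    "components": {
        "vMME": {
            "monitor": {"kpi": "attach_success_rate", "min": 0.99},
            "self_heal": "restart_vm",
            "scale": {"metric": "sessions_per_vm", "out_above": 800_000},
        },
        "vSGW": {
            "monitor": {"kpi": "throughput_gbps", "min": 1.0},
            "self_heal": "respawn_on_other_host",
            "scale": {"metric": "throughput_per_vm", "out_above": 8.0},
        },
    },
}

def needs_scale_out(recipe, component, metric_value):
    """A VNFM check: does this component's metric exceed its threshold?"""
    return metric_value > recipe["components"][component]["scale"]["out_above"]
```

The value of the declarative form is that monitoring, healing and scaling policy live with the component description, so the VNFM can act on each VM instance without per-deployment custom logic.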
With coordinated fault management and automated self-healing, the MNO’s operations team will have the visibility and intelligence to understand whether alarms are caused by normal maintenance activities or are indeed an emerging issue that they need to react to quickly. In addition, new advanced NMS approaches to network assurance visualization will speed problem assessment for both VNF and PNFs. These developments will also provide the VNF and network event data to support reporting and analysis.
When the VNFM and the NMS are combined into a single management functional instance, the management and orchestration of the vEPC VNF and integration of the vEPC into the existing OSS/BSS infrastructure is greatly simplified. This is because the VNFM/ NMS has complete knowledge and visibility of VNFs within the physical and virtual EPC network.
Is the vEPC ready for commercial deployment?
Based on the progress made in both the scalability and performance of the vEPC VNFs and the advances made in management and orchestration of the vEPC, 2015 will be the year for vEPC deployments to commence at some Tier 1 mobile operators. The momentum and confidence of mobile operators in NFV will make it a reality.
Alcatel-Lucent at Mobile World Congress
Alcatel-Lucent will have a large presence at Mobile World Congress in Barcelona. I will take part in a panel discussion on “Unifying Network IT and Telco IT” on Thursday, March 5th from 11.30 – 13.00.
We will also be demonstrating our vEPC at our booth. There you will be able to see the dynamic scaling of our Virtualized Mobile Gateway and the operational elegance of our NMS/VNFM system. I look forward to seeing you there and discussing how our vEPC solution can meet your NFV evolution plans.
Related Material
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
Have you ever gotten your hands dirty and actually implemented an NFV or SDN application? Six teams from academia and industry in Israel and Europe can answer with a resounding yes. These teams gathered in Haifa for the four-day 2015 Winter School and Hackathon, organized by Bell Labs, Alcatel-Lucent’s CloudBand team and Technion, Israel’s leading institute of technology. The event offered a full program covering the fundamental concepts behind cloud computing, software-defined networking (SDN) and network functions virtualization (NFV).
Eighty participants gained a clear understanding of enabling technologies, NFV and SDN challenges and barriers, and how to overcome the obstacles of implementing virtualized network functions in the cloud.
The program started with two days of in-depth technical lectures covering the principles of the cloud, server and network virtualization, OpenStack, and high performance packet processing for NFV among other topics. Following this, participants had the opportunity to get hands-on experience with CloudBand, an advanced NFV platform, learn how NFV changes operator roles and responsibilities, and how operational processes can be automated to reduce operational expenditure. One of the use cases shown was the automated deployment of an NFV application in a distributed NFV infrastructure.
After acquiring a solid foundation in the first three days, six teams took up the challenge to develop a real NFV solution. The task was to virtualize the DHCP function of a residential gateway. Virtualizing customer premises equipment, such as residential gateways, and moving some of their complex functions into the cloud has been identified as a promising strategy to reduce cost and increase service provider ability to quickly deploy new services.
The winner of the challenge was a team headed by Mladen Tomic from the University of Rijeka, Croatia, who implemented a solution that not only delivered the cloud based DHCP service, but was also capable of scaling to adapt to changing service traffic. Mladen said, “I pretty much enjoyed the whole event, from attending lectures on hot and interesting topics, exchanging ideas with other participants and having some great fun both learning and competing in the hackathon.” Congratulations to the winners and to all participants for their highly motivated participation!
The future of NFV will depend on a generation of students and engineers capable of grasping the opportunities and challenges of NFV, and we are convinced they will be the creators of advanced NFV solutions that we cannot imagine today. Anyone can join and create their own applications on a public version of the hackathon. VNF and NFV technology providers can also apply to participate in the CloudBand Ecosystem Program.
Over the past several years, I’ve met with many mobile network operators (MNOs) and discussed their plans for virtualizing the evolved packet core (EPC). It’s clear from the more recent conversations that MNOs are now convinced that the vEPC is both financially and technically viable for their networks. But is the vEPC ready for the MNO’s LTE consumer network? In this article, I’ll discuss why I now think that’s possible.
vEPC scaling and performance
Early in 2014, the vEPC proofs of concept and field trials of the Virtualized Mobility Management Entity (vMME) and Virtualized Serving Gateway (vSGW)/Virtualized Packet Data Network Gateway (vPGW) were limited in both scale and performance. But as the year progressed, advancements in EPC Virtualized Network Function (VNF) design and architecture used Network Functions Virtualization (NFV) tools and capabilities that greatly improved capacity and performance.
For control plane subscriber scaling, it is now possible to support up to millions of simultaneous attached users and hundreds of thousands of eNodeBs and small cells on a single vMME instance. This is comparable with today’s existing MMEs built on standard telecom hardware platforms.
In the data plane, user capacity has increased significantly with the use of packet acceleration techniques. For example, Single Root Input/Output Virtualization (SR-IOV) bypasses the hypervisor and enables Virtual Machines (VMs) attached to the VNF (the vSGW/vPGW) to share a single physical Network Interface Card (NIC) that functions as multiple virtualized NICs. This greatly improves speed and increases capacity by reducing processing overhead. These improvements, together with other software enhancements, such as the Data Plane Development Kit (DPDK), bring the vSGW/vPGW close to the capacity and performance of dedicated hardware platforms.
Self-service in one form or another has been around since the rise of the web. But customers are increasingly choosing it because they feel more empowered, and because it is often perceived as an easier interaction than dealing with a live person. The rise of the smartphone has also increased its use.
In fact, as explained by Jessica Verbruggen, Integrated Marketing Assistant at Alcatel-Lucent Motive, in a recent TechZine article, Empowering Autonomous Customer Self-Care, self-service can be a win-win for customers and communications service providers (CSPs).
The voice of the customer supports self-service
Verbruggen cites a recent consumer survey by Nuance Enterprise to illustrate her point, including findings on what motivates customers to use a mobile app for self-service.
Benefits to CSPs
As noted, CSPs are finding self-service very beneficial. Experience has shown that customer self-care reduces the cost of interacting with customers, allows CSPs to collect more customer information, and helps them deliver a more personalized experience.
“This, in turn, drives higher customer retention, increases revenues, and positions their brand as being a provider of a comprehensive and personalized customer experience,” Verbruggen noted.
One problem many CSPs have, however, is easily delivering all the functionality that consumers expect and appreciate. That’s why products such as the Self-Service Console, part of Alcatel-Lucent’s Motive customer experience solution, are so well received.
The Motive Self-Service Console empowers customers to pay their bills, access their accounts and schedule maintenance without having to involve a live agent. A large European operator that uses the tool has reported that 88 percent of customers that used the Motive troubleshooting application were able to avoid a call to the help desk entirely.
That’s huge. And it demonstrates strongly why CSPs are increasingly attracted to customer self-care.
“CSPs are able to cut costs, get a better view of their customers, and provide more personalized service,” explained Verbruggen. “That’s a win-win if I’ve ever seen one.”
One of the things that will characterize 2015 is a trend that started picking up momentum in 2014: communications service providers (CSPs) have developed a sense of urgency about transforming their networks. It used to be that a network operator could invest with some level of assurance that the hardware, and the software that ran on it, would be core to the network for possibly decades before becoming obsolete. However, as everyone in the industry knows, this is no longer the case.
As the world becomes more software-centric in terms of service creation, delivery, agility, security and performance, network operators must meet the tsunami of data heading their way while maintaining their relevance as ecosystem hubs rather than “dumb pipe” providers. Cost-efficient, effective operational excellence, along with the need to be fast to market and fast in the market with innovative services and enhanced customer experiences, has become paramount. That is why so much attention is being paid to things like Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).
The need for speed has become (pardon the turn of phrase) hyper-critical. However, with recognition of the need to transform rapidly should also come the recognition that network operators cannot transform rapidly and successfully on their own. It may not “take a village” to get transformations into the fast lane and done right, but it certainly takes trusted partners. In fact, Olivier Gueret, Senior Marketing Manager Wireless Transmission at Alcatel-Lucent, in a recent TechZine article, Rely on partners for your network transformation, makes a strong case for the vital role partners can play in helping develop and expedite successful network transformations.
Gueret explains why professional services in particular are important in network transformation projects, for reasons that include filling skills gaps and bringing experience with all of the complexities of such projects. From my own observations, network transformations are like trying to change jet engines while the plane is at 30,000 feet. They are extremely complicated, especially since every customer is unique, and the plane needs to stay in the air and perform at optimal levels even as parts are replaced. There are also interesting challenges in quantifying the costs of change and demonstrating that the ends justify the means.
Gueret in his posting makes the case outlined above: network transformation is no longer a nicety; it is a necessity. He goes on to highlight that this really is a case of different strokes for different folks, pointing to a recent Ovum study which found that, when it comes to reasons to transform, operators are divided into two camps:
As he notes, while both camps share the same goals of transforming the network to increase revenues and reduce OPEX, they certainly diverge on how to get there. This can lead to traps that reliance on a trusted partner with deep network transformation expertise can help avoid.
Gueret points out the hidden costs of “home-made” network transformations. The additional costs of going it alone include: costs of unexpected delays caused by poor planning and sequencing; costs from poor-quality assessments of infrastructure capabilities; and costs from over-dimensioning, e.g., spending on capacity that will not be used or cannot be optimized.
The case for relying on a trusted partner
As Gueret details, the case for relying on a trusted professional services partner is a compelling one. He notes that such a partner “can define, plan and execute a transformation efficiently, even if most operators have in-house competencies to do it themselves.”
The benefits he cites are:
The article goes on to point out how professional services are part of a broader set of capabilities for upgrading network infrastructure, and that partnering on a variety of fronts can enable operators, regardless of where they are coming from, to shift their business models. This means relying on a variety of trusted partners not only to prepare and execute their network transformation but also to manage and maintain their networks.
This would let operators shift their business model to focus on their core activity: managing their commercial offers and their customers. This is a reality that is summed up well in the chart below from the posting.
Figure 3. Enabling operators to focus on customer-facing activities
The message is a powerful one. The urgency is there for operators to transform their networks, for a host of well-known reasons relating to operating costs and competitive necessity. Despite a cultural history of doing almost everything themselves, network operators that rely on the expertise of others have the opportunity to meet their cost objectives and concentrate on what they do best. That means not just listening to the voice of the customer but truly hearing it, and reacting quickly in ways that encourage loyalty and the willingness to trust the operator when evaluating new products and services.
Service providers’ networks, which traditionally have been based on turnkey network elements running software on purpose-built hardware, are moving to a software-centric model. In this model the true value lies in the software, while the hardware is typically of the commercial-off-the-shelf variety.
Network Functions Virtualization (NFV) is the name of this new architecture, which not only embraces the model of implementing network functionality in software and running it on industry-standard servers, but also allows applications and services to leverage those resources whenever and wherever they are.
The success of virtualization in the data center has demonstrated the power of running network capabilities on virtual machines. That’s powerful because it allows networks to be more fluid so they can meet shifting demands. It’s also powerful because it can result in cost savings, given less – and less specialized – hardware is required, and given virtualized environments (in which one server can host various network elements) tend to consume less power than environments featuring a collection of appliances.
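The consolidation savings described above can be made concrete with a toy calculation. Every number below (appliance count, wattages, consolidation ratio) is a made-up illustration, not vendor data:

```python
# Toy consolidation arithmetic: replace dedicated appliances with
# virtual machines packed onto fewer general-purpose servers.
# All figures are hypothetical illustrations.

appliances = 12            # dedicated boxes in the legacy environment
watts_per_appliance = 300
vms_per_server = 4         # assumed consolidation ratio after virtualization
watts_per_server = 500

# Ceiling division: servers needed to host all functions as VMs
servers = -(-appliances // vms_per_server)

power_before = appliances * watts_per_appliance  # legacy power draw
power_after = servers * watts_per_server         # virtualized power draw

print(f"{servers} servers replace {appliances} appliances, "
      f"saving {power_before - power_after} W")
# → 3 servers replace 12 appliances, saving 2100 W
```

Even under these rough assumptions, the direction of the result illustrates why fewer, less specialized boxes tend to cut both capital and power costs.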
NFV also can help facilities-based network operators effectively reinvent themselves to be more agile, so they can better compete with faster and often smaller over-the-top service providers.
Reducing equipment costs and power consumption, and expediting the introduction of new services and features were among the key goals laid out by ETSI’s NFV group, which got the network functions virtualization movement rolling a couple years ago. Founders of the NFV group within the European standards body included AT&T, BT Group, Deutsche Telekom, Orange, Telecom Italia, Telefonica, and Verizon.
Network operators that want to get started with NFV, suggests Andreas Lemke, marketing lead of the CloudBand NFV platform at Alcatel-Lucent, should take advantage of what he describes as “5 must-have attributes of an NFV platform.” These include:
Finally, and as important as all of the technology, Lemke says that those wishing to get started with NFV should select partners that can provide the same five 9s reliability, quality of service, and security in the new virtualized environment as they enjoy with their existing networks.
There is a growing industry consensus that NFV will become the architecture of the future for networks that are agile, applications-friendly, high-performance, interoperable and secure. In fact, not only is there consensus, but there is traction in the market for NFV solutions as service providers look to transform themselves to profitably accommodate rapidly changing market requirements. However, not all NFV solutions are alike, which is why Lemke’s attribute list is worth considering as part of any NFV evaluation.
It feels like just a few months ago that you could read articles in the trade press lumping together SDN and NFV, with NFV being a form of SDN or vice versa. Yes, both are somehow about virtualization and about converting hardware into software. Today, after numerous proofs of concept run by service providers around the globe, we know that SDN is virtually indispensable for NFV solutions that aspire to deliver the kind of agility and operational simplification we all expect from NFV. Only SDN can deliver quickly enough the (virtual) networks needed for newly deployed network functions. Alcatel-Lucent has recently demonstrated a complete virtual evolved packet core (vEPC), including virtual IMS/VoLTE, deployed in less than 30 minutes.
NFV and SDN enable on-demand service composition by steering traffic through a sequence of middle-box service functions (service function chaining), such as firewalls and traffic optimization. For example, an enterprise or consumer customer can use a self-service portal to check off the desired functions, which causes virtual network functions to be deployed or scaled and (per-subscriber) routing policies to be changed automatically (flow-through provisioning).
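The flow-through provisioning just described can be sketched in a few lines. The function catalog, names and policy format below are hypothetical, not an actual Alcatel-Lucent or Nuage Networks API; the point is only how a portal check-off translates into an ordered service chain and a per-subscriber steering policy:

```python
# Illustrative sketch of service function chaining with flow-through
# provisioning. Catalog entries and the policy shape are invented.

# Available middle-box functions, in the order traffic should
# traverse them when a subscriber selects them.
CATALOG = ["firewall", "parental_control", "traffic_optimizer", "nat"]

def build_service_chain(selected):
    """Return the ordered chain of functions the subscriber checked off."""
    unknown = set(selected) - set(CATALOG)
    if unknown:
        raise ValueError(f"unknown functions: {sorted(unknown)}")
    return [fn for fn in CATALOG if fn in selected]

def routing_policy(subscriber_id, chain):
    """Translate a chain into a per-subscriber steering policy:
    the (from, to) hops an SDN controller would program."""
    hops = ["subscriber"] + chain + ["internet"]
    return {"subscriber": subscriber_id,
            "hops": list(zip(hops, hops[1:]))}

chain = build_service_chain({"firewall", "traffic_optimizer"})
policy = routing_policy("sub-001", chain)
print(chain)           # ['firewall', 'traffic_optimizer']
print(policy["hops"])  # steering hops from subscriber to internet
```

In a real deployment, emitting the policy would also trigger the NFV orchestrator to deploy or scale the corresponding virtual network functions before the SDN controller steers traffic through them.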
Likewise, NFV responds to changing traffic within minutes by spinning up additional virtual machines, not only within the same data center but also in a data center close to where the traffic demand originates. NFV enables rapid software upgrades while containing the risk of service degradation. We are even seeing demand on the horizon for adopting DevOps models in the telco domain.
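That elasticity decision amounts to simple capacity math per region. The sketch below uses made-up capacity figures and site names purely for illustration:

```python
# Hypothetical autoscaling sketch: size VM instance counts per region
# to measured demand, placing capacity near the traffic source.
# Capacity numbers and data center names are invented.
import math

VM_CAPACITY = 10_000  # sessions one VM instance can serve (assumed)
SITES = {"eu-west": "Paris DC", "us-east": "New York DC"}

def instances_needed(demand_by_region):
    """Return {region: instance_count}, keeping at least one
    instance per region for availability."""
    return {region: max(1, math.ceil(demand / VM_CAPACITY))
            for region, demand in demand_by_region.items()}

plan = instances_needed({"eu-west": 42_000, "us-east": 3_500})
for region, count in plan.items():
    print(f"{SITES[region]}: scale to {count} instance(s)")
# Paris DC: scale to 5 instance(s)
# New York DC: scale to 1 instance(s)
```

An orchestrator would re-evaluate a plan like this continuously and spin instances up or down in the data center closest to each region’s demand.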
A classical operational model, with change requests being sent to the networking department, is no longer up to the task. The network needs to be as dynamic as the server infrastructure, and it is clear that only SDN can fill the bill. This will be a stepwise process, and not just any SDN will be suitable for NFV. Telco networks are not just about putting packets in on one side and having them pop back out at their destinations. They are designed to deliver sufficient capacity, performance, security and high availability for the critical services running over them in an end-to-end, geo-distributed environment.
Clearly, SDN is right for NFV but it needs to be the right SDN. Read the white paper “The right SDN is right for NFV” to learn about critical network requirements for NFV, SDN use cases and four stages of SDN integration into NFV bringing different degrees of reward to service providers. Alcatel-Lucent CloudBand™ and Nuage Networks® VSP are discussed as an example integrated SDN/NFV solution.
]]>I still remember with great excitement how, in October 2012, a group of network operators published a whitepaper that coined the term Network Functions Virtualization. This announcement validated a vision that we had been promoting under the name of “Virtual Telco” for more than two years. The telecommunications world had decided to start a fascinating journey towards the cloud, and we were already in the game with a product. What I could not imagine is how fast things would move.
Since then, many players have announced their plans, and many of them have shown some kind of functionality addressing different aspects of NFV. CTO teams have been leading the discussions around performance and functionality, but until now, nobody had tackled one of the major questions: Will the NFV business case fly?
We had begun internal discussions around the business case to support the NFV value proposition and had finished a framework when an opportunity arose to work on a virtual DNS business case with a customer. In the course of a DNS demo presentation on top of the Alcatel-Lucent CloudBand™ NFV Platform, one of our tier-one service provider customers pointed out that their large fixed network DNS deployment was at end-of-life and they were planning to replace it. The discussion quickly focused around the question of whether migrating to NFV with CloudBand would be more beneficial than replacing the old servers with new ones, but keeping the traditional mode of operation. To help with this decision, we offered to develop a joint business case comparing both scenarios. The customer agreed, and within two weeks I was working with our client’s DNS lead at their premises.
A first challenge was to develop a suitable methodology for the analysis. We decided we had to map out the main processes in DNS operations in a way similar to Gantt charts. That was a laborious task, but it was key to capturing the differences induced by NFV. We analyzed how the processes were performed today, and how they would change to fit the NFV model. To quantify capacity requirements and get credible numbers, we tested DNS loads on a real CloudBand node in the customer’s lab. As a result, we had to change our initial estimates, which turned out to be too conservative. Finally, after about ten weeks of joint effort, we achieved what looked like a sound business case. It had been a great challenge, but the effort had paid off.
The work delivered two results we had not necessarily expected. First, even a simple application like DNS can clearly benefit from running on an NFV platform. Processes such as scaling, software upgrading and healing are greatly simplified, which increases agility and significantly lowers total cost of ownership. Second, the results were clearly positive even with a single application running on the NFV infrastructure; that is, service providers can start small on the road to NFV. It is not necessary to deploy many virtual network functions all at once, although sharing the infrastructure across functions will of course deliver the full benefits.
Looking at the demand for presentations both from partners and customers, I can now see how important this exercise was to better understand the economic value of NFV, and how it contributes to the bottom line. Great days are ahead of us. I cannot wait to see the first wave of NFV deployments that will change the telecoms world as we know it today.
For additional information, read the ‘NFV Insights Series: Business case for moving DNS to the cloud’.
]]>An NFV platform enables providers to run network functions on a homogeneous, distributed cloud infrastructure. Using an NFV solution, they can port network functions such as communications and messaging applications and fixed and mobile network functions over to a virtual machine environment. Freed from proprietary, physical hardware, providers can leverage this virtualized infrastructure as the basis for their own service platforms and operations.
Seeing the opportunity inherent in NFV, as described in detail in an application note, Alcatel-Lucent has developed a purpose-built NFV platform for service providers: CloudBand. The platform supports distributed clouds and dynamic network control to meet application demands, and it optimizes network operations by automating cloud node management, application lifecycle management, smart placement and network configuration.
“With the CloudBand NFV platform they gain the agility to quickly deploy and upgrade services in a dynamic cloud environment, and to grow and shrink service resources on demand,” notes Alcatel-Lucent. “The platform eliminates the need to buy more custom hardware or support the large operations teams that are currently needed to install and manage sites.”
The CloudBand solution consists of a software and hardware stack that is made up of Alcatel-Lucent’s CloudBand Management System and its CloudBand Node.
The CloudBand Management System orchestrates, automates and optimizes virtual network functions across a service provider’s distributed network and data centers, and aggregates distributed cloud resources to provide a coherent view of the entire NFV infrastructure as a single, carrier-grade pool.
The CloudBand system has a pluggable architecture, supports industry-standard APIs such as OpenStack and Apache CloudStack, enables multitenancy, and can serve as a platform-as-a-service that automates the complete application lifecycle, from deployment through monitoring, scaling, healing, upgrading and patching, all from an HTML5 interface.
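Lifecycle automation of this kind can be pictured as a small state machine that drives an application from deployment through monitoring, scaling, healing and upgrading. The states and transitions below are an illustrative sketch, not CloudBand’s actual model:

```python
# Illustrative application lifecycle state machine for an NFV platform.
# States, events and transitions are hypothetical.

TRANSITIONS = {
    "new":       {"deploy": "running"},
    "running":   {"scale": "running",     # scale out/in, stays running
                  "fail": "degraded",     # monitoring detects a fault
                  "upgrade": "upgrading"},
    "degraded":  {"heal": "running"},     # automated healing
    "upgrading": {"finish": "running"},   # rolling upgrade completes
}

def step(state, event):
    """Apply a lifecycle event; reject transitions the model forbids."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"cannot {event!r} from state {state!r}")

state = "new"
for event in ["deploy", "scale", "fail", "heal", "upgrade", "finish"]:
    state = step(state, event)
print(state)  # → running
```

The value of encoding the lifecycle this way is that every operation (scaling, healing, patching) becomes an automated, auditable transition rather than a manual procedure.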
The CloudBand Node is a turnkey, all-in-one compute, storage and network node system that the company bills as a “cloud in a box.”
With these components, Alcatel-Lucent has produced an NFV platform that delivers the five major ingredients of a successful NFV implementation: orchestration that treats data centers and networks as a single cloud; abstracted and automated network provisioning and monitoring; lifecycle management; an open infrastructure that partners and developers can leverage; and easy deployment through the CloudBand Node.
All this leads to an NFV solution that is remarkably easy to deploy given the benefits it delivers for operators.