The cloud era is here -- do you think your network is ready? As a network operator, you will need to deliver on-demand network services that are just as dynamic as the cloud services that now dominate network traffic. You face many challenges in making this happen.
But a new study from ACG Research shows you can achieve this quickly and profitably with advancements that are available now. Their analysis of the new Alcatel-Lucent Network Services Platform in a national network scenario showed you can cut service creation time, generate more revenue, and achieve significant ROI very quickly.
Network complexity & waste block profitability
So what’s stopping you with the present mode of operation? Complexity and waste are getting in the way of your profitability.
The business processes used to plan, build, and operate network infrastructure involve manual handoffs between the network engineering processes that control network resources and the network operations processes that provision services. Each is further divided into separate packet and transport silos. OSS/IT and element management systems are forced to interoperate with the network through multiple, complex, and vendor-specific APIs.
The impact of these limitations can be crippling as operators make the transition from static to dynamic network services.
Carrier SDN: Automate & optimize for true freedom
Carrier SDN offers a fresh way forward. The NSP leverages Carrier SDN to unify service automation and network optimization in one integrated platform. The result is that network operators can deliver dynamic services quickly, efficiently, and at great scale.
The NSP accomplishes this by:
ACG Research test results
To put the NSP to the test, ACG Research compared an NSP-enabled national network with an equivalent network run under the present mode of operation (PMO), each delivering bandwidth calendaring and bandwidth-on-demand services to a target market of 10,000 large enterprises.
ACG found the following:
Operators can achieve this dramatic increase in revenue with the NSP, compared to the PMO, because it can improve capacity utilization by 40 percent. This utilization improvement enables profitable operation at a price point that is 29 percent lower than the PMO price point. This, in turn, stimulates demand relative to what is possible using the PMO. The NSP’s 58 percent faster service creation time also provides a first-mover advantage and advances revenue recognition.
To be as dynamic as the cloud services that now dominate network traffic, you will need to:
We believe the NSP can help you make this happen. Download the Carrier SDN business case and register for our upcoming webinar series to find out how you can cut service creation time and grow revenue.
We already know about the meteoric growth of the Internet and mobile technology. Cloud and data center traffic will increase by 440 percent by 2017, according to a recent Alcatel-Lucent blog post, and video consumption will rise by 720 percent during that time.
What many of us do not know, however, is that the Internet also damages the environment; Gartner recently showed that the Internet creates more than 300 million tons of CO2 a year. So growth of the Internet and mobility is not such a happy picture from a sustainability perspective.
If we are to combat this looming environmental challenge, it will take the work of not just individuals but also businesses committed to sustainable practices. Thankfully, sustainability can be good for companies and not just the environment.
“Organizations that start implementing green solutions are not only helping to reduce the impact of climate change,” noted Nataly Leal, senior communications manager of the Andean region for Alcatel-Lucent in a recent blog post, Internet, a driving force for the responsible development of countries. “Using planet-friendly technologies helps to generate more sustainable business, reduces operating costs, projects a better image of companies and generates a better economy. All these benefits have an impact on business.”
The proper use of technologies can help businesses reduce carbon emissions by 20 percent, according to the Global e-Sustainability Initiative.
“Facing this reality, more technology companies have been working to create new solutions that consume energy in an intelligent way and at the same time be kind to the planet,” Leal noted in her blog post. “Each day these companies launch to market products that meet the major challenges of climate change, seeking help to minimize the impact of the Information and Communications Technology (ICT) industry.”
For instance, Alcatel-Lucent has developed applications such as G.W.A.T.T. to help operators, service providers and other interested parties understand the energy consumption and costs associated with communications networks, both now and in the future. These tools help firms identify energy hot spots and show how to improve sustainability practices.
The Internet is a boon for business. But businesses also need to make sure it isn’t a detriment to the environment. In fact, being a good steward of the earth’s resources needs to become part of a company’s DNA, as it has been at Alcatel-Lucent for many years. Indeed, there is demonstrable proof that being a good steward is also good business.
In the search for more knowledge about the incredible pace of innovation and change that is driving major network transformation by enterprises and service providers, it is always a good idea to review the postings of those on the front lines. This is why the recent blog by Marten Hauville, Principal Solutions Architect (ANZ) for cloud networking specialist Alcatel-Lucent’s Nuage Networks business unit and co-organizer of the Australian OpenStack User Group, caught my attention.
Hauville in his blog raises and answers a timely question, “What’s up with the data center network?”
The reason this is so important is as Hauville notes, “We are in the midst of a transition in IT. Over the last couple of years the cloud has morphed from a disruptive technology on the periphery of IT into the mainstream.” In short, the world is going cloud and data center-centric.
Of the three pillars that are the foundation of the move to a data center-centric, software-defined and controlled, applications-based world (compute, storage and network), the network has historically been a laggard when it comes to transitioning to next-generation capabilities. However, as Hauville explains, this is no longer the case. Indeed, thanks to Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), the pace of innovation and adoption of cloud-centric transformations is accelerating. Hence the question of what’s up with the data center network is so relevant.
Hauville starts with the assertion that: “Business competitive advantage these days is dictated by swiftness and agility, increasingly around business-driven applications that attain this advantage in the marketplace. This new edge is being pushed hard by enterprises that are adopting web-scale capabilities through software, drawing them into their inherent business products and practices.” He goes on to cite chapter and verse about how and why “Cloud IT” has become literally mission critical for enterprises in Australia and New Zealand.
Having made the case for Cloud IT, Hauville asks how to enable the cloud to drive greater agility across the whole business. The answer is transforming the data center network. Yet, as he notes, the network presents some interesting challenges. In fact, the inability of the network to keep pace with compute and storage, he says, has led to a situation that, “Limits the overall efficiencies businesses could achieve from both their virtualization and initial private cloud investments.”
Cracking the network constraint challenge
What really caught my attention was the following statement by Hauville that: “This fundamental network constraint is not caused by the hardware capacities or bandwidth of the network. Far from it. The capacity and speed aspects of data centre networking have tracked well ahead of compute power with the availability and density of 10Gigabit, 40Gigabit and even 100Gigabit. The issue is due to limited evolution in the management, configuration and dynamism of these networks.”
I will not spoil why I have bookmarked the blog as a must-reread reference, but Hauville explains how adding next-generation management and configuration (i.e., orchestration and control) can bring out the maximum value of all of the other technology upgrades taking place in data centers. He then goes on to make a very cogent case for Software-Defined Networking (SDN) implementation as the means for achieving data center operational excellence.
Hauville closes with a caveat worth considering, “So if this future is set, and the underlying technology decision been made the key question now is not if you choose SDN but how you choose the right SDN implementation.”
Unfortunately, whether operators embrace traditional vendor solutions or open source solutions for SDN, not all SDN solutions are alike. At an even higher level, the caveat should also resonate because not all virtualization initiatives in general are alike. The fact is that interoperability issues are going to be a major challenge for SDN. They are also going to be an issue for the NFV solutions that service providers are beginning to implement. It will be fascinating to see how far and how fast solution buyers push vendors to resolve these issues as internetworking, and not just what goes on inside a data center or a federation of networked private cloud data centers, comes to the fore.
Circling back to the question raised at the top about what’s up with data center networking, the answer is two words: “a lot.” And the caveat to this answer is the same as Hauville’s. Choosing the data center networking transformation technology that is right for your organization is a complicated challenge, since there are options and vendors to be evaluated in the context of your unique requirements. However, such transformations are no longer a matter of if but when, and because of the way business is changing, a sense of urgency about making the right move should be a driver.
A recent Alcatel-Lucent application note, The large enterprise has changed, gave an interesting snapshot of large enterprise IT today.
Source: Alcatel-Lucent, The large enterprise has changed
Based on this, it stressed that large enterprises have networking and communications infrastructure needs that are surprisingly similar to those of the network operators themselves, thanks to the growing importance of having employees connected with the bandwidth, security and reliability they need to do their jobs efficiently and effectively.
What this means is that large enterprises should start thinking like a network operator. This includes having telecom-grade IP platform infrastructure in place to support employee connectivity.
Specifically, large enterprise should think about using data center automation that can take advantage of technologies such as software-defined networking (SDN). With something like Alcatel-Lucent’s Nuage Networks Virtualized Services Platform, large enterprises can deliver SDN capabilities including centralized, policy-driven networking, simplified configuration and compliance automation.
Large enterprises also should have virtualized network services that can leverage SDN to create wide area networks (WANs) that use best-of-breed technology and avoid proprietary lock-in.
In terms of the cloud, large enterprises are overwhelmingly deploying private clouds. Large enterprises should make sure they have a turnkey solution in place to make those deployments easy and also flexible enough to support web-based applications and mobile apps.
In thinking like telecoms, large enterprises additionally should consider optical transport and data center interconnect.
Optical transport delivers the bandwidth and speed that large enterprises need to keep up with network demand, and data center interconnect delivers the flexibility and capacity for faster service turn-up and assured business continuity while improving asset utilization and lowering costs. Data center interconnect brings scalable, secure, high-performing, multi-site data center connectivity for the cloud era.
Network connectivity is a key component of every business, especially for large enterprises. As a result, businesses need to learn from network operators and consider investing in similar technologies when it comes to their own connectivity projects.
As they move into the cloud era, network operators need a service aware network operations tool to assure virtual network functions (VNF) management. They’ll need it to efficiently perform a variety of network operations tasks, including:
As described in a vEPC post related to converging NMS and VNF manager functions within the ETSI Management and Orchestration (MANO) architecture, operators need to evolve their network operations tools for NFV through tighter coupling of the NMS and VNF manager functions. Specifically for VNF assurance, the blog states “Troubleshooting is simplified because traditional NMS faults/events are correlated with VNF related events/faults. The VNFM provides lifecycle management and automates the self-healing of VNFs.”
In addition to the ETSI MANO architecture, progress has been made on the ETSI specification defining NFV Service Quality Metrics, which strives to enable better engineering of VNF user service quality, more efficient fault localization and mitigation, and faster identification of the true root cause of service impairment so that proper corrective actions can be taken promptly.
As NFV service quality metrics and traditional network service performance are continuously monitored, a service aware infrastructure relationship model within a network operations tool will be important for it to be able to innately correlate events to the true root-cause of service impacting problems, without having to develop and pre-configure volumes of custom handling policy rules and scripts. In addition, this model will allow operators to perform a more rapid service impact assessment for network events under investigation, as well as speed fault isolation and resolution.
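To make the relationship model above concrete, here is a minimal, purely illustrative Python sketch (all class, service, and resource names are invented; no actual product API is implied) of how a service aware model can relate events to impacted services and probable root causes without volumes of custom per-fault policy rules:

```python
from collections import defaultdict

class ServiceModel:
    """Toy service-aware relationship model (hypothetical, for illustration).

    Services depend on VNFs/PNFs; an event recorded against a resource can
    be traced to every service it impacts, and a service's probable root
    causes are simply the events on the resources it depends on.
    """
    def __init__(self):
        self.deps = defaultdict(set)      # service -> resources it depends on
        self.events = defaultdict(list)   # resource -> list of (time, event)

    def add_dependency(self, service, resource):
        self.deps[service].add(resource)

    def record_event(self, resource, timestamp, description):
        self.events[resource].append((timestamp, description))

    def impacted_services(self, resource):
        """Which services does a fault on this resource impact?"""
        return sorted(s for s, rs in self.deps.items() if resource in rs)

    def probable_root_causes(self, service):
        """All recorded events on resources this service depends on."""
        return sorted(
            (t, r, e)
            for r in self.deps[service]
            for t, e in self.events[r]
        )

model = ServiceModel()
model.add_dependency("vpn-service-1", "vnf-firewall-a")
model.add_dependency("vpn-service-1", "pnf-router-7")
model.record_event("vnf-firewall-a", 100, "vCPU saturation TCA")

print(model.impacted_services("vnf-firewall-a"))   # ['vpn-service-1']
print(model.probable_root_causes("vpn-service-1"))
```

A real tool would of course model many more layers (virtual machines, orchestration events, KQIs), but the principle is the same: once the dependency relationships are known, impact assessment and fault isolation become graph queries rather than hand-written rules.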
And to make this more advanced fault management meaningful for network operators, assurance visualization will help by providing intuitive views for easily understanding how a multitude of events and key quality indicators (KQIs) relate to each other, with clear visibility into the root cause of problems. It will also give operators an insightful understanding of the timeline of events and state changes in the network, for a better indication of cause and possible effects.
This blog is the second in a series that discusses the evolution of network and service assurance. The first blog gives a general overview of how network operations tools can be made more efficient.
ASSURING THE EVER-CHANGING STATE OF THE VIRTUAL NETWORK
VNF configurations will be far more dynamic than those of physical network elements (PNFs), presenting new challenges for network operations tools, which must keep pace with the many events related to highly dynamic network state changes and elastic scaling.
Manual processes that piece together assurance data from disparate views will not be sufficient to keep pace in this highly dynamic NFV environment. And traditional real-time-only monitoring and assurance views will not be effective when a VNF could be here one moment and scaled down and gone the next. This means that both current and historical events and state information need to be intelligently processed with near real-time performance, and at large scale.
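As a hypothetical illustration of that point, the following Python sketch keeps both current and historical events so that a VNF that has already been scaled down and removed remains queryable by time window (all identifiers are invented for the example):

```python
import bisect
from collections import defaultdict

class VnfEventHistory:
    """Illustrative store of current and historical VNF events.

    Unlike a real-time-only view, events survive the VNF instance itself,
    so an operator can still investigate a problem after elastic scaling
    has removed the instance.
    """
    def __init__(self):
        self._by_vnf = defaultdict(list)  # vnf_id -> sorted [(ts, event), ...]

    def record(self, vnf_id, ts, event):
        # Keep each VNF's event list sorted by timestamp on insert.
        bisect.insort(self._by_vnf[vnf_id], (ts, event))

    def window(self, vnf_id, start, end):
        """Events for a VNF in [start, end], even if the VNF is gone."""
        rows = self._by_vnf.get(vnf_id, [])
        lo = bisect.bisect_left(rows, (start, ""))
        hi = bisect.bisect_right(rows, (end, chr(0x10FFFF)))
        return rows[lo:hi]

h = VnfEventHistory()
h.record("vnf-42", 10, "scaled-out")
h.record("vnf-42", 20, "packet-loss TCA")
h.record("vnf-42", 30, "scaled-in (instance removed)")
print(h.window("vnf-42", 15, 35))
```

A production system would need large-scale, near real-time storage rather than an in-memory dict, but the queryable history is the essential capability.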
Consider how much more meaningful it would be for network operators if assurance views could be made more intuitive for easily understanding how all the network events and MANO related KQIs relate to each other. For example, wouldn’t it be more insightful for operators troubleshooting a service performance issue to have a timeline that shows the service impacting threshold crossing alerts (TCAs) as well as whether orchestration or network events occurred in the same general timeframe?
ENHANCING NFV ASSURANCE WITH SERVICE QUALITY METRICS
As VNF deployments increase, network operations tools will need to evolve with new NFV service quality metric definitions and provide intelligence for correlating the multitude of different events coming from the various types of NFV infrastructure and MANO elements. Specifically related to troubleshooting and root-cause analysis that works in coordination with VNF lifecycle management, operators need service aware visibility and traceability to the various possible service quality impacting layers.
For operations to be effective in a highly dynamic environment with network services that depend on both VNFs and PNFs for underlying network infrastructure, there must be a service aware understanding of the relationships between services and these VNFs and PNFs. And equally important, there also must be a mapping of how service quality events triggered by virtual machines, VNFs, and orchestration layers impact or trigger changes in dependent layers.
For example, when there are issues with virtual network provisioning latency or reliability or diversity compliance, these conditions may trigger actions within the orchestration layer. But as a primary concern of network operators:
Without a network operations tool that can provide this type of intelligence for assuring VNFs, operators will not have the visibility needed to understand whether a problem is within the scope of their control. And this type of information would not only be highly valuable for troubleshooting, but even more broadly for clarifying accountability for a localized problem across organizational groups, from IT to the different network domain groups.
Operators require a unified network operations tool that has evolved with the intelligence to meet all of these new NFV related assurance challenges. This tool must possess a service aware model that is unified with NFV lifecycle management. It must scale and perform to keep pace with tracking huge volumes of events that reflect the continual state of flux of change across service quality impacting layers. (For more examples of service quality metrics that provide requirements for assuring virtual networks, please refer to the ETSI specification for defining NFV Service Quality Metrics.)
EVOLVING ASSURANCE WITH ADVANCED FAULT MANAGEMENT
Operators deploying NFV require advanced fault management that provides both current and historical visibility for root-cause analysis, so that active faults can be correlated with past ones as the state of the network changes. This historical fault correlation is essential for pinpointing the root cause of problems in the highly dynamic virtualized network, where MANO-triggered corrective actions could potentially make intermittently recurring customer-impacting issues difficult to investigate.
And network and service assurance tools in the cloud/NFV era must scale to track the full history of related service-impacting events so network operators can perform both real-time troubleshooting and trend analysis.
Tools also need the intelligence to detect recurring problems. Specifically, operators require a tool that can help them assess whether automated corrective resolutions are succeeding or failing. And if failing, whether the failures are persistent or intermittent, and whether there is an actionable probable cause in the network infrastructure within the scope of the network operator’s control. Amid the high volumes of events, there will also be a need to suppress (or filter out) events that do not require action by the network operations team.
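One possible shape of such recurrence detection is sketched below: a purely hypothetical heuristic (the window size and threshold are invented for illustration) that classifies the recent outcomes of an automated corrective action so an operator knows whether to escalate:

```python
from collections import Counter

def classify_recurrence(outcomes, window=5, persist_threshold=3):
    """Classify automated corrective-action outcomes (hypothetical heuristic).

    `outcomes` is a chronological list of 'ok' / 'fail' results for the
    same corrective action. Looks only at the most recent `window` results
    and returns 'resolved', 'intermittent', or 'persistent'.
    """
    recent = outcomes[-window:]
    fails = Counter(recent)["fail"]
    if fails == 0:
        return "resolved"
    if fails >= persist_threshold:
        return "persistent"
    return "intermittent"

print(classify_recurrence(["fail", "fail", "ok", "ok", "ok", "ok", "ok"]))   # resolved
print(classify_recurrence(["ok", "fail", "ok", "fail", "ok"]))               # intermittent
print(classify_recurrence(["fail", "fail", "fail", "ok", "fail"]))           # persistent
```

A "persistent" result is the signal that self-healing is not working and a human, or a different corrective policy, is needed.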
The following video demo offers a deeper dive into an advanced fault management application from Alcatel-Lucent.
RELATED MATERIALS
OpenStack isn’t an as-is solution for telco network functions virtualization (NFV) infrastructures. OpenStack is an open-source cloud management technology that provides many of the capabilities needed in any NFV environment. And this has prompted interest among many telco service providers.
But to realize the full benefits of NFV, service providers need NFV platforms that provide additional capabilities to support distributed clouds, enhanced network control, lifecycle management, and high performance data planes.
The OpenStack/NFV backstory
In 2010, Rackspace® and NASA jointly launched OpenStack®, an open-source cloud computing platform. Since then, the OpenStack community has gained tremendous momentum, with over 200 member companies.
Originally, OpenStack was not designed with carrier requirements in mind. So in 2012, a group of major telecommunication service providers founded an initiative to apply virtualization and cloud principles to the telecommunications domain.
The term network functions virtualization was coined for this initiative. Service providers called for vendors to build virtualized network functions (VNFs) and NFV platforms to help them become more agile in delivering services, and to reduce equipment and operational cost.
To address identified gaps in OpenStack and other relevant open source projects, major industry players established the Open Platform for NFV (OPNFV) in September 2014 as a Linux Foundation Collaborative Project. The intention is to create a carrier-grade, open source reference platform for NFV. Industry peers will build this platform together to evolve NFV and to ensure consistency, performance, and interoperability among multiple open source components.
There are five main areas in which OpenStack is currently lacking as a solution for telco NFV environments:
1. Distribution
In the IT world, enterprises want to consolidate their datacenters to reduce costs. But this is not always the best choice for NFV. Many NFV applications require a real-time response with low latency. NFV applications also need to be highly available and survive disasters. Service providers need the flexibility to deploy network functions in a distributed infrastructure — at the network core, metro area, access, and possibly even a customer’s premises.
Figure 1. Distributed NFV infrastructure
OpenStack supports Cells, Regions, and Availability Zones, but these concepts are not sufficient for the needs of NFV. Each OpenStack Region provides separate API endpoints, with no coordination between Regions. Typically, one or more Regions are located in one datacenter. The Cells component provides a single API endpoint that aggregates multiple cells.
With Cells, workload placement (“scheduling”) across cells is by explicit specification or by random selection. The Cells component doesn’t have a placement algorithm that is able to choose the best location based on the needs of the application.
The Horizon GUI is restricted to a single region at a time. There is no GUI able to show an aggregated view of the NFV cloud infrastructure. The OpenStack Glance virtual machine image manager is also limited to a single region. This means that the NFV operator would have to deploy images manually to the regions needed.
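For illustration only, the following Python sketch shows the kind of constraint-aware placement logic that Cells lacks: choosing the lowest-latency location that still has capacity for the workload. This is not an OpenStack component; all region names and numbers are invented:

```python
def place_vnf(regions, required_capacity, max_latency_ms):
    """Pick the best region for a VNF (illustrative sketch only).

    Cells places workloads by explicit specification or at random; this
    sketch instead prefers the lowest-latency region that still has
    enough free capacity, which is the gap described above.

    `regions` maps region name -> {'free': cores, 'latency_ms': ...}.
    """
    candidates = [
        (info["latency_ms"], name)
        for name, info in regions.items()
        if info["free"] >= required_capacity
        and info["latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        raise RuntimeError("no region satisfies the placement constraints")
    return min(candidates)[1]  # lowest latency wins

regions = {
    "core-dc":    {"free": 128, "latency_ms": 40},
    "metro-dc":   {"free": 16,  "latency_ms": 8},
    "access-pop": {"free": 4,   "latency_ms": 2},
}
print(place_vnf(regions, required_capacity=8, max_latency_ms=10))  # metro-dc
```

A real NFV scheduler would weigh many more factors (affinity, disaster resiliency, license constraints), but even this toy version shows why placement needs application requirements as input rather than random selection.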
Bottom line: Service providers need a platform that will deal efficiently with the distributed NFV infrastructure necessary for low signal latencies and disaster resiliency. This infrastructure must also be manageable as a single distributed cloud with global views, statistics, and policies.
2. Networking
VNFs vary widely in their network demands. Because they are distributed throughout an NFV infrastructure, the baseline requirement for an NFV network is connectivity, both within datacenters and across WANs. Security dictates that different network functions should only be connected to each other if they need to exchange data, and the NFV control, data, and management traffic should be separated.
As network functions are decomposed – for example into data plane components and a centralized control plane component – network connectivity between these components needs to remain as highly reliable as traditional integrated architectures. Sufficient network resources should be available to ensure surging traffic from other applications cannot adversely affect NFV applications.
The network should be resilient against equipment failures and force majeure disasters. Latency and jitter requirements vary from hundreds of milliseconds for some control and management systems, to single digit milliseconds for mobile gateways and cloud radio access networks.
NFV networks will typically consist of a semi-static physical infrastructure, along with a much more dynamic overlay network layer to address the needs of VNFs. The overlay layer needs to respond quickly to factors such as changing service demands and new service deployments.
OpenStack Neutron is the OpenStack networking component offering abstractions, such as Layer 2 and Layer 3 networks, subnets, IP addresses, and virtual middleboxes. Neutron has a plugin-based architecture. Networking requests to Neutron are forwarded to the Neutron plugin installed to handle the specifics of the present network. Neutron is limited to a single space of network resources typically associated with an OpenStack region. It is unable to directly federate multiple network domains and manage WAN capabilities.
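As a rough sketch of the Neutron abstractions just mentioned, the snippet below builds the JSON request bodies used by the Neutron v2.0 REST API to create a Layer 2 network and attach a subnet to it. The payload shapes follow the Neutron API reference; no call is actually made, and "NET_ID" is a placeholder for the ID Neutron would return:

```python
def network_request(name):
    """Body for POST /v2.0/networks (shape per the Neutron API reference)."""
    return {"network": {"name": name, "admin_state_up": True}}

def subnet_request(network_id, cidr, ip_version=4):
    """Body for POST /v2.0/subnets; ties an IP subnet to an L2 network."""
    return {"subnet": {
        "network_id": network_id,
        "cidr": cidr,
        "ip_version": ip_version,
    }}

net_body = network_request("vnf-data-net")
sub_body = subnet_request("NET_ID", "10.10.0.0/24")
print(net_body)
print(sub_body)
```

These requests go to a single region's Neutron endpoint, which illustrates the limitation above: federating such calls across multiple network domains and the WAN is left to the operator.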
Bottom line: Service providers need a platform that will set up and manage local- and wide-area network (LAN and WAN) structures needed for carrier applications in a programmable manner.
3. Automated lifecycle management
One of the greatest advantages of NFV as a software-based solution is its ability to automate operational processes. This includes the application lifecycle, from deployment to monitoring, scaling, healing and upgrading, all the way to phase out. Studies have shown that this automation will allow service providers to reduce operational expenses (OPEX) by more than 50 percent in some cases.
OpenStack Heat allows users to write templates that describe virtual applications (“stacks”) in terms of their component resources, such as virtual machines, and can include nested stacks. Originally, Heat templates were based on AWS™ CloudFormation™, but more recently Heat Orchestration Templates (HOT) have been introduced that offer additional expressive power. Heat focuses on defining and deploying application stacks but does not explicitly support other lifecycle phases.
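For readers unfamiliar with HOT, here is the smallest useful stack definition, expressed as a Python dict for illustration; in practice HOT is written in YAML, and the image and flavor names below are placeholders rather than real artifacts:

```python
import json

# Minimal HOT stack: one Nova server. "heat_template_version" and the
# "OS::Nova::Server" resource type are standard HOT/Heat names; the
# image and flavor values are placeholders for this example.
minimal_hot = {
    "heat_template_version": "2013-05-23",
    "description": "Smallest useful stack: one virtual machine.",
    "resources": {
        "vnf_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "vnf-image",   # placeholder image name
                "flavor": "m1.small",   # placeholder flavor name
            },
        },
    },
}

print(json.dumps(minimal_hot, indent=2))
```

Deploying this template creates the VM; but as noted above, monitoring, scaling, healing, upgrading, and phase-out of that VM fall outside what Heat itself expresses.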
OpenStack Solum is a new project designed to make cloud services easier to consume and integrate into the development process. It is being designed to provide some of the missing lifecycle automation functions. There is some initial work on auto-scaling by combining the measurement capabilities of OpenStack Ceilometer with Heat. Heat is currently limited to a single OpenStack region.
Bottom line: Service providers need a platform that will automate not only deployment and scaling but also many other lifecycle operations of complex carrier applications with many component functions.
4. NFV infrastructure operations
The distribution of NFV infrastructures across many locations in a service provider’s network – as opposed to a few centralized locations – will pose specific challenges and impact the operational processes and support systems. NFV’s distributed infrastructure means that cloud nodes at different locations are added, upgraded, and/or removed more frequently than in a centralized cloud. These processes should be performed remotely whenever possible to avoid truck rolls across the coverage area.
OpenStack TripleO (OpenStack on OpenStack) is an experimental addition to the OpenStack family. The project aims at automating the installation, upgrade and operation of OpenStack clouds using OpenStack’s own cloud facilities. TripleO uses Heat to deploy an OpenStack instance on top of a bare-metal infrastructure.
Bottom line: Service providers need a platform specifically designed for a distributed NFV infrastructure, one that automates the complex software stack deployment and upgrade procedures.
5. High-performance data plane
Many carrier network functions (e.g., deep packet inspection, media gateways, session border controllers, and mobile core serving gateways and packet data network gateways) are currently implemented on special-purpose hardware to achieve high packet processing and input/output throughput. Running those functions on current off-the-shelf servers with current hypervisors can lead to a 10-fold performance degradation.
The industry is currently working on new technologies that have the potential to improve data plane performance on commercial off-the-shelf servers, in some cases to nearly the levels of special-purpose hardware.
Data plane performance, however, has been a fringe activity in the OpenStack community. Only recently, with the Juno release, has more focus been put on data plane acceleration. Juno offers support for requesting virtual machine access to Intel®’s Single Root I/O Virtualization (SR-IOV) technology.
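As one concrete example, a VM requests an SR-IOV virtual function through the Neutron port attribute binding:vnic_type. The sketch below just builds the request body per the Neutron API; no call is made, and "NET_ID" is a placeholder:

```python
def sriov_port_request(network_id):
    """Body for POST /v2.0/ports asking for an SR-IOV passthrough VF.

    'binding:vnic_type': 'direct' requests a hardware virtual function
    instead of a normal software vSwitch port, bypassing the hypervisor
    networking stack for data plane performance.
    """
    return {"port": {
        "network_id": network_id,
        "binding:vnic_type": "direct",
    }}

print(sriov_port_request("NET_ID"))
```

The trade-off is that passthrough ports bypass the software overlay, so features such as security groups and live migration may be constrained; that tension is part of why data plane acceleration has taken time to land in OpenStack.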
Bottom line: Service providers need a platform that will manage high-performance data plane network functions on commercial off-the-shelf servers.
Beyond OpenStack: What’s needed to make NFV work today?
Most service providers around the globe are looking for an open and multi-vendor NFV platform based on OpenStack. But as discussed, the OpenStack community is not strongly focused on some key NFV requirements. What’s missing is an NFV platform that goes beyond the scope of OpenStack to help customers realize reductions in CAPEX and OPEX, and improved service agility.
OpenStack is still under heavy development in many areas. As it matures, OpenStack will become more stable and richer in functionality, allowing it to better meet NFV requirements in certain areas. However, it is not expected to meet all requirements.
Service providers need a horizontal NFV platform that provides:
This approach will make it possible to break open today’s multiple application silos.
This article is based on the Alcatel-Lucent/Red Hat white paper CloudBand with OpenStack as NFV Platform.
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.
One of the things that will characterize 2015 is a trend that started picking up momentum in 2014: communications service providers (CSPs) have developed a sense of urgency about transforming their networks. It used to be that a network operator could invest with some level of assurance that the hardware, and the software that ran it, would be core to the network for possibly decades before becoming obsolete. However, as everyone in the industry knows, this is no longer the case.
As the world becomes more software-centric in terms of service creation, delivery, agility, security and performance, network operators must handle the tsunami of data heading their way while maintaining their relevance as ecosystem hubs rather than “dumb pipe” providers. Cost-efficient, effective operational excellence, and the need to be fast to market and fast in the market with innovative services and enhanced customer experiences, have become paramount. It is why so much attention is being paid to things like Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).
The need for speed has become (pardon the turn of phrase) hyper-critical. However, with recognition of the need to transform rapidly should also come the recognition that network operators cannot transform rapidly and successfully on their own. It may not “take a village” to get transformations into the fast lane and done right, but it certainly takes trusted partners. In fact, Olivier Gueret, Senior Marketing Manager Wireless Transmission at Alcatel-Lucent, in a recent TechZine article, Rely on partners for your network transformation, makes a nice case for the vital role partners can play in helping develop and expedite successful network transformations.
In fact, Gueret explains why professional services in particular are important in network transformation projects for a variety of reasons, including filling skills gaps and bringing experience with all of the complexities of such projects. After all, from my own observations, network transformations are like trying to change jet engines while the plane is at 30,000 feet. They are extremely complicated, especially since every customer is unique, and the plane needs to stay in the air and perform at optimal levels even as parts are replaced. There are also interesting challenges regarding the costs of change and how to demonstrate that the ends justify the means.
Gueret, in his posting, posits the case made above, i.e., network transformation is no longer a nicety; it is a necessity. He goes on to highlight that this really is a case of different strokes for different folks. In fact, he points to a recent Ovum study showing that, when it comes to the reasons to transform, operators are divided into two camps:
As he notes, while both camps share the same goal of transforming their network to increase revenues and reduce OPEX, they certainly diverge as to how. This can lead operators into traps that reliance on a trusted partner with deep network transformation expertise can help mitigate.
Gueret also points out the hidden costs of "home-made" network transformations. For an operator going it alone, these include: costs of unexpected delays caused by poor planning and sequencing; costs from poor-quality assessments of infrastructure capabilities; and costs from over-dimensioning, e.g., spending on capacity that will not be used or cannot be optimized.
The case for relying on a trusted partner
As Gueret details, the case for relying on a trusted professional services partner is a compelling one. He notes that such a partner "can define, plan and execute a transformation efficiently, even if most operators have in-house competencies to do it themselves."
The benefits he cites are:
The article goes on to point out how professional services are part of a broader set of capabilities for upgrading network infrastructure, and that partnering on a variety of fronts can enable operators, regardless of where they are coming from, to shift their business models. This means relying on a variety of trusted partners not only to prepare and execute their network transformation but also to manage and maintain their networks.
This would let operators shift their business model to focus on their core activity: managing their commercial offers and their customers. This is a reality that is summed up well in the chart below from the posting.
Figure 3. Enabling operators to focus on customer-facing activities
The message is a powerful one. The urgency is there for operators to transform their networks for a host of well-known reasons relating to operating costs and competitive necessity. Despite a cultural history of doing almost everything themselves, network operators that rely on the expertise of others have the opportunity to meet their cost objectives and concentrate on what they do best. That means not just listening to the voice of the customer but truly hearing it, and reacting quickly in ways that encourage loyalty and the willingness to trust the operator when evaluating the purchase of new products and services.
It feels like it was just a few months ago that you could read articles in the trade press lumping SDN and NFV together, with NFV cast as a form of SDN or vice versa. Yes, both are somehow about virtualization and about converting hardware into software. Today, after numerous proofs of concept run by service providers around the globe, we know that SDN is virtually indispensable for NFV solutions that aspire to deliver the kind of agility and operational simplification we all expect from NFV. Only SDN can deliver quickly enough the (virtual) networks needed for newly deployed network functions. Alcatel-Lucent recently demonstrated a complete virtual evolved packet core (vEPC), including virtual IMS/VoLTE, deployed in less than 30 minutes.
NFV and SDN enable on-demand service composition by steering traffic through a sequence of middle-box service functions (service function chaining), such as firewalls and traffic optimization. For example, an enterprise or consumer customer can use a self-service portal to check off the desired functions, which causes virtual network functions to be deployed or scaled and (per-subscriber) routing policies to be changed automatically (flow-through provisioning).
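The flow-through provisioning described above can be sketched in a few lines of code. This is a minimal illustration only: the catalog entries, function and class names are hypothetical and do not correspond to any real NFV orchestrator API. The idea is that a single portal request triggers both VNF instantiation and a per-subscriber steering policy, with no manual handoff in between.

```python
from dataclasses import dataclass, field

# Hypothetical catalog of middle-box functions a customer can check off
# on the self-service portal.
CATALOG = {"firewall", "nat", "traffic_optimizer", "parental_control"}

@dataclass
class ServiceChain:
    subscriber: str
    functions: list                       # ordered middle-box functions
    deployed: list = field(default_factory=list)   # instantiated VNFs
    policy: list = field(default_factory=list)     # SDN steering rules

def provision(subscriber, requested):
    """Flow-through provisioning: validate the portal request, 'deploy'
    each selected VNF, then install a per-subscriber steering policy."""
    unknown = [f for f in requested if f not in CATALOG]
    if unknown:
        raise ValueError(f"unknown functions: {unknown}")
    chain = ServiceChain(subscriber, list(requested))
    for vnf in chain.functions:
        # A real platform would call the orchestrator here to instantiate
        # or scale the VNF; this sketch just records the action.
        chain.deployed.append(f"{vnf}-instance-for-{subscriber}")
    # SDN controller step: steer this subscriber's traffic through the
    # selected functions in order (the service function chain).
    chain.policy = [f"steer {subscriber} -> {vnf}" for vnf in chain.functions]
    return chain

chain = provision("enterprise-42", ["firewall", "traffic_optimizer"])
```

Note that ordering matters: the steering policy preserves the sequence of functions the customer selected, which is what distinguishes service function chaining from merely enabling a set of features.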
Likewise, NFV responds to changing traffic within minutes by spinning up additional virtual machines, not only within the same data center but also in a data center close to where the traffic demand originates. NFV enables rapid software upgrades while containing the risk of service degradation. We are even seeing demand on the horizon for adopting DevOps models in the telco domain.
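The scaling behavior just described can be reduced to a simple per-data-center capacity calculation. The sketch below is illustrative, not drawn from any vendor implementation; the threshold model (a fixed per-VM capacity) is an assumption made for clarity. Because load is evaluated per data center, extra instances are placed where the demand actually originates.

```python
def scale_decision(load_per_dc, capacity_per_vm, instances):
    """Decide scale-out/scale-in actions per data center so that each DC
    has enough VM instances for its locally measured load.

    load_per_dc:   dict of data center name -> measured load (requests/s)
    capacity_per_vm: load one VM instance can handle
    instances:     dict of data center name -> currently running instances
    """
    actions = {}
    for dc, load in load_per_dc.items():
        # Ceiling division: instances needed to absorb this DC's load,
        # keeping at least one instance alive per data center.
        needed = max(1, -(-load // capacity_per_vm))
        current = instances.get(dc, 0)
        if needed > current:
            actions[dc] = ("scale_out", needed - current)
        elif needed < current:
            actions[dc] = ("scale_in", current - needed)
    return actions

# Example: Paris load has grown past what two instances can serve,
# while the New York footprint is already correctly sized.
actions = scale_decision({"paris": 950, "nyc": 100},
                         capacity_per_vm=300,
                         instances={"paris": 2, "nyc": 1})
```

In practice the decision loop would also honor cool-down timers and availability constraints, but the core logic, measure locally and act locally, is what lets NFV respond within minutes rather than procurement cycles.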
A classical operational model, with change requests being sent to the networking department, is no longer up to the task. The network needs to be as dynamic as the server infrastructure, and it is clear that only SDN can fill the bill. This will be a stepwise process, and not just any SDN will be suitable for NFV. Telco networks are not simply about packets dropping in on one side and popping back out at their destinations. They are designed to deliver enough capacity, performance, security and high availability for the critical services running over them in an end-to-end, geo-distributed environment.
Clearly, SDN is right for NFV, but it needs to be the right SDN. Read the white paper "The right SDN is right for NFV" to learn about critical network requirements for NFV, SDN use cases, and four stages of SDN integration into NFV, each bringing a different degree of reward to service providers. Alcatel-Lucent CloudBand™ and Nuage Networks® VSP are discussed as an example of an integrated SDN/NFV solution.
The advantages to mobile operators of network functions virtualization (NFV) and moving to a virtualized evolved packet core (vEPC) have become clear, and mobile network operators are pretty much sold on the technology in theory.
As the technology side has been figured out and operators begin to plan commercial deployments of NFV and vEPC, however, discussion is starting to move toward operational requirements and challenges. Mobile network operators need to figure out how best to manage these new virtual network functions (VNFs) and the NFV infrastructure, and also how to modify the existing network operations model when these VNFs are deployed.
“These are understandable concerns since clearly there will be additional operational issues when this NFV-MANO [management and orchestration] network architecture is deployed,” noted Keith Allan, Director IP Mobile Core Product Strategy, Alcatel-Lucent, in a recent TechZine posting, vEPC: How to achieve operational elegance.
There are a number of new functional blocks and data repositories that come with this new model, including the MANO functions themselves, vEPC VNFs, element and network management systems (EMS/NMS), operational and business support systems (OSS/BSS), and NFV infrastructure.
For Allan, however, these concerns are real but solutions also exist for mobile network operators to deal with them.
Existing EMS/NMS can be combined with an integrated NFV/SDN management solution, enabling mobile operators to address NFV operational challenges while still managing the existing purpose-built, product-based network using their current OSS/BSS, according to Allan.
This combined system, which the Alcatel-Lucent business unit Nuage Networks and the Alcatel-Lucent CloudBand team are developing, enables workflow automation with push-button VNF instantiation and elasticity, automates service chaining via SDN, and brings network function orchestration to coordinate multiple virtual and physical network functions.
This is done by dividing management into three well-established domains: virtual machine orchestration and VNF/VNFC life cycle management, network connectivity orchestration, and network function orchestration.
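The division of labor across those three domains can be sketched as three cooperating components, with the function orchestrator coordinating the other two. All class and method names here are illustrative stand-ins, not part of any CloudBand or Nuage Networks API; the point is the layering, where each domain has a narrow responsibility and a clean interface to the layer above.

```python
class VMOrchestrator:
    """Domain 1: virtual machine orchestration and VNF/VNFC life cycle
    management (instantiate, scale, terminate)."""
    def instantiate(self, vnf):
        # A real implementation would call the cloud platform's API.
        return f"vm-for-{vnf}"

class ConnectivityOrchestrator:
    """Domain 2: network connectivity orchestration, i.e. the SDN layer
    that wires the instantiated VMs together."""
    def connect(self, vms):
        # Link consecutive VMs into a chain.
        return [f"link:{a}<->{b}" for a, b in zip(vms, vms[1:])]

class FunctionOrchestrator:
    """Domain 3: network function orchestration, coordinating VM life
    cycle and connectivity to deliver an end-to-end function chain."""
    def __init__(self):
        self.vm = VMOrchestrator()
        self.net = ConnectivityOrchestrator()

    def deploy_chain(self, vnfs):
        vms = [self.vm.instantiate(v) for v in vnfs]
        links = self.net.connect(vms)
        return {"vms": vms, "links": links}

# Deploy a two-element (hypothetical) EPC chain.
result = FunctionOrchestrator().deploy_chain(["sgw", "pgw"])
```

Keeping the domains separate is what makes the model operationally elegant: the OSS/BSS talks to one coordinator, while VM placement and SDN wiring each evolve behind their own interface.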
“This combined element and network management solution for NFV/SDN delivers the operational elegance that mobile operators need to reduce complexity,” noted Allan in his blog post, “and it opens the door for innovation to provide new services through automation.”
As operators move from testing to commercial rollout, such solutions will increasingly rise in importance.