The cloud era is here. Do you think your network is ready? As a network operator, you will need to deliver on-demand network services that are just as dynamic as the cloud services that now dominate network traffic. You face many challenges in making this happen.
But a new study from ACG Research shows you can achieve this quickly and profitably with advancements that are available now. Their analysis of the new Alcatel-Lucent Network Services Platform in a national network scenario showed you can cut service creation time, generate more revenue, and achieve significant ROI very quickly.
Network complexity & waste block profitability
So what’s stopping you with the present mode of operation? Complexity and waste are getting in the way of your profitability.
The business processes used to plan, build, and operate network infrastructure involve manual handoffs between the network engineering processes that control network resources and the network operations processes that provision services. Each is further divided into separate packet and transport silos. OSSs/IT and element management systems are forced to interoperate with the network through multiple, complex, and vendor-specific APIs.
The impact of these limitations can be crippling as operators make the transition from static to dynamic network services.
Carrier SDN: Automate & optimize for true freedom
Carrier SDN offers a fresh way forward. The NSP leverages Carrier SDN to unify service automation and network optimization in one integrated platform. The result is that network operators can deliver dynamic services quickly, efficiently, and at great scale.
The NSP accomplishes this by:
ACG Research test results
To put the NSP to the test, ACG Research compared an NSP-enabled national network with one using the present mode of operation (PMO), each delivering bandwidth calendaring and bandwidth-on-demand services to a target market of 10,000 large enterprises.
ACG found the following:
Operators can achieve the dramatic increase in revenue with the NSP, compared to the PMO, because it can improve capacity utilization by 40 percent. This utilization improvement enables profitable operation at a price point that is 29 percent lower than the PMO price point. This, in turn, stimulates demand relative to what is possible using the PMO. The NSP’s 58 percent faster service creation time also provides a first-mover advantage and advances revenue recognition.
To be as dynamic as the cloud services that now dominate network traffic, you will need to:
We believe the NSP can help you make this happen. Download the Carrier SDN business case and register for our upcoming webinar series to find out how you can cut service creation time and grow revenue.
To keep pace with the incredible rate of innovation and change driving major network transformation by enterprises and service providers, it is always a good idea to review the postings of those on the front lines. That is why the recent blog by Marten Hauville, Principal Solutions Architect (ANZ) for cloud networking specialist Alcatel-Lucent’s Nuage Networks business unit and co-organizer of the Australian OpenStack User Group, caught my attention.
In his blog, Hauville raises and answers a timely question: “What’s up with the data center network?”
The reason this is so important is as Hauville notes, “We are in the midst of a transition in IT. Over the last couple of years the cloud has morphed from a disruptive technology on the periphery of IT into the mainstream.” In short, the world is going cloud and data center-centric.
Of the three pillars—Compute, Storage and Network—that are the foundation of the move to a data center-centric, software-defined and controlled, applications-based world, the network has historically been a laggard in transitioning to next-generation capabilities. However, as Hauville explains, this is no longer the case. Indeed, thanks to Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), the pace of innovation and adoption of cloud-centric transformations is accelerating. Hence, the question of what’s up with the data center network is so relevant.
Hauville starts with the assertion that: “Business competitive advantage these days is dictated by swiftness and agility, increasingly around business-driven applications that attain this advantage in the marketplace. This new edge is being pushed hard by enterprises that are adopting web-scale capabilities through software, drawing them into their inherent business products and practices.” He goes on to cite chapter and verse about how and why “Cloud IT” has become literally mission critical for enterprises in Australia and New Zealand.
Having made the case for Cloud IT, Hauville poses the question of how to enable the cloud to drive greater agility across the whole business. The answer is transforming the data center network. Yet, as he notes, the network presents some interesting challenges. In fact, he says the inability of the network to keep pace with compute and storage has led to a situation that “limits the overall efficiencies businesses could achieve from both their virtualization and initial private cloud investments.”
Cracking the network constraint challenge
What really caught my attention was the following statement by Hauville that: “This fundamental network constraint is not caused by the hardware capacities or bandwidth of the network. Far from it. The capacity and speed aspects of data centre networking have tracked well ahead of compute power with the availability and density of 10Gigabit, 40Gigabit and even 100Gigabit. The issue is due to limited evolution in the management, configuration and dynamism of these networks.”
I will not spoil why I have bookmarked the blog as a must-reread reference, but Hauville explains how adding next-generation management and configuration, i.e., orchestration and control, can bring out the maximum value of all the other technology upgrades taking place in data centers. He then goes on to make a very cogent case for SDN implementation as the means for achieving data center operational excellence.
Hauville closes with a caveat worth considering: “So if this future is set, and the underlying technology decision has been made, the key question now is not if you choose SDN but how you choose the right SDN implementation.”
Unfortunately, whether built on traditional or open source solutions, not all SDN implementations are alike. At an even higher level, the caveat should also resonate because not all virtualization initiatives are alike. The fact is that interoperability issues are going to be a major challenge for SDN. They will also be an issue for the NFV solutions that service providers are beginning to implement. It will be fascinating to see how far and how fast solution buyers push vendors to resolve these issues as internetworking, and not just what goes on inside a data center or a federation of networked private cloud data centers, comes to the fore.
Circling back to the question raised at the top about what’s up with data center networking, the answer is in two words: “a lot.” And the caveat to this answer is the same as Hauville’s. Choosing the data center networking transformation technology that is right for your organization is a complicated challenge, since there are options and vendors to be evaluated in the context of your unique requirements. However, such transformations are no longer about if but when, and given how business is changing, a sense of urgency about making the right move should be a driver.
Alcatel-Lucent has developed its Network Services Platform (NSP) as a unified solution for creating agility in delivering network services. NSP brings efficiency and flexibility to the front-end problems of new service creation and the immediately downstream problems of operating those services efficiently and intelligently in a multilayer, multidomain, multivendor network. It does so in a unified and holistically designed solution.
Remarkable gains have been made in the cloud computing community in creating and deploying new services efficiently and at scale. It’s also true that a significant impediment to service delivery is the rigidity of the networks we deploy and the processes used to define and instantiate the services being offered.
A great deal of energy has been expended in recent years to enhance the flexibility of networks. Solutions have begun to appear that address parts of the problem, but to date they have been constrained to a particular function or domain and haven’t actually solved the whole agile service delivery problem for networks.
Until the Alcatel-Lucent NSP.
NSP breaks the OSS/BSS logjam in network service creation with the use of open RESTful APIs northbound for OSS and BSS integration and with use of important data modeling standards and templates for network and service representation. Using these abstractions allows services and networks to be represented once to multiple OSS and BSS applications, eliminating the need to define the same service multiple times to different modules so they can talk to a range of vendors’ platforms.
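To make the northbound idea concrete, here is a minimal sketch of what a RESTful service-creation payload of this kind might look like. The endpoint semantics, field names, and service type are invented for illustration and are not the actual NSP API; the point is that the service is described once, abstractly, rather than per vendor.

```python
import json

def build_service_request(service_name, endpoints, bandwidth_mbps):
    """Build a vendor-neutral service-creation payload once; the platform
    maps it to each vendor's southbound semantics. Field names are
    illustrative, not taken from the real NSP API."""
    return {
        "service": {
            "name": service_name,
            "type": "elan",              # abstract service type, not a vendor construct
            "endpoints": endpoints,      # site identifiers, not vendor-specific ports
            "bandwidth-mbps": bandwidth_mbps,
        }
    }

payload = build_service_request("calendared-vpn-1", ["site-a", "site-b"], 500)
body = json.dumps(payload)  # this JSON body would be POSTed to the northbound API
```

An OSS module would send this one body to the platform instead of defining the same service separately for each vendor's element manager.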
NSP enhances this streamlining by enabling service policies and tenant contexts to be associated with the newly defined services and applied broadly across the target network infrastructure.
As we discovered in the analysis of developing a new bandwidth calendaring service offering in a representative operator case, NSP brings improvements over 50 percent compared to present modes of operation in both calendar time required to define the new service offering and the number of resources needed to define the service in the OSS and BSS contexts.
As the service templates travel southbound they are converted by a versatile mediation engine into the semantics and formats needed to work with each IP/MPLS and optical network platform being managed. This auto-conversion dramatically simplifies and streamlines the provisioning process for the service offerings across network layers, vendors and domains.
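The mediation idea can be sketched as a set of per-platform translators that take one abstract provisioning intent and emit each platform's format. The vendor formats below are invented for illustration; a real mediation engine would target actual device CLIs and APIs.

```python
def to_vendor_config(intent, vendor):
    """Convert one abstract provisioning intent into a per-platform
    representation. Both output formats here are hypothetical."""
    translators = {
        "vendor-a": lambda i: f"service create {i['name']} bw {i['bw']}",  # CLI-style
        "vendor-b": lambda i: {"svc": i["name"], "rate": f"{i['bw']}M"},   # API-style
    }
    return translators[vendor](intent)

intent = {"name": "vpn42", "bw": 100}       # defined once, abstractly
cli_line = to_vendor_config(intent, "vendor-a")
api_body = to_vendor_config(intent, "vendor-b")
```

The operator defines `intent` once; the mediation layer, not the provisioning workflow, absorbs the per-vendor differences.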
Communication southbound with NSP is enabled by support of multiple standard protocols important in the multivendor environment it’s designed for: BGP-LS, PCEP, NETCONF, and SNMP today, with OpenFlow on the horizon for cases where it’s used. Special cases for vendor CLI support are also included to continue the simplification.
On top of protocol versatility, Alcatel-Lucent has integrated functionality derived from thousands of operator deployments in both optical and IP/MPLS layers to enhance NSP’s value. For example, three distinct path computation engines are available in NSP for use as the operator requires: a packet-oriented PCE (PCE-P) for IP/MPLS paths, an optically oriented PCE (PCE-T) for optical paths, and a multilayer PCE (PCE-X) for multilayer path optimization. PCEs are used to define paths in line with service policies at provisioning time, and as operations progress KPIs are monitored in real time to determine if adjustments of any sort are called for.
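At its core, what any PCE does before layering on policy constraints is compute a least-cost path over a topology graph. A toy version of that core, using Dijkstra's algorithm over an invented three-node topology, looks like this:

```python
import heapq

def compute_path(links, src, dst):
    """Least-cost path by additive link cost (Dijkstra). A real PCE adds
    policy constraints (bandwidth, diversity, latency bounds) on top."""
    graph = {}
    for a, b, cost in links:           # build an undirected adjacency list
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None                        # no path exists

# Illustrative topology: the direct a-c link is more expensive than a-b-c.
links = [("a", "b", 1), ("b", "c", 1), ("a", "c", 5)]
cost, path = compute_path(links, "a", "c")
```

The engine picks the two-hop path because its total cost (2) beats the direct link (5); swapping the cost metric for latency or available bandwidth changes the optimization without changing the algorithm.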
Going further, Alcatel-Lucent has incorporated unique and innovative algorithms for resource optimization such as its self-tuned adaptive routing for LSPs that helps the network adapt allocations in real time according to policies and service delivery needs, producing further efficiencies and revenue-generating capacity.
From this profile we can see Alcatel-Lucent is applying its vision and expertise with NSP to deliver a solution that supplies the missing link in solving the wide area network agility problem. Its combination of functions has all the attributes for turning WANs into agile service delivery platforms. It’s a platform that can help turn aspirations into achievements in new service deliveries. It should be a major contributor to many operators improving their networks to become as agile as the cloud.
Paul's work explores transformations under way in SDN, NFV, cloud computing and service orchestration in service provider environments. Use cases from data center to core, metro, access and customer premises are engaged. New architectural developments and implications for vendor and operator designs are analyzed. Syndicated research analyzes market developments, forecasts market sizes, and evaluates market shares of participating vendors in key product categories. Custom research and analysis helps clients evaluate plans related to these transformations, and implement their offerings in the market. Prior to joining ACG Mr. Parker-Johnson led Juniper Networks’ cloud computing solution business enabling end-to-end cloud offerings for service providers and enterprises of multiple sizes and scale.
Rarely does a video about network functions virtualization (NFV) captivate your attention like the one that Alcatel-Lucent recently uploaded about service innovation and lean operations in the context of NFV. Sometimes NFV can be a challenging concept to get your head around, but the video breaks it down with clear visuals and none of the PowerPoint that usually puts you to sleep.
If you haven’t seen the video you can watch the embedded version below. But also let me explain what the video is talking about.
Data center network operations are at the heart of telecommunications service delivery, but until recently nimble operations have been stalled by less than nimble infrastructure. New service creation meant hardware deployments that both demanded up-front investment and limited service flexibility, with deployment times of days, weeks or even months.
NFV finally gets the network as lean and nimble as the virtual machines in the data center, allowing both the virtual servers and the network infrastructure to scale and change virtually as services are created or demand changes.
The Alcatel-Lucent video shows how companies can leverage NFV through the use of its CloudBand orchestration platform that manages network deployment and Nuage Networks’ network orchestration layer that does the network spin-up.
Network service chaining connects these services and can also include third-party infrastructure that works on the platform. With it, operators can launch new services such as content filtering by clicking a few buttons to spin up all the infrastructure components the service needs, such as a WebRTC server.
The demo also shows how this NFV environment handles load variability and hardware failure. When load rises, new virtual machines automatically spin up to meet the extra demand. When demand falls, virtual machines are removed automatically.
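The scale-out/scale-in behavior in the demo can be sketched as a simple threshold policy: add a virtual machine when per-VM load exceeds a high-water mark, remove one when it drops below a low-water mark. The thresholds and load figures below are illustrative, not taken from the video.

```python
def scale(vm_count, total_load, high=80, low=30, min_vms=1):
    """Return the new VM count for the given total load.
    Thresholds are illustrative high/low water marks per VM."""
    per_vm = total_load / vm_count
    if per_vm > high:
        return vm_count + 1            # scale out under heavy load
    if per_vm < low and vm_count > min_vms:
        return vm_count - 1            # scale in when demand falls
    return vm_count                    # steady state

vms = 2
vms = scale(vms, 200)   # 100 per VM exceeds 80, so scale out to 3
vms = scale(vms, 60)    # 20 per VM is below 30, so scale in to 2
```

A real orchestrator would feed measured load into such a policy on a timer, and typically add hysteresis so the count does not oscillate.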
The system also helps maintain high availability. In the demo video, when the operator needed to shift to a second data center after the first one failed, the platform automatically looked for where to set up a new backup to maintain high availability. This search took into account the cost of service creation in various places, weather factors, and other variables that network engineers usually need to consider when making a new deployment. Alcatel-Lucent calls this smart load placement.
Overall, the video is definitely worth a watch. Even if you already know a lot about NFV, seeing it in action is informative.
From original TechZine Article
Metro network transport platforms must be compact, scalable, and agile to conquer the specific challenges of this key portion of the transport network. Growing and shifting traffic in the metro has triggered these challenges.
Today’s cloud-optimized metro network transport platforms “must” be:
Growth in metro networks
Following a long cycle of core network capacity build out, service providers are now challenged by the growth and shift in metro network traffic dynamics.
A recent Bell Labs study reported that the rise of social media and over-the-top video — along with the rapid adoption of mobile broadband — has led to the proliferation of mega data centers. This drives an increase in metro traffic. And it results in more traffic moving within the metro between data centers rather than going out to the backbone.
The study also found that metro traffic will grow almost two times faster than backbone traffic by 2017. So not only is traffic in the metro growing dramatically, but that traffic is diverse, dynamic, and flows much differently than in the backbone.
So, what’s the takeaway from this study? Today’s metro networks increasingly require metro-optimized transport solutions versus adapted core platforms. That is, metro transport is driven by scale, flexibility, and efficiency versus sheer capacity and reach.
The new metro transport network
It is clear that a metro-optimized transport solution must be compact, scalable, and agile. But, what specific capabilities are required?
A metro-optimized transport solution can help maximize revenue and ROI by accelerating services availability/time-to-market and improving network operational efficiency. The key benefits are:
Does your metro network have what it takes?
A high-capacity, packet-optical transport solution with metro-optimized flexibility, size, and power can help maximize revenue generation and ROI in the cloud era. Look for a multiservice solution that delivers graceful pay-as-you-grow scaling with no-compromise distributed switching and agility in a metro-optimized form factor.
Our recent expansion of the Alcatel-Lucent 1830 Photonic Service Switch portfolio can help you meet the challenges of growing and shifting metro traffic demands in the cloud era.
Related Material
Listen to the podcast to learn more.
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
OpenStack isn’t an as-is solution for telco network functions virtualization (NFV) infrastructures. OpenStack is an open-source cloud management technology that provides many of the capabilities needed in any NFV environment. And this has prompted interest among many telco service providers.
But to realize the full benefits of NFV, service providers need NFV platforms that provide additional capabilities to support distributed clouds, enhanced network control, lifecycle management, and high performance data planes.
The OpenStack/NFV backstory
In 2010, Rackspace and NASA jointly launched OpenStack, an open-source cloud computing platform. Since then, the OpenStack community has gained tremendous momentum, with over 200 member companies.
Originally, OpenStack was not designed with carrier requirements in mind. So in 2012, a group of major telecommunication service providers founded an initiative to apply virtualization and cloud principles to the telecommunications domain.
The term network functions virtualization was coined for this initiative. Service providers called for vendors to build virtualized network functions (VNFs) and NFV platforms to help them become more agile in delivering services, and to reduce equipment and operational cost.
To address identified gaps in OpenStack and other relevant open source projects, major industry players established the Open Platform for NFV (OPNFV) in September 2014 as a Linux Foundation Collaborative Project. The intention is to create a carrier-grade, open source reference platform for NFV. Industry peers will build this platform together to evolve NFV and to ensure consistency, performance, and interoperability among multiple open source components.
There are five main areas in which OpenStack is currently lacking as a solution for telco NFV environments:
1. Distribution
In the IT world, enterprises want to consolidate their datacenters to reduce costs. But this is not always the best choice for NFV. Many NFV applications require a real-time response with low latency. NFV applications also need to be highly available and survive disasters. Service providers need the flexibility to deploy network functions in a distributed infrastructure — at the network core, metro area, access, and possibly even a customer’s premises.
Figure 1. Distributed NFV infrastructure
OpenStack supports Cells, Regions, and Availability Zones, but these concepts are not sufficient for the needs of NFV. Each OpenStack Region provides separate API endpoints, with no coordination between Regions. Typically, one or more Regions are located in one datacenter. The Cells component aggregates multiple compute cells behind a single API endpoint.
With Cells, workload placement (“scheduling”) across cells is by explicit specification or by random selection. The Cells component doesn’t have a placement algorithm that is able to choose the best location based on the needs of the application.
The Horizon GUI is restricted to a single region at a time. There is no GUI able to show an aggregated view of the NFV cloud infrastructure. The OpenStack Glance virtual machine image manager is also limited to a single region. This means that the NFV operator would have to deploy images manually to the regions needed.
Bottom line: Service providers need a platform that will deal efficiently with the distributed NFV infrastructure necessary for low signal latencies and disaster resiliency. This infrastructure must also be manageable as a single distributed cloud with global views, statistics, and policies.
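The kind of placement logic the text says Cells lacks can be sketched as a scheduler that scores candidate sites on application needs rather than picking at random. The sites, metrics, and weights below are invented for illustration.

```python
def place(sites, latency_weight=0.7, cost_weight=0.3):
    """Pick the site minimizing a weighted combination of latency and cost.
    Weights are illustrative; a real scheduler would also check capacity,
    affinity, and disaster-resiliency constraints."""
    def score(site):
        return latency_weight * site["latency_ms"] + cost_weight * site["cost"]
    return min(sites, key=score)["name"]

# A latency-sensitive VNF prefers the nearby metro node despite higher cost.
sites = [
    {"name": "core-dc",  "latency_ms": 40, "cost": 10},
    {"name": "metro-dc", "latency_ms": 8,  "cost": 30},
]
best = place(sites)
```

With these weights the metro data center wins (score 14.6 versus 31 for the core); shifting the weights toward cost would flip the decision, which is exactly the policy knob a distributed NFV cloud needs.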
2. Networking
VNFs vary widely in their network demands. Because they are distributed throughout an NFV infrastructure, the baseline requirement for an NFV network is connectivity, both within datacenters and across WANs. Security dictates that different network functions should only be connected to each other if they need to exchange data, and the NFV control, data, and management traffic should be separated.
As network functions are decomposed – for example into data plane components and a centralized control plane component – network connectivity between these components needs to remain as highly reliable as traditional integrated architectures. Sufficient network resources should be available to ensure surging traffic from other applications cannot adversely affect NFV applications.
The network should be resilient against equipment failures and force majeure disasters. Latency and jitter requirements vary from hundreds of milliseconds for some control and management systems, to single digit milliseconds for mobile gateways and cloud radio access networks.
NFV networks will typically consist of a semi-static physical infrastructure, along with a much more dynamic overlay network layer to address the needs of VNFs. The overlay layer needs to respond quickly to factors such as changing service demands and new service deployments.
OpenStack Neutron is the OpenStack networking component, offering abstractions such as Layer 2 and Layer 3 networks, subnets, IP addresses, and virtual middleboxes. Neutron has a plugin-based architecture: networking requests to Neutron are forwarded to the plugin installed to handle the specifics of the underlying network. Neutron is limited to a single pool of network resources, typically associated with one OpenStack region, and is unable to directly federate multiple network domains or manage WAN capabilities.
Bottom line: Service providers need a platform that will set up and manage the local- and wide-area network (LAN and WAN) structures needed for carrier applications in a programmable manner.
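For reference, the Neutron abstractions mentioned above are driven through its v2.0 REST API. The sketch below builds the request bodies for creating a network and a subnet; in a real deployment these would be POSTed to the Neutron endpoint with a Keystone auth token, which is omitted here.

```python
import json

def network_create_body(name, admin_state_up=True):
    """Request body for POST /v2.0/networks (Neutron v2.0 API)."""
    return json.dumps({"network": {"name": name,
                                   "admin_state_up": admin_state_up}})

def subnet_create_body(network_id, cidr, ip_version=4):
    """Request body for POST /v2.0/subnets, attaching an address range
    to an existing network."""
    return json.dumps({"subnet": {"network_id": network_id,
                                  "cidr": cidr,
                                  "ip_version": ip_version}})

net_body = network_create_body("vnf-data-plane")
sub_body = subnet_create_body("NET_ID_FROM_RESPONSE", "10.0.0.0/24")
```

Note the single-region scope: these calls address one Neutron instance, which is exactly why federating domains and WANs requires something beyond Neutron itself.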
3. Automated lifecycle management
One of the greatest advantages of NFV as a software-based solution is its ability to automate operational processes. This includes the application lifecycle, from deployment to monitoring, scaling, healing and upgrading, all the way to phase out. Studies have shown that this automation will allow service providers to reduce operational expenses (OPEX) by more than 50 percent in some cases.
OpenStack Heat allows users to write templates to describe virtual applications (“stacks”) in terms of their component resources, such as virtual machines, including nested stacks. Originally, Heat templates were based on AWS CloudFormation, but more recently Heat Orchestration Templates (HOT) have been introduced that offer additional expressive power. Heat focuses on defining and deploying application stacks but does not explicitly support other lifecycle phases.
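A minimal HOT template gives a feel for the declarative style: resources are named, typed, and wired together with intrinsic functions such as `get_resource`. The image and flavor names below are placeholders that would differ per deployment.

```yaml
# Minimal sketch of a Heat Orchestration Template: one server attached to a
# private network. Image and flavor names are illustrative placeholders.
heat_template_version: 2013-05-23
description: One server attached to a private network
resources:
  app_net:
    type: OS::Neutron::Net
    properties:
      name: app-net
  app_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4
      flavor: m1.small
      networks:
        - network: { get_resource: app_net }
```

Heat deploys this stack as a unit, but as noted above, monitoring, healing, and upgrading the stack over its lifetime fall outside Heat's scope.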
OpenStack Solum is a new project designed to make cloud services easier to consume and integrate into the development process. It is being designed to provide some of the missing lifecycle automation functions. There is some initial work on auto-scaling by combining the measurement capabilities of OpenStack Ceilometer with Heat. Heat is currently limited to a single OpenStack region.
Bottom line: Service providers need a platform that will automate not only deployment and scaling but also many other lifecycle operations of complex carrier applications with many component functions.
4. NFV infrastructure operations
The distribution of NFV infrastructures across many locations in a service provider’s network – as opposed to a few centralized locations – will pose specific challenges and impact the operational processes and support systems. NFV’s distributed infrastructure means that cloud nodes at different locations are added, upgraded, and/or removed more frequently than in a centralized cloud. These processes should be performed remotely whenever possible to avoid truck rolls across the coverage area.
OpenStack TripleO (OpenStack on OpenStack) is an experimental addition to the OpenStack family. The project aims at automating the installation, upgrade and operation of OpenStack clouds using OpenStack’s own cloud facilities. TripleO uses Heat to deploy an OpenStack instance on top of a bare-metal infrastructure.
Bottom line: Service providers need a platform specifically designed for a distributed NFV infrastructure, one that automates the complex software stack deployment and upgrade procedures.
5. High-performance data plane
Many carrier network functions (e.g., deep packet inspection, media gateways, session border controllers, and mobile core serving gateways and packet data network gateways) are currently implemented on special-purpose hardware to achieve high packet processing and input/output throughput. Running those functions on current off-the-shelf servers with current hypervisors can lead to a 10-fold performance degradation.
The industry is currently working on new technologies that have the potential to improve data plane performance on commercial off-the-shelf servers, in some cases to nearly the levels of special-purpose hardware.
Data plane performance, however, has been a fringe activity in the OpenStack community. Only recently, with the Juno release, has more focus been put on data plane acceleration. Juno supports giving virtual machines access to Intel’s Single Root I/O Virtualization (SR-IOV) technology.
Bottom line: Service providers need a platform that will manage high-performance data plane network functions on commercial off-the-shelf servers.
Beyond OpenStack: What’s needed to make NFV work today?
Most service providers around the globe are looking for an open and multi-vendor NFV platform based on OpenStack. But as discussed, the OpenStack community is not strongly focused on some key NFV requirements. What’s missing is an NFV platform that goes beyond the scope of OpenStack to help customers realize reductions in CAPEX and OPEX, and improved service agility.
OpenStack is still under heavy development in many areas. As it matures, OpenStack will become more stable and richer in functionality, allowing it to better meet NFV requirements in certain areas. However, it is not expected to meet all requirements.
Service providers need a horizontal NFV platform that provides:
This approach will make it possible to break open today’s multiple application silos.
This article is based on the Alcatel-Lucent/Red Hat white paper CloudBand with OpenStack as NFV Platform.
Forward thinking providers are already concerned that the coming wave of unicast traffic generated by popular on-demand video services will affect the delivery network from end to end. Clarifying the potential impact of these services on the network is vital as the ramifications could be significant.
Growth of unicast
In a traditional cable or IPTV network architecture—broadcast or multicast—traffic is proportional to the number of channels. Beyond a certain range and for a limited channel line-up, adding new subscribers has no traffic impact. Unicast is different. Traffic is directly proportional to the number of devices: more devices beget more traffic.
As illustrated in Figure 1, multicast traffic will flatten as the subscriber base grows, because the likelihood that users are watching all available TV channels increases. Meanwhile, unicast will continue to rise in step with subscriber growth.
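The contrast in Figure 1 can be captured in a toy traffic model: multicast load saturates once every channel has at least one viewer, while unicast load grows linearly with devices. The channel count and per-stream bitrates below are illustrative.

```python
def multicast_mbps(viewers, channels=200, mbps_per_channel=8):
    """Worst case, viewers spread across distinct channels until all are
    covered; after that, adding viewers adds no traffic."""
    active_channels = min(viewers, channels)
    return active_channels * mbps_per_channel

def unicast_mbps(devices, mbps_per_stream=5):
    """One stream per device: traffic is directly proportional to devices."""
    return devices * mbps_per_stream

m_small, m_large = multicast_mbps(100), multicast_mbps(100000)   # flattens
u_small, u_large = unicast_mbps(100), unicast_mbps(100000)       # keeps climbing
```

With these numbers, multicast traffic stops growing at 1,600 Mb/s no matter how many subscribers join, while unicast traffic at 100,000 devices is a thousand times the load at 100 devices.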
Furthermore, knowing that the proliferation of connected devices is progressing rapidly, service providers don’t have the luxury of time. They need to get started on their transformation strategy now. Indeed, a Bell Labs study shows that metro video traffic will increase 720% by 2017.
Figure 1. Multicast and unicast traffic trends related to the number of subscribers
Key considerations
The paradigm shift from multicast to unicast impacts every aspect of the network—from access to the backbone. Figure 2 maps some of the key considerations to the network elements. Let’s look at how pay TV operators that want to offer personalized cloud TV services can re-imagine their network architecture from end to end.
Figure 2. Considerations for network design to support bandwidth demands of unicast traffic
Assess the situation
Before launching network-based time-shifted TV services, pay TV operators should model their cloud DVR solution. This includes identifying the type of services they will offer. Whether it is catch-up TV, restart TV, or personal recording, operators must understand the impact of these unique services on network transformation.
Here are several service characteristics to consider:
Meet the capacity challenge
Volume (hundreds of catch-up TV programs and hundreds of hours of personal recording for a large subscriber base) makes for a significant storage capacity challenge. Multiple storage nodes have to be interconnected within a 10 GigE LAN topology to accommodate petabytes of programming.
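A back-of-envelope sizing shows how quickly petabytes accumulate. The numbers below are illustrative assumptions: per-subscriber private copies (as some regulatory regimes require) at a 5 Mb/s encoding rate.

```python
def cloud_dvr_storage_pb(subscribers, hours_per_sub, mbps=5):
    """Total storage in petabytes for per-subscriber private recordings.
    Encoding rate and recording hours are illustrative assumptions."""
    bytes_per_hour = mbps / 8 * 1e6 * 3600     # Mb/s -> bytes per hour of video
    total_bytes = subscribers * hours_per_sub * bytes_per_hour
    return total_bytes / 1e15                  # bytes -> petabytes

# 1 million subscribers, 100 hours of personal recording each:
pb = cloud_dvr_storage_pb(subscribers=1_000_000, hours_per_sub=100)
```

Under these assumptions the result is 225 PB for personal recordings alone, before catch-up TV assets are counted, which is why the storage design cannot be an afterthought.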
As a result, designing an appropriate solution for scaling data center networks must consider:
Software-defined networking (SDN) will be a fundamental component in this design. SDN is already being used to automate connectivity within virtualized data center infrastructures and can establish connectivity between cloud DVR nodes upon their creation.
Prioritize the traffic
Once the cloud DVR is built, the next step is to feed the unicast streams into the network through an edge router—BNG/BRAS for an IPTV network, or a video router/CMTS in a cable hub architecture. Traditional edge routers were built to support highly oversubscribed, best-effort Internet connectivity. Today, however, they are becoming a bottleneck for increasing unicast video sessions. Consequently, they need to be upgraded or replaced. As traffic is growing, they are also being further distributed in the network.
At the edge, pay TV operators apply quality of service to pay TV traffic delivered to the set-top box (STB) as opposed to over-the-top (OTT) traffic receiving best-effort treatment. TV service to connected devices is often treated like an OTT service.
It’s time to revisit this practice. From the end-user’s point of view, connected devices are increasingly becoming the primary screen. That means best-effort service is no longer enough. This concern is pushing pay TV operators to reconsider how they mark and prioritize the traffic.
Scale the network
Backbone
Traffic growth on the backbone network can be managed using a content delivery network (CDN). The CDN caches the most popular content at the edge of the backbone. When the same asset is requested by multiple end users, it is served from the CDN cache. This approach significantly reduces bandwidth consumption within the backbone network.
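The bandwidth effect of edge caching can be sketched in a few lines; the hit ratio here is an assumed figure, not a measured one:

```python
# Sketch: backbone bandwidth left after edge caching. Every cache hit
# is served locally at the edge, so only misses traverse the backbone
# to the origin server.

def backbone_gbps(total_demand_gbps, cache_hit_ratio):
    """Residual backbone traffic for a given cache hit ratio."""
    return total_demand_gbps * (1.0 - cache_hit_ratio)

# 100 Gbit/s of video demand with an assumed 60% hit ratio on popular
# content leaves only 40 Gbit/s on the backbone.
residual = backbone_gbps(100.0, 0.60)
```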
In the event of strong user content demand, this approach also protects the origin server from high peak requests. This ensures that other critical functions, such as ingest, recording, encryption, packaging, and streaming remain unaffected.
A CDN dramatically cuts the cost of the origin server while reducing investment in legacy infrastructure. Investing in additional caches to serve popular content from the edge is more economical than adding capacity to the centralized origin servers.
Today many operators are growing their CDNs, using them as a unified infrastructure to serve traditional devices, such as STBs, as well as newer connected ones[2]. Typically, the CDN delivers content using HTTP over TCP, while the STB receives it using RTP over UDP, as shown in Figure 3. To receive content from the CDN, the STB connects to an RTSP pump that requests content from the CDN over HTTP. It does this using the industry-standard ATIS C2 interface[3].
Figure 3. The complete IP video infographic, showing the relevant standards, protocols and acronyms.
Metro
According to the findings of a Bell Labs study, distributing the caches further into the metro network can reduce total traffic by 41%. To optimize their service, pay TV operators must decide how far to distribute their caches.
For the operator, there’s a trade-off here. Bandwidth savings need to be weighed against the extra cost of the caches. Alternatively, significant QoS improvements brought about by this distributed architecture might be sufficient to justify the investment.
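That trade-off can be framed as a simple break-even calculation. All of the prices below are hypothetical placeholders; only the 41% offload figure comes from the study cited above:

```python
# Hypothetical trade-off model: metro caches pay for themselves when
# the monthly cost of the backbone bandwidth they offload exceeds
# their own amortized monthly cost.

def metro_caching_pays_off(peak_gbps, offload_fraction,
                           cost_per_gbps_month, cache_cost_month):
    """True if the offloaded transport cost covers the cache cost."""
    saved = peak_gbps * offload_fraction * cost_per_gbps_month
    return saved >= cache_cost_month

# 200 Gbit/s peak demand, 41% offload, an assumed $500 per Gbit/s-month
# of transport versus $30,000/month of cache amortization:
worth_it = metro_caching_pays_off(200, 0.41, 500, 30_000)
```

QoS gains are harder to quantify this way, which is why the article treats them as a separate justification.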
Access
Pay TV operators need to evaluate options for increasing per-user throughput in the access network. For fixed networks, one approach is to push fiber closer to end users: some IPTV operators are deploying FTTx solutions or flexible micro-nodes with vectoring, while others install fiber all the way to the home.
For their part, cable operators also have several options for increasing bandwidth.
On mobile access networks, where bandwidth is scarce, service providers are using a combination of techniques to improve user quality of experience (QoE) while reducing transport costs. Some techniques, such as transcoding, transrating and compression, reduce bandwidth by transforming the content or streaming at lower rates, potentially at the cost of video resolution. Other content distribution techniques that retain video resolution (buffering, caching and broadcasting) can also be used to enhance QoE.
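The per-stream effect of transrating is simple arithmetic; the bitrates below are assumed values for illustration:

```python
# Rough arithmetic for transrating: stepping an assumed 4 Mbit/s
# mobile video stream down to 1.5 Mbit/s.

def transport_saving(original_mbps, transrated_mbps):
    """Fraction of per-stream transport bandwidth saved."""
    return 1.0 - transrated_mbps / original_mbps

saving = transport_saving(4.0, 1.5)   # 0.625, i.e. 62.5% less transport
```

This is the cost side of the trade-off; the QoE side is that resolution may drop, which is why buffering and caching are attractive alternatives.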
Knowledge is power
Introducing a cloud-based DVR service takes serious forethought and planning, and it naturally leads to a transformation program. Weighing the options requires a deep understanding of the complete video service delivery chain, along with world-class expertise in IP backbone, metro and access networks. This knowledge and experience should weigh heavily in developing a comprehensive cloud DVR service strategy and in selecting partners with the appropriate range of consulting and professional services.
Related Material
Footnotes
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
There’s no question that the network functions virtualization (NFV) technology around which many telecommunications carriers and vendors are rallying takes a page from the virtualization that has already taken hold in IT data centers. But you can’t judge a book by its cover: NFV and IT virtualization also have their differences.
One key difference is that while data center virtualization tends to rely on a centralized architecture, NFV calls for a distributed one, Andreas Lemke, marketing lead for the CloudBand NFV platform at Alcatel-Lucent, points out in a recent TechZine posting titled Why distribution is important in NFV.
“As the IT world virtualized, it found that a small number of warehouse-size data centers are more cost-effective than many small, widely spread ones. This is because companies that build data centers do not have to build and operate local access networks,” he wrote.
“In contrast to IT clouds, such as Amazon’s, distribution matters in NFV networks,” Lemke continued. “Many carrier applications have needs that are ill-suited to a centralized architecture.”
Those needs relate to availability, low latency (a key consideration in carriers’ radio access networks, where vRANs are being deployed), network offload (for which content delivery networks are being used), regulations, and security.
Consider network offload, for example. Because video and data have pushed ahead of voice as the most plentiful traffic on the network, there’s a need to optimize network operations for this more bandwidth-hungry traffic. Using point-to-point video streaming in all cases is inefficient, notes Lemke, so carriers are leveraging content distribution and multicasting to make the most of their network resources; a hierarchical, distributed architecture supports these optimization efforts.
A distributed network also tends to equate to higher reliability and disaster survivability, he says: when network resources are spread across a broader geography, the chance that all of them will be adversely affected by a man-made or natural disaster is lower.
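This survivability argument can be made concrete with a toy probability model, under the idealizing assumption that sites fail independently (correlated failures would weaken it):

```python
# If each site fails independently with probability p during a
# disaster window, the chance that *all* n sites fail at once drops
# geometrically with n.

def prob_total_outage(p_site_failure, n_sites):
    """Probability every site is down, assuming independent failures."""
    return p_site_failure ** n_sites

centralized = prob_total_outage(0.01, 1)   # one big data center: 1%
distributed = prob_total_outage(0.01, 3)   # three sites: 1e-6, i.e. 0.0001%
```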
And while a distributed network creates more potential points of security risk, it also mitigates risk: with more nodes, there is a greater chance that parts of the network will be unaffected by a security problem, and with the proper processes and tools, attacks can be identified and isolated.
Service provider networks, which traditionally have been based on turnkey network elements running software on purpose-built hardware, are moving to a software-centric model. In this model the true value lies in the software, while the hardware is typically of the commercial-off-the-shelf variety.
Network functions virtualization (NFV) is the name of this new architecture, which not only embraces the model of implementing network functionality in software and running it on industry-standard servers, but also allows applications and services to leverage those resources whenever and wherever they are needed.
The success of virtualization in the data center has demonstrated the power of running network capabilities on virtual machines. That’s powerful because it allows networks to be more fluid so they can meet shifting demands. It’s also powerful because it can result in cost savings, given less – and less specialized – hardware is required, and given virtualized environments (in which one server can host various network elements) tend to consume less power than environments featuring a collection of appliances.
NFV also can help facilities-based network operators effectively reinvent themselves to be more agile, so they can better compete with faster and often smaller over-the-top service providers.
Reducing equipment costs and power consumption, and expediting the introduction of new services and features were among the key goals laid out by ETSI’s NFV group, which got the network functions virtualization movement rolling a couple years ago. Founders of the NFV group within the European standards body included AT&T, BT Group, Deutsche Telekom, Orange, Telecom Italia, Telefonica, and Verizon.
Network operators that want to get started with NFV, suggests Andreas Lemke, marketing lead of the CloudBand NFV platform at Alcatel-Lucent, should look for what he describes as the “5 must-have attributes of an NFV platform.”
Finally, and as important as all of the technology, Lemke says that those wishing to get started with NFV should select partners that can provide the same five 9s reliability, quality of service, and security in the new virtualized environment as they enjoy with their existing networks.
There is a growing industry consensus that NFV will become the architecture of the future for networks that are agile, applications-friendly, high-performance, interoperable and secure. In fact, there is not only consensus but also market traction for NFV solutions, as service providers look to transform themselves to profitably accommodate rapidly changing market requirements. However, not all NFV solutions are alike, which is why Lemke’s list of attributes is worth considering as part of any NFV evaluation.
It feels like it was just a few months ago that you could read trade-press articles lumping SDN and NFV together, with NFV presented as a form of SDN or vice versa. Yes, both are somehow about virtualization and about converting hardware into software. But today, after numerous proofs of concept run by service providers around the globe, we know that SDN is virtually indispensable for NFV solutions that aspire to deliver the agility and operational simplification we all expect from NFV. Only SDN can deliver the (virtual) networks needed for newly deployed network functions quickly enough. Alcatel-Lucent recently demonstrated a complete virtual evolved packet core (vEPC), including virtual IMS/VoLTE, deployed in less than 30 minutes.
NFV and SDN enable on-demand service composition by steering traffic through a sequence of middle-box service functions (service function chaining), such as firewalls and traffic optimization. For example, an enterprise or consumer customer can use a self-service portal to check off the desired functions, which causes virtual network functions to be deployed or scaled and (per-subscriber) routing policies to be changed automatically (flow-through provisioning).
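A minimal sketch of that flow-through provisioning loop might look like the following; the function catalog and names are invented for illustration, not a real product API:

```python
# Sketch of service function chaining with flow-through provisioning:
# the customer ticks functions on a self-service portal, and an
# orchestrator turns the selection into an ordered chain plus a
# per-subscriber steering policy for the SDN controller.

# Available middle-box functions, listed in the order traffic should
# traverse them when selected (an assumed ordering).
CATALOG = ["firewall", "parental_control", "traffic_optimizer", "nat"]

def build_service_chain(selected):
    """Return the ordered chain for a customer's portal selections."""
    unknown = set(selected) - set(CATALOG)
    if unknown:
        raise ValueError(f"unknown functions: {unknown}")
    return [fn for fn in CATALOG if fn in selected]

def routing_policy(subscriber_id, chain):
    """Per-subscriber steering policy handed to the SDN controller."""
    return {"subscriber": subscriber_id,
            "next_hops": chain + ["internet"]}

chain = build_service_chain({"firewall", "nat"})
policy = routing_policy("sub-42", chain)
```

The point of the sketch is that no change request is filed anywhere: the portal selection alone drives both VNF deployment and routing-policy changes.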
Likewise, NFV responds to changing traffic within minutes by spinning up additional virtual machines, both within the same data center and in a data center close to where the traffic demand originates. NFV also enables rapid software upgrades while containing the risk of service degradation. We are even seeing demand on the horizon for adopting DevOps models in the telco domain.
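The scale-out decision that NFV automates can be sketched as a small capacity check per data center; the per-VM capacity and headroom figures below are assumptions:

```python
import math

# Minimal autoscaling sketch: compare measured load per data center
# against assumed VNF instance capacity and compute how many VMs to
# start (+) or stop (-) in each location.

SESSIONS_PER_VM = 10_000  # assumed capacity of one VNF instance

def vms_needed(sessions, headroom=0.2):
    """Instances required to carry the sessions with 20% spare headroom."""
    return math.ceil(sessions * (1 + headroom) / SESSIONS_PER_VM)

def scale_plan(load_by_dc, running_by_dc):
    """Per-data-center delta of instances relative to what is running."""
    return {dc: vms_needed(load) - running_by_dc.get(dc, 0)
            for dc, load in load_by_dc.items()}

plan = scale_plan({"edge-paris": 45_000, "core-frankfurt": 8_000},
                  {"edge-paris": 3, "core-frankfurt": 2})
```

Placing the extra instances in the data center nearest the demand is what makes this a distributed, rather than purely centralized, scaling model.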
A classical operational model, with change requests sent to the networking department, is no longer up to the task. The network needs to be as dynamic as the server infrastructure, and it is clear that only SDN can fill the bill. This will be a stepwise process, and not just any SDN will be suitable for NFV. Telco networks are not just about packets dropping in on one side and popping back out at their destinations: they are designed to deliver sufficient capacity, performance, security and high availability for the critical services running over them in an end-to-end, geo-distributed environment.
Clearly, SDN is right for NFV but it needs to be the right SDN. Read the white paper “The right SDN is right for NFV” to learn about critical network requirements for NFV, SDN use cases and four stages of SDN integration into NFV bringing different degrees of reward to service providers. Alcatel-Lucent CloudBand™ and Nuage Networks® VSP are discussed as an example integrated SDN/NFV solution.
Before the iPhone, the world of TV was relatively simple. Linear TV programs were delivered to the TV set over the air or to its set-top box (STB), which was directly tied to the cable coax, the home gateway or the satellite dish.
Now everything has changed.
Video-enabled, IP-connected devices with ever-greater screen resolution are flooding the market. Tablets, smartphones and smart TVs are running on many flavors of operating systems. All use different protocols, formats and standards. With these devices, end users have many options to watch video. These include being attached to the service provider’s managed network, or being directly connected to the Internet and consuming ‘over-the-top’ content. Moreover, end users want to watch their favorite content on demand; they no longer want to be restricted to linear programming. This adds yet another level of complexity to this whirlwind of change.
Covering all IP video options results in countless protocols, proliferating standards and loads of acronyms. Even industry watchers can find the rapidly evolving world of IP video confusing. That’s why I created this IP video streaming infographic.
Seeing the upside
In a simple yet comprehensive way, this infographic outlines the multiple steps in video processing and streaming needed to reach all end user devices. At each step, the graphic identifies the key standards, formats and protocols, as well as their role and where they fit into a network transformation strategy. And, once we can grasp the big picture, we’re in a better position to see the way forward.
IP has paved the way for new, over-the-top entrants who compete with pay TV providers for both content and customers. This presents an enormous challenge to traditional service providers, especially those with legacy networks. However, looking more closely, moving video delivery into the IP domain also presents enormous opportunities. In fact, the move to IP video enables technological advances that benefit the wider TV ecosystem.
So what are the benefits of IP video?
Migrating to IP video
To realize these benefits of IP video, operators must effectively scale their networks before they can deliver what viewers want on whatever screen they prefer, and before IP technologies can pave the way forward. The path to transformation covers three main areas.
Look for the best of the best
Whatever path your transformation takes, the key to success is to view your IP video transformation as part of your overall upgrade strategy—not as a one-off project. To do this, you’ll need an integrator with as big a vision as yours.
I’d look for an integrator who brings together the best products for each solution component in your market. This means working together with a trusted partner who can customize an IP video transformation plan based on the latest innovations and best-of-breed products. That way, you won’t be encumbered by legacy code and allegiance to ageing products.
To keep your project on track, you’ll want a partner with a comprehensive set of integrator capabilities that ensure shorter time to market and faster ROI.
Where is IP video headed?
Today TV and video are delivered over a blend of legacy and IP networks. In the next few years, we’ll see all video delivered over all-IP networks. That view is generally accepted, but if I had to go further, I’d venture a few predictions of my own.
Taken together, these changes will enable service providers to deliver a great user experience in a more cost-effective way. And, subscribers will get a truly personalized viewing experience that they’ll love.
Links
Download our ebook: Future vision for IP Video
The advantages to mobile operators of network functions virtualization (NFV) and a move to a virtualized evolved packet core (vEPC) have become clear, and mobile network operators are largely sold on the technology in theory.
As the technology side has been figured out and operators begin to plan commercial deployments of NFV and vEPC, however, discussion is starting to move toward operational requirements and challenges. Mobile network operators need to figure out how best to manage these new virtual network functions (VNFs) and the NFV infrastructure, and also how to modify the existing network operations model when these VNFs are deployed.
“These are understandable concerns since clearly there will be additional operational issues when this NFV-MANO [management and orchestration] network architecture is deployed,” noted Keith Allan, Director IP Mobile Core Product Strategy, Alcatel-Lucent, in a recent TechZine posting, vEPC: How to achieve operational elegance.
There are a number of new functional blocks and data repositories that come with this new model, including the MANO functions themselves, vEPC VNFs, element and network management systems (EMS/NMS), operational and business support systems (OSS/BSS), and NFV infrastructure.
For Allan, these concerns are real, but solutions exist that let mobile network operators deal with them.
Existing EMS/NMS can be combined together with an integrated NFV/SDN management solution and enable mobile operators to address NFV operational challenges while also being able to manage the existing purpose-built, product-based network using their current OSS/BSS, according to Allan.
This combined system, which the Alcatel-Lucent business unit Nuage Networks and the Alcatel-Lucent CloudBand team are developing, enables workflow automation with push-button VNF instantiation and elasticity, automates service chaining via SDN, and brings network function orchestration to coordinate multiple virtual and physical network functions.
This is done by dividing the problem into three well-established management domains: virtual machine orchestration and VNF/VNFC lifecycle management, network connectivity orchestration, and network function orchestration.
“This combined element and network management solution for NFV/SDN delivers the operational elegance that mobile operators need to reduce complexity,” noted Allan in his blog post, “and it opens the door for innovation to provide new services through automation.”
As operators move from testing to commercial rollout, such solutions will increasingly rise in importance.
OpenStack, the open source cloud management software, has come into the focus of service providers as a rapidly advancing, cost-effective technology foundation for NFV. With OpenStack, service providers are expecting to escape the tangles of individual vendors and build an open horizontal platform for their future networks.
Boon or bane, OpenStack was never designed with carrier requirements in mind. Maybe this was a good thing as the community could advance rapidly, pragmatically, without being tied down by stringent requirements. But now, service providers are getting ready to move NFV out of the labs and deploy solutions in their production networks, and this means asking hard questions about OpenStack’s readiness to support commercial deployments. Clearly for any NFV vendor it would not make sense to re-implement the functions that OpenStack already provides in the area of management of virtual machines, storage, networks, images, distributed databases and more.
Red Hat, the open source leader, and Alcatel-Lucent, a leader in carrier networking and NFV (specifically its CloudBand team), got together to tackle the job of making OpenStack ready for serious service provider applications. The first step toward this goal was to understand the particular NFV requirements and the gaps in OpenStack that must be filled. Making OpenStack ready for NFV then requires a two-pronged approach. On one hand, OpenStack itself needs to evolve to support critical requirements, such as security, that can only be addressed within OpenStack. On the other hand, complementary capabilities that fall outside the scope of the open source project need to be provided.
There are at least three bodies chartered to address these challenges:
The ETSI NFV Industry Specification Group - the original host to the NFV community
The OpenStack Foundation, which recently created an NFV subgroup
Open Platform for NFV, a Linux Foundation collaborative project being formed to establish a carrier-grade integrated open source reference platform that industry peers will build together
Obviously there is a lot of momentum behind NFV, but to ensure the work of these three groups is most beneficial, their missions need to be clearly defined. Done the right way, open source is like an Autobahn that lets everybody move faster toward their goals. Without open source, developers would have to re-invent the same functions over and over, or pay high license fees.
For more details read the whitepaper entitled “CloudBand with OpenStack as NFV platform”, which discusses five critical areas for NFV – distribution, networking, automated lifecycle management, operations, and high performance data plane – and explains how Red Hat and the Alcatel-Lucent CloudBand team work together to build a solution that is optimized for telco NFV environments.
The world of M2M is changing. Solutions are moving from single-purpose devices, which transmit data to and receive commands from an application in the network, toward an Internet of Things in which devices are multi-purpose and applications collaborate.
The Internet of Things can benefit from global standardization efforts that:
In today’s world, M2M solutions abound, and not much has changed architecturally since the 1970s. The Fraunhofer Institute for Open Communication Systems defines M2M as communication by a terminal, independent of human interaction, with a core network or another terminal for the purpose of automating services.
Granted, while the network that facilitates M2M communication has changed dramatically since the 1970s and now provides quite advanced capabilities (e.g., 3GPP Machine Type Communication), the architecture of the M2M solution has remained fairly static: a device in the field communicates with an application in the core network for a specific purpose.
However, we are beginning to see a paradigm shift for M2M, called the Internet of Things (IoT). Powered by the infrastructure of M2M, the IoT fundamentally changes the way devices and applications interact. This progression in how devices and applications collaborate, enabled by the M2M infrastructure, mirrors the way people collaborate on the social Web and the way commerce has been enabled by Web 2.0 technologies.
In the world of M2M, a device had a single purpose. In the IoT, the same device can provide data, or be controlled, for varying purposes across industry domains. For example, a single pedometer can be used by applications in several different domains.
It is the same pedometer but the data is used by different application domains.
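In code, the multi-purpose idea amounts to one reading feeding several domain-specific views. The domains and thresholds here are invented for illustration; real deployments would rely on standardized semantic vocabularies such as those oneM2M is developing:

```python
# One device reading, consumed by several hypothetical application
# domains. Each domain extracts only the view it needs.

pedometer_reading = {"device": "pedometer-17",
                     "steps": 8_500, "date": "2015-03-01"}

def fitness_view(r):        # consumer wellness app
    return {"daily_steps": r["steps"]}

def insurance_view(r):      # anonymized actuarial feed (assumed band)
    return {"activity_band": "active" if r["steps"] > 7_000 else "low"}

def city_view(r):           # aggregate urban-planning statistics
    return {"pedestrian_sample": r["steps"]}

views = [f(pedometer_reading)
         for f in (fitness_view, insurance_view, city_view)]
```

Note that the insurance view exposes only a coarse band rather than raw steps, a small example of the anonymization concern discussed below.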
Since the IoT is enabled by the capabilities of the M2M Service Enablement Layer, the IoT domain draws on many of the benefits provided by global standardization efforts like oneM2M, which help solve the challenges the M2M industry faces today.
However, because the basis of the IoT is the multi-purpose collaboration of “things” (e.g., pedometers, storage containers and energy meters), certain challenges are accentuated in the IoT domain.
In this context, standardization provides benefits that enable this type of collaboration.
In fact, global standards bodies are working on these challenges today. They are defining aspects of collaboration across application frameworks, enabling an application development and execution ecosystem, and providing clear interface definitions for application providers and device (“thing”) manufacturers. Examples include the work of the Home Gateway Initiative (HGI), the W3C, the Open Geospatial Consortium (OGC) and oneM2M on semantics in the IoT. They are standardizing a common vocabulary and associated templates so that “things” can be described in a context that suits their varying purposes.
One of the key issues in the exchange of semantic information is how the privacy and confidentiality of the information source can be maintained while still providing the needed semantic context. The ability to grant rights to the information source and to anonymize semantic information are just a few of the security capabilities that standards bodies like the W3C, IETF, ITU and IEEE are actively pursuing. The industry realizes that if privacy and confidentiality are not designed in up front, on top of the security capabilities (e.g., authentication, access control, data protection) provided by the enabling M2M infrastructure, the benefits of the IoT cannot be fully realized.
Realizing this, oneM2M is pulling these semantic vocabularies together in a framework that enables applications to efficiently discover, exchange and analyze semantic information across industry domains while providing the capabilities to ensure the privacy and confidentiality of the semantic information sources.
Standardization of the Internet of Things may seem like a big hurdle to leap, but if these organizations are successful, the IoT will be a much friendlier place to work and live.
About Tim Carey
Tim Carey is the Industry Standards Manager of Alcatel-Lucent’s Customer Experience Division. Tim was recently inducted into the Broadband Forum Circle of Excellence to recognize his leadership in advancing the Forum's mission of driving broadband wireline solutions and empowering converged packet networks worldwide to better meet the needs of vendors, service providers and their customers.
Tim has over 18 years of experience in the communications industry, working in the areas of solution deployment, system engineering and system architecture across a wide variety of technologies that include optical, ATM and IP transport, switching and routing products, as well as development of home networking devices and network and device management systems. In his current role as Industry Standards Manager, he is actively involved in a number of standards bodies, including oneM2M, ETSI, IEEE, Broadband Forum, Open Mobile Alliance, HGI, DLNA and the UPnP Forum, providing expertise in the areas of network management, device management, home networking and machine-to-machine technologies.
Large enterprises increasingly resemble public network service providers as they manage access, transport and network routing while controlling devices and sessions. Whether businesses build their own or buy their communications services through a public provider, the IP communications architectures are looking remarkably similar.
“I’ve noticed that both private service operators (CIOs of large enterprises) and public service providers are implementing very similar solutions around the globe,” wrote Oliver Krahn in a recent TechZine article, 6 Steps that Improve Communications Services.
He has noticed that successful service providers are all more or less taking the following six steps when it comes to building their IP communications architecture. Firms building out their communications services would be wise to pay attention.
First, firms are moving to an IP communications architecture to enable a whole new conversation experience. With so many communications options, a unified inbox that collects all of these services is invaluable to users. And this means moving to all-IP.
Second, APIs are being used to expose network-based IP communications applications.
“This lets them share innovation opportunities with partners while developing carrier-grade and real-time web-communications strategies that keep them current on new modes of communications,” noted Krahn. “Beyond this, more visionary CIOs are exploring next-generation IP communications architectures that may replace today’s PSTN.”
Third, content strategies are being adjusted to take advantage of on-net content delivery networks at the network edge and transparent caching to handle the volume of video data now being moved across the network.
Fourth, they are using small cell networks to avoid bottlenecks and access gaps.
“By deploying multi-standard, small-cell base stations, a large enterprise can achieve cost-effective 3G, 4G and Wi-Fi connectivity,” noted Krahn. “It can then be handed over to a trusted mobile provider to light them up with licensed spectrum. In-building coverage for any size venue and any number of users is also part of the mix; so is reducing the cost of delivering ultra-broadband access in a multi-operator deployment with low-cost digital distributed antenna systems.”
Fifth, converged virtual private networks are being employed to deliver a seamless experience to enterprise users. Enterprise service gateways (ESGs) provide that experience independent of the access bearer and the device, delivering scalability, high performance and carrier-grade resiliency for VPN services.
This combination of capabilities allows the ESG to concurrently replace the mobile gateway (PGW, GGSN), PE router and border gateway, simplifying the network.
Finally, Krahn has noticed that successful solutions are leveraging cloud-based applications and network functions virtualization (NFV).
“Private and public service providers are realizing that the future of telecommunications networks will be based on virtualizing key network functions,” he noted. “These cloud platform components ease service deployment, automate management and clear the path to cost-effective growth. At the same time, NFV-ready applications, an advanced NFV platform and an NFV partner ecosystem are crucial for achieving the goals of cloud-based applications.”
As noted, Alcatel-Lucent has a white paper, Six Steps to Attractive Communications Services, that fully outlines these six steps used by successful communication service providers, whether enterprises or public providers.