Significant investments require significant returns. How do companies ensure their benefits are measured, tracked and realized during IP Transformation Programs?
Success is Not Guaranteed
Think about the hardest project you have ever delivered. Just think back… that one ‘special project’, the one that spiraled out of control, the one where the requirements kept changing, the one where the objectives kept moving, the one project that would not de-scope, where the tsunami of work was towering over the team, and impossible deadlines were looming. Yes, that one.
Most of us have experienced THAT project. And we probably sat with our colleagues, asking ourselves how a project under such pressure could even exist. Why would the sponsors not revise the scope, refocus the team, or even reinvest the budget elsewhere?
We all know that technical projects can go awry. IT, Networking and Engineering projects are famous for it: roughly 50% overrun on budget, and many are cancelled altogether.
So, what are the figures for complex Transformation Programs, where IT, Network, Operations and Engineering are undergoing change simultaneously? With an objective eye, it’s easy to question whether any of them actually deliver results. But indeed they do.
But, how, and what can we measure to be certain we are achieving the desired results?
The Value of Tracking the Benefits
This is where Benefits Management steps in. Benefits Management provides a set of disciplines that operate out of the Program Management Office (PMO) and ensure the benefits of complex programs are constantly measured, tracked, assessed and reshaped. Programs that truly measure benefit as a real ‘outcome’ (rather than waiting for the end of the program) are more agile in adapting to changing conditions and can reshape to meet new requirements. Benefits Management is, in fact, the fourth dimension to the classic time/cost/quality triangle of managing good projects.
How Does It Work?
The role of Benefits Management is to answer the key questions of planning and delivery. These questions are addressed through three phases of planning and measurement:
The Strategy Phase is focused on reviewing the company’s IP Transformation strategy, in order to select a number of sub-programs that can deliver benefit, or to align existing change programs with that strategy. A high-level benefits map should be produced at this stage.
The Program Phase is concerned with the definition and selection of projects and initiatives to create a portfolio that will deliver the required benefits. This is the phase where most use is made of the benefits management tools, such as the Benefits Plan, Benefits Register and Risk Analysis.
The Project Phase is about the delivery of projects and initiatives to support the program, the monitoring of actual benefits against targets and benefits harvesting.
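As a sketch of what measuring benefit as an ‘outcome’ during delivery can look like, the fragment below keeps a miniature benefits register and flags off-track benefits so the program can reshape. The benefit names, target figures and the 50% threshold are invented for illustration; they are not from the white paper.

```python
from dataclasses import dataclass

@dataclass
class Benefit:
    name: str
    target: float        # planned benefit, e.g. annual savings in $k (hypothetical)
    actual: float = 0.0  # benefit realized to date

    @property
    def realization(self) -> float:
        """Fraction of the target realized so far."""
        return self.actual / self.target if self.target else 0.0

# A miniature benefits register, maintained by the PMO during the Program Phase
register = [
    Benefit("Legacy platform decommissioning", target=1200, actual=300),
    Benefit("Reduced fault-handling effort",   target=800,  actual=720),
]

# Project Phase: flag benefits that are off-track so the program can reshape early
for b in register:
    status = "on track" if b.realization >= 0.5 else "AT RISK"
    print(f"{b.name}: {b.realization:.0%} realized ({status})")
```

In practice the register would be reviewed against milestones, but even this toy version shows how benefits become continuously measurable outcomes rather than an end-of-program report.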
What is the Cost?
Managing benefits is a key component of complex change programs. Without it, the technical, operational, IT, business and engineering change aspects of the program can happily deliver in parallel, and then fail to coordinate the realized benefits during the program. A small investment in dedicated PMO resources to measure, track and adjust the program is a small price to pay for assurance of successful delivery. Typically, this team is no more than two or three dedicated resources and the assurance they provide outweighs the cost.
Measure, Track and Realize
We started by discussing our most challenging project experiences. In many cases, Benefits Management can reshape projects far earlier, and save those projects from failure.
Measuring, tracking and realizing benefits gives program managers the tools and information to govern the program, and to reshape it or address constraints if benefits are unlikely to be realized.
Benefits Management ensures that in two years’ time, we are discussing the realized benefits over a coffee, and not talking about another one of ‘those projects’. For the small cost of a few specialist resources, that’s a worthwhile investment.
These ideas are explored further in the white paper “Better business case management for IP Transformation.”
For more information on how to address the challenge of building and managing the business case for IP Transformation, please see our earlier blogs on TMCnet.
Delivering successful change programs is a significant challenge. Undertaking a Readiness Assessment speeds the launch of new IP services, reduces risks and aligns corporate objectives with your program.
The Challenge of Change…a true story
So your company is planning an all IP network. The CTO is delivering technology roadmaps, the COO is assessing the service portals, and network designers have been architecting for eight months. The program is well underway and people are now starting to plan the migration.
So, you start to scope out the effort required to deliver migration and calculate that it requires hundreds of resources to manage a switchover. You approach engineering to secure the resources, and are informed HR is managing a release program, remunerating engineers to leave the company. The same engineers that you need to deliver your program!
Sound familiar?
This is a real story; it happened to me as a Program Director. I often wonder how executives can launch complex investment programs that impact every aspect of the business (IT, Operations and Network), and then fail to align the vision with the wider company operations. And then I remember.
Addressing the Barriers to Assessment
Assessing the capability of the corporation to manage large-scale change is not often addressed. It’s perceived as an academic exercise, one that poses uncomfortable questions about the company’s own abilities and management, and one that can become time consuming when resources are most in demand.
However, if undertaken properly, with a specific objective and a hard timeline, such an exercise can save enormous costs in launching and managing Transformation Programs.
The Purpose of a Business Readiness Assessment
A Business Readiness Assessment provides a structured approach to determine the current state of the business as it enters a large change program, and identifies the key activities required to prepare for it.
Scoping Readiness Assessments
Readiness Assessments should be scoped against the range of the business that will be engaged in, and impacted by the Transformation program. That includes all organizations that manage or deliver the program, all organizations that are providing resources (or arranging resources through supplier management) and all organizations that are running parallel programs that are dependent.
The scope should assess two attributes of the organization: the portfolio of existing and planned initiatives, and the capability of each organization to deliver.
Managing Readiness Assessment…Comprehensively
Readiness Assessments are undertaken through two data gathering and analysis methods:
First, the inventory of existing and planned corporate initiatives is identified and assessed. This is achieved through interviews across the stakeholder organizations, unless a central portfolio management team already exists within the company. For each program, the objectives, timeline and program impacts are assessed.
Second, the ability of each organization to deliver against its program requirements is determined. Capability is measured in terms of available resources, skills, experience, reusable processes and specific tooling. This is captured through a structured interview approach, using a method of weighted scoring. This allows metric-based analysis of strengths and weaknesses against each area of the program, by delivery organization and by capability discipline.
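The weighted-scoring method described above can be sketched as follows. The capability areas mirror the text (resources, skills, experience, reusable processes, tooling), but the weights, organization names and interview scores are hypothetical examples, not a prescribed scheme.

```python
# Illustrative weighted-scoring sketch for a Readiness Assessment.
# Weights and interview scores below are invented examples.

CAPABILITY_WEIGHTS = {
    "resources": 0.30,
    "skills": 0.25,
    "experience": 0.15,
    "reusable_processes": 0.15,
    "tooling": 0.15,
}

def readiness_score(interview_scores: dict) -> float:
    """Combine 1-5 interview scores into a weighted readiness score (1.0-5.0)."""
    return sum(CAPABILITY_WEIGHTS[area] * score
               for area, score in interview_scores.items())

# Scores captured per delivery organization through structured interviews
orgs = {
    "Network Engineering": {"resources": 4, "skills": 5, "experience": 4,
                            "reusable_processes": 3, "tooling": 3},
    "IT Operations":       {"resources": 2, "skills": 3, "experience": 3,
                            "reusable_processes": 2, "tooling": 4},
}

# Rank organizations from weakest to strongest readiness
for org, scores in sorted(orgs.items(), key=lambda kv: readiness_score(kv[1])):
    print(f"{org}: {readiness_score(scores):.2f}")
```

The weakest-scoring organizations and capability areas then become the focus of the readiness action plan.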
The diagram below defines the usual capability areas that need to be assessed.
Generating Value
Post analysis, the Readiness Assessment is used to define a readiness action plan that prepares the organization for the Transformation Program. This includes:
A Lesson Learned
Whilst a Readiness Assessment is not strictly necessary prior to launching large change programs, it is certainly valuable. In my experience, the small investment required to run this short exercise has paid dividends across multi-year programs. It aligns program sponsors and cross-division stakeholders to the IP Transformation program, which itself accelerates the launch process. In short, this approach is certainly worth considering prior to investing in a large program, to avoid costly oversights later.
More information…
A Readiness Assessment is normally performed as part of Program Setup. These ideas are explored further in our white paper “Better business case management for IP Transformation.”
Watch for our next blog, The business case for IP Transformation: Realizing the benefits
For more information on creating an effective business case for IP Transformation, please see our earlier blogs on TMCnet, “The Business Case for IP Transformation: Creating the Case” and “The Business Case for IP Transformation: Managing the Service Roadmap.”
In a technology-focused environment it is possible to conclude that building the business case for IP transformation is all about the network, the technology and the associated spend. That would be a mistake. To build an effective business case, network operators must take into account the complexity of the program and its far-reaching impact on their business.
The business case validates and supports the transformation activity. As the network operator invests (both capex and opex), the business case demonstrates the feasibility of the exercise and also that the tangible benefits (the return on investment) warrant the expenditures and opportunity cost. IP Transformation isn’t easy, but a well-executed strategy based on a strong business case will result in years of tangible benefits for your business.
IP Transformation: Steps to Success
The first challenge is to justify the costs. It is crucial to determine how the company is going to realize a return on the money invested.
There are three essential activities you should complete first in order to build a strong business case.
Build a Better Business Case
In my experience, only those business cases that took into account the ‘big picture’ really stood the test of time, and were not revisited or even scrapped during the delivery program.
The current services portfolio and future roadmap for sales offerings must be understood and modeled. Without it, the company is embarking upon change without understanding its very purpose: what it sells, to whom it sells, where, when and how. This applies equally in strategic industries, such as energy distribution and transportation, where infrastructure services are provided to support business and engineering applications.
Why is this so hard to achieve, and so often ignored? There are several reasons that need to be addressed.
#1. Get the Right Sponsors
Sponsors are often technology focused, and they primarily see the feature roadmap and decommissioning benefits. They tend to ignore the wider organizational stakeholder needs and benefits. The technological benefits of change rarely justify the investment on their own. It takes a holistic set of benefits to make the numbers work.
Also, the portfolio is often fragmented across the business, and pulling together the roadmap is seen as almost impossible. This is not an insignificant undertaking, but it is a necessary cost if the real benefits are to be understood and realized.
#2. Do a Technology Audit
The next step is to understand the current technology baseline, and the level of change that is required. This includes not only the physical network assets, but also the data models, the logical service layer, and the associated OSS and BSS changes.
In many cases, a multi-pronged approach to audit is required. This will cumulatively drive a deeper understanding of the known starting position and give a baseline for planning the investment in network and IT change.
Coordinating these technology efforts, driven by different organizations, but with dependent outcomes, takes significant effort and forethought. Only when they are delivered as a combined view can you truly understand the technology roadmap costs.
#3. Establish Good Governance
In parallel to the network change you must determine the scope and effort required to smoothly and quickly migrate the network and IT operations environments. This audit is driven by a different stakeholder base, with their business-as-usual demands and their own drivers for influencing the network and IT change.
I have witnessed several instances of C-levels operating in ‘splendid isolation’ at this stage, and later wondering why the dependencies between Operations, IT and network change were not planned in when considering the business case.
You must impose strict governance and coordination to plan the roadmap, appease all the stakeholders, and identify the true requirements and costs of operations uplift.
#4. Get Full Corporate Visibility
Any financing should take into account the wider business. What parallel investments are occurring elsewhere? Can they be leveraged, or are they going to impede the change program, and cost the company time and money, or cause blocking dependencies later?
In one particular case, I witnessed HR releasing resources through a funded early retirement plan, only for the change program to hire back those very same resources as contractors. This went on for 18 months, with both programs claiming success against their own measures.
Such company investment programs running in isolation are not uncommon. Early analysis can identify dependencies, and save enormous financial impacts later.
IP Transformation: It’s a Journey
Make your business case first, before you embark on your IP Transformation journey. It gives you the map, which then provides the route and directions for the program’s journey. Like any seasoned traveler, I have learned that a sound map and a route marked with clear waypoints is a pre-requisite before setting out. The white paper, “Better Business Case Management for IP Transformation” outlines these ideas in more detail.
Watch for our next blog, The business case for IP Transformation: Managing the service roadmap.
By Steve Blackshaw, IP Transformation Product Line Management, Alcatel-Lucent
In his role as Senior Director of IP Transformation at Alcatel-Lucent, Steve Blackshaw leads large-scale network evolution and transformation programs for some of the world’s largest telecommunications service providers.
Small cells are a boon for mobile network operators, as they easily and cheaply expand wireless network connectivity. However, they can also strain an operator’s evolved packet core (EPC).
“The EPC may be called upon to deliver a significant increase in scale, capacity, and performance beyond that which was required initially to support the macro-cellular network,” noted David Nowoswiat, Sr. Product and Solutions Marketing Manager, Alcatel-Lucent in a recent TechZine posting, Is your EPC ready for the small cells onslaught? He suggests that operators look at three areas when examining if their EPC is up for the challenge.
First, is the network architecture ready for numerous small cells? Two of the options involve adding a small cell gateway to aggregate control and/or user traffic from a group of small cells back to the EPC, while a third option connects each small cell directly to the EPC.
Adding a small cell gateway reduces the scaling and capacity requirements of the EPC but increases the network and operations complexity, and connecting the EPC directly to each small cell significantly increases its scalability and performance requirements yet keeps the network flat. Each operator will need to assess what makes sense in their particular case.
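A back-of-the-envelope model makes that trade-off concrete. The cell counts and aggregation ratio below are invented assumptions, not vendor figures; the point is only how a gateway collapses the number of endpoints the EPC control plane must track.

```python
# Hypothetical model: endpoints visible to the EPC with and without
# a small cell gateway. All numbers are illustrative assumptions.

macro_cells = 200
small_cells = 5000
cells_per_gateway = 250  # assumed aggregation ratio of the gateway

# Option A: every small cell connects directly to the EPC
direct_endpoints = macro_cells + small_cells

# Option B: gateways aggregate control traffic from groups of small cells
gateways = -(-small_cells // cells_per_gateway)  # ceiling division
gateway_endpoints = macro_cells + gateways

print(f"Direct:  {direct_endpoints} endpoints toward the EPC")
print(f"Gateway: {gateway_endpoints} endpoints toward the EPC")
```

Even in this toy model the gateway cuts the endpoint count by more than an order of magnitude, at the price of the extra network element and its operations, which is exactly the assessment each operator has to make.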
Second, does the EPC support the scaling and performance demands of the additional small cell load?
“If it’s directly connected to the small cell network, the biggest impact is on the control plane and the mobility management entity (MME) -- with all of the additional signaling that’s required,” noted Nowoswiat. But the EPC also should support an integrated and operationally simple model.
Third, is the mobile operator able to offload data to take some of the load off the EPC? Local breakout options can be implemented in small cell networks to offload data traffic that brings little value to the mobile operator, saving the EPC from added load. In that case, though, the EPC must support the requirements necessary to redirect traffic to the appropriate gateway and packet data network.
Nowoswiat questions whether most EPCs are up to the challenge. Is a virtual EPC a better option and a way to handle the extra load from small cells? While the answer is “it depends,” the whitepaper Evolved Packet Core for Small Cell Networks, which compares architecture options, is a great place to start to learn more about EPC and small cell network choices.
The mining industry is booming, thanks not only to natural resource demands in China, but also because every electronic device, including the smartphone, contains many of the precious materials that miners pull from the earth. For example, an iPhone contains gold, silver, platinum, copper and many rare earth elements, such as yttrium, lanthanum, neodymium, gadolinium and europium.
Keeping these bustling mines efficient requires a highly reliable, accessible, secure and high-performance communications network, because mines tend to be operational 24/7/365. That is a major factor in why many mines are upgrading their communications networks, or evaluating an upgrade, since the existing Wi-Fi, 2G, 3G, proprietary VHF and PMR options are not keeping pace with mining information interchange demands of all types.
One solution is private, ultra-broadband, as described in a recent TrackTalk posting, LTE for mining: delivering ultra broadband in the middle of nowhere, by Thierry Sens, Marketing Director Transportation Segment, Alcatel-Lucent (ALU). Indeed, the reason for the title is somewhat obvious in that mines tend to be in not just remote but very remote locations.
At the Rio Tinto West Angelas mine in the Pilbara region of Western Australia, for example, the solution for better connectivity has been a private, single, converged ultra-broadband 4G LTE network for its pit fields, railways and ports.
The network, installed in 2013 by Alcatel-Lucent, supports mission-critical communications for in-pit autonomous haulage systems (AHS), autonomous drilling systems (ADS), driverless freight train control, anti-collision systems, in-pit proximity detection, in-pit CCTV, high-precision GPS and an array of telemetry systems and sensors, all now integral components of successful mine sites around the world, according to Sens.
Alcatel-Lucent has provided an illustration of a private broadband network for mining. While a bit of an eye chart, what stands out is the extent of the IP/MPLS infrastructure, along with the wireless links from the mines to the backbone network.
Source: Alcatel-Lucent
For Rio Tinto, the performance of its LTE network has led some observers to comment that they have a better mobile signal in the middle of the mine, hundreds of miles from the nearest city, than in their office.
“An LTE network is also contributing to reduced operating costs by using an IP protocol to support all applications on a single converged radio network, and improvements in operational efficiency,” notes Sens.
Private LTE networks and mining are a good fit, as Rio Tinto has demonstrated.
If you traveled by air this summer, consider yourself lucky if you made it to your destination on time. It was a tough summer for both the airlines and for passengers, as IT issues in both July and August led to widespread delays and flight cancellations in the U.S. and beyond.
Most recently, a software update to a plane routing system at an FAA control center in Leesburg, Va., led to what some are now calling Flypocalypse.
The En Route Automation Modernization system routes planes through 160,000 square miles of airspace over Washington, according to The Washington Post, but on Aug. 15 it was unable to handle that important task. “For several hours, the system that processes flight plans at the center stopped functioning for reasons that are still unclear,” according to the Post.
The result: The delay or cancellation of hundreds of flights nationwide and a sea of frustrated passengers.
The August event came just over a month after another airline system glitch that had even more widespread repercussions.
In early July, the busiest month of the year for air travel, a router malfunction in United Airlines’ reservation system led to big delays at the company’s Chicago, Denver, and Houston hubs – negatively impacting a reported 400,000 passengers.
As happened during the August event, many stranded passengers in July lit up social media with their complaints.
The problem with the system – which in addition to selling tickets is used to create gate assignments, manage aircraft movement, schedule pilots and flight attendants, and track maintenance schedules – led United Airlines to ground all its planes from 8 to 9:49 a.m. on July 8, according to The Washington Post, which noted the airline also grounded several flights the previous month.
Given the complexity of predicting weather, of airplanes themselves, and of all the people and systems involved in scheduling planes for takeoff and orchestrating them en route and at landing, it’s kind of amazing that things work as well as they do most of the time. But it’s tough to have that perspective when you’re a passenger who’s been waiting for hours at the airport, or a stakeholder in an airline, for which time is money.
The good news is that there are proven technologies in which airlines, some of which are reporting record profits, can invest to make their systems – and in turn, their businesses – more reliable.
One of those solutions for helping make aviation travel less chaotic is IP/MPLS services.
IP/MPLS is a communications network architecture that can prevent problems like minor router failures from grounding flights, noted Thierry Sens, marketing director of transportation and oil & gas segments for Alcatel-Lucent. In a July blog, Don’t let unreliable IP routers ruin your airline’s reputation, Sens notes that IP/MPLS offers high network availability and resiliency via its fast reroute, link aggregation group, non-stop routing, and non-stop services capabilities.
The technology also features embedded security via network access control, network group encryption, and traffic anomaly detection. That’s important in this day and age of frequent and high-profile network and system breaches, as we need to closely guard the key infrastructure that is our transportation system, and protect the passengers and airline employees.
In North America, the Positive Train Control (PTC) system was mandated by the United States federal government in 2008 for railway lines carrying passengers and hazardous materials. Yet, the government deadline to have 96,500 km of track with the feature by 2015 will not be met.
Similarly, the European Train Control System (ETCS) in Europe, part of the Europe Rail Traffic Management System (ERTMS), is currently only deployed on 5,000 km of track. The EU is aiming for a rollout across Europe’s 68,000 km core network by 2030, and there is still a long way to go.
“With the US government set to introduce a five-year extension of the PTC bill by the end of 2015, and the EU turning the screw on ETCS deployment, this is not going away,” noted a recent blog post, Unlocking the benefits of train control with IP/MPLS, by Thierry Sens, Marketing Director Transportation Segment, Alcatel-Lucent. Sens explained: “Railways should therefore embrace the respective mandates as an opportunity to improve their network architecture and technology, specifically by introducing IP/MPLS.”
Signaling and train control systems require strict reliability, resiliency, performance and security, as they are mission-critical communications. IP/MPLS architecture is perfectly suited for the task.
By combining IP/MPLS routers, IP/MPLS switches, optical switches, packet microwave and LTE radio networks, railway operators also can build a converged IP/MPLS network to host both mission-critical signaling systems and additional features desired by operators such as CCTV networks and passenger Wi-Fi. While the cost of rolling out the required infrastructure to support these train control mandates is large, railway operators can at least use the opportunity to overhaul their communication systems with modern technology.
Refer in Portugal and Trafikverket (previously Banverket) in Sweden, for instance, are deploying IP/MPLS to support their signaling applications while introducing features such as synchronous Ethernet, cyber-attack protection, non-stop routing, non-stop services and fast reroute.
“These railways are well placed to reap the rewards of improved interoperability, capacity, reliability and safety by hosting enhanced train control on IP/MPLS,” observed Sens. And, Alcatel-Lucent is working with these railway operators on the design and rollout of the systems based on IP/MPLS.
Meeting the regulatory demand for automatic safety features on railways is not quick or easy. But the benefits can be great for railways that do, and next generation communications is the foundation for enabling them to meet future requirements as well as improve operational excellence and the customer experience.
A recent Alcatel-Lucent application note, The large enterprise has changed, gave an interesting snapshot of large enterprise IT today.
Source: Alcatel-Lucent, The large enterprise has changed
Based on this, it stressed that large enterprises have networking and communications infrastructure needs that are surprisingly similar to those of the network operators themselves, thanks to the growing importance of having employees connected with the bandwidth, security and reliability they need to do their jobs efficiently and effectively.
What this means is that large enterprises should start thinking like a network operator. This includes having telecom-grade IP platform infrastructure in place to support employee connectivity.
Specifically, large enterprises should think about using data center automation that can take advantage of technologies such as software-defined networking (SDN). With something like Alcatel-Lucent’s Nuage Networks Virtualized Services Platform, large enterprises can deliver SDN capabilities including centralized, policy-driven networking, simplified configuration and compliance automation.
Large enterprises also should have virtualized network services that can leverage SDN to create wide area networks (WANs) that can use best of breed technology and avoid proprietary lock-in.
In terms of the cloud, large enterprises are overwhelmingly deploying private clouds. Large enterprises should make sure they have a turnkey solution in place to make those deployments easy and also flexible enough to support web-based applications and mobile apps.
In thinking like telecoms, large enterprises additionally should consider optical transport and data center interconnect.
Optical transport delivers the bandwidth and speed that large enterprises need to keep up with network demand, and data center interconnect delivers the flexibility and capacity for faster service turn-up and assured business continuity while improving asset utilization and lowering costs. Data center interconnect brings scalable, secure, high-performing, multi-site data center connectivity for the cloud era.
Network connectivity is a key component of every business, especially for large enterprises. As a result, businesses need to learn from network operators and consider investing in similar technologies when it comes to their own connectivity projects.
Alcatel-Lucent has developed its Network Services Platform (NSP) as a unified solution for creating agility in delivering network services. NSP brings efficiency and flexibility to the front-end problems of new service creation and the immediately downstream problems of operating those services efficiently and intelligently in a multilayer, multidomain, multivendor network. It does so in a unified and holistically designed solution.
Remarkable gains have been made in the cloud computing community in creating and deploying new services efficiently and at scale. It’s also true that a significant impediment to service delivery is the rigidity of the networks we deploy and of the processes used to define and instantiate the services being offered.
A great deal of energy has been expended in recent years to enhance the flexibility of networks. Solutions have begun to appear that address parts of the problem, but to date they have been constrained to a particular function or domain and haven’t actually solved the whole agile service delivery problem for networks.
Until the Alcatel-Lucent NSP.
NSP breaks the OSS/BSS logjam in network service creation with the use of open RESTful APIs northbound for OSS and BSS integration and with use of important data modeling standards and templates for network and service representation. Using these abstractions allows services and networks to be represented once to multiple OSS and BSS applications, eliminating the need to define the same service multiple times to different modules so they can talk to a range of vendors’ platforms.
NSP enhances this streamlining by enabling service policies and tenant contexts to be associated with the newly defined services and applied broadly across the target network infrastructure.
As we discovered in the analysis of developing a new bandwidth calendaring service offering in a representative operator case, NSP brings improvements of over 50 percent compared to present modes of operation, both in the calendar time required to define the new service offering and in the number of resources needed to define the service in the OSS and BSS contexts.
As the service templates travel southbound they are converted by a versatile mediation engine into the semantics and formats needed to work with each IP/MPLS and optical network platform being managed. This auto-conversion dramatically simplifies and streamlines the provisioning process for the service offerings across network layers, vendors and domains.
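The “define once, mediate southbound” idea can be sketched as a table of per-vendor renderers applied to one abstract service definition. The service fields and both output formats below are invented stand-ins for illustration, not NSP’s actual data models or semantics.

```python
# Illustrative mediation sketch: one abstract service definition rendered
# into per-vendor provisioning payloads. All formats are invented.

service = {"name": "vpn-blue", "bandwidth_mbps": 100, "endpoints": ["PE1", "PE2"]}

def to_netconf_like(svc: dict) -> str:
    """Render the abstract service as an XML-ish payload (hypothetical format)."""
    return (f"<service><name>{svc['name']}</name>"
            f"<bw>{svc['bandwidth_mbps']}</bw></service>")

def to_cli_like(svc: dict) -> str:
    """Render the same service as a CLI command (hypothetical format)."""
    return f"create service {svc['name']} bandwidth {svc['bandwidth_mbps']}"

# The mediation table: the service is defined once, then rendered per platform
MEDIATORS = {"vendor-x": to_netconf_like, "vendor-y": to_cli_like}

for vendor, render in MEDIATORS.items():
    print(vendor, "->", render(service))
```

The payoff is the same as described above: the service is modeled once, and the mediation layer absorbs the per-vendor differences instead of the OSS/BSS doing so.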
Communication southbound with NSP is enabled by support of multiple standard protocols important in the multivendor environment it’s designed for: BGP-LS, PCEP, NETCONF, and SNMP today, with OpenFlow on the horizon for cases where it’s used. Special cases for vendor CLI support are also included to continue the simplification.
On top of protocol versatility, Alcatel-Lucent has integrated functionality derived from thousands of operator deployments in both optical and IP/MPLS layers to enhance NSP’s value. For example, three distinct path computation engines are available in NSP for use as the operator requires: a packet-oriented PCE (PCE-P) for IP/MPLS paths, an optically oriented PCE (PCE-T) for optical paths, and a multilayer PCE (PCE-X) for multilayer path optimization. PCEs are used to define paths in line with service policies at provisioning time, and as operations progress, KPIs are monitored in real time to determine if adjustments of any sort are called for.
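A toy example can illustrate what policy-aware path computation means: an IGP-cost shortest path in which a service policy first prunes links that violate a latency bound. The topology, costs and policy below are invented; a real PCE such as NSP’s operates on live topology learned via protocols like BGP-LS and PCEP.

```python
import heapq

# Toy topology: node -> [(neighbor, igp_cost, latency_ms)]. Invented values.
topology = {
    "A": [("B", 10, 5), ("C", 5, 20)],
    "B": [("A", 10, 5), ("D", 10, 5)],
    "C": [("A", 5, 20), ("D", 5, 20)],
    "D": [("B", 10, 5), ("C", 5, 20)],
}

def compute_path(src, dst, max_link_latency_ms=None):
    """Least-IGP-cost path; an optional policy prunes high-latency links."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost, lat in topology[node]:
            if max_link_latency_ms is not None and lat > max_link_latency_ms:
                continue  # link violates the service policy; prune it
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from the destination to rebuild the path
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

print(compute_path("A", "D"))                          # cheapest path, via C
print(compute_path("A", "D", max_link_latency_ms=10))  # policy forces path via B
```

The unconstrained computation picks the cheaper but higher-latency route through C; adding the latency policy steers the same request through B, which is the essence of policy-driven path selection.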
Going further, Alcatel-Lucent has incorporated unique and innovative algorithms for resource optimization such as its self-tuned adaptive routing for LSPs that helps the network adapt allocations in real time according to policies and service delivery needs, producing further efficiencies and revenue-generating capacity.
From this profile we can see Alcatel-Lucent is applying its vision and expertise to deliver a solution that supplies the missing link with NSP in solving the wide area network agility problem. Its combination of functions has all the attributes for turning WANs into agile service delivery platforms. It’s a platform that can help turn aspirations into achievements in new service deliveries. It should be a major contributor to many operators improving their networks to become as agile as the cloud.
Paul's work explores transformations under way in SDN, NFV, cloud computing and service orchestration in service provider environments. Use cases from data center to core, metro, access and customer premises are examined. New architectural developments and implications for vendor and operator designs are analyzed. Syndicated research analyzes market developments, forecasts market sizes, and evaluates market shares of participating vendors in key product categories. Custom research and analysis helps clients evaluate plans related to these transformations and implement their offerings in the market. Prior to joining ACG, Mr. Parker-Johnson led Juniper Networks’ cloud computing solution business, enabling end-to-end cloud offerings for service providers and enterprises of multiple sizes and scale.
Currently, most route reflectors run either on a router that is dedicated to route reflection, or on routers that also perform other IP routing and service functions. Both scenarios have downsides.
Dedicated BGP route reflectors are wasteful because route reflection requires minimal data plane resources. Routers that juggle route reflection with other duties, on the other hand, may not have sufficient resources to support scalable route reflection.
Network virtualization offers a solution. A virtual route reflector, or “vRR” for short, can remove reliance on dedicated hardware and be adjusted up or down as needed through allocation of more or less resources to vRR virtual machines.
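Whether it runs on dedicated iron or in a VM, the function itself is simple: a route reflector re-advertises iBGP routes so that clients need not be fully meshed. The sketch below shows the basic reflection rule in simplified RFC 4456 semantics; peer names are illustrative.

```python
# Minimal sketch of the BGP route-reflection rule a vRR implements
# (RFC 4456 semantics, simplified; peer names are illustrative).

def reflect(route, from_peer, clients, non_clients):
    """Return the set of iBGP peers a reflected route is advertised to."""
    peers = set(clients) | set(non_clients)
    if from_peer in clients:
        # A route learned from a client is reflected to all other peers.
        targets = peers - {from_peer}
    else:
        # A route learned from a non-client is reflected to clients only.
        targets = set(clients)
    return targets

# A route learned from client "c1" reaches the other client and the non-client.
print(sorted(reflect("10.0.0.0/24", "c1", {"c1", "c2"}, {"n1"})))
```

Because this is pure control-plane bookkeeping over large route tables, it maps naturally onto the CPU and memory of an x86 VM rather than onto router forwarding hardware, which is the waste the article describes.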
However, as Anthony Peres, Marketing Director, IP Routing portfolio at Alcatel-Lucent, notes in a recent TechZine posting, Virtual route reflector delivers high performance, not all vRR solutions are created equal.
“Virtualizing an RR function is more than just compiling a software image to run on a virtualized x86 server,” Peres writes. “To meet the same level of stability and robustness that is offered today, virtualized network function implementations require a proven and stable software base optimized to operate within an x86 virtualized environment.”
A good vRR will take advantage of the multi-core support and significantly larger memory capacity of x86 servers. This can deliver a significant boost in performance and scalability for vRR.
“An implementation that supports parallel Symmetric Multi Processing helps unleash the power and performance of multi-core processing,” noted the blog. “This multi-threaded software approach offers concurrent scheduling and executes different processes on different processor cores. It significantly reduces route learning and route reflection times (route convergence times).”
The usefulness of vRR is not in question. But like many things, the devil is in the details.
Today’s technology allows a single fiber strand to carry up to 17.6 terabits per second of traffic. That’s the equivalent of transmitting 88 Blu-ray discs every second. This ultra-broadband capability, and the software-defined networks that service providers are embracing, have important impacts on optical networks further upstream.
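The Blu-ray comparison checks out as back-of-the-envelope arithmetic, assuming a single-layer 25 GB disc (with GB taken as 10^9 bytes):

```python
# Sanity check of the "88 Blu-ray discs per second" comparison,
# assuming a single-layer 25 GB disc (GB = 10^9 bytes).
discs_per_second = 88
bits_per_disc = 25e9 * 8                      # 25 GB expressed in bits
throughput_tbps = discs_per_second * bits_per_disc / 1e12
print(throughput_tbps)                        # 17.6
```

That is, 88 discs x 200 gigabits per disc comes to exactly the 17.6 Tbps quoted.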
“…we need to stay in the light/photonic domain as long as possible in order to reduce the cost associated with repeatedly converting wavelength photonic signals to electrical,” notes Scott Larrigan, senior marketing manager of IPR&T product and solution marketing at Alcatel-Lucent, in a recent TechZine posting, CDC-F optical networks propel us forward, and in the podcast embedded below.
While carriers already add to their fiber optic capacity by introducing different colors, or wavelengths, to reach 100 Gbps, 200 Gbps, and even 400 Gbps capacities, that only gets them part way there. There is also a need to route high-capacity wavelengths more efficiently and cost-effectively, and an optical networking technology called CDC-F allows for that.
“Together, CDC-F and SDN technologies are set to propel our networks forward – and make what’s science fiction today, a reality in the not too distant future,” says Larrigan. They enable what Alcatel-Lucent calls agile optical networking.
CDC-F optical networks are colorless, directionless, and contentionless (thus CDC). Their ability to dynamically optimize and reroute wavelength connections helps carriers recover network capacity and extend the lifespan of their networks. They lessen requirements for optical-electrical-optical conversions, saving capital equipment and power costs in the process. And with CDC-F, there’s no need for on-site visits to change or reroute wavelength connectivity.
Alcatel-Lucent today offers a CDC-F wavelength routing solution, the Alcatel-Lucent 1830 Photonic Service Switch, which can both efficiently route wavelengths at scale and deliver in-band per-wavelength OAM that can precisely isolate wavelength issues throughout the network. A North American Tier 1 service provider will be leveraging this offering in its national optical core network.
From original TechZine article
Can the virtualized evolved packet core (vEPC) be deployed today in large-scale LTE networks? Mobile network operators (MNOs) are increasingly convinced that the vEPC has become viable both financially and technically. And I think so, too, based upon the advances made over the past year that I’ll discuss in this blog.
Advancements in vEPC scaling and performance
Early in 2014, the vEPC proofs of concept and field trials of virtualized mobility management and gateway products were limited in both scale and performance. But as the year progressed, design and architecture advancements using network functions virtualization (NFV) tools and capabilities greatly improved their capacity and performance.
These improvements, together with other software enhancements, such as the Data Plane Development Kit (DPDK), have the vSGW/vPGW approaching the capacity and performance of dedicated hardware platforms.
Converged NMS/VNF manager: The key to seamless vEPC network operations
A lot of progress has been made with enhancements to the ETSI Management and Orchestration (MANO) architecture. However, rather than having separate element management system (EMS) and VNF manager (VNFM) functions, there’s been a move to converge these functions since both are integral to managing the VNFs. (The EMS described by MANO includes both network and element management (NMS/EMS) functions).
By unifying the VNF manager and NMS functions, an MNO can seamlessly manage and orchestrate the vEPC. This makes it easy for an MNO to perform VNF lifecycle management functions from the same NMS that is used on a day-to-day basis for network operations.
When EMS and VNFM are converged:
The traditional NMS Fault, Configuration, Accounting, Performance and Security (FCAPS) management function is now applicable to both the EPC VNFs and the physical network functions (PNF). This enables a common and consistent approach.
This also provides the topology and logical connectivity of the individual VNFs/PNFs and more advanced performance and SLA reporting. A single manager simplifies overall coordination and adaptation for configuration and event reporting between the virtualized infrastructure manager (VIM) and the NMS.
Troubleshooting is simplified because traditional NMS faults/events are correlated with VNF related events/faults. The VNFM provides lifecycle management and automates the self-healing of VNFs. It uses recipes to describe the vEPC VNF, its VNF components (underlying VM instances) and their interdependencies. Each VNF component has its own recipe, which includes a description of how to monitor, self-heal, and scale it.
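A recipe of this kind can be pictured as a declarative description plus a reconciliation loop that restores each component to its declared minimum. The structure and field names below are assumptions for illustration, not the actual VNFM schema.

```python
# Illustrative sketch of a recipe-driven self-healing loop for vEPC VNF
# components; the recipe structure and field names are assumptions, not
# the real VNFM schema.

RECIPE = {
    "vnf": "vEPC",
    "components": {
        "vMME": {"min_vms": 2, "heal": "restart_vm", "scale_metric": "sessions"},
        "vSGW": {"min_vms": 2, "heal": "restart_vm", "scale_metric": "throughput"},
    },
}

def self_heal(component, running_vms, actions):
    """Restore a component to its recipe-defined minimum VM count,
    recording each healing action taken."""
    spec = RECIPE["components"][component]
    while len(running_vms) < spec["min_vms"]:
        vm = f"{component}-vm{len(running_vms)}"
        actions.append((spec["heal"], vm))
        running_vms.append(vm)
    return running_vms

actions = []
vms = self_heal("vMME", ["vMME-vm0"], actions)  # one VM has failed
print(vms, actions)
```

The key point mirrored here is that healing is driven by the per-component recipe, so the operations team sees a declared intent being restored rather than an opaque sequence of VM events.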
With coordinated fault management and automated self-healing, the MNO’s operations team will have the visibility and intelligence to understand whether alarms are caused by normal maintenance activities or are indeed an emerging issue that they need to react to quickly. In addition, new advanced NMS approaches to network assurance visualization will speed problem assessment for both VNF and PNFs. These developments will also provide the VNF and network event data to support reporting and analysis.
When the VNFM and the NMS are combined into a single management functional instance, the management and orchestration of the vEPC VNF and integration of the vEPC into the existing OSS/BSS infrastructure is greatly simplified. This is because the VNFM/NMS has complete knowledge and visibility of VNFs within the physical and virtual EPC network.
Is the vEPC ready for commercial deployment?
Based on the progress made in both the scalability and performance of the vEPC VNFs and the advances made in management and orchestration of the vEPC, 2015 will be the year for vEPC deployments to commence at some Tier 1 mobile operators. The momentum and confidence of mobile operators in NFV will make it a reality.
Alcatel-Lucent at Mobile World Congress
Alcatel-Lucent will have a large presence at Mobile World Congress in Barcelona. I will take part in a panel discussion on “Unifying Network IT and Telco IT” on Thursday, March 5th from 11.30 – 13.00.
We will also be demonstrating our vEPC at our booth. There you will be able to see the dynamic scaling of our Virtualized Mobile Gateway and the operational elegance of our NMS/VNFM system. I look forward to seeing you there and discussing how our vEPC solution can meet your NFV evolution plans.
Related Material
To contact the author or request additional information, please send an email to techzine.editor@alcatel-lucent.com.
Originally posted on Alcatel-Lucent Blog February 3, 2015
Talk of “cyber armies” working on behalf of nations might once have been the work of Hollywood, but recent events have demonstrated the opening of a new front in the global war on terror: cyber security.
High-profile attacks on film studios, a US military Twitter account, and several US retailers have led President Barack Obama to declare that cyber terrorism is "one of the biggest threats to national security" and that his administration is working to develop better intelligence on cyber threats. "No foreign nation, no hacker, should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families, especially our kids," Obama said during his State of the Union address on January 20.
The head of cyber defence for the French military, Arnaud Coustilliere, also expressed his concern at apparent attacks on French websites in the wake of the terrorist tragedy in Paris on January 7. "What's new, what's important is that this is 19,000 sites," Coustilliere said. "That's never been seen before."
A similar cyber attack at an unnamed German steel factory in 2014, which sabotaged parts of the control system and caused severe damage to a blast furnace, shows that it's not just web servers and databases that are under threat, but complete ICT (information and communications technology) infrastructure. Train, air and road traffic control systems are as a result all vulnerable, with unthinkable consequences for governments around the world.
There is currently a widespread misconception that IP communication networks are more susceptible to attack than a proprietary or TDM network. However, the German steel plant attack in 2014 and the hacking of legacy and proprietary industrial SCADA infrastructure in the Middle East by Flame and Stuxnet worms in 2012 show any kind of infrastructure is vulnerable.
Alcatel-Lucent has consistently invested in researching and developing highly secure solutions for its communications networks and infrastructure to ward off potential threats and provide added peace of mind to its customers.
For example, IP network infrastructure utilizes Network Access Control (NAC), encryption, and traffic anomaly detection. IP/MPLS also uses traffic segregation and isolation, which means that if one VPN network is compromised, the attacker cannot reach out to other VPN domains.
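The segregation property can be illustrated with a toy model: each VPN gets its own routing table (a VRF), and a lookup in one VPN never falls through to another's routes, even when the same prefix exists in both. Names here are illustrative only.

```python
# Conceptual sketch of IP/MPLS VPN traffic segregation: each VPN keeps
# its own routing table (a VRF), so a lookup in one VPN cannot reach
# another VPN's routes. All names are illustrative.

vrfs = {
    "vpn_blue": {"10.1.0.0/16": "PE1"},
    "vpn_red": {"10.1.0.0/16": "PE2"},   # same prefix, different customer
}

def lookup(vpn, prefix):
    """Resolve a prefix strictly within one VPN's own table."""
    return vrfs[vpn].get(prefix)         # no fallback to other VPNs

print(lookup("vpn_blue", "10.1.0.0/16"))  # PE1
print(lookup("vpn_red", "10.1.0.0/16"))   # PE2
```

This per-table isolation is why compromising one VPN does not hand an attacker a path into the others.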
In addition, to detect intrusions and protect optical fiber networks, Alcatel-Lucent integrates advanced security features into its DWDM optical equipment, the 1830 Photonic Service Switch. Layer 1 encryption of high-speed lines (10G), based on AES-256, one of the most advanced standards on the market, guarantees data integrity and confidentiality, and, adding latency of only a few microseconds, does not compromise performance.
This type of encryption is ideal to secure the transmission of real-time high-speed data used by data centers, cloud infrastructure, and all critical communications. For railway operators, airports, road authorities and government agencies, which rely on these networks, constant availability is essential. However, with cyber threats only likely to become more sophisticated, they should be mindful of taking necessary precautions to avoid becoming the cyber terrorists' next victim.
See a detailed demonstration of how to fully protect the confidentiality of the information carried over the fiber:
For what seems like ages now the communications industry has been talking about convergence. We have already gone through many phases as networks move from TDM to being end-to-end Internet Protocol (IP), with voice traffic increasingly being carried on converged networks. Indeed, the popularity of Voice-over-IP (VoIP) and the coming of Voice-over-LTE (VoLTE) on mobile networks point to the future.
That said, convergence is not just about IP. It is also about the transformation of global network infrastructures in the wired world, with legs into the wireless one as well: the convergence of IP and optics. And, as Steve Vogelsang, VP Strategy and CTO, IP Routing and Transport Business Division, Alcatel-Lucent noted in a recent TechZine blog, IP and optics: Time to make nice, “Let’s face it. The future of the communications industry requires a convergence of IP and optics. So maybe it’s time to give each other some overdue respect.”
Vogelsang starts with the acknowledgment that: “Optical networking is very different from IP networking. The base system designs and some of the underlying technology are similar, but the design goals and resulting optimizations are quite different.” He continues by saying the reality is that “IP and Optics are destined to come together.”
Vogelsang then goes on to provide four insightful observations about IP and Optical convergence that are good food for thought. The four are:
Vogelsang proceeds to delve into an interesting discussion as to what needs to be addressed to get to IP and optical convergence. He notes that: “The first problem to solve is automating the optical layer, because much of what happens, even today, involves hands-on setup. ROADMs were a great start, but they only allow automation of the middle of the route, but not the ingress and egress points. Next-generation ROADMs solve a lot of these issues by making them colorless, directionless, contentionless (CDC) and, for networks over 100 Gbit/s, flexible (CDCF). But the key will be getting the routing layer to talk intelligently to the optical control layer and vice versa.”
How the network of the future gets to being ultra-broadband, including that of mobile operators, is going to be through IP and Optical integration in almost every part of the infrastructure. And, while we are not there yet, as Vogelsang says, “There is a lot of sophisticated and tricky maneuvering happening at the optical layer, which few IP engineers recognize or understand. While that was OK in the past, it is entirely insufficient today.” It is the reason the title of his posting about now being the time for IP and optical engineers to “make nice” is not just an observation but should be construed as a call to action.
It goes without saying that people and businesses in an increasingly connected world rely on the Internet for personal and commercial communication. We are also in the midst of a continuing migration, with people increasingly moving to cities as the world becomes more urbanized. What has also become clear is that cities with a smart grid and a solid IP infrastructure thrive more than cities that do not. The case for the smart city has never been stronger.
First, the demographic shift: Roughly half of the world’s population lived in an urban area in 2010. By 2050, according to the World Health Organization, nearly 7 out of 10 people will live in an urban environment. Unsurprisingly, by 2025 there will be 37 mega-cities with a population above 10 million people, according to the United Nations Environment Programme.
This alone should be reason for government and industry to come together and invest in the network resources to support this city population. But there is good economic reason, too.
Cities with good broadband infrastructure reap the benefits, according to stats compiled in a recent TechZine posting, Smart cities are built on smart networks, by Marc Jadoul, Strategic Marketing Director, and Jacques Vermeulen, Director, Global Solution Leader for Smart Government, Alcatel-Lucent. As the authors note, a 10 percent increase in broadband penetration produces between 0.25 and 3.6 percent growth in GDP, and 80 new jobs are created for every 1,000 additional broadband users. Further, broadband is responsible for 20 percent of new jobs across all businesses, and 30 percent of new jobs in businesses with fewer than 20 employees.
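Applied to a concrete (hypothetical) rollout, the jobs rule of thumb scales linearly:

```python
# Applying the cited rule of thumb (80 new jobs per 1,000 additional
# broadband users) to a hypothetical city rollout of 250,000 new users.
new_users = 250_000
jobs_per_1000_users = 80
new_jobs = new_users / 1000 * jobs_per_1000_users
print(int(new_jobs))   # 20000
```

So a city adding a quarter-million broadband users could expect on the order of 20,000 new jobs by this estimate.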
But what does it mean to be a smart city?
First, it means having a city-net based on wireline and wireless broadband networks that give access to a high-capacity IP and optical communications infrastructure.
Second, it means investment. Smart cities invest in data centers and a government cloud, control platforms for multimedia and machine-to-machine (M2M) communications.
Third, once the foundation is laid, the city’s public infrastructure (including buildings, public space, roads, traffic lights, parking, etc.) is optimized for peak efficiency and environmental preservation.
“Elements like a smart grid helps reduce CO2 footprint and energy bills, and wireless sensors can continuously monitor and control pollution, lighting, and waste,” noted the Alcatel-Lucent blog post.
Fourth, entrepreneurship is leveraged to create new applications to enrich daily life of all citizens. New York City, for instance, relies upon third-party developers for apps that make its metro easier to navigate.
Participation is also key. In fact, community engagement is a crucial factor for successful smart city rollout. This includes citizen participation, feedback loops, as well as social media interaction and dedicated community portals.
The case for the smart city is obvious. But will government heed the call?