Significant investments require significant returns. How do companies ensure their benefits are measured, tracked and realized during IP Transformation Programs?
Success is Not Guaranteed
Think about the hardest project you have ever delivered. Just think back… that one ‘special project’, the one that spiraled out of control, the one where the requirements kept changing, the one where the objectives kept moving, the one project that would not de-scope, where the tsunami of work was towering over the team, and impossible deadlines were looming. Yes, that one.
Most of us have experienced THAT project. And we probably sat with our colleagues, asking ourselves how a project under such pressure could even exist. Why would the sponsors not revise the scope, refocus the team, or even reinvest the budget elsewhere?
We all know that technical projects can go awry. IT, networking and engineering projects famously overrun their budgets by 50%, and many are cancelled altogether.
So, what are the figures for complex Transformation Programs, where IT, network, operations and engineering are undergoing change simultaneously? With an objective eye, it’s easy to question how any of them actually deliver results. But indeed they do.
But, how, and what can we measure to be certain we are achieving the desired results?
The Value of Tracking the Benefits
This is where Benefits Management steps in.
Benefits Management provides a set of disciplines that operate out of the Program Management Office (PMO) and ensure that the benefits of complex programs are constantly measured, tracked, assessed and reshaped. Programs that truly measure benefit as a real ‘outcome’ (rather than waiting for the end of the program) are more agile in adapting to changing conditions and can reshape to meet new requirements. Benefits Management is, in fact, the fourth dimension to the classic time/cost/quality triangle of managing good projects.
How Does It Work?
The role of Benefits Management is to answer four key questions in planning and delivery.
These questions are addressed through three phases of planning and measurement:
The Strategy Phase is focused on reviewing the company’s IP Transformation strategy, in order to select a number of sub-programs that can deliver benefit, or to align existing change programs with that strategy. A high-level benefits map should be produced at this stage.
The Program Phase is concerned with the definition and selection of projects and initiatives to create a portfolio that will deliver the required benefits. This is the phase that makes most use of the Benefits Management tools, such as the Benefits Plan, Benefits Register and Risk Analysis.
The Project Phase is about the delivery of projects and initiatives to support the program, the monitoring of actual benefits against targets and benefits harvesting.
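To make "monitoring of actual benefits against targets" concrete, here is a minimal sketch of a benefits register with variance tracking. The benefit names, target values and the reshaping tolerance are invented for the example, not taken from any real program:

```python
from dataclasses import dataclass

@dataclass
class Benefit:
    """One entry in a program benefits register (illustrative)."""
    name: str
    target: float   # planned benefit value, e.g. annual savings in $M
    actual: float   # measured value to date

    def variance_pct(self) -> float:
        """Surplus (positive) or shortfall (negative) against target, in percent."""
        return (self.actual - self.target) / self.target * 100

def flag_for_reshaping(register, tolerance_pct=-10.0):
    """Benefits tracking worse than the tolerance are candidates for reshaping."""
    return [b.name for b in register if b.variance_pct() < tolerance_pct]

register = [
    Benefit("Legacy decommissioning savings", target=5.0, actual=4.8),
    Benefit("New IP service revenue", target=3.0, actual=1.9),
]
print(flag_for_reshaping(register))  # ['New IP service revenue']
```

A register like this, reviewed each reporting period, is what lets the PMO reshape a program mid-flight rather than discover a shortfall at the end.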
What is the Cost?
Managing benefits is a key component of complex change programs. Without it, the technical, operational, IT, business and engineering change aspects of the program can happily deliver in parallel, and then fail to coordinate the realized benefits during the program. A small investment in dedicated PMO resources to measure, track and adjust the program is a small price to pay for assurance of successful delivery. Typically, this team is no more than two or three dedicated resources and the assurance they provide outweighs the cost.
Measure, Track and Realize
We started by discussing our most challenging project experiences. In many cases, Benefits Management can reshape projects far earlier, and save those projects from failure.
The approach of measuring benefit, tracking the measurable benefits and realizing them gives program managers the tools and information to govern the program, to reshape it, or to address constraints if benefits are not likely to be realized.
Benefits Management ensures that in two years’ time, we are discussing the realized benefits over a coffee, and not talking about another one of ‘those projects’. For the small cost of a few specialist resources, that’s a worthwhile investment.
These ideas are explored further in the white paper “Better business case management for IP Transformation.”
For more information on how to address the challenge of building and managing the business case for IP Transformation, please see our earlier blogs on TMCnet:
Delivering successful change programs is a significant challenge. Undertaking a Readiness Assessment speeds the launch of new IP services, reduces risks and aligns corporate objectives with your program.
The Challenge of Change…a true story
So your company is planning an all-IP network. The CTO is delivering technology roadmaps, the COO is assessing the service portals, and network designers have been architecting for eight months. The program is well underway and people are now starting to plan the migration.
So, you start to scope out the effort required to deliver migration and calculate that it requires hundreds of resources to manage a switchover. You approach engineering to secure the resources, and are informed HR is managing a release program, remunerating engineers to leave the company. The same engineers that you need to deliver your program!
Sound familiar?
This is a real story; it happened to me as a Program Director. I often wonder how executives can launch complex investment programs that impact every aspect of the business, IT, Operations and Network, and then fail to align the vision with the wider company operations. And then I remember…
Addressing the Barriers to Assessment
Assessing the capability of the corporation to manage large-scale change is not often addressed. It’s perceived as an academic exercise, one that poses uncomfortable questions about the company’s own abilities and management, and one that can become time-consuming when resources are most in demand.
However, if undertaken properly, with a specific objective and a hard timeline, such an exercise can save enormous costs in launching and managing Transformation Programs.
The Purpose of a Business Readiness Assessment
A Business Readiness Assessment provides a structured approach to determine the current state of the business before it enters a large change program, and identifies the key activities that are:
Scoping Readiness Assessments
Readiness Assessments should be scoped against the range of the business that will be engaged in, and impacted by, the Transformation program. That includes all organizations that manage or deliver the program, all organizations that provide resources (or arrange resources through supplier management), and all organizations that are running parallel, dependent programs.
The scope should assess two attributes of the organization:
Managing Readiness Assessment…Comprehensively
Readiness Assessments are undertaken through two data gathering and analysis methods:
First, the inventory of existing and planned corporate initiatives is identified and assessed. This is achieved through interviews across the stakeholder organizations, unless a central portfolio management team already exists within the company. For each program, the objectives, timeline and program impacts are assessed.
Second, the ability of each organization to deliver against its program requirements is determined. Capability is measured in terms of available resources, skills, experience, reusable processes and specific tooling. This is captured through a structured interview approach using a method of weighted scoring, which allows metric analysis of strengths and weaknesses against each area of the program, by delivery organization and capability discipline.
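As a minimal sketch of the weighted-scoring idea, consider rating each capability area 1–5 in the structured interview and weighting it by its importance to the program. The capability areas, weights and ratings below are illustrative assumptions, not a prescribed scoring model:

```python
# Weighted-score readiness metric: each capability area is rated 1-5 in a
# structured interview, then weighted by its importance to the program.
def readiness_score(ratings, weights):
    """Weighted average of interview ratings, on the same 1-5 scale."""
    total_weight = sum(weights[area] for area in ratings)
    return sum(ratings[area] * weights[area] for area in ratings) / total_weight

# Illustrative weights and one organization's ratings (assumptions).
weights = {"resources": 0.30, "skills": 0.25, "processes": 0.25, "tooling": 0.20}
network_ops = {"resources": 4, "skills": 3, "processes": 2, "tooling": 3}

print(round(readiness_score(network_ops, weights), 2))  # 3.05
```

Scoring every delivery organization this way yields comparable metrics, so strengths and weaknesses stand out per organization and per capability discipline.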
The diagram below defines the usual capability areas that need to be assessed.
Generating Value
Post analysis, the Readiness Assessment is used to define a readiness action plan that prepares the organization for the Transformation Program. This includes:
A Lesson Learned
Whilst a Readiness Assessment is not strictly necessary prior to launching large change programs, it is certainly useful. In my experience, the small investment required to run this short exercise has paid dividends across multi-year programs. It aligns program sponsors and cross-division stakeholders to the IP Transformation program, which itself accelerates the launch process. In short, this approach is well worth considering prior to investing in a large program, to avoid costly oversights later.
More information…
A Readiness Assessment is normally performed as part of Program Setup. These ideas are explored further in our white paper “Better business case management for IP Transformation.”
Watch for our next blog, The business case for IP Transformation: Realizing the benefits
For more information on creating an effective business case for IP Transformation, please see our earlier blogs on TMCnet, “The Business Case for IP Transformation: Creating the Case” and “The Business Case for IP Transformation: Managing the Service Roadmap.”
We need to ask… “Why Are We Building The Network?”
With IP Transformation programs sponsored and funded by the CTO, delivered by technology-focused teams, and culturally embedded within network operations, it is easy to forget that the overriding objective of many programs is actually to change the service portfolio mix, for the benefit of both customers and the provider.
So, how can Service Portfolio managers ensure that this vision is not lost when the programs are so heavily influenced by technology?
Step-forward…Marketing Are Sponsors Too!
The first step is to ensure that marketing (and service portfolio owners) are identified as key sponsors at the outset of the program. This is often missed: the needs of marketing are identified late in the process (during delivery), often halting the program until the priorities of technology (decommissioning and cost saving) are balanced with those of services (new product launch and portfolio consolidation). The late impact of failing to recognize the central role of the service portfolio can be costly and embarrassing.
Avoid the Road To Nowhere.
Without a clear service roadmap, the program is on a road to nowhere. A service roadmap lays out the direction for service change, and ultimately drives the technical and migration priorities. In setting out a service roadmap, the portfolio team must weigh several considerations:
When analyzing each existing service, the roadmap allows for four outcomes post analysis: legacy mapping, retirement, service substitution, or a new service launch.
Each of these requires specific consideration in planning the program, prior to technology planning.
Legacy Mapping
Legacy mapping is the staple of IP Transformation programs. It is the migration of existing services to equivalent IP circuits. Whilst the underlying technology (transport, edge devices and, in some cases, CPE) may change, the actual service offering remains within the contractual boundaries of equivalence. The service team will still need to determine the customer impact (during and post migration) and whether the service is still delivered, managed and priced correctly in an IP world.
Retirements
Service retirement is often discussed by network operators, but rarely executed, due to the brand issues in disrupting paying customers and the subsequent loss of revenue stream. However, the business case for IP Transformation often predicates the removal of legacy equipment, and older services with a diminished user base are often the loss leaders that halt legacy removal. IP Transformation offers a once-in-a-decade opportunity to finally retire those services. It takes significant account planning, and a combined executive will, but it can be done.
Service Substitution
Service Substitution replaces those services where emulation (equivalency) is not possible. In essence, the service managers identify IP services that can substitute for the similar business features of legacy ones, and position those with customers. Financial modeling and customer demographic analysis are necessary to determine the compelling adoption point (to achieve take-up rates), supported by an orchestrated account management policy to engage customers and meet the program timelines.
New Service Launches
New services (or expansions to existing services) are enabled through the introduction of IP Networks. The roadmap should be used to identify which customers will be targeted, where and when, to enable new product launches during the IP Transformation timeline.
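The four roadmap outcomes above can be thought of as a simple classification step applied to every existing service. The decision rules below are illustrative assumptions, not the portfolio team's actual criteria:

```python
# Tag each existing service with one of the four roadmap outcomes.
# The decision rules here are illustrative assumptions only.
def roadmap_outcome(has_ip_equivalent, substitute_available, declining_user_base):
    if has_ip_equivalent:
        return "legacy mapping"        # migrate to an equivalent IP circuit
    if substitute_available:
        return "service substitution"  # position a similar IP service instead
    if declining_user_base:
        return "retirement"            # plan an account-by-account withdrawal
    return "new service launch"        # a portfolio gap the IP network can fill

print(roadmap_outcome(True, False, False))   # legacy mapping
print(roadmap_outcome(False, False, True))   # retirement
```

Whatever the real criteria, the point is that every service ends up with exactly one outcome, and that classification, not the technology plan, drives the migration priorities.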
Bring It All Together.
Only when the service roadmap has been defined can the technology and migration programs be shaped and prioritized. The service roadmap defines the order in which technology is deployed, the geographies, the user groups and the engagement strategy. Without a service roadmap, the business case itself is called into question, which is why the roadmap is authored before the financial budgets.
The Service Roadmap – The Heart Of IP Transformation
We started this blog by stating that IP Transformation programs are technology-centric. In nature, they are, but at their heart lies the driver for a new service roadmap. Whilst technology is an enabler, the program outcomes are all about marketing, customer demand and return on the investment.
To learn more, please download the white paper “Better business case management for IP Transformation.” And, for more information on creating an effective business case for IP Transformation, please see our earlier blog on TMCnet, “The Business Case for IP Transformation: Creating the Case.”
In a technology-focused environment it is possible to conclude that building the business case for IP Transformation is all about the network, the technology and the associated spend. That would be a mistake. To build an effective business case, network operators must take into account the complexity of the program and its far-reaching impact on their business.
The business case validates and supports the transformation activity. As the network operator invests (both capex and opex), the business case demonstrates the feasibility of the exercise and also that the tangible benefits (the return on investment) warrant the expenditures and opportunity cost. IP Transformation isn’t easy, but a well-executed strategy based on a strong business case will result in years of tangible benefits for your business.
IP Transformation: Steps to Success
The first challenge is to justify the costs. It is crucial to determine how the company is going to realize a return on the money invested.
There are three essential activities you should complete first in order to build a strong business case:
Build a Better Business Case
In my experience, only those business cases that took into account the ‘big picture’ really stood the test of time, and were not revisited or even scrapped during the delivery program.
The current services portfolio and future roadmap for sales offerings must be understood and modeled. Without it, the company is embarking upon change without understanding its very purpose: what it sells, to whom it sells, where, when and how. This applies equally in strategic industries, such as energy distribution and transportation, where infrastructure services are provided to support business and engineering applications.
Why is this so hard to achieve, and so often ignored? There are several reasons that need to be addressed.
#1. Get the Right Sponsors
Sponsors are often technology focused, and they primarily see the feature roadmap and decommissioning benefits. They tend to ignore the wider organizational stakeholder needs and benefits. The technological benefits of change rarely justify the investment on their own. It takes a holistic set of benefits to make the numbers work.
Also, the portfolio is often fragmented across the business, and pulling together the roadmap is seen as almost impossible. This is not an insignificant undertaking, but it is a necessary cost if the real benefits are to be understood and realized.
#2. Do a Technology Audit
The next step is to understand the current technology baseline, and the level of change that is required. This includes not only the physical network assets, but also the data models, the logical service layer, and the associated OSS and BSS changes.
In many cases, a multi-pronged approach to audit is required. This will cumulatively drive a deeper understanding of the known starting position and give a baseline for planning the investment in network and IT change.
Coordinating these technology efforts, driven by different organizations, but with dependent outcomes, takes significant effort and forethought. Only when they are delivered as a combined view can you truly understand the technology roadmap costs.
#3. Establish Good Governance
In parallel to the network change you must determine the scope and effort required to smoothly and quickly migrate the network and IT operations environments. This audit is driven by a different stakeholder base, with their business-as-usual demands and their own drivers for influencing the network and IT change.
I have witnessed several instances of C-levels operating in ‘splendid isolation’ at this stage, and later wondering why the dependencies between Operations, IT and network change were not planned in when considering the business case.
You must impose strict governance and coordination to plan the roadmap, appease all the stakeholders, and identify the true requirements and costs of operations uplift.
#4. Get Full Corporate Visibility
Any financing should take into account the wider business. What parallel investments are occurring elsewhere? Can they be leveraged, or are they going to impede the change program, and cost the company time and money, or cause blocking dependencies later?
In one particular case, I witnessed HR releasing resources through a funded early retirement plan, only for the change program to hire back those very same resources as contractors. This went on for 18 months, with both programs claiming success against their own measures.
Such company investment programs running in isolation are not uncommon. Early analysis can identify dependencies, and save enormous financial impacts later.
IP Transformation: It’s a Journey
Make your business case first, before you embark on your IP Transformation journey. It gives you the map, which then provides the route and directions for the program’s journey. Like any seasoned traveler, I have learned that a sound map and a route marked with clear waypoints is a prerequisite before setting out. The white paper, “Better Business Case Management for IP Transformation” outlines these ideas in more detail.
Watch for our next blog, The business case for IP Transformation: Managing the service roadmap.
By Steve Blackshaw, IP Transformation Product Line Management, Alcatel-Lucent
In his role as Senior Director of IP Transformation at Alcatel-Lucent, Steve Blackshaw leads large-scale network evolution and transformation programs for some of the world’s largest telecommunications service providers.
The promise of cost savings and reduced complexity from enterprises moving to an all-wireless communications network is a seductive one. However, worries still exist among many enterprise IT managers that Wi-Fi is not up to snuff. Indeed, there are still concerns about scalability, quality, and security issues.
A recent TechZine article by Subramania Vasudevan, Director, Advanced Performance in WCTO, Alcatel-Lucent, All-wireless enterprise with LTE and Wi-Fi, notes that enterprise IT managers have a particular lack of confidence in the quality of the wireless link provided by an all Wi-Fi infrastructure.
“There’s the limited ability of the Wi-Fi network to scale with increasing data rate needs,” Vasudevan noted. “In fact, we’ve seen aggregate capacities barely increase — even as Wi-Fi networks densify.”
LTE small cells can help. Small cells help provide in-building LTE on a cost-effective, as-needed basis.
Many mobile operators are considering unlicensed spectrum to bring greater bandwidth into the enterprise, he added. This can help meet the scalability demands. In fact, operators are looking to aggregate LTE in licensed bands along with LTE in the 5GHz unlicensed bands, which are known together as Licensed Assisted Access (LAA) or LTE unlicensed (LTE-U).
The limitations of Wi-Fi often come from the mechanism that shares airtime between the uplink and the downlink. By using an LTE-based system, enterprises can resolve the problem of uplink contention by means of scheduled access. This frees up the enterprise’s existing Wi-Fi for the downlink, according to Vasudevan.
“By offloading the Wi-Fi uplink to cellular, LTE small cells improve enterprise services,” he wrote. “In addition, in-building enterprise traffic, such as Lync application data, can be shunted across the enterprise LAN (i.e., local breakout is enabled).”
At the same time, the combination relies on pre-existing Wi-Fi APs and user equipment, so the LTE downlink capacity can be aggregated with the Wi-Fi APs’ downlink capacity. Users can then see higher throughput in more locations, because they benefit from LTE/Wi-Fi aggregation on the downlink and LTE-only on the uplink.
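The capacity arithmetic behind this claim can be sketched in a few lines. All rates and shares below are invented example figures, not measured values from any deployment:

```python
# Downlink arithmetic for LTE + Wi-Fi aggregation: with the contended Wi-Fi
# uplink moved to scheduled LTE access, the AP's airtime serves the downlink,
# and LTE downlink capacity adds on top. All figures are invented examples.
wifi_ap_capacity = 100.0  # Mbps of total AP airtime
uplink_share = 0.3        # airtime previously consumed by uplink contention
lte_downlink = 50.0       # Mbps of small-cell downlink capacity

wifi_only_downlink = wifi_ap_capacity * (1 - uplink_share)  # before offload
aggregated_downlink = wifi_ap_capacity + lte_downlink       # uplink now on LTE

print(wifi_only_downlink, aggregated_downlink)  # 70.0 150.0
```

In this toy case the downlink more than doubles: the AP recovers the airtime it was spending on uplink, and the LTE carrier adds its own downlink on top.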
The all-wireless enterprise network might be closer than many enterprise IT managers realize. This is a good thing since so many of us use our smartphones as our primary communications device and a significant number of interactions on those devices originate or terminate in-building where coverage and quality of service are a challenge.
If you spend any time in a developing country, you quickly discover that the majority of Internet connectivity comes via cellular connections. For many in developing countries, a smartphone effectively is their first regular connection to the Internet.
Roughly 87 percent of all broadband connections in emerging markets will be by way of cellular by 2017, according to Alcatel-Lucent forecasts. This is especially true in Latin America and the Caribbean; the GSMA estimates that Latin America will have the second-highest installed base of smartphones in the world, behind only Asia Pacific, by 2020.
The latest 4G Americas report shows that Latin America added 17 million LTE connections over the past twelve months, a 324 percent growth rate in connections and the highest in the world.
Small cell technology is helping operators in Latin America and the Caribbean keep up with mobile broadband demand. Small cells are inexpensive to deploy, and they enable operators to add coverage and density as subscriber demand warrants.
“As mobile data usage escalates, adding small cells has become the popular solution,” noted a recent Alcatel-Lucent blog post on the topic, Latin America’s path to broadband increasingly made possible by small cells. The post noted that small cells are increasingly being used as the primary means for servicing cellular connections in Latin America and the Caribbean, with macro cells adding density in areas of particularly high use.
Alcatel-Lucent should know. The company leads the market in Latin America for small cell use according to Frost & Sullivan. In fact, Alcatel-Lucent has more than 50 percent of the market, and has secured 18 contracts in 13 countries since 2013.
“Small cells are the key to bringing mobile broadband to their citizens,” noted the Alcatel-Lucent blog post. “And as operators move from 3G to 4G/LTE networks, small cells play an even more important role in providing increased bandwidth and capacity needed to support advanced communications applications.”
Leading the way in Latin America and the Caribbean are Brazil, Mexico, Argentina and Colombia, with the highest small cells usage. But small cells make so much sense that countries in all parts of Latin America are jumping on the bandwagon.
I’ve always thought of trains as one of the safer modes of transportation. But recent high-profile train accidents remind us that even vehicles on tracks can run into problems that can result in crashes, with potential consequences including death, injury, and property loss.
You may remember the tragic Amtrak accident on May 12 in Philadelphia. It killed eight people and injured more than 200 others. The train derailed while taking a curve for which the maximum recommended speed was 50 miles per hour, but preliminary analysis from the National Transportation Safety Board indicates the train was moving at 102 miles per hour. This wreck put new focus on the need for positive train control, better known as PTC, systems.
The NTSB has been talking about the need to improve railway safety with PTC since 1969. However, when two Penn Central commuter trains collided head on, killing four and injuring 43, things heated up due to the increased national attention. In fact, the NTSB in 2014 put out a “most wanted list” on which implementing PTC systems ranked first. The list also noted at least six other railroad accidents from 2008 to 2012.
“PTC systems work by monitoring the location and movement of trains, then slowing or stopping a train that is not being operated in accordance with signal systems and/or operating rules,” the NTSB explains. “This safety redundancy prevents train-to-train collisions and overspeed derailments, as well as the associated injuries and fatalities to passengers, railway workers, and others.”
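The enforcement logic the NTSB describes can be sketched as a simple check: compare the train's reported speed against the civil limit for its current track segment, and brake on overspeed. The segment names, limits and warning margin below are illustrative assumptions, not from any real PTC implementation:

```python
# Enforcement check: compare the train's reported speed against the civil
# speed limit for its current track segment. Segment names, limits and the
# warning margin are illustrative assumptions.
SEGMENT_LIMITS_MPH = {"frankford_curve": 50, "northeast_mainline": 110}

def ptc_action(segment, speed_mph, margin_mph=3.0):
    limit = SEGMENT_LIMITS_MPH[segment]
    if speed_mph > limit + margin_mph:
        return "apply penalty brake"  # overspeed: slow or stop the train
    if speed_mph > limit:
        return "warn engineer"        # just over the limit: alert only
    return "no action"

print(ptc_action("frankford_curve", 102))  # apply penalty brake
```

The safety redundancy comes from the fact that this check runs regardless of what the engineer does: a train entering a 50 mph curve at 102 mph would be braked automatically.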
Yet for all the talk about the need for PTC systems, and the fact the government has set requirements regarding the installation of PTC systems, most U.S. railroads will fail to install positive train control by the Dec. 31 federally mandated deadline, notes Thierry Sens, marketing director of the transportation segment at Alcatel-Lucent in a recent TrackTalk article, Give PTC* the best chance of success with IP/MPLS.
That said, PTC systems do exist. Toward the middle of this year an estimated 14,300 of the 22,000 locomotives in the U.S. were partially equipped with PTC, Sens says. Plus, 19,000 of the 32,600 wayside interface units and 1,800 of the 4,000 base station radios required for PTC had been installed since the government in 2008 ordered PTC be installed on lines carrying hazardous materials or passengers.
As the NTSB paper notes, PTC systems are in use on the Northeast Corridor and on the Michigan Line between Chicago and Detroit. And as Sens discusses, Norfolk Southern is also among the organizations moving PTC forward by upgrading its communications network to IP/MPLS.
The IP/MPLS network allows the railroad, which is one of the nation’s largest (with a 34,600 km network), to separate and prioritize traffic, and provides the resiliency required for the important PTC function via its fast reroute, link aggregation group, non-stop routing, and non-stop services capabilities. Alcatel-Lucent’s ADSL+ solutions, integrated access devices, microwave technology, and Service Access Routers power the Norfolk Southern IP/MPLS network, which was first deployed in 2010 and now operates in 22 states.
“PTC is the right thing for the U.S. railroad industry, particularly following recent high-profile accidents,” says Sens. “It will prevent train-to-train collisions, derailments caused by excessive speed, unauthorized incursions on track where maintenance is taking place and the movement of a train through a switch left in the wrong position. Its interoperability features are also a critical element of an efficient and successful rail network.”
Attorney Barlow Keener agrees. As he mentions in a recent INTERNET TELEPHONY magazine column, railroad safety and railroad viability both matter for railroad companies and their riders, as well as for the American economy itself. According to the Federal Railroad Administration, he notes, 140,000 miles of U.S. railroads deliver 40 percent of all national freight.
The cloud era is here. Do you think your network is ready? As a network operator, you will need to deliver on-demand network services that are just as dynamic as the cloud services that now dominate network traffic. You face many challenges in making this happen.
But a new study from ACG Research shows you can achieve this quickly and profitably with advancements that are available now. Their analysis of the new Alcatel-Lucent Network Services Platform in a national network scenario showed you can cut service creation time, generate more revenue, and achieve significant ROI very quickly.
Network complexity & waste block profitability
So what’s stopping you with the present mode of operation? Complexity and waste are getting in the way of your profitability.
The business processes used to plan, build, and operate network infrastructure involve manual handoffs between the network engineering processes that control network resources and the network operations processes that provision services. Each is further divided into separate packet and transport silos. OSSs/IT and element management systems are forced to interoperate with the network through multiple, complex, and vendor-specific APIs.
The impact of these limitations can be crippling as operators make the transition from static to dynamic network services.
Carrier SDN: Automate & optimize for true freedom
Carrier SDN offers a fresh way forward. The NSP leverages Carrier SDN to unify service automation and network optimization in one integrated platform. The result is that network operators can deliver dynamic services quickly, efficiently, and at great scale.
The NSP accomplishes this by:
ACG Research test results
To put the NSP to the test, ACG Research compared an NSP-enabled national network with one run under the present mode of operation (PMO), each used to deliver bandwidth calendaring and bandwidth-on-demand services to a target market of 10,000 large enterprises.
ACG found the following:
Operators can achieve the dramatic increase in revenue with the NSP, compared to the PMO, because it can improve capacity utilization by 40 percent. This utilization improvement enables profitable operation at a price point that is 29 percent lower than the PMO price point. This, in turn, stimulates demand relative to what is possible using the PMO. The NSP’s 58 percent faster service creation time also provides a prime-mover advantage and advances revenue recognition.
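To see how those percentages interact, here is a small worked sketch. The baseline utilization, network cost and price point are invented for illustration; only the 40 percent utilization improvement and the 29 percent lower price come from the study figures quoted above:

```python
# Worked sketch of the ACG comparison. Baseline utilization, network cost
# and price point are invented; only the 40% utilization improvement and
# the 29% lower price come from the study figures quoted above.
pmo_utilization = 0.50
nsp_utilization = pmo_utilization * 1.40   # 40% better capacity utilization
network_cost = 100.0                       # cost of total installed capacity

pmo_cost_per_sold_unit = network_cost / pmo_utilization
nsp_cost_per_sold_unit = network_cost / nsp_utilization

pmo_price = 250.0                          # illustrative PMO price point
nsp_price = pmo_price * (1 - 0.29)         # 29% lower NSP price point

# Better utilization cuts the cost of each sold unit by roughly 29%, which
# is what makes the lower price point profitable rather than loss-making.
print(round(nsp_cost_per_sold_unit / pmo_cost_per_sold_unit, 3))  # 0.714
print(round(nsp_price, 2))                                        # 177.5
```

In other words, a 40 percent utilization gain reduces the cost of each sold unit to about 71 percent of its former level, which is almost exactly the room needed to fund a 29 percent price cut.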
To be as dynamic as the cloud services that now dominate network traffic, you will need to:
We believe the NSP can help you make this happen. Download the Carrier SDN business case and register for our upcoming webinar series to find out how you can cut service creation time and grow revenue.
Ninety percent of those 4.2 billion people without access live in the developing world, and in the least developed countries less than one person in 10 is online. Meanwhile, in the developed world, 82 percent of the population is online.
These statistics are laid out in a new blog by Marcus Weldon, president of Bell Labs and CTO of Alcatel-Lucent, who calls on people and companies to do their part to help the Broadband Commission achieve its goals of flattening the digital playing field across the globe and among different groups of people. In his blog, Weldon describes the role that today’s “digital deserts” play in setting up a long-term environment in which one set of people can collaborate, communicate, and conduct commerce, while another group, whom he calls “an analog underclass,” operates primarily in physical space and, when its members do want to connect digitally, must wander from connected oasis to connected oasis.
“If we want to avoid this dystopia, we all need to do more to help the [Broadband] Commission and its incredibly laudable goals. And this must start at home – in the organizations for which we work and in which we are involved,” writes Weldon, who has already offered Bell Labs resources to help the Broadband Commission create and build on projects to bring connectivity to those who lack it.
The Broadband Commission has been around since 2010, but was just re-chartered with the aim of helping achieve the United Nations’ 17 Sustainable Development Goals.
“The UN Sustainable Development Goals will stimulate action over the next 15 years in areas of critical importance for humanity and the planet,” explains ITU Secretary-General Houlin Zhao. “All three pillars of sustainable development – economic development, social inclusion, and environmental protection – need ICTs as key catalysts. That is why the Commission believes that ICTs, and particularly broadband, will be absolutely crucial for achieving the SDGs.”
The ITU Secretary-General made those comments last month during The Broadband Commission for Sustainable Development, an ITU and UNESCO gathering at which high-profile people representing academia, government, and industry came together to discuss and debate how to accelerate the adoption and availability of broadband around the world. The event had a special focus on developing and less developed nations and groups such as the disabled, non-English speakers, rural dwellers, and women.
Communications business magnate Carlos Slim Sr. and the president of Rwanda chair the commission. Other members include former FCC chairman Kevin Martin, who is now with Facebook; Bharti Enterprises CEO and founder Sunil Bharti Mittal; MIT Media Lab founder Nicholas Negroponte; Jeffrey Sachs, who is the special advisor to the U.N. Secretary General and an expert on poverty; and several operator and vendor CEOs, and telecom ministers from around the world.
For large enterprises, small cells make a lot of sense.
Upwards of 80 percent of all mobile usage now occurs indoors, according to Alcatel-Lucent, and enterprise small cells offer a flexible and economical way to deliver reliable in-building mobile connectivity.
Recently a field trial held at a large financial institution in Mumbai showed the potential of enterprise small cells. Small cells bathed a 45,000-square-foot, all-glass office space with cellular connectivity that replaced an existing DAS and delivered a call drop rate of only 0.87 percent, an increase in average throughput of 42 percent, and a boost in peak throughput of 82 percent, according to a recent TechZine posting, Field insights: Deploying enterprise small cells, that went into detail on the deployment.
Impressively, this was done with the use of only nine small cells.
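The trial numbers quoted above imply a striking coverage density. A quick calculation (the division is ours; the inputs are from the post) puts it in perspective:

```python
# Coverage density implied by the Mumbai trial: 9 small cells
# covering a 45,000-square-foot, all-glass office space.

office_area_sqft = 45_000
cells_deployed = 9
cells_in_initial_design = 12  # the early design later trimmed to 9

area_per_cell = office_area_sqft / cells_deployed
print(f"Each cell covers about {area_per_cell:,.0f} sq ft")  # 5,000 sq ft

# The upfront redesign also cut the hardware count by a quarter:
savings = 1 - cells_deployed / cells_in_initial_design
print(f"Cells saved versus the initial design: {savings:.0%}")  # 25%
```

Roughly 5,000 square feet per cell, with a quarter of the originally planned hardware eliminated before deployment.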
There were five key takeaways from the field trial that large enterprises should note.
First, don’t forget about macro cell connectivity. It is easy to focus on femto-to-femto handovers and overlook macro cells, but ignoring macro cell connectivity can greatly reduce the effectiveness of an enterprise small cell deployment.
Second, the field trial found that IP/backhaul expertise helped the small cells deployment meet all key performance indicators despite the fact that the core network the financial center was connecting with was more than 1,440 km away in Delhi.
Third, the trial found that proper advance planning made a huge difference.
“In the Mumbai enterprise, an early solution design called for using 12 cells across the 45,000-square-foot office space. But the initial design was then optimized upfront, based on network expertise and Bell Labs tools, which eliminated 3 small cells,” noted the Alcatel-Lucent blog post. That’s significant.
Fourth, scalability needs to be kept in mind when it comes to enterprise small cells. Enterprises often need to expand capacity, and not all small cell configurations can scale to meet extra demand later on. But a proper small cell architecture can enable scalability as needed.
Finally, the field trial found that reliability should be a point of focus when designing enterprise small cells configurations.
“The most reliable enterprise small cell solutions avoid single points of failure,” noted the Alcatel-Lucent blog. “Each of the nine cells used in the Mumbai financial institution operates independently. That makes sure that any failure is isolated and does not affect the rest of the network.”
Enterprise small cells deployment makes a lot of sense. But the devil is in the details.
Roughly 90 percent of all EU jobs will require some ICT skills in the near future, yet 39 percent of EU workers have little or no ICT skills as of 2014, according to the European Commission. In the U.S., the digital skills gap between what’s needed of employees and what’s available in the market comes at an estimated cost of $1 trillion per year in lost productivity, according to estimates from Entrepreneur.com. ICT-based employment is growing 7 times faster than overall employment in the EU, too.
The situation is even worse in developing countries, where ICT training is often lacking—especially for girls. While 77 percent of the population in developed countries is online, only 31 percent of people in developing countries have access according to ITU figures for 2013. And globally, women are 16 percent less likely than men to have Internet access.
Looking to help with this problem, Alcatel-Lucent and World Education developed the ConnectEd program, which helps disadvantaged youth achieve better learning outcomes, become better prepared for the world of work, and engage meaningfully in their communities. Between 2011 and 2015, the program provided training to 25,000 young people in Australia, Brazil, China, India, and Indonesia. Roughly 58 percent of those helped were girls, the group with the greatest need.
The ConnectEd program had a huge impact on the lives of the young people it helped. More than 90 percent of program participants passed ConnectEd digital skills training, and more than 95 percent of the in-school youth remained in school. In Indonesia, 21 ConnectEd students even broke the stereotypes against street children and entered university as a direct result of the program.
“In all countries, what comes out most strongly in terms of ConnectEd’s longer-term impact are the effects of having improved confidence,” noted Estelle Day, director of the ConnectEd program, in a recent blog post.
“It sounds such a small thing, but for excluded youth, it seems to be a key to unlocking their potential,” she added. “Disadvantaged youth, more than anything, need someone who believes in them, respects them, who identifies their strengths and helps build on them. And that is where, I believe, ConnectEd and the inputs of Alcatel-Lucent volunteers have had so much power.”
Most community giveback programs make a difference. But when it comes to helping disadvantaged youth build ICT skills, such programs can make a huge difference in the lives of those they help. ConnectEd is one such program.
Sometimes fiber to the subscriber is the best fit to support broadband services for residential and small and medium business customers. However, existing copper continues to show an amazing ability to be enhanced to meet broadband requirements. Indeed, copper-based technologies such as VDSL2 vectoring, Vplus, and G.fast can support bandwidth rates of 100 Mb/s, 300 Mb/s, or even 1 Gb/s.
To decide which areas are ideal candidates for fiber-to-the-home (FTTH) or business, and which can be more than adequately served with copper-based technologies, Bell Labs Consulting suggests that service providers consider:
“To do this, service providers need to conduct a thorough access study, including a detailed market analysis of the service area,” Mohamed El-Sayed, consulting manager of the network strategy and technology evolution practice of Bell Labs, writes in an aptly titled recent TechZine article, Study shows ultra-broadband potential of copper. “With this information, the service provider can determine present and near-future bandwidth demand.”
The average bandwidth required for a fixed network in a residential area can vary significantly based on all of the above. Here are a few of the many related data points mentioned in the blog. A study by Alcatel-Lucent suggests that the current upper-bound broadband access rate is about 50 Mb/s and will be 100 Mb/s by 2020. A Bell Labs study for a major operator in Western Europe indicates 40 Mb/s is sufficient for triple-play residential services there. And a study by U.K. government regulator Ofcom reports that the average fixed residential broadband subscriber gets 22.9 Mb/s, and that broadband with a minimum download speed of 30 Mb/s is available to three-fourths of subscribers but has seen only 21 percent penetration.
“For residential and SMB subscribers, high-speed copper technologies can deliver bandwidth in excess of current and anticipated demand,” says El-Sayed.
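El-Sayed’s point can be checked against the numbers already cited. This sketch lines up the copper technology rates mentioned earlier against the demand figures from the studies above; the inputs are from the post, while the pass/fail comparison itself is ours:

```python
# Copper technology rates (from the post) versus cited demand points.

copper_rates_mbps = {
    "VDSL2 vectoring": 100,
    "Vplus": 300,
    "G.fast": 1000,
}

demand_points_mbps = {
    "current upper-bound access rate": 50,
    "projected upper bound by 2020": 100,
    "triple-play (Western Europe study)": 40,
    "average UK fixed broadband (Ofcom)": 22.9,
}

for tech, rate in copper_rates_mbps.items():
    meets_all = all(rate >= need for need in demand_points_mbps.values())
    print(f"{tech}: {rate} Mb/s - meets every cited demand point: {meets_all}")
```

Even the most modest of the three technologies clears the highest cited demand point, the 100 Mb/s upper bound projected for 2020, which is what lets El-Sayed claim headroom "in excess of current and anticipated demand."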
The bottom line is that extending the life of copper provides two major benefits. First, it is less costly than putting in fiber, particularly in residential or rural areas. Second, it enables service providers to offer ultra-broadband services quickly. In a hotly competitive world with a seemingly insatiable appetite for high-speed services now, this second point is as important as, if not more important than, the first.
Small cells are a boon for mobile network operators, as they easily and cheaply expand wireless network connectivity. However, they also can strain an operator’s evolved packet core (EPC).
“The EPC may be called upon to deliver a significant increase in scale, capacity, and performance beyond that which was required initially to support the macro-cellular network,” noted David Nowoswiat, Sr. Product and Solutions Marketing Manager, Alcatel-Lucent in a recent TechZine posting, Is your EPC ready for the small cells onslaught? He suggests that operators look at three areas when examining if their EPC is up for the challenge.
First, is the network architecture ready for numerous small cells? Two of the options involve adding a small cell gateway to aggregate control and/or user traffic from a group of small cells back to the EPC, while a third option connects each small cell directly to the EPC.
Adding a small cell gateway reduces the scaling and capacity requirements of the EPC but increases the network and operations complexity, and connecting the EPC directly to each small cell significantly increases its scalability and performance requirements yet keeps the network flat. Each operator will need to assess what makes sense in their particular case.
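The trade-off can be illustrated with a toy counting model. The cell count below is hypothetical, and the model deliberately reduces the architecture choice to one dimension (how many connections the EPC must terminate); real assessments would weigh signaling rates, mobility patterns, and gateway cost as well:

```python
# Toy model: a small cell gateway collapses many connections into one
# from the EPC's point of view, at the cost of an extra aggregation
# element; direct connection keeps the network flat but grows EPC
# state linearly with cell count.

def epc_facing_connections(num_small_cells: int, use_gateway: bool) -> int:
    """Connections the EPC must terminate toward the small cell layer."""
    return 1 if use_gateway else num_small_cells

small_cells = 500  # hypothetical dense enterprise/metro deployment

print("Direct to EPC:", epc_facing_connections(small_cells, use_gateway=False))
print("Via gateway:  ", epc_facing_connections(small_cells, use_gateway=True))
```

Even this crude model shows why the answer is operator-specific: the gateway holds EPC-facing scale constant as cells are added, while the direct model trades EPC scaling headroom for a simpler, flatter network.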
Second, does the EPC support the scaling and performance demands of the additional small cell load?
“If it’s directly connected to the small cell network, the biggest impact is on the control plane and the mobility management entity (MME) -- with all of the additional signaling that’s required,” noted Nowoswiat. But the EPC also should support an integrated and operationally simple model.
Third, is the mobile operator able to offload data to take some of the load off the EPC? Local breakout options can be implemented in small cell networks to offload data traffic that brings little value to the mobile operator, sparing the EPC the added load. In that case, though, the EPC must support the requirements needed to redirect traffic to the appropriate gateway and packet data network.
Nowoswiat questions whether most EPCs are up to the challenge. Is a virtual EPC a better option and a way to handle the extra load from small cells? While the answer is “it depends,” to learn more about EPC and small cell network choices the whitepaper Evolved Packet Core for Small Cell Networks, which compares architecture options, is a great place to start.
Carriers’ mobile networks are extremely vulnerable to sudden changes in the signaling behavior of popular applications. Patrick McCabe, Senior Product Marketing Manager, Alcatel-Lucent, delves into this subject in some detail in a recent blog, Google’s power to impact network signaling. While Google Cloud Messaging provides the example in the blog, the company’s recent Mobile Device Report covers the impact of the top mobile apps on signaling in greater detail.
Google Cloud Messaging for Android, according to the search giant, is a service that allows data to be sent from the App Engine or other backends to users’ Android-powered devices. That could involve the transmission of a lightweight push notification telling an Android application that there is new data to be accessed from the server (like a movie uploaded by a friend) or a message containing up to 4kb of payload data (so apps like IM can consume the message directly).
Such apps and interactions, however, can have a notable and negative impact on both mobile networks and the endpoints connected to them, according to McCabe. And, in the case of Google Cloud Messaging for Android there is ample evidence it already has.
The study by Alcatel-Lucent indicated there was a dramatic increase in signaling traffic from Jan. 12 to Feb. 19 due to the Google Cloud Messaging application. That involved a Jan. 12 signaling increase from 17 percent to 20 percent. Then, on Feb. 4, such signaling went from 21 percent to a peak of 23 percent. Signaling related to this Google application returned to expected levels on Feb. 19, according to Alcatel-Lucent, which added that these variations were not due to any increase in active subscribers.
Alcatel-Lucent is highlighting this to raise awareness of the challenges the explosion in applications creates for the signaling network and the mobile network at large, as well as the drain it puts on connected user endpoints (in this case, Android smartphones).
“Although a rise in signaling share from 17 percent to 23 percent on a single application may appear rather innocuous at first, it does have a significant impact on mobile networks,” writes McCabe, based on information derived from Alcatel-Lucent’s Motive Wireless Network Guardian for mobile network analysis. “During this period of signaling increase, an average erosion of 6 percent in overall signaling capacity was experienced across the networks that were analyzed. This is a costly loss that can place a large strain on radio resources, and it can even cause outages in locations that were already operating close to capacity — or where there was a dominant proportion of Android users.”
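The arithmetic behind McCabe’s quote is worth making explicit. This sketch uses only the figures quoted in the post:

```python
# The app's share of network signaling, per the Motive Wireless
# Network Guardian figures quoted above.

baseline_share = 17  # percent of signaling attributed to the app initially
peak_share = 23      # percent at the Feb. 4 peak

increase_points = peak_share - baseline_share
print(f"Rise in signaling share: {increase_points} percentage points")
# That 6-point rise from a single application lines up with the reported
# 6 percent average erosion of overall signaling capacity.
```

In other words, one application’s behavior change consumed a measurable slice of total signaling headroom across every network analyzed, which is why per-app signaling visibility matters.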
Consideration of the impact of increasingly app-centric use of the network tends to focus almost exclusively on traffic in general. However, for all of those apps to work with a high quality of service (QoS), the signaling network needs to be able to understand and accommodate the spikes that various types of apps can cause. That is why having network visibility into app impact on signaling is so important.
It’s monsoon season here in Arizona, so we desert dwellers know as much as anybody about the power of a storm. We also understand the problems that storms can create, such as taking out the power.
However, natural occurrences like storms and other unexpected events like power line cuts by backhoes aren’t the only external challenges with which power utilities have to contend. In a recent blog Dave Christophe, Director of Utilities Marketing at Alcatel-Lucent, explained that there’s now an additional consideration that could negatively impact power company abilities to bring people and businesses power consistently, cost effectively, and safely. That is the systematic decommissioning of legacy telephone and data networks.
Sunsetting analog, frame relay, and TDM networks, Christophe explains, eliminates the communications infrastructure on which power utilities have long relied to transmit data from substations and to perform teleprotection, to name just a couple of examples.
Christophe in his piece references a recent article by his colleague Mark Madden, Vice President of North American Utilities at Alcatel-Lucent, in which the latter notes the risks of such sunsetting and offers tips on steps utilities should take to avoid any interruption in the networks on which they rely – and thus in their power infrastructure and services overall. Madden also provides an example of what the transition away from legacy communications networks could lead to if not managed properly.
The example involves a regional utility that depends upon circuit-switched and frame relay technologies to support dynamic line rating sensors that track the characteristics of high-voltage transmission lines, including heat load and sagging.
“Imagine that the carrier that provided their circuit-switched and frame relay network –which, although outdated, were reliable – suddenly served notice that they planned to shut down the service within 120 days,” writes Madden. “This might sound extreme, but it is a realistic scenario. Required notice periods in many parts of the country are very short.”
To avoid getting into such a pinch, Christophe and Madden urge utilities to develop plans to transition from legacy to newer communications services and technologies.