BFSI VISION | Virtualization

1. How data virtualization speeds application delivery

Source- GCN

Applications are the lifeblood of government IT. They help agencies do more with less while delivering services for citizens and warfighters. In government's increasingly transparent and budget-conscious world, agencies are being pressured to modernize their applications and IT environments to quickly roll out more comprehensive applications.

While more applications will increase the services government can offer, they also bring challenges, namely time and budget.

Traditionally, application installs or upgrades can take months, or even years, to complete, which drives up costs. Often a development team's request for a copy of the last application release gets bogged down while waiting for database, storage and systems administrators to allocate storage, network and server space. Additionally, each copy of an application environment and its data consumes redundant infrastructure and requires weeks of time to provision.

Virtualizing data, however, can significantly increase the success rate of an agency's application development projects by enabling better data management and faster delivery of applications in a risk-free environment.

Virtualization involves creating a virtual instance of a specific item – whether it's a server, application or desktop. In the case of applications, data can be virtualized from databases, data warehouses, applications and files. This enables application development teams to create environments in minutes – eliminating the inefficient process associated with building environments for the application and testing teams.

When copies of applications are created, data blocks are shared across all copies so each copy looks and behaves like its own separate, fully functional environment for testing, development and reporting. Using application programming interfaces (APIs), source applications can connect with one another, copy data blocks and create a single master image that can be shared across more than 20 virtual environments.

As data blocks are shared across numerous copies, each new environment takes up only minimal space and can be created in minutes.
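
To make the block-sharing idea concrete, here is a minimal sketch in Python (illustrative only, not any vendor's API) of virtual copies that share a master image and store only their own changes:

    # Illustrative sketch: virtual data copies that share unmodified blocks
    # with a single master image, copy-on-write style.

    class MasterImage:
        """Single golden copy of the source data, stored once."""
        def __init__(self, blocks):
            self.blocks = blocks          # block_id -> bytes

    class VirtualCopy:
        """A lightweight 'full' environment: only changed blocks are stored."""
        def __init__(self, master):
            self.master = master
            self.delta = {}               # block_id -> locally modified bytes

        def read(self, block_id):
            # Reads fall through to the shared master unless overwritten locally.
            return self.delta.get(block_id, self.master.blocks[block_id])

        def write(self, block_id, data):
            # Writes never touch the master, so every copy stays isolated.
            self.delta[block_id] = data

    master = MasterImage({0: b"customers", 1: b"orders"})
    dev  = VirtualCopy(master)    # provisioned in moments, near-zero storage
    test = VirtualCopy(master)
    dev.write(1, b"orders-migrated")
    assert test.read(1) == b"orders"   # copies remain independent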

Benefits of virtualizing
Using virtualization to roll out new applications provides agencies several advantages, including reduction in time, money and resources. Overall, the biggest benefit is increased project output. The automated process of creating many copies of data so the right people can access what they need, when they need it, eliminates approval delays and cross-departmental dependencies. It also lets application development teams conduct parallel development and testing, increasing project output by up to 80 percent. Likewise, access to point-in-time application data also helps agencies provide greater quality assurance and lets them fix errors in significantly less time.

Other benefits include:

  • Infrastructure space requirements can be reduced by 90 percent.
  • Application development teams can access everything they need as a self-service.
  • Virtual application data copies can be refreshed in minutes, bookmarked and rolled back to previous points in time (see the sketch after this list).
  • Shared data blocks give applications flexible retention policies so data can easily be retained or recovered for backup, disaster recovery or application archival.
  • Complex application projects and tools can be deployed in minutes, running on any server and in any storage environment – including public or private clouds or hybrid environments.
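
Continuing the earlier sketch (and reusing its VirtualCopy and master definitions), the bookmark-and-rollback behavior described above might look like this, assuming point-in-time snapshots are captured as deltas:

    # Hypothetical bookmark/rollback support, modeled on how point-in-time
    # snapshots typically work; builds on the VirtualCopy sketch above.

    class SnapshotCopy(VirtualCopy):
        def __init__(self, master):
            super().__init__(master)
            self.bookmarks = {}

        def bookmark(self, name):
            # Capture only the delta; shared master blocks need no copying.
            self.bookmarks[name] = dict(self.delta)

        def rollback(self, name):
            # Restore the copy to a prior point in time in one step.
            self.delta = dict(self.bookmarks[name])

    env = SnapshotCopy(master)
    env.bookmark("before-upgrade")
    env.write(0, b"customers-v2")
    env.rollback("before-upgrade")
    assert env.read(0) == b"customers"
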
The virtual application development approach enables broad access to data at any time without individuals being forced to wait for a lengthy request process. Virtualized data also ensures the right data is available to the right people when they need it – promoting the efficiency and cost effectiveness needed in government.

With this accelerated approach, agencies can innovate to supply warfighters, citizens or internal stakeholders the apps they need, when they need them.


2. Is Virtualization Right for You?

Source- Billing World
Network Function Virtualization (NFV) is gaining momentum throughout the cable industry, but the long-term implications of this trend are yet to be seen. Though dedicated physical infrastructure is currently the most efficient way to provision and optimize individual services, these hardware-based environments leave service providers few opportunities to customize their networks or to address the dynamic, fluctuating bandwidth requirements brought on by booming multimedia services like OTT and P2P content.

So which aspects of virtualization are right for you?

First, you should consider the effects on CAPEX. By leveraging NFV on the CMTS/CCAP, you can dynamically adjust and optimize specific functions for a flexible period of time. If your subscriber base is forecast to grow by an amount that doesn't justify spending money on a new physical CMTS, there are significant benefits in being able to temporarily increase resources through a virtual instance. This also ensures that you aren't purchasing hardware that will end up underused after an unexpected boost in bandwidth demand has subsided. All that said, there are advantages to a virtual CCAP approach, which replaces analog nodes with digital counterparts, ultimately providing a path to a pure IP solution over the existing network from head-end to node. Using a virtual CCAP can offer efficiency gains in fiber utilization, power requirements and physical space footprint.
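
As a rough illustration of that CAPEX trade-off, consider some back-of-the-envelope arithmetic; the figures below are entirely hypothetical, since the article quotes no numbers:

    # Back-of-the-envelope CAPEX comparison with purely illustrative figures.

    physical_cmts_capex = 250_000     # one-time hardware purchase ($, assumed)
    virtual_monthly_cost = 4_000      # per month for a virtual instance ($, assumed)
    months_of_peak_demand = 9         # how long the extra capacity is needed

    virtual_total = virtual_monthly_cost * months_of_peak_demand
    print(f"Virtual instance for the peak: ${virtual_total:,}")
    print(f"New physical CMTS:             ${physical_cmts_capex:,}")
    # If the demand spike is temporary, $36,000 of virtual capacity avoids a
    # $250,000 purchase that would sit underused once the spike subsides.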

CableLabs is also spearheading a new initiative that explores virtualizing cable modems (vCMs) into cloud-based systems, reducing costs by offloading much of the devices' intelligence to the cloud on demand. The vCM fits nicely into the realm of Software Defined Networking (SDN) via L2VPN, which can dramatically improve service delivery capacity and reduce the number of errors in setting up complex network connections. The results of this initiative could have a major impact on ISP trends, but configuring, provisioning, and collecting data from virtual CMs is not yet fully tested or understood. CableLabs is continuing to partner with industry leaders to test this new concept and bring CM virtualization to reality.

NFV aligns with other virtual-design trends, like SDN, to assist with your network operations and innovations. While the two are not dependent on one another, SDN mechanisms can complement the goals of NFV by separating the control and data functions within a network to optimize traffic management and bandwidth routing via low-level abstraction layers, ultimately reducing capital expenditures by eliminating wasteful overprovisioning. This programmable network logic simplifies NFV compatibility with existing network deployments, and it facilitates and automates operational procedures to improve efficiency, speed and accuracy, reducing OPEX while leveraging your existing control plane.
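
To make the control/data separation concrete, here is a toy Python model (not a real SDN controller API): the controller owns the network-wide view and pushes forwarding rules down to switches that do nothing but table lookups:

    # Toy model of SDN's split: centralized control plane, dumb data plane.

    class Switch:
        """Data plane: forwards packets by flow-table lookup only."""
        def __init__(self, name):
            self.name, self.flow_table = name, {}   # match -> out_port

        def forward(self, dst):
            # A table miss would be sent up to the control plane.
            return self.flow_table.get(dst, "controller")

    class Controller:
        """Control plane: computes routes and pushes rules down."""
        def __init__(self, switches):
            self.switches = switches

        def install_route(self, dst, port_by_switch):
            for sw in self.switches:
                sw.flow_table[dst] = port_by_switch[sw.name]

    edge, core = Switch("edge"), Switch("core")
    ctl = Controller([edge, core])
    ctl.install_route("10.0.0.7", {"edge": "port2", "core": "port5"})
    assert edge.forward("10.0.0.7") == "port2"  # data plane acts on pushed state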

Operation and Business Support System Function Virtualization (OBSSFV), which covers OSS/BSS software tools like provisioning and billing systems, can also be migrated easily from existing hardware to newer hardware with minimal disruption to operational processes – a significant advantage of virtualization. Adding virtual instances in the background of your current systems can insure network services against the risk of physical hardware failures. It is extremely important for network operations that OSS/BSS solutions can be virtualized to reduce costs, increase efficiency and provide a framework for rapid service recovery.

From a data provisioning and services perspective, NFV shouldn't affect the way service APIs or other provisioning and management tools will run on your network. If set up correctly, a virtualized instance will react to software tools the same way as its hardware equivalent.
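
A minimal sketch of that point, with hypothetical class and function names: a provisioning tool coded against a common interface cannot tell whether the element behind it is physical or virtual:

    # Hypothetical sketch: the same provisioning call works against either
    # a physical or a virtualized element.

    from abc import ABC, abstractmethod

    class NetworkElement(ABC):
        @abstractmethod
        def provision(self, subscriber_id: str, tier: str) -> str: ...

    class PhysicalCMTS(NetworkElement):
        def provision(self, subscriber_id, tier):
            return f"hw: {subscriber_id} -> {tier}"

    class VirtualCMTS(NetworkElement):
        def provision(self, subscriber_id, tier):
            return f"vm: {subscriber_id} -> {tier}"

    def provisioning_tool(element: NetworkElement):
        # The tool is unaware of which implementation it is driving.
        return element.provision("sub-1001", "100Mbps")

    assert provisioning_tool(PhysicalCMTS()).endswith("100Mbps")
    assert provisioning_tool(VirtualCMTS()).endswith("100Mbps")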

With the implications of NFV, SDN, OBSSFV, and the advancements of DOCSIS 3.1 and new broadband network protocols, this is an exciting time for the cable industry. We're looking forward to seeing where these latest developments take us!


3. Data center virtualization drives up adoption of next-gen firewalls

Source- Billing World
As businesses continue to consolidate their data centers and adopt virtualization, software-defined networking (SDN) and cloud computing, the use of next-gen firewalls is rising, says a new ABI Research report.

Companies deploying virtualized data centers are turning to next-generation firewalls (NGFW), which provide a security architecture that can protect, scale and evolve with an organization's virtualization needs. While still a developing segment, ABI said there is a niche market for NGFW in virtualized data centers, valued at $375,000 in 2014.

"NGFWs deliver much more granular control than traditional firewalls by being application and user aware, which in turn ensures better security without impacting user productivity," says Monolina Sen, ABI Research's senior analyst in cybersecurity.

A number of service providers and traditional vendors, including Trend Micro, Cisco, Imperva, NTT Com Security, Centrify and Veeam Software, have emerged as key players in the data-center virtualization security market.


4. Data center virtualization drives next generation firewalls

Source- Infotech Lead
Recent research from ABI Research reveals that virtualization of data centers is driving the adoption of next-generation firewalls (NGFW).

ABI Research believes there is a niche market for NGFW in virtualized data centers, valued at US$375,000 in 2014.

According to ABI Research, there is an emerging need for "virtualization-aware" security solutions that monitor intra-server communications flows and protect virtual resources.

Data center operators today are under pressure to roll out new applications and services faster than ever before. To address these pressures, they are turning to technologies like virtualization, software-defined networking and cloud computing.

Virtualization of data centers enables organizations to use their data center hardware more effectively, reducing costs and improving operational efficiency. As traditional data centers evolve into virtualized and cloud computing environments, however, they pose significant new security challenges that need to be addressed.

Traditional tools such as antivirus software and firewalls fail to address the dynamic nature of the virtualized environment and cannot tie policies to virtual machine creation or movement. This leaves organizations open to the risk of cyberattacks and the loss of critical business data.

Next-generation firewalls (NGFW) have emerged as the security solution of choice for many virtualized data centers as they provide a security architecture that can protect, scale, and evolve with virtualization needs, ABI Research said.

"NGFWs deliver much more granular control than traditional firewalls by being application and user aware, which in turn ensures better security without impacting user productivity," says Monolina Sen, ABI Research's Senior Analyst in Cybersecurity.

Players like Trend Micro, Cisco, Imperva, NTT Com Security, Centrify, Veeam Software, and others offer innovative and interesting offerings in the data-center virtualization security market.


5. Virtualization Reality Check

Source- PipelinePub
Virtualization – instantiating physical devices in software – is the topic of discussion as service providers transform their businesses from capacity and cables to digital services and connected applications. By establishing cloud platforms and enlisting partners, service providers worldwide are building profitable businesses that deliver managed services and operations, machine-to-machine (M2M) platforms, and connected applications.

In addition to data center virtualization of IT elements, network function virtualization (NFV) is seen by enterprise IT and service providers alike as a solution to the challenges associated with the increasing volume and complexity of network connectivity solutions. Rather than continually adding network appliances and absorbing the cost of the energy, capital and unique skills required to design, configure and operate these devices, service providers are considering software-based alternatives. Constantly deploying additional network appliances is no longer sustainable, and NFV offers the option of consolidating that vast number of physical appliances into a set of virtual software appliances that run on standard high-volume servers and Ethernet switches.

As with every new technology hype cycle, it is important to look beyond the marketing literature and slide decks to understand the real benefits and challenges of this next iteration of network evolution. Make no mistake, virtualization is happening and will continue to impact network and service operations going forward, but the reality is that service providers have invested billions in network and operations infrastructure that will not be abruptly shut down to implement virtualization. To that end, there are some important considerations.

Architecture

Network hardware will never be entirely replaced by software. Arguably, hardware-based network elements have shorter productive lives, and their continuous replacement and integration is not a cost-effective approach to delivering service innovation. However, those elements are already in place and working well within existing network architectures and operating processes. There is significant risk associated with wholesale replacement of physical elements with virtual network elements. Architectures, processes, performance and reliability are all affected, and service providers will not proceed without careful consideration and planning.

NFV proposes to replace large, complex sets of incompatible network appliances with standardized software versions that are easier and less expensive to configure, upgrade and operate. Yet for many years, vendors have had the ability to develop and deploy network functionality in software.

It's no secret that software costs less. Software presents no manufacturing or component issues, has no mechanical assemblies vulnerable to environmental conditions, reduces design and build costs, and can be readily modified, whereas hardware is fixed. Whether upgrading existing network architectures or designing new features and functionality, there is always a discussion of what could be done in software versus what should be implemented in hardware.

Still, engineers regularly decide to implement specific network functionality in hardware. However, there are numerous network appliances that are little more than a processor and a software application. Signaling, policy, and charging elements readily lend themselves to virtualization on common platforms, as do subscriber management and other edge functions. The volume of appliances appearing at the edge of the network emphasizes the efficiency and cost-effectiveness of virtualization; but as you get closer to the core network infrastructure, network elements become more complex and the benefits of virtualization become less clear.

Security
Every day there are headlines trumpeting the latest Internet security breach or compromise of personal or financial data. As service providers consider virtualization, security remains top-of-mind. Securing virtualization includes securing the servers and software that comprise the virtual elements. And, in the case of NFV, service providers must also consider how virtualization affects the security of the network as a whole.

First, consider securing the software. Service providers must ensure that individual subscriber functions are isolated from each other and that the control and management networks are isolated from the services being delivered. Shared virtual elements or server environments could be compromised, or could create a blind spot, if not properly configured.

In the 2014 Verizon Data Breach Investigations Report, 35 percent of the 1,367 breaches examined were the result of web application attacks. Hackers who use the Internet to compromise software applications can also affect the virtual elements being delivered by service providers across the public network. Subscriber isolation requires careful management of customer configurations and connectivity. Enforcing resource access restrictions is a valuable security measure, but it limits the ability of service providers to easily expose network elements to partners and content providers.

Of equal importance to service providers is securing the network. In the Verizon report, network assets such as routers, switches and other physical devices are consistently at the bottom of the list of compromised assets. Although malicious traffic passes across the network, network devices are seldom the access point for a breach. Service providers are required to secure all traffic across the network, and peeling back multiple layers of virtualization makes it extremely difficult to isolate one stream of traffic from another and to detect intrusions or breaches as they happen. Many security vendors admit that they cannot unravel multiple layers of virtualization quickly or accurately enough to isolate compromised transactions or fraudulent users.

Quality
Charging and policy enforcement benefit from the comprehensive view of infrastructure assets that abstraction from the network layer provides, and they are readily virtualized. However, virtualizing network functions that affect security, service quality and configuration could delay critical responses such as prioritizing traffic or thwarting security threats. Edge devices used for security and traffic management can identify problems more rapidly and take immediate action, such as shutting down a port or blocking incoming traffic.

Digital media traffic, especially video, is much less tolerant of delay than even voice traffic and requires precise timing and priority handling. Latency, delay and jitter are all amplified in video transfers, and service providers cannot tolerate the quality erosion caused by delays in virtual network elements. Managing quality of service (QoS) typically must happen as close to line rate as possible, and that becomes difficult when accessing a virtual device requires additional transit time.
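
Some illustrative latency arithmetic (all numbers hypothetical) shows why the extra transit to a virtual device matters for video:

    # Illustrative latency-budget arithmetic with assumed figures: the extra
    # hop to a virtualized function eats into a video stream's delay budget.

    line_rate_latency_us = 50        # in-path appliance, roughly line rate (assumed)
    virtual_round_trip_us = 900      # hairpin to a virtual function and back (assumed)
    per_frame_budget_us = 1_000      # tolerable added delay per video frame (assumed)

    added = virtual_round_trip_us - line_rate_latency_us
    print(f"Added delay per packet: {added} us "
          f"({added / per_frame_budget_us:.0%} of the frame budget)")
    # With these numbers, a single virtual hop consumes 85% of the budget,
    # which is why QoS handling is kept as close to line rate as possible.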

Isolating virtual elements, and providing the high-performance computing platforms required to ensure quality, increase the cost of virtualization and must be considered. The configuration of a hybrid network must take into account the wide variety of network functions involved and implement virtualization where it makes both business and technical sense. Hybrid networks ensure that service providers can combine physical and virtual elements in the optimal ways. Switching, routing and network control functions are best kept close to the network so that quality is not affected.

Operations
As NFV trials proceed and service providers begin to understand the intricacies of building hybrid virtual and physical networks, the next challenge is how to operate them. From fulfillment platforms to service assurance to customer care, there are a lot of details to work out in the operations arena. While network engineers might believe their job is done when the NFV elements are integrated into the network, the challenge for operations is just beginning.

Beyond cost savings, most service providers hope to use virtualization to bring new products to market more quickly. The challenge then is not the network but the OSS/BSS platforms used for product development, service launch, fulfillment and assurance. Existing fulfillment and assurance solutions are cumbersome, clumsily integrated and often require manual effort to complete. The volume and variety of new services intended to be enabled by NFV simply cannot be delivered using today's maze of OSS/BSS implementations.

Early adopters are implementing an orchestration layer over existing network infrastructure to create a hybrid network. To do so, service providers must develop numerous application programming interfaces (APIs) and a great deal of customized integration to automate fulfillment and assurance functions. The orchestration layer is fully automated and greatly improves delivery time for new services while eliminating error-prone manual tasks. However, all those APIs and integrations have to be meticulously maintained and regression-tested every time a real or virtual network element changes.
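
A minimal sketch of the orchestration idea, with all names hypothetical: one automated layer drives both legacy and virtual elements through per-element adapters, each of which is a piece of custom integration that must be maintained:

    # Hypothetical orchestration-layer sketch over a hybrid network.

    class Adapter:
        """One custom integration per element type; each must be maintained
        and regression-tested whenever the underlying element changes."""
        def __init__(self, name, activate):
            self.name, self.activate = name, activate

    def legacy_router_api(service):   return f"CLI script -> {service}"
    def vnf_manager_api(service):     return f"REST call -> {service}"

    class Orchestrator:
        def __init__(self, adapters):
            self.adapters = adapters

        def fulfill(self, service):
            # Fully automated: every element in the chain is configured in
            # order, with no manual hand-offs between departments.
            return [a.activate(service) for a in self.adapters]

    orch = Orchestrator([Adapter("edge-router", legacy_router_api),
                         Adapter("firewall-vnf", vnf_manager_api)])
    print(orch.fulfill("vpn-gold"))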

Start Smart
In the pursuit of NFV, it is important to recognize that no single virtualization strategy will be optimal for all network functions. A hybrid network strategy bridges appliance-based networks to NFV and implements an architecture optimized for performance, reliability, cost, and customer experience. To ensure interoperability and consistent implementation of virtualization in the public network, service providers will require standardization of APIs, management data, and control. A fully-automated and standardized orchestration layer is the key to delivering both physical and virtual network functions economically and at scale.

There are valid engineering reasons to implement network functionality in both hardware and software. Time-to-market for new products and reduced costs are arguments for NFV; while speed, performance, reliability, and visibility are among the reasons that an appliance may be preferable to an application. Not every piece of hardware will be replaced and not every piece of software will perform as it should, so it is important to understand the entirety of each function and how virtualization will affect the network, services, OSS/BSS, and customers both now and in the future.