Copyright © Dhiman Deb Chowdhury, 2015. All rights reserved.
In the previous articles ( http://www.dhimanchowdhury.blogspot.com ), I discussed how the need to share resources led to the gradual development of technologies concerning network programmability and virtualization. I hope readers are now conversant with the notions of network virtualization (NV) and SDN, and with how network programmability technologies such as SDN coexist with topological virtualization of the physical network. I noted that both SDN and NV rely on decoupling network abstractions; the difference is that SDN separates the data plane from the control plane, providing centralized network control and programming mechanisms, while NV decouples the physical infrastructure from logical topologies.
Figure 1. A diagrammatic representation of the interworking of NV and SDN.
With this distinction in mind, we embark on yet another discussion: how the needs for network programmability (i.e. SDN) and virtualization (i.e. NV) find common ground in the NFV (Network Function Virtualization) architectural framework.
In October 2012, some of the major network/telecom operators met in Darmstadt, Germany, to devise a framework for NFV (Network Function Virtualization) (Haleplidis et al., 2015), drawing on ideas from technological advances in system virtualization, SDN and NV. NFV is an architectural framework that uses virtualization technologies to implement entire classes of node functions (compute, storage and network) as building blocks that can be connected or chained to deliver communication services. The framework is being devised under the stewardship of ETSI (the European Telecommunications Standards Institute), and various working groups within ETSI are entrusted with defining different aspects of the NFV framework (e.g. network elements, APIs, etc.).
The need for network/telecom operators to devise a white paper on the NFV framework stems from the growing challenge of accommodating customer demands for improved services while reducing CAPEX (capital expenditure) and OPEX (operational expenditure). The NFV framework aims to address those challenges by virtualizing network functions (NFs) [e.g. load balancer, firewall, PE router, CG-NAT, etc.] and allowing network/telecom operators to deploy those NFs at will on high-end devices. One of the major challenges in NFV deployment is the orchestration and management of network functions such as load balancers, firewalls and other control functionality. A working group within the ETSI NFV undertaking, namely MANO (Management and Orchestration), was formed to develop specifications for the management and orchestration framework of the NFV architecture (figure 2). We will discuss the MANO framework later in this article, but for now think of the NFV framework as the main architectural construct or reference model, of which VNFs, NFs and management/orchestration are elements or subsets.
Figure 2. Timeline of NFV-related research work at ETSI (source: ETSI, 2015; ETSI, 2014).
One more important subset of the NFV framework is the NFVI (Network Function Virtualization Infrastructure). The NFVI specifies the various physical resources and how abstractions of those resources are virtualized, e.g. virtual compute, virtual network and virtual storage. Applying our understanding of system virtualization (the hypervisor/VM concept) [if you need further clarification, please visit my blog: http://www.dhimanchowdhury.blogspot.com/2015/07/network-virtualization-101-prelude.html ], the NFVI can be understood as hardware resources (compute, storage and network) virtualized by a hypervisor, with VMs as the virtual instances and VNFs (Virtual Network Functions) implemented as applications on top of the VMs. Please refer to the NFV reference model depicted in figure 3. The NFV reference model comprises three main subsets: NFVI, VNFs, and NFV Management and Orchestration.

An element management system (EMS) is commonly used in a typical telecom network to manage network elements. In my good old Nortel days, we used an element manager to retrieve configuration information from various networking and telecom gear while pulling device information through the NMS. Once a device was found on the NMS network map, you could double-click on it to open the element manager and configure and manage the device in ways that are not possible simply by using MIBs through the NMS. I hope this conveys the usefulness and purpose of an EMS. In a telecom network an EMS brings many benefits; for example, you can integrate different vendors' element managers into an NMS and manage the entire group of systems through a single NMS.
Figure 3. The NFV reference model.
The OSS (Operations Support System) and BSS (Business Support System) are back-office software applications that support activities related to services. For example, the OSS includes software toolsets related to the management and operation of network services, while BSS toolsets are used for billing, order management, CRM and the like. For telecom professionals these terms and toolsets are nothing new, but for the rest of the readership this brief explanation should be useful.
Now that we understand some key terms and functional elements connected with the NFV framework, let us explore the essential components of the NFV reference model. Please note that the NFVI specifies hypervisor-based virtualization, but it is also possible to use containers (a notion mainly associated with Linux Containers, LXC) to support the same VNF instances as in a hypervisor/VM-based system. The notion of the container was born in 2005, out of the necessities of webscale services at Google's data centers. Google was experimenting with hypervisor-based virtualization at the time but soon realized that the performance penalty of such virtualization was too high and that it was not elastic enough to support web-based services at scale (Bottomley, 2014). During that time, a group of engineers was working on a Linux concept based on cgroups called process containers; they were later hired by Google to advance the experimentation for webscale services. Some of this cgroup technology later found its way into the Linux kernel, and the Linux Containers (LXC) project was born in January 2008.
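To make the cgroup idea concrete, here is a minimal sketch (my own illustration, not taken from the LXC code base) that uses the cgroup v1 filesystem interface on Linux to cap a process's CPU usage, which is essentially the resource-isolation primitive that containers build on. It assumes a cgroup v1 "cpu" controller mounted at /sys/fs/cgroup/cpu and root privileges; on cgroup v2 systems the paths and file names differ.

```python
import os

# Minimal illustration of the cgroup (v1) primitive behind containers:
# create a control group and cap the processes placed in it to ~half a core.
CGROUP = "/sys/fs/cgroup/cpu/demo_container"

os.makedirs(CGROUP, exist_ok=True)

# Allow 50 ms of CPU time per 100 ms scheduling period (i.e. 50% of one CPU).
with open(os.path.join(CGROUP, "cpu.cfs_period_us"), "w") as f:
    f.write("100000")
with open(os.path.join(CGROUP, "cpu.cfs_quota_us"), "w") as f:
    f.write("50000")

# Move the current process (and its future children) into the group.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))

print("PID", os.getpid(), "is now CPU-capped inside", CGROUP)
```

Combined with namespaces for isolation, limits like this are what let many lightweight container instances share one kernel without the per-VM overhead of a hypervisor.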
To date, containers have mainly been implemented for server-level system virtualization, but work is underway to bring some of the benefits of container-based virtualization to networking gear. As such, implementation of VNFs on network switching gear may well be done using container virtualization techniques within the next two years.
VNFs (Virtual Network Functions) are simply network node or network functions (NFs) running as virtual instances on top of VMs (virtual machines) (Vilalta et al., 2014). For example, a VNF can be a load balancer, a NAT/firewall or one of many other similar network or network-node functions. Today, much VNF implementation is done at the server level, but it is also possible to implement VNFs on networking gear. The NFV framework (ETSI, 2014) does not constrain the use-case scenarios of VNFs, and work is already underway to provide a server-like experience on networking gear, making it possible to implement VNFs in various use-case scenarios such as residential, CDN, fixed access network, mobile base station, and mobile core & IMS deployments. This implies that even future residential or CPE gateways will have VNFs implemented on them, allowing service providers to deploy services at will.
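As an illustration of what "a network function implemented in software" means, below is a toy round-robin TCP load balancer in Python; this is a deliberately minimal sketch of my own, not a production VNF, and the backend addresses are hypothetical placeholders. A real load-balancer VNF would add health checks, connection draining and much more, but the essential point stands: the function is just software and can be instantiated on any VM or container.

```python
import itertools
import socket
import threading

# Toy round-robin TCP load balancer: a minimal sketch of a "VNF as software".
# Backend addresses are hypothetical placeholders.
BACKENDS = itertools.cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])
LISTEN = ("0.0.0.0", 8000)   # a real load balancer would listen on 80/443

def pipe(src, dst):
    """Copy bytes from one socket to the other until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass          # the peer or the other relay direction closed the socket
    finally:
        src.close()
        dst.close()

def handle(client):
    backend = socket.create_connection(next(BACKENDS))   # pick next backend
    # Relay traffic in both directions on separate threads.
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN)
    server.listen()
    while True:
        conn, _addr = server.accept()
        handle(conn)

if __name__ == "__main__":
    main()
```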
The management and configuration of VNFs and the other subsets of the NFV reference model (as depicted in figure 3) is done through MANO (Management and Orchestration) toolsets. The MANO architecture presents three elements: the VIM (Virtualised Infrastructure Manager), the VNF manager and the orchestrator. The VIM is used for the management and allocation of virtual resources, e.g. adding, removing and modifying resources related to compute, network and storage. OpenStack comes close to being an ideal VIM, but it requires further work on various plugins and on integration with automation/configuration tools such as Chef and Puppet before it can be fully deployed in a service provider network.
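To make the division of responsibilities concrete, here is a small, purely illustrative Python sketch of the three MANO roles; the class and method names are my own invention and do not correspond to the ETSI interfaces or to any OpenStack API. The point is only the control flow: the orchestrator asks the VIM for virtual resources and then hands the resulting VM to the VNF manager for VNF lifecycle operations.

```python
from dataclasses import dataclass

# Illustrative-only sketch of the MANO control flow described above.
# Class and method names are my own, not ETSI-defined interfaces.

@dataclass
class VirtualMachine:
    name: str
    vcpus: int
    memory_gb: int

class Vim:
    """Virtualised Infrastructure Manager: allocates virtual resources."""
    def allocate_vm(self, name, vcpus, memory_gb):
        # A real VIM (e.g. OpenStack) would call hypervisor/cloud APIs here.
        return VirtualMachine(name, vcpus, memory_gb)

class VnfManager:
    """Handles configuration and lifecycle of a VNF on a given VM."""
    def instantiate(self, vnf_type, vm):
        print(f"Deploying {vnf_type} image onto {vm.name} "
              f"({vm.vcpus} vCPU / {vm.memory_gb} GB)")
    def terminate(self, vnf_type, vm):
        print(f"Tearing down {vnf_type} on {vm.name}")

class Orchestrator:
    """End-to-end service orchestration across the VIM and the VNF manager."""
    def __init__(self, vim, vnfm):
        self.vim, self.vnfm = vim, vnfm
    def deploy_service(self, vnf_types):
        # Chain the requested network functions into one service.
        for i, vnf_type in enumerate(vnf_types):
            vm = self.vim.allocate_vm(f"vm-{vnf_type}-{i}", vcpus=2, memory_gb=4)
            self.vnfm.instantiate(vnf_type, vm)

Orchestrator(Vim(), VnfManager()).deploy_service(["firewall", "load-balancer"])
```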
The VNF manager handles the configuration, lifecycle management and element management of the virtualized network functions. Within the OpenStack initiative, the "Tacker" project aims to deliver VNF manager and NFV orchestration capabilities for NFV platforms. For further information on Tacker, please visit https://wiki.openstack.org/wiki/Tacker .
Figure 4. A diagrammatic representation of the focus of the OpenStack "Tacker" (Open NFV Orchestration) project (OpenStack, 2015).
Apart from the OpenStack Tacker project, many other vendors also offer application solutions, but challenges remain in developing comprehensive toolsets due to the complexities of unified transport network systems.
I hope this article provided a useful overview of the NFV reference model; readers should follow the work of ETSI at http://www.etsi.org/technologies-clusters/technologies/nfv to advance their understanding and stay abreast of the latest work in this field. In the next article, we will explore various APIs of SDN and see how SDN and NFV are deployed in a given network scenario.
Please stay tuned and follow me. If you are interested, please join me in the free live webinar on "Open Networking" at https://lnkd.in/bhydy6x .
References
[Bottomley, 2014] Bottomley, J., 2014. What is All the Container Hype? Parallels.
[Haleplidis et al., 2015] Haleplidis, E., Salim, H. J., Denazis, S. & Koufopavlou, O., 2015. Towards a Network Abstraction Model for SDN. Journal of Network and Systems Management, 23:309-327.
[ETSI, 2014] ETSI, 2014. Network Functions Virtualisation – White Paper #3. ETSI NFV portal. Available online at https://portal.etsi.org/Portals/0/TBpages/NFV/Docs/NFV_White_Paper3.pdf
[ETSI, 2015] ETSI, 2015. NFV activity report 2014. ETSI NFV Portal. Available online at https://portal.etsi.org/TBSiteMap/NFV/ActivityReport.aspx
[Openstack, 2015] Openstack, 2015. Tacker. Available online at https://wiki.openstack.org/wiki/Tacker .
[Vilalta et al., 2014] Vilalta, R., Muñoz, R., Casellas, R., Martínez, R., López, V. & López, D., 2014. Transport PCE network function virtualization. In Proc. European Conference on Optical Communication (ECOC), Cannes, France.
Thursday, September 3, 2015
Network Virtualization 101: The NV
In a previous article, I discussed the historical perspective of network virtualization ( http://www.dhimanchowdhury.blogspot.com/2015/07/network-virtualization-101-nve-overlay.html ): how the need to share resources and to introduce flexibility and programmability into the network environment led to a series of research undertakings, e.g. the MBone project (Almeroth, C.K., 2000), an experimental backbone for carrying IP multicast that was developed in the early 1990s. If you have not read that article, I suggest that you do: it will help you understand the developments in network virtualization and the benefits thereof.
Figure 1. Timeline of Network Virtualization.
This article is the third in the "Network Virtualization 101" series.
In the previous article, I presented a review of network programmability work in four stages rather than in chronological order: three stages were discussed there, and the fourth, network virtualization (NV), is presented here.
The notion of "network virtualization" (NV) can be understood as decoupling the physical topology from the logical topology (e.g., overlay networks); as such, its implementation does not require SDN. Similarly, the common notion of SDN (the separation of the control plane from the data plane) does not require network virtualization. This distinction is important, since many may be confused by the symbiotic way in which network virtualization and SDN relate (Feamster, N., Rexford, J. & Zegura, E., 2014):
- SDN as an enabling technology: with the advent of cloud computing, service providers faced the challenge of sharing and isolating resources among multiple tenants in a way that makes the best use of the available network infrastructure. A common method of providing such isolation at the VM level is to use overlay networks built with protocols such as VXLAN and NVGRE. While VXLAN and NVGRE do not require SDN to implement, having the capability to provision the network for VXLAN and/or NVGRE from a centralized controller is surely helpful (a minimal VXLAN-header sketch follows this list). Another example of SDN as an enabling technology is Nicira's Network Virtualization Platform (NVP). The NVP framework implements Open vSwitch (a virtual switching platform), a controller and a southbound API to facilitate network transport. Open vSwitch is hardware agnostic and can be implemented on servers without the need for dedicated networking gear.
- Slicing or virtualizing an SDN: a hybrid switch, for example, implements both traditional protocol suites and OF (OpenFlow) agents and other flow-control APIs. With appropriate arbitration mechanisms, a network flow can be logically separated from other logical instances of the network. Similarly, FlowVisor (a special-purpose controller that works as a transparent proxy between OF agents and OF controllers) allows slicing of network resources and delegates control of each slice to a different controller (Flowvisor, 2014); a toy slicing sketch also follows this list.
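To ground the VXLAN point above, the sketch below packs the 8-byte VXLAN header defined in RFC 7348 (an 8-bit flags field with the "I" bit set, 24 reserved bits, a 24-bit VNI and 8 more reserved bits) in front of an inner Ethernet frame. The VNI and MAC addresses are made-up values; in practice the encapsulated frame is carried as the payload of a UDP datagram to destination port 4789 between tunnel endpoints, and it is the per-tenant VNI that provides the isolation discussed above.

```python
import struct

# Build a VXLAN header (RFC 7348) in front of an inner Ethernet frame.
# The VNI and MAC addresses below are made up for illustration.

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # First 32-bit word: flags byte 0x08 ("I" bit, VNI present) + 24 reserved bits.
    # Second 32-bit word: 24-bit VNI in the upper bits + 8 reserved bits.
    header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

# A dummy inner Ethernet frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), payload.
inner = (bytes.fromhex("0a0000000001") + bytes.fromhex("0a0000000002")
         + struct.pack("!H", 0x0800) + b"tenant payload")

packet = vxlan_encapsulate(inner, vni=5001)   # one VNI per tenant/segment
print(packet.hex())
# The result would be sent as the payload of a UDP datagram to port 4789
# between the two VXLAN tunnel endpoints (VTEPs).
```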
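And for the slicing point, here is a toy arbitration table in the spirit of FlowVisor; this is my own simplification, not FlowVisor's actual policy language or API. Traffic is mapped to a slice (here by VLAN range), and each slice's flows are handed to that slice's own controller.

```python
# Toy FlowVisor-style slicing policy: my own simplification, not FlowVisor's
# actual configuration format. Each slice owns a VLAN range and a controller.
SLICES = {
    "research":   {"vlans": range(100, 200), "controller": ("10.1.0.10", 6633)},
    "production": {"vlans": range(200, 300), "controller": ("10.1.0.20", 6633)},
}

def controller_for(vlan_id: int):
    """Return the (slice name, controller address) responsible for a VLAN."""
    for name, policy in SLICES.items():
        if vlan_id in policy["vlans"]:
            return name, policy["controller"]
    raise LookupError(f"VLAN {vlan_id} is not delegated to any slice")

# An OpenFlow message for a flow tagged VLAN 150 would be proxied to the
# "research" slice's controller; VLAN 250 to "production".
print(controller_for(150))
print(controller_for(250))
```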
From a historical perspective, work on network virtualization can be traced back to the early days of the MBone experiment. The MBone, otherwise known as the "Multicast Backbone", is a virtual network built on top of the Internet. It was invented by Van Jacobson, Steve Deering and Stephen Casner in 1992 as part of an undertaking by the IETF (Internet Engineering Task Force). In the early 1990s, the majority of routers in the Internet did not support IP multicasting and packets were transported via IP unicast; as a result, one-to-many communication was difficult. The solution was the MBone, in which the multicast function was provided by workstations running a daemon process known as "mrouted" (Almeroth, 2000). A workstation running the "mrouted" process is known as an "mrouter" (essentially a multicast router). These mrouters were then placed in a special group of LANs, or a single LAN, that were multicast capable. The "mrouted" process received unicast-encapsulated multicast packets on an incoming interface and then forwarded them over the appropriate set of outgoing interfaces. Connectivity among these machines was provided using point-to-point, IP-encapsulated tunnels, each tunnel connecting two endpoints via one logical link. Routing decisions were made using DVMRP (the Distance Vector Multicast Routing Protocol), as shown in the figure below.
Figure 2. MBone topology during the early years of its deployment.
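To illustrate the tunnelling mechanism mrouted relied on, the sketch below builds an IP-in-IP packet (an outer IPv4 header with protocol number 4) that wraps a multicast datagram so it can cross unicast-only routers between two mrouters. This is a conceptual sketch, not mrouted's actual implementation: the addresses are placeholders, the inner transport segment is abbreviated, and the header checksum is left at zero for brevity.

```python
import socket
import struct

# Conceptual sketch of the IP-in-IP tunnelling used between MBone mrouters:
# an outer unicast IPv4 header (protocol 4 = IP-in-IP) wraps an inner
# multicast datagram so it can cross unicast-only routers.
# Addresses are placeholders; the checksum is left at 0 for brevity.

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    version_ihl = (4 << 4) | 5            # IPv4, 5 x 32-bit words (20 bytes)
    total_length = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,     # version/IHL, DSCP/ECN, total length
        0, 0,                             # identification, flags/fragment offset
        64, proto, 0,                     # TTL, protocol, checksum (0 here)
        socket.inet_aton(src), socket.inet_aton(dst),
    )

# Inner packet: a datagram addressed to multicast group 224.2.0.1
# (protocol 17 = UDP; the UDP segment itself is abbreviated to a placeholder).
inner_payload = b"placeholder udp segment"
inner = ipv4_header("192.0.2.10", "224.2.0.1", 17, len(inner_payload)) + inner_payload

# Outer packet: unicast tunnel between two mrouters, protocol 4 (IP-in-IP).
outer = ipv4_header("198.51.100.1", "203.0.113.1", 4, len(inner)) + inner
print(f"{len(outer)} bytes: outer unicast header wrapping a multicast datagram")
```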
DVMRP has since been replaced by PIM (Protocol Independent Multicast), helping the MBone integrate with the Internet more fully than in its initial attempts. For many years, network equipment has supported the creation of virtual networks, e.g. VLANs, which allow multiple logical networks to be created on top of a physical topology. But such network virtualization is limited to L2 network segments and impedes the deployment of new technologies that must traverse the network. To overcome this, researchers and practitioners resorted to running overlay networks, which allow endpoint nodes to run their own control plane and to forward data traffic and control-plane messages across multi-hop L3 networks. The MBone (for multicast) and the 6Bone (for IPv6) are examples of such overlay network virtualization. In the previous article, I discussed examples of overlay network architectures and some of the protocols used for tunneling, and I will explore this further in the succeeding articles on network virtualization configurations.

A complete survey of network virtualization is beyond our scope here; however, the historical perspective presented in this brief article is important to the research on network programmability (SDN) and to the gradual development of programmable and dynamic network systems. It should be noted that SDN and network virtualization are tightly coupled despite their distinctions. Programmable networks (i.e. SDN) often presume "network virtualization" as an integral part of sharing network infrastructure for multi-tenant services, supporting logical network topologies that differ from the physical network.

The early overlay networks that are essential to evaluating and understanding "network virtualization" often used dedicated nodes running special protocols. This notion soon expanded to include any host computer running a special application, in the hope of supporting peer-to-peer file-sharing applications (e.g. Napster; Wikipedia, 2015). Research on peer-to-peer networking reignited interest in the development of robust overlay network technologies. An example of such work is "Resilient Overlay Networks" (Andersen et al., 2001), in which a small number of network nodes form an overlay network that detects network failures and recovers quickly from connectivity and performance problems. Since an overlay network does not require any special equipment (unlike active networks; please refer to my previous article), researchers began building experimental infrastructures like PlanetLab (Peterson et al., 2002) to support wider research on network virtualization. Interestingly, PlanetLab itself was a form of "programmable router/switch" active networking, but using a collection of servers rather than the network nodes, and offering programmers a conventional operating system (i.e., Linux) (Feamster, N., Rexford, J. & Zegura, E., 2014). The GENI project (GENI, 2015) took this notion of programmable virtual network infrastructure to the next level, supporting much larger-scale national experiments for research in networking and distributed systems.
Figure 3. GENI, the vast experimental virtualized network infrastructure project (figure courtesy of GENI, 2015).
If you are interested in experimenting with your own concepts on a virtualized network infrastructure, please visit http://groups.geni.net/geni/wiki/GENIConcepts and join the project.
Considering a project like GENI, one can easily perceive the potential of network virtualization. Some researchers have argued that network virtualization is key to the next-generation Internet architecture. In the first article of this series, while discussing the NVE (Network Virtualization Environment), I explored the theoretical arguments from various scholars regarding the need for a next-generation Internet and service provider network in which multiple network architectures can coexist at the same time (each optimized for different applications or requirements, or run by different business entities) and evolve over time to meet changing needs (Feamster, N., Rexford, J. & Zegura, E., 2014; Carapinha & Jiménez, 2009; Chowdhury & Boutaba, 2009; Chowdhury & Boutaba, 2008).
I hope this brief overview of NV (network virtualization) is helpful in understanding the differences and dependencies between network virtualization and SDN, and the importance of network virtualization in future network design. You will find this basic understanding helpful in the succeeding articles on network architecture and configurations.
In the next article, I will extend the notion of network virtualization to VNF (Virtual Network Function) and NFV (Network Function Virtualization). Please stay tuned and follow me on LinkedIn ( https://www.linkedin.com/in/dhiman1 ), Twitter @dchowdhu ( https://twitter.com/dchowdhu ) and Google Plus ( https://plus.google.com/u/0/+DhimanChowdhury/posts ). You may also subscribe to all these feeds through Agema Systems' LinkedIn page at https://www.linkedin.com/company/agema-systems-inc?trk=top_nav_home
References
[Almeroth, C.K., 2000] Almeroth, C.K., 2000. The Evolution of Multicast: From the MBone to Interdomain Multicast to Internet2 Deployment. IEEE Network. Available online at http://www.cs.ucsb.edu/~almeroth/classes/F05.276/papers/evolution.pdf .
[Andersen et al., 2001] Andersen, D. G., Balakrishnan, H., Kaashoek, M. F. & Morris, R., 2001. Resilient Overlay Networks. In Proc. 18th ACM Symposium on Operating Systems Principles (SOSP), pages 131-145, Banff, Canada, Oct. 2001.
[Carapinha & Jiménez, 2009] Carapinha, J. & Jiménez, J., 2009. In VISA '09: Proceedings of the 1st ACM Workshop on Virtualized Infrastructure Systems and Architectures. ACM Digital Library.
[Chowdhury, K.M.M.N. & Boutaba, R., 2008 ] Chowdhury, K.M.M.N. & Boutaba, R., 2008. A Survey of Network Virtualization. Technical Report CS-2008-25. University of Waterloo.
[Chowdhury, K.M.M.N. & Boutaba, R., 2009 ] Chowdhury, K.M.M.N. & Boutaba, R., 2009. Network Virtualization: State of the Art and Research Challenges. IEEE COMMUNICATIONS MAGAZINE.
[Clark et al., 2006] Clark, D., Lehr, B., Bauer, S., Faratin, P., Sami, R. & Wroclawski, J., 2006. Overlay Networks and the Future of the Internet. Communications & Strategies, no. 63, 3rd quarter 2006, p. 109.
[Feamster, N., Rexford, J. & Zegura, E., 2014] Feamster, N., Rexford, J. & Zegura, E., 2014. The Road to SDN: An Intellectual History of Programmable Networks. ACM Queue, 2014.
[Flowvisor, 2014] Flowvisor, 2014. Flowvisor. Atlassian Confluence Open Source Project: Stanford University. Available online at https://openflow.stanford.edu/display/DOCS/Flowvisor
[Feamster et al., 2004] Feamster, N., Balakrishnan, H., Rexford, J., Shaikh, A. & van der Merwe, J., 2004. The Case for Separating Routing from Routers. SIGCOMM’04 Workshops, Aug. 30-Sept. 3, 2004, Portland, Oregon, USA.
[GENI, 2015] GENI: Global Environment for Network Innovations. Available online at http://www.geni.net/.
[Peterson, et al., 2002] Peterson, L., Anderson, T., Culler, D. & Roscoe, T., 2002. A Blueprint for Introducing Disruptive Technology into the Internet. Planet Lab. Proceedings of the First ACM Workshop on Hot Topics in Networks (HotNets-I), Princeton, NJ, October 2002.
[Wikipedia, 2015] Wikipedia, 2015. Napster. Wikipedia: The Free Encyclopedia. Available online at https://en.wikipedia.org/wiki/Napster