Experiences Managing a Parallel Mobile Ad-hoc Network Emulation Framework

Travis W. Parker
Contractor, ICF Jacob & Sundstrom, Inc.
U.S. Army Research Laboratory
[email protected]

Abstract

Modeling a large mobile ad-hoc network is well suited to a cluster computing environment due to its parallel nature. Following the design of high-performance computing systems, uniform and scalable clusters optimized for wireless network emulation can be built from inexpensive commodity hardware. In this paper, we describe our experiences with managing a parallel mobile ad-hoc network emulation framework. We discuss design differences between small and medium-to-large scale deployments, the use of cluster interconnects and bonded channels with a network emulation framework, and the application of cluster management and job control solutions to clusters designed for network emulation frameworks.

1 Introduction

Large-scale wired or wireless network scenarios may contain hundreds to thousands of nodes. The Extendable Mobile Ad-hoc Network Emulator (EMANE) provides a framework capable of emulating large wireless networks. Emulating networks of this size effectively requires considerable computational power. High-fidelity radio frequency propagation models require calculation of every possible signal path between all possible pairs of transmitters and receivers. These calculations may need to be repeated whenever the positions of the nodes or environmental factors such as terrain and weather change. This task is well suited to cluster computing, as path-loss calculations can be computed in parallel [1].

Emulating the nodes of the network may be lightweight compared to RF modeling, but the nodes in the emulated network may outnumber the physical nodes in the cluster. Using virtual machines as emulation nodes permits efficient use of available resources.

Real-time emulation of wireless networks requires introducing as little latency as possible to the emulated wireless traffic. Latency can be significantly reduced by carrying emulation traffic over the high-speed interconnects often used in cluster computing. Through a proper hybrid EMANE deployment, it is possible to leverage both virtual machines and high-speed interconnects.

As emulation framework requirements grow, the cluster should be expandable to meet those requirements. A good cluster design should be scalable and uniform. As hardware
is added or replaced, or software is upgraded, ensuring that all nodes have the same system image becomes a challenge. The use of a cluster provisioning system allows new hardware and software to be deployed with minimal configuration changes. Likewise, a job control system designed for parallel computing solves the problem of starting and controlling the emulation framework and its related processes.

2 Overview of EMANE

The Extendable Mobile Ad-hoc Network Emulator (EMANE) is an open-source network emulator designed to fully model the radio nodes and emulate the wireless environment. The principal components of EMANE are the platform, the transport, and the event system. A platform consists of one or more Network Emulation Modules (NEMs), which model the wireless network from the perspective of the associated nodes. The MAC (Media Access Control, or data-link) and PHY (Physical, or RF communication) layers of the emulated radios are modeled by NEM plugins. The transport connects the emulated radio to a network interface on the emulation node, and an Over-the-Air (OTA) manager relays the emulated RF signals to all NEMs in the emulation. Events are generated by the event service and received by the NEMs and event daemons on the emulation nodes. Events typically modify or control the emulation environment. For example, location events indicate or change the position of nodes in the emulated space. A location event can be used by the PHY plugin of the NEM to compute path-loss and also by the emulated node for positional awareness [2].

2.1 EMANE Deployment

EMANE supports centralized, distributed, and hybrid platform deployment. Centralized deployment places all NEMs on a single platform server. The transport on each node is connected over a network to the associated NEM on the platform, and the OTA manager is internal to the platform. If deploying EMANE on a single physical machine (possibly hosting virtual nodes), centralized deployment of a single platform on the host is possible. The raw transport can be used to capture traffic from network
interfaces, or each virtual machine contains a transport and communicates with the platform on the host.

Distributed deployment places a platform at each node to host the NEMs associated with that node. Communication between the transport and the NEM is internal to the node, and the OTA manager network carries the emulated wireless traffic between nodes. A hybrid deployment distributes the NEMs across some combination of nodes and platform servers [3].
3 Cluster Design for EMANE
Cluster design and EMANE deployment for our small and medium-scale clusters differ based on functional requirements. In the case of the small cluster, the requirement was to address performance, uniformity, and management while preserving as much of the original configuration as possible; scalability was not a consideration. The medium-scale cluster is a development environment and test-bed for a design scalable by at least an order of magnitude. To address these requirements, we followed the guidelines below.
The basic cluster design consists of a set of compute nodes controlled by a head node. Nodes are connected by a conventional LAN for management and optionally by a high-speed interconnect for computational traffic [4]. When designing a cluster for use with EMANE, consider the EMANE functional components: the nodes of the emulated network, the NEM platforms, the OTA manager, and the event servers. In a centralized EMANE deployment, the compute nodes are the nodes of the emulation and the head node serves as the platform, the OTA manager, and the event server. In a distributed deployment, the NEMs are distributed to the compute nodes. As the size of the emulated network increases, the platform and the event server may place an overwhelming load on the head node. To keep the head node responsive for scheduling and interactive sessions, one or more compute nodes should be assigned as dedicated event servers or NEM platforms. Therefore, three types of nodes are defined: the head node, the nodes of the emulated network, and the compute nodes that perform the emulation itself (NEM platforms and event servers).

3.1 The Cluster Network and Interconnects

At a minimum, a cluster must have a management network through which the head node can communicate with the other nodes. Cluster management should always be on a dedicated network to avoid situations where management of the cluster becomes impossible under high network load. Conversely, computational traffic should be affected as little as possible by management overhead. Typically a cluster will have additional networks or interconnects for computational traffic. EMANE generates two types of emulation-related traffic: OTA manager messages and event messages. OTA manager communication is between all platforms, whereas events are broadcast from the event server to the platforms and individual nodes of the emulation. Therefore, there needs to be logical or physical segmentation into three networks: management, event, and OTA manager. To minimize latency, a high-speed interconnect, if available, should be used for the OTA manager network (one way to steer this traffic is sketched at the end of Section 3.2).

3.2 Virtualization of Nodes

To emulate a network with more nodes than the number of physical nodes in the cluster, a method is needed to provide each emulated node with an independent environment. This can be accomplished by hosting virtual machines on the cluster nodes. In a distributed EMANE deployment, the NEM for each node is hosted inside the virtual machine. In a hybrid EMANE deployment, we deploy a platform server on each physical node, containing the NEMs for the virtual machines hosted on that node.
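Because Linux chooses the outgoing interface for multicast traffic from the routing table, one simple way to keep the event and OTA manager groups on their intended networks is to add a host route for each multicast group on every platform node. The following is a minimal sketch, not taken from our configuration: the group addresses are placeholders, and the interface names (br1 for the bridged event network, ib0 for the Infiniband interconnect) anticipate the naming used in Section 3.5.

#!/bin/bash
# Sketch: pin emulation multicast groups to dedicated networks.
# Replace the placeholder groups with the groups configured for the
# EMANE event service and OTA manager.
EVENT_GROUP=224.1.2.8/32    # event service group (placeholder)
OTA_GROUP=224.1.2.16/32     # OTA manager group (placeholder)
ip route add "$EVENT_GROUP" dev br1    # event network
ip route add "$OTA_GROUP" dev ib0      # OTA manager over the interconnect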
3.3 Design experiences
3.3.1 Small cluster

Our small-scale deployment is an upgrade of an existing MANE [5] emulation platform. The hardware consists of 1 head node, 8 compute nodes, and a single GPU-equipped server dedicated to event services and path-loss calculation. The compute nodes are eight-core servers capable of hosting 16 virtual machines each, for a total capacity of 128 emulated network nodes. This was designed to be a fully distributed EMANE deployment: each emulated node runs a transport, NEM, and event daemon.

A stacked pair of Cisco Catalyst 3750 [6] switches is configured to provide three logical VLAN segments, plus outside connectivity to the cluster. Ports 1 through 10 of switch 1 are the management network and as such have no special configuration, placing them on the default VLAN; the management interface of each node connects to these ports. Ports 1 through 10 of switch 2 are configured as the external network (VLAN 100). The external interface of the head node is connected to this VLAN, as is other hardware that must be accessible from the LAN.

The head node and compute nodes are equipped with two quad-port GigE NICs. The four ports on each NIC are bonded to form a 4Gb/sec channel. Port 12 and ports 17-48 of switch 1 are the event network and are assigned to VLAN 10. Each group of four ports starting at port 17 is configured as a bonded port-channel and connected to one NIC of a compute node; the event server is connected to port 12. Ports 17 through 48 of switch 2 are the OTA manager network and are assigned to VLAN 20. Each group of four ports starting at port 17 is configured as a bonded port-channel and connected to the remaining NIC of a compute node.

The head node is connected to the emulation networks to allow for the possibility of a NEM platform on the head node. Ports 13-16 of switch 1 and ports 13-16 of switch 2 are bonded to form an 8Gb/sec channel. Each switch is connected to a quad-port NIC of the head node.
This channel is configured on the switch and on the head node as a trunk with access to VLAN 10 and VLAN 20 traffic. The head node is configured to provide virtual VLAN interfaces sharing the 8Gb/sec maximum bandwidth (a sketch of one possible Linux-side configuration of such a trunk appears at the end of Section 3.3). While the ports for the head node could be configured identically to those of the cluster hosts, this configuration provides the head node with extra bandwidth to both the event and OTA networks.

Segregation of VLANs between switch 1 and switch 2 is important when the switch architecture is taken into consideration. The two switches forming the stack use a proprietary interconnect that provides 32Gb/sec throughput. By placing the management and event VLANs on switch 1, and the external and OTA VLANs on switch 2, the only traffic that crosses the interconnect is the trunked traffic to and from the head node. At 8Gb/sec, this load will not saturate the interconnect.

As this deployment is an upgrade of an existing, unmanaged deployment, some elements of the original cluster remain: the existing install of CentOS 5.6 [7] on the head node and event server was preserved, as was the existing shared storage and EMANE environment on the head node. CentOS 5.6 is based on the Linux 2.6.18 kernel and lacks virtualization support [8]. For this reason, Scientific Linux 6 [9] images were customized and employed for the compute nodes and virtual machines.

3.3.2 Medium-scale cluster

In comparison, the medium-scale deployment was planned as an EMANE emulation cluster from the outset. The hardware in this cluster consists of 2 head nodes (one active and one currently a cold spare), 28 emulation platform nodes, and 2 compute nodes equipped with NVIDIA Tesla GPGPUs. Two independent Gigabit Ethernet switches provide a management network (to which all nodes are connected) and an event network (to which platform and compute nodes are connected). Infiniband adapters and a 32-port Infiniband switch provide a high-speed interconnect between platform nodes. A Panasas storage appliance is connected via Ten-Gigabit Ethernet to the management network to provide shared storage [10]. No special bonding or VLAN configuration of the management or event network switches is required.

A hybrid EMANE deployment is used on the medium-scale cluster. Each platform node can host 16 virtual machines and one platform containing the NEMs for the virtual machines on the node. This allows the platform direct access to the interconnect. This cluster is expected to emulate networks of up to 448 nodes.

For this cluster we chose Infiscale's CAOS Linux [11]. CAOS is a Red Hat-based, HPC-targeted distribution that meets our criteria for a stable, cluster-ready operating system. A design goal for this deployment was to standardize on a single distribution for all nodes and to have a single image for all managed nodes. A CAOS install onto the head node includes a deployable node image. Our goal is to maintain one image for platform, compute, and virtual nodes, so the base CAOS node image was customized with packages from the CAOS repository to enable virtual machine support, plus the NVIDIA drivers and CUDA SDK [12] to enable OpenCL support.
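As a rough illustration of the trunk configuration described in Section 3.3.1, the following sketch builds a bonded channel and its VLAN interfaces on the head node using iproute2. It is not our production configuration: the slave interface names, the 802.3ad bonding mode, and the addresses are assumptions, and the switch-side port-channel must be configured to match.

#!/bin/bash
# Sketch: 8Gb/sec trunk on the head node as an eight-port 802.3ad bond
# carrying VLAN 10 (event) and VLAN 20 (OTA manager) as virtual interfaces.
set -e
ip link add bond0 type bond mode 802.3ad
for nic in eth1 eth2 eth3 eth4 eth5 eth6 eth7 eth8; do   # assumed port names
    ip link set "$nic" down
    ip link set "$nic" master bond0
done
ip link set bond0 up
ip link add link bond0 name bond0.10 type vlan id 10     # event network
ip link add link bond0 name bond0.20 type vlan id 20     # OTA manager network
ip addr add 10.1.0.254/16 dev bond0.10                   # example addressing
ip addr add 10.2.0.254/16 dev bond0.20
ip link set bond0.10 up
ip link set bond0.20 up

On the CentOS 5.6 head node described above, the same result would more likely be expressed as ifcfg-bond0 and ifcfg-bond0.10/.20 files; the sketch uses iproute2 commands only for brevity.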
3.4 Virtual machines and EMANE deployment

To use virtual machines as EMANE emulated network nodes, the virtual machines must have near-native performance, proper network connectivity, and acceptable latency. Our virtualization technology of choice is the KVM-enabled version of QEMU. Newer releases of the Linux kernel, coupled with virtualization-capable host hardware, allow QEMU to execute code in the virtual machine directly on the host CPU, providing true virtualization as opposed to CPU-level emulation [13].

QEMU can be configured to emulate one or more Ethernet interfaces in the virtual machine. Virtual network interfaces are connected to QEMU VLANs, which can be connected to virtual (tap) interfaces created by the QEMU process in the host OS. This mechanism creates a channel between the virtual machine and the host. On the host side of the channel, a software bridge can be used to connect one or more virtual machines to each other and to the physical network interfaces of the host, allowing the virtual machines access to the network [14].

Both of our EMANE clusters use three virtual interfaces per virtual machine (eth0, eth1, eth2), each connected to a separate QEMU VLAN, which in turn is connected to a tap on the host. The host side of each eth0 tap is added to a bridge with the physical eth0 interface of the host, connecting the eth0 interfaces of the virtual machines to the management network. The virtual eth1 taps are handled in the same way, being bridged with the host event interface or bonded channel. Connection of the virtual machine eth2 interface depends on the type of EMANE deployment.

In a fully distributed EMANE deployment, the NEMs are located at the emulation nodes, and therefore inside the virtual machines. The virtual eth2 interface is part of the OTA network and is bridged with the other virtual eth2 interfaces and the physical eth2 interface. However, the VM-to-host channel, the kernel network bridge, and the physical interface all incur some degree of latency, and the resulting total latency may unacceptably delay OTA messages. To illustrate this point, Table 1 shows round-trip times across selected paths, measured using mpong [15].

Table 1: Multicast RTT times (microseconds)

Path                      avg RTT   min RTT   max RTT
loopback                  10.8      10        3210
GigE                      187       86        5394
4xGigE (bonded channel)   70.1      51        1175
Infiniband                47.3      32        5220
VM to Host                118       104       5335
VM to VM (same host)      228.2     209       6706
VM to VM (over GigE)      349.1     284       5488
Non-Ethernet interfaces cannot be bridged with Ethernet interfaces [16]. This prohibits bridging the host-side tap interfaces to an Infiniband interface. QEMU does provide a workaround by which a QEMU VLAN can be encapsulated in multicast IP, but testing shows the resulting latency is equivalent to VM-to-VM over Gigabit Ethernet, negating the advantage of a high-speed interconnect.

The solution is a hybrid EMANE deployment. Moving the NEMs outside the virtual machine to a platform on the host OS moves the VM-to-host latency to the transport; the OTA latency is reduced to the latency of the OTA manager and its associated network or interconnect. In this configuration, the virtual eth2 taps are bridged with each other but not with any physical interface. (In Linux, an IP address is assigned to the bridge interface itself, not to the bridged interfaces; therefore, the bridge interface can be used as the platform endpoint.) A script to start QEMU virtual machines with the proper interfaces, and qemu-ifup/qemu-ifdown scripts to configure the proper bridging, are included in the appendices.
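The appendix scripts are not reproduced here, but the bridging they perform can be illustrated with a qemu-ifup-style sketch. The tap naming convention and the tap-to-bridge mapping below are assumptions for illustration; br0, br1, and br2 correspond to the bridges created in Section 3.5.

#!/bin/bash
# Illustrative qemu-ifup helper: QEMU invokes this with the tap name as $1.
# Assumes taps are named vmNNNN-ethX, where X selects the bridge.
tap="$1"
case "${tap##*-eth}" in
    0) bridge=br0 ;;   # management network
    1) bridge=br1 ;;   # event network
    2) bridge=br2 ;;   # transport-to-NEM bridge, no physical uplink
    *) echo "unknown tap interface: $tap" >&2; exit 1 ;;
esac
ip link set "$tap" up
brctl addif "$bridge" "$tap"

A matching qemu-ifdown would simply remove the tap from its bridge with brctl delif.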
3.5 Cluster Provisioning with Perceus

Perceus [17] is an open-source cluster provisioning system provided by Infiscale. Perceus is included with CAOS and is available for Red Hat and Debian. The Perceus master runs on the head node and provides, in addition to DHCP and DNS services for the cluster, network booting of cluster nodes. Cluster nodes are PXE-booted to a provisioning state, while virtual machines are started with the provisioning kernel and initrd (initial ramdisk image) loaded directly into memory. Nodes in the provisioning state contact the Perceus master and load a VNFS image over NFS, which contains a new kernel and a compressed root filesystem. The VNFS image then executes entirely from RAM, requiring no NFS or SAN access for the operating system.

Each node should first be added to the Perceus database by its management interface MAC address. The following commands add a platform node, a compute node, and a virtual machine. (The convention we use for virtual machine MAC addresses is 00:01:00:vv:nn:nn, where vv is the virtual interface index, in this case 00 for eth0, and nn:nn are the two bytes of the node number in big-endian format.)

perceus node add xx:xx:xx:xx:xx:xx n0001
perceus node set group n0001 nodes
perceus node set vnfs
perceus node add xx:xx:xx:xx:xx:xx c0001
perceus node set group c0001 compute
perceus node set vnfs
perceus node add 00:01:00:00:00:01 vm0001
perceus node set group vm0001 vm
perceus node set vnfs
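The virtual machine MAC convention lends itself to scripting; the following small helper (ours, for illustration, and not part of Perceus) derives a MAC address from a node ID and interface index:

#!/bin/bash
# Print a virtual machine MAC of the form 00:01:00:vv:nn:nn,
# where vv is the interface index and nn:nn is the node number (big-endian).
nodeid="${1:?usage: vm_mac <nodeid> [ifindex]}"
ifindex="${2:-0}"
printf '00:01:00:%02x:%02x:%02x\n' "$ifindex" $(( nodeid >> 8 )) $(( nodeid & 0xff ))

For example, "vm_mac 1 0" prints 00:01:00:00:00:01, the address used for vm0001 above.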
Perceus has the ability to create configuration files in the node filesystem before the new kernel is started, allowing the network interfaces to be configured in complex ways before the node boots. This is important for platform nodes
that will host virtual machines. For this feature to work properly, every network interface of the node should be added to /etc/hosts, with the node's hostname as an alias on the management interface entry. These entries determine the IP addresses assigned to each interface. (Note that for platform nodes, we assign IP addresses to the bridge interfaces, not to the network interfaces.)

10.0.1.1     n0001-br0    n0001
10.1.1.1     n0001-br1
10.2.1.1     n0001-ib0
10.0.2.1     c0001-eth0   c0001
10.1.2.1     c0001-eth1
10.0.10.1    vm0001-eth0  vm0001
10.1.10.1    vm0001-eth1
10.255.10.1  vm0001-eth2
The ipaddr module allows Perceus to configure the network interfaces of the nodes before the VNFS is booted. The following lines configure the ipaddr module to create the proper bridges and assign IP addresses based on the /etc/hosts entries. (Note the assignment of a fixed 10.255.0.1 address to br2 on each platform node; this is the bridge used for transport-to-NEM traffic and has no network connectivity.)

n* br0(TYPE=bridge&ENSLAVE=eth0):[default:NAME-NIC]/255.255.0.0/10.0.0.1 br1(TYPE=bridge&ENSLAVE=eth1):[default:NAME-NIC]/255.255.0.0 ib0:[default:NAME-NIC]/255.255.0.0 br2(TYPE=bridge):10.255.0.1/255.255.0.0
c* eth0:[default:NAME-NIC]/255.255.0.0/10.0.0.1 eth1:[default:NAME-NIC]/255.255.0.0
vm* eth0:[default:NAME-NIC]/255.255.0.0/10.0.0.1 eth1:[default:NAME-NIC]/255.255.0.0 eth2:[default:NAME-NIC]/255.255.0.0
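After a platform node is provisioned, the resulting layout can be sanity-checked in a couple of commands (a trivial check of our own, not part of Perceus):

brctl show          # br0 and br1 should enslave eth0 and eth1; br2 should have no physical port
ip addr show br2    # expect the fixed 10.255.0.1 transport-to-NEM address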
3.6 Job Management with SLURM

A compute cluster of any size can benefit from resource management. The three important aspects of resource management are allocation, control, and monitoring. Our criteria are that a resource manager must be:

- able to execute parallel jobs, so that the emulation can be started on all nodes and virtual machines;
- scriptable, as the EMANE transport, event daemon, and related processes such as GPSd must be started in the correct order;
- able to control child processes spawned by the job.

The Simple Linux Utility for Resource Management (SLURM) [18] fits our criteria and has the additional advantages of being lightweight, efficient, included in CAOS Linux, and having a single point of configuration. The latter is important with regard to maintaining a single VNFS image for the entire cluster: a single configuration file is shared by all head nodes and copied into the VNFS. SLURM nodes determine their role based on hostname.
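To give a sense of what this single shared configuration looks like, the fragment below sketches a minimal slurm.conf for a cluster of this shape. The host names, node counts, and partition layout are illustrative, not our production file.

# Illustrative slurm.conf fragment: one file shared by the head node and every VNFS image.
ControlMachine=head1
# Physical platform nodes and the virtual machines they host, identified by hostname.
NodeName=n[0001-0028]  CPUs=8 State=UNKNOWN
NodeName=vm[0001-0448] CPUs=1 State=UNKNOWN
PartitionName=emane Nodes=n[0001-0028],vm[0001-0448] Default=YES MaxTime=INFINITE State=UP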
SLURM provides two methods of starting jobs from the command line: srun and sbatch. srun executes the given command and arguments in parallel according to the options given; output from the running processes is piped back to the shell. srun returns when the remote processes exit, and it can optionally provide an interactive shell that sends input in parallel to all processes in the job. sbatch is a job script submission utility. A shell script submitted with sbatch is queued to run on the first node of the allocation when the required set of nodes becomes available. Commands to be run in parallel across the allocation are invoked from the script using srun; each invocation of srun is considered a job step. Steps can be run concurrently in the background (via the "&" shell syntax). However, when the job script exits, all processes spawned from the script are terminated; therefore, the script must transfer execution with "exec" or not exit while child processes are running.

SLURM passes the environment of the srun/sbatch command to the job processes, in addition to setting job-specific variables. The SLURM_PROCID and SLURM_STEPID variables are the most useful. SLURM_STEPID indicates which step of the job script is executing: it starts at zero and increments each time srun is invoked in the job script. If srun is invoked from the command line, SLURM_STEPID is 0. SLURM_STEPID is not set in the environment of the job script itself, which allows a shell script to determine whether it is running as a job script or as the actual job. SLURM_PROCID indicates the parallel job index, starting at zero and incrementing with each parallel task spawned by that step. Testing these variables allows a single script to serve all use cases and allows the job to generate the ID of the node.

Consider the start_kvm script for starting virtual machines. This script can be invoked in one of three ways: directly from the shell, in parallel via srun, or submitted via sbatch. The script first checks for the SLURM_PROCID variable. If SLURM_PROCID is present but SLURM_STEPID is not, the script is being run by sbatch as a job script on the first node of the allocation; in this case, the script invokes srun with itself as the command to run in parallel on the allocated nodes. If SLURM_PROCID is not set or SLURM_STEPID is set, the script is being invoked from the shell or is running as a parallel task; in either case the virtual machine should be started. The script attempts to derive the NODEID from SLURM_PROCID and falls back to its command-line parameters if SLURM_PROCID is not set. As an example, suppose the following command is executed:

sbatch -N16 -n128 start_kvm
SLURM would allocate 16 nodes to run 128 parallel tasks and spawn a single instance of start_kvm on the first node of the allocation. start_kvm would detect that it is being run as a job script and invoke srun on itself, spawning eight instances of start_kvm on each of the 16 nodes in the allocation. These instances of start_kvm would detect that they are being run in parallel and, based on SLURM_PROCID, start virtual machines with NODEIDs of {1,2,3,4,5,6,7,8} on the first node, {9,10,11,12,13,14,15,16} on the second, and so on. The NODEID is used to generate the MAC addresses of the virtual interfaces, allowing Perceus to provision the nodes.

An emane_node script to start the EMANE transport, event daemon, and related processes is included in the appendix. Some elements of the EMANE configuration depend on the IP address of the node, so the emane_node script determines the node ID from the hostname of the node rather than from SLURM_PROCID.
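The start_kvm script itself is relegated to the appendices; the dispatch logic described above can be sketched as follows. This is not the original script: the QEMU invocation, image path, and memory size are placeholders, and only a single network interface is shown.

#!/bin/bash
# Sketch of the start_kvm dispatch logic (Section 3.6). Placeholders throughout.

if [ -n "$SLURM_PROCID" ] && [ -z "$SLURM_STEPID" ]; then
    # Running under sbatch as the job script on the first node of the
    # allocation: re-invoke this script in parallel as a job step.
    exec srun "$0" "$@"
fi

# Running from the shell or as a parallel task: determine the node ID.
if [ -n "$SLURM_PROCID" ]; then
    NODEID=$(( SLURM_PROCID + 1 ))     # SLURM_PROCID counts from 0, NODEIDs from 1
else
    NODEID="${1:?usage: start_kvm <nodeid>}"
fi

# Derive the VM name and its eth0 MAC (00:01:00:vv:nn:nn convention, Section 3.5).
vmname=$(printf 'vm%04d' "$NODEID")
mac=$(printf '00:01:00:00:%02x:%02x' $(( NODEID >> 8 )) $(( NODEID & 0xff )))

# Placeholder VM launch; the real script configures three interfaces, and the
# qemu-kvm binary name and options vary by distribution.
exec qemu-kvm -name "$vmname" -m 512 \
    -drive file="/shared/images/${vmname}.img" \
    -net nic,macaddr="$mac" \
    -net tap,ifname="${vmname}-eth0",script=/etc/qemu-ifup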
4 Evaluation

To evaluate the performance of the small and medium-scale clusters, we chose to measure the latency of delivering OTA manager packets to the NEM platforms.

4.1 Measurement of OTA Latency

OTA manager network latency was measured for a 16-node emulation in EMANE. Each node emits UDP broadcast traffic to the transport interface approximately once per second. The broadcast traffic is encapsulated and unicast by the transport to the NEM platform, where it is relayed to the other NEMs on the platform and across the OTA manager network as UDP multicast traffic. A packet logger runs on all NEM platform nodes, listening to the multicast group and port used by the OTA manager. The packet logger records the source IP address, the receive UNIX timestamp and microseconds, the IP header checksum, and the IP ID of packets on the OTA network. Time synchronization of all platform nodes is maintained using a CenGen-recommended method [19].

For any given packet, the tuple of (source IP, timestamp, checksum) serves to identify the packet in each node's log. The absolute send time (as timestamp + microseconds/1000000) is determined per packet from the source node's log; for the source node, the latency is zero. For each other node, the send time is subtracted from the logged receive time to determine the latency. The absolute send time is translated to an elapsed time by subtracting the lowest timestamp recorded, and the resulting tuple of (time, node, latency) is output.

Figure 1 and Figure 2 plot time versus latency for a distributed deployment and for a hybrid deployment with interconnect. Figure 1 is a distributed deployment of 16 virtual machines across 8 nodes, with one NEM and one packet logger per virtual machine. The OTA manager network is a quad-gigabit Ethernet bonded channel. 3475 distinct packets were logged, with an average latency of 133 microseconds and a maximum latency of 774 microseconds. Figure 2 is a hybrid deployment of 16 virtual machines across 8 nodes, with one NEM platform and one packet logger per node. The OTA manager network is IP-over-Infiniband. 2213 distinct packets were logged, with an average latency of 17 microseconds and a maximum latency of 78 microseconds.
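The reduction from the raw per-node logs to (time, node, latency) tuples can be sketched with a short awk pass. The log file naming and whitespace-separated field layout below are assumptions, and the packet key is simplified to (source IP, checksum, IP ID); the earliest observation of a packet, which is the source node's own log entry, is taken as its send time.

#!/bin/bash
# Sketch: merge per-node OTA logs (otalog.<node>, fields:
# srcip epoch_sec usec ip_cksum ip_id) into "elapsed node latency_us" tuples.
awk '
  {
    t    = $2 + $3 / 1e6
    node = FILENAME; sub(/^.*otalog\./, "", node)
    key  = $1 SUBSEP $4 SUBSEP $5
    if (!(key in send) || t < send[key]) send[key] = t   # earliest sighting = send time
    recv[key SUBSEP node] = t
  }
  END {
    base = ""
    for (k in send) if (base == "" || send[k] < base) base = send[k]
    for (rk in recv) {
      split(rk, a, SUBSEP)
      key = a[1] SUBSEP a[2] SUBSEP a[3]
      printf "%.6f %s %.0f\n", send[key] - base, a[4], (recv[rk] - send[key]) * 1e6
    }
  }
' otalog.*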
Figure 1: Distributed deployment, 4xGigE. Tsend vs. Treceive-Tsend in microseconds.
Figure 2: Hybrid deployment, Infiniband. Tsend vs. Treceive-Tsend in microseconds.

5 Summary and Future Direction

We have presented guidelines for designing, configuring, and managing cluster systems capable of high-performance, low-latency wireless network emulation using the EMANE framework. Future work will focus on the following:
- Develop a web-based front-end for configuring, submitting, and monitoring EMANE jobs. This will greatly simplify sharing the resources of a large cluster.
- Scale to a planned cluster of approximately 5000 cores and 500 GPGPUs across several hundred nodes. Ten-Gigabit Ethernet interfaces interconnected via a fat tree of low-latency switches will serve as a shared event and OTA manager network. This is expected to support emulation of multiple networks with 5000-10000 nodes total.
References

[1] Cavalcante, A.M.; de Sousa, M.J.; Costa, J.C.W.; Frances, C.R.L.; Protasio dos Santos Cavalcante, G.; de Souza Sales, C. "3D ray-tracing parallel model for radio-propagation prediction," 2006 International Telecommunications Symposium, pp. 269-274, 3-6 Sept. 2006. doi: 10.1109/ITS.2006.4433282. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4433282&isnumber=4433230
[2] Patel, K.; Galgano, S. (2010). Emulation Experimentation Using the Extendable Mobile Ad-hoc Emulator. Retrieved from http://labs.cengen.com/emane/doc/emaneemulation_experimentation.pdf
[3] CenGen, Inc. EMANE User Training. Retrieved from http://labs.cengen.com/emane/doc/0.6.2/training/emaneusertraining-slides.0.6.2.20100301-1.pdf
[4] Lehmann, T. (2009). Building a Linux-Based High-Performance Compute Cluster. Linux Journal. Retrieved from http://www.linuxjournal.com/magazine/buildinglinux-based-high-performance-compute-cluster
[5] US Naval Research Laboratory. Mobile Ad-hoc Network Emulator (MANE). Retrieved from http://cs.itd.nrl.navy.mil/work/mane/index.php
[6] Cisco Systems, Inc. Cisco Catalyst 3750 Series Switches. Retrieved from http://www.cisco.com/en/US/products/hw/switches/ps5023/index.html
[7] CentOS Project. The Community ENTerprise Operating System. Retrieved from http://www.centos.org/
[8] KVM. Retrieved from http://www.linux-kvm.org/page/Main_Page
[9] Scientific Linux. Retrieved from http://www.scientificlinux.org/
[10] Panasas, Inc. Panasas: Parallel File System for HPC Storage. Retrieved from http://www.panasas.com/
[11] Infiscale, Inc. (2008). CAOS Linux. Retrieved from http://www.caoslinux.org/
[12] NVIDIA, Inc. (2011). CUDA Toolkit 4.0. Retrieved June 2011 from http://developer.nvidia.com/cuda-toolkit-40
[13] Bellard, F. (2011). QEMU open source processor emulator. Retrieved June 2011 from http://wiki.qemu.org/Main_Page
[14] QEMU User Documentation, Sec. 3.7. Retrieved June 2011 from http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fnetwork
[15] Informatica, Inc. (2010). Test Your Network's Multicast Capability. Retrieved from http://www.29west.com/docs/TestNet/index.html
[16] http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge#What_can_be_bridged.3F
[17] Infiscale, Inc. Provision Enterprise Resources & Clusters Enabling Uniform Systems (Perceus). Retrieved from http://www.perceus.org
[18] Lawrence Livermore National Laboratory. (2011). Simple Linux Utility for Resource Management. Retrieved June 2011 from https://computing.llnl.gov/linux/slurm/
[19] CenGen, Inc. EMANE 0.7.1 Documentation, Sec. 11. Retrieved from http://labs.cengen.com/emane/doc/0.7.1/html/emane.0.7.1.html#id370045