An overview of BonFIRE features

An overview of BonFIRE's features is given below, with links to more information.

Please note that not all features are supported by all testbeds in BonFIRE. For an overview of the features supported by each testbed and more information about using the different testbeds, please see the page on Using the BonFIRE Testbeds.

Multiple client tools

To create experiments and Cloud resources in BonFIRE, it is possible to issue raw HTTP commands to the BonFIRE API via, for example, cURL. However, we also offer multiple client tools for interacting easily and effectively with BonFIRE:

  • BonFIRE Portal: an intuitive graphical client tool that runs in your web browser. The Portal also gives you the option to set up a Managed Experiment via a step-by-step procedure that defines the initial deployment of compute, storage and networking resources for your experiment. As part of this procedure, the Portal creates an Experiment Descriptor, which you can save and reuse.
  • BonFIRE Experiment Descriptor: instead of following a step-by-step procedure on the Portal, you can write or edit an experiment descriptor in either JSON or OVF. These descriptors are interpreted by an Experiment Manager, allowing you to describe all of the resources for an initial deployment of an experiment without needing to know OCCI or how to form HTTP messages.
  • Restfully: a general-purpose client library for RESTful APIs, written in Ruby. Its goal is to abstract away the nitty-gritty details of exchanging HTTP requests between the user agent and the server. Restfully also allows you to write scripts that deploy your experiment automatically.
  • Command Line Interface Tools: BonFIRE also offers CLI tools for interacting with the BonFIRE API from the command line, either interactively (manually) or programmatically (i.e., scripted).
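
As mentioned above, the BonFIRE API can also be driven with raw HTTP from any HTTP client, not only cURL. The minimal sketch below uses Python's requests library; the API root, resource path and media type shown are illustrative assumptions rather than an authoritative description of the API.

  # Minimal sketch of talking to the BonFIRE API over raw HTTP with
  # Python's "requests" library. The base URL, path and Accept header
  # below are illustrative assumptions; consult the API documentation
  # for the authoritative values.
  import requests

  API_ROOT = "https://api.bonfire-project.eu"    # assumed API root
  AUTH = ("my-username", "my-password")          # your BonFIRE credentials

  # List the experiments visible to this account (assumed /experiments path).
  response = requests.get(
      API_ROOT + "/experiments",
      auth=AUTH,
      headers={"Accept": "application/vnd.bonfire+xml"},  # assumed media type
  )
  response.raise_for_status()
  print(response.text)  # OCCI XML describing the experiments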

Infrastructure sites and resources

BonFIRE integrates multiple infrastructure sites that are geographically distributed across Europe, offering heterogeneous resources: EPCC in Edinburgh, Scotland; HP Cells in Bristol, England; Inria in Rennes, France; HLRS in Stuttgart, Germany; IBBT Virtual Wall in Ghent, Belgium; and PSNC in Poznan, Poland.

Each infrastructure site offers compute, storage and network resources, which can be deployed within an experiment in BonFIRE. Some infrastructure sites offer additional functionalities, such as on-request resources (at Inria) and advanced network emulation (at the Virtual Wall). The resources are accessed with single sign-on via SSH.

On-request compute resources

Inria currently offers on-request resources in BonFIRE, allowing experimenters to reserve large quantities of physical hardware (162 nodes/1800 cores available). This gives experimenters the flexibility to perform large-scale experimentation, as well as greater control of the experiment variables, since exclusive access to the physical hosts is possible. It is also possible to control which physical host Virtual Machines are deployed to.

Virtual machines and instance types

BonFIRE offers several Virtual Machine (VM) images based on Debian Squeeze, which vary in storage size. The storage size can be further extended with block storage, as discussed below.

Experimenters can create new VM images by deploying a BonFIRE base image, installing and configuring software, and saving the result under a desired name. The size of the new VM image will be the same as that of the base image. BonFIRE does not currently offer the ability for experimenters to create VM images locally and upload them to BonFIRE, because a certain amount of configuration is required for images to integrate with BonFIRE.

When deploying compute resources, an instance type must be set. The instance types are labelled according to a small/medium/large taxonomy, varying the number of virtual CPU cores and the RAM size. Please note that the availability of instance types varies across the different infrastructure sites. It is, however, also possible to deploy custom instance types, giving you fine-grained control of the compute resource specifications (including the percentage of a CPU).
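
To make the distinction concrete, the sketch below shows how a compute description could carry either a named instance type or explicit CPU and RAM values. The field names ("instance_type", "cpu", "memory_mb") are hypothetical and only illustrate the idea; they are not the exact keys used by the BonFIRE descriptors.

  # Hypothetical sketch: named instance type versus custom specification.
  def compute_spec(name, instance_type=None, cpu=None, memory_mb=None):
      """Return a dictionary describing a compute resource.

      Either pass a named instance type (e.g. "small", "medium", "large")
      or explicit values, e.g. cpu=0.5 for half a virtual CPU core.
      """
      spec = {"name": name}
      if instance_type is not None:
          spec["instance_type"] = instance_type
      else:
          spec["cpu"] = cpu              # fraction or number of virtual cores
          spec["memory_mb"] = memory_mb  # RAM in megabytes
      return spec

  # A standard instance type versus a custom one with half a CPU core.
  standard = compute_spec("server-vm", instance_type="small")
  custom = compute_spec("probe-vm", cpu=0.5, memory_mb=256)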

Storage

As mentioned above, the VM images come with a certain OS storage size. If these images do not provide enough storage, the VMs can be extended with storage resources. These are data blocks for which you can define the file system (e.g., ext3, jfs, reiserfs) and the desired storage size, and which are mounted in the VMs. It is possible to make this storage extend the root partition of your compute node instead of being mounted as a separate partition.
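
The snippet below sketches what such a data-block description could look like as a simple mapping; the key names are hypothetical and only illustrate the two choices described above, the file-system type and the size.

  # Hypothetical data-block description; key names are illustrative only.
  data_block = {
      "name": "experiment-results",
      "fstype": "ext3",   # e.g. ext3, jfs, reiserfs
      "size_mb": 4096,    # desired storage size
  }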

By default, storage resources created in BonFIRE are duplicated when they are deployed with compute resources, and are destroyed when the experiment finishes. Whilst other resources (compute and network) must be created within the context of an experiment, storage resources can also be created outside of an experiment and will therefore outlive any experiments that use them. Moreover, it is possible to persist storage resources on the OpenNebula testbeds in BonFIRE (see Infrastructure), so that data written to the storage during the execution of an experiment is saved when the compute resource is shut down.

Another special type of storage is shared storage. At the moment, shared storages can only be created at the be-ibbt testbed. Unlike the other storage types, shared storages are accessible by multiple computes at the same time. If the storage was created outside the scope of an experiment, it can even be accessed by computes from different experiments.

Networking

The infrastructure sites in BonFIRE are interconnected via the public Internet, but not all providers are able to provision public IPv4 addresses. Therefore, BonFIRE operates a VPN to tunnel traffic between sites over what we call the BonFIRE WAN. For more information, please have a look at the Networking Overview.

One of the usage scenarios in BonFIRE is Cloud With Emulated Network Implications, for which the networking on the Virtual Wall can be controlled. As an experimenter, you can control the networking in three ways:

  • Network topologies: the Virtual Wall allows you to set up any network topology you want and, by default, provides shortest path routing between any compute resources installed on the network nodes, with Gigabit connectivity.
  • Network impairments: if you want to study the influence of network impairments on the performance of your system under test, you can specify link characteristics such as bandwidth, latency and loss rate on each link, both statically and dynamically.
  • Background traffic: it is also possible to introduce TCP or UDP streams on the links to represent background traffic, with parameters to specify the packet size and the throughput (#packets/second).

For more information about the controlled networking at the Virtual Wall, see Emulated Network at the Virtual Wall.

In the next release of BonFIRE, there will be more control of the networking between certain sites. An integration with AutoBAHN will give Bandwidth-on-Demand between VMs on the EPCC and PSNC sites. Interconnection with FEDERICA is also in progress.

Monitoring

BonFIRE provides fine-grained monitoring information about the virtual resources in your experiments, as well as about the physical hosts (on the OpenNebula testbeds) that the virtual resources are deployed on. The latter is a unique feature of BonFIRE, which allows you to correlate observations from the data collected in your VMs (VM performance metrics or metrics from applications/services running in your VMs) with events on the physical host.

The monitoring solution offered by BonFIRE is based on the open-source monitoring software Zabbix, which comprises two major components: the Zabbix server and the Zabbix agent. In BonFIRE, the server is referred to as an ‘Aggregator’ and is deployed on a separate resource, whilst the agents reside in the deployed VMs. The Zabbix Aggregator collects monitoring information reported by the Zabbix Agents, giving you a single view of the data for all your experiment resources.

The Zabbix GUI is easily accessible via the BonFIRE Portal, where you can view the monitoring data and configure the metrics you want to monitor. The default configuration in BonFIRE monitors over 100 VM metrics and 16 infrastructure metrics. In addition to these, you can add your own metrics, which makes it easy to monitor your applications/services if desired. Moreover, you can also specify the monitoring metrics in OCCI when you deploy compute resources.
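
Besides the GUI, the monitoring data can be retrieved programmatically from the Aggregator. The minimal sketch below queries the generic Zabbix JSON-RPC API with Python's requests library; the Aggregator URL, credentials and item id are illustrative assumptions, and the method names follow the generic Zabbix API, which may differ slightly on the Zabbix version deployed in BonFIRE.

  # Minimal sketch of pulling monitoring data from a Zabbix Aggregator's
  # JSON-RPC endpoint. URL, credentials and item id are assumptions.
  import requests

  ZABBIX_API = "http://my-aggregator.example/zabbix/api_jsonrpc.php"  # assumed

  def zabbix_call(method, params, auth=None, req_id=1):
      """Issue a single Zabbix JSON-RPC call and return its result."""
      payload = {"jsonrpc": "2.0", "method": method, "params": params,
                 "auth": auth, "id": req_id}
      reply = requests.post(ZABBIX_API, json=payload).json()
      return reply["result"]

  # Log in, then fetch recent history samples for a (hypothetical) item id.
  token = zabbix_call("user.login", {"user": "Admin", "password": "zabbix"})
  history = zabbix_call("history.get",
                        {"itemids": ["23296"], "limit": 10, "output": "extend"},
                        auth=token)
  print(history)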

Elasticity

It is a goal of BonFIRE to make Cloud features easy to use, and Elasticity as a Service (EaaS), new in Release 3, does just that. Elasticity refers to dynamically increasing or decreasing resources according to load/demand, which is a popular selling point of ‘the Cloud’. BonFIRE provides a VM image with an Elasticity Engine that performs load balancing based on either HAProxy or Kamailio.

The EaaS in BonFIRE uses monitoring information from the Zabbix Aggregator (see above) deployed in an experiment to retrieve the load of the compute resources. It interoperates with the BonFIRE API to dynamically add and remove compute resources based on elasticity triggering rules specified by the user.

Notifications

As your experiment is executing, particularly if elasticity is used, it may be important to track changes/events in the experiment. BonFIRE offers a way for clients to subscribe to notifications of experiment state changes, as well as resource state changes, such as when resources are created, updated, or destroyed. The state changes are available as events on a message queue in BonFIRE, implemented with RabbitMQ.

The experiment states are in accordance with the Experiment Lifecycle, and the resource (compute, storage and network) states are in accordance with the specified OCCI States.
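
Since the events are delivered over RabbitMQ, they can be consumed with any AMQP client. The minimal sketch below uses the Python pika library; the host, credentials, exchange name and routing key are illustrative assumptions, not the values documented for the BonFIRE message queue.

  # Minimal sketch of consuming BonFIRE state-change notifications with the
  # "pika" AMQP client. Host, credentials, exchange and routing key are
  # illustrative assumptions for this example.
  import pika

  credentials = pika.PlainCredentials("my-username", "my-password")
  connection = pika.BlockingConnection(
      pika.ConnectionParameters(host="mq.bonfire-project.eu",  # assumed host
                                credentials=credentials))
  channel = connection.channel()

  # Bind a private, auto-named queue to the (assumed) experiments exchange.
  queue = channel.queue_declare(queue="", exclusive=True).method.queue
  channel.queue_bind(exchange="experiments", queue=queue, routing_key="#")

  def on_event(ch, method, properties, body):
      """Print each experiment/resource state-change event as it arrives."""
      print(body.decode())

  channel.basic_consume(queue=queue, on_message_callback=on_event, auto_ack=True)
  channel.start_consuming()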

Contextualisation

Contextualisation elements in OCCI can be used to pass initialisation values for an experiment in the form of simple key-value pairs. This mechanism is used by the BonFIRE testbeds and the BonFIRE API, but is also available to users for specifying, for example:

  • The IP address of the Zabbix Aggregator when deploying compute resources (the Portal will do this automatically).
  • Any custom monitoring metrics.
  • Elasticity trigger rules, if the EaaS is used.
  • Post-install scripts, which can run after the VM has been deployed.
  • Any additional SSH keys, to allow multiple users access to experiment VMs.

The contextualisation element is generic, so any key-value pairs can be defined. The contextualisation variables are written to /etc/default/bonfire, so you can source this file to access the variables within your experiment VMs.
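
Because /etc/default/bonfire holds shell-style KEY=value lines, it can be sourced from a shell script or parsed directly. The sketch below reads it from Python inside a VM; the quoting rules are simplified, and the example key name is an assumption for illustration.

  # Minimal sketch of reading contextualisation variables inside a VM from
  # /etc/default/bonfire (shell-style KEY=value lines). Quoting is handled
  # only in a simplified way here.
  def read_bonfire_context(path="/etc/default/bonfire"):
      """Return the contextualisation key-value pairs as a dictionary."""
      context = {}
      with open(path) as f:
          for line in f:
              line = line.strip()
              if not line or line.startswith("#") or "=" not in line:
                  continue
              key, _, value = line.partition("=")
              context[key.strip()] = value.strip().strip('"').strip("'")
      return context

  if __name__ == "__main__":
      ctx = read_bonfire_context()
      # e.g. the Zabbix Aggregator IP passed at deployment time (key name assumed)
      print(ctx.get("BONFIRE_MONITOR_IP", "not set"))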

Advanced features

As discussed above, BonFIRE offers the features you would expect from a Cloud provider. However, BonFIRE goes beyond that to offer a research facility with maximum control and observability, providing the following advanced features:

  • Experiment descriptors:

    As discussed above, there are many client tools available in BonFIRE. An important feature of BonFIRE is the availability of experiment descriptors, which can be used to specify the deployment of compute, storage and network resources, along with contextualisation such as monitoring metrics and elasticity rules. The experiment descriptor can be written in either JSON or OVF, depending on preference. This makes it very easy to share and build upon existing experiment deployments.

  • Infrastructure monitoring:

    Unlike other public Cloud providers, BonFIRE is able to expose much more information about what is going on “behind the scenes” to help maximise observability in the experiments. Infrastructure monitoring is available on the OpenNebula testbeds in BonFIRE, exposing metrics such as the number of VMs and the CPU load of the physical host. This source of monitoring information can be essential to explain observations made in experiments that could be caused by events outside the experimenter’s control.

  • Time-stamped information for internal BonFIRE processes:

    In its quest for advanced monitoring, BonFIRE provides experimenters with the timestamps of specific experiment events. Driven by specific experimenter requirements, the timestamps of VM requests passing through the Experiment Manager are logged and served via HTTP.

  • Infrastructure and VM logs:

    To further increase the observability in BonFIRE, the testbeds publish hypervisor information about the state of the site hosts. Additionally, in-depth timestamped information about the status changes of each VM is available, easily accessible from the VM log linked from the VM page on the BonFIRE Portal.

  • Shared storage:

    Another special type of storage is shared storage. At the moment, these storage resources can only be created at the be-ibbt testbed, but they may be used by compute resources at any of the BonFIRE testbeds. Unlike the other storage types, shared storages are accessible by multiple, potentially distributed, computes at the same time. If the storage was created outside the scope of an experiment, it can even be accessed by computes from different experiments.

  • Custom instance types:

    Central to VM instantiation is the notion of instance types: the combination of CPUs and RAM available to the created VM. BonFIRE lets you control the make-up of your VM by allowing you to specify any such combination. Although combinations exceeding the available resources will not be instantiated, this feature is another example of BonFIRE functionality that is particularly conducive to experimentation.

  • Exclusive access to physical hosts:

    Contention between VMs running on the same physical host is to be expected, and although infrastructure monitoring can help interpret experiment results, BonFIRE offers experimenters control over this through exclusive access to physical hosts. This is possible via the on-request resources.

  • Control the deployments of VMs on physical clusters:

    BonFIRE has always allowed its users to specify which site they want their VMs to run on. Since Release 3, all BonFIRE interfaces also allow you to specify the specific host on which to deploy a VM. This is essential for using on-request resources, but also provides fine-grained control of your experiment. In combination with exclusive access to physical hosts, it is therefore possible to run controlled experiments to, for example, ensure that there is no contention from other VMs, or to deliberately induce contention by varying the load of the physical host.

  • Background network traffic emulation:

    The BonFIRE Virtual Wall testbed supports configuration of the network topology and network impairments such as bandwidth, latency and loss rate on each network link. New in Release 3 is the ability to also introduce background traffic on a user-specified network link. The background traffic can be configured to be a TCP or UDP stream, with parameters to specify the packet size and the throughput (#packets/second).

 
