Hyperglance has supported Ceilometer performance sampling since release 4.0. Charts for single nodes are available for user-configured metrics, and filters can request sample data in order to make visual changes to the topology.


The user can configure certain aspects of Hyperglance's Ceilometer support to better suit specific needs. There are two main configurable areas:


  1. Which metrics Hyperglance will be able to poll from Ceilometer
  2. How long a metric is considered "fresh"


1. Defining the list of metrics Hyperglance looks at


The list of metrics (or meters) that a typical Ceilometer installation can calculate is quite long, and it can even be extended by the user. However, some of these metrics may not be relevant for everyone. The full list of metrics can be found at http://docs.openstack.org/admin-guide-cloud/telemetry-measurements.html. The values on that page are what you need when configuring Hyperglance to fetch metric data it does not already fetch by default. Bear in mind, though, that your Ceilometer configuration is probably not collecting all of them. To find out which meters are actually being collected, open the Resource Usage Overview page on your local OpenStack dashboard, which lists them.
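
Alternatively, if you have command-line access to the OpenStack environment, the Ceilometer client can list the meters currently being collected. The commands below are only a sketch, assuming the python-ceilometerclient is installed and your OpenStack credentials are sourced; the exact options may vary between releases:

# List every meter Ceilometer currently knows about
ceilometer meter-list

# Show recent samples for a single meter (e.g. cpu_util) to confirm data is flowing
ceilometer sample-list -m cpu_util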




With that in mind, Hyperglance maintains a configurable subset of metrics it is interested in. This list is defined in the local Hyperglance installation, usually at:

 

/opt/wildfly/standalone/deployments/collector-plugins/OpenStackCollector.ear/lib/config.jar/runtime.properties

 

In that file you will find a property called ceilometer-relevant-metrics. Its value is a list of metric entries separated by double semi-colons (;;). Each entry follows this pattern:


<openstackName>::<hyperglanceUIName>


Explaining each value:


  • openstackName - The metric name as used in Ceilometer
  • hyperglanceUIName - The metric name as displayed in Hyperglance, for instance in its charts

 


To add a new metric to Hyperglance, first ensure it is being correctly calculated in Ceilometer, then append a new entry to the end of the double semi-colon separated list. Naturally, the new entry must follow the structure defined above.


As an example, let's assume we want Hyperglance to fetch data related to the metric disk.read.requests and that we're starting from an out-of-the-box Hyperglance configuration. Looking at the Ceilometer metrics page linked above, we can define the metric according to Hyperglance's pattern as follows:

 

disk.read.requests::Disk Read Requests


Where:

  • disk.read.requests - The metric name, as defined on the Ceilometer metrics page
  • Disk Read Requests - The name we want displayed in the Hyperglance user interface


All we need to do is append that entry to the pre-existing configuration. The final configuration parameter would look like this:


ceilometer-relevant-metrics=cpu_util::CPU Utilization;;disk.read.requests::Disk Read Requests
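
Additional metrics can be appended in the same way, each separated by a double semi-colon. For illustration only, the third entry below (disk.write.requests) is a hypothetical addition and assumes that meter is also enabled in your Ceilometer installation:

ceilometer-relevant-metrics=cpu_util::CPU Utilization;;disk.read.requests::Disk Read Requests;;disk.write.requests::Disk Write Requests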


After the desired metrics are configured, the server needs to be restarted (service jboss restart). If the metric was correctly defined, and Ceilometer is generating it, it will show up in the list of options of a node's data chart.


And that's it! Hyperglance should now be able to pull data related to this new metric.


2. Metric freshness


The metric freshness parameter, defined in the runtime.properties file as metric.freshness.lifetime.in.minutes, can be used to improve the performance of certain Hyperglance/Ceilometer installations, particularly when applying filters that change the topology based on performance metrics. It can, however, be left alone and the system will still work, as it has a default value of 10 minutes.
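
For reference, the default setting in runtime.properties corresponds to the following line (10 minutes, as mentioned above):

metric.freshness.lifetime.in.minutes=10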

 

This parameter should be changed if the Ceilometer installation integrated with Hyperglance generates meter data at a slower, or even faster, rate than one sample every 10 minutes. The parameter is used to heuristically determine which samples are still "fresh" in a given time-stamped list of samples for any meter. Therefore, if Ceilometer is generating data for the relevant meters at a rate different from the currently configured one, you should change this value to match Ceilometer. After the properties file is changed, the server must be restarted (service jboss restart) for the change to take effect.
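
If you are unsure how often your Ceilometer deployment samples a given meter, the polling interval is typically defined in its pipeline configuration. The snippet below is only a sketch of what that file commonly looks like on a default installation; the exact file location, source names and meter lists will vary with your deployment:

# /etc/ceilometer/pipeline.yaml (interval is given in seconds; 600 = one sample every 10 minutes)
sources:
    - name: cpu_source
      interval: 600
      meters:
          - "cpu"
      sinks:
          - cpu_sink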

 

To explain how this value may impact Hyperglance's metric-polling performance, let's consider the following example:

 

Consider a network of 1,000 servers whose CPU utilization is monitored by Ceilometer and Hyperglance in order to highlight possible over-utilization, the latter via a filter named F. Ceilometer is configured to generate CPU utilization samples every 5 minutes, while Hyperglance is set to consider a sample fresh if it is at most 10 minutes old.

 

When fetching the data from Ceilometer that will be fed into F, Hyperglance requests samples that are at most 10 minutes old. Since Ceilometer generates two CPU utilization samples every 10 minutes, Hyperglance fetches two values for every node, 2,000 samples in total. However, as only the latest value is used, half of the samples are ignored, even though the cost of transferring and processing those 1,000 redundant samples, on both Hyperglance's and Ceilometer's side, has already been paid. The system would operate faster if Hyperglance were set to match Ceilometer's configuration, i.e. with a metric.freshness.lifetime.in.minutes value of 5 instead of 10. As the number of servers increases, the cost grows linearly, so a well-matched configuration can significantly improve the overall system.
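
In this example, matching Hyperglance to Ceilometer's 5-minute sampling interval would simply mean setting the following value in runtime.properties and restarting the server afterwards (service jboss restart):

metric.freshness.lifetime.in.minutes=5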