
Billing, Metering and Rating

Resource usage (vCPU, RAM, disk, IP, ...) is accounted for by the OpenStack projects Ceilometer and CloudKitty and stored in time-series databases. These metrics can later be used for billing, alerting and/or auto-scaling.

You will find some typical use cases below; please refer to the full documentation for other uses.

You can query these metrics at any time, for example to verify your resource consumption.

Billing

Billing is expressed in a virtual currency called ICU (Infomaniak Cloud Unit); you can find the currency converter here.

Example:

Currency   Value   Infomaniak Cloud Unit (ICU)
CHF        1       50
EUR        1       55.5

Installing The Clients

sudo apt install python3-gnocchiclient
sudo apt install python3-cloudkittyclient

Or using pip:

python3 -m pip install python-openstackclient gnocchiclient python-cloudkittyclient

Using The Clients

How much money have I been charged?

This can be calculated by summing all the ICU ratings for the desired period and then dividing by the ICU's monetary equivalent in the desired currency.
For example, using shell command substitution to get the start date and some jq for the calculation, the following snippet computes the CHF cost (50 ICU = CHF 1) of the last hour:

taylor@laptop:~$ openstack rating dataframes get -b $(date +'%Y-%m-%dT%H:00:00+00:00' -u --date="2 hours ago") -c Resources -f json \
    | jq 'map(.Resources[].rating | tonumber) | add | . / 50'
17.0628355

Info

You may wonder why the command specifies 2 hours ago instead of 1 hour. The reason is that billing is calculated every hour, so the most recent hour has not been calculated yet. A more precise term would be the last calculated hour.

Get rating/hour

For example, the command below shows the rating value per hour for the running VM instances for the period starting on 2021-11-01:

taylor@laptop:~$ openstack rating dataframes get -b 2021-11-01 -r instance_up --fit-width
+---------------------+---------------------+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Begin               | End                 | Project ID                       | Resources                                                                                                                                |
+---------------------+---------------------+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+
| 2021-11-01T00:00:00 | 2021-11-01T01:00:00 | 247f44b372c44205b59bfeacc2ea86bc | [{'rating': '0.402435', 'service': 'instance_up', 'desc': {'flavor_name': 'a2-ram4-disk20-perf1', 'id':                                  |
|                     |                     |                                  | '10736315-bd1a-406c-b90a-2f2fcce8674f', 'project_id': '247f44b372c44205b59bfeacc2ea86bc', 'tenant_id':                                   |
|                     |                     |                                  | '247f44b372c44205b59bfeacc2ea86bc'}, 'volume': '1', 'rate_value': '0.4024'}]                                                             |
| 2021-11-01T00:00:00 | 2021-11-01T01:00:00 | 247f44b372c44205b59bfeacc2ea86bc | [{'rating': '0.804869', 'service': 'instance_up', 'desc': {'flavor_name': 'a4-ram8-disk20-perf1', 'id':                                  |
|                     |                     |                                  | 'b8880e4e-9cb7-4419-b2cf-d20254ce53da', 'project_id': '247f44b372c44205b59bfeacc2ea86bc', 'tenant_id':                                   |
|                     |                     |                                  | '247f44b372c44205b59bfeacc2ea86bc'}, 'volume': '1', 'rate_value': '0.8049'}]                                                             |
| 2021-11-01T01:00:00 | 2021-11-01T02:00:00 | 247f44b372c44205b59bfeacc2ea86bc | [{'rating': '0.402435', 'service': 'instance_up', 'desc': {'flavor_name': 'a2-ram4-disk20-perf1', 'id':                                  |
|                     |                     |                                  | '10736315-bd1a-406c-b90a-2f2fcce8674f', 'project_id': '247f44b372c44205b59bfeacc2ea86bc', 'tenant_id':                                   |
|                     |                     |                                  | '247f44b372c44205b59bfeacc2ea86bc'}, 'volume': '1', 'rate_value': '0.4024'}]                                                             |
| 2021-11-01T01:00:00 | 2021-11-01T02:00:00 | 247f44b372c44205b59bfeacc2ea86bc | [{'rating': '0.804869', 'service': 'instance_up', 'desc': {'flavor_name': 'a4-ram8-disk20-perf1', 'id':                                  |
|                     |                     |                                  | 'b8880e4e-9cb7-4419-b2cf-d20254ce53da', 'project_id': '247f44b372c44205b59bfeacc2ea86bc', 'tenant_id':                                   |
|                     |                     |                                  | '247f44b372c44205b59bfeacc2ea86bc'}, 'volume': '1', 'rate_value': '0.8049'}]                                                             |
+---------------------+---------------------+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------+
  • VM instance with id 10736315-bd1a-406c-b90a-2f2fcce8674f is/was rated 0.4024 ICU/hour
  • VM instance with id b8880e4e-9cb7-4419-b2cf-d20254ce53da is/was rated 0.8049 ICU/hour

Info

Please note the -r argument, used here to filter on the 'instance_up' resource type.
This only shows the rating for the VM running cost (reserved CPU/RAM); it does not include HDD/GPU, which are also billed when the instance is not running and are therefore rated under the 'instance_reserved' resource type. So, to get a full breakdown of your compute costs, you have to add up both resource types.
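
As a hedged sketch of that sum, the jq filter below selects both compute resource types and adds up their ratings. The sample data is made up for illustration; in practice, pipe `openstack rating dataframes get -c Resources -f json` into the same jq filter.

```shell
# Sample dataframes JSON in the shape produced by
# `openstack rating dataframes get -c Resources -f json` (values are made up)
cat > /tmp/dataframes.json <<'EOF'
[
  {"Resources": [{"service": "instance_up",       "rating": "0.5"}]},
  {"Resources": [{"service": "instance_reserved", "rating": "0.25"}]},
  {"Resources": [{"service": "ip.floating",       "rating": "0.05"}]}
]
EOF

# Keep only the two compute resource types, then sum their ratings
jq 'map(.Resources[]
        | select(.service == "instance_up" or .service == "instance_reserved")
        | .rating | tonumber)
    | add' /tmp/dataframes.json
```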

Apart from instance resources, several other resource types exist, each rating different metrics of your infrastructure. Here's a snippet to list all the resource types used by your project, again with some jq:

taylor@laptop:~$ openstack rating dataframes get -c Resources -f json | jq -r 'map(.Resources | map(.service)) | flatten | unique | .[]'
image.size
instance_reserved
instance_up
ip.floating
network.ports.ext-net1
network.ports.router-gw-ext-floating1
storage.objects.size
volume.size

Get resource usage for the past 30 days

taylor@laptop:~$ openstack usage show
Usage from 2021-04-21 to 2021-05-20 on project 0c92a99218894f148b297bf539ce521a:
+---------------+------------+
| Field         | Value      |
+---------------+------------+
| CPU Hours     | 999.54     |
| Disk GB-Hours | 13581.45   |
| RAM MB-Hours  | 2047067.84 |
| Servers       | 41         |
+---------------+------------+

Display Available Metrics

Metrics are available for all types of resources: VM instances, volumes, floating IP addresses, or anything else...

You can either display all available metrics in your project (and grep for what you're looking for):

$ # The column "resource_id" corresponds to the id of your VM instance or volume or floating IP, etc...
$ # In this example we only want the metrics for the VM instance with ID "36886229-f6f0-4784-a389-e78feab9b42a"
$ openstack metric list | grep "36886229-f6f0-4784-a389-e78feab9b42a"
| 460f10c7-2d5c-44e9-bd8c-3c8f4a4fe15c | ik-medium-rate      | disk.ephemeral.size            | GB      | 36886229-f6f0-4784-a389-e78feab9b42a |
| 67ad4165-0e79-4dd8-8a8c-83c6c74e176b | ik-medium-rate      | memory                         | MB      | 36886229-f6f0-4784-a389-e78feab9b42a |
| 687c8882-86a9-46bb-9f3c-229636a0a222 | ik-medium-rate      | memory.usage                   | MB      | 36886229-f6f0-4784-a389-e78feab9b42a |
| 6fce8350-e606-4dbc-85dd-1a6df0edfb54 | ik-medium-rate      | vcpus                          | vcpu    | 36886229-f6f0-4784-a389-e78feab9b42a |
| dd88f6f3-400a-47eb-b633-69d55f356206 | ik-medium-rate      | disk.root.size                 | GB      | 36886229-f6f0-4784-a389-e78feab9b42a |
| e3aebc62-77d9-4516-a49a-2580686575e3 | ik-medium-rate      | cpu                            | ns      | 36886229-f6f0-4784-a389-e78feab9b42a |
| fe4f55e9-9603-4b66-9719-107b349f3f14 | ik-medium-rate      | compute.instance.booting.time  | sec     | 36886229-f6f0-4784-a389-e78feab9b42a |

Or display available metrics for a specific resource:

$ openstack metric resource show 36886229-f6f0-4784-a389-e78feab9b42a -c metrics
+---------+---------------------------------------------------------------------+
| Field   | Value                                                               |
+---------+---------------------------------------------------------------------+
| metrics | compute.instance.booting.time: fe4f55e9-9603-4b66-9719-107b349f3f14 |
|         | cpu: e3aebc62-77d9-4516-a49a-2580686575e3                           |
|         | disk.ephemeral.size: 460f10c7-2d5c-44e9-bd8c-3c8f4a4fe15c           |
|         | disk.root.size: dd88f6f3-400a-47eb-b633-69d55f356206                |
|         | memory.usage: 687c8882-86a9-46bb-9f3c-229636a0a222                  |
|         | memory: 67ad4165-0e79-4dd8-8a8c-83c6c74e176b                        |
|         | vcpus: 6fce8350-e606-4dbc-85dd-1a6df0edfb54                         |
+---------+---------------------------------------------------------------------+

Warning

Sometimes there are multiple resources for a given instance (for example a VM and its volume), so metrics can be linked to a resource other than your instance id. In this case, you can use the search function:

$ # notice the "%" after the instance id in the search
$ openstack metric resource search "original_resource_id like '36886229-f6f0-4784-a389-e78feab9b42a%'"
+--------------------------------------+---------------+----------------------------------+----------------------------------+------------------------------------------+----------------------------------+----------+----------------------------------+--------------+-------------------------------------------------------------------+
| id                                   | type          | project_id                       | user_id                          | original_resource_id                     | started_at                       | ended_at | revision_start                   | revision_end | creator                                                           |
+--------------------------------------+---------------+----------------------------------+----------------------------------+------------------------------------------+----------------------------------+----------+----------------------------------+--------------+-------------------------------------------------------------------+
| 33d60f84-ce92-5dd8-a9bd-971bc221543d | instance_disk | d1440aa24a65411fb9bac2b842c8defa | 153855c4fa1b4415bdd768f88c559f72 | 36886229-f6f0-4784-a389-e78feab9b42a-vda | 2021-04-29T07:15:19.775934+00:00 | None     | 2021-04-29T07:15:19.775950+00:00 | None         | 35b89d95e2504576866fa01215247181:d1860cf28a264c6c8bbf0b5468c9b9c8 |
| 36886229-f6f0-4784-a389-e78feab9b42a | instance      | d1440aa24a65411fb9bac2b842c8defa | 153855c4fa1b4415bdd768f88c559f72 | 36886229-f6f0-4784-a389-e78feab9b42a     | 2021-04-29T07:12:06.362575+00:00 | None     | 2021-04-29T08:00:26.396108+00:00 | None         | 35b89d95e2504576866fa01215247181:d1860cf28a264c6c8bbf0b5468c9b9c8 |
+--------------------------------------+---------------+----------------------------------+----------------------------------+------------------------------------------+----------------------------------+----------+----------------------------------+--------------+-------------------------------------------------------------------+

Display Measures

Once you have identified the metric you are interested in, you can display its measures. For example, to display the vcpus metric of our instance 36886229-f6f0-4784-a389-e78feab9b42a:

$ openstack metric measures show 6fce8350-e606-4dbc-85dd-1a6df0edfb54
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2021-04-29T06:00:00+00:00 |     21600.0 |   2.0 |
| 2021-04-29T12:00:00+00:00 |     21600.0 |   1.4 |
| 2021-04-29T18:00:00+00:00 |     21600.0 |   1.0 |
| 2021-04-30T00:00:00+00:00 |     21600.0 |   1.0 |
| 2021-04-30T06:00:00+00:00 |     21600.0 |   1.0 |
| 2021-04-30T12:00:00+00:00 |     21600.0 |   1.0 |
| 2021-04-29T07:00:00+00:00 |      3600.0 |   1.0 |
| 2021-04-29T08:00:00+00:00 |      3600.0 |   1.0 |
...
| 2021-04-30T11:00:00+00:00 |      3600.0 |   1.0 |
| 2021-04-30T12:00:00+00:00 |      3600.0 |   1.0 |
| 2021-04-29T09:00:00+00:00 |       300.0 |   1.0 |
| 2021-04-29T10:00:00+00:00 |       300.0 |   1.0 |
...
| 2021-04-30T11:00:00+00:00 |       300.0 |   1.0 |
| 2021-04-30T12:00:00+00:00 |       300.0 |   1.0 |
+---------------------------+-------------+-------+

The granularity varies over time, according to the archive policy of the metric (see next section).

Tip

There is a shortcut if you don't know the metric's id:

$ openstack metric measures show --resource-id 36886229-f6f0-4784-a389-e78feab9b42a vcpus
+---------------------------+-------------+-------+
| timestamp                 | granularity | value |
+---------------------------+-------------+-------+
| 2021-04-29T06:00:00+00:00 |     21600.0 |   2.0 |
...
| 2021-04-30T12:00:00+00:00 |       300.0 |   1.0 |
+---------------------------+-------------+-------+

You can also resample measures, for example to get daily usage:

$ openstack metric measures show --start 2021-03-08T00:00:00 --granularity 3600 --resample 86400 --resource-id 3fa780c7-3897-4fd2-a177-120bc2a94b65 vcpus
+---------------------------+-------------+--------------------+
| timestamp                 | granularity |              value |
+---------------------------+-------------+--------------------+
| 2021-03-08T01:00:00+01:00 |     86400.0 | 1.2285714285714284 |
| 2021-03-09T01:00:00+01:00 |     86400.0 |                1.0 |
| 2021-03-10T01:00:00+01:00 |     86400.0 |                1.0 |
+---------------------------+-------------+--------------------+

Archive Policy

All metrics are stored according to the ik-medium-rate policy:

$ openstack metric archive-policy show ik-medium-rate
+---------------------+-------------------------------------------------------------------+
| Field               | Value                                                             |
+---------------------+-------------------------------------------------------------------+
| aggregation_methods | mean, rate:mean, std, count, max, min, sum                        |
| back_window         | 0                                                                 |
| definition          | - timespan: 62 days, 0:00:00, granularity: 1:00:00, points: 1488  |
|                     | - timespan: 732 days, 0:00:00, granularity: 6:00:00, points: 2928 |
|                     | - timespan: 7 days, 0:00:00, granularity: 0:05:00, points: 2016   |
| name                | ik-medium-rate                                                    |
+---------------------+-------------------------------------------------------------------+

Advanced usage

Operations can be performed against metrics; below are a few examples, including several with the cpu metric.

Full documentation here

Display usage for the whole project

  • Network in GB
openstack metric aggregates --resource-type=instance_network_interface --groupby=project_id "(/ (aggregate rate:max (metric network.incoming.bytes max)) 1000000000)" "project_id!='dummy'" --granularity 3600
  • Vcpus
openstack metric aggregates --resource-type=instance --groupby=project_id "(aggregate mean (metric vcpus mean))" "project_id!='dummy'" --granularity 3600
  • Memory in GB
openstack metric aggregates --resource-type=instance --groupby=project_id "(/ (aggregate mean (metric memory mean)) 1024)" "project_id!='dummy'" --granularity 3600

cpu load in nanoseconds

The only CPU metric available is cpu, corresponding to the amount of CPU time used by the VM, in nanoseconds. It is a counter and therefore always increases over time (it is, however, reset when the VM reboots).

Here is an example:

$ openstack metric measures show -r 0cc04d30-42c8-4afb-8666-cc932ec978e1 cpu
+---------------------------+-------------+----------------+
| timestamp                 | granularity |          value |
+---------------------------+-------------+----------------+
| 2021-04-29T16:40:00+02:00 |       300.0 |  11000000000.0 |
| 2021-04-29T16:45:00+02:00 |       300.0 |  13280000000.0 |
+---------------------------+-------------+----------------+

The following command gives the same result but will allow us to perform operations on it later:

$ openstack metric aggregates '(metric cpu mean)' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+---------------------------+-------------+----------------+
| timestamp                 | granularity |          value |
+---------------------------+-------------+----------------+
| 2021-04-29T16:40:00+02:00 |       300.0 |  11000000000.0 |
| 2021-04-29T16:45:00+02:00 |       300.0 |  13280000000.0 |
+---------------------------+-------------+----------------+

In this example, the VM instance 0cc04d30-42c8-4afb-8666-cc932ec978e1 consumed 13280000000 - 11000000000 = 2 280 000 000 ns of CPU time between 2021-04-29T16:40:00+02:00 and 2021-04-29T16:45:00+02:00.

We already store the delta between two consecutive measurements, so you can display it directly by using the rate:mean aggregation instead of mean.

$ openstack metric aggregates '(metric cpu rate:mean)' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+----------------------------------------------------+---------------------------+-------------+-------------------+
| name                                               | timestamp                 | granularity |             value |
+----------------------------------------------------+---------------------------+-------------+-------------------+
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:45:00+00:00 |       300.0 |      2280000000.0 |
+----------------------------------------------------+---------------------------+-------------+-------------------+

We get the same value as calculated earlier: 2 280 000 000 ns

Another way to get the same result without using the rate:mean metric is to re-aggregate the values on the fly:

$ openstack metric aggregates '(aggregate rate:mean (metric cpu mean))' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+----------------------------------------------------+---------------------------+-------------+-------------------+
| name                                               | timestamp                 | granularity |             value |
+----------------------------------------------------+---------------------------+-------------+-------------------+
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:45:00+00:00 |       300.0 |      2280000000.0 |
+----------------------------------------------------+---------------------------+-------------+-------------------+

Once again, we get the value calculated earlier: 2 280 000 000 ns

cpu load in seconds

The cpu metric is expressed in nanoseconds; to convert it to seconds, divide by one billion:

$ openstack metric aggregates '(/ (metric cpu rate:mean) 1000000000 )' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+----------------------------------------------------+---------------------------+-------------+--------------------+
| name                                               | timestamp                 | granularity |              value |
+----------------------------------------------------+---------------------------+-------------+--------------------+
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:40:00+00:00 |       300.0 |               0.07 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:45:00+00:00 |       300.0 |               2.28 |
+----------------------------------------------------+---------------------------+-------------+--------------------+

cpu load

To express the CPU load relative to its maximum, we need to divide the CPU time used by the elapsed time, ensuring both are expressed in the same unit. One way to achieve this is to convert the granularity to nanoseconds. For example, assuming a granularity of 300 seconds (300 000 000 000 ns):

$ openstack metric aggregates '(/ (metric cpu rate:mean) 300000000000)' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+----------------------------------------------------+---------------------------+-------------+----------------------+
| name                                               | timestamp                 | granularity |                value |
+----------------------------------------------------+---------------------------+-------------+----------------------+
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:40:00+00:00 |       300.0 | 0.000233333333333334 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:45:00+00:00 |       300.0 |               0.0076 |
+----------------------------------------------------+---------------------------+-------------+----------------------+

In this example, the average cpu usage of the VM 0cc04d30-42c8-4afb-8666-cc932ec978e1 was 0.0076 (or 0.76%) between 2021-04-29T14:40:00+00:00 and 2021-04-29T14:45:00+00:00.

Let's increase the load (in this case, by using the command stress) and check the cpu usage again:

$ openstack metric aggregates '(/ (metric cpu rate:mean) 300000000000 )' id=0cc04d30-42c8-4afb-8666-cc932ec978e1
+----------------------------------------------------+---------------------------+-------------+----------------------+
| name                                               | timestamp                 | granularity |                value |
+----------------------------------------------------+---------------------------+-------------+----------------------+
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:40:00+00:00 |       300.0 | 0.000233333333333334 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:45:00+00:00 |       300.0 |               0.0076 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:50:00+00:00 |       300.0 |    0.724266666666666 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T14:55:00+00:00 |       300.0 |    0.992766666666667 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T15:00:00+00:00 |       300.0 |   1.0021666666666667 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T15:05:00+00:00 |       300.0 |    0.996966666666667 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T15:10:00+00:00 |       300.0 |   1.0040666666666667 |
| 0cc04d30-42c8-4afb-8666-cc932ec978e1/cpu/rate:mean | 2021-04-29T15:15:00+00:00 |       300.0 |   1.0018333333333334 |
+----------------------------------------------------+---------------------------+-------------+----------------------+

We see that the cpu usage of the VM is roughly 1 (or one vcpu at 100%) starting at 2021-04-29T14:55:00+00:00.

Tip

Here are the rate:mean values corresponding to various cpu thresholds for the Infomaniak public cloud setup:

CPU %   rate:mean           CPU %   rate:mean
100     300000000000.0      50      150000000000.0
90      270000000000.0      40      120000000000.0
80      240000000000.0      30      90000000000.0
70      210000000000.0      20      60000000000.0
60      180000000000.0      10      30000000000.0

Acknowledgment

Some of the content on this page is derived from Bernd Bausch's article How I Learned to Stop Worrying and Love Gnocchi aggregation.
