
Real world scenarios

Below you will find working templates for the Infomaniak Public Cloud.

Basic examples to get familiar with Heat

  • Usecase 1 : Single instance with a Floating IP (internet access)
  • Usecase 2 : Usecase 1 + multiple VMs on a private network
  • Usecase 3 : Usecase 2 + autoscaling based on memory usage
  • Usecase 4 : Usecase 2 + autoscaling based on CPU usage

Usecase 1 : Single instance with a Floating IP (internet access)

  • Copy/paste the code below into a file called server.yaml
---
heat_template_version: rocky

description: Simple template to deploy a single compute instance

parameters:
  key_name:
    type: string
    label: Key Name
    description: Name of key-pair to be used for compute instance
    default: yubikey-taylor
  image_id:
    type: string
    label: Image ID
    description: Image to be used for compute instance
    default: Debian 11 bullseye
  instance_type:
    type: string
    label: Instance Type
    description: Type of instance (flavor) to be used
    default: a1-ram2-disk20-perf1

resources:
  cl_net:
    type: OS::Neutron::Net
    properties:
      name: {list_join: ['-', [{get_param: "OS::stack_name"}, 'net']]}
      value_specs:
        mtu: 1500

  cl_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: {list_join: ['-', [{get_param: "OS::stack_name"}, 'subnet']]}
      network_id: {get_resource: cl_net}
      cidr: "10.10.0.0/16"
      dns_nameservers:
        - "84.16.67.69"
        - "84.16.67.70"
      ip_version: 4

  cl_router:
    type: OS::Neutron::Router
    properties:
      name: {list_join: ['-', [{get_param: "OS::stack_name"}, 'router']]}
      external_gateway_info: { network: ext-floating1 }


  cl_router_subnet_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: {get_resource: cl_router}
      subnet: {get_resource: cl_subnet}


  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: ext-floating1

  my_instance_port:
    type: OS::Neutron::Port
    properties:
      network: {get_resource: cl_net}
      fixed_ips:
        - subnet_id: {get_resource: cl_subnet}

  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image_id }
      flavor: { get_param: instance_type }
      networks:
        - port: { get_resource: my_instance_port }

  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating_ip }
      port_id: { get_resource: my_instance_port }

outputs:
  instance_name:
    description: Name of the VM instance
    value: {get_attr: [my_instance, name]}
  instance_publicIP:
    description: The Floating IP address of the deployed instance
    value: {get_attr: [floating_ip, floating_ip_address]}
  instance_privateIP:
    description: The Private IP address of the deployed instance
    value: { get_attr: [my_instance, first_address]}
  • Replace the following parameters according to your needs :

key_name image_id instance_type

  • Create the stack
$ openstack stack create -t server.yaml my-stack
+---------------------+-----------------------------------------------------+
| Field               | Value                                               |
+---------------------+-----------------------------------------------------+
| id                  | 1ef702b3-b3da-4a16-a11c-3d632c682933                |
| stack_name          | my-stack                                            |
| description         | Simple template to deploy a single compute instance |
| creation_time       | 2021-04-01T15:41:13Z                                |
| updated_time        | None                                                |
| stack_status        | CREATE_IN_PROGRESS                                  |
| stack_status_reason | Stack CREATE started                                |
+---------------------+-----------------------------------------------------+
  • Display the information
$ openstack stack show my-stack
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| id                    | 1ef702b3-b3da-4a16-a11c-3d632c682933                                                                                                  |
| stack_name            | my-stack                                                                                                                              |
| description           | Simple template to deploy a single compute instance                                                                                   |
| creation_time         | 2021-04-01T15:41:13Z                                                                                                                  |
| updated_time          | None                                                                                                                                  |
| stack_status          | CREATE_COMPLETE                                                                                                                       |
| stack_status_reason   | Stack CREATE completed successfully                                                                                                   |
| parameters            | OS::project_id: ac4fafd60021431585bbb23470119557                                                                                      |
|                       | OS::stack_id: 1ef702b3-b3da-4a16-a11c-3d632c682933                                                                                    |
|                       | OS::stack_name: my-stack                                                                                                              |
|                       | image_id: Debian 11 bullseye                                                                                                        |
|                       | instance_type: a1-ram2-disk20-perf1                                                                                                   |
|                       | key_name: yubikey-taylor                                                                                                              |
|                       |                                                                                                                                       |
| outputs               | - description: The Floating IP address of the deployed instance                                                                       |
|                       |   output_key: instance_publicIP                                                                                                       |
|                       |   output_value: 195.15.240.222                                                                                                        |
|                       | - description: Name of the VM instance                                                                                                |
|                       |   output_key: instance_name                                                                                                           |
|                       |   output_value: my-stack-my_instance-2c4yucnqamns                                                                                     |
|                       | - description: The Private IP address of the deployed instance                                                                        |
|                       |   output_key: instance_privateIP                                                                                                      |
|                       |   output_value: 10.10.2.56                                                                                                            |
|                       |                                                                                                                                       |
| links                 | - href: https://pub1-api.cloud.infomaniak.ch/v1/ac4fafd60021431585bbb23470119557/stacks/my-stack/1ef702b3-b3da-4a16-a11c-3d632c682933 |
|                       |   rel: self                                                                                                                           |
|                       |                                                                                                                                       |
| deletion_time         | None                                                                                                                                  |
| notification_topics   | []                                                                                                                                    |
| capabilities          | []                                                                                                                                    |
| disable_rollback      | True                                                                                                                                  |
| timeout_mins          | None                                                                                                                                  |
| stack_owner           | taylor                                                                                                                                |
| parent                | None                                                                                                                                  |
| stack_user_project_id | 78002ceaf5fd4a39999d9fe458f07f85                                                                                                      |
| tags                  | []                                                                                                                                    |
|                       |                                                                                                                                       |
+-----------------------+---------------------------------------------------------------------------------------------------------------------------------------+

The important section is the outputs :

Your instance public IP is 195.15.240.222

  • You can now access your instance
ssh debian@195.15.240.222
debian@my-stack-my-instance-2c4yucnqamns:~$

Usecase 2 : Usecase 1 + multiple VMs on a private network

Assuming you created the stack described in Usecase 1, you can now create additional VMs on the private network this way.

  • Create a file called nodes.yaml and paste the following content :
heat_template_version: rocky
description: Template to create multiple VMs

parameters:
    key_name:
        type: string
        label: Key Name
        description: SSH key to be used for all instances
        default: mykeypair
    node_count:
        type: number
        label: Number of Virtual Machine instances
        description: Number of Virtual Machine instances
        default: 2
    node_image:
        type: string
        label: Image ID
        description: Virtual Machine instances OS
        default: Debian 11 bullseye
    node_flavor:
        type: string
        label: Node Instance Type
        description: Type of instance (flavor) to deploy
        default:  a1-ram2-disk20-perf1
    private_net:
        type: string
        description: ID of private network into which servers get deployed
        default: mynetwork
resources:
    nodes:
        type: OS::Heat::ResourceGroup
        properties:
            count: { get_param: node_count }
            resource_def:
                type: OS::Nova::Server
                properties:
                    key_name: { get_param: key_name }
                    image: { get_param: node_image }
                    flavor: { get_param: node_flavor }
                    name: {list_join: ['-', [{get_param: "OS::stack_name"}, 'node', '%index%']]}
                    networks: [{network: { get_param: private_net}}]
outputs:
  internal_ip:
    description: Internal IP of the VMs
    value: {get_attr: [nodes, first_address]}
  server_id:
    description: Name of VMs
    value: {get_attr: [nodes, name]}
  • Let's create the stack

By default it will create 2 VMs.

Replace private_net with the name of the private network created during Usecase 1 and key_name according to your setup.

$ openstack stack create -t nodes.yaml --parameter key_name=yubikey-taylor --parameter private_net=my-stack-net privateVMs 
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | 6c3944d6-33e3-49e8-b0a3-112d516e2428 |
| stack_name          | privateVMs                           |
| description         | Template to create multiple VMs      |
| creation_time       | 2021-04-06T07:10:53Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
  • Let's check that the 2 VMs are active
$ openstack server list
+--------------------------------------+-----------------------------------+--------+-----------------------------------------+-----------------------------------+----------------------+
| ID                                   | Name                              | Status | Networks                                | Image                             | Flavor               |
+--------------------------------------+-----------------------------------+--------+-----------------------------------------+-----------------------------------+----------------------+
| d730d0f4-c016-42c6-b7f3-37eb94be787d | privateVMs-node-1                 | ACTIVE | my-stack-net=10.10.0.79                 | Debian 11 bullseye               | a1-ram2-disk20-perf1 |
| 266b3ef3-55e2-4260-8177-a7a5faaee208 | privateVMs-node-0                 | ACTIVE | my-stack-net=10.10.0.217                | Debian 11 bullseye               | a1-ram2-disk20-perf1 |
| 6a42e3b8-32b5-426d-bd73-56de8fad7c23 | my-stack-my_instance-2c4yucnqamns | ACTIVE | my-stack-net=10.10.2.56, 195.15.240.222 | Debian 11 bullseye               | a1-ram2-disk20-perf1 |
+--------------------------------------+-----------------------------------+--------+-----------------------------------------+-----------------------------------+----------------------+

privateVMs-node-0 and privateVMs-node-1 are active and on the same private network as my-stack-my_instance-2c4yucnqamns, which was created in Usecase 1.

You can now SSH into your VMs by connecting first to my-stack-my_instance-2c4yucnqamns (don't forget to forward your SSH key with -A):

taylor@laptop:~/SCRIPTS/Openstack/heat/docs$ ssh -A debian@195.15.240.222
Then SSH into your private VM using its IP:

debian@my-stack-my-instance-2c4yucnqamns:~$ ssh 10.10.0.79
debian@privatevms-node-1:~$

Usecase 3 : Usecase 2 + autoscaling based on memory usage

We will create a new stack with 1 to 3 VMs depending on the memory usage.

You will need the aodh client in order to retrieve alarm information :

Debian - Ubuntu:

sudo apt install python3-aodhclient

RedHat - Centos:

sudo yum install python3-aodhclient

Create a file autoscaling_mem.yaml with the following content :

heat_template_version: rocky
description: AutoScaling
parameters:
    key_name:
      type: string
      label: Key Name
      description: SSH key to be used for all instances
      default: mykeypair
    node_count:
        type: number
        label: Number of Virtual Machine instances
        description: Number of Virtual Machine instances
        default: 2
    node_image:
        type: string
        label: Image ID
        description: Virtual Machine instances OS
        default: Debian 11 bullseye
    node_flavor:
        type: string
        label: Node Instance Type
        description: Type of instance (flavor) to deploy
        default:  a1-ram2-disk20-perf1
    private_net:
        type: string
        description: ID of private network into which servers get deployed
        default: mynetwork
resources:
    autoscaling-group:
      type: OS::Heat::AutoScalingGroup
      properties:
        cooldown: 60
        desired_capacity: 1
        max_size: 3
        min_size: 1
        resource:
          type: OS::Nova::Server
          properties:
            key_name: { get_param: key_name }
            image: { get_param: node_image }
            flavor: { get_param: node_flavor }
            metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
            networks: [{network: { get_param: private_net}}]

    scaleup_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: { get_resource: autoscaling-group }
        cooldown: 60
        scaling_adjustment: 1

    scaledown_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: { get_resource: autoscaling-group }
        cooldown: 60
        scaling_adjustment: -1

    memory_alarm_high:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: Scale up if memory > 1024 MB
        metric: memory.usage
        aggregation_method: mean
        granularity: 300
        evaluation_periods: 1
        threshold: 1024
        resource_type: instance
        comparison_operator: gt
        alarm_actions:
          - str_replace:
              template: trust+url
              params:
                url: {get_attr: [scaleup_policy, signal_url]}
        query:
          str_replace:
            template: '{"=": {"server_group": "stack_id"}}'
            params:
              stack_id: {get_param: "OS::stack_id"}

    memory_alarm_low:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: Scale down if memory < 512 MB
        metric: memory.usage
        aggregation_method: mean
        granularity: 300
        evaluation_periods: 1
        threshold: 512
        resource_type: instance
        comparison_operator: lt
        alarm_actions:
          - str_replace:
              template: trust+url
              params:
                url: {get_attr: [scaledown_policy, signal_url]}
        query:
          str_replace:
            template: '{"=": {"server_group": "stack_id"}}'
            params:
              stack_id: {get_param: "OS::stack_id"}

outputs:
  scaleup_policy_signal_url:
    value: {get_attr: [scaleup_policy, signal_url]}

  scaledown_policy_signal_url:
    value: {get_attr: [scaledown_policy, signal_url]}
  • We create the stack
$ openstack stack create -t autoscaling_mem.yaml --parameter key_name=yubikey-taylor --parameter private_net=my-stack-net autoscaling_mem
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | 300733c5-4244-485c-aa56-9e3e4d1d48b9 |
| stack_name          | autoscaling_mem                      |
| description         | AutoScaling                          |
| creation_time       | 2021-04-26T06:54:04Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
  • We check there's a new VM active
$ openstack server list
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| ID                                   | Name                                                  | Status | Networks                 | Image                         | Flavor               |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| 926bf65f-de56-41dd-8a42-ffc885144998 | au-aling-group-rwkazxonolym-ylzrvmwqcyc2-uiogzaect6p2 | ACTIVE | my-stack-net=10.10.0.243 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 1ce3f12f-0e4e-4e44-ae4b-111612c8ad8d | privateVMs-node-1                                     | ACTIVE | my-stack-net=10.10.2.233 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 6673256b-da62-4399-a31d-0d8f6b267b3f | privateVMs-node-0                                     | ACTIVE | my-stack-net=10.10.3.235 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 8800369f-31f6-4ed5-a320-c09028aa9ff4 | my-stack-my_instance-uib3vgq7ipet                     | ACTIVE | my-stack-net=10.10.1.163 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+

The VM au-aling-group-rwkazxonolym-ylzrvmwqcyc2-uiogzaect6p2 is active. The other VMs come from the previous Usecases.

  • We check that the alarms/triggers are set :

$ openstack alarm list --fit-width
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------------------+----------+---------+
| alarm_id                             | type                                       | name                                           | state             | severity | enabled |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------------------+----------+---------+
| 4d2cb366-db1e-4c7f-ac3d-fa07609d76d3 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_high-tekish7n4o5v | insufficient data | low      | True    |
| bdd1b1cf-2894-458e-a475-b3678a19a2d0 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_low-plegqmjk5eei  | insufficient data | low      | True    |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------------------+----------+---------+
The state is insufficient data for the moment because metrics are aggregated only every few minutes (every 5 minutes by default).

After some time you should see something like this :

$ openstack alarm list
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| alarm_id                             | type                                       | name                                           | state | severity | enabled |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| 4d2cb366-db1e-4c7f-ac3d-fa07609d76d3 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_high-tekish7n4o5v | ok    | low      | True    |
| bdd1b1cf-2894-458e-a475-b3678a19a2d0 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_low-plegqmjk5eei  | alarm | low      | True    |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+

autoscaling_mem-memory_alarm_high-tekish7n4o5v is in state ok because the VM memory usage is below its threshold (no action taken). autoscaling_mem-memory_alarm_low-plegqmjk5eei is in state alarm because the VM memory usage is below its threshold (the action taken is, or would be, to scale down the number of VMs to the min_size defined in the Heat template, in our case 1).
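Conceptually, each alarm evaluation is just a comparison between the aggregated metric value and the alarm threshold. The sketch below illustrates that logic; it is not Aodh's actual implementation:

```python
# Illustrative sketch of how a Gnocchi threshold alarm evaluates
# (not Aodh's actual code): compare the aggregated value to the threshold.
import operator

OPERATORS = {"gt": operator.gt, "lt": operator.lt}

def alarm_state(measured_value, comparison_operator, threshold):
    """Return 'alarm' if the comparison holds, 'ok' otherwise."""
    compare = OPERATORS[comparison_operator]
    return "alarm" if compare(measured_value, threshold) else "ok"

# With a VM using 155 MB of memory:
print(alarm_state(155, "gt", 1024))  # memory_alarm_high -> ok
print(alarm_state(155, "lt", 512))   # memory_alarm_low  -> alarm
```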

  • Let's trigger the autoscale up

We'll trigger the autoscale up by adjusting the threshold (you could also do it by consuming memory inside the VM with the stress command).

First we check the current memory usage reported for the VM:

$ openstack metric measures show -r 926bf65f-de56-41dd-8a42-ffc885144998 memory.usage
+---------------------------+-------------+---------+
| timestamp                 | granularity |   value |
+---------------------------+-------------+---------+
| 2021-04-26T09:00:00+02:00 |       300.0 | 155.375 |
+---------------------------+-------------+---------+

The VM consumes 155 MB for the time being. We can therefore trigger an alarm and an autoscale up by adjusting the memory_alarm_high and memory_alarm_low thresholds.

openstack alarm update --threshold 100 autoscaling_mem-memory_alarm_low-plegqmjk5eei

We lower the autoscaling_mem-memory_alarm_low-plegqmjk5eei threshold to 100 MB to set the alarm state to ok and avoid scaling down the number of VMs.

$ openstack alarm list
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| alarm_id                             | type                                       | name                                           | state | severity | enabled |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| bdd1b1cf-2894-458e-a475-b3678a19a2d0 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_low-plegqmjk5eei  | ok    | low      | True    |
| 4d2cb366-db1e-4c7f-ac3d-fa07609d76d3 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_high-tekish7n4o5v | ok    | low      | True    |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+

The alarm is now ok because the VM consumes 155 MB of memory, which is greater than the threshold set to 100 MB.

openstack alarm update --threshold 130 autoscaling_mem-memory_alarm_high-tekish7n4o5v

We lower the autoscaling_mem-memory_alarm_high-tekish7n4o5v threshold to 130 MB in order to trigger an alarm.

$ openstack alarm list
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| alarm_id                             | type                                       | name                                           | state | severity | enabled |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+
| 4d2cb366-db1e-4c7f-ac3d-fa07609d76d3 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_high-tekish7n4o5v | alarm | low      | True    |
| bdd1b1cf-2894-458e-a475-b3678a19a2d0 | gnocchi_aggregation_by_resources_threshold | autoscaling_mem-memory_alarm_low-plegqmjk5eei  | ok    | low      | True    |
+--------------------------------------+--------------------------------------------+------------------------------------------------+-------+----------+---------+

autoscaling_mem-memory_alarm_high-tekish7n4o5v is now in state alarm. The action taken is to scale up the number of VMs.

$ openstack server list
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| ID                                   | Name                                                  | Status | Networks                 | Image                         | Flavor               |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| c7eea4fe-3b79-4917-986b-1aaef1594e38 | au-aling-group-rwkazxonolym-wvndpxsqgvc4-hleq3jbkjsrz | BUILD  |                          | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 926bf65f-de56-41dd-8a42-ffc885144998 | au-aling-group-rwkazxonolym-ylzrvmwqcyc2-uiogzaect6p2 | ACTIVE | my-stack-net=10.10.0.243 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 1ce3f12f-0e4e-4e44-ae4b-111612c8ad8d | privateVMs-node-1                                     | ACTIVE | my-stack-net=10.10.2.233 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 6673256b-da62-4399-a31d-0d8f6b267b3f | privateVMs-node-0                                     | ACTIVE | my-stack-net=10.10.3.235 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 8800369f-31f6-4ed5-a320-c09028aa9ff4 | my-stack-my_instance-uib3vgq7ipet                     | ACTIVE | my-stack-net=10.10.1.163 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+

We see a new VM being built: the auto scale up worked.

  • Manually trigger scaleup or scaledown
$ openstack stack resource signal 300733c5-4244-485c-aa56-9e3e4d1d48b9 scaleup_policy

Where 300733c5-4244-485c-aa56-9e3e4d1d48b9 is your stack id and scaleup_policy is the signal name to trigger (the name you defined in your Heat template).

  • Another way to trigger an auto scale is to call the API directly :
/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}/signal

Which in our case corresponds to :

We need a token to call the API :

$ TOKEN=`openstack token issue -f value -c id`

$ curl -g -i -X POST "https://api.pub1.infomaniak.cloud/orchestration-api/v1/d1440aa24a65411fb9bac2b842c8defa/stacks/autoscaling_mem/300733c5-4244-485c-aa56-9e3e4d1d48b9/resources/scaleup_policy/signal" -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN"
HTTP/1.1 200 OK
content-type: application/json
x-openstack-request-id: req-270b888c-dc02-4f5f-9e34-7b18bf76651a
strict-transport-security: max-age=63072000
connection: close

The API call succeeds: the answer is 200 OK.

To scale down :

$ curl -g -i -X POST "https://api.pub1.infomaniak.cloud/orchestration-api/v1/d1440aa24a65411fb9bac2b842c8defa/stacks/autoscaling_mem/300733c5-4244-485c-aa56-9e3e4d1d48b9/resources/scaledown_policy/signal" -H "Accept: application/json" -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN"
HTTP/1.1 200 OK
content-type: application/json
x-openstack-request-id: req-67d15bbf-363e-480a-b31e-c9640f51f0b0
strict-transport-security: max-age=63072000
connection: close
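The signal URL used in these curl calls is just the orchestration API endpoint with the project ID, stack name, stack ID and resource name filled in. A small sketch to assemble it (the values are taken from the transcript above; build_signal_url is a hypothetical helper, not part of any OpenStack client):

```python
def build_signal_url(base, project_id, stack_name, stack_id, resource_name):
    """Assemble the Heat resource-signal URL:
    /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources/{resource_name}/signal
    """
    return (f"{base}/v1/{project_id}/stacks/{stack_name}/"
            f"{stack_id}/resources/{resource_name}/signal")

# Values from the curl example above:
url = build_signal_url(
    "https://api.pub1.infomaniak.cloud/orchestration-api",
    "d1440aa24a65411fb9bac2b842c8defa",
    "autoscaling_mem",
    "300733c5-4244-485c-aa56-9e3e4d1d48b9",
    "scaleup_policy",
)
print(url)
```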

Usecase 4 : Usecase 2 + autoscaling based on CPU usage

We will create a new stack with 1 to 3 VMs depending on the CPU usage.

New VMs will be created if the CPU usage is > 60% and the number of VMs will decrease if the CPU usage is < 30%.

The autoscaling is based on the metric cpu, which corresponds to the amount of CPU time used by the VM, in nanoseconds. It is a counter, so it always increases over time, which can be a bit confusing when you want to set thresholds.

We calculated the thresholds for you according to the Infomaniak Public Cloud setup :

CPU usage in %   Threshold
100              300000000000.0
 90              270000000000.0
 80              240000000000.0
 70              210000000000.0
 60              180000000000.0
 50              150000000000.0
 40              120000000000.0
 30               90000000000.0
 20               60000000000.0
 10               30000000000.0
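These values follow directly from the metric definition: with the default 300-second granularity, a VM running at 100% CPU accumulates 300 s x 10^9 ns = 300000000000 ns of CPU time per period. A quick sketch to reproduce the table (assuming one vCPU and the default granularity; cpu_threshold is a hypothetical helper):

```python
GRANULARITY_SECONDS = 300  # default aggregation period
NS_PER_SECOND = 10**9

def cpu_threshold(percent, granularity=GRANULARITY_SECONDS):
    """CPU-time threshold in nanoseconds for a target CPU usage percentage."""
    # Integer multiplication first keeps the result exact.
    return percent * granularity * NS_PER_SECOND / 100

# Reproduce the table above:
for pct in range(100, 0, -10):
    print(f"{pct:>3} {cpu_threshold(pct)}")
```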

Create a file autoscaling_cpu.yaml with the following content :

heat_template_version: rocky
description: AutoScaling
parameters:
    key_name:
      type: string
      label: Key Name
      description: SSH key to be used for all instances
      default: mykeypair
    node_count:
      type: number
      label: Number of Virtual Machine instances
      description: Number of Virtual Machine instances
      default: 2
    node_image:
      type: string
      label: Image ID
      description: Virtual Machine instances OS
      default: Debian 11 bullseye
    node_flavor:
      type: string
      label: Node Instance Type
      description: Type of instance (flavor) to deploy
      default: a1-ram2-disk20-perf1
    private_net:
      type: string
      description: ID of private network into which servers get deployed
      default: mynetwork
resources:
    autoscaling-group:
      type: OS::Heat::AutoScalingGroup
      properties:
        cooldown: 60
        desired_capacity: 1
        max_size: 3
        min_size: 1
        resource:
          type: OS::Nova::Server
          properties:
            key_name: { get_param: key_name }
            image: { get_param: node_image }
            flavor: { get_param: node_flavor }
            metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
            networks: [{network: { get_param: private_net}}]

    scaleup_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: { get_resource: autoscaling-group }
        cooldown: 60
        scaling_adjustment: 1

    scaledown_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: { get_resource: autoscaling-group }
        cooldown: 60
        scaling_adjustment: -1

    cpu_alarm_high:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: Scale up if CPU > 60%
        metric: cpu
        aggregation_method: rate:mean
        granularity: 300
        evaluation_periods: 1
        threshold: 180000000000.0
        resource_type: instance
        comparison_operator: gt
        alarm_actions:
          - str_replace:
              template: trust+url
              params:
                url: {get_attr: [scaleup_policy, signal_url]}
        query:
          str_replace:
            template: '{"=": {"server_group": "stack_id"}}'
            params:
              stack_id: {get_param: "OS::stack_id"}

    cpu_alarm_low:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: Scale down if CPU < 30%
        metric: cpu
        aggregation_method: rate:mean
        granularity: 300
        evaluation_periods: 1
        threshold: 90000000000.0
        resource_type: instance
        comparison_operator: lt
        alarm_actions:
          - str_replace:
              template: trust+url
              params:
                url: {get_attr: [scaledown_policy, signal_url]}
        query:
          str_replace:
            template: '{"=": {"server_group": "stack_id"}}'
            params:
              stack_id: {get_param: "OS::stack_id"}

outputs:
  scaleup_policy_signal_url:
    value: {get_attr: [scaleup_policy, signal_url]}

  scaledown_policy_signal_url:
    value: {get_attr: [scaledown_policy, signal_url]}
  • We create the stack
$ openstack stack create -t autoscaling_cpu.yaml --parameter key_name=yubikey-taylor --parameter private_net=my-stack-net autoscaling_cpu
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | 93ae1fd1-920d-4d5a-8198-cbb8b52e94fc |
| stack_name          | autoscaling_cpu                      |
| description         | AutoScaling                          |
| creation_time       | 2021-04-30T07:17:32Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
  • We check there's a new VM active
$ openstack server list
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| ID                                   | Name                                                  | Status | Networks                 | Image                         | Flavor               |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| 8c9b605c-2fe5-4879-be48-5201c27954cd | au-aling-group-d5y3tkua27eq-nswgw5m6k5al-4kd5t56p7jva | ACTIVE | my-stack-net=10.10.0.209 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 1ce3f12f-0e4e-4e44-ae4b-111612c8ad8d | privateVMs-node-1                                     | ACTIVE | my-stack-net=10.10.2.233 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 6673256b-da62-4399-a31d-0d8f6b267b3f | privateVMs-node-0                                     | ACTIVE | my-stack-net=10.10.3.235 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 8800369f-31f6-4ed5-a320-c09028aa9ff4 | my-stack-my_instance-uib3vgq7ipet                     | ACTIVE | my-stack-net=10.10.1.163 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+

The VM au-aling-group-d5y3tkua27eq-nswgw5m6k5al-4kd5t56p7jva is active. The other VMs come from the previous Usecases.

  • We check that the alarms/triggers are set :
$ openstack alarm list --fit-width
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+
| alarm_id                             | type                                       | name                                        | state | severity | enabled |
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+
| 3a7de77e-6c36-40d8-bc2c-c1f6f7e70523 | gnocchi_aggregation_by_resources_threshold | autoscaling_cpu-cpu_alarm_low-7ayvocoa563o  | alarm | low      | True    |
| 16b55ca2-1076-4b17-9022-3a3dcee12d26 | gnocchi_aggregation_by_resources_threshold | autoscaling_cpu-cpu_alarm_high-zibxutz4hytg | ok    | low      | True    |
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+

autoscaling_cpu-cpu_alarm_high-zibxutz4hytg is in state ok because the VM CPU usage is below its threshold (no action taken).

autoscaling_cpu-cpu_alarm_low-7ayvocoa563o is in state alarm because the VM CPU usage is below its threshold (the action taken, or that would be taken, is to scale down the number of VMs to the min_size defined in the Heat template, 1 in our case).
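The two states follow from a plain threshold comparison of the aggregated metric. A toy illustration (not Aodh's actual implementation) of why an almost idle VM yields ok for the high alarm and alarm for the low one:

```python
def alarm_state(measured_ns, threshold_ns, comparison):
    """Toy evaluation: 'alarm' if the comparison holds, else 'ok'."""
    if comparison == "gt":
        return "alarm" if measured_ns > threshold_ns else "ok"
    if comparison == "lt":
        return "alarm" if measured_ns < threshold_ns else "ok"
    raise ValueError(comparison)

idle = 5_000_000  # ns of CPU time in one period: an almost idle VM
print(alarm_state(idle, 180_000_000_000, "gt"))  # cpu_alarm_high -> ok
print(alarm_state(idle, 90_000_000_000, "lt"))   # cpu_alarm_low -> alarm
```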

  • Let's trigger the autoscale up

We'll trigger the scale-up by generating CPU load inside the VM with the stress command.

Our VM doesn't have internet connectivity, so we assigned it a floating IP using this guide.

Also check that your security group allows you to SSH into your VM. In our case we added a rule to the default security group this way :

openstack security group rule create --ingress --protocol tcp --dst-port 22 --ethertype IPv4 default

We SSH into the VM and install stress :

$ ssh debian@195.15.246.40
debian@au-aling-group-d5y3tkua27eq-nswgw5m6k5al-4kd5t56p7jva:~$ sudo apt update && sudo apt install stress

We check the CPU usage before stressing the VM; it is at 0% :

top - 06:56:45 up 20 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  62 total,   1 running,  61 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1997.7 total,   1710.3 free,     45.9 used,    241.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1811.7 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    1 root      20   0   21732   9848   7820 S   0.0   0.5   0:00.52 systemd
    2 root      20   0       0      0      0 S   0.0   0.0   0:00.00 kthreadd
    3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp

We now stress the VM:

debian@au-aling-group-d5y3tkua27eq-nswgw5m6k5al-4kd5t56p7jva:~$ stress -c 1
stress: info: [924] dispatching hogs: 1 cpu, 0 io, 0 vm, 0 hdd

In another terminal we check the CPU usage and see that the CPU is 100% busy :

top - 06:57:19 up 20 min,  1 user,  load average: 0.08, 0.02, 0.01
Tasks:  66 total,   2 running,  64 sleeping,   0 stopped,   0 zombie
%Cpu(s):100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1997.7 total,   1707.9 free,     47.8 used,    241.9 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1809.8 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  916 debian    20   0    3848     96      0 R  99.9   0.0   0:05.41 stress                                                                                                                                     
  1 root      20   0   21732   9848   7820 S   0.0   0.5   0:00.52 systemd

We can check the cpu usage in % reported for our VM using this command :

$ openstack metric aggregates '(* (/ (metric cpu rate:mean) 300000000000 ) 100)' id=8c9b605c-2fe5-4879-be48-5201c27954cd
+----------------------------------------------------+---------------------------+-------------+----------------------+
| name                                               | timestamp                 | granularity |                value |
+----------------------------------------------------+---------------------------+-------------+----------------------+
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T06:00:00+00:00 |     21600.0 |    8.780416666666667 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:00:00+00:00 |      3600.0 |    8.780416666666667 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:20:00+00:00 |       300.0 | 0.016666666666666666 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:25:00+00:00 |       300.0 | 0.016666666666666666 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:30:00+00:00 |       300.0 |                 0.02 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:35:00+00:00 |       300.0 | 0.013333333333333334 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:40:00+00:00 |       300.0 | 0.016666666666666666 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:45:00+00:00 |       300.0 | 0.016666666666666666 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:50:00+00:00 |       300.0 |  0.07333333333333333 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd/cpu/rate:mean | 2021-04-30T07:55:00+00:00 |       300.0 |                70.07 |
+----------------------------------------------------+---------------------------+-------------+----------------------+

After a few minutes we see that the CPU usage went from 0.07% to 70.07%, which should have triggered an alarm since we defined a threshold corresponding to a CPU usage > 60%.
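The openstack metric aggregates expression above performs the reverse conversion: it divides the per-period CPU-time delta by 300 × 10⁹ ns and multiplies by 100 to get a percentage. The same arithmetic as a sketch:

```python
GRANULARITY_NS = 300 * 1_000_000_000  # one 300 s period, in nanoseconds

def cpu_percent(cpu_time_delta_ns):
    """Convert a per-period CPU-time delta (ns) into a usage percentage."""
    return cpu_time_delta_ns / GRANULARITY_NS * 100

# A delta of 210.21e9 ns over one period is the 70.07% seen above.
print(round(cpu_percent(210.21e9), 2))
```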

$ openstack alarm list
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+
| alarm_id                             | type                                       | name                                        | state | severity | enabled |
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+
| 3a7de77e-6c36-40d8-bc2c-c1f6f7e70523 | gnocchi_aggregation_by_resources_threshold | autoscaling_cpu-cpu_alarm_low-7ayvocoa563o  | ok    | low      | True    |
| 16b55ca2-1076-4b17-9022-3a3dcee12d26 | gnocchi_aggregation_by_resources_threshold | autoscaling_cpu-cpu_alarm_high-zibxutz4hytg | alarm | low      | True    |
+--------------------------------------+--------------------------------------------+---------------------------------------------+-------+----------+---------+
The alarm autoscaling_cpu-cpu_alarm_high-zibxutz4hytg has been triggered, so new VMs should be created :

$ openstack server list
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| ID                                   | Name                                                  | Status | Networks                 | Image                         | Flavor               |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+
| 6431ae7a-9f80-4ecd-a993-11d5b184a2a6 | au-aling-group-d5y3tkua27eq-nd7dgrijwxbj-f5z2t5dtmww4 | ACTIVE | my-stack-net=10.10.1.236 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 8c9b605c-2fe5-4879-be48-5201c27954cd | au-aling-group-d5y3tkua27eq-nswgw5m6k5al-4kd5t56p7jva | ACTIVE | my-stack-net=10.10.0.209 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 1ce3f12f-0e4e-4e44-ae4b-111612c8ad8d | privateVMs-node-1                                     | ACTIVE | my-stack-net=10.10.2.233 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 6673256b-da62-4399-a31d-0d8f6b267b3f | privateVMs-node-0                                     | ACTIVE | my-stack-net=10.10.3.235 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
| 8800369f-31f6-4ed5-a320-c09028aa9ff4 | my-stack-my_instance-uib3vgq7ipet                     | ACTIVE | my-stack-net=10.10.1.163 | Debian 11 bullseye           | a1-ram2-disk20-perf1 |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------+-------------------------------+----------------------+

We see a new active VM, au-aling-group-d5y3tkua27eq-nd7dgrijwxbj-f5z2t5dtmww4 : the auto scale-up worked.