Manage Loadbalancers

This section presents use cases for OpenStack LBaaS (Octavia). Other examples are available in the official documentation.

Introduction

Introduction to Load Balancing as a Service (LBaaS) with Octavia

Load Balancing as a Service (LBaaS) is an OpenStack feature that enables automatic distribution of incoming application traffic across multiple backend servers. Octavia, the default LBaaS implementation in OpenStack, is designed to deliver scalable and highly available load balancing. It is built on a foundation of HAProxy and VRRP (Virtual Router Redundancy Protocol) to manage and route traffic efficiently. With Octavia, users can define, configure, and operate load balancers to ensure their applications remain responsive and reliable.

Key Components of Octavia LBaaS

Load Balancer

The load balancer is the central object of the service, responsible for managing the traffic flow. It distributes incoming traffic to backend servers based on the user-defined configuration.

  • Purpose: Serves as the gateway for distributing traffic.
  • Technical Note: Load balancers in Octavia are implemented using HAProxy with VRRP to ensure high availability.
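
As a rough sketch of how this maps to the CLI, a load balancer can be created with the OpenStack client; the names my-lb and my-private-subnet below are placeholders, not names from this guide:

    # Create a load balancer with its VIP allocated from a private subnet
    openstack loadbalancer create --name my-lb --vip-subnet-id my-private-subnet

    # Wait until provisioning_status shows ACTIVE
    openstack loadbalancer show my-lb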

VIP (Virtual IP Address)

The VIP is the IP address associated with the load balancer that external clients use to send traffic.

  • Purpose: Acts as the external entry point for traffic.
  • Example: A VIP could be 192.168.1.10, which clients use to access an application or service.
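
The VIP is allocated when the load balancer is created (for example via --vip-subnet-id, as sketched above). Assuming the placeholder name my-lb, it can be looked up afterwards with:

    openstack loadbalancer show my-lb -c vip_address -f value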

Listener

A listener defines how the load balancer processes incoming traffic on the VIP.

  • Purpose: Configures protocol (e.g., HTTP, HTTPS) and port (e.g., 80, 443) for incoming traffic.
  • Technical Note: This corresponds to the listen section in HAProxy’s configuration.
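
For illustration, a plain HTTP listener on port 80 could be added to the load balancer sketched earlier (names are placeholders):

    openstack loadbalancer listener create --name my-listener \
      --protocol HTTP --protocol-port 80 my-lb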

Pool

A pool is a collection of backend servers that the listener forwards traffic to.

  • Purpose: Manages the configuration and logic for distributing traffic among backend servers.
  • Technical Note: Pools map to the backend section in HAProxy.
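
A round-robin pool attached to that listener could then be created as follows (a sketch, with placeholder names):

    openstack loadbalancer pool create --name my-pool \
      --listener my-listener --protocol HTTP --lb-algorithm ROUND_ROBIN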

Member

Members are the individual backend servers (e.g., virtual machines or containers) within a pool.

  • Purpose: Represent the actual endpoints where traffic is routed.
  • Technical Note: In HAProxy, members correspond to server lines within a backend configuration.
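
Backend servers are then registered as members of the pool, for example two hypothetical web servers reachable on the private subnet:

    openstack loadbalancer member create --name web1 --address 192.168.1.21 \
      --protocol-port 80 --subnet-id my-private-subnet my-pool
    openstack loadbalancer member create --name web2 --address 192.168.1.22 \
      --protocol-port 80 --subnet-id my-private-subnet my-pool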

Health Monitor

A health monitor periodically checks the status of the members within a pool to ensure they are responsive and available.

  • Purpose: Detects and excludes unhealthy backend servers from receiving traffic.
  • Technical Note: Maps to the check parameters in HAProxy’s backend section.
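
As an example, an HTTP health monitor probing a hypothetical /healthz path every 5 seconds might look like this:

    openstack loadbalancer healthmonitor create --name my-monitor \
      --type HTTP --url-path /healthz --delay 5 --timeout 3 --max-retries 3 my-pool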

L7 Policy

An L7 policy is a rule set for advanced traffic management based on Layer 7 (application layer) attributes.

  • Purpose: Defines actions for packet forwarding, such as directing traffic to a specific pool or rejecting requests.
  • Example: Forward requests to a specific pool if the URL path starts with /api.
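
A sketch of such a policy, redirecting matching requests on the listener to a hypothetical api-pool:

    openstack loadbalancer l7policy create --name api-policy \
      --action REDIRECT_TO_POOL --redirect-pool api-pool my-listener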

L7 Rule

An L7 rule specifies the conditions under which an L7 policy is applied.

  • Purpose: Matches application-layer attributes, such as domain, URL, or HTTP headers.
  • Example: A rule can forward traffic to the “webserver” pool if the domain is example.com.
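
For example, a PATH rule could restrict the api-policy sketched above to requests whose path starts with /api, while a HOST_NAME rule on a separate, hypothetical host-policy could match the domain example.com:

    # Apply api-policy only to requests whose path starts with /api
    openstack loadbalancer l7rule create --type PATH \
      --compare-type STARTS_WITH --value /api api-policy

    # Match on the Host header (attached to a separate, hypothetical host-policy)
    openstack loadbalancer l7rule create --type HOST_NAME \
      --compare-type EQUAL_TO --value example.com host-policy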

Example architecture

An example architecture for deploying a load balancer in front of a pool of servers with OpenStack Octavia would look something like the following.

---
title: Simple Loadbalancer Architecture
---
graph LR;
  subgraph frontendnetwork[Single Tenant Network]
  direction TB
    amphora1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/haproxy-color.png' width='40' height='40'/></span>)
    amphora2(<span style='min-width:40px;display:block;'><img src='/assets/images/network/haproxy-color.png' width='40' height='40'/></span>)

    amphora1 <--VRRP--> amphora2

    vip((Virtual IP))
    vip --o|master| amphora1
    vip -.-o|backup| amphora2

    intuser1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/user-solid-black.png' width='40' height='40'/></span>)
    intuser1 -->|internal users| vip

    kube1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/kubernetes-color.svg' width='40' height='40'/></span>)
    kube2(<span style='min-width:40px;display:block;'><img src='/assets/images/network/kubernetes-color.svg' width='40' height='40'/></span>)

    amphora1 --o kube1
    amphora1 --o kube2
    amphora2 --o kube1
    amphora2 --o kube2
  end

  floating1((Floating IP)) -->|1-to-1 NAT| vip

  extuser1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/user-solid-black.png' width='40' height='40'/></span>)
  extuser1 -->|external users| floating1

For a clearer separation between public-facing and private-facing applications, one could choose to segregate the load balancer and application networks, essentially creating a two-tier architecture.

---
title: Two-tier Loadbalancer Architecture
---
graph LR;
  subgraph frontendnetwork[Frontend Tenant Network]
  direction TB
    amphora1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/haproxy-color.png' width='40' height='40'/></span>)
    amphora2(<span style='min-width:40px;display:block;'><img src='/assets/images/network/haproxy-color.png' width='40' height='40'/></span>)

    amphora1 <--VRRP--> amphora2

    vip((Virtual IP))
    vip --o|master| amphora1
    vip -.-o|backup| amphora2

    intuser1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/user-solid-black.png' width='40' height='40'/></span>)
    intuser1 -->|internal users| vip
  end

  floating1((Floating IP)) -->|1-to-1 NAT| vip

  extuser1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/user-solid-black.png' width='40' height='40'/></span>)
  extuser1 -->|external users| floating1

  subgraph backendnetwork[Backend Tenant Network]
  direction LR
    kube1(<span style='min-width:40px;display:block;'><img src='/assets/images/network/kubernetes-color.svg' width='40' height='40'/></span>)
    kube2(<span style='min-width:40px;display:block;'><img src='/assets/images/network/kubernetes-color.svg' width='40' height='40'/></span>)
  end

  amphora1 --o kube1
  amphora1 --o kube2
  amphora2 --o kube1
  amphora2 --o kube2

  router[Router] --> backendnetwork
  router[Router] --> frontendnetwork

Note

In this configuration, it is necessary to create a router with interfaces in both networks, as the load balancers and backend servers need to be able to communicate with each other.
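
A minimal sketch of creating such a router with the OpenStack CLI, assuming placeholder subnet names frontend-subnet and backend-subnet:

    openstack router create my-router
    openstack router add subnet my-router frontend-subnet
    openstack router add subnet my-router backend-subnet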

Warning

The public network ext-net1 should NOT be used as the virtual-ip network for Octavia load balancers, as it can cause issues during failover operations if there are not enough IP addresses available in the virtual-ip subnet to recreate ports. It is instead recommended to use a private, customer-owned network and attach a floating IP to the virtual-ip port in order to expose the load balancer externally.
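
A sketch of exposing the load balancer this way, assuming the placeholder name my-lb and the public network ext-net1:

    # Find the port that holds the load balancer's VIP
    VIP_PORT=$(openstack loadbalancer show my-lb -c vip_port_id -f value)

    # Allocate a floating IP from the public network and attach it to the VIP port
    openstack floating ip create ext-net1
    openstack floating ip set --port "$VIP_PORT" <floating-ip-address>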

Use cases

You can read the following guides on setting up different kinds of load balancers in OpenStack: