TLS Passthrough Loadbalancer

This usecase focuses on setting up a loadbalancer with TLS passthrough, meaning that the application is the one providing the client with a TLS certificate, and the loadbalancer does not terminate the TLS connection. In this configuration, traffic is encrypted end-to-end and is only ever decrypted by the application.
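Conceptually, a passthrough loadbalancer only relays opaque TCP bytes between the client and the backend. A minimal illustrative sketch of that behaviour (this is not how Octavia is implemented, just the idea):

```python
import socket

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Forward raw bytes from src to dst until EOF.

    A TLS-passthrough loadbalancer does essentially this at the TCP
    level: it never parses or decrypts the stream, so the TLS session
    is negotiated directly between the client and the backend.
    """
    while chunk := src.recv(4096):
        dst.sendall(chunk)
    dst.shutdown(socket.SHUT_WR)
```

Because the relay never inspects the payload, the bytes of the TLS handshake (and everything after it) reach the backend unchanged.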

Info

Similar to the basic HTTP usecase, we've created a single network pcp-dawxdax-frontend-network with a single subnet pcp-dawxdax-frontend-subnet-1 and a CIDR of 10.4.128.128/27.
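For reference, a /27 leaves 30 usable addresses. A quick check with Python's `ipaddress` module, using the subnet above:

```python
import ipaddress

# The frontend subnet used throughout this example.
subnet = ipaddress.ip_network("10.4.128.128/27")

hosts = list(subnet.hosts())
print(subnet.num_addresses)  # 32 addresses in a /27
print(hosts[0], hosts[-1])   # usable range: 10.4.128.129 - 10.4.128.158
```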

Creating backend instances

We'll create a single instance for this example, so that we don't have to worry about clustering on the application side. We'll deploy a single-node k3s machine and expose it through an Octavia loadbalancer with TLS passthrough.

First, we need to create the security group that we will associate with the instance.

❯ openstack security group create demo-security-group

# we get id de505897-c348-4b1b-bd65-85315c5145d6 for our new security group
❯ openstack security group rule create \
  --ingress \
  --ethertype IPv4 \
  --protocol tcp \
  --dst-port 6443 \
  --remote-ip 0.0.0.0/0 \
  de505897-c348-4b1b-bd65-85315c5145d6

The equivalent Terraform configuration:

resource "openstack_networking_secgroup_v2" "demo" {
  name        = "demo-security-group"
  description = "Terraform managed."
}

resource "openstack_networking_secgroup_rule_v2" "ingress" {
  direction         = "ingress"
  security_group_id = openstack_networking_secgroup_v2.demo.id

  description      = "Terraform managed."
  ethertype        = "IPv4"
  protocol         = "tcp"
  port_range_min   = 6443
  port_range_max   = 6443
  remote_ip_prefix = "0.0.0.0/0"
}

Then, we will create the instance that will serve as a backend for the loadbalancer, and install k3s on it through a user-data script.

❯ openstack image list --name "Debian 12 bookworm"
+--------------------------------------+--------------------+--------+
| ID                                   | Name               | Status |
+--------------------------------------+--------------------+--------+
| 39d7884c-b173-4d0b-9b80-233a2acb3588 | Debian 12 bookworm | active |
+--------------------------------------+--------------------+--------+

❯ openstack flavor show a1-ram2-disk20-perf1
+----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                      | Value                                                                                                                                             |
+----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                                                                                                             |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                                                                                                                 |
| access_project_ids         | None                                                                                                                                              |
| description                | None                                                                                                                                              |
| disk                       | 20                                                                                                                                                |
| id                         | 093a53d7-f420-4b79-9bb0-9ad4eb190631                                                                                                              |
| name                       | a1-ram2-disk20-perf1                                                                                                                              |
| os-flavor-access:is_public | True                                                                                                                                              |
| properties                 | hw:cpu_sockets='1', quota:disk_read_bytes_sec='209715200', quota:disk_read_iops_sec='500', quota:disk_write_bytes_sec='209715200',                |
|                            | quota:disk_write_iops_sec='500'                                                                                                                   |
| ram                        | 2048                                                                                                                                              |
| rxtx_factor                | 1.0                                                                                                                                               |
| swap                       | 0                                                                                                                                                 |
| vcpus                      | 1                                                                                                                                                 |
+----------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------+

❯ openstack network list --name pcp-dawxdax-frontend-network
+--------------------------------------+------------------------------+--------------------------------------+
| ID                                   | Name                         | Subnets                              |
+--------------------------------------+------------------------------+--------------------------------------+
| 83f559b1-3522-4bb2-a179-4f3f4ec58b71 | pcp-dawxdax-frontend-network | be560c41-bcf4-49a1-8117-caf292bd9e49 |
+--------------------------------------+------------------------------+--------------------------------------+

❯ openstack keypair create demo-keypair > demo-keypair.pem

❯ chmod 600 demo-keypair.pem

❯ cat > user_data_script.sh <<EOL
#! /bin/bash
DEBIAN_FRONTEND=noninteractive
apt update
apt install -y curl
curl -sfL https://get.k3s.io | sh -
EOL

# here, we re-use the values retrieved earlier for the flavor id,
# network id, image id, as well as the security group name we created.
❯ openstack server create \
  --flavor a1-ram2-disk20-perf1  \
  --image 39d7884c-b173-4d0b-9b80-233a2acb3588 \
  --key-name demo-keypair \
  --security-group demo-security-group \
  --network 83f559b1-3522-4bb2-a179-4f3f4ec58b71 \
  --user-data user_data_script.sh \
  demo-k3s-server-1

The equivalent Terraform configuration:

data "openstack_compute_flavor_v2" "a1_ram2_disk20_perf1" {
  name = "a1-ram2-disk20-perf1"
}

data "openstack_images_image_v2" "debian_12" {
  name_regex  = "^Debian 12.*"
  most_recent = true
}


data "openstack_networking_network_v2" "demo" {
  name = "pcp-dawxdax-frontend-network"
}

resource "openstack_compute_keypair_v2" "demo_keypair" {
  name = "demo-keypair"
}

resource "openstack_compute_instance_v2" "k3s_server" {
  count = 1

  name      = "demo-k3s-server-${count.index + 1}"
  flavor_id = data.openstack_compute_flavor_v2.a1_ram2_disk20_perf1.id
  image_id  = data.openstack_images_image_v2.debian_12.id

  key_pair = openstack_compute_keypair_v2.demo_keypair.name

  security_groups = [openstack_networking_secgroup_v2.demo.name]

  network {
    uuid = data.openstack_networking_network_v2.demo.id
  }

  user_data = <<EOT
#! /bin/bash
DEBIAN_FRONTEND=noninteractive
apt update
apt install -y curl
curl -sfL https://get.k3s.io | sh -
EOT
}

Creating the loadbalancer

We can now create our loadbalancer, as well as a listener and a pool for our HTTPS endpoint. We'll also add a healthcheck to automatically remove from the round robin any backend that does not respond as it should.

❯ openstack subnet list --network 83f559b1-3522-4bb2-a179-4f3f4ec58b71
+--------------------------------------+-------------------------------+--------------------------------------+-----------------+
| ID                                   | Name                          | Network                              | Subnet          |
+--------------------------------------+-------------------------------+--------------------------------------+-----------------+
| be560c41-bcf4-49a1-8117-caf292bd9e49 | pcp-dawxdax-frontend-subnet-1 | 83f559b1-3522-4bb2-a179-4f3f4ec58b71 | 10.4.128.128/27 |
+--------------------------------------+-------------------------------+--------------------------------------+-----------------+

❯ openstack port create demo-loadbalancer-port \
  --network 83f559b1-3522-4bb2-a179-4f3f4ec58b71 \
  --fixed-ip subnet=be560c41-bcf4-49a1-8117-caf292bd9e49 \
  --enable \
  --no-security-group
# Here, we get id faee4783-c9bb-4ec4-9d1a-f8f5d553b120 for the newly created port
❯ openstack loadbalancer create \
  --name demo-loadbalancer-1 \
  --vip-port-id faee4783-c9bb-4ec4-9d1a-f8f5d553b120

❯ openstack loadbalancer listener create \
  --name demo-listener-https-passthrough \
  --protocol HTTPS \
  --protocol-port 6443 \
  demo-loadbalancer-1

❯ openstack loadbalancer pool create \
  --name demo-pool-https \
  --protocol HTTPS \
  --lb-algorithm ROUND_ROBIN \
  --listener demo-listener-https-passthrough

# to add the members, we will re-use the ip address of the instance
# we created earlier, here 10.4.128.143
❯ openstack loadbalancer member create \
  --name demo-member-1 \
  --address 10.4.128.143 \
  --protocol-port 6443 \
  demo-pool-https

❯ openstack loadbalancer healthmonitor create \
  --type HTTPS \
  --delay 30 \
  --timeout 5 \
  --max-retries 2 \
  --expected-codes 401 \
  --url-path "/livez" \
  demo-pool-https

The equivalent Terraform configuration:

data "openstack_networking_subnet_v2" "demo" {
  name       = "pcp-dawxdax-frontend-subnet-1"
  network_id = data.openstack_networking_network_v2.demo.id
}

resource "openstack_networking_port_v2" "demo_lb" {
  name                  = "demo-loadbalancer-port"
  network_id            = data.openstack_networking_network_v2.demo.id
  admin_state_up        = true
  port_security_enabled = true
  no_security_groups    = true
  fixed_ip {
    subnet_id = data.openstack_networking_subnet_v2.demo.id
  }
}

resource "openstack_lb_loadbalancer_v2" "demo" {
  name        = "demo-loadbalancer-1"
  vip_port_id = openstack_networking_port_v2.demo_lb.id
  security_group_ids = [
    openstack_networking_secgroup_v2.demo.id,
  ]
}

resource "openstack_lb_listener_v2" "demo_tls_passthrough" {
  name            = "demo-listener-https-passthrough"
  protocol        = "HTTPS"
  protocol_port   = 6443
  loadbalancer_id = openstack_lb_loadbalancer_v2.demo.id
}

resource "openstack_lb_pool_v2" "demo" {
  name     = "demo-pool-https"
  protocol = "HTTPS"
  # tls_enabled = true
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.demo_tls_passthrough.id
}

resource "openstack_lb_member_v2" "demo_https_passthrough" {
  count = 1

  name          = "demo-member-${count.index + 1}"
  pool_id       = openstack_lb_pool_v2.demo.id
  address       = openstack_compute_instance_v2.k3s_server[count.index].access_ip_v4
  protocol_port = 6443
}

resource "openstack_lb_monitor_v2" "demo_liveness" {
  pool_id        = openstack_lb_pool_v2.demo.id
  type           = "HTTPS"
  url_path       = "/livez"
  http_method    = "GET"
  expected_codes = "401"
  delay          = 30
  timeout        = 5
  max_retries    = 2
}
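With the settings above (delay 30s, timeout 5s, max-retries 2), an unresponsive member can keep receiving traffic for roughly a minute before being marked down. A back-of-the-envelope estimate (our own approximation, not a figure from the Octavia documentation):

```python
# Health monitor settings from the configuration above.
delay = 30       # seconds between probes
timeout = 5      # seconds before a probe counts as failed
max_retries = 2  # consecutive failures before the member goes down

# Rough worst case: the member fails right after a successful probe,
# then has to miss `max_retries` probes in a row, the last of which
# can hang for up to `timeout` seconds before failing.
worst_case = max_retries * delay + timeout
print(worst_case)  # 65 seconds
```

Lowering `--delay` makes failover faster at the cost of more probe traffic against the backends.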

Associate a floating IP

The last step is to associate a floating IP with our loadbalancer VIP, in order to make it publicly reachable.

Warning

This step is optional and should only be done for testing purposes, or if you intend to make your loadbalancer public-facing.

❯ openstack network show -f value -c id ext-floating1
34a684b8-2889-4950-b08e-c33b3954a307

❯ openstack floating ip create 34a684b8-2889-4950-b08e-c33b3954a307
# here we get the ip 37.156.43.216, and the floating ip id of 6029cc65-4802-4fe2-9d46-846db9046ee5

# we reference the id of the previously created vip-port of the loadbalancer (faee4783-c9bb-4ec4-9d1a-f8f5d553b120)
❯ openstack floating ip set --port faee4783-c9bb-4ec4-9d1a-f8f5d553b120 6029cc65-4802-4fe2-9d46-846db9046ee5

The equivalent Terraform configuration:

data "openstack_networking_network_v2" "floating" {
  name = "ext-floating1"
}

resource "openstack_networking_floatingip_v2" "demo_float" {
  pool = data.openstack_networking_network_v2.floating.name
}

# we reference the loadbalancer port from the previous step here
resource "openstack_networking_floatingip_associate_v2" "demo_float_lb" {
  floating_ip = openstack_networking_floatingip_v2.demo_float.address
  port_id     = openstack_networking_port_v2.demo_lb.id
}

Testing your loadbalancer

Now that everything is created, we should be able to reach the k3s API through our loadbalancer via HTTPS, with the TLS certificate served directly by the Kubernetes API server. The certificate will not be trusted as it is self-signed, so we'll use the --insecure flag.

❯ curl --insecure -o /dev/null -s -w "%{http_code}\n" https://37.156.43.216:6443
401

❯ curl --insecure https://37.156.43.216:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

We get a 401 error because we're not authenticated to the k3s API, but this means that the loadbalancer is working as intended.
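The response body is a standard Kubernetes `Status` object. A quick sketch parsing the payload shown above to confirm what the API server told us:

```python
import json

# The JSON body returned by the API server through the loadbalancer.
body = """{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}"""

status = json.loads(body)
# A 401 here means the TLS handshake and the HTTP exchange both reached
# the backend: the loadbalancer forwarded the encrypted stream as-is.
print(status["kind"], status["code"])  # Status 401
```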

❯ echo | openssl s_client -connect 37.156.43.216:6443 2>/dev/null | openssl x509 -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 7289255002910718698 (0x6528a2ce15bb7aea)
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: CN = k3s-server-ca@1736259053
        Validity
            Not Before: Jan  7 14:10:53 2025 GMT
            Not After : Jan  7 14:10:53 2026 GMT
        Subject: O = k3s, CN = k3s
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:d9:b5:3f:c4:57:d9:7f:b2:83:e6:1c:cc:83:31:
                    48:77:ca:19:0c:3c:19:14:ea:e3:c6:18:3b:88:c8:
                    8f:f1:fb:28:7c:56:20:1a:d9:b5:64:4d:e7:94:70:
                    3c:18:dc:c0:46:d0:38:d7:18:7c:6e:77:cd:4c:fa:
                    18:fc:1e:c4:91
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Authority Key Identifier:
                91:F6:29:C5:6F:A0:63:AD:68:BB:C8:7F:6B:45:43:AC:A9:20:CD:14
            X509v3 Subject Alternative Name:
                DNS:demo-k3s-server-1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, IP Address:10.4.128.143, IP Address:10.43.0.1, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
    Signature Algorithm: ecdsa-with-SHA256
    Signature Value:
        30:45:02:20:54:b8:c9:b1:57:67:e3:b4:da:f4:2d:50:69:d2:
        4d:00:8c:09:04:6c:6b:ee:07:5d:e5:4a:57:b9:de:7f:29:8e:
        02:21:00:f3:6a:02:52:9d:25:dc:a8:e9:b4:3c:1b:61:0c:7e:
        11:96:c9:48:21:24:f4:ad:05:68:b3:3a:5e:e8:91:80:93

If we inspect the certificate presented to us when connecting, we can see it is indeed the k3s certificate, served unmodified through the loadbalancer.
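Note also that the floating IP does not appear among the certificate's Subject Alternative Names, so even a client that trusted the k3s CA would fail hostname verification when connecting through the loadbalancer. A small illustrative check (the SAN list is copied from the dump above; the helper function is our own):

```python
import ipaddress

# IP SANs copied from the certificate dump above
# ("0:0:0:0:0:0:0:1" is the unabbreviated form of ::1).
ip_sans = ["10.4.128.143", "10.43.0.1", "127.0.0.1", "0:0:0:0:0:0:0:1"]

def ip_in_sans(ip: str) -> bool:
    """Check whether `ip` matches one of the certificate's IP SANs."""
    return any(ipaddress.ip_address(ip) == ipaddress.ip_address(san)
               for san in ip_sans)

print(ip_in_sans("10.4.128.143"))   # True: the backend's own address
print(ip_in_sans("37.156.43.216"))  # False: the floating IP is absent
```

To make the certificate valid through the loadbalancer, the floating IP (or a DNS name pointing at it) would have to be added to the k3s server's TLS SANs.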