
Kubernetes: auto-provisioning a configurable k8s cluster powered by Heat and Kubespray

Create the complete network architecture for deploying a Kubernetes cluster in a few minutes.

All deployment files: Infomaniak resources

Figure: k8s architecture

Install dependencies

To begin, let's set up a virtual environment and install the required dependencies in it.

python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
source /Users/myuser/PCP-XXXX-openrc.sh

requirements.txt lists the OpenStack client packages to install:

openstackclient
python-heatclient
python-swiftclient
python-glanceclient
python-novaclient
gnocchiclient
python-cloudkittyclient

Create the OpenStack infrastructure

Once the templates described below are in place, create the stack (here named k8s):

openstack stack create -t cluster.yaml --parameter whitelisted_ip_range="10.8.0.0/16" k8s --wait

The whitelisted_ip_range parameter corresponds to the IP range that is allowed to connect to the bastion (the entry point of the k8s cluster) and to the cluster's API server.
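
While the stack is being created you can follow the progress of each resource from another terminal (assuming the stack is named k8s as above):

openstack stack event list k8s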

Create a directory named heat_k8s:

mkdir heat_k8s
Inside this directory create files called bastion.yaml, cluster.yaml, worker.yaml, master.yaml with the following content:

bastion.yaml:

heat_template_version: 2016-10-14
parameters:
  image:
    type: string
    description: Image used for VMs
    default: Ubuntu 20.04 LTS Focal Fossa
  ssh_key:
    type: string
    description: SSH key to connect to VMs
    default: yubikey
  flavor:
    type: string
    description: flavor used by the bastion
    default: a1-ram2-disk20-perf1
  whitelisted_ip_range:
    type: string
    description: ip range allowed to connect to the bastion and the k8s api server
  network:
    type: string
    description: Network used by the VM
  subnet:
    type: string
    description: Subnet used by the VM
resources:
  bastion_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: ext-floating1
  bastion_port:
    type: OS::Neutron::Port
    properties:
      name: bastion
      network: {get_param: network}
      fixed_ips:
        - subnet_id: {get_param: subnet}
      security_groups:
        - {get_resource: securitygroup_bastion}
  bastion_instance:
    type: OS::Nova::Server
    properties:
      name: bastion
      key_name: { get_param: ssh_key }
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks: [{port: {get_resource: bastion_port} }]
  association:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: bastion_floating_ip }
      port_id: { get_resource: bastion_port }
  securitygroup_bastion:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Bastion SSH
      name: bastion
      rules:
        - protocol: tcp
          remote_ip_prefix: {get_param: whitelisted_ip_range}
          port_range_min: 22
          port_range_max: 22
outputs:
  bastion_public_ip:
    description: Floating IP of the bastion host
    value: {get_attr: [bastion_floating_ip, floating_ip_address]}

master.yaml:

---
heat_template_version: 2016-10-14
description: A Kubernetes master node
parameters:
  name:
    type: string
    description: Server name
  image:
    type: string
    description: Image used for servers
  ssh_key:
    type: string
    description: SSH key to connect to VMs
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
  subnet:
    type: string
    description: Subnet used by the server
  security_groups:
    type: string
    description: Security groups used by the server
  allowed_address_pairs:
    type: json
    default: []
  servergroup:
    type: string
resources:
  port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: network}
      security_groups: [{get_param: security_groups}]
      allowed_address_pairs: {get_param: allowed_address_pairs}
  server:
    type: OS::Nova::Server
    properties:
      name: {get_param: name}
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: ssh_key}
      metadata: {get_param: metadata}
      networks: [{port: {get_resource: port} }]
      scheduler_hints:
        group: {get_param: servergroup}
  member:
    type: OS::Octavia::PoolMember
    properties:
      pool: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 6443
      subnet: {get_param: subnet}
outputs:
  server_ip:
    description: IP Address of master nodes.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }

worker.yaml:

---
heat_template_version: 2016-10-14
description: A Kubernetes worker node
parameters:
  name:
    type: string
    description: Server name
  image:
    type: string
    description: Image used for servers
  ssh_key:
    type: string
    description: SSH key to connect to VMs
  flavor:
    type: string
    description: flavor used by the servers
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
  security_groups:
    type: string
    description: Security groups used by the server
  allowed_address_pairs:
    type: json
    default: []
  servergroup:
    type: string
resources:
  port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: network}
      security_groups: [{get_param: security_groups}]
      allowed_address_pairs: {get_param: allowed_address_pairs}
  server:
    type: OS::Nova::Server
    properties:
      name: {get_param: name}
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: ssh_key}
      metadata: {get_param: metadata}
      networks: [{port: {get_resource: port} }]
      scheduler_hints:
        group: {get_param: servergroup}
outputs:
  server_ip:
    description: IP Address of the worker nodes.
    value: { get_attr: [server, first_address] }

cluster.yaml:

heat_template_version: 2016-10-14
description: Kubernetes cluster
parameters:
  image:
    type: string
    description: Image used for VMs
    default: Ubuntu 20.04 LTS Focal Fossa
  ssh_key:
    type: string
    description: SSH key to connect to VMs
    default: your-key
  master_flavor:
    type: string
    description: flavor used by master nodes
    default: a2-ram4-disk80-perf1
  worker_flavor:
    type: string
    description: flavor used by worker nodes
    default: a4-ram8-disk80-perf1
  bastion_flavor:
    type: string
    description: flavor used by the bastion
    default: a1-ram2-disk20-perf1
  whitelisted_ip_range:
    type: string
    description: ip range allowed to connect to the bastion and the k8s api server
    default: 10.8.0.0/16
  subnet_cidr:
    type: string
    description: cidr for the private network
    default: 10.11.12.0/24
  kube_service_addresses:
    type: string
    description: Kubernetes internal network for services.
    default: 10.233.0.0/18
  kube_pods_subnet:
    type: string
    description: Kubernetes internal network for pods.
    default: 10.233.64.0/18

resources:
  # Private Network
  k8s_net:
    type: OS::Neutron::Net
    properties:
      name: k8s-network
      value_specs:
        mtu: 1500
  k8s_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: k8s-subnet
      network_id: {get_resource: k8s_net}
      cidr: {get_param: subnet_cidr}
      dns_nameservers:
        - 84.16.67.69
        - 84.16.67.70
      ip_version: 4
  # router between loadbalancer and private network
  k8s_router:
    type: OS::Neutron::Router
    properties:
      name: k8s-router
      external_gateway_info: { network: ext-floating1 }
  k8s_router_subnet_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: {get_resource: k8s_router}
      subnet: {get_resource: k8s_subnet}

  # master nodes
  group_masters:
    type: OS::Nova::ServerGroup
    properties:
      name: k8s-masters
      policies: ["anti-affinity"]
  k8s_masters:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: master.yaml
        properties:
          name: k8s-master-%index%
          servergroup: {get_resource: group_masters}
          flavor: {get_param: master_flavor}
          image: {get_param: image}
          ssh_key: {get_param: ssh_key}
          network: {get_resource: k8s_net}
          subnet:  {get_resource: k8s_subnet}
          pool_id: {get_resource: pool_masters}
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
          security_groups: {get_resource: securitygroup_masters}
          allowed_address_pairs:
            - ip_address: {get_param: kube_service_addresses}
            - ip_address: {get_param: kube_pods_subnet}
  securitygroup_masters:
    type: OS::Neutron::SecurityGroup
    properties:
      description: K8s masters
      name: k8s-masters
      rules:
        - protocol: icmp
          remote_ip_prefix: {get_param: subnet_cidr}
        - protocol: tcp
          direction: ingress
          remote_ip_prefix: {get_param: subnet_cidr}
        - protocol: udp
          direction: ingress
          remote_ip_prefix: {get_param: subnet_cidr}

  # worker nodes
  group_workers:
    type: OS::Nova::ServerGroup
    properties:
      name: k8s-workers
      policies: ["anti-affinity"]
  k8s_workers:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: worker.yaml
        properties:
          name: k8s-worker-%index%
          servergroup: {get_resource: group_workers}
          flavor: {get_param: worker_flavor}
          image: {get_param: image}
          ssh_key: {get_param: ssh_key}
          network: {get_resource: k8s_net}
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}
          security_groups: {get_resource: securitygroup_workers}
          allowed_address_pairs:
            - ip_address: {get_param: kube_service_addresses}
            - ip_address: {get_param: kube_pods_subnet}
  securitygroup_workers:
    type: OS::Neutron::SecurityGroup
    properties:
      description: K8s workers
      name: k8s-workers
      rules:
        - protocol: icmp
          remote_ip_prefix: {get_param: subnet_cidr}
        - protocol: tcp
          direction: ingress
          remote_ip_prefix: {get_param: subnet_cidr}
        - protocol: udp
          direction: ingress
          remote_ip_prefix: {get_param: subnet_cidr}

  # k8s api loadbalancer
  loadbalancer_k8s_api:
    type: OS::Octavia::LoadBalancer
    properties:
      name: k8s-api
      vip_subnet: {get_resource: k8s_subnet}
  listener_masters:
    type: OS::Octavia::Listener
    properties:
      name: k8s-master
      loadbalancer: {get_resource: loadbalancer_k8s_api}
      protocol: HTTPS
      protocol_port: 6443
      allowed_cidrs:
        - {get_param: whitelisted_ip_range}
        - {get_param: subnet_cidr}
        - {get_param: kube_service_addresses}
        - {get_param: kube_pods_subnet}
  pool_masters:
    type: OS::Octavia::Pool
    properties:
      name: k8s-master
      listener: {get_resource: listener_masters}
      lb_algorithm: ROUND_ROBIN
      protocol: HTTPS
      session_persistence:
        type: SOURCE_IP
  monitor_masters:
    type: OS::Octavia::HealthMonitor
    properties:
      pool: { get_resource: pool_masters }
      type: HTTPS
      url_path: /livez?verbose
      http_method: GET
      expected_codes: 200
      delay: 5
      max_retries: 5
      timeout: 5
  loadbalancer_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: ext-floating1
      port_id: {get_attr: [loadbalancer_k8s_api, vip_port_id]}

  # bastion
  bastion:
    type: bastion.yaml
    depends_on: [ k8s_net, k8s_router ]
    properties:
      image: {get_param: image}
      ssh_key: { get_param: ssh_key }
      flavor: { get_param: bastion_flavor }
      whitelisted_ip_range: {get_param: whitelisted_ip_range}
      network: {get_resource: k8s_net}
      subnet: {get_resource: k8s_subnet}

outputs:
  bastion_public_ip:
    description: Floating IP of the bastion host
    value: {get_attr: [bastion, resource.bastion_floating_ip, floating_ip_address]}
  k8s_masters:
    description: Masters ip addresses
    value: {get_attr: [k8s_masters, server_ip]}
  k8s_workers:
    description: Workers ip addresses
    value: {get_attr: [k8s_workers, server_ip]}
  vrrp_public_ip:
    description: vrrp public ip
    value: {get_attr: [loadbalancer_floating, floating_ip_address]}

These templates define, from an OpenStack point of view, the architecture of the various components of your future cluster: IP addressing, traffic rules and SSH key management. All of these components are referenced from the cluster.yaml manifest, which organizes and deploys the defined resources so that the resulting infrastructure is ready to host the Kubernetes installation. To add your SSH key to the machines provisioned by these templates, set the ssh_key default to the name of your key, as given by the command:

openstack keypair list
ssh_key:
  type: string
  description: SSH key to connect to VMs
  default: your-key
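
Alternatively, you can leave the default alone and pass the key name (like any other parameter) at creation time; a sketch, assuming a keypair named my-key and a stack named k8s:

openstack stack create -t cluster.yaml \
  --parameter ssh_key=my-key \
  --parameter whitelisted_ip_range="10.8.0.0/16" \
  k8s --wait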
Creating the stack may take a few minutes. Once the stack is created, you can check the rating (billing) data for your project:

openstack rating dataframes get -b 2021-08-25T00:00:00 | awk -F 'rating' '{print $2}' | awk -F "'" '{sum+=$3} END{print sum/50}'
To update the stack with new values:
openstack stack update -t cluster.yaml --parameter whitelisted_ip_range="10.8.0.0/16" k8s --wait
To delete the stack:
openstack stack delete k8s --wait
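
The outputs declared at the end of cluster.yaml (bastion IP, node IPs, load balancer floating IP) can be read back once the stack is up, for example (again assuming the stack is named k8s):

openstack stack output list k8s
openstack stack output show k8s bastion_public_ip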

To view the security group rules of your master and worker nodes:

openstack security group rule list k8s-masters
openstack security group rule list k8s-workers

Deploy Kubernetes: Kubespray (tested on 2.16)

Clone the Kubespray repository:

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout tags/v2.16.0

We need to add the pre-configured files to the Kubespray repository. Copy the inventory/k8s folder into the Ansible project.
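
A minimal copy command, assuming the deployment files linked at the top were cloned next to the kubespray checkout (../deployment-files below is a placeholder; adjust it to your layout):

cp -r ../deployment-files/inventory/k8s inventory/k8s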

Edit your inventory file inventory/k8s/hosts.yaml to replace the node and bastion IP addresses.

Display the IPs of your cluster:

openstack server list -cName -cNetworks --sort-column Name -f value

sample hosts.yaml:

all:
  hosts:
    k8s-master-0:
      ansible_host: 10.11.12.163
      ansible_user: ubuntu
      ip: 10.11.12.163
    k8s-master-1:
      ansible_host: 10.11.12.103
      ansible_user: ubuntu
      ip: 10.11.12.103
    k8s-master-2:
      ansible_host: 10.11.12.94
      ansible_user: ubuntu
      ip: 10.11.12.94
    k8s-worker-0:
      ansible_host: 10.11.12.73
      ansible_user: ubuntu
      ip: 10.11.12.73
    k8s-worker-1:
      ansible_host: 10.11.12.180
      ansible_user: ubuntu
      ip: 10.11.12.180
  children:
    kube_control_plane:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    kube_node:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
        k8s-worker-0:
        k8s-worker-1:
    etcd:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
    bastion:
      hosts:
        bastion:
          ansible_host: 195.15.244.254
          ansible_user: ubuntu
Edit the inventory file inventory/k8s/group_vars/all/all.yaml to set the k8s API load balancer IP address in loadbalancer_apiserver:

openstack loadbalancer show -c vip_address k8s-api -f value
To allow managing your Kubernetes cluster from outside its private network, we also add the public IP of the API server to the configuration so that it is covered by the SSL certificates. Edit the same inventory/k8s/group_vars/all/all.yaml file and add the public IP of the k8s API load balancer:
openstack floating ip list --port $(openstack loadbalancer show k8s-api -c vip_port_id -f value) -c "Floating IP Address" -f value
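
For reference, a sketch of the values involved, using the example addresses from this walkthrough (substitute the outputs of the two commands above). Note that in a stock Kubespray inventory the public IP usually goes into supplementary_addresses_in_ssl_keys, which may live in group_vars/k8s_cluster/k8s-cluster.yml rather than all.yaml:

loadbalancer_apiserver:
  address: 10.11.12.81          # private VIP of the k8s-api load balancer
  port: 6443
supplementary_addresses_in_ssl_keys:
  - 195.15.245.220              # public floating IP of the k8s-api load balancer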

We can now install Kubernetes via the Kubespray playbook. Run it with:

ansible-playbook -i inventory/k8s/hosts.yaml --become --become-user=root cluster.yml
After the installation finishes, replace the load balancer's private IP address in inventory/k8s/artifacts/admin.conf with the public one.
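
A one-liner for the substitution (a sketch using the example addresses from this walkthrough; replace them with your own private VIP and floating IP):

sed -i 's/10.11.12.81/195.15.245.220/g' inventory/k8s/artifacts/admin.conf
# on macOS/BSD sed, use: sed -i '' 's/10.11.12.81/195.15.245.220/g' inventory/k8s/artifacts/admin.conf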

Kubespray generates an SSH configuration for reaching the cluster through the bastion:

cat ssh-bastion.conf


Host X.X.X.X
  Hostname X.X.X.X
  StrictHostKeyChecking no
  ControlMaster auto
  ControlPath ~/.ssh/ansible-%r@%h:%p
  ControlPersist 5m

Host  10.11.12.163 10.11.12.103 10.11.12.94 10.11.12.73 10.11.12.180
  ProxyCommand ssh -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -W %h:%p -p 22 ubuntu@195.15.244.254
Export the admin kubeconfig of your cluster in order to interact with it:
export KUBECONFIG=/Users/leopoldjacquot/code/kubespray/inventory/k8s/artifacts/admin.conf
kubectl get nodes

Check networking.

To get the most recent cluster-wide network connectivity report, run the following from any of the cluster nodes:

curl http://localhost:31081/api/v1/connectivity_check
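
Since the nodes are only reachable through the bastion, you can also run this check over SSH with the generated configuration (a sketch using the sample inventory IP above):

ssh -F ssh-bastion.conf ubuntu@10.11.12.163 curl -s http://localhost:31081/api/v1/connectivity_check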

You can list the pods that are running on your cluster.

kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   cilium-2zrbm                           1/1     Running   0          2d16h
kube-system   cilium-6t62k                           1/1     Running   0          2d16h
kube-system   cilium-9jln6                           1/1     Running   0          2d16h
kube-system   cilium-btvdw                           1/1     Running   0          2d16h
kube-system   cilium-n7vrp                           1/1     Running   0          2d16h
kube-system   cilium-operator-68ff55c94b-wfkw4       1/1     Running   0          2d16h
kube-system   kube-apiserver-k8s-master-0            1/1     Running   0          2d16h
kube-system   kube-apiserver-k8s-master-1            1/1     Running   0          2d16h
kube-system   kube-apiserver-k8s-master-2            1/1     Running   0          2d16h
kube-system   kube-controller-manager-k8s-master-0   1/1     Running   1          2d16h
kube-system   kube-controller-manager-k8s-master-1   1/1     Running   0          2d16h
kube-system   kube-controller-manager-k8s-master-2   1/1     Running   0          2d16h
kube-system   kube-proxy-6njkh                       1/1     Running   0          2d16h
kube-system   kube-proxy-dmw54                       1/1     Running   0          2d16h
kube-system   kube-proxy-g5tdj                       1/1     Running   0          2d16h
kube-system   kube-proxy-gbvjn                       1/1     Running   0          2d16h
kube-system   kube-proxy-pf56h                       1/1     Running   0          2d16h
kube-system   kube-scheduler-k8s-master-0            1/1     Running   0          2d16h
kube-system   kube-scheduler-k8s-master-1            1/1     Running   0          2d16h
kube-system   kube-scheduler-k8s-master-2            1/1     Running   0          2d16h
Use the Kubernetes API with a scoped token

Create a service account in admin_account.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin  # or any other name
  namespace: default
Bind the cluster-admin role to this service account in admin_role.yaml:
---
apiVersion:  rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-rbac
subjects:
  - kind: ServiceAccount
    # Reference to upper's `metadata.name`
    name: admin
    # Reference to upper's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Apply both files:
kubectl apply -f admin_account.yaml
kubectl apply -f admin_role.yaml
Retrieve the API server address:
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
Retrieve the scoped token of the service account (admin in this example):
TOKEN=$(kubectl get secret $(kubectl get serviceaccount admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode)

Communicate with the Kubernetes API, for example to list namespaces:

curl -X GET $APISERVER/api/v1/namespaces/ --header "Authorization: Bearer $TOKEN" --insecure
{
 "kind": "NamespaceList",
 "apiVersion": "v1",
 "metadata": {
   "resourceVersion": "772347"
 },
 "items": [
   {
     "metadata": {
       "name": "default",
       "uid": "ab798fdb-ea43-42b7-bdbb-a049ec3673b6",
       "resourceVersion": "199",
       "creationTimestamp": "2021-10-10T16:15:58Z",
       "managedFields": [
         {
           "manager": "kube-apiserver",
           "operation": "Update",
           "apiVersion": "v1",
           "time": "2021-10-10T16:15:58Z",
           "fieldsType": "FieldsV1",
           "fieldsV1": {"f:status":{"f:phase":{}}}
         }
       ]
     },
     "spec": {
       "finalizers": [
         "kubernetes"
       ]
     },
     "status": {
       "phase": "Active"
     }
   },
   {
     "metadata": {
       "name": "kube-node-lease",
       "uid": "9b5a0397-a59a-47b5-9637-11c0497f179c",
       "resourceVersion": "6",
       "creationTimestamp": "2021-10-10T16:15:56Z",
       "managedFields": [
         {
           "manager": "kube-apiserver",
           "operation": "Update",
           "apiVersion": "v1",
           "time": "2021-10-10T16:15:56Z",
           "fieldsType": "FieldsV1",
           "fieldsV1": {"f:status":{"f:phase":{}}}
         }
       ]
     },
     "spec": {
       "finalizers": [
         "kubernetes"
       ]
     },
     "status": {
       "phase": "Active"
     }
   },
   {
     "metadata": {
       "name": "kube-public",
       "uid": "06942f2a-ce84-45da-a661-c5fce770a8c0",
       "resourceVersion": "5",
       "creationTimestamp": "2021-10-10T16:15:56Z",
       "managedFields": [
         {
           "manager": "kube-apiserver",
           "operation": "Update",
           "apiVersion": "v1",
           "time": "2021-10-10T16:15:56Z",
           "fieldsType": "FieldsV1",
           "fieldsV1": {"f:status":{"f:phase":{}}}
         }
       ]
     },
     "spec": {
       "finalizers": [
         "kubernetes"
       ]
     },
     "status": {
       "phase": "Active"
     }
   },
   {
     "metadata": {
       "name": "kube-system",
       "uid": "c3536096-9c88-408d-ae57-2ceb39fad7e2",
       "resourceVersion": "4",
       "creationTimestamp": "2021-10-10T16:15:56Z",
       "managedFields": [
         {
           "manager": "kube-apiserver",
           "operation": "Update",
           "apiVersion": "v1",
           "time": "2021-10-10T16:15:56Z",
           "fieldsType": "FieldsV1",
           "fieldsV1": {"f:status":{"f:phase":{}}}
         }
       ]
     },
     "spec": {
       "finalizers": [
         "kubernetes"
       ]
     },
     "status": {
       "phase": "Active"
     }
   }
 ]
}

Example deployment

We'll deploy an echo server with external-dns, cert-manager and an ingress controller. We'll use the PROXY protocol to forward the client IP from the Octavia load balancer to the pod.
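
The referenced manifests are expected to take care of this, but for reference the PROXY protocol is enabled in two places: on the Octavia side via a Service annotation, and on the ingress-nginx side via its ConfigMap. A sketch, assuming cloud-provider-openstack and the standard ingress-nginx deployment names:

# ingress-nginx ConfigMap: make nginx parse the PROXY protocol header
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
---
# On the ingress-nginx Service of type LoadBalancer, the cloud-provider-openstack
# annotation that makes Octavia send the PROXY protocol header:
#   loadbalancer.openstack.org/proxy-protocol: "true"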

  • Generate a token on https://manager.infomaniak.com/v3/infomaniak-api
  • Edit the manifest files to replace the change_domain placeholder with your own domain name.
  • Edit kustomization.yaml and replace changeme with the generated token.

Apply the kustomization a first time:

  • kustomize build . | kubectl apply -f -

Some errors will appear because some CRDs are not created yet.

Wait for cert-manager and external-dns to be ready, then apply the kustomization a second time.

  • kustomize build . | kubectl apply -f -

A floating IP should be pending:

kubectl -ningress-nginx get service
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.233.20.165   <pending>     80:31683/TCP,443:30852/TCP   77s

After about a minute your floating IP should be assigned.

kubectl -ningress-nginx get service
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.233.20.165   195.15.245.99   80:31683/TCP,443:30852/TCP   3m46s
ingress-nginx-controller-admission   ClusterIP      10.233.18.79    <none>          443/TCP  

A floating IP and a load balancer appear in OpenStack:

openstack floating ip list
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| 0bb528d4-c29c-426c-ab08-95356900d91b | 195.15.245.99       | 10.11.12.24      | 7dddc5bf-037c-4988-8786-d9ea5d302ff1 | 0f9c3806-bd21-490f-918d-4a6d1c648489 | 3bef6ea32166448f96f188c81e25c4c6 |
| d4ae085a-33c4-43ac-8c53-84b71f061f40 | 195.15.245.220      | 10.11.12.81      | 5a2111ba-687b-4ba1-9d88-2ec86059c93c | 0f9c3806-bd21-490f-918d-4a6d1c648489 | 3bef6ea32166448f96f188c81e25c4c6 |
| e13db372-f2ba-4bf4-9376-9e0ee4309db1 | 195.15.246.232      | 10.11.12.73      | 5ca10481-3169-4faf-9e89-1fe4a2214bd3 | 0f9c3806-bd21-490f-918d-4a6d1c648489 | 3bef6ea32166448f96f188c81e25c4c6 |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
openstack loadbalancer list
+--------------------------------------+-------------------------------------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| id                                   | name                                                              | project_id                       | vip_address | provisioning_status | operating_status | provider |
+--------------------------------------+-------------------------------------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+
| de322525-6694-4004-9bf3-51a41cdc4388 | k8s-api                                                           | 3bef6ea32166448f96f188c81e25c4c6 | 10.11.12.81 | ACTIVE              | ONLINE           | amphora  |
| 13c78302-3e5c-4209-9b50-57b94f2915a3 | kube_service_cluster.local_ingress-nginx_ingress-nginx-controller | 3bef6ea32166448f96f188c81e25c4c6 | 10.11.12.24 | ACTIVE              | ONLINE           | octavia  |
+--------------------------------------+-------------------------------------------------------------------+----------------------------------+-------------+---------------------+------------------+----------+

Your ingress should be ready too.

kubectl -ndefault get ingress
NAME         CLASS    HOSTS                  ADDRESS         PORTS     AGE
echoserver   <none>   echoserver.change_domain   195.15.245.99   80, 443   5m55s

Your DNS A record should be set automatically by external-dns through the Infomaniak API.

host echoserver.change_domain
echoserver.change_domain has address 195.15.245.99

You can check the status of your Let's Encrypt certificate.

kubectl -ndefault get certificate
NAME         READY   SECRET            AGE
echoserver   True    echoserver-cert   20s

Finally, call the service. You should see a valid Let's Encrypt certificate and your real client IP address in the x-real-ip field.

curl https://echoserver.change_domain

Hostname: echoserver-5c79dc5747-rfj9f

Pod Information:
        -no pod information available-

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=10.233.68.3
        method=GET
        real path=/
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://echoserver.change_domain:8080/

Request Headers:
        accept=*/*
        host=echoserver.change_domain
        user-agent=curl/7.64.1
        x-forwarded-for=10.8.8.21
        x-forwarded-host=echoserver.change_domain
        x-forwarded-port=443
        x-forwarded-proto=https
        x-forwarded-scheme=https
        x-real-ip=x.x.x.x
        x-request-id=2706af401c9380eed3a7d09f40a466a7
        x-scheme=https

Request Body:
        -no body in request-
