Kubernetes is overkill for a blog

If you'd say that setting up a Kubernetes cluster for a simple blog is a bit overkill, you'd be right! But that doesn't stop me. This is how I'm learning to use these things.

I have been using Kubernetes at work for our game backend infrastructure, and it's awesome. The main reason to run my own cluster is to learn new things. And this blog isn't the only thing I'm hosting on it.

But now, let's talk about setting up the cluster with Terraform and hosting the blog on it.

As a side note, I wish Zola supported Terraform syntax highlighting out of the box.

Terraform

I'm using Terraform to set up the cluster on Hetzner, kube-hetzner to be more specific. It's a Terraform module that sets up a ready-to-use cluster and keeps it up to date. Here's how to set one up:

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = "${var.hcloud_token}"
  source = "kube-hetzner/kube-hetzner/hcloud"
  # version = "1.2.0"

  ssh_public_key = file("${var.ssh_public_key_path}")
  ssh_private_key = file("${var.ssh_private_key_path}")

  # Internal network location, this is currently the only value for all of europe
  network_region = "eu-central"

  # Control plane settings, 3 for HA (High Availability)
  control_plane_nodepools = [
    {
      name        = "control-plane-${var.cluster_location}",
      server_type = "${var.control_plane_node_instance_type}",
      location    = "${var.cluster_location}",
      labels      = [],
      taints      = [],
      count       = 3
    }
  ]

  # Define agent settings
  agent_nodepools = [
    {
      name        = "agent-${var.cluster_location}",
      server_type = "${var.worker_node_instance_type}",
      location    = "${var.cluster_location}",
      labels      = [],
      taints      = [],
      count       = 3
    }
  ]

  # Load balancer settings
  load_balancer_type     = "lb11"
  load_balancer_location = "${var.cluster_location}"

  cluster_name = "${var.cluster_name}"
  block_icmp_ping_in = true

  # Don't create a local kubeconfig file. For backwards compatibility this is set to true 
  # by default in the module but for automatic runs this can cause issues.
  # See https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/issues/349
  # The kubeconfig file can instead be created by executing: 
  # "terraform output --raw kubeconfig > cluster_kubeconfig.yaml"
  # Be careful to not commit this file!
  create_kubeconfig = false
}

provider "hcloud" {
  token = var.hcloud_token
}

terraform {
  required_version = ">= 1.2.0"
  # Use AWS S3 for state storage
  backend "s3" {
    bucket = "<bucket>"
    key = "<cluster-state-file>"
    region = "eu-north-1"
    encrypt = "true"
  }
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.35.1"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

output "kubeconfig" {
    value = module.kube-hetzner.kubeconfig_file
    sensitive = true
}
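
The config above references a handful of variables that aren't shown. For completeness, here's a minimal sketch of what they could look like; the defaults (the hel1 location and the cx21/cpx21 server types) are example values I picked, not something from the original setup:

variable "hcloud_token" {
  type      = string
  sensitive = true
}

variable "ssh_public_key_path" {
  type = string
}

variable "ssh_private_key_path" {
  type = string
}

variable "cluster_location" {
  type    = string
  default = "hel1" # Hetzner Helsinki
}

variable "control_plane_node_instance_type" {
  type    = string
  default = "cx21"
}

variable "worker_node_instance_type" {
  type    = string
  default = "cpx21"
}

variable "cluster_name" {
  type = string
}

With these in place, a plain terraform init and terraform apply brings the cluster up, and the kubeconfig can be exported with the output command mentioned in the comment above.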

That's about it. Pretty simple to set up a cluster, right?

What I like about this is that you can also spread your control plane and agents across multiple locations, which makes the setup even better from the HA standpoint. Like this:

# Control plane settings, 3 for HA (High Availability)
control_plane_nodepools = [
  {
    name        = "control-plane-${var.cluster_location1}",
    server_type = var.control_plane_node_instance_type,
    location    = var.cluster_location1,
    labels      = [],
    taints      = [],
    count       = 1
  },
  {
    name        = "control-plane-${var.cluster_location2}",
    server_type = var.control_plane_node_instance_type,
    location    = var.cluster_location2,
    labels      = [],
    taints      = [],
    count       = 1
  },
  {
    name        = "control-plane-${var.cluster_location3}",
    server_type = var.control_plane_node_instance_type,
    location    = var.cluster_location3,
    labels      = [],
    taints      = [],
    count       = 1
  },
]

I didn't do this, as it's not mission-critical for me to have this cluster running 100% of the time. And it presumably will stay up anyway, knock on wood.

Once the cluster is running, it's fairly trivial to deploy workloads to it. But since we're already using Terraform, wouldn't it make sense to automate more of the setup? So let's add a few things to the stack:

provider "kubernetes" {
  host                   = module.kube-hetzner.kubeconfig.host
  client_certificate     = module.kube-hetzner.kubeconfig.client_certificate
  client_key             = module.kube-hetzner.kubeconfig.client_key
  cluster_ca_certificate = module.kube-hetzner.kubeconfig.cluster_ca_certificate
}

provider "helm" {
  kubernetes {
    host                   = module.kube-hetzner.kubeconfig.host
    client_certificate     = module.kube-hetzner.kubeconfig.client_certificate
    client_key             = module.kube-hetzner.kubeconfig.client_key
    cluster_ca_certificate = module.kube-hetzner.kubeconfig.cluster_ca_certificate
  }
}

resource "helm_release" "prom-stack" {
  namespace = "prometheus"
  wait      = true
  timeout   = 600

  name = "kube-prometheus-stack"

  depends_on = [
    module.kube-hetzner.kubeconfig
  ]

  create_namespace = true

  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  version    = "v41.7.4"
}

resource "helm_release" "nginx-ingress-controller" {
  namespace = "ingress-nginx"
  wait      = true
  timeout   = 600

  name = "nginx-ingress-controller"

  depends_on = [
    module.kube-hetzner
  ]

  create_namespace = true

  repository = "https://helm.nginx.com/stable"
  chart      = "nginx-ingress"
  #version    = "latest"

  set {
    name  = "controller.replicaCount"
    value = 3
  }
}

resource "helm_release" "cert-manager" {
  namespace = "cert-manager"
  wait      = true
  timeout   = 600

  name = "cert-manager"

  depends_on = [
    helm_release.prom-stack
  ]

  create_namespace = true

  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  version    = "v1.10.0"

  set {
    name  = "installCRDs"
    value = "true"
  }
}

These will set up Prometheus, cert-manager, and NGINX ingress automatically whenever I decide to redeploy my cluster. It also means the majority of the configuration is automated, which makes it easier to move the cluster away from Hetzner if I ever need to.

I like the idea of Terraform for this reason. :)
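
As a bonus, the terraform block earlier already pulls in the gavinbunney/kubectl provider, which is handy for applying raw manifests that don't have a native Terraform resource, such as a cert-manager ClusterIssuer. Here's a sketch of what that could look like; the letsencrypt-prod issuer name, the email placeholder, and the use of Let's Encrypt at all are my assumptions rather than part of the original setup:

provider "kubectl" {
  host                   = module.kube-hetzner.kubeconfig.host
  client_certificate     = module.kube-hetzner.kubeconfig.client_certificate
  client_key             = module.kube-hetzner.kubeconfig.client_key
  cluster_ca_certificate = module.kube-hetzner.kubeconfig.cluster_ca_certificate
  load_config_file       = false
}

resource "kubectl_manifest" "letsencrypt-issuer" {
  depends_on = [
    helm_release.cert-manager
  ]

  yaml_body = <<-YAML
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: <your-email>
        privateKeySecretRef:
          name: letsencrypt-prod-account-key
        solvers:
          - http01:
              ingress:
                class: nginx
    YAML
}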

Hosting the Blog

Then, finally, you can set up a standard configuration for running the blog container in the cluster.

resource "kubernetes_namespace" "fullstackgamedev" {
  depends_on = [
    module.kube-hetzner
  ]
  metadata {
    name = "fullstackgamedev"
  }
}

resource "kubernetes_deployment" "fullstackgamedev-deployment" {
  metadata {
    name = "fullstackgamedev-deployment"
    namespace = "fullstackgamedev"
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "fullstackgamedev-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "fullstackgamedev-app"
        }
      }

      spec {
        image_pull_secrets {
          name = "dockerconfigjson-github-com"
        }

        container {
          image = "<your-container>:latest"
          name  = "fullstackgamedev-app"
          image_pull_policy = "Always"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "0.2"
              memory = "256Mi"
            }
            requests = {
              cpu    = "100m"
              memory = "50Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "fullstackgamedev" {
  metadata {
    name = "fullstackgamedev-service"
    namespace = "fullstackgamedev"
  }

  spec {
    selector = {
      app = "fullstackgamedev-app"
    }
    port {
      name = "web"
      protocol = "TCP"
      port = 8080
      target_port = 80
    }
  }
}

resource "kubernetes_ingress_v1" "fullstackgamedev" {
  metadata {
    name = "fullstackgamedev-ingress"
    namespace = "fullstackgamedev"
    annotations = {
      "nginx.ingress.kubernetes.io/rewrite-target": "/"
    }
  }

  spec {
    ingress_class_name = "nginx"
    rule {
      host = "fullstackgame.dev"
      http {
        path {
          path = "/"
          path_type = "Prefix"
          backend {
            service {
              name = kubernetes_service.fullstackgamedev.metadata[0].name
              port {
                number = kubernetes_service.fullstackgamedev.spec[0].port[0].port
              }
            }
          }
        }
      }
    }
  }
}
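
One thing the deployment above assumes is that the image pull secret dockerconfigjson-github-com already exists in the namespace; the pods can't pull from a private registry without it. A minimal sketch of creating it in Terraform, assuming the image lives in the GitHub Container Registry and using hypothetical github_username and github_token variables:

resource "kubernetes_secret" "github-pull-secret" {
  metadata {
    name      = "dockerconfigjson-github-com"
    namespace = kubernetes_namespace.fullstackgamedev.metadata[0].name
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    # The provider base64-encodes data values, so this stays cleartext here
    ".dockerconfigjson" = jsonencode({
      auths = {
        "ghcr.io" = {
          auth = base64encode("${var.github_username}:${var.github_token}")
        }
      }
    })
  }
}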

That's it for hosting the blog as a deployment with a service and an ingress. The configuration maps almost one-to-one to the native Kubernetes YAML format; it's the same thing expressed in Terraform.
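
The ingress above serves plain HTTP. Since cert-manager is already installed, a natural next step would be to request a certificate for the site; this sketch assumes the hypothetical letsencrypt-prod ClusterIssuer from earlier and only shows the parts of the ingress that change:

resource "kubernetes_ingress_v1" "fullstackgamedev" {
  metadata {
    name      = "fullstackgamedev-ingress"
    namespace = kubernetes_namespace.fullstackgamedev.metadata[0].name
    annotations = {
      "nginx.ingress.kubernetes.io/rewrite-target" = "/"
      # Tells cert-manager to issue a certificate for the hosts in the tls block
      "cert-manager.io/cluster-issuer" = "letsencrypt-prod"
    }
  }

  spec {
    ingress_class_name = "nginx"
    tls {
      hosts       = ["fullstackgame.dev"]
      secret_name = "fullstackgamedev-tls"
    }
    # ...the rule block stays the same as above...
  }
}

With that, cert-manager would issue and renew the certificate automatically, and the ingress would serve the blog over HTTPS.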
