

Getting Started with k0smotron


Need to deliver lots of Kubernetes clusters, dynamically, with minimum operations headaches? k0smotron – an operator-based multi-cluster manager – solves the problem … literally in minutes

k0s Kubernetes is a CNCF-validated, easily-customized, single-binary, almost-no-dependencies Kubernetes distribution that installs with one command pretty much anywhere (Intel or ARM, down to 512MB RAM, any popular Linux, in containers, etc.). The k0s project prides itself on keeping up with kubernetes.io: security patches in under 72 hours, minor releases validated and provided to users in days. k0s brings along its own CLI and kubectl, with additional open source deployment and operations tools available from Mirantis (e.g., k0sctl and Lens Desktop).

Users now run k0s on IoT devices, laptops, edge hardware, and in VM- and bare-metal-hosted datacenters, tested to 1500+ nodes. This year, Team k0s has introduced a set of operators aimed at making k0s even simpler to maintain:

  • An Autopilot k0s operator that manages safe updates according to user-set Plans

  • A Kubernetes Cluster API (CAPI) operator, with providers and extensions, that lets you use ClusterAPI as a (Kubernetes-native) vehicle for cluster operations

k0s also supports Konnectivity - a protocol enabling secure bidirectional communication between Kubernetes workers and control planes, even when separated by passive firewalls. Konnectivity enables more complete (and ‘just works’ simple) control plane/worker separation with less network fiddling.

Enter k0smotron

To see k0smotron in action, check out this Tech Talk showcasing its capabilities.

k0smotron is the next step: another open source operator (it runs on any CNCF-validated Kubernetes) that lets you host, scale, and lifecycle-manage containerized k0s control planes on a Kubernetes cluster using Kubernetes-native methods – and then configure and attach workers to these virtual control planes from anywhere. k0smotron is being built to solve big challenges facing organizations that want to use Kubernetes with agility and cost-efficiency, with minimal need for platform engineering, special skills, and operations staffing.

Some of these challenges are perennial, e.g., “How do I deliver and manage Kubernetes clusters for developers and teams? How do I enable safe, simple self-service so my teams can get what they need quickly, and leverage modern cloud native development techniques like blue/green and canary deployments?”

Other challenges are more nuanced (and fun, and potentially disruptive). Examples: “How can I enable Kubernetes-as-a-service with hosted control planes and remote workers/storage on customer sites?” Or “How can I build a distributed Kubernetes architecture for hosting containerized applications at many edge locations, while centralizing control planes and operations?” Or  “How do I create a vast IoT application with hosted/distributed control planes and tens of thousands of tiny worker nodes performing tasks at mobile endpoints?”

How to Build a Multi-Cluster Kubernetes Solution with k0s/k0smotron

This article provides a simple recipe for building a bare-bones multicluster Kubernetes-hosting system with k0s and k0smotron. Anyone comfortable with Linux commands, AWS EC2, and Kubernetes basics should have no trouble following it. Our recipe should be readily adaptable to other clouds, desktop virtual machines, Linux distros, etc.

What we’re building:

  • A single-node ‘mothership’ k0s cluster to host k0smotron. We’ll put this on one AWS t2.large VM and deploy it quickly with k0sctl.

  • One child cluster k0s control plane, which we’ll start using k0smotron, on the above node.

  • Another AWS t2.large VM for configuration as a k0s worker node – that we’ll attach to our child control plane. We’ll set the worker node up manually using k0s one-line install.

  • (Optional) Another k0s worker node on an 8GB Raspberry Pi 4 (set up the same way) that we’ll also attach to our child control plane. This is to underscore:

    • That you actually can attach worker nodes from anywhere

    • That remote workers can indeed be IoT-scale.

All nodes (and our Raspberry Pi) will be running Ubuntu 22.04 LTS. The two AWS VMs can run the standard AWS-supplied Ubuntu 22.04 LTS AMI. The Pi 4 can also run Ubuntu 22.04 LTS out of the box – either from a microSD card, or from a USB stick after configuring the Pi firmware to boot from USB – with the image written using the Raspberry Pi Imager.

Pre-requisites:

  1. A laptop or desktop VM (e.g., VirtualBox) to deploy and control things from. Handiest if this runs desktop Linux – e.g., Ubuntu 22.04 LTS desktop. This lets you run a browser and Lens on it locally, and is generally convenient.

  2. Access to an AWS account, and credentials for an IAM user appropriately empowered to deploy things (ideally not the root user).

  3. An AWS SSH keypair for that user. This will be used when provisioning VMs and will enable logging into them (without passwords) after they start. AWS keeps the public key and injects it into your instances; you'll need a copy of the private key on your laptop (typically stored in the ~/.ssh folder).
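If you don't yet have a keypair, you can create one in the EC2 console or with the AWS CLI. A minimal sketch (the key name below is hypothetical; adjust the path to taste):

$ aws ec2 create-key-pair --key-name k0smotron-demo \
    --query 'KeyMaterial' --output text > ~/.ssh/k0smotron-demo.pem
$ chmod 400 ~/.ssh/k0smotron-demo.pem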

One note:

  • K0sctl requires passwordless sudo access for the user on target machines. This is set up by default for the ‘ubuntu’ default user when you start a VM using an AWS standard image. On other VMs, or on other platforms (e.g., desktop VMs, your Raspberry Pi, etc.), you may need to set it up.
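On those machines, one way to grant it is a sudoers drop-in file. A minimal sketch, assuming the login user is named ubuntu (substitute your own username):

$ echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-ubuntu-nopasswd
$ sudo chmod 0440 /etc/sudoers.d/90-ubuntu-nopasswd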

Initial Preparations - Install Lens Desktop and k0sctl on your command/control machine

To begin, fire up your Linux laptop/VM and visit the Lens Desktop website in a browser. Download the appropriate binary, install it locally, make it executable if necessary, and add it to your $PATH.

Next, visit the k0sctl Releases page, download the latest release of k0sctl, make it executable if necessary, and add it to your $PATH.
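On a Linux x64 machine, that might look something like the following (release asset names and version tags vary, so confirm the exact URL on the Releases page before copying):

$ curl -sSLo k0sctl https://github.com/k0sproject/k0sctl/releases/latest/download/k0sctl-linux-x64
$ chmod +x k0sctl && sudo mv k0sctl /usr/local/bin/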

Create an AWS security group for your project

It helps to set this up before you launch VMs. The idea is to create a security group that does three things:

  • Allows all inbound IPv4 traffic from the public IP of your local network (assuming you’re connected to the internet via a standard home router that uses NAT). This lets you log into and control machines you create on EC2. If your Raspberry Pi (optional) is also on your local network, this will also let that remote worker talk to the k0smotron child cluster control plane.

  • Allows all inbound and outbound traffic among machines in this security group, i.e., your k0s/k0smotron mothership machine and your AWS-resident worker node. Even when using the same security group, AWS instances can’t talk to each other by default.

  • Allows all outbound traffic to anywhere (i.e., CIDR 0.0.0.0/0).

Note that this security group setup is very lax and basic – inappropriate for production or for anything you intend to use over time. But it has the benefit (for demos like this) of not getting in the way and making you scratch your head about why Part A can’t talk to Part B.

Setting it up is a two-phase process, because you need the group's ID before you can reference the group in its own rules:

  1. Create a new security group in your VPC (call it k0smotron-project or something obvious), and add one inbound rule to it allowing All Traffic, with the source set to ‘My IP’ (AWS figures this out for you and offers it in a popdown).

  2. Also create an outbound rule enabling machines in this security group to send All Traffic to anywhere (0.0.0.0/0).

  3. Click submit to save, and note the security group ID (it looks like sg-xxxxxxxxxxxx).

  4. Now edit the inbound rules. Add a new rule for All Traffic, where Source is the ID of your security group (it will appear in the popdown list under ‘Security groups’).

  5. Finally, edit the outbound rules. Add a new rule for All Traffic, where Destination is again the ID of your security group.

  6. Click submit to save.
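If you prefer the AWS CLI, here is a rough equivalent of the console steps above (the VPC ID and public IP are hypothetical placeholders; a newly created group already allows all outbound traffic by default, so explicit egress rules are omitted):

$ SG_ID=$(aws ec2 create-security-group \
    --group-name k0smotron-project \
    --description "k0smotron demo" \
    --vpc-id vpc-0123456789abcdef0 \
    --query GroupId --output text)

# Allow all inbound traffic from your own public IP
$ aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=198.51.100.7/32}]'

# Allow all traffic between members of the group
$ aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --ip-permissions "IpProtocol=-1,UserIdGroupPairs=[{GroupId=$SG_ID}]"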

Launch two AWS t2.large VMs and update them

From the EC2 console, launch two t2.large VMs (2 vCPUs, 8GB RAM) with SSDs (20GB is safe). Designate your SSH keypair for access, and select your new security group.
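If you'd rather launch from the CLI, a sketch follows. The AMI ID is a placeholder for the Ubuntu 22.04 LTS image in your region, the key name and security group ID are the ones created above, and /dev/sda1 is the root device name used by the standard Ubuntu AMIs:

$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.large \
    --count 2 \
    --key-name k0smotron-demo \
    --security-group-ids sg-0123456789abcdef0 \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20,VolumeType=gp3}'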

Perform some quick pre-flight checks

  • Be sure that you can SSH from your configuration laptop/VM to the public IP addresses of your AWS machines and log in using your key without providing a password (the default username for Ubuntu instances is ‘ubuntu’).

  • Check that you can run commands with sudo without the host requiring a password (k0sctl requires this).

  • Perform updates and upgrades on your AWS instances for the sake of hygiene (sudo apt update followed by sudo apt upgrade). Restart the machines if advised to do so.

  • Check that your two instances can ping one another. This is not definitive, but can help identify potential security group issues.
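From your laptop, the first three checks can be scripted in one pass per instance (the key path and IP below are placeholders):

$ ssh -i ~/.ssh/k0smotron-demo.pem ubuntu@<instance-public-ip> 'sudo -n true && echo passwordless sudo OK'
$ ssh -i ~/.ssh/k0smotron-demo.pem ubuntu@<instance-public-ip> 'sudo apt update && sudo apt -y upgrade'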

Deploy your k0s “mothership”

Create a k0sctl.yaml file for k0sctl on your laptop as follows, filling in the public IP address of your first AWS instance (address) and the path to your SSH private key (keyPath):

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
  - role: controller+worker
    installFlags:
      - --no-taints
    ssh:
      address: 
      keyPath: 
      user: ubuntu
  k0s:
    version: v1.28.2+k0s.0

This describes a minimal single-node cluster in k0s’ default configuration. The --no-taints flag enables workloads to be scheduled on the same node as the control plane (not a best practice, but okay for a demo).

Apply the above config with k0sctl to build your one-node mothership on your first AWS instance.

$ k0sctl apply --config path/to/k0sctl.yaml

Deployment is usually complete in under three minutes. Please check the docs for more details on k0sctl usage.

Once the cluster is deployed, you can use k0sctl to retrieve a kubeconfig for the mothership cluster, and write it to a local k0s.config file:

$ k0sctl kubeconfig --config path/to/k0sctl.yaml > k0s.config

You can then start Lens Desktop, click + to add a new cluster, and copy/paste the kubeconfig into the editable pane. Click Save and your cluster will be added to Lens. Double-click on the cluster name to connect and view its internals. More information on using Lens Desktop can be found here.

Note that controller-only k0s nodes don’t normally show up as Kubernetes Nodes (in Lens or kubectl), because they don’t run a kubelet. In this case, we’ve created a single controller+worker node, so what you’re seeing identified as a Node here is its worker part.
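If you prefer the command line, you can also point kubectl at the same kubeconfig file. For example, assuming you saved k0s.config in your working directory and have kubectl installed locally:

$ KUBECONFIG=./k0s.config kubectl get nodes -o wide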

Install k0smotron

Once you’ve connected to your Mothership cluster with Lens Desktop, you can use the Lens built-in terminal to issue kubectl commands against the cluster.

To install the k0smotron operator, click Terminal at the bottom left of the Lens UI, and enter the following command:

$ kubectl apply -f https://docs.k0smotron.io/stable/install.yaml
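Installation takes only a few seconds. To confirm the operator is up (its pod and namespace names may vary between releases, so we simply grep for them), run:

$ kubectl get pods -A | grep -i k0smotron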

Deploy a Child Cluster Control Plane

Now we’re going to use k0smotron to create a minimal virtual control plane for our child cluster. Create a file with an appropriate name (we called ours k0smotron-child-1), inserting the public IP address of your Mothership host as the externalAddress. You’ll later communicate with the Kubernetes API of your child cluster at externalAddress:apiPort.

A note on versions: this won’t always be true, but for the moment, it’s important that the version of k0s used to create the child cluster control plane and the version of k0s you’ll later use to create worker nodes are the same. Check versions on the k0s Releases page. Version v1.28.2-k0s.0 (see below, next to the key k0sVersion) was the latest at the time of writing; note that the container image tag uses a hyphen where the k0s binary version string uses a plus (v1.28.2+k0s.0).

apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: k0smotron-child-1
spec:
  replicas: 1
  k0sImage: k0sproject/k0s
  k0sVersion: v1.28.2-k0s.0
  externalAddress: 
  service:
    type: NodePort
    apiPort: 32443
    konnectivityPort: 32132
  persistence:
    type: emptyDir
  k0sConfig:
    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    spec:
      network:
        calico: null
        provider: kuberouter
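Save the manifest, then apply it to the Mothership from the Lens terminal, using the file name you chose above (ours was k0smotron-child-1):

$ kubectl apply -f k0smotron-child-1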

Obtain a Kubeconfig for your Child Cluster

Your child cluster control plane will be active in just a minute or two. Meanwhile, you can retrieve a kubeconfig for the child cluster from the Mothership by entering:

$ kubectl get secret k0smotron-child-1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ~/.kube/k0smotron-child-1.conf

Open the file k0smotron-child-1.conf and copy/paste its contents to create a new cluster in Lens. Connect, and you should see basic information from your child cluster. For the moment, no Nodes will be visible (as noted above, a Node is something that runs a kubelet, and nothing in this hosted control plane does yet). If you use Lens to look at Pods and set the filter to All namespaces, you’ll see only two pods present – one for coredns and the other for metrics-server – both in a Pending state. This is normal for a worker-free child control plane running in k0smotron: more pods will appear when you add a worker, and pod states will shift to Running.
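You can run the same checks from the command line by pointing kubectl at the child cluster’s kubeconfig:

$ kubectl --kubeconfig ~/.kube/k0smotron-child-1.conf get pods -A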

Obtain a Join Token for Workers

Next, you need to obtain a Join Token that can be used to authenticate workers as you add them to your hosted child control plane. To do this, begin by creating a JoinTokenRequest config file (we called ours ‘k0smotron-child-1-join-token-request’).

apiVersion: k0smotron.io/v1beta1
kind: JoinTokenRequest
metadata:
  name: my-token
  namespace: default
spec:
  clusterRef:
    name: k0smotron-child-1
    namespace: default

You’ll provide your own values for the token name (ours is my-token) and identify the virtual control plane by the name you assigned it when it was created (our control plane/cluster name is k0smotron-child-1).

Once the file is created, apply it to the Mothership cluster with kubectl:

$ kubectl apply -f k0smotron-child-1-join-token-request

You can then retrieve the secret and save it in a local file (ours is called k0smotron-child-1-join-token) as follows:

$ kubectl get secret my-token -o jsonpath='{.data.token}' | base64 -d > k0smotron-child-1-join-token

Create a k0s Worker and Join it to your Cluster

Almost done! Our last step is to create a k0s worker node and join it to our child cluster.

To do this, we’ll start by SSH’ing to the second AWS instance, prepared earlier. We need to transport our join token to this machine, which we can do with scp or by simply copying and pasting via text editor (on the source machine) and vi/vim (on the destination server). We’ll call the token file k0smotron-child-1-join-token here, as well.
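For example, with scp (the key path and worker IP below are placeholders):

$ scp -i ~/.ssh/k0smotron-demo.pem ./k0smotron-child-1-join-token ubuntu@<worker-public-ip>:~/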

Next we enter the following command, using sudo:

$ curl -sSLf https://get.k0s.sh | sudo K0S_VERSION=v1.28.2+k0s.0 sh

Note that we’re choosing a k0s version to match the version used to create our child cluster control plane, rather than letting the script select the latest version automatically. When this command executes, it downloads k0s and makes it executable.

Finally, we can use the k0s client to install our worker node as a service, handing it the join token file.

$ sudo k0s install worker --token-file ./k0smotron-child-1-join-token

And then we can start the k0s service, which runs the worker and joins it to the child cluster:

$ sudo k0s start
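After a minute or so, you can verify that the worker came up and registered with the child control plane. A quick sketch (the first command runs on the worker, the second on your laptop):

$ sudo k0s status
$ kubectl --kubeconfig ~/.kube/k0smotron-child-1.conf get nodes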

Add More Workers!

k0smotron lets a cloud-resident virtual control plane manage workers running anywhere – potentially on radically different platforms. To test this, we used exactly the same steps (and the same token) to create a k0s worker on a Raspberry Pi 4 in our home lab (also running Ubuntu 22.04 LTS) and join it to our control plane as a second worker. Here’s the Nodes view from Lens, showing both our workers Ready.

Tearing Things Down

When you’re finished, tear everything down as follows:

Child cluster workers: SSH to each worker and enter:

$ sudo k0s stop
$ sudo k0s reset
$ sudo shutdown

This resets k0s to the state it was in when first downloaded (but does not remove the binary from the hosts). A reboot (or, since we’re tearing these workers down anyway, a shutdown) is recommended to ensure that all reset operations are completed.

Join token: Connect to the Mothership cluster with Lens, open a terminal, and type:

$ kubectl get secrets

… to list the secrets. You should see the name of your join token in that list. Then:

$ kubectl delete secret my-token

Child cluster control plane: To tear down the child cluster control plane, again connect to the Mothership cluster with Lens, open a terminal, and type:

$ kubectl delete -f k0smotron-child-1

… where k0smotron-child-1 is the name of the configuration file you applied to build the control plane on k0smotron.

Mothership cluster: Back to your configuration laptop, we can use k0sctl to reset the Mothership:

$ k0sctl reset --config k0sctl.yaml

If you intend to use these instances for something else, reboot them to ensure the reset completes. Otherwise, just use the AWS EC2 web UI to terminate them.

Join our Community!

At Mirantis, we’re working hard to make k0s and k0smotron ever more powerful, flexible, and easy to use. Please visit us at https://k8slens.dev, https://github.com/k0sproject/k0s, and https://github.com/k0sproject/k0smotron, try out these fast-evolving projects, and join our community for regular events, demos, and news about Kubernetes.

