Jun 30, 2020 | Jessica Stenning
The SUSE Containers as a Service (CaaS) Platform deployment on AWS recently went into technology preview (as of version 4.1.2). I spent a few days going through the docs and deploying my own SUSE CaaS Platform (v4.2.1) on AWS, and here are my tips for a smooth deployment.
This blog post makes reference to “Master” nodes. This terminology is used only to maintain consistency with the official documentation.
First off, what is SUSE CaaS Platform? As SUSE have put it: ‘SUSE CaaS Platform is an enterprise class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services’.
Or, if that’s too much of a mouthful for you, think Kubernetes with bells and whistles on.
It uses:
Here’s the reference architecture for CaaS Platform 4.2.X:
To put it plainly: you’re an enterprise organisation that wants to utilise the benefits of Kubernetes (reduced infrastructure costs, faster application delivery cycle times, all-round improvements to productivity and so on), BUT, being an enterprise, you need a solution that will get sign-off from the security team - a fully supported solution based on a robust container OS helps tick those boxes.
Additionally, because SUSE’s certified distribution of upstream Kubernetes utilises only Kubernetes’ features and APIs (no unnecessary additional layers or special APIs), there’s no vendor lock-in. If you think one day you might want to move Kubernetes to another cloud provider, or utilise Kubernetes across multiple public and private clouds (or non-cloud resources), SUSE CaaS Platform provides an out of the box portable solution.
This blog post is based on the deployment of SUSE CaaS Platform on AWS, rather than any ongoing maintenance or administration.
The process to deploy on AWS is documented, but as previously mentioned, it’s currently in tech preview, and there are a couple of caveats to getting it up and running quickly and successfully. The following steps have been written to be read alongside the official SUSE documentation, not instead of it.
Registering for a free trial is easily done via the SUSE website. Once you’ve gone through the registration process you’ll be given a trial registration code - keep a note of this, as you’ll be using it at various stages of the deployment process. A trial of SUSE CaaS Platform lasts 2 months.
Firstly you’ll need an instance to bootstrap the entire deployment process. The initial deployment step for CaaS Platform on AWS utilises some SUSE-defined Terraform templates to spin up the infrastructure for your cluster. Before you can do that you’ll need somewhere to run that Terraform from, and as the docs state, you’ll need to be running SUSE Linux Enterprise Server (SLES) 15 SP1 to install those packages.
As someone with no experience deploying or operating SLES, it took me a little while to work out the best way to spin up one of these instances to act as my Management instance.
When you register for your free trial of SUSE CaaS Platform you’re given access to some downloadable material, including ISO images for the SLES server with or without packages bundled in (visible in the screenshot above). I started out creating an instance from the SLES ISO image in VirtualBox to get up and running quickly. While this technically worked, it was a bit painful to get things done, and I wanted to look at automated deployments down the line, so I pivoted to using AWS instead.
While AWS offers a standard SLES 15 SP1 in its marketplace, it is (currently) not possible to add the CaaS Platform repos to this instance. This took me a while to work out, and as far as I can see, isn’t currently documented.
So, while the first paragraph of the deployment docs states that you need a workstation running SLES 15 SP1 or equivalent, the ‘standard’ AWS AMI is not sufficient.
Instead, using my previous VirtualBox deployment, I identified the AMI that the Terraform uses to deploy the Master and Worker nodes (currently ami-020aaee0bf8836bf0) and used it to create my Management workstation. The docs state that you can run cluster bootstrap commands from the Master node, so I took an educated guess that this AMI would have everything needed, and it did!
Again, this isn’t documented as a supported method of implementation, but it was effective in my case.
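If you’d rather launch that Management workstation from the AWS CLI than the console, a minimal sketch looks something like this (the instance type, key pair name and security group are placeholders of my choosing; the AMI ID is the one above and is region-specific):

aws ec2 run-instances \
  --image-id ami-020aaee0bf8836bf0 \
  --instance-type t3.medium \
  --key-name <YOUR_KEY_PAIR> \
  --security-group-ids <SG_THAT_ALLOWS_SSH> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=caasp-management}]'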
When creating the SSH key on your Management instance in the first step of the deployment instructions, make sure to run ssh-keygen -t rsa rather than copy-pasting the ssh-keygen -t ed25519 command as suggested. You’ll hit issues with AWS key incompatibility during the Terraform stages later on if you don’t.
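For completeness, the key generation step is a single command; the key size, file path and comment flags below are optional additions of mine, not from the docs:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "caasp-cluster-key"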
Go through the tool installation steps to get Terraform and skuba installed on the Management instance along with the necessary configuration files (you’ll need your trial registration code to complete these steps).
Note: skuba is the SUSE-built CLI that wraps around kubeadm to simplify deployments and upgrades of kubeadm-based clusters.
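For reference, on my Management instance those installation steps boiled down to registering the relevant products with SUSEConnect and pulling the tools in with zypper. The sketch below is from memory of the 4.2 docs, so the exact module versions and pattern name may differ - follow the official steps rather than this verbatim:

sudo SUSEConnect -r <REGISTRATION_CODE>
sudo SUSEConnect -p sle-module-containers/15.1/x86_64
sudo SUSEConnect -p caasp/4.2/x86_64 -r <REGISTRATION_CODE>
sudo zypper in -t pattern SUSE-CaaSP-Management    # pulls in skuba and Terraform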
Once you’ve completed these steps you can navigate to the AWS-specific deployment instructions. As instructed, fill out the values required in the terraform.tfvars file, and then run a terraform apply to spin up your infrastructure (you’ll need your trial registration code for this step too).
You might need to install vim to make filling this file out easier; if that’s the case, do it with zypper:
sudo zypper in vim
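With an editor available, the Terraform part of the run is the usual init/plan/apply cycle from the directory containing SUSE’s AWS templates. I’m assuming here that the templates ship a terraform.tfvars.example to copy from, and the directory path is just a placeholder; if the example file isn’t there, edit terraform.tfvars directly:

cd <path-to-suse-aws-terraform-templates>
cp terraform.tfvars.example terraform.tfvars
vim terraform.tfvars            # stack name, registration code, SSH key, node counts, etc.
terraform init
terraform plan
terraform apply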
Seeing as the Worker nodes aren’t assigned a public IP, and the Management instance is deployed to a different VPC than the Master and Worker nodes, I found that the easiest way to bootstrap the cluster is from the Master node.
This means that your Master node will need SSH access to the nodes you intend to add to the cluster; it’ll also need skuba installed to perform the bootstrapping.
You’ll need the skuba CLI installed on the Master node to complete the bootstrap process. Run the commands included in the ‘Preparation’ section to set this up. Here the <PRODUCT-KEY> is your SUSE CaaS Platform registration code. You will need to run these commands with root privileges.
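In practice those Preparation commands amount to registering the CaaS Platform product on the Master node and installing skuba. A hedged sketch - the product identifier and pattern name here are from memory rather than the docs:

sudo SUSEConnect -p caasp/4.2/x86_64 -r <PRODUCT-KEY>
sudo zypper in -t pattern SUSE-CaaSP-Management    # provides the skuba CLI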
Rather than copying over the private key from the Management instance, use ssh-agent forwarding to allow the Master node to use the local ssh-agent on the Management instance.
From the Management node run:
ssh -A ec2-user@<MASTER_PUBLIC_IP>
This will give you SSH access to the other nodes in the cluster, which is required for bootstrapping the cluster with skuba.
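If the agent isn’t already running on the Management instance, the full sequence looks roughly like this (assuming the key you generated earlier lives at the default ~/.ssh/id_rsa):

eval "$(ssh-agent -s)"                # start the agent if it isn't running already
ssh-add ~/.ssh/id_rsa                 # load the key you generated earlier
ssh -A ec2-user@<MASTER_PUBLIC_IP>    # -A forwards the agent to the Master node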
Now it’s just a case of initialising your cluster. You can get your Load Balancer IP/FQDN from the terraform output on your Management instance, or directly from the AWS console. Once your cluster’s initialised you can bootstrap the nodes.
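The initialisation step itself is a single skuba command run on the Master node; the cluster name below is just my choice:

skuba cluster init --control-plane <LB_IP_OR_FQDN> my-cluster
cd my-cluster    # subsequent skuba commands are run from this cluster definition directory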
A point to note when bootstrapping with skuba: on AWS the user will be ec2-user, not sles. So, your first skuba bootstrap command will look like this:
skuba node bootstrap --user ec2-user --sudo --target <NODE_IP/FQDN> <NODE_PRIVATE_DNS>
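Subsequent nodes are added with skuba node join rather than bootstrap; a sketch, with placeholder node details:

skuba node join --role master --user ec2-user --sudo --target <NODE_IP/FQDN> <NODE_PRIVATE_DNS>   # additional Master nodes
skuba node join --role worker --user ec2-user --sudo --target <NODE_IP/FQDN> <NODE_PRIVATE_DNS>   # Worker nodes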
Once you’ve joined all of your Master and Worker nodes to the cluster you’re done! You should be able to see a happy cluster with skuba cluster status, and install kubectl to start interacting with your cluster and deploying pods!
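As a final check, from the cluster definition directory on the Master node you can point kubectl at the admin kubeconfig that the bootstrap generates (how you install kubectl itself will vary):

skuba cluster status                  # should show all nodes joined and healthy
mkdir -p ~/.kube
cp admin.conf ~/.kube/config          # admin.conf is written into the cluster directory during bootstrap
kubectl get nodes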
For guidance on ongoing maintenance and administrative tasks associated with your SUSE CaaS Platform cluster, see the official Administration Guide.