In this blog post, the first in a series, we will take a look at Kubernetes, and more specifically at the differences between local setups and setups on a cloud provider like AWS or DigitalOcean. We will look at how to spin up a basic cluster both locally and in the cloud.
If you are new to Kubernetes, I recommend checking out Cassandra Lunch #41, where our very own Rahul Singh explains the core concepts of Docker containers and how Kubernetes builds on them.
In general, Kubernetes can run almost anywhere, but some features integrate only with supported cloud providers; for example, persistent volumes and external load balancers depend on cloud provider integrations. First, we will spin up a local single-node cluster using Minikube. To spin up a highly available, production-grade cluster we will use kops with AWS.
Minikube is a tool that makes it easy to run Kubernetes locally by running a single-node cluster. It is aimed at users who want to try out Kubernetes or develop against it; it cannot spin up a production cluster, since a single node offers no high availability.
The good news is that it works on Windows, Linux, and macOS. You can run Minikube inside a VM, but if you have Docker installed, using the Docker driver is easier. To get started, follow this link and select your system.
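For reference, on Linux the installation is just a download and a copy into your PATH (a sketch based on the Minikube release layout; adjust the binary name for your OS and architecture):

```shell
# Download the latest Minikube release binary for Linux (amd64)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Install it into your PATH
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```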
minikube start will launch a one-node cluster
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube dashboard will enable the dashboard add-on and open the proxy in the default web browser.
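To confirm the echoserver deployment actually came up, you can check the pod and ask Minikube for the service URL (a sketch; the URL and pod name will differ on your machine):

```shell
# Check that the hello-minikube pod reaches the Running state
kubectl get pods

# Print the URL where the NodePort service is reachable
minikube service hello-minikube --url

# Hit the echo server; it responds with details of the request
curl "$(minikube service hello-minikube --url)"
```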
Congratulations, you just spun up your first Kubernetes cluster using Minikube and Docker. There is an even easier way to do this: Docker Desktop. If you go to Settings and check Enable Kubernetes, Docker will do all of this for you in just a few seconds.
The Minikube documentation is excellent and easy to follow, so if you want to dive deeper than just starting a cluster, I recommend working through it.
To set up Kubernetes on AWS we use a tool called kops, which stands for Kubernetes Operations. It allows us to do production-grade Kubernetes installation, upgrades, and management. One thing to note: kops only runs on macOS and Linux, so if you are already on one of these operating systems, great; if you are on Windows, like me, you will have to boot a virtual machine. The easiest option is VirtualBox with an Ubuntu image.
Before we start we need to do a few things in AWS. First, create an account if you haven't already. Then, using the IAM service, add a new user with an access key ID and secret access key, which we will use to configure kops, and give this user Administrator Access. Next, use the S3 service to create a bucket for the kops state, in the region closest to you. Finally, the DNS: using Amazon Route 53 you can either register a new domain name or use an existing one, and create a hosted zone for it. This can be a little tricky, so I suggest reading up on how to configure DNS for kops.
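If you prefer the command line over the S3 console, the state bucket can also be created with the AWS CLI once it is installed and configured later in this post (a sketch; the bucket name and region below are examples, and kops recommends enabling versioning on the state bucket):

```shell
# Create the kops state bucket (bucket names must be globally unique)
aws s3api create-bucket \
  --bucket kops-state-b420a \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Enable versioning so earlier cluster state can be recovered
aws s3api put-bucket-versioning \
  --bucket kops-state-b420a \
  --versioning-configuration Status=Enabled
```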
After we set up our work environment, we need to install the latest release of kops.
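On Linux this amounts to downloading the release binary and putting it on your PATH (a sketch; check the kops releases page for the asset matching your OS and architecture):

```shell
# Download the latest kops release binary for Linux (amd64)
curl -Lo kops https://github.com/kubernetes/kops/releases/latest/download/kops-linux-amd64
chmod +x kops

# Move it into your PATH
sudo mv kops /usr/local/bin/kops
```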
Then we will need python-pip in order to install the AWS CLI:
sudo apt-get install python-pip
sudo pip install awscli
Then we run aws configure and enter the access key ID and secret access key from the IAM user we created; this will create two files, config and credentials. Next, we need to install kubectl.
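kubectl can be installed the same way as kops, by downloading the latest stable binary (a sketch for Linux amd64, following the upstream download layout):

```shell
# Look up the latest stable version, then download that kubectl binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl

# Move it into your PATH
sudo mv kubectl /usr/local/bin/kubectl
```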
Now, finally, we can create a simple cluster with one master node and two worker nodes in the us-west-2a zone:
kops create cluster --name=play.anantkops.net --state=s3://kops-state-b420a --zones=us-west-2a --node-count=2 --node-size=t3.micro --master-size=t3.micro --dns-zone=play.anantkops.net
kops update cluster play.anantkops.net --state=s3://kops-state-b420a --yes
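Provisioning takes a few minutes, and kops can tell you when the cluster is actually ready. Exporting the state store also saves you from repeating the --state flag on every command (a sketch, using the bucket and cluster name from the example above):

```shell
# Avoid passing --state to every kops command
export KOPS_STATE_STORE=s3://kops-state-b420a

# Re-run until the cluster and all nodes report as ready
kops validate cluster --name play.anantkops.net
```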
To check that our nodes are up, we can either run kubectl get nodes or go to the AWS console and see the instances there.
We looked at two ways to spin up a cluster. Locally, we can spin one up easily and use it for testing and deploying a first app. In the cloud, however, we can do much more, and we will look into that in the next part.
Cassandra.Link is a knowledge base that we created for all things Apache Cassandra. Our goal with Cassandra.Link was to not only fill the gap of Planet Cassandra but to bring the Cassandra community together. Feel free to reach out if you wish to collaborate with us on this project in any capacity.
We are a technology company that specializes in building business platforms. If you have any questions about the tools discussed in this post or about any of our services, feel free to send us an email!
Subscribe to our monthly newsletter below and never miss the latest Cassandra and data engineering news!