
Kubernetes 101 & 102

Let's dive into Kubernetes (I will refer to it as k8s from here on out).

What is k8s? According to Wikipedia:

Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools, including Docker. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

Now that that's out of the way, let's get started. I want to set up a k8s cluster that has both dynamic NFS provisioning and dynamic Ceph provisioning for persistent volumes.

OS INSTALL

hostname  IP            vCPU  Mem   2nd disk (/dev/sdb)  role
server0   192.168.1.40  2     8G    1TB                  NFS server, k8s master, ansible
server1   192.168.1.41  4     16G   500G                 k8s node
server2   192.168.1.42  4     16G   500G                 k8s node
server3   192.168.1.43  4     16G   500G                 k8s node

In addition to the specs above, each server's main disk is 25G with default partitioning, and CentOS 7 was installed using the minimal install.

After the minimal install completed, the following was run on ALL servers:

yum install epel-release git rsync wget curl nfs-utils net-tools -y
yum install open-vm-tools -y # since my vm's are on ESXi
yum update -y

cat >> /etc/hosts <<EOF
#
192.168.1.40 server0.home.lab server0
192.168.1.41 server1.home.lab server1
192.168.1.42 server2.home.lab server2
192.168.1.43 server3.home.lab server3
EOF

sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config  # disable SELinux on the next boot
systemctl enable network        # use the legacy network service since NetworkManager is removed below
systemctl disable firewalld
systemctl disable auditd
systemctl disable postfix
yum remove NetworkManager -y

For server0 we added ansible: yum install ansible -y

Create SSH keys on server0 and copy them over to the other servers for root and for myself (jlim):

ssh-keygen -b 2048
for host in server0 server1 server2 server3
do
  ssh-copy-id jlim@$host
  ssh-copy-id root@$host
done
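
To make sure the keys copied over correctly, a quick sanity check (this loop is just my own suggestion, not part of the original setup) is to run a command on every host and confirm there is no password prompt:

for host in server0 server1 server2 server3
do
  ssh -o BatchMode=yes jlim@$host hostname   # should print the hostname with no password prompt
  ssh -o BatchMode=yes root@$host hostname
done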

This is a good place for a clean reboot & SNAPSHOT in case you mess up and need to revert!

NFS install (optional, only needed if you want to set up dynamic NFS provisioning)

You can do this on one of the nodes or use an external NFS source, but since I am adding the 1TB disk on /dev/sdb of server0 as the NFS export, I will share those steps as well.

mkfs.xfs /dev/sdb
mkdir /nfsexport

cat >> /etc/fstab <<EOF
# /dev/sdb for /nfsexport
/dev/sdb /nfsexport xfs defaults 0 0
EOF

mount -a
chown nfsnobody:nfsnobody /nfsexport
systemctl enable nfs-server
systemctl start nfs-server

cat >> /etc/exports <<EOF
/nfsexport *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
EOF

exportfs -rav
exportfs -v
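
Before moving on it is worth confirming the export is actually visible. showmount comes from the nfs-utils package installed earlier; the test mount below is just a quick sketch and can be run from any of the other servers:

showmount -e server0                  # should list /nfsexport
mount -t nfs server0:/nfsexport /mnt  # quick test mount
touch /mnt/testfile && rm /mnt/testfile
umount /mnt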

This is a good place for another SNAPSHOT for NFS server on server0

Install and configure k8s

# On server0 I ran this as myself "jlim"
git clone https://github.com/jlim0930/k8s-pre-bootstrap.git
cd k8s-pre-bootstrap
cat >> hosts <<EOF
[k8s-nodes]
192.168.1.40
192.168.1.41
192.168.1.42
192.168.1.43
EOF

cat >> k8s-prep.yml <<EOF
---
- name: Setup Proxy
  hosts: k8s-nodes
  remote_user: root
  become: yes
  become_method: sudo
  #gather_facts: no
  vars:
    k8s_cni: flannel                                      # calico, flannel
    container_runtime: docker                            # docker, cri-o, containerd
    configure_firewalld: false                            # true / false
    # Docker proxy support
    setup_proxy: false                                   # Set to true to configure proxy
    proxy_server: "proxy.example.com:8080"               # Proxy server address and port
    docker_proxy_exclude: "localhost,127.0.0.1"          # Addresses to exclude from proxy
  roles:
    - kubernetes-bootstrap
EOF

ansible-playbook -i hosts k8s-prep.yml
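
If the playbook errors out on a host, a quick way to sanity-check that ansible can reach every node in the inventory (just a connectivity check, not part of the original prep) is the ping module:

ansible -i hosts k8s-nodes -m ping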

Now that all the hosts are prepped for k8s install, let's get started:

# On server0 I ran this as myself "jlim"
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 
# I am adding the pod-network-cidr for the flannel CNI; if you are using a different CNI you can remove or update it

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# when kubeadm init completes it should give you a join command. Go ahead and run it on server1/server2/server3, making sure to run it with sudo while logged in as yourself, not root

# while the nodes are getting set up you can watch the status from server0
watch kubectl get nodes -o wide

# you should have all 4 servers listed in Ready status
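
If you lose the join command that kubeadm init printed, you can generate a fresh one from server0 at any time:

sudo kubeadm token create --print-join-command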

This is a good place for another SNAPSHOT so that you can revert to after k8s is installed.

Deploy dynamic NFS provisioning

This section and the NFS section above are borrowed from https://blog.exxactcorp.com/deploying-dynamic-nfs-provisioning-in-kubernetes/

# On server0 I ran this as myself "jlim"

git clone https://exxsyseng@bitbucket.org/exxsyseng/nfs-provisioning.git
cd nfs-provisioning

kubectl apply -f rbac.yaml

# Edit class.yaml; I changed example.com/nfs to home.lab/nfs
kubectl apply -f class.yaml

kubectl get storageclass
NAME                  PROVISIONER       AGE
managed-nfs-storage   home.lab/nfs      25s

# EDIT deployment.yaml and change example.com/nfs to home.lab/nfs, add the value for the NFS server, and add the value for the NFS export path

kubectl apply -f deployment.yaml

For the NFS provisioner to act as the default dynamic provisioner, you need to set it as the default storageclass:

kubectl get storageclass
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage         home.lab/nfs                 Delete          Immediate           false                  34h

kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

kubectl get storageclass
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default) home.lab/nfs                 Delete          Immediate           false                  34h

You can also create PersistentVolumes and PersistentVolumeClaims to use with various pods; I will not go into full pod examples here, but make sure to use managed-nfs-storage as the storageClassName in your PersistentVolumeClaim.
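
As a minimal sketch (the claim name and size below are placeholders I made up), a PersistentVolumeClaim that uses the NFS provisioner looks like this:

cat > test-nfs-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc      # should show STATUS Bound once the provisioner creates the volume
kubectl delete -f test-nfs-pvc.yaml   # clean up the test claim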

Again another good spot for a SNAPSHOT!

Ceph via Rook

I will be using the 500G disks on server1/server2/server3 to stand up a Ceph cluster using Rook. More details at https://rook.io/docs/rook/v1.3/ceph-quickstart.html

# on server0
git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph

kubectl apply -f common.yaml
kubectl apply -f operator.yaml

kubectl -n rook-ceph get pod # verify the rook-ceph-operator is in Running before next steps

# you can customize cluster.yaml for your hosts/disks/devices if your environment looks different than mine, but I left both
#    useAllNodes: true
#    useAllDevices: true
# as is, and it used /dev/sdb on server1/server2/server3 to create my ceph cluster.

kubectl apply -f cluster.yaml

Let's deploy the toolbox and take a look at our Ceph environment

kubectl create -f toolbox.yaml

kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" # to ensure that your pod is running

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash # to bash into the container

# from here you can run any of the ceph commands:
ceph status
ceph osd status
ceph df
rados df

exit # to get out

# if you don't want to keep the toolbox running
kubectl -n rook-ceph delete deployment rook-ceph-tools

The Ceph Dashboard can be configured with a NodePort since our LB is not up yet.

# Login name: admin
# Password
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

kubectl apply -f dashboard-external-https.yaml
kubectl -n rook-ceph get service # get the port for NodePort
# you can browse to https://192.168.1.40:xxxx from your NodePort
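
If you just want the NodePort number without scanning the whole service list, you can pull it with jsonpath (I am assuming the service created by dashboard-external-https.yaml is named rook-ceph-mgr-dashboard-external-https; adjust if yours differs):

kubectl -n rook-ceph get service rook-ceph-mgr-dashboard-external-https -o jsonpath='{.spec.ports[0].nodePort}' && echo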

Create your storageclass

kubectl apply -f ./csi/rbd/storageclass.yaml
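
Similar to the NFS test earlier, you can verify Ceph dynamic provisioning with a small PVC against rook-ceph-block (again, the name and size are placeholders):

cat > test-ceph-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-ceph-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl apply -f test-ceph-pvc.yaml
kubectl get pvc test-ceph-pvc     # should go to Bound once the RBD image is created
kubectl delete -f test-ceph-pvc.yaml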

If you want to make Ceph your default storage class instead, you can change it:

kubectl get storageclass
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage (default)   home.lab/nfs                 Delete          Immediate           false                  34h
rook-ceph-block             rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   31h

# remove the default annotation from the NFS class first, then mark rook-ceph-block as the default
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

kubectl get storageclass
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage         home.lab/nfs                 Delete          Immediate           false                  34h
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   31h

Another great place for a SNAPSHOT to revert to.

Load Balancer - MetalLB

MetalLB is a great little LB that works well for on-prem k8s clusters. https://metallb.universe.tf/concepts/. It can run in layer 2 mode as well as BGP mode.

Enable strict ARP mode

# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
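
You can confirm the change stuck with a quick grep:

kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP   # should now show strictARP: true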

Install by manifest

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

For my use-case I will configure it in layer 2 mode with the pool 192.168.1.45-192.168.1.55

cat >> lb.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.45-192.168.1.55
EOF
kubectl apply -f lb.yaml

To use the LB you just need to create services with spec.type set to LoadBalancer, and it will even respect spec.loadBalancerIP as well.
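
As a hedged sketch (the service name, selector, and IP below are placeholders; the selector assumes you have pods labeled app: nginx), a Service that pins a specific address from the pool would look like:

cat > nginx-lb.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.45
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF

kubectl apply -f nginx-lb.yaml
kubectl get service nginx-lb   # EXTERNAL-IP should show 192.168.1.45 once MetalLB assigns it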

Again another great spot for a SNAPSHOT!

And we are all done with 101 & 102! Enjoy.

jlim0930
