
How to increase space for your Elasticsearch instances in k8s on ECK

One of the most common issues when running Elasticsearch on k8s is the need to increase space for your elasticsearch-data volume. It is very simple to do, so let's demo it. I stood up an environment using my deploy-eck.sh script:

$ kubectl get pods,pvc
NAME                              READY   STATUS    RESTARTS   AGE
pod/eck-lab-es-data-0             1/1     Running   0          112m
pod/eck-lab-es-data-1             1/1     Running   0          112m
pod/eck-lab-es-data-2             1/1     Running   0          112m
pod/eck-lab-es-master-0           1/1     Running   0          112m
pod/eck-lab-es-master-1           1/1     Running   0          112m
pod/eck-lab-es-master-2           1/1     Running   0          8m12s
pod/eck-lab-kb-794785d7f7-zxqlk   1/1     Running   0          110m

NAME                                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0     Bound    pvc-4081bb0d-1664-46fe-97bd-08520075bbdc   1Gi        RWO            standard       112m
persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-1     Bound    pvc-f2f08a88-de70-4f70-8221-d0650d303d4f   1Gi        RWO            standard       112m
persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-2     Bound    pvc-74eeb987-ebe1-45eb-888c-40fb9c2307ea   1Gi        RWO            standard       112m
persistentvolumeclaim/elasticsearch-data-eck-lab-es-master-0   Bound    pvc-a42c27b9-2ea3-46d3-90d6-1756d1cb5a2c   1Gi        RWO            standard       112m
persistentvolumeclaim/elasticsearch-data-eck-lab-es-master-1   Bound    pvc-a4a0893c-98a1-4286-afb4-d12169f230e2   1Gi        RWO            standard       112m
persistentvolumeclaim/elasticsearch-data-eck-lab-es-master-2   Bound    pvc-016a21e8-b28c-4250-8c97-4e0fc6100b9e   1Gi        RWO            standard       112m

We can see that we have a total of six nodes: three masters and three data nodes. On top of that, each node only has 1Gi for its elasticsearch-data volume.

If you are running out of space, or are expecting a big ingestion coming your way, you will want to increase your elasticsearch-data volume.

In order to increase the space, you first have to find out whether volume expansion is enabled on your storageClass. Let's find out which storageClass our PVCs came from and whether volume expansion is allowed.

$ kubectl get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-eck-lab-es-data-0     Bound    pvc-4081bb0d-1664-46fe-97bd-08520075bbdc   1Gi        RWO            standard       114m
elasticsearch-data-eck-lab-es-data-1     Bound    pvc-f2f08a88-de70-4f70-8221-d0650d303d4f   1Gi        RWO            standard       114m
elasticsearch-data-eck-lab-es-data-2     Bound    pvc-74eeb987-ebe1-45eb-888c-40fb9c2307ea   1Gi        RWO            standard       114m
elasticsearch-data-eck-lab-es-master-0   Bound    pvc-a42c27b9-2ea3-46d3-90d6-1756d1cb5a2c   1Gi        RWO            standard       114m
elasticsearch-data-eck-lab-es-master-1   Bound    pvc-a4a0893c-98a1-4286-afb4-d12169f230e2   1Gi        RWO            standard       114m
elasticsearch-data-eck-lab-es-master-2   Bound    pvc-016a21e8-b28c-4250-8c97-4e0fc6100b9e   1Gi        RWO            standard       114m

Let's pick on data-0:

$ kubectl describe pvc elasticsearch-data-eck-lab-es-data-0
Name:          elasticsearch-data-eck-lab-es-data-0
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-4081bb0d-1664-46fe-97bd-08520075bbdc
Labels:        common.k8s.elastic.co/type=elasticsearch
               elasticsearch.k8s.elastic.co/cluster-name=eck-lab
               elasticsearch.k8s.elastic.co/statefulset-name=eck-lab-es-data
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       eck-lab-es-data-0
Events:        <none>

We now know that it came from the standard storageClass.
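As a shortcut, you can also query the expansion flag directly with jsonpath; the describe output below shows the same information in full:

$ kubectl get sc standard -o jsonpath='{.allowVolumeExpansion}'
true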

$ kubectl describe sc standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

We can see that AllowVolumeExpansion is enabled for this storageClass
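If it were not, and your provisioner actually supports resizing, you could likely enable it by patching the storageClass. This is just a sketch; check your provisioner's documentation before relying on it:

$ kubectl patch sc standard -p '{"allowVolumeExpansion": true}'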

Let's do this!

We could edit the PVC, patch it, or apply a new manifest, but today we will just patch it:

$ kubectl patch pvc elasticsearch-data-eck-lab-es-data-0 -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0 patched
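If you want to watch the resize as it happens, you can follow the PVC:

$ kubectl get pvc elasticsearch-data-eck-lab-es-data-0 -w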

After a little while we will see that the capacity was increased:

$ kubectl get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-eck-lab-es-data-0     Bound    pvc-4081bb0d-1664-46fe-97bd-08520075bbdc   5Gi        RWO            standard       118m
elasticsearch-data-eck-lab-es-data-1     Bound    pvc-f2f08a88-de70-4f70-8221-d0650d303d4f   1Gi        RWO            standard       118m
elasticsearch-data-eck-lab-es-data-2     Bound    pvc-74eeb987-ebe1-45eb-888c-40fb9c2307ea   1Gi        RWO            standard       118m
elasticsearch-data-eck-lab-es-master-0   Bound    pvc-a42c27b9-2ea3-46d3-90d6-1756d1cb5a2c   1Gi        RWO            standard       118m
elasticsearch-data-eck-lab-es-master-1   Bound    pvc-a4a0893c-98a1-4286-afb4-d12169f230e2   1Gi        RWO            standard       118m
elasticsearch-data-eck-lab-es-master-2   Bound    pvc-016a21e8-b28c-4250-8c97-4e0fc6100b9e   1Gi        RWO            standard       118m

And in the events we can see what happened on the backend to make this work:

$ kubectl get events --sort-by='.metadata.creationTimestamp' -A
...
default     52s         Normal    ExternalExpanding            persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0   CSI migration enabled for kubernetes.io/gce-pd; waiting for external resizer to expand the pvc
default     51s         Normal    Resizing                     persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0   External resizer is resizing volume pvc-4081bb0d-1664-46fe-97bd-08520075bbdc
default     44s         Normal    FileSystemResizeRequired     persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0   Require file system resize of volume on node
default     6s          Normal    FileSystemResizeSuccessful   persistentvolumeclaim/elasticsearch-data-eck-lab-es-data-0   MountVolume.NodeExpandVolume succeeded for volume "pvc-4081bb0d-1664-46fe-97bd-08520075bbdc"
default     6s          Normal    FileSystemResizeSuccessful   pod/eck-lab-es-data-0                                        MountVolume.NodeExpandVolume succeeded for volume "pvc-4081bb0d-1664-46fe-97bd-08520075bbdc"
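To confirm the filesystem inside the pod grew as well, you can check it directly. The mount path here assumes the default Elasticsearch image layout:

$ kubectl exec eck-lab-es-data-0 -- df -h /usr/share/elasticsearch/data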

Now do the same for the other nodes.
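If you want to knock them all out at once, something like this should work. It's a sketch that assumes you want 5Gi on every node, and it uses the common.k8s.elastic.co/type=elasticsearch label we saw on the PVC earlier:

$ for pvc in $(kubectl get pvc -l common.k8s.elastic.co/type=elasticsearch -o name); do
    kubectl patch $pvc -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
  done

Keep in mind the statefulset's volumeClaimTemplates still request the old 1Gi, so a brand new pod would come up with a 1Gi volume. Depending on your ECK version you may also want to bump the size in your Elasticsearch manifest's volumeClaimTemplates.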

Enjoy!

jlim0930
