Welcome to the Linux Foundation Forum!

Taints (Lab 2.2)

Hello,
I have a couple of questions.

When I create the cluster (Lab 2.2), my nodes are Ready.
When I try to remove the taints, the procedure fails for the node.kubernetes.io/disk-pressure- taint.

Why does this happen? What am I doing wrong?

Thanks so much
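For context, the kubelet re-adds the node.kubernetes.io/disk-pressure taint for as long as the node reports the DiskPressure condition, so deleting the taint by hand will appear to fail. One way to confirm the condition, assuming kubectl access and a node named worker1 as in this thread, is:

```shell
# Show any taints currently set on the node
kubectl describe node worker1 | grep -i taint

# Show the DiskPressure condition; "True" means the kubelet
# will keep re-applying the taint until disk space is freed
kubectl get node worker1 \
  -o jsonpath='{.status.conditions[?(@.type=="DiskPressure")].status}'
```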


Comments

  • Hello,

    If you look at the output, you'll notice it says "disk pressure", meaning the node is low on disk space and does not meet the requirements for Kubernetes to run. What are you using to run the labs (CPU, memory, and disk type and size)?

    Regards,

  • Hi @MariangelaPetraglia,

    I recently noticed similar node behavior with GCE VM instances configured with 10 GB disks, after I installed additional cluster management tools that were otherwise not part of the course lab material.

    You could run sudo du / -h -d 1 on each VM to see the sizes of all top-level directories, and drill down into the ones that seem to take up too much disk space.

    Regards,
    -Chris
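That drill-down could look like the sketch below; piping through sort and tail to surface the biggest directories is just one way to do it, and /var is only an example path:

```shell
# List top-level directory sizes, largest last
sudo du -h -d 1 / 2>/dev/null | sort -h | tail -n 5

# Then repeat one level down in whichever directory stands out,
# e.g. /var (the path is an example)
sudo du -h -d 1 /var 2>/dev/null | sort -h | tail -n 5
```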

  • Hi @serewicz
    The following is the output from the master node:

    mary@master:~$ sudo lshw -short
    [sudo] password for mary:
    H/W path          Device      Class       Description
    =================================================
                                  system      VirtualBox
    /0                            bus         VirtualBox
    /0/0                          memory      128KiB BIOS
    /0/1                          memory      7898MiB System memory
    /0/2                          processor   Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
    /0/100                        bridge      440FX - 82441FX PMC [Natoma]
    /0/100/1                      bridge      82371SB PIIX3 ISA [Natoma/Triton II]
    /0/100/1.1                    storage     82371AB/EB/MB PIIX4 IDE
    /0/100/2                      display     SVGA II Adapter
    /0/100/3          enp0s3      network     82540EM Gigabit Ethernet Controller
    /0/100/4                      generic     VirtualBox Guest Service
    /0/100/5                      multimedia  82801AA AC'97 Audio Controller
    /0/100/6                      bus         KeyLargo/Intrepid USB
    /0/100/6/1        usb1        bus         OHCI PCI host controller
    /0/100/6/1/1                  input       USB Tablet
    /0/100/7                      bridge      82371AB/EB/MB PIIX4 ACPI
    /0/100/d                      storage     82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode]
    /0/3              scsi0       storage
    /0/3/0.0.0        /dev/cdrom  disk        CD-ROM
    /0/4              scsi2       storage
    /0/4/0.0.0        /dev/sda    disk        31GB VBOX HARDDISK
    /0/4/0.0.0/1      /dev/sda1   volume      10238MiB EXT4 volume
    /1                docker0     network     Ethernet interface
    mary@master:~$ uname -a
    Linux master 5.3.0-62-generic #56~18.04.1-Ubuntu SMP Wed Jun 24 16:17:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    mary@master:~$ free
                  total     used     free   shared  buff/cache  available
    Mem:        8088556  1154048  5746168    31680     1188340    6647276
    Swap:        483800        0   483800

    The following is the output from the worker node (it is the same as the master):

    mary@worker1:~$ sudo lshw -short
    [sudo] password for mary:
    H/W path          Device      Class       Description
    =================================================
                                  system      VirtualBox
    /0                            bus         VirtualBox
    /0/0                          memory      128KiB BIOS
    /0/1                          memory      7898MiB System memory
    /0/2                          processor   Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz
    /0/100                        bridge      440FX - 82441FX PMC [Natoma]
    /0/100/1                      bridge      82371SB PIIX3 ISA [Natoma/Triton II]
    /0/100/1.1                    storage     82371AB/EB/MB PIIX4 IDE
    /0/100/2                      display     SVGA II Adapter
    /0/100/3          enp0s3      network     82540EM Gigabit Ethernet Controller
    /0/100/4                      generic     VirtualBox Guest Service
    /0/100/5                      multimedia  82801AA AC'97 Audio Controller
    /0/100/6                      bus         KeyLargo/Intrepid USB
    /0/100/6/1        usb1        bus         OHCI PCI host controller
    /0/100/6/1/1                  input       USB Tablet
    /0/100/7                      bridge      82371AB/EB/MB PIIX4 ACPI
    /0/100/d                      storage     82801HM/HEM (ICH8M/ICH8M-E) SATA Controller [AHCI mode]
    /0/3              scsi0       storage
    /0/3/0.0.0        /dev/cdrom  disk        CD-ROM
    /0/4              scsi2       storage
    /0/4/0.0.0        /dev/sda    disk        31GB VBOX HARDDISK
    /0/4/0.0.0/1      /dev/sda1   volume      10238MiB EXT4 volume
    /1                docker0     network     Ethernet interface
    mary@worker1:~$ uname -a
    Linux worker1 5.3.0-62-generic #56~18.04.1-Ubuntu SMP Wed Jun 24 16:17:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    mary@worker1:~$ free
                  total     used     free   shared  buff/cache  available
    Mem:        8088556  1124936  5865968    38576     1097652    6671776
    Swap:        483800        0   483800
  • Hi @chrispokorni

    On the master node that command produces the following output:

    mary@master:~$ sudo du / -h -d 1
    du: cannot access '/run/user/1000/gvfs': Permission denied
    1,6M  /run
    3,8G  /snap
    3,5G  /usr
    12M   /sbin
    40K   /root
    385M  /home
    127M  /boot
    13M   /bin
    1,2G  /lib
    226M  /opt
    80K   /tmp
    4,0K  /mnt
    4,0K  /srv
    16K   /lost+found
    4,0K  /cdrom
    4,0K  /lib64
    du: cannot access '/proc/12488/task/12488/fd/4': No such file or directory
    du: cannot access '/proc/12488/task/12488/fdinfo/4': No such file or directory
    du: cannot access '/proc/12488/fd/3': No such file or directory
    du: cannot access '/proc/12488/fdinfo/3': No such file or directory
    0     /proc
    3,3G  /var
    8,0K  /media
    13M   /etc
    0     /sys
    0     /dev
    13G   /

    On the worker node that command produces the following output:

    mary@worker1:~$ sudo du / -h -d 1
    du: cannot access '/run/user/1000/gvfs': Permission denied
    1,6M  /run
    3,8G  /snap
    3,3G  /usr
    12M   /sbin
    40K   /root
    381M  /home
    114M  /boot
    13M   /bin
    878M  /lib
    226M  /opt
    80K   /tmp
    4,0K  /mnt
    4,0K  /srv
    16K   /lost+found
    4,0K  /cdrom
    4,0K  /lib64
    du: cannot access '/proc/8722/task/8722/fd/4': No such file or directory
    du: cannot access '/proc/8722/task/8722/fdinfo/4': No such file or directory
    du: cannot access '/proc/8722/fd/3': No such file or directory
    du: cannot access '/proc/8722/fdinfo/3': No such file or directory
    0     /proc
    3,1G  /var
    8,0K  /media
    13M   /etc
    0     /sys
    0     /dev
    13G   /
  • Thanks for the detailed outputs @MariangelaPetraglia.

    It seems that the 10 GB volumes assigned to each VBox VM may not be sufficient. Maybe going up to 15 GB volumes per VM would help.

    Regards,
    -Chris
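For VirtualBox specifically, a dynamically allocated VDI can be grown from the host with VBoxManage while the VM is powered off; the .vdi path below is a placeholder to adjust for the actual VM:

```shell
# Grow the virtual disk to 15 GB (the --resize value is in MB).
# Replace the path with the actual .vdi file of your VM.
VBoxManage modifymedium disk "~/VirtualBox VMs/master/master.vdi" --resize 15360
```

Note that this only enlarges the virtual disk; the partition and file system inside the guest must still be grown separately.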

  • Hello,

    While du shows disk usage, it does not tell you whether the disk is full. Please run df -h.

    From the output (3.8G /snap, 3.3G /usr, 3.1G /var) you are out of space if you chose a 10G disk.

    Why you have this much in those directories I'm unsure. What did you do that is not in the lab guide? What did you install using snap, which is notorious for wasting space?

    A new cluster shows this on my node with 2 CPUs, 7.5G of memory, and a 10G disk:

    Filesystem      Size  Used Avail Use% Mounted on
    udev            3.7G     0  3.7G   0% /dev
    tmpfs           746M  1.9M  744M   1% /run
    /dev/sda1       9.6G  6.1G  3.5G  64% /
    tmpfs           3.7G     0  3.7G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
    /dev/sda15      105M  3.6M  101M   4% /boot/efi
    /dev/loop0       30M   30M     0 100% /snap/snapd/8790
    /dev/loop1       56M   56M     0 100% /snap/core18/1885
    /dev/loop3      126M  126M     0 100% /snap/google-cloud-sdk/147
    /dev/loop4      126M  126M     0 100% /snap/google-cloud-sdk/148
    tmpfs           746M     0  746M   0% /run/user/1001


    What does yours show?

    Tim
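If snap turns out to be the culprit, disk space can often be reclaimed by removing snaps that the labs do not need and limiting how many old revisions snapd keeps; the package name below is just an example taken from the outputs in this thread:

```shell
# List all snap revisions, including disabled ones still on disk
snap list --all

# Remove a snap the labs do not need (example package)
sudo snap remove gnome-calculator

# Keep at most two revisions of each snap going forward
sudo snap set system refresh.retain=2
```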

  • edited September 2020

    Hi @chrispokorni
    Actually, the node disks are dynamically allocated and their capacity is 30 GB. Could dynamic allocation be the problem?
    The screenshots are below (first and third).

    Hi @serewicz
    this is the output of the df -h command on the master node:

    mary@master:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            3,9G     0  3,9G   0% /dev
    tmpfs           790M  1,6M  789M   1% /run
    /dev/sda1       9,8G  9,1G  195M  98% /
    tmpfs           3,9G     0  3,9G   0% /dev/shm
    tmpfs           5,0M  4,0K  5,0M   1% /run/lock
    tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
    /dev/loop0       55M   55M     0 100% /snap/core18/1754
    /dev/loop2      3,8M  3,8M     0 100% /snap/gnome-system-monitor/127
    /dev/loop3      256M  256M     0 100% /snap/gnome-3-34-1804/36
    /dev/loop1      162M  162M     0 100% /snap/gnome-3-28-1804/128
    /dev/loop4      1,0M  1,0M     0 100% /snap/gnome-logs/100
    /dev/loop5       15M   15M     0 100% /snap/gnome-characters/399
    /dev/loop6      2,5M  2,5M     0 100% /snap/gnome-calculator/748
    /dev/loop7       63M   63M     0 100% /snap/gtk-common-themes/1506
    /dev/loop8      161M  161M     0 100% /snap/gnome-3-28-1804/116
    /dev/loop9       97M   97M     0 100% /snap/core/9804
    /dev/loop10      56M   56M     0 100% /snap/core18/1885
    /dev/loop12     2,3M  2,3M     0 100% /snap/gnome-system-monitor/148
    /dev/loop11      97M   97M     0 100% /snap/core/9436
    /dev/loop13      45M   45M     0 100% /snap/gtk-common-themes/1440
    /dev/loop14     4,3M  4,3M     0 100% /snap/gnome-calculator/544
    /dev/loop15     1,0M  1,0M     0 100% /snap/gnome-logs/81
    /dev/loop16     384K  384K     0 100% /snap/gnome-characters/550
    tmpfs           790M   36K  790M   1% /run/user/121
    tmpfs           790M   32K  790M   1% /run/user/1000
  • Hello,

    If you look at your root file system you will notice it shows as 98% full. While the disk may be dynamically allocated, the file system using it is not: it thinks /dev/sda1 is 9.8G, not 30G.

    I would either create a larger disk without dynamic allocation, or not add all the software that is not mentioned in the course. Snap-installed software tends to take up a lot of space.

    Regards,
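If the virtual disk is instead enlarged on the host, the partition and ext4 file system inside the guest also have to be grown to use the new space. A sketch, assuming the root file system is on /dev/sda1 and growpart is available (it ships in the cloud-guest-utils package on Ubuntu):

```shell
# Grow partition 1 of /dev/sda to fill the enlarged virtual disk
sudo growpart /dev/sda 1

# Grow the ext4 file system to fill the partition (works online)
sudo resize2fs /dev/sda1

# Verify the new size
df -h /
```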

  • Ok,
    I will.

    Thank you
    Regards
