Welcome to the Linux Foundation Forum!

Lab 8 review: how do you evaluate whether the settings are good?

In the file I have this:

 resources:
   limits:
     cpu: "1"
     memory: "1Gi"
   requests:
     cpu: "2.5"
     memory: "500Mi"
 args:
 - -cpus
 - "2"
 - -mem-total
 - "1950Mi"

If I have to fix something, is there a way to identify whether the user made the mistake in the args, the limits, or the requests?

Let's say I have to fix that. I could change the values in all three of those places, so which one should I treat as the correct one?

Here we can quickly see that the requested CPU is greater than the CPU limit, so we have a problem. But I could reduce the requests value and the args values to match the CPU limit.

So my question: is there a way to identify what I can't change?
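As a rough sanity check (a sketch of the comparison the API server performs at admission, not its actual code), you can convert the CPU quantities to millicores and compare requests against limits; `kubectl apply` would reject this spec anyway, since a container's request must not exceed its limit:

```shell
#!/bin/sh
# Convert a Kubernetes CPU quantity ("2.5", "500m") to millicores.
cpu_to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;                                   # already millicores
    *)  awk -v c="$1" 'BEGIN { printf "%d", c * 1000 }' ;; # whole/fractional cores
  esac
}

req=$(cpu_to_millicores "2.5")   # cpu request from the file above
lim=$(cpu_to_millicores "1")     # cpu limit from the file above

if [ "$req" -gt "$lim" ]; then
  echo "invalid: cpu request ${req}m exceeds cpu limit ${lim}m"
fi
```

So the requests/limits pair can be checked against itself, which is one way to spot which field is the broken one.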

Comments

  • In my setup I have 1 master and 2 workers, and each worker has 2 CPUs. If I deploy the app with 2 CPUs, I get "Insufficient cpu", so I reduced it to 1.5. But if I have to do something like that in the exam, can I play with the values in the args too? I just have to make it work, that's it?
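The "Insufficient cpu" error can be reasoned about with back-of-the-envelope arithmetic (a sketch; the 550m figure for system pods is an assumption, check `kubectl describe node` for the real numbers on your node):

```shell
#!/bin/sh
# Scheduling headroom: allocatable CPU minus CPU already requested by
# other pods bounds the largest request the scheduler can still place.
allocatable_m=2000   # "cpu: 2" under Allocatable => 2000 millicores
system_req_m=550     # assumed: requests from kube-proxy, CNI pods, etc.

free_m=$((allocatable_m - system_req_m))
echo "largest schedulable cpu request: ${free_m}m"
```

With ~1450m free, a request of 2 CPUs can never be scheduled on a 2-CPU worker that also runs system pods, which is why lowering the request makes the pod fit.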

  • Here is my setup on my node:

    Capacity:
      cpu:                2
      ephemeral-storage:  30435260Ki
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             1975380Ki
      pods:               110
    Allocatable:
      cpu:                2
      ephemeral-storage:  28049135570
      hugepages-1Gi:      0
      hugepages-2Mi:      0
      memory:             1872980Ki
      pods:               110
    ...
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests     Limits
      --------           --------     ------
      cpu                1550m (77%)  1700m (85%)
      memory             790Mi (43%)  1736Mi (94%)
      ephemeral-storage  0 (0%)       0 (0%)
      hugepages-1Gi      0 (0%)       0 (0%)
      hugepages-2Mi      0 (0%)       0 (0%)
    Events:
      Type     Reason     Age                 From     Message
      ----     ------     ----                ----     -------
      Warning  SystemOOM  29s (x11 over 12m)  kubelet  (combined from similar events): System OOM encountered, victim process: stress, pid: 27825
    

    I'm using these settings:

     resources:
       limits:
         cpu: "1.5"
         memory: "1.5Gi"
       requests:
         cpu: "1"
         memory: "500Mi"
     args:
     - -cpus
     - "1"
     - -mem-total
     - "1350Mi"
     - -mem-alloc-size
     - "100Mi"
     - -mem-alloc-sleep
     - "1s"
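The SystemOOM event above hints that the node itself, not the container, ran out of memory: the stress target fits under the container's limit, but the node barely has room for it plus the system daemons. A sketch of the arithmetic, using the figures quoted from `kubectl describe node` above:

```shell
#!/bin/sh
# Compare node allocatable memory with what stress is told to consume.
alloc_ki=1872980                # Allocatable memory from `kubectl describe node`
alloc_mi=$((alloc_ki / 1024))   # ~1829Mi
limit_mi=$((3 * 1024 / 2))      # container limit: 1.5Gi = 1536Mi
stress_mi=1350                  # -mem-total passed to stress

echo "node allocatable: ${alloc_mi}Mi"
echo "container limit:  ${limit_mi}Mi"
echo "stress target:    ${stress_mi}Mi"
# stress fits under its own limit (1350 < 1536), but 1350Mi out of
# ~1829Mi allocatable leaves little for everything else on the node,
# so the kernel can OOM-kill the stress process at the system level.
```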
    
    
  • Hi @SebastienDionne,

    First you would need to identify whether there are any issues at all. What are the best commands for displaying possible problems?

    Then, when trying to resolve the issues, digging into all API resources involved would reveal which properties are not correctly set, or not following the rules of a policy.

    In your case, what is the significance of limits and requests? What is their relationship to the Node's resources? Which ones can you quickly edit with the Kubernetes CLI to fix a possible resource issue?

    Regards,
    -Chris

  • OK, thanks, so I was on the right track. My pod doesn't crash, so I completed the lab.
