Lab 3.4.15: Kubectl replace Error

Hey guys,

I am getting the following error when trying to terminate and create a new deployment. Has anybody seen this, and can you point out what I am doing wrong?

master@master-virtual-machine:~$ kubectl replace -f first.yaml
Error from server (Conflict): error when replacing "first.yaml": Operation cannot be fulfilled on deployments.apps "ngnix": StorageError: invalid object, Code: 4, Key: /registry/deployments/default/ngnix, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c106df54-09bd-48bb-8534-bce528740b4a, UID in object meta: 78793e67-fd44-4f9d-8f9e-377ec09542b5
master@master-virtual-machine:~$ kubectl get deployment

master@master-virtual-machine:~$ kubectl get nodes
NAME                     STATUS   ROLES    AGE    VERSION
master-virtual-machine   Ready    master   26h    v1.16.3
slave-virtual-machine    Ready    <none>   5h8m   v1.16.3
master@master-virtual-machine:~$

master@master-virtual-machine:~$ kubectl describe deployments.apps
Name:                   ngnix
Namespace:              default
CreationTimestamp:      Tue, 10 Dec 2019 12:47:17 -0500
Labels:                 app=ngnix
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=ngnix
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=ngnix
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   ngnix-6865b468fd (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  17m   deployment-controller  Scaled up replica set ngnix-6865b468fd to 1
master@master-virtual-machine:~$
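
A quick way to see what the Conflict error is complaining about is to compare the UID of the live object with the UID recorded in the file; a minimal sketch, assuming the deployment name ngnix and the file first.yaml from the output above:

    kubectl get deployment ngnix -o jsonpath='{.metadata.uid}'   # UID of the object currently in the cluster
    grep 'uid:' first.yaml                                        # UID captured when the file was generated

If the two differ, the file was generated from an object that no longer exists, so kubectl replace refuses the precondition.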

Comments

  • serewicz
    serewicz Posts: 1,000

    Hello,

    First off, you may note you are using v1.16.3, which would indicate you haven't followed the setup. Other steps may have been skipped as well.

    Also, re-read step nine, which says to edit the first.yaml file and remove the various lines that are causing errors when you try the replace.
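
    As a rough guide, a trimmed first.yaml keeps only the fields below once the generated metadata (creationTimestamp, resourceVersion, selfLink, uid, and managedFields if present) and the whole status block are removed; a minimal sketch, assuming the file came from kubectl get deployment nginx -o yaml:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          labels:
            app: nginx
          name: nginx
          namespace: default
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - image: nginx
                name: nginx
                ports:
                - containerPort: 80

    Everything that was removed is regenerated by the API server, so the file only has to describe the desired state.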

    Regards,

  • deepakgcp
    deepakgcp Posts: 10
    edited December 2020

    @serewicz said:
    Hello,

    First off, you may note you are using v1.16.3, which would indicate you haven't followed the setup. Other steps may have been skipped as well.

    Also, re-read step nine, which says to edit the first.yaml file and remove the various lines that are causing errors when you try the replace.

    Regards,

  • Hi @deepakgcp,

    Was there an issue you wanted to report in the forum? Please provide specifics about the command you were running and the errors or discrepancies you saw on your end. Being more specific helps us troubleshoot and come up with a solution to your issue.

    Regards,
    -Chris

  • pyi1024
    pyi1024 Posts: 6

    Hello, I am getting this same error. Here is my first.yaml. The instructions say that the spec section is around line 31; however, for me it was around line 117. I am assuming that I'm not supposed to change what is in the metadata section.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: "1"
      creationTimestamp: "2021-03-22T01:30:05Z"
      generation: 1
      labels:
        app: nginx
      managedFields:
      - apiVersion: apps/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:progressDeadlineSeconds: {}
            f:replicas: {}
            f:revisionHistoryLimit: {}
            f:selector:
              f:matchLabels:
                .: {}
                f:app: {}
            f:strategy:
              f:rollingUpdate:
                .: {}
                f:maxSurge: {}
                f:maxUnavailable: {}
              f:type: {}
            f:template:
              f:metadata:
                f:labels:
                  .: {}
                  f:app: {}
              f:spec:
                f:containers:
                  k:{"name":"nginx"}:
                    .: {}
                    f:image: {}
                    f:imagePullPolicy: {}
                    f:name: {}
                    f:resources: {}
                    f:terminationMessagePath: {}
                    f:terminationMessagePolicy: {}
                f:dnsPolicy: {}
                f:restartPolicy: {}
                f:schedulerName: {}
                f:securityContext: {}
                f:terminationGracePeriodSeconds: {}
        manager: kubectl-create
        operation: Update
        time: "2021-03-22T01:30:05Z"
      - apiVersion: apps/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:deployment.kubernetes.io/revision: {}
          f:status:
            f:availableReplicas: {}
            f:conditions:
              .: {}
              k:{"type":"Available"}:
                .: {}
                f:lastTransitionTime: {}
                f:lastUpdateTime: {}
                f:message: {}
                f:reason: {}
                f:status: {}
                f:type: {}
              k:{"type":"Progressing"}:
                .: {}
                f:lastTransitionTime: {}
                f:lastUpdateTime: {}
                f:message: {}
                f:reason: {}
                f:status: {}
                f:type: {}
            f:observedGeneration: {}
            f:readyReplicas: {}
            f:replicas: {}
            f:updatedReplicas: {}
        manager: kube-controller-manager
        operation: Update
        time: "2021-03-22T01:30:11Z"
      name: nginx
      namespace: default
      resourceVersion: "8165"
      selfLink: /apis/apps/v1/namespaces/default/deployments/nginx
      uid: 6a6f1874-10af-4d51-84c2-aa321b0f8cdd
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: nginx
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx
            imagePullPolicy: Always
            name: nginx
            ports:
            - containerPort: 80
              protocol: TCP
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
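
    The reason spec lands so far down in this file is the managedFields block above it, which this version of kubectl includes when exporting YAML; the stanza to edit is still the top-level spec:, wherever it happens to sit. A minimal way to locate it, assuming the file name above:

        grep -n '^spec:' first.yaml   # prints the line number of the top-level spec: key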

  • pyi1024
    pyi1024 Posts: 6

    Just deleting the original deployment and re-creating it seems to work, though.
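
    For completeness, that workaround is roughly the following, assuming the deployment and file names used earlier in the thread:

        kubectl delete deployment nginx   # remove the live object
        kubectl create -f first.yaml      # recreate it; the server assigns a fresh UID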
