
Please provide a viable restore method for etcd

I am having a very hard time getting a restore of etcd to actually work. Nowhere in the training materials, that I can find, is there a substantive guide/lab on doing an actual successful restore of etcd. Is it possible that you can clarify this? I have read the docs, and I've found articles on the web that claim to show how to do it, but nothing works. I assume that if I used a pre-built tool designed for this, it would work fine. BUT, doing it with just kubectl, kubeadm, and whatever is found on Ubuntu isn't working.

From the Lab "Basic Node Maintenance"

  1. Backup the snapshot as well as other information used to create the cluster both locally as well as another system in case the node becomes unavailable. Remember to create snapshots on a regular basis, perhaps using a cronjob to ensure a timely restore. When using the snapshot "restore" it’s important the database not be in use. An HA cluster would remove and replace the control plane node, and not need a restore. More on the restore process can be found here: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster

This is all that the training materials have to say on this subject. The exam objectives say we will be tested on backup and restore. Backup is not a problem; it seems everybody can do that. But if we are going to be tested on restore, then you need to provide the proper procedure for restore.
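
For reference, this is the kind of backup command that works for me on the lab node. The certificate paths are the standard kubeadm ones on my machine, and the snapshot path is simply where I chose to put the file:

    ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key

    # sanity-check that the snapshot file is usable
    ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db --write-out=table

So far so good. The restore is where it falls apart.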

Here are the hurdles:

  • I have searched the web; there are not many articles, but enough, and their procedures don't work in this lab environment.
  • I've checked the etcd docs; again, they are light on procedure and real-world examples, and what they do have doesn't work in this lab environment either.
  • Everyone agrees that the database should not be in use, but guess what, there is no "stop the database" command in these lab environments. How is it done?
  • The docs mention specifying a new --data-dir and --name when restoring, and editing etcd.yaml with the change. Okay, I've done that: attempted the restore and modified etcd.yaml (roughly as in the sketch after this list). I always get one of two results:
    ** Either the restore appears to succeed, except I have no coredns, no calico pods, existing deployments are not restored, and I am unable to get coredns or calico going.
    ** Or I get a crashed Kubernetes control plane.
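
For what it's worth, here is roughly what I've been attempting, pieced together from the kubernetes.io page above. It assumes a single control plane node built with kubeadm, etcd running as a static pod, and the same snapshot and certificate paths as the backup command above; /var/lib/etcd-from-backup is just a directory name I made up. I'm not claiming this is the correct procedure, it is simply the sequence that produces the two results described above:

    # "Stop the database": etcd runs as a static pod, so the only way I know to
    # stop it is to move its manifest out of the directory the kubelet watches.
    mv /etc/kubernetes/manifests/etcd.yaml /root/
    mv /etc/kubernetes/manifests/kube-apiserver.yaml /root/   # keep the API server from writing while etcd is down
    # wait until the etcd and kube-apiserver containers are gone
    watch crictl ps

    # Restore the snapshot into a new data directory
    ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
      --data-dir=/var/lib/etcd-from-backup

    # Edit the copy of etcd.yaml in /root so the etcd-data hostPath volume points
    # at /var/lib/etcd-from-backup instead of /var/lib/etcd, then put the
    # manifests back and let the kubelet recreate the pods.
    mv /root/etcd.yaml /etc/kubernetes/manifests/
    mv /root/kube-apiserver.yaml /etc/kubernetes/manifests/
    systemctl restart kubelet

The pods do come back after this, but that is exactly the point at which I end up in one of the two states above.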

Needless to say, I've tried many different combinations of parameters, editing of etcd.yaml, restart of services, yada yada.

This is exhausting.

Am I truly the only one with this problem? If I am, then please, anybody, but mostly the experts at the Linux Foundation, do tell how to do a restore.

Comments

  • I totally second this request.
    Especially in the case where etcd is itself a pod of the K8S cluster that has to be restored, it is a chicken-and-egg problem.
    I have no idea how one could restore it.
