
Lab Exercise 42.4: Repairing the MBR

saqman2060 Posts: 777
edited February 2017 in LFS201 Class Forum

I noticed a strange step when copying the MBR code to a file using dd. The suggested save location (/root/mbrsave) was in the /root directory of the primary hard drive, the very disk whose MBR I was supposed to destroy and then repair. After copying the MBR there, I zeroed the first 446 bytes of the MBR and rebooted.
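
For reference, the save/zero/restore cycle I followed looks roughly like this. This is a sketch, not the lab's exact commands: I'm running it against a throwaway disk image ("disk.img") instead of the real drive (the lab targets the primary hard disk, e.g. /dev/sda), so it is safe to try:

```shell
# Fake "disk" with a few sectors, standing in for /dev/sda
dd if=/dev/urandom of=disk.img bs=512 count=4

# Save the first 446 bytes (the boot code portion of the MBR) to a backup file
dd if=disk.img of=mbrsave bs=446 count=1

# Destroy the boot code in place (conv=notrunc keeps the rest of the "disk" intact)
dd if=/dev/zero of=disk.img bs=446 count=1 conv=notrunc

# Restore the boot code from the backup
dd if=mbrsave of=disk.img bs=446 count=1 conv=notrunc

# Verify the first 446 bytes match the backup again
head -c 446 disk.img | cmp -s - mbrsave && echo "boot code restored"
```

Note that only the first 446 bytes are touched, so the partition table (bytes 446-510 of the MBR) survives; the problem is purely that the backup file lives on the disk whose boot code you just destroyed.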

As expected, the system did not boot. However, I could not access the saved MBR file; none of the partitions were accessible.

Instead, I reinstalled my system in VirtualBox and performed the steps again. This time, I made sure to attach a second VirtualBox disk to my VM guest and saved the MBR code from my primary hard drive to that disk. I was then able to repair my MBR successfully.

The steps in this lab did not make clear that the MBR code should be saved to separate storage, nor did they mention any procedure for accessing the missing partitions to retrieve the saved MBR code. Did I misinterpret this exercise?

https://lms.360training.com/scorm/799658/Lab%2042.4.pdf

Comments

  • mobile Posts: 15
    edited February 2017

    I don't think you misinterpreted the lab. I think saving the MBR to an external disk is a better backup method than saving it to the /root directory.

    Did you try any other methods to access the backup besides the dd command specified in the lab? For example, remounting the filesystems manually by UUID, running fsck, etc. I realize that is not what the lab explicitly outlines, but it's possible your rescue boot handles the corrupted filesystem differently than the lab anticipated. The lab assumes that the rescue boot setup will mount the installed root under /mnt/sysimage, making the backup available at /mnt/sysimage/root/mbrsave.
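
If the rescue boot does mount things that way, the restore would look roughly like the sketch below. This is simulated with a local directory and a disk image so it can actually be run; in a real rescue environment /mnt/sysimage is the mount point the rescue boot provides, and the target would be the real disk (e.g. /dev/sda), not an image file:

```shell
# Simulate the rescue environment's mount of the installed root
mkdir -p mnt/sysimage/root

# Pretend this is the backup the lab had you make before zeroing the MBR
dd if=/dev/urandom of=mnt/sysimage/root/mbrsave bs=446 count=1

# Fake disk whose boot code was zeroed (stands in for /dev/sda)
dd if=/dev/zero of=sda.img bs=512 count=1

# The restore step: write the saved 446 bytes back without truncating the disk
dd if=mnt/sysimage/root/mbrsave of=sda.img bs=446 count=1 conv=notrunc

head -c 446 sda.img | cmp -s - mnt/sysimage/root/mbrsave && echo "restored from rescue mount"
```

The whole approach hinges on the rescue environment being able to mount the root filesystem at all, which is exactly what failed in saqman2060's case.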

    Is it possible that your rescue boot did not mount the system at all, or mounted it somewhere else? Maybe a different disk setup than the lab anticipated? The boot log can shed some light on how your rescue boot handles these things.

    I'm revisiting all the labs and will let you know if I can gather any more info too. Keep us posted.

  • coop Posts: 339
    edited February 2017

    The exercise says: "Reboot into the rescue environment and ....." which means use the rescue disk, since the hard disk will not boot. Did you try booting off the "rescue disk", which, depending on your distro, is probably the "live" or "install" disk?

  • coop Posts: 339

    Believe me, I have done this in the past when I borked grub somehow.

  • That is possible; the rescue disk the lab used may handle corrupted MBRs differently. When I zeroed my MBR, I could not access any of the partitions, and fsck was no help. The rescue disk I used (which was the live/install disk) did not offer any options for the drive's missing partitions.
  • Yes, I did everything outlined in the lab.
  • Battogtokh Posts: 19
    edited February 2017

    I saved mbrsave in /root as per the lab document, used the rescue disk, and recovered successfully. I did it on a CentOS server.
