Due to a cascading series of failures (some of hardware, some of my brain),
I find myself in the following situation:
I had a linux-raid two-drive system that was working fine for many years.
The system uses legacy BIOS booting. My notes from long ago say that both drives had a working GRUB; but it seems that my notes were wrong: one of the drives died without warning, leaving me with a drive with a
fully-functioning trixie (and all the user data, etc.) present, but that drive seems to have no working GRUB in the MBR. Trying to boot it gives me a "grub-rescue>" prompt.
I've scoured the Internet, but have been unable to find any clear, unambiguous, step-by-step guide as to how to make this remaining functioning drive bootable, either from the "grub-rescue>" prompt or by some other mechanism. I tried a couple of rescue disks that I located on the Internet, but they both errored out when I attempted to "rescue" the drive. So I've given up, at least for now, on trying to fix the problem from the "grub-rescue>" prompt.
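For reference, the generic sequence people use at that prompt looks roughly like the following. The partition name (hd0,msdos2) is only a guess based on the layout described further down this thread, and "ls" shows what GRUB can actually see; on a drive that was a RAID member an "insmod mdraid1x" may also be needed first, which is part of what makes this route fiddly:

grub rescue> ls
grub rescue> set root=(hd0,msdos2)
grub rescue> set prefix=(hd0,msdos2)/boot/grub
grub rescue> insmod normal
grub rescue> normal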
I can physically remove the drive and place it on a functioning machine, and have done so. With the drive in the functioning machine, I have checked that indeed all the data on it (that were in the original "/" hierarchy) are readable. So I just want to find a way to install GRUB on the MBR in a way that will cause the disk to be bootable into the system that was on it. That is, I want to be able to remove the disk from the functioning machine that it's currently (temporarily) on, put the drive back in the original machine, power on, and have the system come up as it used to (except now with just
one active drive in the RAID array).
From there I can add a new drive to the array and get myself back a fully-functioning two-drive RAID-based system.
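For what it is worth, the generic legacy-BIOS recipe for this is: mount the surviving root filesystem on the working machine, bind-mount the pseudo-filesystems, chroot in, and run grub-install against the whole disk. A rough sketch only; the device names here (/dev/md126 for the assembled array, /dev/sdb for the transplanted disk, a separate /boot on /dev/sdb1) are assumptions that need checking against /proc/partitions first:

mount /dev/md126 /mnt                # root filesystem from the surviving RAID member
mount /dev/sdb1 /mnt/boot            # only if /boot really is a separate partition
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt
grub-install /dev/sdb                # MBR of the whole disk, not a partition
update-grub
exit

Because Debian's generated grub.cfg refers to filesystems by UUID rather than by device name, the same configuration should still boot once the drive goes back into the original machine.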
I hope that that's a pretty clear description of the problem. If more information is needed, I can of course provide it.
I hope that someone here understands all this GRUB-and-boot stuff better
than I do, and can provide steps that my child-like brain can follow to get me back to a working system.
Doc
--
Web: http://enginehousebooks.com/drevans
All the system info is in that second partition. I don't rightly recall why the first partition is present (it's been an awfully long time since I installed this disk). I suspect that it's reserved for swap, although I
doubt that swapping has ever occurred.
This means that the root filesystem is /dev/sda2, rather than /dev/sda1 as you assumed.
Identify the hard disk that contains your system, look at /proc/partitions.
By "the hard disk that contains your system", I assume you mean the RAID disk.
But should I be using /dev/sda2 or /dev/md126 (as listed in /proc/partitions)??
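In case it helps with that question: the answer can usually be read straight off the running system. Something like the following (a sketch only; the output is not from this machine) shows whether sda2 is a bare filesystem or a RAID member that has to be mounted via the md device:

cat /proc/mdstat              # lists the md arrays and which partitions they use
mdadm --detail /dev/md126     # shows the array's state and member devices
lsblk -f /dev/sda             # if sda2's FSTYPE is "linux_raid_member",
                              # mount /dev/md126, not /dev/sda2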
So here I have a question. This looks like it will try to copy the /dev/
from the running OS (i.e., the non-RAID drive) and overwrite the /dev that
is on the RAID disk.
Why would one do that? The /dev that was on the RAID disk worked fine until the other drive of the pair failed; so why does it need to be overwritten by the
/dev from the running system?
I'm sorry if I'm being dense. In this situation, I'm very nervous about running commands whose purpose I don't understand.
On Thu, Jan 22, 2026 at 02:40:10PM -0700, D. R. Evans wrote:

You got the right idea, but wrong method.

> All the system info is in that second partition. I don't rightly recall why
> the first partition is present (it's been an awfully long time since I
> installed this disk). I suspect that it's reserved for swap, although I
> doubt that swapping has ever occurred.

As it says: it is a boot partition.

So when you get into the chrooted environment you also should do:

mount /dev/sda1 /boot

> This means that the root filesystem is /dev/sda2, rather than /dev/sda1 as
> you assumed.

Correct.

> > Identify the hard disk that contains your system, look at /proc/partitions.
>
> By "the hard disk that contains your system", I assume you mean the RAID disk.

Yes

> But should I be using /dev/sda2 or /dev/md126 (as listed in /proc/partitions)??

Prolly /dev/md126 - what works.

> So here I have a question. This looks like it will try to copy the /dev/
> from the running OS (i.e., the non-RAID drive) and overwrite the /dev that
> is on the RAID disk.
>
> Why would one do that? The /dev that was on the RAID disk worked fine until
> the other drive of the pair failed; so why does it need to be overwritten by
> the /dev from the running system?

If you type the following it will tell you that /dev is a udev file system:

df -h

This is where device files are created on the fly as needed.

You need /dev/sda* and a few others - the easiest way is to copy from the live
system. When you reboot into your recovered system the contents that you copy
should be wiped out (or mounted over).

> I'm sorry if I'm being dense. In this situation, I'm very nervous about
> running commands whose purpose I don't understand.

Good to be nervous!
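To make that concrete: on a current system /dev is not stored on the disk at all but is a small udev/devtmpfs filesystem created at boot, which is why copying (or bind-mounting) the live one into the chroot is harmless. Illustrative output only; the sizes will differ:

$ df -h /dev
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev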
alain williams wrote on 1/22/26 11:01 AM:
> Copy /dev/ to /tmp/RFS/dev/

so is the actual command:

cp -r /dev/ /tmp/RFS/dev/

If I try without the -r I get the error/warning message:

cp: -r not specified; omitting directory /dev/
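That should do it; two common variants, for what they are worth (both are sketches; /tmp/RFS is the mount point already used in this thread):

cp -a /dev/* /tmp/RFS/dev/         # -a preserves the device nodes, owners and modes
mount --bind /dev /tmp/RFS/dev     # or skip the copy and expose the live /dev instead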
> Due to a cascading series of failures (some of hardware, some of my brain), I
> find myself in the following situation:
>
> I had a linux-raid two-drive system that was working fine for many years. The
> system uses legacy BIOS booting. My notes from long ago say that both drives
> had a working GRUB; but it seems that my notes were wrong: one of the drives
> died without warning, leaving me with a drive with a fully-functioning trixie
> (and all the user data, etc.) present, but that drive seems to have no working
> GRUB in the MBR. Trying to boot it gives me a "grub-rescue>" prompt.
>
> I've scoured the Internet, but have been unable to find any clear, unambiguous,
> step-by-step guide as to how to make this remaining functioning drive bootable,
> either from the "grub-rescue>" prompt or by some other mechanism.
But, crucially, if RAID made the two disks identical, then the code that
grub is looking for is very likely already present on the drive you have.
On Fri, Jan 23, 2026 at 09:05:20AM +0000, David wrote:

> But, crucially, if RAID made the two disks identical, then the code that
> grub is looking for is very likely already present on the drive you have.

Be careful: there are 2 ways of setting up RAID-1 (mirror) for a partitioned disk:

* mirror the entire disk, ie sda & sdb
* mirror partition by partition, ie sda1 & sdb1; sda2 & sdb2; ...

If the second way then unless grub-install is run on both disks then it might
only be present on one of them.
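A quick way to see which of those two layouts is in use, and to make sure both members end up with boot code once the array is rebuilt (device names are only examples):

cat /proc/mdstat        # per-partition mirrors show members like "sda2[0] sdb2[1]";
                        # a whole-disk mirror would list "sda[0] sdb[1]" instead
grub-install /dev/sda   # once the replacement drive is added and synced,
grub-install /dev/sdb   # put the MBR boot code on *both* disks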