
Re: implicit mounting of the root partition for rw


From: U.Mutlu
Subject: Re: implicit mounting of the root partition for rw
Date: Fri, 11 Nov 2011 04:09:44 +0100
User-agent: Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20110813 Firefox/6.0 SeaMonkey/2.3

Jordan Uggla wrote, On 2011-11-11 01:31:
> On Thu, Nov 10, 2011 at 2:31 PM, U.Mutlu <address@hidden> wrote:
>> grub2 on my system (Debian 6) seems to implicitly mount the
>> root partition for rw. But in /etc/fstab it is mounted again.
>> Isn't that additional mounting harmful for the HD?
>> Can one safely disable the mount for / in /etc/fstab?

> Grub needs to be able to read files from /boot/ to be able to load the
> kernel. This is unavoidable, and does absolutely no harm to the hard
> drive. In addition, with the single possible exception of writing to
> /boot/grub/grubenv (not done by default in Debian's grub), grub never
> writes to the hard drive at all. Once grub loads the linux kernel and
> jumps to it, almost nothing that grub has done is applicable to the OS
> being booted. Grub doesn't, and can't, "mount" the root filesystem for
> the OS it loads; the kernel and initrd scripts need to do that on
> their own from the ground up after grub is finished with its part and
> is long gone. This is simply how booting works. In addition to that,
> "mounting" is not something that affects hardware; you can read from
> and write to a device through /dev/ without mounting it. What mounting
> does is tell the kernel to interpret the bits that it is already able
> to read from the hardware so that you can work with the abstraction
> that is files and paths.
>
> In short, nothing can or should be done differently; your hardware is fine.

My hardware (HD) unfortunately isn't fine. Here's the story:

I have an HD with 4 primary partitions. The OS was installed on the
first partition (i.e. the root partition, including /root and /boot).
Recently SMART reported HD errors on that partition.
A badblocks test revealed many bad blocks there (the SATA controller
gets exception errors, and these get written to syslog):
  Checking blocks 0 to 30274460
  Checking for bad blocks (read-only test): done
  Pass completed, 1175 bad blocks found.
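(A read-only scan along the lines of the following produces that kind
of summary; the options shown here are just an example, not necessarily
the exact command I ran:)
  badblocks -sv /dev/sda1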

But the funny thing is that the 3 other partitions on that same HD
don't have even a single bad block! So I wonder how this is possible,
even mathematically...

Using ddrescue I then copied the partition to one of the other
partitions (an example invocation is shown below). Fortunately there
was less than 2 MB of data loss, mainly in some of the log files
(in my case the log files are frequently written to with debug output).
So my system is up again, but I wonder why that happened at all,
and why it happened the way it did. :-)  I mean, one would expect the
bad blocks to be distributed among all partitions, but in this case
only the root partition was affected...
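(A GNU ddrescue invocation roughly like this one does such a copy; the
target partition and the map-file name are only placeholders here:)
  ddrescue -f /dev/sda1 /dev/sda3 sda1.map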

I have only this speculative explanation: since the root partition
is (somehow) implicitly mounted during kernel startup and then again
via /etc/fstab, maybe that partition 'aged' much earlier than the
other partitions due to the double mounting... (?)
If someone has a better theory, please let me know.

BTW, I have now disabled the (unnecessary?) mounting of the root
partition in /etc/fstab, and the system still works normally.
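(A typical Debian root entry in /etc/fstab looks roughly like the line
below; the UUID is just a placeholder, so this is only an illustration
of the kind of line that gets disabled:)
  # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext3  errors=remount-ro  0  1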

BTW2, using "mke2fs -v -c -t ext3 -j /dev/sda1" one can do a
"low-level" format and exclude such bad blocks from allocation.
I would recommend formatting a partition only that way.
It takes much longer, but is IMO much safer.
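(As far as I know, giving -c twice makes mke2fs run a slower read-write
test instead of the read-only one, for example:)
  mke2fs -v -c -c -t ext3 /dev/sda1
(And on an already-formatted ext3 partition, "e2fsck -c" can add newly
found bad blocks to the bad-block list without reformatting.)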



