Re: parted fat bug on ARM platform


From: Andrew Clausen
Subject: Re: parted fat bug on ARM platform
Date: Tue, 8 Mar 2005 04:38:37 +1100
User-agent: Mutt/1.3.28i

On Mon, Mar 07, 2005 at 11:04:09PM +0100, Lennert Buytenhek wrote:
> > Does gcc work around the problem?  (i.e. does it do the necessary
> > bit-bashing based on aligned reads only?)
> 
> No.  gcc doesn't know in advance that a pointer will be unaligned,
> so it will just emit a regular load/store in any case.

That's annoying.  It should be able to do it for __attribute__((packed))
structs, IMHO.
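
For concreteness, this is roughly the situation I have in mind (untested
sketch; the struct is invented for illustration, not Parted's actual FAT
declarations):

#include <stdint.h>
#include <string.h>

/* Packed on-disk layout, similar in spirit to a FAT boot sector.
 * With __attribute__((packed)) there is no padding, so sector_size
 * sits at offset 11 -- not 2-byte aligned. */
struct boot_sector {
        uint8_t         jump[3];
        uint8_t         oem_id[8];
        uint16_t        sector_size;            /* offset 11 */
        uint8_t         cluster_size;
} __attribute__((packed));

/* If the compiler emits a plain 16-bit load here (as discussed above),
 * this traps or returns garbage on ARM: */
uint16_t
sector_size_unsafe (const struct boot_sector* bs)
{
        return bs->sector_size;
}

/* Always safe: copy the bytes into an aligned temporary.  memcpy()
 * makes no alignment assumptions, so the compiler emits whatever
 * access pattern the target can handle. */
uint16_t
sector_size_safe (const struct boot_sector* bs)
{
        uint16_t        sector_size;

        memcpy (&sector_size, &bs->sector_size, sizeof sector_size);
        return sector_size;
}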

> Most ARMs out there can trap unaligned accesses, and the kernel will
> then fix up the access so that it's done only with aligned loads and
> stores.  However, this must be enabled by hand by writing the correct
> value into /proc/cpu/alignment (under Linux).  And at least until
> 2.6.11 or so, the ARM alignment trap handler blindly assumed little
> endian byte ordering, so doing an unaligned load on a big endian ARM
> (most ARMs can run in either big endian or little endian mode) with
> alignment fixups enabled would give you a byteswapped value.

Ouch!

> Whenever the linux kernel does a load that it suspects might be not
> properly aligned, it uses a function called get_unaligned(), which
> then uses aligned accesses, instead of dereferencing the pointer
> directly.  There is no such function for stores, though.
> 
> There's also a kernel option to have unaligned userspace accesses
> send a fatal signal to the offending process, so it would be easy
> for me to test patches.  (Just shout if you need (temporary) shell
> access to an ARM box.)

It sounds to me like the best solution (sketched below) is to:
 * complain if ARM && kernel<2.6.11
 * set /proc/cpu/alignment on ARM
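
Something like this, untested, is what I have in mind for those two
steps.  The real thing would report through libparted's exception
mechanism rather than stderr, and the value written to
/proc/cpu/alignment should be checked against
Documentation/arm/mem_alignment -- I'm assuming "2" selects fixup mode:

#include <stdio.h>
#include <sys/utsname.h>

#if defined (__arm__)
static void
arm_alignment_setup (void)
{
        struct utsname  uts;
        int             major = 0, minor = 0, patch = 0;
        FILE*           fp;

        /* Complain about kernels older than 2.6.11, whose alignment
         * fixups assume little-endian byte order. */
        if (uname (&uts) == 0)
                sscanf (uts.release, "%d.%d.%d", &major, &minor, &patch);
        if (major < 2
            || (major == 2 && (minor < 6 || (minor == 6 && patch < 11))))
                fprintf (stderr,
                         "Warning: kernels before 2.6.11 may fix up "
                         "unaligned accesses incorrectly on big-endian "
                         "ARM.\n");

        /* Ask the kernel to fix up unaligned accesses from now on. */
        fp = fopen ("/proc/cpu/alignment", "w");
        if (fp) {
                fputs ("2\n", fp);
                fclose (fp);
        }
}
#endif /* __arm__ */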

Doing the necessary bit-bashing decreases the maintainability of Parted
for other platforms.  (And besides, it's rather tedious/difficult work.)
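
To be clear about what the bit-bashing would involve: every on-disk
field access would have to go through helpers along the lines of the
kernel's get_unaligned()/put_unaligned() -- untested sketch:

#include <stdint.h>
#include <string.h>

/* Userspace stand-ins for the kernel's get_unaligned()/put_unaligned().
 * memcpy() carries no alignment assumptions, so the compiler is free
 * to emit byte accesses where the target requires them. */
static inline uint16_t
get_unaligned_16 (const void* p)
{
        uint16_t        v;

        memcpy (&v, p, sizeof v);
        return v;
}

static inline void
put_unaligned_16 (uint16_t v, void* p)
{
        memcpy (p, &v, sizeof v);
}

The helpers themselves are trivial; the tedious part is hunting down
every direct dereference in the FAT code and converting it (and keeping
it converted).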

Do any ARM users care about FAT support?

Cheers,
Andrew




