bug#39774: guix incorrectly says "No space left on device"
From: Jesse Gibbons
Subject: bug#39774: guix incorrectly says "No space left on device"
Date: Tue, 25 Feb 2020 07:59:05 -0700
User-agent: Evolution 3.32.4
On Mon, 2020-02-24 at 22:15 -0500, Julien Lepiller wrote:
> On 24 February 2020 22:01:45 GMT-05:00, Jesse Gibbons <address@hidden> wrote:
> > I have a laptop with two drives. A few days ago, when I ran `df -h`,
> > it output:
> > Filesystem  Size  Used  Avail  Use%  Mounted on
> > none         16G     0    16G    0%  /dev
> > /dev/sdb1   229G  189G    29G   87%  /
> > /dev/sda1   458G  136G   299G   32%  /gnu/store
> > tmpfs        16G     0    16G    0%  /dev/shm
> > none         16G   64K    16G    1%  /run/systemd
> > none         16G     0    16G    0%  /run/user
> > cgroup       16G     0    16G    0%  /sys/fs/cgroup
> > tmpfs       3.2G   16K   3.2G    1%  /run/user/983
> > tmpfs       3.2G   60K   3.2G    1%  /run/user/1001
> >
> > As you can see, /dev/sda1 is the drive mounted on /gnu/store.
> > Everything in the store is written to it, and it has plenty of
> > space available.
> >
> > Guix sometimes says there is "No space left on device". It always
> > happens when I run `guix gc --optimize`, and it sometimes happens
> > when I call `guix pull` or `guix upgrade`. When guix pull or guix
> > upgrade fails with this message, I can free up more space by
> > deleting ~/.cache and emptying my trash, and then it works.
> >
> >
> > Today I have also seen this happen while trying to upgrade a large
> > profile. It said it could not build anything because there was no
> > more disk space, even after I cleaned up /dev/sdb1 to 40% use. It
> > finally recognized the free disk space after I called guix gc and it
> > deleted a few of the dependencies needed for the upgrades. But it
> > didn't take long to trigger this bug again. Here's the new output of
> > `df -h`:
> >
> > Filesystem  Size  Used  Avail  Use%  Mounted on
> > none         16G     0    16G    0%  /dev
> > /dev/sdb1   229G   86G   131G   40%  /
> > /dev/sda1   458G  182G   253G   42%  /gnu/store
> > tmpfs        16G     0    16G    0%  /dev/shm
> > none         16G   80K    16G    1%  /run/systemd
> > none         16G     0    16G    0%  /run/user
> > cgroup       16G     0    16G    0%  /sys/fs/cgroup
> > tmpfs       3.2G   24K   3.2G    1%  /run/user/983
> > tmpfs       3.2G   12K   3.2G    1%  /run/user/1000
> > tmpfs       3.2G   60K   3.2G    1%  /run/user/1001
> >
> > Any clues why this happens and what can be done to fix it? Could it
> > be related to the fact that /dev/sdb1 is only 229G, while the total
> > space used on / and /gnu/store together is more than that?
> >
> > -Jesse
>
> There could be two explanations: you've run out of inodes or the
> filesystem that was out of space is not the one you think (maybe it
> was during a build and your /tmp is a tmpfs?). Try `df -i`.
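> A quick way to test both hypotheses (a sketch; it assumes findmnt
> from util-linux is available, and any path can stand in for /tmp):
>
> df -i /gnu/store    # inode usage on the filesystem holding the store
> findmnt /tmp        # prints a line only if /tmp is its own mount;
>                     # empty output means /tmp lives on /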
~$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
none 4.0M 525 4.0M 1% /dev
/dev/sdb1 15M 77K 15M 1% /
/dev/sda1 30M 29M 1015K 97% /gnu/store
tmpfs 4.0M 1 4.0M 1% /dev/shm
none 4.0M 47 4.0M 1% /run/systemd
none 4.0M 4 4.0M 1% /run/user
cgroup 4.0M 11 4.0M 1% /sys/fs/cgroup
tmpfs 4.0M 13 4.0M 1% /run/user/983
tmpfs 4.0M 24 4.0M 1% /run/user/1001
tmpfs 4.0M 1 4.0M 1% /run/user/1000
That makes sense now. /dev/sda1 (mounted on /gnu/store) was out of
inodes. Is there a way to increase the maximum number of inodes a
partition can use? Or perhaps divide the store among multiple
partitions?
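
For what it's worth, on ext2/3/4 the inode count is fixed when the
filesystem is created; tune2fs can report it but not grow it, so raising
the limit means recreating the filesystem. A minimal sketch, assuming
/gnu/store sits on an ext4 /dev/sda1 as shown above (the -i ratio is
just an example value, and mkfs destroys existing data, so the store
would have to be backed up or repopulated afterwards):

~$ tune2fs -l /dev/sda1 | grep -i 'inode count'   # current fixed limit
~$ mkfs.ext4 -i 8192 /dev/sda1   # one inode per 8 KiB of space,
                                 # twice the usual 16 KiB default

Alternatively, filesystems that allocate inodes dynamically, such as
XFS or Btrfs, sidestep the fixed limit entirely.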