From: Janneke Nieuwenhuizen
Subject: Switch to fresh pruned rumpkernel archive? [WAS: Re: [PATCH rumpkernel] prune.sh: Remove ~1.1G of currently unused bits.]
Date: Tue, 27 Jun 2023 11:06:15 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Damien Zammit writes:

Hi!

> On 20/6/23 06:00, Janneke Nieuwenhuizen wrote:
>> The rumpkernel archive is ridiculously large.
[..]
>> Because the patches are so big I'm only sharing the prune.sh script that
>> will create some 'prune: ...' commits.
>
> I took your prune.sh script and modified it to just remove files.
> I could not easily see if there were more things to prune.
>
> I then rebuilt the tree from scratch by adding the files from the
> existing repo at each upstream commit to a blank repo, and running the
> prune.sh script before committing each upstream commit in the new repo.
> Then I imported the rest of the patches, preserving authorship.
> Finally I ran git gc.  I think everything is rebuilt as before, but the
> bare repo is only 114M now.
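
For reference, a minimal sketch of the rebuild loop described above (the
repository paths, branch name and exact prune.sh invocation are my
assumptions, not the actual commands used) could look something like this:

#!/bin/sh
# Replay each upstream commit into a fresh repository, pruning the tree
# before every commit so the unused files never enter the new history.
set -e

UPSTREAM=../rumpkernel-upstream   # existing repo with upstream history (assumed path)
git init rumpkernel-pruned        # blank repo to rebuild into (assumed name)
cd rumpkernel-pruned

for rev in $(git -C "$UPSTREAM" rev-list --reverse master); do
    # Start from an empty working tree, then extract that revision's files.
    find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
    git -C "$UPSTREAM" archive "$rev" | tar -xf -
    sh ../prune.sh                # drop the currently unused ~1.1G of files
    git add -A
    git commit --quiet --allow-empty \
        --author="$(git -C "$UPSTREAM" log -1 --format='%an <%ae>' "$rev")" \
        --date="$(git -C "$UPSTREAM" log -1 --format='%aI' "$rev")" \
        -m "$(git -C "$UPSTREAM" log -1 --format='%B' "$rev")"
done

# Import the remaining patches with authorship intact, then repack.
# git am ../patches/*.patch
git gc --aggressive --prune=now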

That's beautiful.  This will help us a lot, especially since native Guix
builds on the Hurd struggle with the size of the current archive.

> Samuel, can you please see if this new repository is suitable to
> replace the debian rumpkernel:
>
> http://git.zammit.org/rumpkernel-debian.git

As mentioned on IRC, this (master) works for me.  \o/

It would be much appreciated if you would switch to this new archive!

> I have added 3 extra commits in develop branch (that perhaps could
> also be merged to master).

(I haven't tested the ACPI-enabling patch, but it's easy to skip that
one if it doesn't work for me.)

Greetings,
Janneke

-- 
Janneke Nieuwenhuizen <janneke@gnu.org>  | GNU LilyPond https://LilyPond.org
Freelance IT https://www.JoyOfSource.com | Avatar® https://AvatarAcademy.com


