bug-hurd

Re: proc leaking


From: Samuel Thibault
Subject: Re: proc leaking
Date: Wed, 1 Nov 2023 19:17:42 +0100
User-agent: NeoMutt/20170609 (1.8.3)

Samuel Thibault, on Wed 01 Nov 2023 16:06:57 +0100, wrote:
> Samuel Thibault, on Wed 01 Nov 2023 15:35:00 +0100, wrote:
> > Samuel Thibault, on Wed 01 Nov 2023 13:14:17 +0100, wrote:
> > > Samuel Thibault, on Wed 01 Nov 2023 01:50:40 +0100, wrote:
> > > > Samuel Thibault, on Tue 31 Oct 2023 04:40:43 +0100, wrote:
> > > > > (it looks like there are memory leaks in proc, its vminfo keeps
> > > > > increasing).
> > > > 
> > > > It seems 64-bit-specific: the program below makes proc leak memory,
> > > > 100 vminfo lines at a time. Possibly __mach_msg_destroy doesn't
> > > > properly parse the messages to be destroyed, so that in the error
> > > > case the server leaks non-inline data? Flavio, perhaps you have an
> > > > idea?
> > > 
> > > I don't think we have the kernel-to-user equivalent of
> > > adjust_msg_type_size? So we end up pushing twice as much data as
> > > needed to userland for port arrays?
> > 
> > I found and fixed the allocation issue in the kernel.
> 
> It seems proc is still leaking, but on the heap this time. This is not
> 64-bit-specific; the same simple reproducer triggers it:
> 
> while [  "$(echo -n `echo a` )" = a ] ; do : ; done
> 
> or more simply:
> 
> while true ; do echo $(echo -n $(echo a)) > /dev/null ; done

I tracked it a bit; it seems that libports is not always cleaning
structures from the proc class. Below is the tracing we get, for
instance with the while loop above: alloc counts allocations of the pi
structure, free counts releases from the proc server's point of view,
and clean counts the actual cleanup done by libports. I made proc
print the counters whenever one of them crosses a hundred boundary:

proc: alloc 651 free 600 clean 520
proc: alloc 700 free 648 clean 568
proc: alloc 731 free 679 clean 600
proc: alloc 751 free 700 clean 620
proc: alloc 800 free 748 clean 668
proc: alloc 831 free 779 clean 700
proc: alloc 851 free 800 clean 720
proc: alloc 900 free 848 clean 768
proc: alloc 931 free 879 clean 800
proc: alloc 951 free 900 clean 820
proc: alloc 1000 free 948 clean 868
proc: alloc 1031 free 979 clean 900
proc: alloc 1051 free 1000 clean 920
proc: alloc 1100 free 1048 clean 968
[...]
proc: alloc 2251 free 2200 clean 2120
proc: alloc 2300 free 2248 clean 2168
proc: alloc 2331 free 2279 clean 2200
proc: alloc 2351 free 2300 clean 2220
proc: alloc 2400 free 2348 clean 2268
proc: alloc 2431 free 2379 clean 2300
proc: alloc 2451 free 2400 clean 2320
proc: alloc 2500 free 2448 clean 2368
proc: alloc 2551 free 2500 clean 2368
proc: alloc 2600 free 2548 clean 2368
proc: alloc 2651 free 2600 clean 2368
[...]
proc: alloc 3400 free 3348 clean 2368
proc: alloc 3451 free 3400 clean 2368
proc: alloc 3500 free 3448 clean 2368
proc: alloc 3551 free 3500 clean 2368
proc: alloc 3600 free 3548 clean 2368

I.e. after a few seconds the cleaning stops. I stopped the loop there,
waited a few seconds, restarted it, and got:

proc: alloc 3649 free 3597 clean 2400
proc: alloc 3651 free 3600 clean 2402
proc: alloc 3700 free 3648 clean 2450
proc: alloc 3749 free 3697 clean 2500
proc: alloc 3751 free 3700 clean 2502
proc: alloc 3800 free 3748 clean 2550
proc: alloc 3849 free 3797 clean 2600
proc: alloc 3851 free 3800 clean 2602
proc: alloc 3900 free 3848 clean 2650
proc: alloc 3949 free 3897 clean 2700
proc: alloc 3951 free 3900 clean 2702
proc: alloc 4000 free 3948 clean 2750

I.e. it resumes cleaning properly, but after some time the cleaning
stops again. Also, if I restart the loop too quickly, the cleaning
doesn't resume at all. So it looks like the cleaning work somehow gets
jammed.
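
For reference, here is roughly what the instrumentation behind these
counters looks like. This is only a minimal sketch, not the actual
change to proc: the helper names are hypothetical, and a real
multithreaded server would want atomic increments.

#include <stdio.h>

static unsigned long alloc_count, free_count, clean_count;

/* Print the three counters whenever one of them crosses a multiple
   of one hundred.  */
static void
maybe_report (unsigned long old, unsigned long new)
{
  if (old / 100 != new / 100)
    fprintf (stderr, "proc: alloc %lu free %lu clean %lu\n",
             alloc_count, free_count, clean_count);
}

/* Hypothetical hook, called where the pi structure is allocated.  */
static void
count_alloc (void)
{
  unsigned long old = alloc_count++;
  maybe_report (old, alloc_count);
}

/* Hypothetical hook, called where proc drops its reference.  */
static void
count_free (void)
{
  unsigned long old = free_count++;
  maybe_report (old, free_count);
}

/* Hypothetical hook, called from the clean routine of the proc port
   class, i.e. when libports actually destroys the structure.  */
static void
count_clean (void)
{
  unsigned long old = clean_count++;
  maybe_report (old, clean_count);
}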

Could it be that proc is getting flooded with dead-port notifications?
That's not many procs, but still. Maybe Sergey has an idea?
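
For context, a dead-port notification here means a
MACH_NOTIFY_DEAD_NAME message: a server asks the kernel to notify it
when a port it holds becomes a dead name. A minimal sketch of such a
request (not proc's actual code) might look like:

#include <mach.h>
#include <mach/notify.h>

/* Ask the kernel to send a MACH_NOTIFY_DEAD_NAME message to NOTIFY
   when PORT becomes a dead name in our IPC space.  Assumes we hold a
   right on PORT and a receive right behind NOTIFY.  */
static kern_return_t
request_dead_name (mach_port_t port, mach_port_t notify)
{
  mach_port_t previous = MACH_PORT_NULL;
  kern_return_t err;

  err = mach_port_request_notification (mach_task_self (), port,
                                        MACH_NOTIFY_DEAD_NAME, 0,
                                        notify,
                                        MACH_MSG_TYPE_MAKE_SEND_ONCE,
                                        &previous);
  /* Release any previously registered notification port.  */
  if (!err && MACH_PORT_VALID (previous))
    mach_port_deallocate (mach_task_self (), previous);
  return err;
}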

Samuel


