

From: Alex Bennée
Subject: Re: [Qemu-devel] [RFC v2 5/5] cputlb: dynamically resize TLBs based on use rate
Date: Tue, 09 Oct 2018 17:34:20 +0100
User-agent: mu4e 1.1.0; emacs 26.1.50

Emilio G. Cota <address@hidden> writes:

> On Tue, Oct 09, 2018 at 15:54:21 +0100, Alex Bennée wrote:
>> Emilio G. Cota <address@hidden> writes:
>> > +    if (new_size == old_size) {
>> > +        return;
>> > +    }
>> > +
>> > +    g_free(env->tlb_table[mmu_idx]);
>> > +    g_free(env->iotlb[mmu_idx]);
>> > +
>> > +    /* desc->n_used_entries is cleared by the caller */
>> > +    desc->n_flushes_low_rate = 0;
>> > +    env->tlb_mask[mmu_idx] = (new_size - 1) << CPU_TLB_ENTRY_BITS;
>> > +    env->tlb_table[mmu_idx] = g_new(CPUTLBEntry, new_size);
>> > +    env->iotlb[mmu_idx] = g_new0(CPUIOTLBEntry, new_size);
>
> For the iotlb we can use g_new, right?
>
> iotlb[foo][bar] is only checked after having checked tlb_table[foo][bar].
> Otherwise tlb_flush would also flush the iotlb.
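[A minimal sketch of that lookup ordering, with illustrative stand-in types rather than the real cputlb internals -- the point being that a miss on tlb_table means iotlb is never dereferenced, so a flush that invalidates tlb_table makes any stale iotlb contents unreachable and zero-initialisation unnecessary:]

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for CPUTLBEntry / CPUIOTLBEntry */
typedef struct { uint64_t addr_read; } TLBEntry;
typedef struct { uint64_t addr;      } IOTLBEntry;

#define TLB_INVALID (~(uint64_t)0)

/* The fast path compares vaddr against tlb_table[idx] first;
 * iotlb[idx] is only read after that comparison hits.  A flush
 * therefore only needs to invalidate tlb_table -- stale iotlb
 * entries are unreachable, so g_new (no zeroing) suffices. */
int tlb_hit(const TLBEntry *tlb, const IOTLBEntry *iotlb, size_t idx,
            uint64_t vaddr, uint64_t *io_out)
{
    if (tlb[idx].addr_read != vaddr) {
        return 0;               /* miss: iotlb[idx] is never touched */
    }
    *io_out = iotlb[idx].addr;  /* only reached on a hit */
    return 1;
}
```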
>
>> I guess the allocation is a big enough stall there is no point either
>> pre-allocating or using RCU to clean-up the old data?
>
> I tried this. Turns out not to make a difference, because (1) we only
> resize on flushes, which do not happen that often, and (2) we
> size up aggressively, but the shrink rate is more conservative. So
> in the end, it's a drop in the ocean. For instance, bootup+shutdown
> requires 100 calls to g_new+g_free -- at ~300 cycles each, that's
> about 30us out of ~8s of execution time.
>
>> Given this is a new behaviour it would be nice to expose the occupancy
>> of the TLBs in "info jit" much like we do for TBs.
>
> The occupancy changes *very* quickly, so by the time the report is out,
> the info is stale. So I'm not sure that's very useful.

Hmm, do I mean occupancy or utilisation? I guess I want to get an idea of
how much of the TLB has been used and how much is empty, never-to-be-used
space. In theory, as the TLB size tends towards the guest's working set,
our TLB turnover should be of the order of the guest's TLB re-fill rate?
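[For what it's worth, a snapshot-style occupancy count could look like the sketch below. This is a hypothetical "info jit"-style helper, not the series' approach -- the quoted patch tracks desc->n_used_entries incrementally rather than scanning:]

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { uint64_t addr_read; } TLBEntry;
#define TLB_INVALID (~(uint64_t)0)

/* Walk one mmu_idx's table and report the fraction of live entries.
 * A scan like this is only a point-in-time snapshot, which is why
 * the reported number goes stale as quickly as Emilio notes. */
static double tlb_occupancy(const TLBEntry *tlb, size_t n_entries)
{
    size_t used = 0;
    for (size_t i = 0; i < n_entries; i++) {
        if (tlb[i].addr_read != TLB_INVALID) {
            used++;
        }
    }
    return (double)used / n_entries;
}
```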

> The TLB size changes less often, but reporting on it is not obvious,
> since we have NB_MMU_MODES sizes per CPU. Say we have 20 CPUs, what should
> we report? A table with 20 * NB_MMU_MODES cells? I dunno.

I guess not. Although I suspect some MMU_MODES are more interesting than
others. I'm hoping the usage of EL3-related modes is negligible if we
haven't booted with secure firmware, for example.

>
>> Reviewed-by: Alex Bennée <address@hidden>
>
> Thanks!
>
>               Emilio


--
Alex Bennée


