Re: [Qemu-devel] [PATCH v3 4/4] intel_iommu: implement mru list for iotlb


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v3 4/4] intel_iommu: implement mru list for iotlb
Date: Fri, 14 Jul 2017 12:43:05 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Thu, Jul 13, 2017 at 08:49:44PM +0300, Michael S. Tsirkin wrote:
> On Wed, Jul 12, 2017 at 04:13:43PM +0800, Peter Xu wrote:
> > It is not wise to discard the entire IOTLB cache when the cache size
> > reaches its maximum, but that's what we do now. A slightly better (but
> > still simple) way to do this is to throw away only the least recently
> > used cache entry.
> 
> Not wise how? Slower? It seems simpler.

When the cache is full, the cached items are still valid, but we just
throw them away. I agree it is definitely simpler, but is that really
wise?

> 
> > This patch implements an MRU list algorithm for the VT-d IOTLB. The
> > main logic is to maintain a Most Recently Used list of the IOTLB
> > entries. The hash table is still used for lookup, though a new list
> > field is added to each IOTLB entry for the MRU list. Each active IOTLB
> > entry is both in the hash table s->iotlb and linked into the MRU list
> > at s->iotlb_head. The hash gives fast lookup, and the MRU list tells
> > us which cache entries are still hot.
> 
> Helps how? Faster?  What's missing here is a report of the performance
> impact.

Not faster; the hash is already there. This patch only tries to fix
the problem I mentioned above (throwing away valid entries when the
cache is full). As far as I tested (all simple cases), it does not
show a performance boost, since it's not easy to trigger a cache-full
condition (our cache size is 1024, not to mention that the current
Intel IOMMU driver does very rapid global flushes). I just think it
makes sense, and maybe it can help in the future when IOTLB flushes
are not that frequent.

If we still require performance numbers to merge this patch, I think
we can set it aside until the day we find it useful.

Thanks,

-- 
Peter Xu


