[Qemu-devel] [Bug 1587535] Re: Incorrect MAS1_TSIZE_SHIFT in ppce500_spin.c causes incorrectly sized TLB.


From: Aaron Larson
Subject: [Qemu-devel] [Bug 1587535] Re: Incorrect MAS1_TSIZE_SHIFT in ppce500_spin.c causes incorrectly sized TLB.
Date: Thu, 30 Jun 2016 13:20:15 -0000

Patch accepted.

Commit title is:

Eliminate redundant and incorrect function booke206_page_size_to_tlb

** Changed in: qemu
       Status: New => Fix Committed

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1587535

Title:
  Incorrect MAS1_TSIZE_SHIFT in ppce500_spin.c causes incorrectly sized
  TLB.

Status in QEMU:
  Fix Committed

Bug description:
  When an e500 PPC machine is booted with multiple cores, the non-boot
  cores are started via the spin table.  ppce500_spin.c:spin_kick()
  calls mmubooke_create_initial_mapping() to create a 64 MB TLB entry,
  but the TLB entry actually created covers only 256 KB.

  The root cause is that the function computing the TSIZE value for the
  TLB entry, booke206_page_size_to_tlb(), assumes the MAS1.TSIZE
  encoding of the earlier e500 cores, in which the page size is
  4^TSIZE * 1 KB.  The result is then used by
  mmubooke_create_initial_mapping() with MAS1_TSIZE_SHIFT, but
  MAS1_TSIZE_SHIFT is defined for the later encoding, in which the page
  size is 2^TSIZE * 1 KB, i.e. a TSIZE field shift of 7 rather than 8.
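
  To make the mismatch concrete, here is a minimal stand-alone sketch of
  the arithmetic for a 64 MB request (not QEMU source; my_ctz32() is a
  hypothetical local stand-in for QEMU's ctz32()):

    #include <stdint.h>
    #include <stdio.h>

    /* Local stand-in for QEMU's ctz32(): count trailing zero bits
     * (valid for nonzero input). */
    static unsigned my_ctz32(uint32_t value)
    {
        return (unsigned)__builtin_ctz(value);
    }

    int main(void)
    {
        uint64_t size = 64ULL * 1024 * 1024;    /* requested 64 MB mapping */

        /* What the buggy helper returns: TSIZE in the 4^TSIZE * 1 KB
         * encoding. */
        unsigned tsize = my_ctz32((uint32_t)(size >> 10)) >> 1;  /* 16 >> 1 = 8 */

        /* The value is then placed in MAS1 at MAS1_TSIZE_SHIFT (7), where it
         * is interpreted in the 2^TSIZE * 1 KB encoding. */
        uint64_t effective = 1024ULL << tsize;  /* 2^8 * 1 KB = 256 KB */

        printf("TSIZE=%u -> %llu KB mapped instead of %llu KB\n",
               tsize, (unsigned long long)(effective >> 10),
               (unsigned long long)(size >> 10));
        return 0;
    }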

  Simply changing MAS1_TSIZE_SHIFT from 7 to 8 is not appropriate since
  the macro is used elsewhere.

  Removing the ">>1" from:

  > static inline hwaddr booke206_page_size_to_tlb(uint64_t size)
  > {
  >     return ctz32(size >> 10) >> 1;
  > }

  and adding an appropriate comment is the workaround I used:

  > static inline hwaddr booke206_page_size_to_tlb(uint64_t size)
  > {
  >     /* The returned TSIZE uses the 2^TSIZE * 1 KB encoding that
  >      * MAS1_TSIZE_SHIFT = 7 assumes. */
  >     return ctz32(size >> 10);
  > }
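
  With the ">>1" removed, the same 64 MB request yields TSIZE = 16, and
  2^16 * 1 KB = 64 MB, so the TLB entry created by
  mmubooke_create_initial_mapping() has the intended size.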

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1587535/+subscriptions


