>
> Note that drop/add is always paired (i.e. the guest never sees an
> unmapped area), and we always map the full 64k even though cirrus code
> manages each 32k bank individually. It looks optimal... we're probably
> not testing the same thing (either qemu or guest code).
This is what my instrumentation revealed:
map_linear_vram_bank 0
map 0 (actually perform the mapping)
map_linear_vram_bank 1
map 1
4 a0000 0 7fe863a62000 1 (KVM_SET_USER_MEMORY_REGION)
4 a0000 10000 7fe863a72000 1
run (enter guest)
map_linear_vram_bank 0
map 0
map_linear_vram_bank 1
map 1
4 a0000 0 7fe863a72000 1
4 a0000 10000 7fe863a62000 1
run
map_linear_vram_bank 0
map 0
map_linear_vram_bank 1
map 1
4 a0000 0 7fe863a62000 1
run
map_linear_vram_bank 0
map 0
map_linear_vram_bank 1
map 1
run
So we suddenly get out of sync and enter the guest with an unmapped vram
segment. It takes a long time (in number of map changes) until the
region becomes mapped again.