qemu-devel

Re: [Qemu-devel] Single 64bit memory transaction instead of two 32bit memory transaction


From: Adnan Khaleel
Subject: Re: [Qemu-devel] Single 64bit memory transaction instead of two 32bit memory transaction
Date: Thu, 01 Sep 2011 14:16:55 -0500

I had asked this question a year ago and managed a temporary workaround for doing single 64-bit read/write operations, but now I'm looking for a more complete solution.

Is there any way we can prevent QEMU from breaking up 64-, 128- and 256-bit XMM or YMM accesses into smaller chunks, and have them issued as a single transaction of the original width? This is not an issue for reads/writes to the processor's memory, but it is for an I/O device attached over PCI. One hack is that I could accumulate the writes as they happen, but I have no way of knowing whether the writes come from the same instruction.
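
The rough shape of the hack I have in mind is below. MyDevState, issue_32bit_write() and issue_64bit_write() are made-up names, it assumes the little-endian low-then-high split the softmmu helpers generate, and it still cannot prove the two halves came from one instruction:

/* Hypothetical sketch only: buffer a 32-bit MMIO write that lands on an
 * 8-byte-aligned offset and, if the very next write hits addr + 4, hand
 * both halves to the device as a single 64-bit transaction. */
typedef struct MyDevState {
    target_phys_addr_t pending_addr;   /* aligned base of the buffered half */
    uint32_t pending_lo;               /* buffered low 32 bits */
    int have_pending;                  /* non-zero if a low half is waiting */
} MyDevState;

/* Stand-ins for the device back end; not existing QEMU functions. */
static void issue_32bit_write(MyDevState *s, target_phys_addr_t addr,
                              uint32_t val);
static void issue_64bit_write(MyDevState *s, target_phys_addr_t addr,
                              uint64_t val);

static void mydev_mmio_writel(void *opaque, target_phys_addr_t addr,
                              uint32_t val)
{
    MyDevState *s = opaque;

    if (s->have_pending) {
        if (addr == s->pending_addr + 4) {
            /* Second half arrived: merge and issue one 64-bit write. */
            issue_64bit_write(s, s->pending_addr,
                              ((uint64_t)val << 32) | s->pending_lo);
            s->have_pending = 0;
            return;
        }
        /* Unrelated write: flush the buffered half on its own first. */
        issue_32bit_write(s, s->pending_addr, s->pending_lo);
        s->have_pending = 0;
    }

    if ((addr & 7) == 0) {
        /* Could be the low half of a split 64-bit access: hold it back. */
        s->pending_addr = addr;
        s->pending_lo = val;
        s->have_pending = 1;
    } else {
        issue_32bit_write(s, addr, val);
    }
}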

Thanks

AK

Re: [Qemu-devel] Single 64bit memory transaction instead of two 32bit memory transaction


From: Blue Swirl
Subject: Re: [Qemu-devel] Single 64bit memory transaction instead of two 32bit memory transaction.
Date: Tue, 9 Nov 2010 17:57:28 +0000

> Legacy. Patches have been submitted to add 64 bit IO handlers. But
> there have been other discussions to change the whole I/O interface. 

> On Mon, Nov 8, 2010 at 11:27 PM, Adnan Khaleel <address@hidden> wrote:
>> In the file exec.c:

>> The memory write/read functions are declared as arrays of 4 entries, where index values 0, 1 and 2 correspond to the 8-, 16- and 32-bit write and read functions respectively.

>> CPUWriteMemoryFunc *io_mem_write[IO_MEM_NB_ENTRIES][4];
>> CPUReadMemoryFunc *io_mem_read[IO_MEM_NB_ENTRIES][4];

>> Is there any reason why we can't extend this to include 64-bit writes and reads by increasing the array size? As it stands, 64-bit reads are handled as two separate 32-bit reads, e.g. in softmmu_template.h:

>> static inline DATA_TYPE glue(io_read, SUFFIX)(target_phys_addr_t physaddr,
>>                                               target_ulong addr,
>>                                               void *retaddr)
>> {
>> :
>>    res = io_mem_read[index][2](io_mem_opaque[index], physaddr);
>>    res |= (uint64_t)io_mem_read[index][2](io_mem_opaque[index], physaddr + 4) << 32;
>> :
>>    return res;
>> }

>> I'm sure this is happening in other places as well. Is there a reason for this, or could we arbitrarily increase the array size (within limits, of course)?

>> Thanks

>> AK
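
For illustration, a rough sketch of what a 64-bit entry and the corresponding io_read path might look like follows. The CPUReadMemoryFunc64 typedef, the io_mem_read64 table and the #if structure are assumptions only; the patches mentioned above may differ, not least because the existing CPUReadMemoryFunc prototype returns uint32_t and so cannot simply be reused for a fourth index.

/* Hypothetical sketch only: a separate table for 64-bit reads, since the
 * existing CPUReadMemoryFunc prototype returns uint32_t. */
typedef uint64_t CPUReadMemoryFunc64(void *opaque, target_phys_addr_t addr);

CPUReadMemoryFunc64 *io_mem_read64[IO_MEM_NB_ENTRIES];

/* Inside glue(io_read, SUFFIX) in softmmu_template.h, the 64-bit case could
 * then issue a single access instead of two 32-bit ones (the index/opaque
 * setup is unchanged and elided, as in the quote above): */
#if SHIFT == 3
    res = io_mem_read64[index](io_mem_opaque[index], physaddr);
#else
    res = io_mem_read[index][SHIFT](io_mem_opaque[index], physaddr);
#endif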
