[Qemu-devel] Re: [questions] savevm|loadvm
From: Wenhao Xu
Subject: [Qemu-devel] Re: [questions] savevm|loadvm
Date: Thu, 1 Apr 2010 12:35:27 -0700
Does the current qemu-kvm (qemu v0.12.3) use the in-kernel irqchip and PIT of
KVM? I cannot find any KVM_CREATE_IRQCHIP or KVM_CREATE_PIT in the qemu
code.
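(For reference, this is the KVM API I mean; a minimal, hypothetical sketch of
how an in-kernel irqchip and PIT would be requested from the kernel. The
helper name is made up, only the ioctls themselves are the real interface:)

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vm_fd is the VM file descriptor returned by the KVM_CREATE_VM ioctl */
static int create_in_kernel_irqchip_and_pit(int vm_fd)
{
    /* ask the kernel to emulate the PIC/IOAPIC/local APIC in the kernel */
    if (ioctl(vm_fd, KVM_CREATE_IRQCHIP) < 0)
        return -1;
    /* ask the kernel to emulate the i8254 PIT in the kernel */
    if (ioctl(vm_fd, KVM_CREATE_PIT) < 0)
        return -1;
    return 0;
}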
Concerning the interface between qemu and kvm, I have the following questions:
1. How do the irqchip and PIT of KVM collaborate with the IRQ and PIT
emulation in QEMU? As far as I can see, qemu-kvm still uses qemu's IRQ and
PIT emulation, doesn't it?
2. For returns from KVM to QEMU, I don't understand the meaning of two of the exit reasons (a rough sketch of the dispatch follows this list):
case KVM_EXIT_EXCEPTION:
What exceptions cause KVM to exit with this reason?
default:
dprintf("kvm_arch_handle_exit\n");
ret = kvm_arch_handle_exit(env, run);
Which exit reasons fall through to the default case?
3. How can a DMA completion interrupt the CPU while qemu-kvm is still
running guest code inside KVM? (See the interrupt-injection sketch below.)
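(Regarding question 2, this is how I currently read the dispatch after
KVM_RUN returns; a simplified, stand-alone sketch with a made-up helper name,
not the exact qemu-0.12.3 code in kvm_cpu_exec():)

#include <stdio.h>
#include <linux/kvm.h>

/* Simplified dispatch on run->exit_reason after KVM_RUN returns; in qemu
 * the real work happens in kvm_cpu_exec() and kvm_arch_handle_exit(). */
static int handle_exit_sketch(struct kvm_run *run)
{
    switch (run->exit_reason) {
    case KVM_EXIT_IO:
        /* the guest executed a port I/O instruction that has to be
         * emulated by a userspace device model */
        return 0;
    case KVM_EXIT_MMIO:
        /* the guest touched a guest-physical address backed by an
         * emulated device instead of RAM */
        return 0;
    case KVM_EXIT_IRQ_WINDOW_OPEN:
        /* the guest can now accept an interrupt injection from userspace */
        return 0;
    case KVM_EXIT_EXCEPTION:
        /* the kernel could not resolve a guest exception itself and
         * punts it to userspace */
        fprintf(stderr, "unhandled guest exception\n");
        return -1;
    default:
        /* everything not listed above goes to the architecture-specific
         * handler (kvm_arch_handle_exit in qemu) */
        fprintf(stderr, "unhandled exit reason %u\n", run->exit_reason);
        return -1;
    }
}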
I am still working on the patch, but these questions are really preventing
me from moving forward. Thanks in advance for any hints you can give me.
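(Regarding question 3, my tentative understanding, stated as a guess: the
emulated device finishes the DMA in the iothread, kicks the vcpu thread out
of KVM_RUN with a signal, and the interrupt is injected on the next entry.
With an in-kernel irqchip the injection itself would be a KVM_IRQ_LINE
ioctl, roughly like this made-up helper; vm_fd and irq_num are placeholders:)

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: assert a guest interrupt line from userspace device
 * emulation (e.g. on DMA completion), assuming the in-kernel irqchip. */
static int raise_guest_irq(int vm_fd, unsigned int irq_num)
{
    struct kvm_irq_level irq_level;

    irq_level.irq   = irq_num;
    irq_level.level = 1;  /* assert; a second call with level = 0 deasserts */
    return ioctl(vm_fd, KVM_IRQ_LINE, &irq_level);
}

Without the in-kernel irqchip (the qemu.git case), my understanding is that
the interrupt is instead queued in qemu and injected with KVM_INTERRUPT once
a KVM_EXIT_IRQ_WINDOW_OPEN exit says the guest can take it, which is exactly
the exit I am hooking below.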
The following is the code I have written so far:
The main idea is to synchronize the CPU state and enter emulation mode when
switching from kvm to the emulator. I only do the switch when the exit
reason is KVM_EXIT_IRQ_WINDOW_OPEN.
However, I hit the following problem: whenever the switch from kvm to qemu
happens, the pending interrupt request in qemu causes qemu to enter SMM
mode, which is definitely a bug.
This is the code that tries to synchronize the CPU state when the IRQ window
is open and then switch back to the QEMU emulator mode.
--- qemu-0.12.3/kvm-all.c 2010-02-23 12:54:38.000000000 -0800
+++ de-0.12.3/kvm-all.c 2010-04-01 12:23:07.000000000 -0700
@@ -577,7 +577,6 @@
{
struct kvm_run *run = env->kvm_run;
int ret;
-
dprintf("kvm_cpu_exec()\n");
do {
@@ -641,7 +640,8 @@
dprintf("kvm_exit_unknown\n");
break;
case KVM_EXIT_FAIL_ENTRY:
- dprintf("kvm_exit_fail_entry\n");
+ printf("kvm_exit_fail_entry\n");
+ exit(1);
break;
case KVM_EXIT_EXCEPTION:
dprintf("kvm_exit_exception\n");
@@ -670,7 +670,31 @@
env->exit_request = 0;
env->exception_index = EXCP_INTERRUPT;
}
-
+
+ /* de, start emulation */
+ if(env->ask_for_emulation){
+ //if( (env->eflags & IF_MASK) && (run->ready_for_interrupt_injection)){
+ if(run->exit_reason == KVM_EXIT_IRQ_WINDOW_OPEN ){
+ int saved_vm_running = vm_running;
+ vm_stop(0);
+ if (kvm_arch_get_registers(env)) {
+ printf("Fatal: kvm vcpu get registers failed\n");
+ abort();
+ }
+ env->kvm_state->regs_modified = 1;
+ env->is_in_emulation = 1;
+ target_ulong pc_start = env->segs[R_CS].base + env->eip;
+ /* int flags = env->hflags | (env->eflags & (IOPL_MASK | TF_MASK | RF_MASK | VM_MASK)); */
+ /* int code32 = !((flags >> HF_CS32_SHIFT) & 1); */
+ printf("start emulation at pc: 0x%x, eip:0x%x\n", pc_start, env->eip);
+ /* target_disas(stderr, pc_start, 10, code32); */
+ /* env->interrupt_request = 0; */
+ printf("tr type:%d\n", (env->tr.flags >> DESC_TYPE_SHIFT) & 0xf);
+
+ if(saved_vm_running)
+ vm_start();
+ }
+ }
return ret;
}
ask_for_emulation and is_in_emulation are added to CPU_COMMON:
--- qemu-0.12.3/cpu-defs.h 2010-02-23 12:54:38.000000000 -0800
+++ de-0.12.3/cpu-defs.h 2010-03-28 15:17:14.000000000 -0700
@@ -197,6 +197,8 @@
const char *cpu_model_str; \
struct KVMState *kvm_state; \
struct kvm_run *kvm_run; \
- int kvm_fd;
+ int kvm_fd; \
+ int ask_for_emulation; /* ask for emulation if 1 */ \
+ int is_in_emulation; /* is in emulation */
When is_in_emulation is set, don't enter kvm again in cpu_exec:
--- qemu-0.12.3/cpu-exec.c 2010-02-23 12:54:38.000000000 -0800
+++ de-0.12.3/cpu-exec.c 2010-03-30 00:38:01.000000000 -0700
#if defined(TARGET_I386)
- if (!kvm_enabled()) {
- /* put eflags in CPU temporary format */
- CC_SRC = env->eflags & (CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
- DF = 1 - (2 * ((env->eflags >> 10) & 1));
- CC_OP = CC_OP_EFLAGS;
- env->eflags &= ~(DF_MASK | CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
- }
+ if (!kvm_enabled() || env->is_in_emulation) {
+ /* put eflags in CPU temporary format */
+ CC_SRC = env->eflags & (CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
+ DF = 1 - (2 * ((env->eflags >> 10) & 1));
+ CC_OP = CC_OP_EFLAGS;
+ env->eflags &= ~(DF_MASK | CC_O | CC_S | CC_Z | CC_A | CC_P | CC_C);
+ }
- if (kvm_enabled()) {
- kvm_cpu_exec(env);
- longjmp(env->jmp_env, 1);
- }
+ if (kvm_enabled() && !env->is_in_emulation) {
+ kvm_cpu_exec(env);
+ longjmp(env->jmp_env, 1);
+ }
Commands "start_emulation", "stop_emulation" and "is_emulation" are added to the monitor:
--- qemu-0.12.3/monitor.c 2010-02-23 12:54:38.000000000 -0800
+++ de-0.12.3/monitor.c 2010-03-28 15:16:18.000000000 -0700
@@ -56,6 +56,9 @@
#include "json-streamer.h"
#include "json-parser.h"
#include "osdep.h"
+/* de */
+static void do_start_emulation(Monitor *mon, const QDict *qdict, QObject **ret_data)
+{
+ CPUState *env = mon_get_cpu();
+ env->ask_for_emulation = 1;
+ monitor_printf(mon, "Starting emulation...\n");
+}
+
+static void do_stop_emulation(Monitor *mon, const QDict *qdict, QObject **ret_data)
+{
+ CPUState *env = mon_get_cpu();
+ env->ask_for_emulation = 0;
+ monitor_printf(mon, "Stop emulation\n");
+}
+
+static void do_is_emulation(Monitor *mon, const QDict *qdict, QObject **ret_data)
+{
+ CPUState *env = mon_get_cpu();
+ if(env->is_in_emulation)
+ monitor_printf(mon, "Emulating now\n");
+ else
+ monitor_printf(mon, "Virtualizing now\n");
+}
+
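(The handlers above also need to be registered in the monitor command table;
in qemu 0.12 that table is generated from qemu-monitor.hx, so I expect the
entry to look roughly like the sketch below, with the exact field names to
be double-checked against 0.12.3:)

    {
        .name       = "start_emulation",
        .args_type  = "",
        .params     = "",
        .help       = "switch the current vcpu from KVM to TCG emulation",
        .user_print = monitor_user_noop,
        .mhandler.cmd_new = do_start_emulation,
    },

with matching entries for "stop_emulation" and "is_emulation".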
Thanks for the help, guys. I appreciate your taking the time to help!
regards,
Wenhao
On Thu, Apr 1, 2010 at 1:42 AM, Avi Kivity <address@hidden> wrote:
> On 03/31/2010 02:31 PM, Juan Quintela wrote:
>>
>> Wenhao Xu<address@hidden> wrote:
>>
>>>
>>> Hi, Juan,
>>> I am fresh to both QEMU and KVM. But so far, I notice that QEMU
>>> uses "KVM_SET_USER_MEMORY_REGION" to set memory region that KVM can
>>> use and uses cpu_register_physical_memory_offset to register the same
>>> memory to QEMU emulator, which means QEMU and KVM use the same host
>>> virtual memory. And therefore the memory KVM modified could be
>>> directly reflected to QEMU. I don't quite understand the different
>>> memory layout problem between the two. So I don't know exactly what
>>> you mean to "fix" it?
>>>
>>
>> 1st: the qemu-kvm.git and qemu.git memory layouts are different, even in
>> qemu.git's kvm mode (yes, it is complex and weird).
>>
>> kvm vs qemu initialization is different. Expecting to stop kvm and run
>> tcg from there is not going to work. I guess it would need a lot of
>> changes, but I haven't looked at it myself.
>>
>
> I don't think it's so far-fetched. In fact, early versions of qemu-kvm
> switched between emulation and virtualization (emulate until 64-bit mode,
> and also emulate mmio instructions in qemu).
>
> Even today, all memory initialization is done via generic qemu mechanisms.
> So long as you synchronize all state (pit, irqchip, registers) you should
> be fine.
>
>>> As for why switching is useful: I am a master's student doing a course
>>> project. What I am arguing is that QEMU could potentially be useful for
>>> many kinds of instrumentation analysis, but it is a bit slow. So by
>>> combining it with KVM, when the OS runs to some place we are interested
>>> in, we switch to QEMU emulator mode, do the analysis, and then switch
>>> back.
>>>
>>
>> The idea is good, but I don't think it is _so_ easy at this point. tcg
>> and kvm basically live in different worlds. Not sure what needs to be
>> done to bring them back in sync.
>>
>
> cpu_synchronize_state()
>
> --
> error compiling committee.c: too many arguments to function
>
>
--
~_~