From: Paul Brook
Subject: Re: [Qemu-devel] Re: [RESEND][PATCH] gdbstub: Add vCont support
Date: Fri, 16 Jan 2009 17:05:18 +0000
User-agent: KMail/1.9.9

On Friday 16 January 2009, Jan Kiszka wrote:
> Paul Brook wrote:
> >>  a) Modeling cpus as processes buys us nothing compared to threads given
> >>     the fact that we cannot provide a stable memory mapping to the gdb
> >>     frontend anyway. (*)
> >
> > I disagree. The process model fits perfectly. The whole point is that
> > each CPU has its own virtual address space. Separate address spaces is
> > the fundamental difference between a process and a thread.
> >
> > If you have a multicore system where several cores share an MMU[1] then
> > modelling these as threads probably makes sense.
> >
> > Don't confuse this with OS awareness in GDB (i.e. implementing a
> > userspace debug environment via a bare metal kernel level debug
> > interface). That's a completely separate issue.
>
> You snipped away my argument under (*):
> > (*) Process abstraction is, if used at all, guest business. At best we
> > could try to invent (likely OS-specific) heuristics to identify
> > identical mappings and call them processes. I don't see a reasonable
> > benefit compared to the expected effort and unreliability.

You're doing exactly what I said not to do. You are confusing a GDB process 
model (i.e. separate address spaces) with actual OS processes.

The point is that each CPU has its own distinct virtual address space. GDB 
assumes that all threads use the same memory map. To handle these distinct 
address spaces you need to model CPUs as processes. Your thread hack is 
dependent on, and limited to, address ranges that happen to be mapped the same 
on all CPUs. This may be sufficient for simple linux kernel debugging, but 
fails very rapidly when you start trying to do anything clever.
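
To make this concrete: with gdb's multiprocess extension to the remote 
protocol, the stub could expose each CPU as a separate process, and the traffic 
would look roughly like this (pid/tid numbers purely illustrative, packet 
framing and checksums elided, whitespace added for readability):

  qSupported reply includes:  multiprocess+
  qfThreadInfo reply:         m p1.1, p2.1    (one "process" per CPU, one thread each)
  resume only CPU 2:          vCont;c:p2.1    (threads not named in any action stay stopped)

That's the sort of thing vCont plus multiprocess ids is there for, and none of 
it requires guessing what the guest OS considers a process.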

> You cannot simply assign some CPU n to a virtual process n because the
> mapping you see on that CPU at some point in time may next pop up on
> some other CPU - and vice versa. You start to make gdb believe it sees
> consistent processes while you have no clue at all about the scheduling
> of your guest. So what do you gain?

Mapping current CPU state to an OS process is the debugger's problem. A 
sufficiently smart debugger is able to interpret the OS process table, and 
match that to the address space that's present on a particular CPU. I don't 
think GDB is capable of doing this right now, but I've seen it done in other 
debuggers.

> I can tell you what you lose: If gdb starts to think that there are n
> separate processes, you will have to set separate breakpoints as well.
> Bad. And if some breakpoint assigned to process n suddenly hits you on
> process (CPU) m, only chaos is the result. E.g. kvm would re-inject it
> as guest-originated. Even worse.

You need to use multi-process breakpoints.

An OS aware debugger will take care of migrating userspace breakpoints when OS 
context switches occur.

> There was zero practical need for this so far. But maybe you can
> describe a _concrete_ debugging use case and the related steps in gdb -
> based on a potential process interface. I'm surely willing to learn
> about it if there is really such a great practical improvement feasible.
> What will be the user's benefit?

In embedded systems it's fairly common to run entirely separate operating 
systems on different cores. e.g. you may run linux on one core and an RTOS or 
dedicated codec engine on the other.

The rest of my examples are hypothetical. I don't have actual examples, but 
they're things that I know people are interested in, and in some cases are 
actively working on. GDB already has a python plugin interface, though 
currently there aren't any useful hooks for doing OS awareness stuff.
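
To sketch what such a hook could look like (purely illustrative: the struct and 
field names are Linux's, the helper names and the shape of the code are made up 
around gdb's Python value/type interface, nothing here exists today):

# Illustrative sketch only; assumes vmlinux symbols are loaded in gdb.
import gdb

def container_of(ptr, struct_type, member):
    """Recover a pointer to the enclosing struct from a pointer to one of its members."""
    offset = int([f.bitpos for f in struct_type.fields() if f.name == member][0]) // 8
    char_p = gdb.lookup_type("char").pointer()
    return (ptr.cast(char_p) - offset).cast(struct_type.pointer())

def guest_tasks():
    """Walk the guest kernel's task list, starting from init_task."""
    task_type = gdb.lookup_type("struct task_struct")
    init_task = gdb.parse_and_eval("init_task")
    head = init_task["tasks"].address
    node = init_task["tasks"]["next"]
    while int(node) != int(head):
        yield container_of(node, task_type, "tasks")
        node = node.dereference()["next"]

def task_for_pgd(pgd):
    """Match a CPU's page-table base (e.g. CR3 on x86) against each task's mm->pgd."""
    for task in guest_tasks():
        mm = task.dereference()["mm"]
        if int(mm) and int(mm.dereference()["pgd"]) == int(pgd):
            return task
    return None

The interesting bit is the last function: given the page-table base currently 
loaded on a CPU, the script can tell which OS process that CPU is running, 
which is exactly the "match the process table to the address space" step above.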

I'm not sure offhand how highmem mappings work. Are these per-cpu or global?
My guess is linux makes them global, but I can certainly imagine them being 
per-cpu.

A kernel with a 4g/4g split is a fairly pathological case. Your assumption 
that there is a useful common mapping across all cpus fails and the debugger 
absolutely needs to be able to specify which virtual address space it is 
accessing. A relatively small amount of debugger OS awareness is required to 
figure out whether a particular core is in the kernel or user space, but 
that's not QEMU's problem.
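
As an (again illustrative) example of what that means on the wire: once each 
CPU is its own process, gdb can say whose address space a memory access should 
go through before issuing the read:

  Hg p2.1         select CPU 2 for subsequent register/memory operations
  m c0000000,4    the stub resolves this read through CPU 2's page tables

In the thread-only model gdb assumes every thread sees the same memory, so it 
never has a reason to make that distinction in the first place.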

Paul



