From: Johan Ceuppens
Subject: Re: Painter Fuzzy Node in github
Date: Thu, 18 Dec 2014 10:52:59 +0100
On 18 Dec 2014, at 09:07, Johan Ceuppens <address@hidden> wrote:
> Probably the system as it now stands for CoreX etc. maps a window. X11 (sub)windows also get mapped, with or without the main window, AFAIK. X11 paints once per map cycle. If you paint in X you have to loop constantly, repainting the screen (GS also has a root window, again a window). If you do not paint in X11, your window's look-and-feel or subelements will not be updated on the screen.
I'm not sure that you understand how drawing works with the Cocoa model.
You have two hierarchies:
- The view hierarchy, which corresponds to nested view objects in the window. Views are natural units for decomposition.
- The CoreAnimation layer hierarchy, which is similar, except that some views will render into their parent's layer and some will contain more than one layer. Layers are natural units for caching.
Layers are more or less equivalent to textures. Once they are rendered, they are pushed to the GPU and remain there.
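The render-once-then-reuse behaviour of layers can be sketched in a few lines. This is a hypothetical model, not the CoreAnimation API: the `Layer` type, its byte-array "texture", and `invalidate` are illustrative stand-ins for a GPU-resident texture that survives until the layer's contents are marked dirty.

```swift
// Hypothetical sketch of layer caching. The names Layer, texture(render:),
// and invalidate() are illustrative, not real CoreAnimation API.
final class Layer {
    private var cachedTexture: [UInt8]? // stands in for a GPU-resident texture
    var renderCount = 0                 // counts how often we actually drew

    // Re-render only when the cache is empty (the layer was invalidated);
    // otherwise hand back the cached texture unchanged.
    func texture(render: () -> [UInt8]) -> [UInt8] {
        if let tex = cachedTexture { return tex }
        renderCount += 1
        let tex = render()
        cachedTexture = tex
        return tex
    }

    func invalidate() { cachedTexture = nil }
}

let layer = Layer()
_ = layer.texture { [1, 2, 3] }   // first access: renders
_ = layer.texture { [1, 2, 3] }   // cached: no re-render
layer.invalidate()                // contents marked dirty
_ = layer.texture { [1, 2, 3] }   // renders again
print(layer.renderCount)          // prints "2"
```

The point of the design is the asymmetry: rendering is expensive and happens rarely; reading the cached texture is cheap and happens every frame.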
When a view is redrawn, two things happen:
First, if the view is marked as needing display, its -drawRect: method is invoked, possibly multiple times for different rectangles within the known dirty region (on X11 the XDAMAGE extension can be used to track that region). This draws into the underlying layer. On OS X this can happen in parallel if the views are marked as supporting concurrent drawing (the main reason not to do so is when data views share a datasource and the synchronisation overhead would outweigh the benefit). Any updated layers are then shipped to the GPU.
Once the layers are updated, the GPU then composites them.
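The needs-display cycle described above can be sketched as follows. This is a toy model, assuming a simplified `DirtyView` with `setNeedsDisplay(in:)` and `displayIfNeeded()` methods that mimic the shape (not the signatures) of the real NSView API; `Rect` and the recording of draw calls are illustrative.

```swift
// Hypothetical sketch of the needs-display cycle; DirtyView and Rect are
// illustrative stand-ins, not the real NSView API.
struct Rect: Equatable { var x, y, w, h: Int }

final class DirtyView {
    private var dirtyRects: [Rect] = []
    var drawnRects: [Rect] = []   // records what drawRect was asked to paint

    func setNeedsDisplay(in rect: Rect) { dirtyRects.append(rect) }

    // One pass of the display cycle: invoke drawRect once per dirty
    // rectangle, then clear the dirty region (the layer is now valid).
    func displayIfNeeded() {
        for r in dirtyRects { drawRect(r) }
        dirtyRects.removeAll()
    }

    private func drawRect(_ rect: Rect) { drawnRects.append(rect) }
}

let view = DirtyView()
view.displayIfNeeded()           // nothing dirty: drawRect never runs
view.setNeedsDisplay(in: Rect(x: 0, y: 0, w: 10, h: 10))
view.setNeedsDisplay(in: Rect(x: 20, y: 0, w: 5, h: 5))
view.displayIfNeeded()           // drawRect invoked once per dirty rectangle
print(view.drawnRects.count)     // prints "2"
```

Note that marking a view dirty is cheap and can happen many times between display passes; the actual drawing is batched into the next pass.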
When the mouse moves, no redraw events happen because the mouse pointer lives in a separate compositing context. If you expose a part of a window, no -drawRect: invocations need to happen if the CA layers are still valid; they are simply composited by the GPU. This can be very cheap, because you are compositing a few dozen textures on a processor designed to composite a few million textures per second.
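The expose case can be sketched as a compositor that only draws layers whose cached contents are invalid. All names here (`CachedLayer`, `Compositor`, `display`) are hypothetical; the sketch just shows that a second display pass over still-valid layers costs a composite but zero draw calls.

```swift
// Hypothetical sketch: exposing a window composites the cached layer
// textures without invoking drawRect. Names are illustrative, not AppKit.
final class CachedLayer {
    let name: String
    private(set) var valid = false
    private(set) var drawCalls = 0
    init(name: String) { self.name = name }
    // Draw only if the cached contents are invalid.
    func drawIfInvalid() { if !valid { drawCalls += 1; valid = true } }
}

final class Compositor {
    private(set) var compositePasses = 0
    // Ensure every layer has valid contents, then composite them (the
    // GPU blends the cached textures; no CPU-side drawing happens here).
    func display(_ layers: [CachedLayer]) {
        for layer in layers { layer.drawIfInvalid() }
        compositePasses += 1
    }
}

let layers = [CachedLayer(name: "titlebar"), CachedLayer(name: "content")]
let compositor = Compositor()
compositor.display(layers)   // first display: each layer draws once
compositor.display(layers)   // expose: layers still valid, composite only
let totalDraws = layers.reduce(0) { $0 + $1.drawCalls }
print(totalDraws, compositor.compositePasses)   // prints "2 2"
```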
With this in mind, which part do you think can be sped up by applying AI techniques?