Re: [Chicken-users] gui api design -- some thought -- long mail


From: Shawn Rutledge
Subject: Re: [Chicken-users] gui api design -- some thought -- long mail
Date: Sat, 10 Feb 2007 22:34:47 -0700

On 2/7/07, minh thu <address@hidden> wrote:
> The idea is to have the drawing and the event handling of
> a widget happen at the same time. It thus needs a decent
> frame rate. It means no callbacks: the code handling the
> event is written alongside the code displaying the widget
> (and the code testing for the existence of the event). It
> can also mean no data structures for the widgets (just
> as you can draw a rectangle with OpenGL without having
> to store a Rectangle object; the code is the object).

> Google for references. See also 'A Zero Memory Widget
> Library'. ZMW uses a data structure while traversing the
> code; it's necessary if you want to calculate the size
> of a widget which is made of other (unknown) widgets.

Hmmm.  I found that site

http://www710.univ-lyon1.fr/~exco/ZMW/

and this too:

http://www.mollyrocket.com/forums/viewtopic.php?t=134&sid=7d8bf52fd86aa485f77874e287e1df24

This sounds weird at first but I guess it's normal for a game
developer, because when the whole screen is a virtual world, every
little action can potentially result in a change to every pixel,
right?  Do game developers usually make OpenGL calls to clear the
screen, create vertices, set properties, and render the scene, every
frame, and then start over and repeat as often as possible, and then
hopefully brag about how many FPS they get in spite of all that?  Or
do they typically expect the OpenGL implementation to hold a lot of
data, like pre-defined shapes that can be re-used?  If it's mostly the
latter that's going on, then you could say that you didn't implement a
GUI in "zero memory" by just having the one big "do" function; you
just moved the memory from the application to the GPU's memory, or the
memory that Mesa needs to hold all that stuff if that's what you're
using, right?  And the ZMW guy says that there is memory involved, but
it's all on the stack.  (Not sure if he means "the stack" or that he
created his own stack structure just for that.)

But in 2D UIs you usually have the concept of "damaged areas" that
need repainting, so as to avoid re-drawing pixels that didn't change.
To me, that idea has always been integral to the idea of writing an
efficient GUI.  Maybe it will become an obsolete idea though as GUIs
get more complex.
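
(For what it's worth, the core of damage tracking is tiny. Here is a
C sketch keeping a single bounding box, where real toolkits keep
region lists; all the names are invented:)

    #include <stdbool.h>

    typedef struct { int x, y, w, h; } Rect;

    static Rect damage;           /* bounding box of the damaged area */
    static bool damaged = false;

    /* Mark a rectangle as needing repaint; grow the box to cover it. */
    void invalidate(Rect r)
    {
        if (!damaged) { damage = r; damaged = true; return; }
        int x2 = damage.x + damage.w, y2 = damage.y + damage.h;
        if (r.x + r.w > x2) x2 = r.x + r.w;
        if (r.y + r.h > y2) y2 = r.y + r.h;
        if (r.x < damage.x) damage.x = r.x;
        if (r.y < damage.y) damage.y = r.y;
        damage.w = x2 - damage.x;
        damage.h = y2 - damage.y;
    }

    /* The paint pass redraws only the damaged box, not the whole screen. */
    void repaint_if_needed(void (*paint)(Rect))
    {
        if (damaged) { paint(damage); damaged = false; }
    }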

The guy in the video also mentions that another big problem is you
cannot do layout management separately from calling the big "do"
function.

So I'm not yet convinced that, even if you want a really minimal UI,
there is anything wrong with at least separating the painting
code from the event-handling code, and putting those functions plus
some metadata into a data structure.  Scheme would just make it
easier.  Then you can have a pointer to that structure and be able to
say "that right there is the widget which you see on the screen".

Another thing I accepted as gospel at the beginning, for PC
applications, is MVC (model, view, controller) separation.  (Qt is
good for implementing that kind of paradigm.)  But the strictest
implementation of MVC is a bit cumbersome sometimes.  For
applications that really involve direct manipulation of graphical
objects (like diagramming tools), I'm still not quite sure which is
better: duplicating some information from the model in the view,
just so you can hand a whole object to your rendering system ("here
it is, go stick it on the screen at this location", and then forget
about it until it either gets damaged or the user drags it somewhere
else); keeping the views as thin as possible and getting everything
from the model; or (horrors) actually letting the model drive the
painting at some level - even a very high level where 2D drawing
commands are abstracted away.  The diagramming scenario is where I
think an immediate-mode UI doesn't seem to fit.  But then again a
lot of games are like that too - there are objects that the user
picks up and manipulates, or uses as tools to manipulate other
things; so how much of that is really "model", and how much of it is
vertices stored in the GPU's memory that were created once and then
manipulated, rather than being re-drawn directly each time?

Actually at work I just wrote my first truly embedded GUI, without
the support of any OS or graphics library.  (I've been an
object-oriented developer for more than a decade: Java and C++, and
that one job where I did Scheme.)  I have a 140x32 graphical VFD
connected via SPI bus to a microcontroller, and some buttons and an
encoder wheel on PIOs.  (But the VFD has a character generator, so
it's not as much work as dealing with a dumb frame buffer.  I can
send it commands to set the cursor position, draw characters, and
draw arbitrary blocks of pixels that are the same height as a
character.  That's about it.)

My implementation has a main loop which polls the buttons and
encoder to see if any of them changed state, and I have callbacks
(function pointers) for each button and the encoder, which I call
each time the state changes.  For a while I had the button polling
driven by a hardware timer, once every millisecond, but the trouble
is that if a callback takes more than a millisecond, something bad
is bound to happen (like the processor resetting), and I wanted to
be able to write as much code as I want in a callback (send stuff to
the screen, directly frob some other hardware, whatever).  Also, for
the SPI bus I use a ring buffer, so I can generate stuff that needs
to go to the screen faster than the bus is running, and the send
function may therefore need to be a blocking one (wait until the
ring buffer has enough space, so the outgoing data is not lost).  My
rule is that any code which can block needs to be in the main loop
rather than in an interrupt handler.  Now I use two timers: one
fires every 1 ms and drives arbitrary registered timer callbacks;
the other takes care of sending bytes out the SPI bus and some other
quick periodic hardware operations.

Painting is not done "as often as possible" but rather only when
some piece of code somewhere decides to send something to the
screen.  Any piece of code can put stuff in the ring buffer to go to
the screen, and draw at any location.  It can even send the clear
command and start over, if it needs to.  But I have the concept of
"modes."  A mode is a struct containing callbacks that are invoked
when the mode is entered, when it is exited, and when it needs to be
repainted.  There is one active mode at all times, and there is a
modeUpdate() function which calls the active mode's repaint
callback.  In practice this is how most of the screen painting gets
done.
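
In outline it's something like this (a C sketch with made-up names;
the real firmware is messier):

    typedef struct {
        void (*onEnter)(void);
        void (*onExit)(void);
        void (*repaint)(void);
    } Mode;

    static Mode *activeMode;

    /* Called whenever something decides the screen should be redrawn;
       in practice this is how most painting happens. */
    void modeUpdate(void)
    {
        if (activeMode && activeMode->repaint) activeMode->repaint();
    }

    void modeSet(Mode *m)
    {
        if (activeMode && activeMode->onExit) activeMode->onExit();
        activeMode = m;
        if (m->onEnter) m->onEnter();
        modeUpdate();
    }

    /* Stub standing in for the real driver code. */
    static void pollButtonsAndEncoder(void) { /* read PIOs, fire callbacks */ }

    int main(void)
    {
        for (;;) {
            /* Anything that can block (e.g. waiting for ring-buffer
               space) lives here, never in an interrupt handler. */
            pollButtonsAndEncoder();
        }
    }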

So I think what I naively re-invented is a kind of immediate-mode
UI, right?  I have only 4K of SRAM on that chip, so I have enough
for a few callbacks but not enough to waste on things like event
queues, excessive buffering, or "view objects" that duplicate
information which really lives somewhere else.  It works pretty well
and the code isn't too ugly.  I didn't set out to make it especially
elegant or re-usable or portable, just to get it done in a hurry
(because every project at my job needs to be done in a big hurry, it
seems).  One big drawback is that you see the screen flicker every
time you clear the screen and repaint, because it's not
double-buffered.  I think that's another reason that 2D UIs usually
repaint only the damaged parts - you don't want to see the screen
cleared and then repainted.  But I think every piece of graphics
hardware ought to have double-buffering in hardware - it just
shouldn't be a big deal these days.  It would have cost an extra 480
bytes of memory to do that with my VFD, but I didn't design the VFD,
so I can't do anything about it.

I suspect people are just re-discovering some old concepts that were
plenty familiar to DOS programmers, but when I was using DOS I wasn't
a very experienced programmer yet.

> What I Don't Like About Callbacks.
>
> a) Put simply, you can't mix GUI events with
> application-specific events.

Well, in systems which have an event queue, sometimes you can put
application events into the queue.  But dispatching of what is in the
queue to the appropriate callback can be handled differently.

A simple alternative is that often you just want to call yourself
later.  That is, register a timer which, when it expires, calls a
function you supply at registration time.  It can be a one-shot or a
repeating timer.
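
(On my little system the registered-timer facility is roughly this,
sketched in C with made-up names:)

    #define MAX_TIMERS 8

    typedef struct {
        unsigned remaining;    /* ticks until it fires (>= 1) */
        unsigned period;       /* 0 = one-shot, else reload value */
        void (*fn)(void);
    } Timer;

    static Timer timers[MAX_TIMERS];

    /* Register fn to run after 'ticks' ticks; period != 0 repeats. */
    int timerRegister(unsigned ticks, unsigned period, void (*fn)(void))
    {
        for (int i = 0; i < MAX_TIMERS; i++) {
            if (!timers[i].fn) {
                timers[i].remaining = ticks ? ticks : 1;
                timers[i].period = period;
                timers[i].fn = fn;
                return i;
            }
        }
        return -1;  /* no free slot */
    }

    /* Driven by the 1 ms hardware timer (or polled from the main loop). */
    void timerTick(void)
    {
        for (int i = 0; i < MAX_TIMERS; i++) {
            if (timers[i].fn && --timers[i].remaining == 0) {
                void (*fn)(void) = timers[i].fn;
                if (timers[i].period) timers[i].remaining = timers[i].period;
                else timers[i].fn = 0;       /* one-shot: free the slot */
                fn();
            }
        }
    }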

> b) (which is related) Since a function call results
> from an event, you can't generate an event whose
> callback needs the end of the event-generating
> computation (i.e. it's not asynchronous).
>
> I'd rather finish the computation which generates
> an event before processing it.

If you generate the event and put it in the queue, it will be
processed later when the code gets around to checking the queue,
right?

> c) Also, say you want to code "When the left mouse button
> is pressed (somewhere other than on my two main widgets),
> make the first widget blue and the second one red."
> (Call this the blue-red example for later reuse.)
> You'd need to register a callback on the background widget.
> This callback needs to access (to be aware of) the two
> other widgets. I don't like this.

Whereas in an immediate-mode UI, the main "do" function is both
finding out that the button is pressed and drawing the blue and red
widgets directly, right?
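
Something like this, I imagine (a C sketch; do_button and get_mouse
are hypothetical immediate-mode primitives that draw and hit-test in
one call):

    typedef enum { GRAY, BLUE, RED } Color;

    /* Hypothetical primitives: draw now, report interaction now.
       do_button() returns 1 if the button was clicked this frame. */
    extern int do_button(int x, int y, int w, int h, Color c);
    extern void get_mouse(int *x, int *y, int *clicked);

    static Color color1 = GRAY, color2 = GRAY; /* app state, not widget state */

    /* Called every frame: tests input and draws in the same pass. */
    void do_frame(void)
    {
        int mx, my, clicked;
        get_mouse(&mx, &my, &clicked);
        (void)mx; (void)my;   /* coordinates unused in this sketch */

        int hit1 = do_button(10, 10, 80, 20, color1);
        int hit2 = do_button(10, 40, 80, 20, color2);

        /* A click that landed on neither widget is a background click. */
        if (clicked && !hit1 && !hit2) { color1 = BLUE; color2 = RED; }
    }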

An alternative way to get this behavior with callbacks would be to
make it possible for every widget to register interest in any kind of
event, whether or not it would seem relevant to that widget.  The
callback can have an extra parameter to indicate the relevance.  So a
normal button registers only for click events on itself, and your
special button registers for clicks anywhere, and is given a "true"
when the click is on it and a "false" if it's somewhere else, along
with coordinates etc.  Or maybe your special button does two
registrations of two different callbacks, so that the parameter is
not necessary...
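
The first variant might look like this (a C sketch; all names
invented):

    #include <stddef.h>

    typedef struct { int x, y; } Click;
    typedef struct Widget Widget;
    typedef void (*ClickHandler)(Widget *w, Click c, int on_me);

    struct Widget {
        int x, y, w, h;
        ClickHandler on_click;
        int wants_all_clicks;  /* 0: only its own clicks; 1: any click */
    };

    static int hit(const Widget *w, Click c)
    {
        return c.x >= w->x && c.x < w->x + w->w &&
               c.y >= w->y && c.y < w->y + w->h;
    }

    /* A normal button only hears its own clicks; the special button
       hears every click, with on_me telling it which kind it got. */
    void dispatch_click(Widget **widgets, size_t n, Click c)
    {
        for (size_t i = 0; i < n; i++) {
            Widget *w = widgets[i];
            int on_me = hit(w, c);
            if (w->on_click && (on_me || w->wants_all_clicks))
                w->on_click(w, c, on_me);
        }
    }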

"My" Idea.

I call it mine but I don't know .. maybe it's as old
as the world ?

- The callback mechanism is exposed and made not mandatory.
- The widgets are aware of events, not the contrary
(see c) above).
- The event facility can be used for application-generated
events.
- The ol'callback way is still available.


> How.
>
> An event queue keeps the not-yet-processed events.
> (For some applications, it might be useful to timestamp
> events so they are dispatched later, or even to keep the
> already-processed events to be able to reverse time.)

That's probably a really good idea.  I think the general trend is
that as memory gets cheaper (RAM and disk both - and I believe they
will be the same thing some day, when nonvolatile RAM is cheap
enough), you can find reasons to keep data around, timestamp it,
take diffs, even build a sort of version control system for some
classes of data, rather than assume that only the latest data is
relevant and let new stuff overwrite old stuff.  In particular, any
text which the user enters himself ought to be considered sacred and
never garbage-collected unless the user asks for it to be deleted.
The book "The Humane Interface" makes this point.

I was thinking a few years ago that a computer's disk ought to be
structured as a giant log, where the data the user creates is
appended to the log, and the data structures which the software
creates to organize that data never duplicate it - they just point
to it.  The user enters a stream of data, and the computer makes
sense of it over time.  You would have built-in version control:
edits are actually a special kind of diff appended to the log, so
you have the new version and the old version at the same time.  Like
CVS or Subversion, but implemented more efficiently.  That is what
early versions of the Xanadu project were about... but there is more
than one way to implement it, and I can't make up my mind which is
the best one (and neither could the Xanadu guys, which is why it
never got done).

Anyway, I'm not sure that mouse events have the same kind of lasting
importance that text does, but in graphical documents, like
diagrams, a persistent history would be nice, with unlimited undo
and the ability to compare different versions.  I want to build
something like that when I get the GUI stuff done.

> For some events, the current state is kept. Example:
> you might not want code to run each time the
> <enter> key is pressed, but only to be able to
> query its state: is it pressed?

In the embedded system I described above, I have a bitmask which
keeps the state of every button.  How else would I detect that one
was just pressed, or just released, in order to generate an event?
A PC keyboard's microcontroller has to do something similar, and
then it sends scan codes across the serial link to the PC.  So on
the receiving side you have to re-create the state of the enter key
from the events which indicate that the state changed.
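
That edge detection is simple enough - something like this C sketch,
where readButtons() and buttonEvent() stand in for the real hardware
read and the dispatch:

    #include <stdint.h>

    extern uint16_t readButtons(void);             /* hypothetical PIO read */
    extern void buttonEvent(int button, int down); /* hypothetical dispatch */

    static uint16_t prevButtons;   /* the bitmask holding current state */

    void pollButtons(void)
    {
        uint16_t now      = readButtons();
        uint16_t pressed  = (uint16_t)(now & ~prevButtons);  /* 0 -> 1 edges */
        uint16_t released = (uint16_t)(~now & prevButtons);  /* 1 -> 0 edges */
        prevButtons = now;

        for (int i = 0; i < 16; i++) {
            if (pressed  & (1u << i)) buttonEvent(i, 1);
            if (released & (1u << i)) buttonEvent(i, 0);
        }
    }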

> The base API lets you test for the existence of an event.
> The test can be blocking or not. It can remove the
> event from the queue or not. (The API has to be
> extended so that an event can be removed automatically
> once every interested widget has received it.)

Are you sure you would want to have events removed from the queue,
if you are depending on them being there so that applications which
started up after those events occurred can still process them?
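
(For concreteness, the test-for-an-event part of your design might
look like this non-blocking C sketch, with the caller choosing
whether the event is consumed; all names invented:)

    #include <stdbool.h>

    typedef struct { int type, x, y; } Event;

    #define QLEN 32
    static Event q[QLEN];
    static int qhead, qtail;          /* qhead == qtail means empty */

    bool event_push(Event e)
    {
        int next = (qtail + 1) % QLEN;
        if (next == qhead) return false;        /* queue full */
        q[qtail] = e;
        qtail = next;
        return true;
    }

    /* Test for the existence of an event of the given type; optionally
       copy it out, and optionally remove it from the queue. */
    bool event_poll(int type, Event *out, bool remove)
    {
        for (int i = qhead; i != qtail; i = (i + 1) % QLEN) {
            if (q[i].type != type) continue;
            if (out) *out = q[i];
            if (remove) {                       /* shift later events down */
                for (int j = i; (j + 1) % QLEN != qtail; j = (j + 1) % QLEN)
                    q[j] = q[(j + 1) % QLEN];
                qtail = (qtail + QLEN - 1) % QLEN;
            }
            return true;
        }
        return false;
    }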

> (This last point lets you write the blue-red program:
> the two widgets are aware of the events.)
>
> At this point, we have access to events. But how (and when)
> is the code accessing the events run? How is the GUI
> event polling done?
>
> Here again we gain some flexibility. You can write
> yourself a loop that calls one API function to
> pump GUI events, then calls the event-testing
> API functions.

Testing the existence of an event is a search, isn't it?  So you
could have potentially hundreds of widgets all doing the same search
over and over again, combing through the whole event queue to find
what is of interest to them - or at least combing through the "tail"
of the event queue (and keeping track of where they left off, so
that at the next polling cycle they know where to start).  This is
why registering a callback is a nice concept: when the event occurs,
the main event handler can immediately build a list of callbacks
that need to be notified, and then call them.  On a multi-processor
system the callbacks could even run simultaneously.  If the red
button expressed interest in receiving all mouse clicks, then it
will be notified at once (practically instantaneously) when the
mouse is clicked, maybe even at the same time the blue button gets
notified.

> Or you can use an API function set
> that lets you register functions to be called regularly
> (independently of events) and then launch an API-
> provided main loop. Or register functions to
> be called only for some specific events.
>
> Yep, we've approached the well-known callback mechanism.

Yeah maybe we're thinking pretty much the same.

> To make it really feel like that, we can provide a
> function which makes the widget generate an event

What does that mean, the widget generating an event?  Usually the user
generates an event by clicking or moving or typing.

> *and* register a callback for that specific event.
>
> By 'specific event', I mean that an event has some kind
> of name. When you register a function, you have to
> say which event names you want it to respond to.
> So the function I talked about in the previous
> paragraph can generate a unique name, so that only *that*
> widget and *that* callback are tied together.

Is a "name" really just a generated ID, or is it some kind of
concatenation of an event ID, a widget ID and a callback?

Would you say that every widget has to create its own callbacks, or
you'd have a globally unique, generated widget ID which is passed as a
parameter to the callbacks, so that the callbacks can service multiple
widgets?  (This is an ancient Java AWT thing - is it OK sometimes to
have one button handler for several buttons, or do you have to create
new ones for each?  If you share the handler, then you have to have
some if's inside to decide which button was pressed.  Worse yet, the
callback included only the string that labeled the button, so the if's
would be doing string comparisons.  But before anonymous inner classes
were added to the language, it was hard to write separate handlers for
every button.  Now both ways are still possible, as far as I know.)
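
(In C the equivalent trick is to pass an ID - or the widget pointer
itself - to one shared handler, so no string comparisons are needed;
a sketch with invented names:)

    #include <stdio.h>

    typedef enum { BTN_OK, BTN_CANCEL, BTN_HELP } ButtonId;

    /* One handler services several buttons; the ID says which one
       fired, so no label-string comparisons are needed. */
    static void on_button(ButtonId id)
    {
        switch (id) {
        case BTN_OK:     puts("ok pressed");     break;
        case BTN_CANCEL: puts("cancel pressed"); break;
        case BTN_HELP:   puts("help pressed");   break;
        }
    }

    /* Hypothetical registration: the toolkit stores the pair and
       calls handler(id) when that button is clicked. */
    extern void buttonSetHandler(ButtonId id, void (*handler)(ButtonId));

    static void setup(void)
    {
        buttonSetHandler(BTN_OK,     on_button);
        buttonSetHandler(BTN_CANCEL, on_button);
        buttonSetHandler(BTN_HELP,   on_button);
    }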



