[freesci-develop] OpenGL picmap renderer


From: James Albert
Subject: [freesci-develop] OpenGL picmap renderer
Date: Sun, 20 Nov 2005 23:55:36 -0500
User-agent: Thunderbird 1.5 (Windows/20051025)

Hi everybody,

I've been putting off mentioning this because I wanted to have working code first, but during the time last year when I wasn't in touch with anybody I was trying to write an OpenGL picmap renderer. I'm only mentioning it now because there's been speculation, as there has been before, about using the z-buffer in place of the priority map. I had a little success, but is there anybody on the mailing list familiar with computational geometry? It could be a big help.

The current FreeSCI algorithm goes something like this, from what I understand:
- Draw lines and points using the exact same methods as the original SCI so as not to break the fills.
- For each fill, fill from the given point to the non-white boundaries (roughly what the sketch below does).
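
Just to illustrate what I mean by the classic fill - this is only a sketch of the idea, not the actual FreeSCI routine, and it only touches the visual map (the real code has to match Sierra's fill behaviour exactly and update the priority/control maps too):

#define PIC_W 320
#define PIC_H 190
#define WHITE 0x0f   /* unfilled visual color in SCI pics */

/* Fill 4-connected white pixels starting at (x, y) with `color`.
   Sketch only -- not the real FreeSCI fill. */
static void classic_fill(unsigned char pic[PIC_H][PIC_W], int x, int y,
                         unsigned char color)
{
    static int sx[PIC_W * PIC_H], sy[PIC_W * PIC_H];
    int top = 0;

    if (x < 0 || x >= PIC_W || y < 0 || y >= PIC_H || pic[y][x] != WHITE)
        return;

    pic[y][x] = color;          /* color at push time so nothing is pushed twice */
    sx[top] = x; sy[top] = y; top++;

    while (top > 0) {
        static const int dx[4] = { 1, -1, 0, 0 };
        static const int dy[4] = { 0, 0, 1, -1 };
        int cx, cy, i;

        top--;
        cx = sx[top]; cy = sy[top];

        for (i = 0; i < 4; i++) {
            int nx = cx + dx[i], ny = cy + dy[i];
            if (nx >= 0 && nx < PIC_W && ny >= 0 && ny < PIC_H
                && pic[ny][nx] == WHITE) {
                pic[ny][nx] = color;
                sx[top] = nx; sy[top] = ny; top++;
            }
        }
    }
}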

A polygon-based algorithm would go something like this (sketched in code below):
- For each line or point, add the necessary geometry data to an internal data map. Each line is actually two triangles, forming a quadrilateral. Each point would have to be a polygon of whatever was drawn by the classic algorithm.
- For each fill, triangulate the current set of geometry data and fill from the current triangle (the one that the given point lies on) to every connected white triangle.
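
To make that a little more concrete, here's roughly the kind of geometry record I have in mind - the names are made up for illustration and don't match my actual code:

#include <math.h>

typedef struct {
    float x, y;
} vertex_t;

typedef struct {
    vertex_t v[3];          /* corner vertices */
    unsigned char visual;   /* visual color, 0x0f == still white/unfilled */
    unsigned char priority; /* becomes the z value at render time */
    unsigned char control;  /* candidate for the stencil buffer / low z bits */
} triangle_t;

/* A line from a to b with the given pixel width becomes two triangles
   forming a thin quad; visual/priority/control are left for the caller. */
static void line_to_quad(vertex_t a, vertex_t b, float width, triangle_t out[2])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = (float)sqrt(dx * dx + dy * dy);
    float nx = 0.0f, ny = 0.0f;
    vertex_t p0, p1, p2, p3;

    if (len > 0.0f) {           /* unit normal, scaled to half the line width */
        nx = -dy / len * width * 0.5f;
        ny =  dx / len * width * 0.5f;
    }

    p0.x = a.x + nx; p0.y = a.y + ny;
    p1.x = a.x - nx; p1.y = a.y - ny;
    p2.x = b.x - nx; p2.y = b.y - ny;
    p3.x = b.x + nx; p3.y = b.y + ny;

    out[0].v[0] = p0; out[0].v[1] = p1; out[0].v[2] = p2;
    out[1].v[0] = p0; out[1].v[1] = p2; out[1].v[2] = p3;
}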

There are, of course, pros and cons to this approach besides a better picture and hardware acceleration:
pros:
- Intrinsic priority and control map support through the z-buffer and stencil buffer. I haven't looked into the stencil buffer other than to speculate about whether it could be used this way. I think it could, but the stencil buffer is more for real-time graphic manipulation, like raycasted shadows, and using the stencil buffer outside of the graphics render - to do conditional calculations in the VM - might be very detrimental. The other option here is to use a higher-bit z-buffer: use the lower bits for the control map and the upper bits for the depth map. This sounds awkward, but as long as objects "jump" between depths, it won't matter if there's extra depth information at each pixel, because it won't affect the render visually (see the sketch after this list).
- Less code to debug, because most of the clipping and other graphics routines are handled by the API.
- Portability: OpenGL can be run on systems even without hardware acceleration.
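
Here's the kind of packing I'm imagining for sharing the z-buffer, purely as a sketch - it assumes a 16-bit depth buffer with priority in the top bits and control in the bottom bits:

/* Sketch: pack SCI priority (0-15) and control (0-15) into one 16-bit depth
   value.  Priority sits in the high bits so it dominates the depth test;
   control rides along in bits that never change the visible ordering as long
   as objects jump between whole priority bands. */
static unsigned short pack_depth(unsigned char priority, unsigned char control)
{
    return (unsigned short)(((priority & 0x0f) << 12) | (control & 0x0f));
}

static unsigned char depth_to_priority(unsigned short depth)
{
    return (unsigned char)((depth >> 12) & 0x0f);
}

static unsigned char depth_to_control(unsigned short depth)
{
    return (unsigned char)(depth & 0x0f);
}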

cons:
- The move to OpenGL could potentially require an entire graphics subsystem overhaul.
- The triangulation step is very time-consuming (see below).
- Although the code may be easier to debug, hardware differences might break portability on some machines (although there's always the software-based fallback).
- There's still the problem of gaps in the fill, but they should be easier to deal with, because instead of dealing with pixels you're dealing with geometry data. My current approach is to add extra lines to the geometry for any triangle smaller than a given size (the size of a gap between pixels), and then fill any unfilled triangles smaller than the gap size with the colors of the neighboring triangles (there's a sketch of that heuristic after this list). The problem with this, though, is filling holes that were *supposed* to be empty, so if anybody has any other ideas, please share them.
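
For the gap heuristic, something along these lines is what I'm picturing (reusing the triangle_t sketch from above; the area threshold and the neighbor lookup are placeholders, not working code from my renderer):

#define GAP_AREA 1.0f   /* roughly one pixel, in pic coordinates */

static float tri_area(const triangle_t *t)
{
    float ax = t->v[1].x - t->v[0].x, ay = t->v[1].y - t->v[0].y;
    float bx = t->v[2].x - t->v[0].x, by = t->v[2].y - t->v[0].y;
    float cross = ax * by - ay * bx;
    return 0.5f * (cross < 0.0f ? -cross : cross);
}

/* Any triangle still white after all fills and smaller than a pixel gap
   inherits the color of a filled neighbor.  neighbor_of() is a hypothetical
   adjacency query. */
static void patch_gaps(triangle_t *tris, int count, int (*neighbor_of)(int tri))
{
    int i;
    for (i = 0; i < count; i++) {
        if (tris[i].visual == 0x0f && tri_area(&tris[i]) < GAP_AREA) {
            int n = neighbor_of(i);
            if (n >= 0 && tris[n].visual != 0x0f)
                tris[i].visual = tris[n].visual;
        }
    }
}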

Here's where I stand right now. I can read the picmaps and draw the lines, but the fills are killing me. My best solution right now is drawing both the OpenGL and classic versions at the same time, and then overlaying just the OpenGL lines.

I took computational geometry in college, but implementing the triangulation algorithms to accommodate the requirements of SCI is daunting. The algorithm would need to quickly triangulate an already partially triangulated set of data (the lines and points are good to go already). Most algorithms only triangulate points, and forcing data into them that wouldn't be part of the regular triangulation usually breaks the whole algorithm. Although a recursive algorithm might work - just start one step up for the currently triangulated portions - my code isn't at the point where I can try that yet.

The other option is using a pre-made triangulation API, and the best one I've found is the one built into the GLU library ("OpenGL Utility Library" - it's part of the OpenGL distribution, but not tied specifically to OpenGL, and it isn't hardware accelerated). There it's called tessellation, because instead of points it takes a polygon and breaks it down into triangles. This isn't quite what's required, but by "cutting out" all of the currently filled portions - including the lines, points, and previously filled polygons - the resulting triangulation would be sufficient (there's a rough sketch of the call sequence below). The problem here is drawing over top of the polygons that were already filled. There'd need to be a separate part of the algorithm that detects when this happens and splits or morphs the current polygon, depending on how the new line polygon intersects it.
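
In case it helps the discussion, here's roughly the call sequence I'd expect to use - only a sketch, with invented contour data standing in for the real fill boundary and cut-outs, and note that the function-pointer casts vary between GLU headers (Windows needs CALLBACK):

#include <GL/gl.h>
#include <GL/glu.h>

/* Tessellate one fill region with one "cut out" hole and draw it.
   With GLU's default odd winding rule, the second contour punches a hole;
   a GLU_TESS_COMBINE callback would also be needed if contours intersect. */
static void tessellate_fill(const GLdouble outer[][3], int n_outer,
                            const GLdouble hole[][3], int n_hole)
{
    GLUtesselator *tess = gluNewTess();
    int i;

    gluTessCallback(tess, GLU_TESS_BEGIN,  (void (*)()) glBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (*)()) glVertex3dv);
    gluTessCallback(tess, GLU_TESS_END,    (void (*)()) glEnd);

    gluTessBeginPolygon(tess, NULL);

    gluTessBeginContour(tess);              /* the fill boundary itself */
    for (i = 0; i < n_outer; i++)
        gluTessVertex(tess, (GLdouble *) outer[i], (GLdouble *) outer[i]);
    gluTessEndContour(tess);

    gluTessBeginContour(tess);              /* an already-filled area, cut out */
    for (i = 0; i < n_hole; i++)
        gluTessVertex(tess, (GLdouble *) hole[i], (GLdouble *) hole[i]);
    gluTessEndContour(tess);

    gluTessEndPolygon(tess);                /* callbacks fire here */
    gluDeleteTess(tess);
}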

Anyway, I went through all this just to get to this question: has anyone ever worked with the GLU tessellation API?

- Jim :-)





