
From: Florian Kainz
Subject: Re: [Openexr-devel] Application requirements for Blender's OpenEXR output
Date: Mon, 14 Mar 2005 15:37:11 -0800
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030314

Gernot Ziegler wrote:
>
> We had a little discussion internally on which output options we should
> support in Blender, and therefore some questions here to the
> post-processing application developers:
>
> .) Color data: Half only, I guess ? Or are unsigned int and Float used,
> too ?

HALF should be good enough for almost all color images.
The quantization step size of HALF is several times smaller
than the smallest difference that humans can perceive.
The range of HALF is larger than the range of values that
occur in most images.

UINT is meant for quantities that are inherently discrete,
for example per-pixel object identifiers ("in this pixel,
object number 314 is visible").
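With the general (non-RGBA) OpenEXR interface, channels of different
pixel types can be mixed in one file.  A sketch (assuming the OpenEXR
library; the channel name "id" is just an illustrative choice):

```cpp
#include <ImfHeader.h>
#include <ImfChannelList.h>
#include <ImfPixelType.h>

using namespace Imf;

Header
makeHeader (int width, int height)
{
    Header hdr (width, height);

    // HALF is sufficient for color data ...
    hdr.channels().insert ("R", Channel (HALF));
    hdr.channels().insert ("G", Channel (HALF));
    hdr.channels().insert ("B", Channel (HALF));

    // ... while UINT suits inherently discrete data,
    // such as per-pixel object identifiers.
    hdr.channels().insert ("id", Channel (UINT));

    return hdr;
}
```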

> .) RGB vs. RGBA: Are space requirements an issue, or is it ok to always
> output RGBA ? Do you need a gray setting ?

Omitting the A channel from an image file saves some file space
and it can make file I/O up to 25% faster because only three
instead of four channels need to be compressed or decompressed.

The RGBA programming interface supports all possible
combinations of the R, G, B and A channels (R, RG, RGB, G, GB,
GBA, etc.).  In addition, a luminance-only channel, Y, is
supported for gray-scale images.  In your program, you can select
the channel combination in the constructor for the RgbaOutputFile
object.
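For example (a sketch assuming the OpenEXR RGBA interface; the file
names are illustrative), the RgbaChannels argument of the constructor
selects which channels are stored in the file:

```cpp
#include <ImfRgbaFile.h>

using namespace Imf;

void
writeExamples (int width, int height)
{
    // RGB only: no alpha channel is written to the file.
    RgbaOutputFile rgb ("color.exr", width, height, WRITE_RGB);

    // Luminance plus alpha, for gray-scale images.
    RgbaOutputFile ya ("gray.exr", width, height, WRITE_YA);
}
```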

> .) Depth data: Float or Half ? Is there some standard for the value range ?
> We currently have near clip=0.0 and far clip=1.0 .

At ILM we output the actual Z values, without normalization.

We often use FLOAT channels for Z data, because the range of the
depth values can be enormous.  For example, in an image that shows
the moon seen through a window, the distance to the window frame
might be 1 meter, but the distance to the moon would be 3e8 meters.

Normalizing the Z values to a zero-to-one range is probably not a good
idea because even with FLOAT pixels you can lose too much precision.
In the "window frame vs. the moon" example, you would probably not be
able to tell whether a nearby object is in front of or behind the window
frame if the Z data were normalized.  With floating-point pixels that
indicate actual depth values, this problem does not occur.

Normalizing Z values also makes depth compositing more difficult.
Given two images with Z channels, you have to consider both images'
near and far values in order to compare the images' depth values.
And if you combine the two images into one, you have to make sure
that you store proper near and far values with the composite image.

I think Blender should save actual Z values, without normalization.
If you need the near and far values, store them as float or double
attributes in the file header.
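A sketch of storing the clip values as custom float attributes
(assuming the OpenEXR library; the attribute names "nearClip" and
"farClip" are examples, not a standard):

```cpp
#include <ImfHeader.h>
#include <ImfFloatAttribute.h>

using namespace Imf;

void
addClipAttributes (Header &hdr, float nearClip, float farClip)
{
    hdr.insert ("nearClip", FloatAttribute (nearClip));
    hdr.insert ("farClip", FloatAttribute (farClip));
}

// Reading one back:
//     float n = hdr.typedAttribute<FloatAttribute>("nearClip").value();
```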

>
> .) Do you use uncompressed (hi-speed), lossless compression, or the
> lossy compression ? (BTW: How do I set the compression mode in the writing
> process ?  )
>

Most of the time we use PIZ compression.

Compression is an attribute in the file header;
you can set it like this:

    Header hdr;
    ...
    hdr.compression() = PIZ_COMPRESSION;
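
Putting it together, a sketch of writing a PIZ-compressed RGBA file
(assuming the OpenEXR library; the file name is illustrative):

```cpp
#include <ImfRgbaFile.h>
#include <ImfHeader.h>
#include <ImfCompression.h>

using namespace Imf;

void
writePiz (const Rgba *pixels, int width, int height)
{
    Header hdr (width, height);
    hdr.compression() = PIZ_COMPRESSION;

    // The Header variant of the constructor carries the
    // compression setting into the file.
    RgbaOutputFile file ("piz.exr", hdr, WRITE_RGBA);
    file.setFrameBuffer (pixels, 1, width);
    file.writePixels (height);
}
```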






