Hi. I've been testing image formats in depth for a few months, but I'm new to OpenEXR.
I'm currently starting to optimize my lossless 16-bit+ compression system.
Some observations:
Remember, if your source material is 16-bit integer, the conversion to half float is quite lossy, although half floats do behave more conservatively under manipulation further down the post pipeline in some regards.
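As a quick illustration of that lossiness, here is a small sketch using NumPy's `float16`, which is the same IEEE 754 binary16 layout as OpenEXR's half (sign bit, 5 exponent bits, 10 mantissa bits). The choice to store raw 16-bit code values directly in a half is just for demonstration: integers are exact only up to 2048, and near the top of the 16-bit range the spacing between representable halves grows to 32.

```python
import numpy as np

# Round-trip a few 16-bit integer code values through binary16.
# Integers above 2048 cannot all be represented: the 10-bit mantissa
# forces rounding to the nearest representable half.
for v in (2047, 2048, 2049, 60000, 60001):
    h = np.float16(v)
    print(v, "->", int(h))
```

Running this shows, for example, that 2049 and 60001 do not survive the round trip, while their neighbours 2048 and 60000 do.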
One thing all formats suffer from: they're always going out of date. Most were invented in the 90s.
My classical Huffman coder beats JPEG 2000, PNG, and PIZ on 48-bit photographic material.
PNG uses Huffman coding and can store 16-bit values, but its compression pipeline only operates on 8-bit bytes. The same goes for TIFF.
All of these are by-products of formats being developed back when computers had 640 KB of RAM.
When you convert 16-bit integers to half float and normalise between 0 and 1, you record only half as many distinct values as when you normalise between -1 and 1 -- with -1..1 you're not wasting the sign bit!
I don't know what the OpenEXR convention says -- to be honest, I'm not sure what the correct normalisation procedure for OpenEXR is. In any case, normalising between 0 and 1 makes my compressor produce files that are significantly smaller than the compressed integer source.
Try normalising all 65536 possible integer values to half floats, then convert them back to integers and print a list of the original inputs and outputs (to a text file). Quite interesting.
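The experiment above can be sketched as follows, again using NumPy's `float16` as a stand-in for OpenEXR half. The mapping `i/65535` for 0..1 and `i/65535*2 - 1` for -1..1 is my assumption about the normalisation; instead of dumping the full 65536-line list, this version just counts distinct halves and round-trip errors for each convention:

```python
import numpy as np

ints = np.arange(65536, dtype=np.uint32)

# Normalise to [0, 1]: the half's sign bit is never used.
h01 = (ints / 65535.0).astype(np.float16)
back01 = np.rint(h01.astype(np.float64) * 65535.0).astype(np.uint32)

# Normalise to [-1, 1]: the sign bit carries an extra bit of information.
hpm = (ints / 65535.0 * 2.0 - 1.0).astype(np.float16)
backpm = np.rint((hpm.astype(np.float64) + 1.0) * 0.5 * 65535.0).astype(np.uint32)

print("distinct halves, 0..1  :", len(np.unique(h01)))
print("distinct halves, -1..1 :", len(np.unique(hpm)))
print("round-trip errors, 0..1  :", int(np.count_nonzero(back01 != ints)))
print("round-trip errors, -1..1 :", int(np.count_nonzero(backpm != ints)))
```

Both conventions lose many of the 65536 codes (the half spacing near the extremes is far coarser than 1/65535), but the -1..1 mapping preserves noticeably more distinct values, which matches the sign-bit argument above.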
Kind regards