freesci-develop

[freesci-develop] Decompressing LSCI resources


From: Jason Douglas
Subject: [freesci-develop] Decompressing LSCI resources
Date: Sat, 17 Dec 2005 17:46:14 -0800

Hi, I'm the lead dev for the FauxINN project, which aims to create a free
implementation of the Sierra Network, which used LSCI. The resources look like
a hybrid of SCI0 and SCI1, and they use different compression codes. Right now
I'm most interested in the CC (character creator) resources, which all seem to
use DCL-EXPLODE (compression code 8). The text resources decompress fine, but
the binary resources (views, aud, etc.) have problems: their decompressed
lengths look OK, but about 20% of the actual data is garbage. The issue is
that in the Huffman token decoder, length > distance in many cases, meaning
the dcl-explode decompressor is asked to read more data than has been decoded
yet (e.g. length=4 and distance=2, where it would be asked to read past the
current writepos).

I thought about modifying tree1.def, but realized that doing so would break
text resource decompression. It seems unlikely that the decompressor checks
whether a resource is text or binary and chooses different Huffman trees, so
I'm wondering whether I'm heading in the wrong direction here...

Does anybody on the list who knows dcl-explode (or the other decompression
algorithms) have an idea what the problem might be? Or can you suggest other
decompression algorithms that might work? dcl-explode definitely seems to be
close: the 80% of the data that decompresses correctly is certainly valid. I
just don't know why the lengths and distances are so far off for binary
resources, and it definitely seems like it's the lengths that are wrong rather
than the distances, since the overall decompressed lengths look correct.

Thanks!
Jason

