Re: NSSound
From: Xavier Glattard
Subject: Re: NSSound
Date: Fri, 05 Jun 2009 10:29:46 +0200
User-agent: Thunderbird 2.0.0.16 (X11/20080707)
Stefan Bidigaray wrote:
I think I was unclear on how I was storing the data...
On Thu, Jun 4, 2009 at 2:04 AM, Fred Kiefer <address@hidden> wrote:
I think that in almost all cases using NSData is better than storing a
pointer. That way you only have to worry once (when creating the NSData
object) about who is responsible for cleaning up afterwards.
I'll go ahead and use an NSData object to store the data, but to make my
life easier on the playback side I only store raw 16-bit PCM data in the
object. That way I don't have to convert it later, and it is the native
format for most (if not all) sound cards.
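(On Fred's ownership point, for what it's worth, the two obvious ways to
hand a decoded buffer to NSData would look roughly like this; 'wrapSamples',
'pcm' and 'length' are made-up names, a sketch only:)

  #import <Foundation/Foundation.h>
  #include <stdlib.h>

  // Sketch: wrap a malloc'd buffer of decoded 16-bit samples
  // ('length' bytes) in an NSData that then owns the memory question.
  static NSData *wrapSamples(short *pcm, NSUInteger length)
  {
    // Let NSData take its own copy and free ours right away...
    NSData *d = [NSData dataWithBytes: pcm length: length];
    free(pcm);
    return d;
    // ...or transfer ownership and skip the copy:
    //   return [NSData dataWithBytesNoCopy: pcm
    //                               length: length
    //                         freeWhenDone: YES];
  }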
24-bit audio is becoming very common. Anyway, as Fred said, the format
of the data does not have to be handled by NSSound: libsndfile does that
fine. Just put the audio in an NSData and sndfile will convert it for you.
If you use jack (do you?), I think you only have to call sf_read_float
and send the result to jack, which will convert it (again) to a format
supported by the hardware.
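(Roughly what I have in mind, untested; only the libsndfile calls
(sf_open, sf_read_float, sf_close) are real, the function name and the
NSData handling are just for illustration:)

  #import <Foundation/Foundation.h>
  #include <sndfile.h>
  #include <string.h>

  // Let libsndfile decode whatever the file is (WAV, AU, ...) straight
  // to interleaved floats, ready to be handed to jack.
  static NSData *readSoundFile(NSString *path, SF_INFO *info)
  {
    memset(info, 0, sizeof(SF_INFO));
    SNDFILE *snd = sf_open([path fileSystemRepresentation], SFM_READ, info);
    if (snd == NULL)
      return nil;

    sf_count_t items = info->frames * info->channels;
    NSMutableData *data
      = [NSMutableData dataWithLength: items * sizeof(float)];
    sf_read_float(snd, [data mutableBytes], items);
    sf_close(snd);
    return data;
  }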
Writing in different formats should be no concern for NSSound; all we
have to support is writing the data to the pasteboard in a format we are
able to read ourselves.
As for reading, we should rely on NSData and have the other two init
methods just create a suitable NSData object. Whatever happens inside
of -initWithData: is up to you.
So should I leave -dataWithFormat:fileType: out? And keep the raw
reading method (David seems to like the idea)?
As I said, I agree with Fred: NSSound has nothing to do with audio
formats. And I agree with David: this method might be useful ;)
A GSSoundKit might be the right place for it.
http://www.cilinder.be/docs/next/NeXTStep/3.3/nd/
http://www.musickit.org/
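(And on Fred's point about the init methods, I read it as the file and
URL variants reducing to something like the sketch below, with
-initWithData: then being the only place that ever looks at the bytes.
Just a sketch, not the actual implementation; the byReference: handling
is left out:)

  // Sketch: the convenience initialiser only builds an NSData;
  // -initWithData: does all the real work (via libsndfile).
  - (id) initWithContentsOfFile: (NSString *)path byReference: (BOOL)byRef
  {
    NSData *data = [NSData dataWithContentsOfFile: path];

    if (data == nil)
      {
        [self release];
        return nil;
      }
    return [self initWithData: data];
  }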
As I understand it you are using two different libraries: libsndfile to
read the data and OpenAL to play the sound. As either of them may not be
available on the user's system, it is good to have fallbacks for that
case. I think it would be enough to support one file format, though.
Well, since the data in all these formats is the same, all I really have
to worry about is correctly reading the headers. For example, AU can
store uLaw, aLaw and 8-, 16-, 24- and 32-bit PCM in big endian; WAV can
store uLaw, aLaw and 8-, 16-, 24- and 32-bit PCM in little endian. So
really, once I can read WAV, I can also read AU as long as I account for
the header and convert the byte order.
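(For what it's worth, the byte-order step is a one-liner per sample with
the NSByteOrder helper NSSwapBigShortToHost; the function name and
arguments below are made up, a sketch only:)

  #import <Foundation/Foundation.h>

  // Sketch: AU stores PCM big-endian; swap each 16-bit sample
  // to host byte order in place.
  static void swapBigToHost16(short *samples, unsigned count)
  {
    unsigned i;
    for (i = 0; i < count; i++)
      samples[i] = (short)NSSwapBigShortToHost((unsigned short)samples[i]);
  }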
On Debian lenny i386, libsndfile.so is less than 360 KiB while
libgnustep-gui.so is nearly 4 MiB.
If you think it is really important, what about a reduced libsndfile?
One could probably build sndfile with a reduced set of codecs (the LGPL ones).
Thanks for the input. The way ahead is a lot clearer at this point.
I'll leave the back-end (gnustep_sndd, OpenAL or even JACK now) until
after I have a complete, working implementation of the front-end. There
are some things there that need to be analyzed in more detail.
Stefan
Regards,
- Xavier