::| SUBSYNTH - low-level audio synthesis subsystem. modular analog synthesizer in software |
api features overview
OSS - 4Front Technologies
- works under 13 UNIX variants (unices)
- provides a low-level interface to audio and MIDI read/write, and mixer control
- no high-level sound interface - data must be fed to /dev/dsp or /dev/music manually
- interface is through a file descriptor (i.e. you open /dev/dsp)
- API is a series of ioctl, open, read, write, close calls; alternatively you can use the select() call for notifications
- all calls are non-blocking
- all calls get queued (at least for MIDI; need to verify how audio works)
- very flexible
  - read/write audio
  - mixer control
  - send/recv to/from MIDI port
- multichannel support
  - through multiple /dev/dspX files
  - with special hardware that supports N channels of interleaved data in one stream
  - a combination of both

OpenAL - Loki Entertainment, Creative Labs
- supports Win32, UNIX, MacOS X
- high level
  - every sound produced is in 3D
    - Doppler
    - attenuation
  - transforms involved are for the "listener" and the "sound"
    - the listener is a sort of global transform that affects all sounds
    - each individual sound can also be positioned locally
    - ambient sounds need to be rendered at 0,0,0
- low-level interface to audio hardware
  - raw data stream to hardware - use their alSourceQueueBuffers interface
  - uses an intermediate step - makes a copy of your data buffer, then feeds it to hardware
  - automatically does format conversion
  - this data stream is affected by 3D position, Doppler, attenuation
- multichannel support: maybe
  - supported by the 3D API (3D positional stereo, quad, oct speaker support)
  - mono and stereo for sure

OpenML - Khronos Group
- will be available on Win32, IRIX, Linux...
- like OpenGL, but for video/sound
- will cost money
- is really new - not much about audio in their spec... :(

SDL - Simple DirectMedia Layer
- free cross-platform multimedia dev API - intended for games/multimedia
- what it does: video, audio, input events (keyboard, mouse), CD-ROM audio, threads, timers, endian handling
- audio (low-level raw streaming support)
  - audio playback of 8-bit and 16-bit audio, mono or stereo
  - supports format conversion
  - runs independently in a separate thread
  - filled via a user callback mechanism
- multichannel - probably not, they say only mono/stereo support
  - maybe can open multiple /dev/dsp's??

AudioWorks - MultiGen-Paradigm
- costs money :( - a lot of money 8(
- super flexible
- high level
  - 3D environmental audio
  - room parameter effects
  - attenuation, 3D, Doppler
  - too many environmental effects to name
  - file loaders for mono .aiff, and limited .wav support
- low level
  - can set up their awSound objects manually with your own file loader
  - raw data stream to hardware - use their FIFO interface
- multi-speaker (N) support
- runs as a daemon you connect to through a socket
  - allows a dedicated machine to run the audio
- AW was never really designed with streaming in mind. It's a physical sampler model architecture that was based on the Emu-IIIXP. All of the sample data is assumed to exist up front.

CODE
start the audio system:
// in OpenAL
void* dev = alcOpenDevice( NULL );
int attrlist[] = { ALC_FREQUENCY, 22050, ALC_INVALID };
int contextId = alcCreateContext( dev, attrlist );
alcMakeContextCurrent( contextId );

// in AudioWorks (it uses the term "observer" instead of "listener")
awOpenAWD("");
awOpenEP(0, AWEP_SHARE);   // multiple users can use AudioWorks at the same time
awEPReset();
awEPFlush();
awCloseEP();
awCloseAWD();
awInitSys();

engine = awNewEng();
awAttachEng( engine );
channel = awNewChan();
awChanEng( channel, engine );                // attach channel to engine
awProp( channel, AWCHAN_MODEL, AWIF_QUAD );  // set QUAD sound imaging model
awProp( channel, AWCHAN_NVOICES, 16 );       // set polyphony to 16
awProp( channel, AWCHAN_ENABLE, AW_ON );
scene = awNewScene();
environment = awNewEnv();
awProp( environment, AWENV_SOS, 330.0 );     // set speed of sound
observer = awNewObs();
awProp( observer, AWOBS_STATE, AW_ON );      // enable the observer
awAddObsChan( observer, channel );           // attach observer to channel
awObsScene( observer, scene );               // attach observer to scene
awObsEnv( observer, environment );           // attach observer to environment
awConfigChan( channel );
awConfigSys( 0 );

load a sound:
// OpenAL has the concept of buffers and sources.
// Buffers are closely tied to the waveform data.
// A source is an object with more attributes and functionality that operates on a buffer.
ALuint buffer;
ALuint source;

// now load the .wav file into a char array called "data" (both header and data)
alGenBuffers( 1, &buffer );
alBufferData( buffer, AL_FORMAT_WAVE_EXT, data, data_size, 0 );
alGenSources( 1, &source );
alSourcei( source, AL_BUFFER, buffer );

// AudioWorks supports .aiff, and some .wav files (1 channel, 11025 Hz)
awWave wave;
awName( &wave, filename );           // set the .aifc or .wav filename
awLoadWav( &wave );                  // load the file data
awMapWavToSE( wave, mEngine );       // attach the data to the engine
awFlushWavToSE( wave );              // commit changes to the engine
sound = awNewSnd();                  // define a new sound
awSndWave( sound, wave );            // attach sound to wave data
awMakeSnd( sound );
awProp( sound, AWSND_ENABLE, AW_ON );   // enable the sound
awProp( sound, AWSND_EXPUNGE, AW_OFF ); // don't delete it after play

// a player allows us to position the sound
player = awNewPlyr();
awAddPlyrSnd( player, sound );       // attach player to sound
awAddSceneSnd( scene, sound );       // add sound to the scene

unload a sound:
// in OpenAL
alDeleteSources( 1, &source );
alDeleteBuffers( 1, &buffer );

// in AudioWorks
awRemSceneSnd( scene, sound );       // detach sound from scene
awUnMapWavToSE( wave );              // detach wave from engine
awDelete( sound );
awDelete( wave );

trigger a sound:
// in OpenAL
alSourcePlay( source );

// in AudioWorks
awProp( sound, AWSND_STATE, AW_ON );

change position on a sound:
In OpenAL, sound position can be set in two ways: each individual sound may be transformed, or a global transform can be set. The global transform represents the human listener.

// modify the position of an individual sound
float pos[3] = { x, y, z };
alSourcefv( source, AL_POSITION, pos );

// modify the global "listener" transform
ALfloat position[3];
ALfloat orientation[] = { forward[0], forward[1], forward[2],
                          up[0], up[1], up[2] };
alListenerfv( AL_POSITION, position );
alListenerfv( AL_ORIENTATION, orientation );

// in AudioWorks
float xyz[3] = { 0.0f, 0.0f, 0.0f };
float hpr[3] = { 0.0f, 0.0f, 0.0f };
awXYZHPR( sound, xyz, hpr );

frame function:
OpenAL is multithreaded, so it doesn't need to be driven manually by an update or frame function. When you call an OpenAL method, OpenAL's own thread takes care of your request.

AudioWorks requires that you call its frame function with the current running time. This tells AudioWorks how much audio data to process each frame.

awFrame( total_time_elapsed );

shutdown audio system:

// in OpenAL
alcDestroyContext( mContextId );
alcCloseDevice( mDev );

// in AudioWorks
awUnConfigChan( mChannel );
awRemObsChan( observer, channel );   // detach
awObsScene( observer, NULL );        // detach
awObsEnv( observer, NULL );          // detach
awDelete( observer );
awDelete( environment );
awDelete( scene );
awDelete( channel );
awDetachEng( engine );
awDelete( engine );
awExit();