::| SUBSYNTH

low-level audio synthesis subsystem. modular analog synthesizer in software

 : requirements.specification : 

requirements specification and random design notes

initial brainstorm

subsynth (low-level synth subsystem, modular analog synthesizer in software)

uses the audio abstraction layer to communicate with the sound hardware, and is thus completely portable and can change sound APIs at run time (so the user can experiment with quality and latency differences).

Figure 1. A typical synth network. Inspired by modular analog synthesizers: Terminals connect Modules together to form a sound graph.

signal modules
- what is it?
   - has inputs (signal consumers)
   - has outputs (signal producers or "sources")
   - inputs and outputs all share a common, well-known data type (see the 
     sketch after this list)
   - takes input from any other module's output
   - filters it in some way
- what are some typical modules?
   - filters
   - sources
   - mixers/routers
   - passthroughs
   - meta module, composed of many...
- mono filters:
   - reverb, echo, delay, chorus
   - volume (controlled by an envelope controller, or any other signal source)
   - stereo filter (like pan) would be a special case "stereo signal source" 
     pluggable only into a mixer, or other "stereo signal source" filters
- multi filters (stereo and beyond; a container for multiple signal inputs and outputs)
   - stereo versions of the mono filters (slower to run of course)
   - take 2 or more mono signal source inputs
   - provides 2 or more mono signal source outputs
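
a minimal sketch of what the common module interface could look like in C++
(Module, SignalBuffer, process, and the terminal names are hypothetical,
for illustration only, not the actual subsynth API):

   // hypothetical sketch, not the actual subsynth API
   #include <map>
   #include <string>
   #include <vector>

   typedef std::vector<float> SignalBuffer;  // the common well-known data type

   class Module
   {
   public:
      virtual ~Module() {}

      // read one buffer from each connected input terminal, filter it
      // in some way, and refill each output terminal.
      virtual void process( int numFrames ) = 0;

      // terminals are named, so any module's output can feed any input
      void connectInput( const std::string& name, SignalBuffer* src )
         { mInputs[name] = src; }
      SignalBuffer* getOutput( const std::string& name )
         { return &mOutputs[name]; }

   protected:
      std::map<std::string, SignalBuffer*> mInputs;   // signal consumers
      std::map<std::string, SignalBuffer>  mOutputs;  // signal producers
   };

a stereo/multi filter is then just a module with two or more input and
output terminals.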
   
   
signal sources (brainstorm: where do we get audio input from?):
- what is it?
   - the output from a source/filter or in general... some module/node 
     in the sound graph
   - something that generates a signal.  could be considered a sound
        source in the form of a tone generator, or a control signal
        in the form of an envelope or some other thing manipulated by a user.
   - as a sound sample, tone, or streamed data (voice, webcast)
   - as a controller
      - how the user controls the behaviour of the elements in the synth
         - time/signal based (e.g. on == 1, off == 0; ramp == 
            0, .1, .2, .3, .4, etc.)
         - this allows very flexible control over synth parameters;
           any signal producer can then be used as a controller.
      - stuff that can be controlled:
         - everything
         - signal sources can be controlled through these inputs:
            - volume
            - trigger
            - A, D, S, or R
            - any other signal source defined params.
         - filters
            - will have many custom parameters to connect to: volume, 
              gain, distortion, the bands on an EQ, amount of chorus, etc.
         - meta sources
            - an aggregate mix of sources and filters that make up a larger 
              signal source with many parameters to control (e.g. drum 
              machine, analog synth, mixer, etc...)
            - analog synth module
            - sample player module
            - mixer module
               - basic mixers
               - flexible sound routers (modeled after modern hardware mixers)
                 - send/recv
                 - built in EQ filter
                 - multi channels
                 - volume control, mute, submaster assignment
                 - aux volume controls.
            - other filter modules (aggregate of 2 or more filters)
      - example controllers
         - any signal source (e.g. modulate the panning based on some sine or 
           saw wave)
- brainstorm...
   - from memory (and how does it get into memory?...)
      - from file (streaming, or prebuffered)
      - from network (streaming)
   - procedural sources (tone generators: sine, saw, noise, etc..)
   - scripted sources (envelopes, e.g. ADSR; sketched after this list)
      - envelope (ADSR - attack decay sustain release) specification
   - filters are also sources, usable as input to other things.
   - should we implement controls in dB, or in linear gain? (the two are 
     related by gain = 10^(dB/20), so e.g. -6 dB is about 0.5)
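
to make "any signal producer can be a controller" concrete, here is a rough
sketch of an ADSR envelope written against the hypothetical Module interface
sketched earlier (all names are illustrative; the release stage is
simplified to always fall from the sustain level):

   // hypothetical ADSR envelope module; its output terminal can be
   // plugged into any other module's input (e.g. a tone's volume)
   class AdsrEnvelope : public Module
   {
   public:
      AdsrEnvelope( float a, float d, float s, float r, float sampleRate )
         : mA( a ), mD( d ), mS( s ), mR( r ), mRate( sampleRate ),
           mTime( r ),   // start past the release so idle output is 0
           mOn( false )
      {
         mOutputs["out"];  // create the output terminal
      }

      void trigger( bool on ) { mOn = on; mTime = 0; }

      virtual void process( int numFrames )
      {
         SignalBuffer& out = mOutputs["out"];
         out.resize( numFrames );
         for (int i = 0; i < numFrames; ++i, mTime += 1.0f / mRate)
         {
            if (!mOn)                  // release: fall sustain -> 0
               out[i] = (mTime < mR) ? mS * (1 - mTime / mR) : 0;
            else if (mTime < mA)       // attack: ramp 0 -> 1
               out[i] = mTime / mA;
            else if (mTime < mA + mD)  // decay: fall 1 -> sustain
               out[i] = 1 - (1 - mS) * (mTime - mA) / mD;
            else                       // sustain: hold
               out[i] = mS;
         }
      }

   private:
      float mA, mD, mS, mR, mRate, mTime;
      bool  mOn;
   };

wiring its output into some tone generator's volume parameter would then
look like tone->connectInput( "volume", env->getOutput( "out" ) ).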


introspection
- would be nice to hook a GUI to this network, measure latency, 
  tweak params, make connections/reorganize.


control of the system
- reconfigurable mapper maps input to the current synth network.
  examples of "input" are: 
  - direct calls into the synth API via C++ or python bindings
  - MIDI events from the system's MIDI mapper, for example:
    - from external MIDI hardware
    - generated from software running on the host
  - keyboard keys
  - mouse buttons
  - joystick axes and buttons
  - Gadgeteer 
  - GUI widgets, such as knobs and sliders, can give input.
- this means there could be a reconfigurable input backend (like Gadgeteer);
  a minimal mapping sketch follows this list
  - ability to enable them all at once, individually, or none at all
  - direct calls into the synth of course don't use this sort of adaptor layer.
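
a minimal sketch of what the reconfigurable mapper could look like, reusing
the Module/SignalBuffer sketch from earlier (InputMapper, the event-name
strings, and the one-sample control buffer are all assumptions for
illustration):

   // hypothetical input mapping layer: binds named input events
   // (MIDI, keyboard, GUI widgets, ...) to module input terminals
   class InputMapper
   {
   public:
      // bind an event name like "midi.note.60" or "key.space" to a
      // module's input terminal via a one-sample control buffer
      void bind( const std::string& event, Module* mod,
                 const std::string& terminal )
      {
         mControls[event].resize( 1, 0.0f );
         // std::map nodes are stable, so this pointer stays valid
         mod->connectInput( terminal, &mControls[event] );
      }

      // called by whichever input backend (MIDI mapper, Gadgeteer,
      // keyboard, GUI widget, ...) is currently enabled
      void onEvent( const std::string& event, float value )
      {
         std::map<std::string, SignalBuffer>::iterator it =
            mControls.find( event );
         if (it != mControls.end())
            it->second[0] = value;
      }

   private:
      std::map<std::string, SignalBuffer> mControls;
   };

for example, mapper.bind( "midi.note.60", someModule, "trigger" ) followed
by mapper.onEvent( "midi.note.60", 1.0f ) would gate that module from a
MIDI note-on, with no module code knowing where the event came from.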


resource management 
- looks at the system as a whole, and can rearrange resource usage based
  on the number of sound channels available as reported by the audio 
  abstraction.
- looks at priority on the signal sources and drops low-priority ones when 
  the system is busy (see the sketch after this list).
- schedules streaming audio to ensure lowest latency from disk or from network 
  shares
   - knows when to preload/unload or simply stream based on some system 
     metrics maybe?
- task migration probably not needed since each module is 
  (fairly) deterministic
- prediction: each node does a lookahead in case realtime input doesn't 
  happen.
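
a sketch of the priority-dropping idea (Voice, cullVoices, and the priority
field are assumptions for illustration, not a committed design):

   #include <algorithm>
   #include <vector>

   // a hypothetical active source plus its importance
   struct Voice
   {
      Module* source;
      int     priority;   // higher == more important
   };

   bool byPriority( const Voice& a, const Voice& b )
   {
      return a.priority > b.priority;
   }

   // keep only the 'channels' most important voices, where 'channels'
   // is what the audio abstraction reports; drop the low-priority tail
   void cullVoices( std::vector<Voice>& active, int channels )
   {
      if ((int)active.size() <= channels)
         return;
      std::sort( active.begin(), active.end(), byPriority );
      active.resize( channels );
   }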


pipeline:
- should it be parallelizable? - how much?
  i.e. 
  - filter/source/mixer as 3 processes? 
  - or one process for each instantiated filter/source (could have 1-100)
- a single process may starve the sound hardware because it has too much 
  to compute
  - parallelizing introduces latency, but may keep the sound hardware full to 
    reduce breakup of the datastream (audible pops and clicks); see the note 
    after this list
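
for a sense of scale: with block-based processing, each buffer of N frames 
adds N / sampleRate of latency, so a 512-frame buffer at 44.1 kHz costs 
roughly 11.6 ms; smaller buffers cut latency but leave less headroom before 
the hardware starves and the stream pops.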


sound graph
- could be built on top of subsynth


usability
- what can be done to make all this weird sound lingo accessible to the 
  average non-sound-literate person?
- what types of things will be desired by musicians who don't understand 
  C++ or sound networks?
  

TODO: 
- litsearch (see litsearch.txt)
- why do this
- what is better here than in VSS or in other systems?
- support a software HRTF (3D position) filter in the synth engine
  - how to support hardware HRTF???
    - can be done easily if there are no further filters after the HRTF
- who would be good to help with this project?
- simple sound trigger API built on the synth
  - knows how to schedule a sound to a channel in the sound hardware..
- scenegraph nodes could be built on top of the synth.

