Our goal for this article is to extend the SoundManager from part 6 with an architecture that lets us set up a chain of effects devices. Again, I’m not aiming for production-ready code, but for a simple and illustrative set of classes that demonstrate how to solve the problem at hand with a minimum amount of clutter. If you don’t have a good memory of the SoundManager class from part 6, I recommend re-reading that part before moving on.

The big picture

As we saw in part 7, implementing a graph of connected devices that process audio on a per-sample basis (each class calling a process() method of its connected input(s) for every sample) isn't practical, because of the CPU overhead incurred by the resulting hundreds of thousands of method calls per second.
Instead of going through the effects chain for every sample, we'll do so once per SampleDataEvent. We'll split up our monolithic SoundManager singleton into three parts: an AudioEngine, an IAudioDevice interface, and a SamplePlayer class which implements IAudioDevice.

IAudioDevice declares a single method: function process(leftOutput:Vector.<Number>, rightOutput:Vector.<Number>):void;
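As a complete file, the interface might look roughly like this (a sketch; the comments and the exact layout are mine, and the version in the download may differ slightly):

public interface IAudioDevice
{
    // Fill (or modify) the left and right output buffers,
    // each of which holds BUFFER_SIZE samples.
    function process(leftOutput:Vector.<Number>, rightOutput:Vector.<Number>):void;
}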

The process method receives two Vector.<Number>s of size BUFFER_SIZE, which correspond to the current left and right sample data channels. Different audio devices can either manipulate the contents of these Vectors or overwrite them with new content. For instance, if the IAudioDevice in question were a delay effect, it would expect the output Vectors to already be populated with sound data, and it would add an echo to this data. By contrast, if it were a SamplePlayer, it would treat the output Vectors as an empty stream which it could populate with data as it sees fit.

By convention, IAudioDevices that are effect devices have a member variable of type IAudioDevice, named input. In its process method, an effect device first calls the process method of its connected input device and then does its own processing.
As an example, imagine a sample player connected to a compressor, which goes through a reverb effect and then into the master output. In our setup, the reverb effect's process() method is called once per SampleDataEvent by the AudioEngine, which passes it empty output Vectors of BUFFER_SIZE length. The reverb's process() method immediately calls its input's (i.e. the compressor's) process() method, passing on the empty Vectors. The compressor's process() in turn calls the connected SamplePlayer's process() and passes on the Vectors, which the SamplePlayer then writes into. Once the SamplePlayer's process() returns, we're back in the compressor's process() method, which can now operate on the output Vectors containing the SamplePlayer's data. When the compressor's process() returns, we're back in the reverb unit's process() method, which then operates on the compressed samples in the output Vectors.
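To make that calling order concrete, here's a rough skeleton of what an effect device could look like. The gain effect itself is a made-up example (it isn't part of the download); the point is the input.process() call followed by the device's own processing:

public class SimpleGainEffect implements IAudioDevice
{
    public var input:IAudioDevice;   // the connected upstream device
    public var gain:Number = 0.5;    // hypothetical effect parameter

    public function process(leftOutput:Vector.<Number>, rightOutput:Vector.<Number>):void
    {
        // first let the upstream device fill the buffers...
        input.process(leftOutput, rightOutput);
        // ...then do our own processing on the same buffers
        for (var i:int = 0; i < leftOutput.length; i++)
        {
            leftOutput[i] *= gain;
            rightOutput[i] *= gain;
        }
    }
}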

Note that in a real-life application, you’d probably want to expand the class hierarchy a bit and introduce different base classes for fx processors (which have inputs) vs. units that generate sounds (such as the SamplePlayer). You might also add some bells and whistles such as a bypass flag, which conveniently lets you turn fx processors on and off (you’d achieve this by having a bypassed fx processor call its input’s process() method and then do nothing). For our purposes, I’d rather stick with a few simple classes.
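Just to illustrate the bypass idea, the first lines of a bypassable effect's process() method might read something like this (a sketch only, not part of the download):

if (bypassed)
{
    // pass the signal through untouched
    input.process(leftOutput, rightOutput);
    return;
}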

So the AudioEngine keeps track of the actual Sound instance that's playing and handles its SampleDataEvents. On every SampleDataEvent, it calls process() on its connected master input IAudioDevice, supplying empty left and right output Vectors. When process() returns, it copies the contents of these Vectors into the sample data ByteArray.
The SamplePlayer in this scenario is an IAudioDevice which has registerSound, playSound and playSequence methods like our previous SoundManager. The difference is that instead of having a Sound instance and listening to its SampleDataEvents by itself, it simply updates the output Vectors whenever its process() method is called.
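In outline, the AudioEngine's SampleDataEvent handler might look roughly like this (simplified; the buffer Vectors as member variables and the BUFFER_SIZE constant are assumptions on my part, and the class in the download handles a few more details):

private function sampleDataHandler(event:SampleDataEvent):void
{
    // clear the output buffers
    for (var i:int = 0; i < BUFFER_SIZE; i++)
    {
        leftBuffer[i] = 0;
        rightBuffer[i] = 0;
    }
    // let the device chain fill the buffers
    input.process(leftBuffer, rightBuffer);
    // copy the buffers into the sample data ByteArray, interleaving left and right
    for (i = 0; i < BUFFER_SIZE; i++)
    {
        event.data.writeFloat(leftBuffer[i]);
        event.data.writeFloat(rightBuffer[i]);
    }
}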

Between the AudioEngine and SamplePlayer, there is now room for effect devices, which can be chained together by their input member variables. In this case, the last IAudioDevice in the chain would be set as the AudioEngine’s input, and the first device would be connected to the SamplePlayer.


Implementation

http://philippseifried.com/blog/files/misc/as3_audio_engine.zip contains the implementation of this setup. The SamplePlayer class contains most of the code from the old SoundManager, but note that the sampleDataHandler method has been split between SamplePlayer and AudioEngine: the AudioEngine now clears the output Vectors, calls process() on its input, and then copies the Vectors into the sample data ByteArray.

The zip also contains a class named SimpleLowPassFilter, which is a basic example of an IAudioDevice. It implements a very simple low pass filter, which removes high frequencies by producing output samples that are each the average of several input samples.
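The full class is in the download, but the core of it boils down to something like the following process() method (a moving-average sketch averaging each sample with its predecessor; the window size and exact implementation in the zip may differ):

public function process(leftOutput:Vector.<Number>, rightOutput:Vector.<Number>):void
{
    // let the connected input fill the buffers first
    input.process(leftOutput, rightOutput);

    var previousLeft:Number = 0;
    var previousRight:Number = 0;
    for (var i:int = 0; i < leftOutput.length; i++)
    {
        // average each sample with the previous one to damp high frequencies
        var left:Number = (leftOutput[i] + previousLeft) * 0.5;
        var right:Number = (rightOutput[i] + previousRight) * 0.5;
        previousLeft = leftOutput[i];
        previousRight = rightOutput[i];
        leftOutput[i] = left;
        rightOutput[i] = right;
    }
}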

AudioEngineTest shows how to connect the chain:

// create a SamplePlayer, route it through a low pass filter,
// and connect the filter to the AudioEngine
audioEngine = AudioEngine.instance;
samplePlayer = new SamplePlayer();
var filter:SimpleLowPassFilter = new SimpleLowPassFilter();
filter.input = samplePlayer;
audioEngine.input = filter;

If you wanted to bypass the filter, you’d simply connect the SamplePlayer instead, like so: audioEngine.input = samplePlayer;

Still to come: implementing effect IAudioDevices with parameters hooked up to a GUI.


3 Responses to Realtime audio processing in Flash, part 8: Extending the sound manager with audio effects

  1. Adalberto says:

    Hello Philipp, I have been following your excellent work.
    I have a question and would like your help.
    I need to develop an AIR application that captures the microphone and processes the two channels separately, but SampleDataEvent.SAMPLE_DATA only provides me with monophonic samples (am I correct?). How do I separate them?
    If it is not possible to separate the channels, do you have a solution for capturing the microphone in stereo?

    • Philipp says:

      Hi Adalberto,
      I’ve never had to work with stereo input, and I’m not sure if you can (at least the sample data a Microphone provides is always mono). Since getMicrophone() lets you specify an index, perhaps it’s possible to add two separate event listeners to two virtual mics, representing a stereo input source.

  2. Adalberto says:

    In that case I would have to use two capture sources, which would be the indicated option.
    I already have a transmission system that uses the Flash Media Live Encoder, and it separates the audio channels from a single capture source.
    Do you have any idea how they do it?
