In this part of my series on dynamic audio in ActionScript, we’ll discuss a simple sound manager, built on Flash 10’s sound API, that can seamlessly mix and string together different parts of a soundtrack.

This will give us the ability to splice together musical pieces at runtime, which helps conserve file size (think of a verse-chorus-verse-chorus-solo-chorus-chorus type song: it is composed of only 3 distinct parts, but loaded from a single MP3 file it would be about double the combined length of those parts). It will also lay the foundation for part 8, in which we’ll start extending the sound manager with real-time audio effects.

Please note that the class presented in this tutorial is not intended to be production-ready code (for example, there are no facilities to stop a running sound). If you need a complete sound manager to use in your Flash game, you’ll either have to look elsewhere or write it yourself, building on the principles discussed in this series of articles. This means you’ll need to be able to read and thoroughly understand the code in this tutorial and then write your own, depending on the requirements of your project. I’ll do my best to explain everything, but please don’t expect me to implement any features for you.


Overview and features

You can download a zip of the complete project here: http://www.philippseifried.com/blog/files/misc/as3_sound_manager.zip. The zip contains three important files: the SoundManager class, a SoundManagerTest class, and a .fla containing a few sound assets and using SoundManagerTest.as as its document class. It also includes the source AIF audio files, in case you’d rather compile from Flash Builder or another environment. Note that the sound assets in the .fla are uncompressed – if you let Flash compress them as MP3s, it will add a bit of silence at their ends, which will be noticeable when chaining them together or looping them.

The SoundManager is a Singleton that lets you register sounds by string IDs. Once a sound is registered, you can play it back by calling playSound(id). You can optionally give the playSound method a callback function (plus an array of arguments), which will be called as soon as the sound completes. By chaining together such callback functions, you can play different sounds in sequence or create infinite loops. There is also a playSequence() method, which takes an Array of sound IDs and plays them back to back.

You should be able to gather how to use the SoundManager by looking at the SoundManagerTest class, which registers some sample beats and then plays them back in various sequences.
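For example, a minimal usage sketch might look like this, in frame-script form (VerseSound and ChorusSound are hypothetical embedded Sound classes standing in for your own assets; the actual test assets live in the .fla and SoundManagerTest):

// Register embedded sounds under string IDs.
SoundManager.instance.registerSound("verse", new VerseSound());
SoundManager.instance.registerSound("chorus", new ChorusSound());

// Play a single sound and get notified when it finishes:
SoundManager.instance.playSound("verse", onVerseDone);

function onVerseDone():void
{
    // Starting the next sound from inside the callback keeps playback seamless.
    // Passing onVerseDone as the callback again would create an endless loop.
    SoundManager.instance.playSound("chorus");
}

// Alternatively, hand playSequence an Array of IDs to play back to back:
SoundManager.instance.playSequence(["verse", "chorus", "verse", "chorus"]);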


Implementation – the big picture

To keep things simple and to improve runtime performance, the SoundManager works with pre-extracted Vector.<Number>s, rather than pulling sample data out of Sound instances on the fly whenever it needs it.

This means a higher memory footprint and a relatively high setup cost, because you create and extract all the Sounds you’ll be using at once, when your app initializes. Once that’s done, however, playback carries a little less overhead, since no further samples need to be extracted while your app is up and running. (I’m thinking of applications such as games, which typically have a menu or loading state where intensive work is acceptable, and an in-game state where you want to keep overhead as low as possible.)

Aside from the main SoundManager class, the file SoundManager.as also contains two private classes:

SingletonEnforcer is just an empty class used to enforce SoundManager’s Singleton nature by being required in its constructor (i.e. SoundManager’s constructor needs a SingletonEnforcer, but SingletonEnforcer is only accessible within SoundManager.as, disabling direct instantiation).
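In outline, the pattern looks something like this (a sketch of the idea rather than a copy of the actual file, which obviously contains the rest of the class):

package
{
    public class SoundManager
    {
        private static var _instance:SoundManager;

        public static function get instance():SoundManager
        {
            if (_instance == null)
                _instance = new SoundManager(new SingletonEnforcer());
            return _instance;
        }

        public function SoundManager(enforcer:SingletonEnforcer)
        {
            // Only code in this file can create a SingletonEnforcer, so calling
            // new SoundManager(...) anywhere else is impossible.
        }
    }
}

// Classes declared below the package block are visible only within SoundManager.as.
class SingletonEnforcer {}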

SoundData is a simple structure used by SoundManager to represent registered sounds. When a new Sound is registered by calling SoundManager.instance.registerSound(), the SoundManager creates a SoundData object with the given ID and copies the given Sound’s sample data into the SoundData’s Vector.<Number>s. The SoundData is then added to SoundManager’s registeredSounds Vector.
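Stripped down, SoundData is little more than a bag of public fields along these lines (leftData, rightData and position appear in the explanations below; the remaining field names are illustrative, so check the source for the exact ones):

// Private helper class inside SoundManager.as – one instance per registered
// (or currently playing) sound.
class SoundData
{
    public var id:String;
    public var leftData:Vector.<Number>;   // extracted samples, left channel
    public var rightData:Vector.<Number>;  // extracted samples, right channel
    public var position:int;               // current read index during playback
    public var callback:Function;          // called when playback completes (optional)
    public var callbackArgs:Array;         // arguments passed to the callback (optional)
}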

When a sound is played back by its ID, SoundManager will look for a SoundData object with that ID in the registeredSounds Vector. If it finds one, it makes a copy, adds a callback that will be applied when the sound finishes (if applicable) and sets the playback position. The copy is then added to the activeSounds Vector.

The SoundManager has a single Sound instance named “output”, which is continuously playing. Whenever the output sound needs fresh samples, SoundManager’s sampleDataHandler is called. sampleDataHandler mixes audio data from all SoundData instances in activeSounds together and writes the result into the event’s sound sample data ByteArray. sampleDataHandler is also responsible for checking when a particular SoundData’s playback is complete, in which case it removes the SoundData from activeSounds and calls its callback, if any is set.
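Hooking up such a continuously playing output Sound follows the usual Flash 10 dynamic audio pattern – roughly this, somewhere in SoundManager’s initialization (using flash.media.Sound and flash.events.SampleDataEvent):

output = new Sound();
output.addEventListener(SampleDataEvent.SAMPLE_DATA, sampleDataHandler);
output.play(); // from now on, the player keeps requesting samples via sampleDataHandler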


Implementation – method by method

It could be that the above overview and the comments in the code are all you need for a complete understanding of the SoundManager. In this case, feel free to stop here (although I’d recommend reading at least the explanation of sampleDataHandler).

For the rest of us, let’s take another, more detailed look at SoundManager by going over each of its methods. You may want to look at the explanations here and the source code side by side:

public function registerSound(id:String, sound:Sound):void

This extracts the sound’s sample data into two Vector.<Number>s, one for each stereo channel. This part should be pretty straightforward if you’ve read the other tutorials in this series. The Vectors are then attached to a new SoundData instance with the given ID, which is added to SoundManager’s private registeredSounds Vector.

registeredSounds contains the SoundData objects for all Sounds that have been registered.
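In sketch form, registerSound boils down to something like this (details such as rounding the sample count and how the SoundData is constructed may differ from the actual code):

public function registerSound(id:String, sound:Sound):void
{
    // sound.length is in milliseconds; extract() always delivers 44.1 kHz stereo floats.
    var bytes:ByteArray = new ByteArray();
    var extracted:int = int(sound.extract(bytes, int(sound.length * 44.1)));
    bytes.position = 0;

    var left:Vector.<Number> = new Vector.<Number>(extracted, true);
    var right:Vector.<Number> = new Vector.<Number>(extracted, true);
    for (var i:int = 0; i < extracted; i++)
    {
        left[i] = bytes.readFloat();   // samples are interleaved: left, right, left, ...
        right[i] = bytes.readFloat();
    }

    var data:SoundData = new SoundData();
    data.id = id;
    data.leftData = left;
    data.rightData = right;
    registeredSounds.push(data);
}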

private function getSoundData(id:String):SoundData

This is a private helper function that finds a SoundData instance in registeredSounds, given the SoundData’s id.
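Something along these lines – a simple linear search over registeredSounds:

private function getSoundData(id:String):SoundData
{
    for (var i:int = 0; i < registeredSounds.length; i++)
    {
        if (registeredSounds[i].id == id)
            return registeredSounds[i];
    }
    return null; // no sound registered under this ID
}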

public function playSound(id:String, completionCallback:Function = null, callbackArgs:Array = null):void

Once a sound has been registered, you can use playSound to play it back. playSound fetches the SoundData instance for the given ID, and makes a shallow copy (meaning the cloned SoundData shares a reference to the original SoundData’s sample data Vectors). It then adds the completionCallback to the copy (if there is one) and pushes it onto SoundManager’s activeSounds Vector, which contains all SoundData instances that are currently playing back.

You may be wondering about the line data.position = -currentOutputBufferPosition; I’ll cover that when we discuss the sampleDataHandler method.
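In outline, playSound does something like this (again a sketch – the real method may differ in details):

public function playSound(id:String, completionCallback:Function = null, callbackArgs:Array = null):void
{
    var original:SoundData = getSoundData(id);
    if (original == null)
        return; // nothing registered under this ID

    // Shallow copy: the clone shares the original's sample data Vectors.
    var data:SoundData = new SoundData();
    data.id = original.id;
    data.leftData = original.leftData;
    data.rightData = original.rightData;
    data.callback = completionCallback;
    data.callbackArgs = callbackArgs;

    // Usually 0; negative when this call comes from a completion callback in the
    // middle of a buffer (see the sampleDataHandler section below).
    data.position = -currentOutputBufferPosition;

    activeSounds.push(data);
}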

public function playSequence(ids:Array, completionCallback:Function = null, callbackArgs:Array = null):void

This does the same thing as playSound, but takes an Array of Strings instead of a single ID, playing them in sequence. Once the sequence is complete, the optional completionCallback is called.

playSequence uses the helper method playSequenceHelper_ to recursively string together a series of callbacks, each of which calls playSound with the next callback in the chain. The mechanics of this aren’t important for understanding audio processing, so I’m not going to explain them in more detail.

private function sampleDataHandler(e:SampleDataEvent):void

The real meat of the SoundManager is in this event handler, which is called whenever the output Sound instance needs new sample data. If you haven’t opened the SoundManager class yet, I’d suggest doing so now and comparing the code in sampleDataHandler with the explanations given here. Grab a cup of coffee; this involves a few moving parts at once:

sampleDataHandler works on two Vector.<Number>s, named currentLeftOutput and currentRightOutput, each of which is BUFFER_SIZE samples long (BUFFER_SIZE is a constant defined at the top of SoundManager). We’ll call these the “output Vectors”.

First, sampleDataHandler clears these output Vectors. Then it mixes the relevant parts of all active SoundDatas’ leftData and rightData Vectors into them. Finally, it copies the output Vectors into the SampleDataEvent’s ByteArray. The first and last steps should be pretty obvious by now, so let’s focus on the mixing:

sampleDataHandler mixes sounds one buffer length at a time and one SoundData object at a time.

Since BUFFER_SIZE is very small compared to a sound’s duration, a SoundData object that is currently active could be at its start or anywhere in the middle when sampleDataHandler starts. It could continue for several more calls to sampleDataHandler, or it could finish anywhere between sample 0 and sample BUFFER_SIZE-1 in the current iteration, in which case its callback must be called (if one has been assigned). If the callback starts a new SoundData’s playback, this playback must be synced so that it starts seamlessly after the sample at which the previous SoundData ended.

A bunch of member variables and local variables keep track of everything needed for this setup:

dataPos – The index into the SoundData’s audio data, at which we read while mixing the data into the output Vectors. In the inner loop which performs the mixing (by adding the SoundData’s left and right Vectors to the left and right output Vectors), dataPos is continuously incremented.

outputEndIndex – The index in the output Vectors at which we’ll finish copying. This equals BUFFER_SIZE or the length of the current SoundData’s remaining audio, whichever is shorter.

remainingSamples – The remaining length of the current SoundData, at the start of this iteration of sampleDataHandler (not incremented/updated when the inner loop mixes data).

outputStartIndex and currentOutputBufferPosition – outputStartIndex is the index in the output Vectors at which we start mixing the current SoundData’s audio. currentOutputBufferPosition is a member variable containing the current write index in the output Vectors. Both of these are normally zero, except in the following situation:

Suppose that playback of a SoundData ends in the middle of an output Vector, and suppose that the SoundData has a callback which starts playback of another sound. This second sound needs to be synced so that its playback starts seamlessly at the first sample after the first sound’s end.

When playback of the first SoundData ends, currentOutputBufferPosition is set to the sample index in the output Vector at which we stopped copying data. Now the callback starts, and in turn calls playSound, which creates a new SoundData instance and sets data.position = -currentOutputBufferPosition;

This means that the new SoundData’s sample position at the start of the current sampleDataHandler iteration is a negative number (-currentOutputBufferPosition) – which reflects that the new SoundData is supposed to start somewhere between the current sampleDataHandler’s sample 0 and BUFFER_SIZE-1.

So when the callback returns and we’re back in sampleDataHandler, we go over the remaining active SoundData instances and then arrive at the SoundData that was just created by the callback. This SoundData has a negative position, which means it’s not supposed to start at sample 0 in the output Vector, but further on. The local outputStartIndex variable is now set to the point at which the SoundData starts, and the inner loop that does the mixing goes from outputStartIndex to the outputEndIndex.
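Putting these pieces together, the handler’s control flow looks roughly like this (reconstructed from the description above rather than copied verbatim – consult the actual class for the exact details):

private function sampleDataHandler(e:SampleDataEvent):void
{
    var i:int;
    var j:int;

    // 1) Clear the output Vectors and reset the buffer position.
    for (i = 0; i < BUFFER_SIZE; i++)
    {
        currentLeftOutput[i] = 0;
        currentRightOutput[i] = 0;
    }
    currentOutputBufferPosition = 0;

    // 2) Mix every active SoundData into the output Vectors.
    for (i = 0; i < activeSounds.length; i++)
    {
        var data:SoundData = activeSounds[i];

        // A negative position means "start partway into this buffer".
        var outputStartIndex:int = data.position < 0 ? -data.position : 0;
        var dataPos:int = data.position + outputStartIndex;
        var remainingSamples:int = data.leftData.length - dataPos;
        var outputEndIndex:int = int(Math.min(BUFFER_SIZE, outputStartIndex + remainingSamples));

        for (j = outputStartIndex; j < outputEndIndex; j++)
        {
            currentLeftOutput[j] += data.leftData[dataPos];
            currentRightOutput[j] += data.rightData[dataPos];
            dataPos++;
        }

        if (dataPos >= data.leftData.length)
        {
            // This sound ended inside the current buffer: remember where, remove it,
            // and fire its callback. A sound started from the callback is pushed onto
            // activeSounds and picked up later in this very loop.
            currentOutputBufferPosition = outputEndIndex;
            activeSounds.splice(i, 1);
            i--;
            if (data.callback != null)
                data.callback.apply(null, data.callbackArgs);
        }
        else
        {
            data.position = dataPos; // continue from here in the next call
        }
    }

    // 3) Write the mixed result into the event's ByteArray, interleaved left/right.
    for (i = 0; i < BUFFER_SIZE; i++)
    {
        e.data.writeFloat(currentLeftOutput[i]);
        e.data.writeFloat(currentRightOutput[i]);
    }
}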

If all of this seems complicated, think about how you would implement the edge case yourself: somewhere in the middle of the buffer you’re currently writing, the current sound may end, and a new sound should continue playing right at that point (and that new sound might itself be shorter than the remaining buffer and in turn trigger another sound with its own callback). Hopefully, after a while everything will fall into place.

Next up: a quick look at some performance comparisons between ByteArrays and Vector.<Number>s, and then we’ll rewrite and extend the SoundManager class with a flexible architecture for audio effects.


One Response to Realtime audio processing in Flash, part 6: Building a simple sound manager

  1. Stephan says:

    Hi Philipp,

    First of all, big ups for your great tutorials – this is seriously one of the best Flash/AS3 audio tutorials I’ve seen yet.

    I’m playing around with your SoundManager class, but trying to make it pure AS3. Therefore, before calling registerSound(), I load each sound into a Sound object:


    this.oSound = new Sound();
    this.oSound.addEventListener( Event.COMPLETE, onSoundLoadComplete );
    this.oSound.load( new URLRequest( this.sURL ) );

    // ...

    private function onSoundLoadComplete( e:Event ):void
    {
        // this.oSound.length gives back the correct duration of the sound
        SoundManager.instance.registerSound( this.sURL, this.oSound );
        SoundManager.instance.playSound( this.sURL );
    }

    This creates the output:


    SoundManager: Registering sound with id "sounds/hi_hats.aif".
    SoundManager: playing ID: sounds/hi_hats.aif
    SoundManager: sound "sounds/hi_hats.aif" is ending. Current buffer position: 0

    Problem is, I don’t hear any sound playing; also, the last output line (".. is ending..") appears immediately after starting playback.

    Thanks for your help!
    - Stephan
