In the first two tutorials of this series on dynamic audio in AS3, we’ve covered pretty much everything that Flash’s realtime sound API offers us. Let’s put all of it to use and benefit humankind by building a little app that will turn your voice into a horrible robot!

Pictured: Code sample 5. (image source: http://www.dieselsweeties.com)

Basically, what we’re going to do is take input from the microphone in a stream of sound samples, try to come up with something interesting to do with the samples, and then send the samples on to the sound card’s output.

Rather than building the app so that the user gets record and play buttons with which they can record a take and play back the processed version, we’ll build a real-time effect that processes and plays back the sound as it comes in. There are two reasons I want to go this route:

First, there’s less chrome involved that adds nothing to the subject (GUI, application state and such).

Second, real-time processing is actually harder and more interesting than processing pre-recorded audio, because it poses the question of how we can make (reasonably) sure that we have received enough input from the mic whenever we’re asked to fill a new output buffer. I’ll go over the intricacies of that in the next two sections – if you want to get right to the part where we do nasty things to your vowels, feel free to skip them and come back to them later.

 

Buffer syncing woes, in theory…

We have a microphone that periodically provides input to a buffer, and sound output which periodically asks us to fill a buffer with new samples. Let’s take a look at why syncing the two isn’t that trivial:

Suppose that, for argument’s sake, your output buffer is 3500 samples large, and the microphone dispatches a new SampleDataEvent whenever it has 3000 new samples available for you. If you start both the output sound instance and the microphone input at the same time, what’s going to happen?

At t=0 (the time unit here is single samples), you’re asked to create a 3500 sample output buffer, but you have collected no input yet! So, let’s say you fill the output buffer with zeros.

At t=3000, 3000 samples come in from the microphone, so you process and store these to use later. At t=3500, the sound card needs 3500 new samples, but you have only collected 3000 from the input so far! Do you wait until the next output cycle (introducing a further 3500 sample lag!), or do you send 500 zero-samples followed by 3000 processed input samples to the output buffer and continue from there?

Let’s say you do the latter: At t=3500, your input buffer is now of size 0. At t=6000, 3000 new samples come in from the input, so you add them to your input buffer (which is now 3K). However, at t=7000, the output buffer is empty again: you need 3500 input samples to fill it, but you only have 3000 available. If you pad the rest with zeros, you’ve introduced stuttering. What’s worse, if you keep going this route (sending to the output whatever samples you have at the time and padding the rest with zeros), the math works out so that the stuttering recurs periodically: at t=7000, your input is at 0 again. At t=9000, 3K samples come from the input, but at t=10500, 3.5K samples are needed again for the output and you only have 3K…
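By the way, the arithmetic above is easy to replay in code. Here’s a quick sketch (in plain JavaScript rather than AS3, since we’re just counting samples) that tracks the input store for input chunks arriving every `inSize` samples against output requests of `outSize` samples, and records every point at which we’d have to pad with zeros:

```javascript
// Simulate buffer fill: the mic delivers inSize samples every inSize ticks
// (first delivery at t = inSize), the output drains outSize samples every
// outSize ticks (first request at t = 0, when both are started together).
// Returns the times at which we come up short and have to pad with zeros.
function simulate(inSize, outSize, until) {
  const underflows = [];
  let stored = 0;
  for (let t = 0; t <= until; t++) {
    if (t > 0 && t % inSize === 0) stored += inSize; // mic delivers a chunk
    if (t % outSize === 0) {                         // output wants outSize samples
      if (stored < outSize) underflows.push(t);      // not enough input: zero-pad
      stored = Math.max(0, stored - outSize);
    }
  }
  return underflows;
}

// simulate(3000, 3500, 10500) → [0, 3500, 7000, 10500]
```

Running it with 3000-sample input chunks and a 3500-sample output buffer reports shortfalls at t = 0, 3500, 7000 and 10500 – exactly the periodic stuttering we just derived by hand.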

The above is what happens when we start both mic input and sound output at the same time, and the output buffer is larger than the input buffer. Let’s look at a case where both are exactly the same size, say 3K:

At t=0, the output buffer needs 3K samples, but you don’t have any yet. At t=3K, 3K samples come in from the mic input, and the output buffer needs 3K new samples. So you process the input, send it to the output, and your input storage remains at 0. At t=6K, the same thing happens again, and 3K samples come in just before the output buffer needs 3K new output samples, again setting the input buffer to size 0.

The “just before” in the last sentence means trouble! What happens if, through some operating system hiccup or timer or hardware inconsistency, the SampleDataEvent that asks for new output suddenly gets called before the SampleDataEvent that provides new input? You get a 3000 sample pause in your output!

Finally, let’s look at a case where the output buffer is smaller than the input buffer size. Let’s say the microphone data comes in every 3500 samples, and your output buffer is 3000 samples in size (and you’re starting both at the same time).

At t=0, you’re expected to send 3000 samples to the output, but you haven’t received any from the mic, so you send 0s. At t=3K, the same thing happens again. At t=3.5K, 3.5K samples come in from the mic, so you process and store them. At t=6K, you need to send 3000 samples to the output, so you take them from your input storage, leaving 500 more samples in store. At t=7K, the mic sends you a further 3.5K samples, so your input buffer is now 4K samples big. At t=9K, the output requests 3K, leaving 1K in your input buffer.

I’ll cut this a little short and give you a table of what happens to the input buffer’s size at subsequent points in time.

t=10.5K → 4.5K (+3.5K came from input)
t=12K → 1.5K (-3K went to output)
t=14K → 5K (in)
t=15K → 2K (out)
t=17.5K → 5.5K (in)
t=18K → 2.5K (out)
The interesting thing actually happens at t=21K. In theory, at this point, you should both receive 3.5K of input samples and supply 3K of output samples! The problem is that, if you’re asked to produce the output before the input arrives (and there’s really no guarantee which will happen first), you are 500 samples short and you’ll have a gap in your playback!
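You can check this whole walkthrough with a few lines of code (plain JavaScript rather than AS3 – the counting is language-agnostic). The `inputFirst` flag models which of the two SampleDataEvents happens to fire first when both land on the same tick:

```javascript
// Track the input store for 3500-sample input chunks (first arrival at
// t = 3500) and 3000-sample output requests (first at t = 0). When input and
// output land on the same tick (t = 21000), inputFirst decides the event
// order -- which is exactly what's not guaranteed in practice.
function underflows(tEnd, inputFirst) {
  const gaps = [];
  let stored = 0;
  for (let t = 0; t <= tEnd; t++) {
    const inDue = t > 0 && t % 3500 === 0;
    const outDue = t % 3000 === 0;
    if (inputFirst && inDue) stored += 3500;
    if (outDue) {
      if (stored < 3000) gaps.push(t); // shortfall: we'd pad with zeros
      stored = Math.max(0, stored - 3000);
    }
    if (!inputFirst && inDue) stored += 3500;
  }
  return gaps;
}

// underflows(21000, true)  → [0, 3000]          (start-up gaps only)
// underflows(21000, false) → [0, 3000, 21000]   (the t=21K collision bites)
```

With input processed first, the only shortfalls are the two start-up gaps at t = 0 and t = 3000; with output processed first, a third gap appears exactly at t = 21000.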

 

… and in practice!

Clearly, if we’re transferring mic input to audio output, we need to accumulate some input and buffer it before we start the output. How much? To be frank, I really don’t know.

For starters, how much sample data will we get from the microphone at once? The answer is, that’s actually inconsistent! On my Mac, I’m getting new mic data every 1024 samples, but just occasionally the mic skips a SampleDataEvent entirely, and the next one comes with 2048. I’ve even seen hiccups where I get 4096 samples at once, although to be fair, I’ve only seen these near the application’s initialization.

Also, I have found no information on whether the 1024 sample interval is a target for all platforms, or whether the input intervals are different depending on your operating system, or even your sound card. If that information isn’t explicitly stated anywhere, it’s probably a good idea to treat it as undefined, and therefore subject to change.

So how do we proceed? I can think of two strategies:

a.) Buffer up an empirically determined amount of input samples before even starting the output, erring on the side of caution. The amount you want to pre-buffer is at least the size of one complete output buffer plus one complete input chunk. Suppose your output buffer is 2048 samples and new data comes in every 1024 samples – you’d buffer up 3072 samples before starting playback. Since we already know that occasionally new mic input only comes in every 2048 samples, let’s be safe and make the buffer at least 4096. Granted, you’re adding 4096/44100 ≈ 93ms of latency between when a sample comes in from the mic’s SampleDataEvent and when it goes into the output sample data (and that’s on top of all the other latencies, including another guest to the party, which is your sound card’s input lag), but at least you’re fairly safe as far as buffer underflows are concerned.

b.) Start with a small safety buffer and monitor for underflows (i.e. instances where you have no input data left and need to pad with zeros). Whenever an underflow occurs, increase the safety buffer’s maximum size and buffer until it is full again before continuing playback. This is sure to stutter in the first few seconds or so, but it should converge on the minimum “safe” latency (where “safe” means safe until something holds up the input that didn’t happen before).
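Here’s a rough sketch of strategy b.) in plain JavaScript (not AS3, but it translates directly – the class and its growth policy are made up for illustration, not taken from any API):

```javascript
// Hypothetical sketch of strategy b: a buffer that grows its safety margin
// every time it underflows, then re-buffers before resuming playback.
class AdaptiveBuffer {
  constructor(initialSafety) {
    this.safety = initialSafety; // minimum stored samples before playback (re)starts
    this.samples = [];
    this.playing = false;
  }
  // called from the mic's SampleDataEvent handler
  push(chunk) {
    this.samples.push(...chunk);
    if (!this.playing && this.samples.length >= this.safety) this.playing = true;
  }
  // called from the output's SampleDataEvent handler; always returns n samples
  pull(n) {
    if (!this.playing) return new Array(n).fill(0); // still re-buffering: silence
    if (this.samples.length < n) {
      // underflow! grow the safety margin and pause until it refills
      this.safety += n;
      this.playing = false;
      const out = this.samples.splice(0, this.samples.length);
      while (out.length < n) out.push(0); // pad this one buffer with zeros
      return out;
    }
    return this.samples.splice(0, n);
  }
}
```

The growth policy here (adding one output buffer’s worth per underflow) is arbitrary; anything that increases the margin monotonically should eventually converge on a safe latency.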

Those are the two avenues I’ve come up with. If you have a different suggestion, please leave a comment!

Personally, I’d argue that route a.) is almost always preferable. It works, it’s easier to implement, and if your objective is low latency input processing in Flash, then I’m afraid you’ve already lost. We might as well make the application perform with as few gaps as we can.

 

Enough already, let’s make some noise!

The following code sample sets up input and output SampleDataEvent handlers and pre-buffers at least the amount of samples specified in the MIN_SAFETY_BUFFER constant (more if MIN_SAFETY_BUFFER is not an integer multiple of the microphone’s input buffer size). In order to make this a complete application, it would be a good idea to handle microphone status events as well, so as to deal with users that don’t allow you to gather microphone input, but we’ll leave that out for now, to keep the example short and simple.

You can actually put the code on the timeline in Flash! Make sure you put on your headphones so you don’t get feedback.

import flash.media.Microphone;
import flash.events.SampleDataEvent;
import flash.media.Sound;

/*
Example 1:
Microphone input to audio output
*/

const BUFFER_SIZE:int = 2048; // output buffer size
const MIN_SAFETY_BUFFER:int = 1024; // minimum collected input before output starts

var outputActive:Boolean = false; // will be set to true once MIN_SAFETY_BUFFER samples have been collected.

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;
mic.setSilenceLevel(0); // you need to set this, or else the mic will stop sending data when it detects prolonged silence!
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

var inputBuffer:Vector.<Number> = new Vector.<Number>(); // buffer in which we'll store input data

var playbackSound:Sound = new Sound();
playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
playbackSound.play();


function micSampleDataHandler(event:SampleDataEvent):void 
{
  while(event.data.bytesAvailable)
  {
    var sample:Number = event.data.readFloat(); // microphone input is mono! 
    inputBuffer.push(sample);
  }
  if (!outputActive && inputBuffer.length >= MIN_SAFETY_BUFFER)
  {
    trace("starting playback!");
    outputActive = true;
  }
}

function soundSampleDataHandler(event:SampleDataEvent):void
{
  var outputBuffer:Vector.<Number>;
  // move samples from input to output buffer:
  if (outputActive)
  {
    // if playback is enabled, take BUFFER_SIZE number of samples from the input buffer...
    outputBuffer = inputBuffer.splice(0, BUFFER_SIZE);
    if (outputBuffer.length < BUFFER_SIZE)
    {
      trace("buffer underflow!");
      while (outputBuffer.length < BUFFER_SIZE) outputBuffer.push(0);
    }
  } else
  {
    // ... otherwise create an empty output buffer of the right size
    outputBuffer = new Vector.<Number>(BUFFER_SIZE);
  }
  
  // process samples and add them to the SampleDataEvent's data.
  for (var i:int=0; i<BUFFER_SIZE; i++)
  {
    var currentSample:Number = outputBuffer[i];
    // do something interesting with the sample here!

    event.data.writeFloat(currentSample); // left channel
    event.data.writeFloat(currentSample); // right channel
  }
}


Near the bottom of the code, inside the output loop, there’s a line where the currentSample variable is set, just before it is written to the ByteArray that is sent to the sound card: var currentSample:Number = outputBuffer[i];

Here would be the best place to do something interesting to each sample.

What should we do? It’s up to you to experiment! Here are a few ideas to get you started:

1.) Add a very short delay (a few milliseconds) with feedback! The result is basically a comb filter with a very metallic sound. One way to implement this is to have another Vector.<Number>, and use it as a queue. For each sample, if the queue has reached the length necessary to begin the delay effect, take the first element off the queue, multiply it with the feedback factor and add it to the current sample. Finish processing the current sample, then add it to the end of the queue.

import flash.media.Microphone;
import flash.events.SampleDataEvent;
import flash.media.Sound;

/*
Example 2:
Delay / comb filter
*/

const BUFFER_SIZE:int = 2048; // output buffer size
const MIN_SAFETY_BUFFER:int = 1024; // minimum collected input before output starts

// buffer used for comb filter effect
var delayQueue:Vector.<Number> = new Vector.<Number>();
var delayLength:int = 500; // length of the delay in samples
var delayFeedback:Number = 0.9; // strength of the delay effect.

var outputActive:Boolean = false; // will be set to true once MIN_SAFETY_BUFFER samples have been collected.

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;
mic.setSilenceLevel(0); // you need to set this, or else the mic will stop sending data when it detects prolonged silence!
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

var inputBuffer:Vector.<Number> = new Vector.<Number>(); // buffer in which we'll store input data

var playbackSound:Sound = new Sound();
playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
playbackSound.play();


function micSampleDataHandler(event:SampleDataEvent):void 
{
  while(event.data.bytesAvailable)
  {
    var sample:Number = event.data.readFloat(); // microphone input is mono! 
    inputBuffer.push(sample);
  }
  if (!outputActive && inputBuffer.length >= MIN_SAFETY_BUFFER)
  {
    trace("starting playback!");
    outputActive = true;
  }
}

function soundSampleDataHandler(event:SampleDataEvent):void
{
  var outputBuffer:Vector.<Number>;
  // move samples from input to output buffer:
  if (outputActive)
  {
    // if playback is enabled, take BUFFER_SIZE number of samples from the input buffer...
    outputBuffer = inputBuffer.splice(0, BUFFER_SIZE);
    if (outputBuffer.length < BUFFER_SIZE)
    {
      trace("buffer underflow!");
      while (outputBuffer.length < BUFFER_SIZE) outputBuffer.push(0);
    }
  } else
  {
    // ... otherwise create an empty output buffer of the right size
    outputBuffer = new Vector.<Number>(BUFFER_SIZE);
  }
  
  // process samples and add them to the SampleDataEvent's data.
  for (var i:int=0; i<BUFFER_SIZE; i++)
  {
    var currentSample:Number = outputBuffer[i];
    // delay effect / comb filter:

    // if the delay queue has reached its target length, take the sample at the beginning and 
    // mix it with currentSample
    if (delayQueue.length > delayLength) 
    {
      var delayedSample:Number = delayQueue.shift();
      currentSample += delayedSample*delayFeedback;
    }

    // push current sample to the delay queue's end
    // note: we're adding the already processed sample back into the queue, so we get
    // feedback of feedback, etc.
    delayQueue.push(currentSample);

    event.data.writeFloat(currentSample); // left channel
    event.data.writeFloat(currentSample); // right channel
  }
}


2.) Perform frequency shifting! Frequency shifting is actually really easy to do – just multiply the samples with a sine wave. (The rather technical explanation for why this works is that multiplication in the time domain corresponds to convolution in the frequency domain, and the spectrum of a sine wave is a pair of ideal impulses at plus and minus its frequency – so the multiplication produces copies of the input spectrum shifted by that amount. Hopefully I’ll be able to make this clearer in a future article!)
Frequency shifting adds non-harmonic content and gives a pretty cool metallic character to human speech. Note that it’s different from pitch shifting, which multiplies all the frequencies that make up a given sound by a constant, whereas frequency shifting adds a constant offset to all the frequencies in a sound.
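You can convince yourself of the shifting with the product-to-sum identity sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)]: a 440 Hz tone multiplied by a 75 Hz sine turns into energy at 365 Hz and 515 Hz. A quick numerical check (in plain JavaScript rather than AS3 – the math is identical):

```javascript
// Multiplying by a sine (ring modulation) turns each input frequency f into
// the pair f - fShift and f + fShift, per sin(a)sin(b) = (cos(a-b) - cos(a+b))/2.
function ringMod(fIn, fShift, t) {
  return Math.sin(2 * Math.PI * fIn * t) * Math.sin(2 * Math.PI * fShift * t);
}
function shiftedPair(fIn, fShift, t) {
  return 0.5 * (Math.cos(2 * Math.PI * (fIn - fShift) * t)
              - Math.cos(2 * Math.PI * (fIn + fShift) * t));
}

// The two expressions agree at every sample instant:
for (let i = 0; i < 1000; i++) {
  const t = i / 44100;
  if (Math.abs(ringMod(440, 75, t) - shiftedPair(440, 75, t)) > 1e-9) {
    throw new Error("identity violated at t = " + t);
  }
}
```

This also explains the “metallic” character: because every frequency is shifted by both +75 Hz and −75 Hz, the harmonics of a voice no longer line up at integer multiples.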

import flash.media.Microphone;
import flash.events.SampleDataEvent;
import flash.media.Sound;

/*
Example 3:
Frequency shifter
*/

const BUFFER_SIZE:int = 2048; // output buffer size
const MIN_SAFETY_BUFFER:int = 1024; // minimum collected input before output starts

var freqShiftPhase:Number = 0; // phase of the sine wave used for frequency shifting
var freqShiftDeltaPhase:Number = 75*2*Math.PI/44100; // phase increase per sample

var outputActive:Boolean = false; // will be set to true once MIN_SAFETY_BUFFER samples have been collected.

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;
mic.setSilenceLevel(0); // you need to set this, or else the mic will stop sending data when it detects prolonged silence!
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

var inputBuffer:Vector.<Number> = new Vector.<Number>(); // buffer in which we'll store input data

var playbackSound:Sound = new Sound();
playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
playbackSound.play();


function micSampleDataHandler(event:SampleDataEvent):void 
{
  while(event.data.bytesAvailable)
  {
    var sample:Number = event.data.readFloat(); // microphone input is mono! 
    inputBuffer.push(sample);
  }
  if (!outputActive && inputBuffer.length >= MIN_SAFETY_BUFFER)
  {
    trace("starting playback!");
    outputActive = true;
  }
}

function soundSampleDataHandler(event:SampleDataEvent):void
{
  var outputBuffer:Vector.<Number>;
  // move samples from input to output buffer:
  if (outputActive)
  {
    // if playback is enabled, take BUFFER_SIZE number of samples from the input buffer...
    outputBuffer = inputBuffer.splice(0, BUFFER_SIZE);
    if (outputBuffer.length < BUFFER_SIZE)
    {
      trace("buffer underflow!");
      while (outputBuffer.length < BUFFER_SIZE) outputBuffer.push(0);
    }
  } else
  {
    // ... otherwise create an empty output buffer of the right size
    outputBuffer = new Vector.<Number>(BUFFER_SIZE);
  }
  
  // process samples and add them to the SampleDataEvent's data.
  for (var i:int=0; i<BUFFER_SIZE; i++)
  {
    var currentSample:Number = outputBuffer[i];
    
    freqShiftPhase += freqShiftDeltaPhase;
    currentSample *= Math.sin(freqShiftPhase);
    
    event.data.writeFloat(currentSample); // left channel
    event.data.writeFloat(currentSample); // right channel
  }
}


3.) Take the first two ideas and add low frequency oscillators to them. If you modify the delay time of the comb filter (code example 2) with an LFO, you get your basic flanger! If you modify the frequency of the sine wave you multiply the input with (code example 3), you get a bouncy, cartoony effect, of which I’m not quite sure whether it actually has a name.
The following code produces a flanger effect by using the LFO to move back and forth the index of the delay-queue sample that we mix back in as feedback. Note the somewhat dirty, distorted sound of the effect: that’s because we perform no interpolation whatsoever. In the future, I’ll write a separate tutorial on flanging and chorus effects in which we’ll clean this up, but this is Robot Building 101, so a little dirt won’t hurt us!

import flash.media.Microphone;
import flash.events.SampleDataEvent;
import flash.media.Sound;

/*
Example 4:
Adding an LFO to produce flanging
*/

const BUFFER_SIZE:int = 2048; // output buffer size
const MIN_SAFETY_BUFFER:int = 1024; // minimum collected input before output starts

// buffer used for comb filter effect
var delayQueue:Vector.<Number> = new Vector.<Number>();
var delayLength:int = 100; // length of the delay in samples
var delayFeedback:Number = 0.8; // strength of the delay effect.
// lfo to transform the comb filter into a flanger
var lfoPhase:Number = 0; // phase of the LFO
var lfoDeltaPhase:Number = 0.5*2*Math.PI/44100; // phase increase per sample
var lfoModStrength:Number = 20; // strength of the lfo, given in +/- sample offset

var outputActive:Boolean = false; // will be set to true once MIN_SAFETY_BUFFER samples have been collected.

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;
mic.setSilenceLevel(0); // you need to set this, or else the mic will stop sending data when it detects prolonged silence!
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

var inputBuffer:Vector.<Number> = new Vector.<Number>(); // buffer in which we'll store input data

var playbackSound:Sound = new Sound();
playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
playbackSound.play();


function micSampleDataHandler(event:SampleDataEvent):void 
{
  while(event.data.bytesAvailable)
  {
    var sample:Number = event.data.readFloat(); // microphone input is mono! 
    inputBuffer.push(sample);
  }
  if (!outputActive && inputBuffer.length >= MIN_SAFETY_BUFFER)
  {
    trace("starting playback!");
    outputActive = true;
  }
}

function soundSampleDataHandler(event:SampleDataEvent):void
{
  var outputBuffer:Vector.<Number>;
  // move samples from input to output buffer:
  if (outputActive)
  {
    // if playback is enabled, take BUFFER_SIZE number of samples from the input buffer...
    outputBuffer = inputBuffer.splice(0, BUFFER_SIZE);
    if (outputBuffer.length < BUFFER_SIZE)
    {
      trace("buffer underflow!");
      while (outputBuffer.length < BUFFER_SIZE) outputBuffer.push(0);
    }
  } else
  {
    // ... otherwise create an empty output buffer of the right size
    outputBuffer = new Vector.<Number>(BUFFER_SIZE);
  }
  
  // process samples and add them to the SampleDataEvent's data.
  for (var i:int=0; i<BUFFER_SIZE; i++)
  {
    var currentSample:Number = outputBuffer[i];
    // flanger

    // if the delay queue has reached its target length, take the sample at the beginning and 
    // mix it with currentSample
    if (delayQueue.length > delayLength) 
    {
      delayQueue.shift(); // remove first sample
      
      lfoPhase += lfoDeltaPhase;
      var delayedSample:Number = delayQueue[ Math.floor( Math.sin(lfoPhase)*lfoModStrength )+lfoModStrength ];
      
      currentSample += delayedSample*delayFeedback;
    }

    // push current sample to the delay queue's end
    delayQueue.push(currentSample);

    event.data.writeFloat(currentSample); // left channel
    event.data.writeFloat(currentSample); // right channel
  }
}


Now put all of these together! Note what happens when you change the order of operations (it should have an effect: the frequency shifter multiplies by a time-varying sine, so it doesn’t commute with the delay)! Experiment and find other ways to put dings and dents into the input material! What happens to the sound when you apply a non-linear function to each sample, such as taking its cube (hint: distortion!)? What happens to the sound when you change your frequency shifting implementation to use a triangle wave instead of a sine (hint: I have no idea, go try it out!)?
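As a teaser for the cubing idea: the identity sin³(x) = (3·sin(x) − sin(3x))/4 shows that cubing a sine adds a brand-new component at three times the input frequency – that’s the harmonic content you hear as distortion. A quick sketch (plain JavaScript rather than AS3):

```javascript
// A cubing waveshaper: the simplest distortion. The trig identity
// sin^3(x) = (3*sin(x) - sin(3x)) / 4 shows that cubing a sine adds a
// component at three times its frequency -- new harmonic content.
function cube(sample) { return sample * sample * sample; }

function thirdHarmonicForm(x) { return (3 * Math.sin(x) - Math.sin(3 * x)) / 4; }

// Check the identity numerically over one full cycle:
for (let i = 0; i < 1000; i++) {
  const x = (2 * Math.PI * i) / 1000;
  if (Math.abs(cube(Math.sin(x)) - thirdHarmonicForm(x)) > 1e-9) {
    throw new Error("identity violated at x = " + x);
  }
}
```

In the AS3 examples, that would simply be `currentSample = currentSample*currentSample*currentSample;` at the processing step. Cubing also conveniently preserves the sign and keeps samples within [−1, 1].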

I’ll leave you with another combination of the above ideas, just as another starting point. The final code sample combines a flanger with a frequency shifting effect. Let’s hope it never achieves sentience!

import flash.media.Microphone;
import flash.events.SampleDataEvent;
import flash.media.Sound;

/*
Example 5:
Hacking away...
*/

const BUFFER_SIZE:int = 2048; // output buffer size
const MIN_SAFETY_BUFFER:int = 1024; // minimum collected input before output starts

// buffer used for comb filter effect
var delayQueue:Vector.<Number> = new Vector.<Number>();
var delayLength:int = 800; // length of the delay in samples
var delayFeedback:Number = 0.8; // strength of the delay effect.
// lfo to transform the comb filter into a flanger
var lfoPhase:Number = 0; // phase of the LFO
var lfoDeltaPhase:Number = 20*2*Math.PI/44100; // phase increase per sample
var lfoModStrength:Number = 100; // strength of the lfo, given in +/- sample offset

var freqShiftPhase:Number = 0; // phase of the sine wave used for frequency shifting
var freqShiftDeltaPhase:Number = 1800*2*Math.PI/44100; // phase increase per sample


var outputActive:Boolean = false; // will be set to true once MIN_SAFETY_BUFFER samples have been collected.

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;
mic.setSilenceLevel(0); // you need to set this, or else the mic will stop sending data when it detects prolonged silence!
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, micSampleDataHandler);

var inputBuffer:Vector.<Number> = new Vector.<Number>(); // buffer in which we'll store input data

var playbackSound:Sound = new Sound();
playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, soundSampleDataHandler);
playbackSound.play();


function micSampleDataHandler(event:SampleDataEvent):void 
{
  while(event.data.bytesAvailable)
  {
    var sample:Number = event.data.readFloat(); // microphone input is mono! 
    inputBuffer.push(sample);
  }
  if (!outputActive && inputBuffer.length >= MIN_SAFETY_BUFFER)
  {
    trace("starting playback!");
    outputActive = true;
  }
}

function soundSampleDataHandler(event:SampleDataEvent):void
{
  var outputBuffer:Vector.<Number>;
  // move samples from input to output buffer:
  if (outputActive)
  {
    // if playback is enabled, take BUFFER_SIZE number of samples from the input buffer...
    outputBuffer = inputBuffer.splice(0, BUFFER_SIZE);
    if (outputBuffer.length < BUFFER_SIZE)
    {
      trace("buffer underflow!");
      while (outputBuffer.length < BUFFER_SIZE) outputBuffer.push(0);
    }
  } else
  {
    // ... otherwise create an empty output buffer of the right size
    outputBuffer = new Vector.<Number>(BUFFER_SIZE);
  }
  
  // process samples and add them to the SampleDataEvent's data.
  for (var i:int=0; i<BUFFER_SIZE; i++)
  {
    var currentSample:Number = outputBuffer[i];

    
    if (delayQueue.length > delayLength) 
    {
      delayQueue.shift(); // remove first sample
      
      lfoPhase += lfoDeltaPhase;
      var delayedSample:Number = delayQueue[ Math.floor( Math.sin(lfoPhase)*lfoModStrength )+lfoModStrength ];
      
      currentSample += delayedSample*delayFeedback;
      delayQueue.push(currentSample);

      freqShiftPhase += freqShiftDeltaPhase * Math.sin(lfoPhase*0.4);
      
      currentSample = currentSample*0.6 + 0.4*Math.sin(freqShiftPhase)*currentSample;
      
    } else delayQueue.push(currentSample);


    event.data.writeFloat(currentSample); // left channel
    event.data.writeFloat(currentSample); // right channel
  }
}


 

9 Responses to Realtime audio processing in Flash, part 3: Crush all humans!

  1. pandu says:

    this is very good tutorial ever.. never found better than it.

  2. Martin says:

    Is that possible to change that frequency f.e. low up, midd down, high up? I would like to get this with 10 or 15 different Hz to get hall effect, or concert effect or more. Is that possible? Thanks in advance.

  3. Philipp says:

    Right now what you have is a single, unfiltered delay. If I understand you correctly, you’re looking for a reverb effect? As a start toward that, you could combine several delay effects with different delay times and feedback. To get closer to a natural reverb sound, you probably also want to apply different low-pass or high-pass filters to these echoes, something I’d like to get to in a future tutorial.

    • Martin says:

      Thanks for fast reply.

      I would like to get something else.
      I would like to get EQ band corrector to set some bands (f.e. 10 different frequences in hearing range from 125Hz to 8000Hz) volume level up and for some bands some volume level down.

      something like this:

      band125hz.volume +=1;
      band200hz.volume +=5;
      band400hz.volume +=0;

      band800hz.volume +=1;

      I would like volume up and down by frequencies.

      • Philipp says:

        Sorry for the late reply, I only saw your last comment now. As it turns out, writing an EQ is quite a few steps more advanced than anything I’ve covered so far. Even if I keep continuing this series of posts long enough to eventually get there, it’ll be a long time until that happens.

        I suggest you look around http://www.kvraudio.com/forum/ – I’m pretty sure they’ll have a thread on that topic which will either give you some code or at least some links to start with.

  4. Andre says:

    Hi Philip,

    How are u?

    I need a little help.

    It’s possible use this effect while playing a Mp3 file?

    Best regards.

  5. Philipp says:

    I haven’t had to deal with streaming mp3 files yet, but for an mp3 that you’ve completely loaded, the adaptations to the code in the article should be pretty straightforward: Extract the mp3’s data into a ByteArray (via Sound.extract()), then fill up the inputBuffer Vector with the contents of the ByteArray (instead of filling the Vector in micSampleDataHandler()).

  6. Brandon says:

    Hey Philip,
    Excellent posts, one thing I am struggling with, I would like to adapt the code so that it can apply processing to a sinewave that is being dynamically generated in AS3. I know it is probably simple but I just can’t wrap my head around passing the output of the sine wave into the buffer instead of using the microphone….thanks!

  7. Philipp says:

    Hey Brandon!
    Have you read part 2 of the series? At the end of that article, there’s code that produces a sine wave. Maybe take that code and have a look at “Example 3: Frequency shifter” from this post, and see if you can figure out how to combine the two. That should be a good starting point?
