iPhone Core Audio Part 3 – Audio Callback

Previous: Setting up the AUGraph.

In the previous two posts, we set up the project and hooked up the audio plumbing. Now we finally get to the actual work. Amazingly, there’s just one function left to write. The audio render callback is the function you provided earlier as the input to the mixer AudioUnit. Whenever the mixer needs new audio input, it calls the render callback, and it is up to you to fill a buffer with audio samples. It is a C function with this specific set of parameters:

  • inRefCon – A pointer to an object that is used to pass in parameters.
  • AudioUnitRenderActionFlags – Indicates special states; we won’t need it here.
  • AudioTimeStamp – Used if you need to synchronize multiple sources.
  • inBusNumber – The specific bus of the Audio Unit that called the function.
  • inNumberFrames – The number of frames of sample data the callback is expected to provide.
  • ioData – An AudioBufferList, which is a struct containing an array of buffers representing sample data and a count of those buffers.
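
For reference, the signature is fixed by the AudioUnit framework’s AURenderCallback typedef; your function has to match it exactly:

typedef OSStatus (*AURenderCallback)(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData);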

Here’s what we’re going to be doing.

  • Getting a pointer “THIS” so we can access AudioController variables.
  • Getting a pointer to the buffer we want to write to (here there is only one buffer; it will be at index [0]).
  • Performing some preliminary setup for the sine wave.
  • Looping through an inNumberFrames length loop, calculating a sine wave and writing sample values to the buffer.
  • Saving any AudioController variables that need to be remembered across calls to the render function.

The actual sine wave generation is fairly simple. The phaseIncrement is the amount of change in phase that a wave of a certain frequency undergoes over the period of one sample. On each pass through the loop the phase advances by one phaseIncrement, and taking the sine of the current phase gives you the amplitude for that sample. In order to make a continuous waveform, the phase needs to be stored so that the next buffer begins where the last one left off. Left alone, the phase would grow toward infinity and eventually overflow the float it is stored in, so it should be wrapped back by whole periods of 2π, which leaves the value of the sine unchanged and introduces no audible discontinuity.
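
To put numbers on that, with the 44,100 Hz sample rate we are using and the 600 Hz tone generated below:

phaseIncrement = 2 * pi * freq / sampleRate
               = 2 * pi * 600 / 44100
               ≈ 0.0855 radians per sample

so one full cycle of the wave spans 44100 / 600 = 73.5 samples.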

The actual sample values are calculated as floats with an amplitude that ranges from -1 to 1. We told the mixer unit to expect signed 16-bit integers, which range from -32768 to 32767, so we need to scale the float value and then cast it to an integer.

  • renderInput can go anywhere inside the @implementation block in AudioController.mm. It just needs to be defined before InitializeAUGraph, where it gets referenced.
// audio render procedure, don't allocate memory, don't take any locks, don't waste time
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
	// Get a reference to the object that was passed with the callback
	// In this case, the AudioController passed itself so
	// that you can access its data.
	AudioController *THIS = (AudioController*)inRefCon;

	// Get a pointer to the dataBuffer of the AudioBufferList
	AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;

	// Calculations to produce a 600 Hz sinewave
	// A constant frequency value; you can pass in a reference to vary this.
	float freq = 600;
	// The amount the phase changes in a single sample
	double phaseIncrement = 2.0 * M_PI * freq / 44100.0;
	// Pass in a reference to the phase value; you have to keep track of this
	// so that the sine resumes right where the last call left off
	float phase = THIS->sinPhase;

	float sinSignal;
	// Loop through the callback buffer, generating samples
	for (UInt32 i = 0; i < inNumberFrames; ++i) { 		

             // calculate the next sample
             sinSignal = sin(phase);
             // Put the sample into the buffer
             // Scale the -1 to 1 values float to
             // -32767 to 32767 and then cast to an integer
             outA[i] = (SInt16)(sinSignal * 32767.0f);
             // calculate the phase for the next sample
             phase = phase + phaseIncrement;
         }
	// Wrap the phase back into the 0 to 2*pi range to keep the float from
	// overflowing. Subtracting whole periods doesn't change sin(phase), so
	// no discontinuity is introduced.
	while (phase >= 2.0 * M_PI) {
		phase = phase - 2.0 * M_PI;
	}
	// Store the phase for the next callback.
	THIS->sinPhase = phase;

	return noErr;
}
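
As a refresher from part 2, this function only runs because it was registered as the input callback for the mixer node when the graph was built, roughly like this (mixerNode stands in for whatever you named the mixer’s AUNode):

// Tell the AUGraph to pull audio for the mixer's input bus 0 from renderInput
AURenderCallbackStruct rcbs;
rcbs.inputProc = &renderInput;
rcbs.inputProcRefCon = self;  // this pointer arrives as inRefCon in the callback

result = AUGraphSetNodeInputCallback(mGraph, mixerNode, 0, &rcbs);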

[EDIT: The stuff below about THUMB only applies to ARMv6 devices (iPhone 3G, iPod touch 2G, and earlier). Apparently, ARMv7 devices are better at handling floating point. I suggest always testing these things for yourself.]

There is one more modification you should make to your project if you plan on doing a large amount of floating-point math. There is a compiler setting for compiling with the THUMB instruction set, and it should be turned OFF. Doing floating-point math with THUMB on causes the ARM processor to flip back and forth between 16-bit and 32-bit states, causing a drastic slowdown and greatly reducing the amount of audio processing you can make the little ARM chip do.

To turn it off, right-click on your target and choose “Get Info,” then select the “Build” section. There are a huge number of options, so it is easiest to use the search box and search for “THUMB”. Make sure that the “Compile for Thumb” line is NOT checked. The setting is stored per build configuration, so either apply it to all configurations at once or to the Debug and Release configurations individually. When the time comes to deploy your application, DON’T FORGET to check and double-check that it is applied there too.

Sign Off.

I was inspired to put this post together by Chris Adamson’s Core Audio Brain Dump which is a great collection of some of the bits of wisdom you need to get your head around to really ‘get’ Core Audio. I really can’t wait until the Core Audio book by Chris and Kevin Avila arrives later this year.

I also wouldn’t have gotten anywhere on VocaForm without Michael Tyson’s post on using the remoteIO AU.

Of course, I have almost certainly done something in these examples that is sub-optimal or just plain wrong, so don’t hesitate to send in corrections and hints.

After all that hard work, why don’t you kick back, relax, and try Dingsaller, my iPad music app.


59 thoughts on “iPhone Core Audio Part 3 – Audio Callback”

  1. This is Awesome! Thanks so much for the tutorial. I got the sound playing however I couldn’t set up the GUI side properly so instead I just created a new audiocontroller object manually.

    What would I need to do to e.g. play a sample using this? I take it I need to somehow read the data from the sample file and feed it into the buffer, but I don’t know where to start.

    • Thanks SupahFly.

      I haven’t done any code that plays samples from disk, VocaForm synthesizes stuff live. I’m thinking about doing another tutorial
      on it after I get it working in my current project.

      Check out the ExtAudioFileRead function to get started. If it’s a short sample you can load the whole thing into a buffer ahead of time. If it’s longer, you’ll have to do some fancy circular buffering to load chunks of it from disk as needed.
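
      Very roughly, and just as an untested sketch (error handling omitted; fileURL, myBufferList and frameCount are placeholder names), loading a short mono file into memory looks something like this:

      ExtAudioFileRef audioFile;
      ExtAudioFileOpenURL((CFURLRef)fileURL, &audioFile);   // fileURL points at the sample file

      // Ask ExtAudioFile to convert to the same 16-bit mono format the mixer expects
      AudioStreamBasicDescription clientFormat = {0};
      clientFormat.mSampleRate       = 44100.0;
      clientFormat.mFormatID         = kAudioFormatLinearPCM;
      clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
      clientFormat.mChannelsPerFrame = 1;
      clientFormat.mBitsPerChannel   = 16;
      clientFormat.mBytesPerFrame    = 2;
      clientFormat.mFramesPerPacket  = 1;
      clientFormat.mBytesPerPacket   = 2;
      ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                              sizeof(clientFormat), &clientFormat);

      // Find out how many frames the file holds, then pull them all in at once
      SInt64 frameCount = 0;
      UInt32 propSize = sizeof(frameCount);
      ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileLengthFrames,
                              &propSize, &frameCount);

      // myBufferList is an AudioBufferList whose mBuffers[0].mData was malloc'd
      // large enough to hold frameCount 16-bit samples
      UInt32 framesToRead = (UInt32)frameCount;
      myBufferList->mBuffers[0].mDataByteSize = framesToRead * sizeof(SInt16);
      ExtAudioFileRead(audioFile, &framesToRead, myBufferList);
      ExtAudioFileDispose(audioFile);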

      • Thanks! I will have a go at it this weekend using your example as a base. I’ll send you a mail if I get something working.

  2. Tim – thanks for this series of posts. I’ve been messing with Core Audio for awhile (and I too relied heavily on Michael Tyson’s post) – and disabling “compile for THUMB” was a trick I didn’t know about — thanks!

    One thing I haven’t been able to get right yet – how do you mix multiple waveforms together? Sure, you can just add the SInt16s – until you reach +/-32767, and clip.

    I’ve talked to all the audio engineers I know, but they keep talking about analog concepts like decibels.

    Do you have a strategy for mixing multiple waveforms into a single end result? Do we need some sort of non-linear addition? Or is clipping a reality we have to deal with by reducing the amplitudes of the source?

      • You have to be careful with waveshaping or any other non-linear processes. They create a lot of additional harmonic content, and it is really easy to make some nasty-sounding aliasing unless you oversample and filter everything above the Nyquist frequency.

        If you don’t want to introduce distortion, just multiply each waveform by a scaling factor. If you have two full-scale waveforms, you’ll need to multiply both by 0.5 to avoid clipping.
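
        In the render loop that’s just something like this (sinSignal1 and sinSignal2 stand for two float signals in the -1 to 1 range):

        // scale each source by 0.5 so the sum still fits in -1 to 1
        float mixed = 0.5f * sinSignal1 + 0.5f * sinSignal2;
        outA[i] = (SInt16)(mixed * 32767.0f);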

  3. I haven’t had much luck. I’m using ExtAudioFileRead to read a file and store it in an AudioBufferList. So far so good. I’m having problems understanding where the ioData in renderInput is coming from.
    I understand the callback setup from pt 2 but I don’t see anywhere that there is a buffer specified. Or is the ioData an empty buffer which needs to be filled manually?

    In any case my thought was that I would just replace the AudioBufferList in renderInput with my buffer from file (the simplest solution). That doesn’t seem to work. Any help here is greatly appreciated

    • ioData is a pointer to a buffer that the output audio unit wants you to fill. It is created by the audio unit when it calls renderInput. It’s not necessarily empty.

      Any information you want to pass into renderInput has to come in through the *inRefCon pointer, which we were using to pass in a reference to the AudioController.

      Make your AudioBufferList a property of the audioController, then you can access it like this.

      AudioController *THIS = (AudioController*)inRefCon;
      AudioBufferList *buffer = THIS->myBuffer;

      Then transfer data from myBuffer to ioData.

      You can pass in whatever parameters you need to control playback in the same way.
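
      A bare-bones sketch of that transfer, assuming mono 16-bit data and a playback position stored on the AudioController (readHead and frameCount are made-up names here):

      SInt16 *fileSamples = (SInt16 *)THIS->myBuffer->mBuffers[0].mData;
      SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

      for (UInt32 i = 0; i < inNumberFrames; ++i) {
          out[i] = fileSamples[THIS->readHead++];
          // wrap around to loop the sample, or write silence here to stop instead
          if (THIS->readHead >= THIS->frameCount) {
              THIS->readHead = 0;
          }
      }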

  4. Got it working now, thanks! Will post an example as soon as I clean it up a bit ;)

    • I’m currently trying to get an implementation of what you were trying working. Could you share your solution?

  5. I found this tutorial really useful.

    I want to write an application to apply some filters to the input then output the result. How do I modify your code to do that?

    • If you want to modify a file off of the disk, you have to load it into a buffer outside of the callback and pass a reference to it via the object you pass in as *inRefCon.

      If you want to modify sound from the microphone you set the kAudioOutputUnitProperty_EnableIO property on the remoteIO unit.
      Then remoteIO will pass the microphone data into your callback with the *ioData buffer.
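
      Turning input on looks roughly like this, done after AUGraphOpen but before AUGraphInitialize (ioUnit stands in for your remoteIO AudioUnit; element 1 is the input/microphone side, element 0 the output side):

      UInt32 flag = 1;
      result = AudioUnitSetProperty(ioUnit,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Input,
                                    1,               // element 1 = microphone input
                                    &flag,
                                    sizeof(flag));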

      • I wanted to modify audio from microphone.
        I am having trouble getting it to work.

        Can you please post a sample code?

    • So does this mean that Thumb doesn’t produce terrible floating point on ARM7? I’ll have to try it on the iPad and see what happens.

      • basically:
        global: thumb = yes
        arm6: thumb = no

        you can assume that arm7+ supports the arm instruction set

  6. Great post man, it’s really challenging to understand CoreAudio from the stuff thats out there but this made getting started pretty straight forward. Thanks

  7. Great articles! Thank you!
    I do check on the return status. Although it works (I got the tone), I got “-10868” errors on the following calls:

    // Apply the CAStreamBasicDescription to the mixer output bus
    result = AudioUnitSetProperty(mMixer, …

    and

    // Apply the modified CAStreamBasicDescription to the output Audio Unit
    result = AudioUnitSetProperty( mMixer, …

    What does that mean? Any pointer is appreciated.
    Thanks in advance for your help.

    • The “Audio Unit Component Services Reference” document lists -10868 as kAudioUnitErr_FormatNotSupported

  8. Tim. Its really great to have a well-written Core Audio reference.

    Unfortunately, I can’t seem to get this example to run with any sound. My configuration is exactly as you described in this tutorial, except I added “-x objective-c++” to the OTHER C FLAGS in the settings, so my file extensions are all “.m” (instead of .mm).

    The source of my problem seems to be:

    // open the graph. AudioUnits are open but not initialized
    result = AUGraphOpen(mGraph);
    //

    which is returning an error code of “-2005”, which I understand is a “bad component” error. This error seems to be the root of others in subsequent calls and is preventing me from entering the renderInputCallback. I have googled a lot to figure out what the cause of this is, to no avail.

    Any advice/tips you can provide are greatly appreciated.

  9. It does not work on Simulator but it works when I download it to the iPhone device. Is this expected? Or Did I miss something that it does not work on Simulator. I am using iPhone SDK 3.1.3

    Thanks.

  10. Tim, this is an excellent resource for unpacking these extremely complex and confusing Audio Unit concepts. Ta much.

  11. Pingback: iOS Development Link Roundup: Part 1 | iOS/Web Developer's Life in Beta

  12. Hi there.

    Have you had much luck manually mixing pcm samples together in the callback? I’ve been attempting this but the callback is taking too long to execute causing no audio to come out on the device. Works fine on the simulator though!

    I pass in a struct containing PCM data from several audio sources and try to mix and apply DSP to the samples in the callback. Have you any advice on how to ensure the callback executes as efficiently as possible?

    • The usual warnings apply: don’t allocate memory, don’t printf or log, don’t take locks, try not to use Obj-C messages.

      I’ve been able to do a pretty reasonable amount of processing in the callback. Mixing, delays, filters, chorus, etc.

      • Hmmmm, yeah thats what I thought. I don’t believe that I’m doing any of these things? Would you mind taking a quick peruse at my callback?

        (I dont know what your policy on posting code in comments is. If its frowned upon feel free to delete )

        static OSStatus playbackCallback (

        void *inRefCon, // A pointer to a struct containing the complete audio data
        // to play, as well as state information such as the
        // first sample to play on this invocation of the callback.
        AudioUnitRenderActionFlags *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence
        // between sounds; for silence, also memset the ioData buffers to 0.
        const AudioTimeStamp *inTimeStamp, // Unused here.
        UInt32 inBusNumber, // The mixer unit input bus that is requesting some new
        // frames of audio data to play.
        UInt32 inNumberFrames, // The number of frames of audio to provide to the buffer(s)
        // pointed to by the ioData parameter.
        AudioBufferList *ioData // On output, the audio data to play. The callback’s primary
        // responsibility is to fill the buffer(s) in the
        // AudioBufferList.
        ) {

        //audio_engine *en = (audio_engine *) inRefCon;
        //audio_engineStructptr *en = (audio_engineStruct*) inRefCon;
        clock_t t = clock();
        AudioUnitSampleType *outSamplesChannelLeft;
        AudioUnitSampleType *outSamplesChannelRight;

        AudioUnitSampleType *dataInLeft;
        AudioUnitSampleType *dataInRight;

        UInt32 stime=(UInt32 ) inTimeStamp->mSampleTime;

        outSamplesChannelLeft = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
        outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

        for (UInt32 frameNumber = 0; frameNumber sampletime++;

        for (int i=0; ichannels[i];

        for (int j=0;jrows[j];
        en->currentbeat = en->sampletime / 22050;

        snd->readpoint=stime % snd->frameCount;

        dataInLeft = snd->audioDataLeft;
        dataInRight = snd->audioDataRight;

        l+=((dataInLeft[snd->readpoint])*snd->sendvalue);
        double dsl=(double)l*ch->volume;
        r+=((dataInRight[snd->readpoint])*snd->sendvalue);
        double dsr=(double)r*ch->volume;

        l=(AudioUnitSampleType) dsl;
        r=(AudioUnitSampleType) dsr;

        }
        }

        outSamplesChannelLeft[frameNumber] =l;
        outSamplesChannelRight[frameNumber] =r;
        stime++;

        }

        //printf("time = %lf\n", (double)(clock() - t) / CLOCKS_PER_SEC);
        return noErr;

        }

  13. Hi

    Dropped myself in the deep end here! I am trying to create a low latency Frequency reading from the microphone. I started down the path using AudioQueues and although I cobbled together a working app the changes in pitch are not detected fast enough. Through research I found I needed to use AudioUnits/RemoteIO and then discovered your tutorial. Many Thanks, I understand things a little bit more.

    Everything compiles ok but now I’m stuck trying to work out how to fill the AudioBufferList from the iPhone microphone, any help?

    Thanks
    Anim

  14. I think that in the function renderInput() it should be:
    double phaseIncrement = 2 * M_PI * freq / 44100.0;
    instead of
    double phaseIncrement = M_PI * freq / 44100.0;
    and
    if (phase >= 2 * M_PI * freq) {
    phase = phase - 2 * M_PI * freq;
    }
    instead of
    if (phase >= M_PI * freq) {
    phase = phase - M_PI * freq;
    }

    • I agree. I changed it in my code and it removed the glitch on odd frequencies. Good catch, thanks.

  15. hi, i tried use a slider to change the freq value (min:400-max:1200).
    Works, but the sound start to overload, i dont know how to explaine…

    float freq = [THIS->slider value];

    i only tried something like tis code.

    How i could change the freq and the waveforms?

    thanks very much for the code!

  16. Hi
    Many thanks for the tutorial. I am slowly starting to understand core audio.

    I implemented this code and it works fine. However, when I change the freq to 601 (or any other ODD frequency) I get a click in the sound at about one second intervals. Any ideas why? Is it a phase glitch in the synthesized waveform?

    Keep up the great work, very helpful. Now I need to understand how to control left and right channels with separate streams and volume controls.

    • I assume it’s some kind of phase glitch; I never actually bothered to fix it because I use a lookup table for oscillators in my real Apps.

  17. What is in your lookup table? A fully synthesized sine wave and pre-calculated sample rates or ? ? How would you ingest it? copy to buffer?

    • I use a 1024 sample length array filled with one cycle of the waveform. You get different pitches by changing the rate at which you read through the table. Assuming a constant sample rate (44,100 Hz) that means changing how many samples you skip in the table per reading. Also, there should be some interpolation so that you can read out non-integral sample values.
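
      A stripped-down sketch of the lookup (sineTable is the 1024-entry float array, filled once at startup with sin(2 * pi * i / 1024); tablePhase is a float you keep across callbacks, just like sinPhase):

      float step = 1024.0f * freq / 44100.0f;   // table positions to advance per output sample

      int   idx  = (int)tablePhase;             // which entry we're on
      float frac = tablePhase - idx;            // how far we are toward the next entry

      // linear interpolation between neighbouring table entries
      float sample = sineTable[idx] + frac * (sineTable[(idx + 1) % 1024] - sineTable[idx]);

      tablePhase += step;
      if (tablePhase >= 1024.0f) tablePhase -= 1024.0f;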

  18. also, best post ‘eVa’.

    would you say it’s too slow to do the sine wave generation this way?

    i’ve applied my amplitude envelope directly within that renderInput call back function and currently have 2 sine waves (to be scaled up)

    would you strongly suggest to use a look up table?

    and if so, would you apply the amp envelope in an intermediate structure before passing to the callback, or still directly in the callback whilst copying each section to the buffer?

    i’m thinkin more on the lines of performance.

    • I’ve never actually measured the performance, for a few sine waves, I’m sure it doesn’t matter.
      What is convenient is that once you have a table lookup oscillator, you can fill it with any waveform you want.

      I usually do the envelope in the same callback with the synthesis.

      • I am trying to figure out how to direct my tone output to the left or right channel only. Any suggestions?
        Ideally I’d like tone 1 to go to output 1 for left channel only and tone 2 to go to output 2 for right channel only . . . simultaneously. Not having success searching the literature yet.
        Earle

  19. cheers for the incredibly quick reply.

    I’m going for additive synthesizer, so it’ll just be sine waves.

    aye, if i notice poor performance or some error i’ll have a look into creating a look up and filling the buffers that way.

    .. and cheers, you’ve helped a great deal.

  20. After a 4 solid day slog, i’ve //almost// got inputcallback ->mixer> iocallback->remoteIO working. Appreciated; i should read the posts properly ;D

    I do have a question though: when implementing the sin calculations, the oscillator doesn’t seem to hold its pitch properly, like it shifts every so often.

    It can’t be a case of buffer size because it writes sequentially unless it’s missing steps, and that would result in extra harmonics, it’s almost as if the sample rate consistency isn’t there?

    I’ve spent a very long time trying to debug it and have started to implement a lookup table…but i know it’s going to tax the iphone if i have to interpolate ~15 sine waves non-linearly.

    Have you experienced anything like this and know of a solution or what it could possibly be?

  21. Hello,
    I tried out this sound code and it works. However, it basically makes one continuous tone. What I am looking to do is create little arcade-type sound effects, galaxianesque bleeps and blips, etc. Do you know of anything I might read to get there from here?

    Am I correct in thinking that you are using sample-playing hardware, but just generating samples that match the waves you want? So there is no sound-channel pure-tone generating capability in the iPad (as opposed to Nintendo DS for example)?

    I have been using .wav and .aif samples for my sounds in my game, just using ” AudioServicesPlaySystemSound (gameSound);” But I was wanting to do stuff like a slider that sounds like a slide whistle and the aforementioned effects. Could I use the code from this post to do something like that?

    Thanks
    Bob

    • Most of the early consoles used a dedicated sound chip that had FM oscillators and noise generators which you should be able to emulate digitally. You might try looking up frequency modulation synthesis to get started.

      The remoteIO lets you do pretty much anything you can think up, you just have to fill up the buffers with the right samples.
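
      As a very rough sketch of the FM idea (every name here is made up; each phase variable is kept across callbacks the same way sinPhase is):

      // a modulator sine wobbles the phase of the carrier sine
      float sample = sin(carrierPhase + modIndex * sin(modPhase));

      carrierPhase += 2.0 * M_PI * carrierFreq / 44100.0;
      modPhase     += 2.0 * M_PI * modFreq / 44100.0;
      // larger modIndex values and different carrier-to-modulator frequency
      // ratios give the classic bleep and blip timbres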

      • I’ve never actually built anything with AudioQueues, but they are supposed to be slightly easier to work with. They use a queue of multiple buffers, which makes them good for situations such as network streaming where the audio data arrives inconsistently. Due to the rotating buffers, they have a higher latency, which makes them unsuitable for the music instrument Apps that I work on. For scheduling game sounds the latency may be acceptable.

      • Thank you. I can’t believe this code required to set this up! It reminds me of how difficult it is to set up Quicktime tracks. You are a great man for writing it out and posting it.

        I varied the freq variable in the render callback and now it’s going up and down, sounds cool. How often does that callback get called? Does it play each buffer once through and then get a new one from you? Or am I misunderstanding how it works?

        Can I ask another question? Well, two. First, the default tone is very loud. In fact, I may use it as a vibrate feature, it actually makes the screen buzz. How do you control that? Is it some thing you change in the mixer setup?

        2) I am also playing another sound separately, using my old AudioServicesPlaySystemSound. It seems like these two interfere? When they play together I am hearing crackling and noise? Can I not mix these two methods? How do you play two separate sounds simultaneously then? Do you have to set up another audio unit or provide another input to the mixer somehow?

        Thanks very much
        Bob

      • The buffers are not reused, the audio hardware continually asks for one, then plays it then asks for the next.

        You can control volume using a mixer parameter or by scaling the samples.

        With digital audio, there is a hard limit on the amplitude of the sound. If you have a full-scale sine wave playing, there is no headroom left. Any additional sounds will cause distortion. You will have to turn down the sine to allow the second source some room, whether that source is a system sound, another sound generated in the same callback, or another callback and mixer channel.

      • With the volume, I added this to the audio unit setup — but I don’t hear any change in volume? The docs say volume ranges between 0 and 1, and I don’t get an error ???

        result = AudioUnitSetParameter ( mMixer, kHALOutputParam_Volume, kAudioUnitScope_Output, 0, .1, 0);

        Thanks
        Bob

      • MultiChannelMixer doesn’t have an output volume control; that parameter is for the output AU.
        You could also use kMultiChannelMixerParam_Volume on the mixer’s input scope to control each channel.
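
        For example, setting the first mixer input to half volume (a sketch using the mMixer variable from the setup code):

        result = AudioUnitSetParameter(mMixer,
                                       kMultiChannelMixerParam_Volume,
                                       kAudioUnitScope_Input,
                                       0,     // input bus number
                                       0.5,   // linear gain, 0.0 to 1.0
                                       0);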

  22. You insert a second callback for your filter between the mixer and the remoteIO. (No custom AU’s in iOS) Connect the filter callback to the remoteIO with AUGraphSetNodeInputCallback() Pass the mixer AU into the callback. Then you can use the AudioUnitRender function from within the filter callback.

    renderErr = AudioUnitRender(mixerAU, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, ioData);
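
    Fleshed out a little (a sketch only; it assumes the mixer AudioUnit was passed in as the refCon when the filter callback was registered):

    static OSStatus filterCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                                   UInt32 inNumberFrames, AudioBufferList *ioData)
    {
        AudioUnit mixerAU = (AudioUnit)inRefCon;

        // Pull the mixed audio out of the mixer and into ioData
        OSStatus renderErr = AudioUnitRender(mixerAU, ioActionFlags, inTimeStamp,
                                             0, inNumberFrames, ioData);
        if (renderErr != noErr) return renderErr;

        // ioData now holds the mixer output; apply the filter to it in place here

        return noErr;
    }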

    • Using your code, could I simply change the kAudioUnitType_Mixer to a kAudioUnitSubType_StereoMixer and add AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData (for right channel) to the existing render procedure? This should give me stereo. I could add outB[i] = (SInt16)( sinSignal * 32767.0f) to fill the buffer right after you fill outA. Guess I need to change the setup to desc.mChannelsPerFrame = 2 as well. What’s missing?

  23. Tim, I am still wrestling with left and right channels and a second tone to add to your example. See my 110525 entry. Can you point me in the right direction? I have been looking at the 101209 entry from dub for clues but I’m not there yet. There is a lot to absorb in core audio.
    Cheers.

  24. Hi Tim,
    I notice that using Core Audio this way disables the playing of iPod music when I start my app. Is there any way to prevent that? Do I have to set up some kind of Audio Unit to mix in the iPod audio?

    Thanks
    Bob

  25. Thank you so much for sharing this work. One bug fix, this code actually produces a 300hz tone (one whole octave lower, thanks to the way music works), because the phaseIncrement equation is missing a ‘2’.

    Change this:
    double phaseIncrement = M_PI * freq / 44100.0;
    to this:
    double phaseIncrement = 2 * M_PI * freq / 44100.0;

    and all is well.

    For those wondering where this equation comes from, it’s just the basic Sine wave equation to determine amplitude from phase (theta):
    amplitude = sine(theta)
    where theta = 2 * PI * freq * time
    and time = numFrames / sampleRate

    So for one sample (numFrames = 1):
    theta = 2 * PI * freq * (1 / 44100.0)

    Hope this is helpful.
    Ian Charnas

  26. [noticed this sometime later]

    Similarly, in renderInput, when you reset the phase value there is the same bug, preventing the phase value from going over PI, and thus preventing the sineSignal value from ever going negative. If you only have positive signal values, that’s the same as a half-rectified waveform, which sounds harsh and whiney when compared to a smooth sinusoidal waveform. Anyways, so the lines should look like this (note the “2” was added):

    // Reset the phase value to prevent the float from overflowing
    if (phase > 2 * M_PI * freq) {
    phase -= 2 * M_PI * freq;
    }

    cheers,
    Ian Charnas

  27. [correction to my previous comment]

    When modifying this code to play an array of notes, I noticed the phase value was reset at the wrong time altogether, resulting in a harsh pop every time the reset occurred. Phase values in a sinusoid have a period of 2 PI, so you should reset the phase counter at an integer multiple of 2 PI. Currently the code is resetting the phase counter at multiples of PI * freq, which is wrong. To eliminate pops, the code should be:

    // Reset the phase value to prevent the float from overflowing
    while (phase >= 2 * M_PI) {
    phase -= 2 * M_PI;
    }

    hope this helps others perusing this site, thanks again to the author for providing a fantastic starting point.

    Ian Charnas

    • Thanks Ian,

      I keep meaning to get in here and fix the sine wave. In my own code I use a table lookup rather than using the sine function directly, one of these days I’ll get around to making that part 4.

      -Tim

  28. Thank you for sharing this fundamental plumbing for an audio-generating app. It does seem strange to me that you’re resetting phase outside of a loop that could potentially overflow it. No? What I mean is… should the ” if (phase >= M_PI * freq)” check and reset perhaps be after the phase increment, *inside* the “for” loop?

  29. There seems to be a lot of interest here. I had great success with this tutorial getting a sine wave rendering. I thought I’d share this code since it seems like others here would find it interesting. These are c functions for square, triangle, and sawtooth functions. Just call these instead of sin() from within renderInput. They return -1.0 to 1.0.

    const float M_PI_3_4 = M_PI_2 + M_PI_4;

    float square(const float phase){
    float p = 0;
    float r = 0;

    p = phase - floor(phase/M_PI) * M_PI;

    if(p >= 0 && p = M_PI_4 && p = M_PI_2 && p M_PI_3_4 && p = 0 && p = M_PI_4 && p = M_PI_2 && p M_PI_3_4 && p = 0 && p = M_PI_4 && p = M_PI_2 && p M_PI_3_4 && p < M_PI)
    r = (p - M_PI_3_4) / M_PI_4 / 2.0 + 0.5;
    }
    return r;
    }
