Hey, what's up everybody, I'd like to welcome you to another Audio Programmer tutorial.
And in this tutorial we're going to talk about the JUCE AudioProcessor class.
The AudioProcessor class is how you can write your plug-in code once but still turn it into all the different formats you want to deploy for plugins, such as VST3, AU, and AAX, and also build those plug-ins for separate operating systems such as Windows and macOS.
So we have two objectives for this tutorial.
The first objective is that we're going to dissect the JUCE AudioProcessor class.
I've been doing JUCE tutorials since 2017, and I'm embarrassed to say that I never really took the time until recently to dig into the AudioProcessor class and truly look at what it does and what each function is doing.
So I'm going to do that with you today and show you some of the insights that I've gained along the way.
The second objective is that I'm going to show you the JUCE AudioProcessor class alongside the CMake file that we made in the last tutorial, show you how those two things relate to each other, and show how the configuration of your CMake file relates to the code that's actually generated for the JUCE AudioProcessor class.
When one of my mentors showed me how these two files relate, it really helped expand my mental map of how build systems and generated code relate to each other, and it helped me level up my skill set in a major way; I hope it levels up your skill set as well.
If you're ready to take that next step into audio plug-in development, I invite you to check out our latest book, The Complete Beginner's Guide to Audio Plug-in Development.
This has everything that you need to know to build your first audio plug-in, over 450 pages, and you can get it both as a physical copy via Amazon and as an ebook on our website, theaudioprogrammer.com.
This video is also brought to you today by our sponsor, JetBrains.
I've been using their CLion IDE for quite a while now, and I love its cross-platform experience: being able to switch back and forth between Windows and Mac and find everything easily, because the experience is largely the same.
I've also been enjoying their integrated CMake support.
It's been a lifesaver for me, because I'm not an expert on build systems, and being able to just double-click on the CMake file and have it open up my project is a huge help.
It also has the ability to debug CMake files, which is really awesome.
I really love that feature.
Be sure to check out JetBrains and CLion at the link below.
And without further delay, let's get started.
Okay, so here we are in our project, and one thing that you'll notice is that I'm in my CMake file rather than the Projucer.
And with that, I'm going to make a very small announcement, which is that moving forward, we're going to be doing our tutorials using CMake rather than the Projucer.
Now, I think that the Projucer is great if you know barely anything about C++ and about JUCE and you're just starting to get off the ground, write some code, and write your first plugin.
But I think once you start getting really serious about C++ development, you're probably going to want to switch to using a build system like CMake.
This is a challenging step and it's been challenging for me.
However, one thing that I will say, having been around a couple of major music tech companies, is that all of them use something like CMake or another build system for generating their projects.
None of them use the Projucer.
Okay, so if you want to get into this and start doing this for a living, I would highly recommend that you start learning build systems, at least the basics of them.
And that's what I'm trying to do as well.
So that's why we're working with CMake right now.
So I know that this transition from the Projucer to CMake will feel a little bit intimidating for some.
But don't worry, we're going to take it slow and I'm going to try to show you every step of the way how everything is happening.
So the first thing I would say is: watch the last tutorial, because that's where we actually created the CMake file that we're seeing right now.
The second thing I'm going to show you is something that I learned when I first started working with CMake that really switched on the light for me in terms of what build systems do and what their intention is.
If we scroll down here a little bit, you'll see that in this juce_add_plugin function we have a couple of configuration options.
Okay, so to explain the reason for this, let's take the code and put it aside for a second.
And let's just say that we were just having a conversation about creating a plugin.
So one of the first questions that we would probably ask is, well, what kind of plugin do we want it to be?
Do we want it to be an audio effect?
Do we want it to be a synthesizer?
Do we want it to be some type of MIDI effect?
These are some of the first questions that we would ask ourselves, right?
And so this is what I would call or what would technically be called global configuration.
Okay, what do we want this project to be?
So you can see here that these global configurations are right up front, in the CMake file itself, and that we can change them to true or false.
So think of these as options or buttons that we could switch as developers.
Now, what happens is that these actually affect the code in the project that's actually generated.
So, skipping ahead a little into some of the functions in PluginProcessor.cpp, we have this function here called acceptsMidi(), right?
And we can see that this is currently set to return false, okay?
We can see that the 'return true' branch is greyed out and that the function is currently returning false.
So it's saying that we do not want MIDI input.
That correlates directly with what we have here in the CMake file: the NEEDS_MIDI_INPUT option.
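To make that correlation concrete, here's roughly what that part of the juce_add_plugin call looks like; the target name and the other settings are placeholders standing in for whatever you set up in the last tutorial:

```cmake
juce_add_plugin(AudioPluginExample        # placeholder target name
    # ... other settings such as COMPANY_NAME, PLUGIN_CODE, FORMATS ...
    NEEDS_MIDI_INPUT FALSE                # flip to TRUE for MIDI input
    NEEDS_MIDI_OUTPUT FALSE               # TRUE if the plugin sends MIDI out
    IS_MIDI_EFFECT FALSE                  # TRUE for a pure MIDI processor
    IS_SYNTH FALSE                        # TRUE if the plugin generates audio itself
    PRODUCT_NAME "Audio Plugin Example")  # placeholder product name
```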
Now, if I set this to TRUE and then rerun CMake (for people who are not using CLion, I believe the command is something like cmake -B build -G followed by the generator for whatever IDE you're using),
then we come back in here into acceptsMidi(), and we see that it now returns true.
So we've just kind of hit a button or hit a switch there, and that has actually switched things.
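Under the hood, that CMake option defines a preprocessor macro, and the generated function in the standard JUCE template is just a switch on that macro:

```cpp
bool AudioPluginAudioProcessor::acceptsMidi() const
{
   #if JucePlugin_WantsMidiInput
    return true;   // NEEDS_MIDI_INPUT TRUE defines this macro
   #else
    return false;  // with NEEDS_MIDI_INPUT FALSE we land here
   #endif
}
```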
So this is a great way to create a convenient interface for developers to change the configuration of their plugin, or of their project, before they even generate the project.
So that is what the intention of using CMake really is, okay?
So if I take this and turn it back to FALSE, then come back over here and refresh my CMake file, you'll see that this changes back to false.
Okay, and see how that switched?
Okay, so that was something that was very simple, but also a very big revelation for me.
Now, I know what some of you are thinking.
Josh, why can't you just put this option directly in the header file or in the CPP file?
Why can't you just have some sort of option, some sort of bool, that you put in here?
And the reason goes back to that architectural question: these are what we would call global configurations, configurations of the project as a whole, okay?
From an architectural standpoint, it makes more sense to have these options in the CMake file, which is the front interface, the entry point where the developer declares what they want this project to be, rather than having them hidden away in an individual header or CPP file, okay?
So that's the reason it's done that way and why developers choose to do it.
That, I hope, helps shed a little bit of light on why we use build systems like CMake and why we put these global configurations there rather than into the code itself.
So I hope that was helpful for you.
Okay, moving along, here we are in our header file, which declares our AudioPluginAudioProcessor, and this inherits from juce::AudioProcessor.
Now, we have a few pure virtual functions that we're implementing here, and we can tell that we're overriding the base class's virtual functions because they have this override keyword, okay?
If we go over to the JUCE documentation, the way that you know a function is pure virtual is that it has this '= 0' after the function signature, okay?
That's how you know it's a function that, if you're inheriting from the AudioProcessor class, you need to provide an implementation for, okay?
I know that's a general C++ thing, but I want to try to make this digestible for people who are just getting started, okay?
So be sure to check out pure virtual functions if you're not familiar with that concept.
Okay, so what we're going to do is give you a whistle-stop tour of what the JUCE AudioProcessor is and what each function does.
This is not going to be really comprehensive.
This is going to be very general, but hopefully this should give you some insight into some things that you may not have known about before.
I know that I've learned a lot in creating this tutorial, so hopefully it's helpful for you.
Okay, so let's get started.
So the first thing that we have here is the constructor.
So this is what's called when the plugin is first created.
What happens is that it constructs the underlying juce::AudioProcessor.
And in here is this piece of code, with some macros, that looks intimidating at first; but if you break it down, you can see that it's not very intimidating at all, okay?
So here we have macros that correspond to the options we set in the CMakeLists.
Let me go back to the CMakeLists here very quickly.
So remember, we have these global configurations like NEEDS_MIDI_INPUT, NEEDS_MIDI_OUTPUT, and so on.
So that's all these are here, okay?
So it's just using those configurations to see if it's a MIDI effect or if it's a synth.
And depending on what those choices are, it's going to create some inputs and outputs and give those to our AudioProcessor, to create our audio input and output, right?
So a MIDI effect would take MIDI input and do something with it.
If it's not a MIDI effect, and it's not a synth, then it's saying that we want an audio input, and it creates a stereo input, okay?
And in the same way, if we want a stereo output, that's what gets created here.
If it's a MIDI effect, obviously you would want it to have a MIDI output; but if it's not a MIDI effect, then you would want it to have an audio output.
So that's what's happening there is just creating your inputs and outputs for your plugin.
And then of course, here in the body of this constructor is where you would put any initialization code you need: starting up other classes, or anything else that needs to be initialized with some values.
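For reference, here's roughly what that constructor looks like in the standard JUCE CMake template (your class name will match whatever you set up):

```cpp
AudioPluginAudioProcessor::AudioPluginAudioProcessor()
     : AudioProcessor (BusesProperties()
                     #if ! JucePlugin_IsMidiEffect
                      #if ! JucePlugin_IsSynth
                       .withInput ("Input", juce::AudioChannelSet::stereo(), true)   // audio in, unless it's a synth or MIDI effect
                      #endif
                       .withOutput ("Output", juce::AudioChannelSet::stereo(), true) // audio out, unless it's a MIDI effect
                     #endif
                       )
{
    // your own initialization code goes here
}
```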
Next, we have our destructor.
So this is called on destruction of this audio processor, which would typically be when somebody deletes your plugin; you would use it to clear out any memory, any resources, any processes that need to be cleaned up before your plugin can be destroyed.
Things like that would typically happen here.
These next ones, getName(), acceptsMidi(), producesMidi(), isMidiEffect():
all of those are direct correlations to the global configurations that we created here in the CMake file,
NEEDS_MIDI_INPUT, NEEDS_MIDI_OUTPUT, IS_MIDI_EFFECT, and so on.
Okay, so going down a little bit further, we have this getTailLengthSeconds().
Sometimes when you're using a plugin in a DAW and you stop processing audio through it, say it's a reverb or a delay, maybe you want that delay or reverb to continue ringing out until it reaches silence.
And sometimes you want the audio to stop right away when your plugin is stopped.
Okay, so if you want it to stop right away, then you would just leave this at 0.0 seconds.
That means that even if it's a delay plugin or a reverb plugin, when you hit stop, the delay or the reverb stops.
If you want it to continue ringing out after audio is done processing through the effect, after you've stopped the DAW, then you would change this to a different number of seconds.
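As a minimal sketch, a delay or reverb that should ring out for two seconds after the transport stops might look like this (the 2.0 is just an example value):

```cpp
double AudioPluginAudioProcessor::getTailLengthSeconds() const
{
    return 2.0; // example: let the reverb/delay tail ring out for two seconds
}
```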
These next few functions, getNumPrograms(), getCurrentProgram(), setCurrentProgram(), getProgramName(), changeProgramName(): all of these refer to presets.
So this is for finding out which preset is loaded, or getting the preset's name.
Typically your presets will be stored in some type of vector holding what you want each parameter to be set to, and that's how they would be stored.
And all of these functions are just for loading presets, getting them, changing a preset's name, and so on.
Going down a little bit further, we have this function called prepareToPlay().
So this is one of the more important functions that you have here in the audio processor.
It's probably one of the top two or three most important functions.
So anytime you're doing some type of digital signal processing, let's say you have a reverb or a delay and you've created an algorithm or you're sourcing an algorithm from somewhere, typically that algorithm will have some type of function called prepare, reset, or prepareToPlay.
And what this does is it allows you to take the sample rate and the buffer size that the DAW is processing at and pass them into those individual algorithms.
Okay, so for example, let's say that we have a one second delay algorithm, right?
So your delay algorithm needs to know whether the sample rate is 44,100 samples per second or 48,000 samples per second, because that will determine whether that delay algorithm needs to process 44,100 samples or 48,000 samples to create that delay accurately.
Okay, so that's why any DSP algorithm will typically need some sort of prepare, reset, or prepareToPlay method, and that gets called here in prepareToPlay().
Now, another thing to know about is when this function is actually called within the plugin's life cycle.
Depending on the DAW, these are sometimes called at different times, but typically what happens is that most DAWs, perhaps all of them, call prepareToPlay() when the plugin is getting ready to start processing audio again.
So for example, when the user hits the space bar to start playing audio through the plugin.
And the reason for this is, let's say the user has been playing audio through the plugin at a sample rate of 44,100, and then they stop it.
Let's say for whatever reason, they decide to change the sample rate to 48,000 samples per second.
Then what needs to happen is that that information needs to be passed into all of your DSP algorithms,
so they know they need to recompute their calculations.
Okay, so that's why this prepareToPlay() method is so important.
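As a rough sketch of what that forwarding looks like, if you were using the juce::dsp module you might do something like this; the reverb member here is a hypothetical example, not part of the template:

```cpp
void AudioPluginAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
    juce::dsp::ProcessSpec spec;
    spec.sampleRate       = sampleRate;                              // e.g. 44100 or 48000
    spec.maximumBlockSize = (juce::uint32) samplesPerBlock;          // e.g. 256 or 512
    spec.numChannels      = (juce::uint32) getTotalNumOutputChannels();

    reverb.prepare (spec); // hypothetical juce::dsp::Reverb member variable
}
```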
The next method, releaseResources(), is actually one that I haven't used in my travels, but I imagine I should probably use it a little bit more.
Sometimes you have a situation where you aren't processing audio through your plugin and you're not doing anything with it, but let's say it's a memory hog and you want to take that memory and free it up.
You can do that here in releaseResources().
Once again, one that I haven't really used, but I imagine that it could be useful in certain situations.
This Boolean function, isBusesLayoutSupported(), is one that the DAW calls when you're trying to put a plugin on a track, to make sure that the channel configuration of your DAW matches the configuration of your plugin.
So let's say you're in Pro Tools doing some sort of 5.1 or 7.1 surround processing, and then you try to load a stereo processing plugin: what would happen is that isBusesLayoutSupported() would return false, because it would say, hey, I'm a stereo plugin,
I can't work on this 5.1 or 7.1 system.
Okay.
So that's the purpose of this function.
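As a sketch, a stereo-only effect might implement it along these lines (this is a simplified version of the shape the JUCE template uses):

```cpp
bool AudioPluginAudioProcessor::isBusesLayoutSupported (const BusesLayout& layouts) const
{
    // Only accept a stereo main output...
    if (layouts.getMainOutputChannelSet() != juce::AudioChannelSet::stereo())
        return false;

    // ...and require the input layout to match the output layout.
    if (layouts.getMainInputChannelSet() != layouts.getMainOutputChannelSet())
        return false;

    return true;
}
```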
Going down further to our next method: this is probably the most important method in the AudioProcessor class, and it's called processBlock().
So process block is where your audio processing actually happens.
So what it does is it gives you this buffer, which is your actual audio buffer.
So think of it almost like a boat of audio that comes into your plugin, and think of your plugin as the dock.
And then let's say that you're doing an audio effect.
Then what would happen is that audio comes into the plugin.
There's some sort of processing or some sort of change that happens.
And then the boat leaves with your changed audio and comes out into the DAW.
Okay.
So that's what the buffer is.
It's typically a two-channel, stereo container of samples that holds the audio.
You make some sort of change to the audio, and then that changed audio flows back out of your plugin.
You also have your MIDI messages: if you have a MIDI keyboard hooked up, this is where its MIDI messages come in.
And this is where you can define what you want those MIDI messages to do:
when I press this key, I want it to play this sample at this particular speed, and so on.
So that's what happens there.
Now I'll go into a little bit of detail on some of the things within the process block itself.
This ScopedNoDenormals is an optimization.
Denormals are this thing that happens when the floating point numbers you're processing get very, very small.
I'm talking about numbers vanishingly close to zero, right down at the bottom of the floating point range.
What happens is that, past a certain stage, processing those very small floating point numbers actually gets very expensive for the processor.
And you'll see the processor actually spike when this happens.
So ScopedNoDenormals allows us to zero out those very small numbers once they get past a certain point.
So it's an optimization.
Then we have these local variables that are just for getting the number of input and output channels.
Below this, you have a loop that clears out any garbage audio that may be in our buffer when we first start playing audio out of our plugin.
So it just clears that out and makes sure that there's no garbage, because that could produce very loud artifacts.
Then below this is some skeleton code for your actual audio processing.
When you want to implement your DSP algorithms and do other things, that's where it would typically happen.
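Putting those pieces together, the template's processBlock() skeleton looks roughly like this:

```cpp
void AudioPluginAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                              juce::MidiBuffer& midiMessages)
{
    juce::ScopedNoDenormals noDenormals; // flush denormal floats to zero for this scope

    auto totalNumInputChannels  = getTotalNumInputChannels();
    auto totalNumOutputChannels = getTotalNumOutputChannels();

    // Clear any output channels that have no matching input,
    // so no garbage memory gets sent out of the plugin.
    for (auto i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
        buffer.clear (i, 0, buffer.getNumSamples());

    // Skeleton for your DSP: walk the buffer channel by channel.
    for (int channel = 0; channel < totalNumInputChannels; ++channel)
    {
        auto* channelData = buffer.getWritePointer (channel);
        juce::ignoreUnused (channelData); // replace with your actual processing
    }

    juce::ignoreUnused (midiMessages); // or iterate the incoming MIDI messages here
}
```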
But I will show you how to do that in future tutorials.
One thing that I will tell you about with this process block is that a lot of people, including myself, when I first started getting into audio programming, I used to think of audio programming as something that happens sample by sample.
But most DAWs, I think all DAWs actually, do block-based processing, which means that they don't process audio sample by sample.
They process it a buffer's worth of samples at a time.
So if your buffer size is set to 128 samples, then it will process 128 samples at one time, then bring in the next 128 samples and so on.
Okay.
If it's 512, then it would be 512 samples that it would process at a time.
Okay.
So this is why, if you're using a MIDI controller and the buffer size is larger, let's say 512 samples per block, the responsiveness is slow when you press a key or do something on the MIDI controller: it has to wait until the next process block to actually receive that MIDI message and do something with it before it'll actually process what you've done.
So that's why people who use MIDI controllers typically set their buffer sizes to be as small as possible: even though that makes the computer need to work harder, you get tighter responsiveness from your MIDI controller or DJ controller.
So just to give you a little example of how that typically works, let's pull up a calculator, and let's say that our sample rate is 44,100 samples per second.
Okay.
What that means is that our computer needs to be able to process 44,100 samples every second.
If it doesn't, then you'll typically get some type of unpleasant noise, like a break, or some type of stuttering effect; maybe the audio might even stop.
So if your sample rate is set to 44,100, that means it needs to process that number of samples within one second.
So let's say that our sample rate is 44,100, and let's say that our buffer size, the one that we set in our DAW, is 256, right?
With 44,100 as our sample rate and 256 samples as our block size, that means this process block needs to be called about 172 times per second (44,100 divided by 256) to produce audio without any breaks, so it sounds the way it's supposed to sound.
Okay.
Now, that was with 256 samples.
Let's take the same sample rate, 44,100, and divide it by 64 samples instead.
Okay.
So rather than about 172 times, it now needs to be called about 689 times per second in order to produce that audio accurately.
So here's what happens when you're adjusting that, say for a DJ controller, and you set your block size, your buffer size, as small as possible.
If you set your buffer size too small, then what you'll get is breaks within the audio: you'll get artifacts and the audio will stutter and won't play properly, because your computer needs to work harder to call that process block 689 times per second.
And it may not have the power to do that and run all the other things it needs to run at the same time.
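Here's that arithmetic as a tiny standalone sketch, so you can try other sample rates and buffer sizes yourself:

```cpp
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0; // samples per second

    for (int bufferSize : { 512, 256, 64 })
    {
        const double callsPerSecond = sampleRate / bufferSize;           // how often processBlock must run
        const double msPerBlock     = 1000.0 * bufferSize / sampleRate;  // time budget per block

        std::printf ("buffer %3d -> processBlock ~%.0f times/sec (%.2f ms per block)\n",
                     bufferSize, callsPerSecond, msPerBlock);
    }
}
```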
So that should hopefully give you an insight into the basics of how the process block runs, and help give you some intuition about what's done there.
But we'll talk about other things later: there are a lot of rules about the process block, things that you're not supposed to do in your code here.
It's a highly optimized piece of code that's meant to run on time and make sure that audio is always produced the way it was designed to be.
Going down a little bit further,
we have these two functions, hasEditor() and createEditor().
So this is referring to the user interface, the graphics of your plugin.
Okay.
So what you could do is set this hasEditor() to return false.
And that just means that you don't supply the plugin with your own graphics.
Typically when that happens, the DAW that you instantiate your plugin in will see that,
and it'll actually create a generic interface for you.
It won't look nice.
It'll just be some simple dials and selector boxes.
But that's what will typically happen. Below that, you have this createEditor() function.
And what you can see here is that it returns a new AudioPluginAudioProcessorEditor.
And that, as we'll talk about in future tutorials, is where you actually create the graphics for your plugin.
So this is actually creating an instance of the AudioProcessorEditor, where your graphics would show up.
Okay.
So that's what happens there.
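In the template, those two functions look like this:

```cpp
bool AudioPluginAudioProcessor::hasEditor() const
{
    return true; // change to false to let the host build a generic UI instead
}

juce::AudioProcessorEditor* AudioPluginAudioProcessor::createEditor()
{
    return new AudioPluginAudioProcessorEditor (*this);
}
```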
Then you have getStateInformation() and setStateInformation().
So this is where you're actually able to save and restore the plugin's state.
Okay.
So we'll do this in a future tutorial as well,
and we'll show how that works.
Now, this is actually not so much for presets; this is more for when you're in a DAW: let's say you have a synthesizer loaded and it's in a certain state, the filter is at a certain cutoff and so on, and when you close down your project, you want the state of that plugin, the configuration that it's in, to be saved.
That happens in getStateInformation(): it says, okay, the user has this synthesizer in this current state,
now we're going to save it to the project.
And then of course, when you reload the project, setStateInformation() is called, and the plugin reloads with the right settings.
So that's where that happens.
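As a minimal sketch, assuming a single hypothetical gainValue member variable, saving and restoring state via XML might look like this (copyXmlToBinary and getXmlFromBinary are helpers that AudioProcessor provides):

```cpp
void AudioPluginAudioProcessor::getStateInformation (juce::MemoryBlock& destData)
{
    // Called when the host saves the project: hand over our current state.
    juce::XmlElement xml ("PluginState");
    xml.setAttribute ("gain", (double) gainValue); // hypothetical member variable
    copyXmlToBinary (xml, destData);
}

void AudioPluginAudioProcessor::setStateInformation (const void* data, int sizeInBytes)
{
    // Called when the host reloads the project: restore the saved state.
    if (auto xml = getXmlFromBinary (data, sizeInBytes))
        if (xml->hasTagName ("PluginState"))
            gainValue = (float) xml->getDoubleAttribute ("gain", 1.0);
}
```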
Then finally, you have this function that is able to create an instance of the plugin itself, called createPluginFilter().
If we go a little bit further into the code, we'll see how that actually happens.
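In the template, this is the free function at the bottom of PluginProcessor.cpp that the JUCE wrapper code calls to make new instances of your processor:

```cpp
// This creates new instances of the plugin. The format wrappers
// (VST3, AU, AAX, Standalone...) call this, not your own code.
juce::AudioProcessor* JUCE_CALLTYPE createPluginFilter()
{
    return new AudioPluginAudioProcessor();
}
```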
So as I was saying before, this JUCE AudioProcessor class is a way for us to write our audio code once, without needing to write it individually for each plugin format we're targeting.
Typically, if we want to sell a plugin, we want to sell the VST3 version of it, the AU version of it, potentially the AAX version of it.
And we want to be able to build those for Windows and Mac.
Right.
So, before JUCE, you used to need to do that separately for each one of the plugin APIs.
But what JUCE has done is encapsulate this nicely in this AudioProcessor class, so that when you write it once, it'll actually deploy your code and wrap it within these different plugin formats, just by hitting a button.
So there's a lot that happens under the hood there that you don't really see on the surface.
So they've done a really nice job of that.
I'll show you just a very general overview of how that happens, and if you want to have a deeper look, feel free to take a deeper look.
But this is where my knowledge starts to get a little bit hazy.
Going into the JUCE code itself, I'm going to go into the JUCE modules.
So think of these modules as individual libraries that have been written for JUCE.
Okay.
So JUCE is really like a collection of different libraries, and they have this one that's called juce_audio_plugin_client.
Now, if you go in here, you see these different folders for AAX, AU, and VST3, right?
These are all individual SDKs, or APIs: if you wanted to take, for example, the Apple Audio Unit API, you could write a plugin that you deploy just in the AU format; but typically people want different formats to be able to satisfy different customers.
Right.
So this juce_audio_plugin_client module is kind of the insertion point where this happens.
You have a whole bunch of configuration options here that I'm not going to go through in this tutorial, but then you have this createPluginFilter().
This is what actually gets called when you create an instance of your audio plugin.
Then what happens is that it calls this function called createPluginFilterOfType().
And then here you have different plugin wrappers for all the different plugin formats.
So if I go here into the wrapper type, this is where you see wrapperType_VST3, wrapperType_AudioUnit, wrapperType_AAX, wrapperType_Standalone, and so on.
So this is actually how JUCE takes your plugin and wraps it into these different formats, and is able to provide you with binaries of all these different types.
Really clever stuff.
It starts to get a lot lower level from here.
So for example, if we go into the VST3 wrapper and take a look, this is where it starts to get outside of my domain knowledge.
You can see that there's a lot of stuff that goes on here, a lot of code that's deeper than my experience.
But if you want to take a deeper look, feel free; that's how it happens.
That's how the magic happens.
Okay.
So there you have it.
That's a general overview of what happens in the JUCE AudioProcessor.
As I mentioned before, this is one of the core classes that you need to know about if you're doing audio processing with JUCE.
I hope that was helpful for you.
If you found it helpful and gained something from it, be sure to give it a like and subscribe, and I will see you next time.