
  • So today we're gonna be looking at different ways of doing auto focus with a camera.

  • So when you're taking a photo with any kind of camera, really, a lot of phones, or definitely things like SLRs like this, you might see an auto focus mechanism.

  • So here we've got MF for manual focus and then AF for auto focus.

  • So if we've got it set on auto focus, essentially the camera, and these days the computer inside the camera, decides whether a picture is blurry or not, and corrects that by moving the lens.

  • So we're gonna look at a few of the methods you can use to do that.

  • Okay, so let's start back in the day.

  • So the first kinds of focus assist, I suppose you could call it.

  • So back in around the 1930s, something called rangefinder cameras came out.

  • These are cameras that have a sort of extra separate mechanism on the side of them, which was the rangefinder mechanism.

  • Now, this was all done with optics, and the way it works is you basically see a viewpoint of what you're looking at, and then you get a kind of ghost image that's shifted to one side.

  • Normally, what you have to do is change a dial on the rangefinder mechanism to bring the two images to lie on top of each other.

  • So these days it's all kind of joined into the imaging system.

  • So, generally speaking, there's two different ways you can do auto focus.

  • There's active methods, and there's passive methods. So an active method will fire something out from the camera, and it will use that to work out how far away something is.

  • So some of the earliest mechanisms for doing active auto focus used something a bit like sonar: it would send out a ping from the camera, and it would basically time how long it took to hit something and bounce back, just like the way sonar works.

  • And it would use that to work out a distance that you could then use to focus.
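
That timing arithmetic can be sketched in a few lines. This is purely illustrative (the helper name and the assumption of sound travelling in air are mine, not anything from a real camera):

```python
# Sketch of the sonar-style timing idea: the ping travels out and back,
# so the subject distance is speed of sound * round-trip time / 2.

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 C

def distance_from_ping(round_trip_seconds):
    """Convert a ping's round-trip time into a subject distance in metres."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# A ping that returns after about 17.5 ms puts the subject about 3 m away.
print(distance_from_ping(0.0175))
```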

  • And as well as sound, you can also use light: some active systems will fire out light from the camera.

  • What we're actually gonna concentrate on today is the passive approach: how do we actually focus a camera when we're not sending out light or sound, so we're just using the light that's coming into the lens to do the focusing?

  • One of the most popular passive mechanisms of auto focus is something called phase detection.

  • So you might be aware of phase detection systems on your camera.

  • You might have different auto focus points, different areas of an image that you can choose to focus on.

  • Let's have a look at how it works in each one of those regions with a phase detection system.

  • If this is our lens coming into the camera, what we do is we essentially measure how light behaves at different points on the lens.

  • So if we have a ray coming from here and we sort of follow it through, so I'm not gonna draw the complicated optics here, your light rays that go through the lens fall upon an auto focus sensor.

  • So this is really essentially a sort of 1D strip of photodiodes, so a set of pixels.

  • Essentially, it's like a little image sensor that the light hits here.

  • Now the trick behind phase detection is you measure two parts of the light, so you would actually have a second sensor.

  • So this second light ray here actually hits a different sensor.

  • Imagine behind the scenes that this is separated via optics, but just to kind of simplify things.

  • Let's draw what's happening here.

  • So what we get hitting this set of pixels, if you like, is we get to see one of the image features in the image.

  • So if we've got a very simple image, what we might have here is a little peak.

  • So perhaps the edge of something on one of those detection points. And because in this case the image is well focused, these two curves will overlay: if you imagine our two images hitting our auto focus sensor, they're perfectly aligned. In the case where the image isn't in focus, what happens is these beams go through the pixels like this, and they actually focus just behind the pixels.

  • So what that means is, if we draw our curves, we have one peak that's kind of up here like this, and we have another peak that's a little bit below, like this, because we're not in focus.

  • So the nice thing about phase detection is that what you do is measure the offset of these two peaks, and the distance between them tells you how out of focus they are.

  • So let's just draw the last case: this one is kind of focused past the sensor, and the other way that you could be focused is in front of the sensor.

  • So we have light coming up here, perhaps doing something like that.

  • So our focus point is here.

  • So when it hits the auto focus sensors, they're going to be offset again.

  • So the two sensors will give a reading of one kind of curve up here and one curve down here. Not drawing very well, okay.

  • And then we get another distance out of what these sensors do.

  • So remember, in reality, there's probably two of these inside the system that the light's hitting.

  • This is a very simple image that we're making here.

  • In practice, the two curves might be quite complicated; you know, there might be different features that we see there. They're not just gonna be a single peak.

  • Most of the time, it'll be some kind of pattern of light that's hitting the sensors.

  • And so the job of the phase detection mechanism is to work out how to move one of these curves so that it lines up with the other one.

  • So mathematically you can do something called cross correlation there, which is a way of essentially looking at how to best match two signals that are offset from each other, and what that gives you is a distance.

  • And it's that distance that phase detection uses to drive the lens.
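
Here's a minimal pure-Python sketch of that cross-correlation step, with made-up signal data standing in for the two AF sensor strips (real systems also normalize the signals and interpolate to sub-pixel precision):

```python
# Slide one sensor's 1D signal over the other and keep the shift that best
# matches; the signed shift is the distance phase detection feeds to the lens.

def best_offset(a, b, max_shift):
    """Return the shift of b (in pixels) that best lines it up with a."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        # Correlate the overlapping parts of the two signals at this shift.
        score = sum(a[i] * b[i + shift]
                    for i in range(len(a))
                    if 0 <= i + shift < len(b))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

a = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # peak seen by the first AF sensor
b = [0, 0, 0, 0, 0, 1, 5, 9, 5, 1]  # same peak, three pixels to the right

print(best_offset(a, b, 5))  # -> 3: b's peak sits 3 pixels right of a's
```

The magnitude of the offset says how far out of focus you are, and its sign says on which side, which is exactly what lets phase detection jump straight to the right lens position.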

  • So the nice thing about phase detection is that once it's calculated this distance, it's very fast to focus, because not only does it know it's out of focus, it knows by how much.

  • So once it's calculated this difference, it can say to the mechanism driving the focus lens.

  • Okay, move this much in this direction.

  • So we've got a distance.

  • But we also know whether we're focused too far away or too close, because if you notice, here the red peak is above the green peak when we're focused behind, and here the green peak is above the red peak when we're focused in front.

  • By knowing which way to shift these patterns, it knows how far to move the lens and in which direction.

  • Phase detection tends to be one of the quickest ways to focus the camera.

  • So which do most systems use? Well, a lot of systems will use both; a lot of SLRs will use both.

  • And the reason is, when you're focusing through the viewfinder, it tends to use phase detection, because it's using the optics of the lens system to steal a bit of the light and pass it to these pairs of auto focus sensors.

  • So you get one pair for each auto focus region, but you can only do that when you're looking through the viewfinder.

  • If you open up live view, that changes how the optics in the camera work.

  • And so then it will tend to use a process called contrast detection, which we'll look at now.

  • Now, contrast detection does work on light that's hitting the imaging sensor, so we're not using the optics in here to divert light around to the auto focus sensors.

  • This is just using the sensor that is essentially used to capture the final image.

  • What we're gonna be doing is reading off some values of those pixels that make up your image.

  • One of the properties of focus is that the contrast of the image, so, sort of, the differences between the bright bits and the dark bits, gets more extreme the more in focus you are.

  • So when you have nice, crisp focus, you get nice, clear differences between black areas and white areas.

  • So what that means is, if we have a way of calculating those differences (so how sharp our edges and our corners are, and how different our regions of light and dark are, so we can measure our contrast), we can kind of work out how in focus we are.

  • So if we just work through how we would do that, using a really simple example, we can look at some of the other issues that happen along the way, and think about why it's quite slow to do this as well.

  • So we've got a photograph here, which I've just turned black and white because it just makes the processing of it simpler.

  • So we're just using a tool here called ImageJ, which allows us to do some pretty simple scripting, just to get at pixel values and to blur the images as well.

  • You can just download this, and what's going on here is we can get the values of the pixels in order to work out how much contrast we have in the image.

  • Probably the very simplest thing we can do is just look at pairs of pixels and calculate the difference between them in terms of brightness.

  • So if we just go through an image, a pair of pixels at a time, and calculate the difference between them, when we kind of maximize the total of all those differences, then we're in pretty good focus, because we've got the most contrast we can have.

  • So that's what this simple example here will do.

  • This line here is just calculating the difference between them, so I'm just calculating differences in the x direction.

  • So in the rows along here. Sometimes, and this is true with phase detection as well, your calculations of contrast or phase can either be sort of in the x direction along the rows, or along the columns, or you can get sort of cross sensors that do both in phase detection.

  • For the contrast here, we're just going to do neighboring x pixels.

  • Okay, so we could calculate all the neighboring y pixels as well, but just to keep it simple, I'm doing this.

  • We could probably use a better measure of contrast, so something like a Sobel operator or something else that's good for detecting edges.

  • But just as a very simple example, let's just measure the difference between neighboring pixels and see how that changes as we go out of focus, which I'll simulate by blurring the image.
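
The same experiment can be sketched outside ImageJ. This is a toy version (a made-up striped image and a crude box blur standing in for the Gaussian), just to show the sum of neighboring-pixel differences falling as the image gets blurrier:

```python
# Contrast measure from the video: total absolute difference between
# horizontally neighbouring pixels. Higher totals mean sharper images.

def contrast(image):
    """Sum |right neighbour - pixel| over every row of a 2D brightness list."""
    return sum(abs(row[x + 1] - row[x])
               for row in image
               for x in range(len(row) - 1))

def box_blur(image):
    """Crude defocus stand-in: average each pixel with its horizontal neighbours."""
    blurred = []
    for row in image:
        new_row = []
        for x in range(len(row)):
            window = row[max(0, x - 1):x + 2]
            new_row.append(sum(window) / len(window))
        blurred.append(new_row)
    return blurred

# A sharp black/white striped pattern: high contrast while "in focus".
sharp = [[0, 255, 0, 255, 0, 255] for _ in range(4)]
soft = box_blur(sharp)

print(contrast(sharp))           # sharp image: large total
print(contrast(soft))            # blurred once: noticeably smaller
print(contrast(box_blur(soft)))  # blurred again: smaller still
```

Each round of blurring drives the total down, which is exactly the five-million-to-one-million drop seen in the ImageJ runs below.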

  • So if I run this, it will move over the image (it takes a little while because I'm moving across all the pixels), and we get a number here, which is essentially the total of all those differences between the pairs of pixels.

  • So it doesn't really matter absolutely what the number is.

  • What we're gonna do is try and find the peak, okay, of these values.

  • So, actually, we're starting off in focus here, and we've got a value of about five million.

  • Let's make it a little bit out of focus.

  • So if we apply a Gaussian blur... I don't know whether you can see there.

  • But it's gone out of focus; you can see we've lost our crisp edges.

  • So if I run this again, now being a little bit out of focus; it takes a while, so let's skip over it.

  • So our first value was 5.1 million.

  • We've now got a value of 1.2 million.

  • So we've gone from five million down to one million as the total of our differences.

  • So we've gone out of focus, and we've got a lower, um, contrast value, if you like.

  • So let's take it to the extreme case: a terribly out of focus image, this time with the blur turned right up.

  • So run it again and see how it reacts.

  • So now we've gone from 1.2 million down to 145,000.

  • And if we take it to the extreme, the real extreme case, we're going to get very low values coming out here.

  • So what's happening if we have an algorithm that does this is that we can plot these values on a curve.

  • So if this is our focus motor driving the lens, and this is our measure of contrast, which in this case is just differences in pairs of pixels, what's gonna happen is we're going to get some kind of curve like this.

  • So when we're out of focus, either way, the value's gonna drop down like it does there.

  • When we're in focus, we're kind of at this peak point here.

  • So the trick with contrast detection is finding this peak.

  • With my camera operator head on, looking at the shot I'm looking at now, I've got the laptop quite close to me.

  • Yeah, I've got you in the middle and I've got the blinds in the back.

  • Yeah, and experience tells me that the auto focus will look at those blinds and go: hey, they're nice.

  • They're going to look great.

  • And your face is gonna be blurry.

  • Yeah, so that could be a problem.

  • If you're running this over a whole image, you're gonna get issues.

  • So quite often, for example on your phone, you can essentially select a region to focus on, so you can press a region on your phone, or you can select a focus point in live view or something like that.

  • What that will do is it will only calculate this difference across a particular region of the image.

  • So if we just want to focus on this region here, we can just calculate this over there.
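
Picking a focus region just means running the same contrast sum over a sub-rectangle of the pixels. A toy sketch (the image and the helper name are invented for illustration):

```python
# Region-limited contrast: score only the pixels inside a chosen rectangle
# (e.g. the area you tapped on your phone screen), ignoring the rest.

def region_contrast(image, x0, y0, x1, y1):
    """Sum of horizontal neighbour differences inside [x0, x1) x [y0, y1)."""
    return sum(abs(image[y][x + 1] - image[y][x])
               for y in range(y0, y1)
               for x in range(x0, x1 - 1))

# Left half is flat "sky" (no texture); right half is a stripy subject.
image = [[10, 10, 10, 10, 0, 255, 0, 255] for _ in range(4)]

print(region_contrast(image, 0, 0, 4, 4))  # flat region: zero, nothing to maximize
print(region_contrast(image, 4, 0, 8, 4))  # textured region: large score
```

Note the flat region scores zero: there is nothing for the search to maximize there, which is the same texture problem that makes plain sky or a white wall hard to focus on.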

  • Otherwise, you're right.

  • It's gonna end up kind of optimizing this curve for something in the image that you might not care about; the shiny stuff just showing off.

  • The other thing to say, I suppose, is we've gone through every pixel in the image here, but actually you would probably only sort of sub sample the image in order to make it quicker.

  • You might have noticed it took a little while to work.

  • These methods tend to be quite slow, though nowhere near as slow as this, because they're not calculating every single pixel; you don't generally need to. But the catch with this method is that you can only tell that you're out of focus.

  • So when we were out of focus, at sort of 1.2 million on our contrast score down here: if this is five million, where we were to start with, and the first time we blurred it we went down to 1.2 million, then we know we're here, but we don't know whether we're there or there.

  • So we don't know which way to move the lens.

  • You'll notice when cameras use this, there's a hunting mechanism.

  • So it has to move the lens a little bit and work out whether it's got better or worse.

  • So it will tend to move it in quite big jumps like this, and as soon as it starts getting worse, it will kind of jump back.

  • So you get these steps moving up the curve to try and find the sort of optimal focus point up here.

  • So you need to search.
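
That hunting search is essentially hill climbing. A minimal sketch, with an invented lens model and contrast curve (real firmware is driving a motor and reading the sensor, not evaluating a function):

```python
# Hill-climbing "hunt" for the focus peak: nudge the simulated focus motor,
# keep going while the contrast score improves, reverse and shrink the step
# when it gets worse.

def score(position, best_focus=50.0):
    """Toy contrast score: peaks when the lens is at the true focus position."""
    return 1000.0 / (1.0 + (position - best_focus) ** 2)

def hunt(position, step=16.0, min_step=0.5):
    """Search for the lens position that maximizes the contrast score."""
    current = score(position)
    while abs(step) >= min_step:
        trial = score(position + step)
        if trial > current:
            position, current = position + step, trial  # better: keep moving
        else:
            step = -step / 2.0  # worse: reverse direction, take smaller jumps
    return position

print(hunt(10.0))  # converges on the peak at 50
```

As soon as a step makes the score worse, the search reverses and halves the jump, which is the back-and-forth twitching you see a contrast-detect lens do, and why it's slower than phase detection's single calculated move.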

  • So that's one of the reasons why contrast detection is pretty slow.

  • Unlike phase detection, where it says move the lens this much in this direction to focus, with contrast detection you don't get that. You just say: I'm out of focus, but I don't know whether I'm focused too far away or too close.

  • The other reason why this method can fail is if it doesn't have anything to measure contrast on to start with, so you need some kind of texture.

  • So if you try and focus on the sky region, for example, up here, you can imagine that, you know, the more and more I blur the sky, it's not having that much effect on the focus measure.

  • That's pretty true of phase detection as well.

  • So if you've got no edges visible, it's very hard to do that pattern matching to work out where you need to move.

  • Sometimes you'll see things like these charts, which provide a nice contrast between black and white edges, that are used to assist the camera's focusing mechanism.

  • And the nice thing about calibration charts like this is they have very good contrast, bright areas and dark areas, which makes focusing nice and easy.

  • So, as an example, let's try perhaps focusing on this and taking some out-of-focus images.

  • We can see how the contrast focusing mechanism performs as we go in and out of focus.

  • Okay, so you might notice with your cameras that both of these mechanisms will fail to work very well when you haven't got much texture.

  • So if you're pointing just at the white wall there, it's going to struggle to find focus. Low light's also a problem, so you often get an assist beam, say.

  • Some cameras will use a flash to light up the scene so they can see what they're doing; some will have a little extra bulb that they light up the scene with, which can be done in infrared so you can't see it, and then it can focus in infrared.

  • Yes, you get that problem as well.

  • Some systems will even project out some kind of structured light, like a grid of light or a texture of light, just to help these algorithms focus a bit better.

  • And so one advantage of the active methods is, of course, they focus in complete darkness, because if you're using sonar it'll bounce back off the wall whether it's lit up or not.

  • The disadvantage being it'll bounce back from a window as well, so you can't take photos through glass and things like that; kind of swings and roundabouts with all these different mechanisms. To kind of summarize the last two methods that we talked about:

  • The phase detection is nice and quick, but it needs its own optics to work.

  • The contrast detection works without fancy optics, and it works in live view where you can just see the image, but it's a bit slower because you have to do this hunting approach. So wouldn't it be nice if we could do some kind of phase detection, but on the actual image sensor?

  • And so there's some technologies coming along now, like Dual Pixel focusing, where what they've tried to do is essentially bury these auto focus points throughout the sensor.

  • So I think it's Canon that do this approach.

  • I don't know whether there are other approaches available, but the way it works, essentially, is each pixel is comprised of two photodiodes, so they kind of work in pairs.

  • And each one of the photodiodes has some kind of microlens attached to it.

  • You've got optics going on, but it's spread out across the sensor, and each one of these pairs of photodiodes is used to do phase detection focusing, so it works on the main image sensor, the same one that's used to capture the image.

  • So you use a pair of photodiodes, hence "dual pixel", to do essentially phase detection focusing.

  • But when you want to take the picture, both of the photodiodes will work together to act as a pixel to take the picture. So the nice thing about that is it's still working on the back plane.

  • So when you're looking at the LCD panel, it's still doing phase detection.

  • So it's nice and fast.


