Before there was autofocus, there was focus. The camera is a light-tight box that is used to expose a photosensitive surface (film or digital sensor) to light. In order to focus the light onto the surface, most cameras (and your own eyes) use a lens to direct the light. Why did I say “most”? Well, there are many types of cameras around that do not rely on lenses to focus light. The “pinhole camera” is a box with a tiny hole on one end and a photosensitive surface on the other. Light comes through the tiny opening and is projected onto the rear wall of the box. A search of the Internet or your local library will reveal that scientists and engineers are currently working on developing lens-less cameras that are never out of focus and avoid the unfortunate characteristics imparted to light when it passes through glass or plastic lenses. For the time being, however, nearly all of us are using cameras that focus light through a lens.
A lens is an optical device made of curved, transparent material that bends light passing through it. A camera lens, whether built into the camera or attached and interchangeable, consists of one or more elements that converge and diverge light, re-assembling the rays reflecting from the scene into a focused image on the photosensitive surface.
Why do we need to bend the light to create an image? Well, we do not truly need to bend the light at all. The issue is that the film, sensor, or back wall of your eyeball is usually much smaller than the view we are trying to capture. Therefore, we need to bend the light to reduce the size of the image. How else would you get an entire mountain or building to fit onto a camera sensor without bending the light?
Not only does the lens bend the light, it also slows it down. The speed of light changes when it passes through transparent materials such as glass. So, light is bending and slowing as it enters and exits a lens (depending on the design of the lens). The camera lens’s job is to direct that light onto the film or sensor.
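How much light slows inside glass is described by the material’s refractive index, n, where the speed in the material is v = c / n. A minimal sketch, assuming a typical (not universal) value of n for optical glass:

```python
# Light slows inside glass: v = c / n, where n is the refractive index.
C = 299_792_458   # speed of light in vacuum, m/s
n_glass = 1.5     # an assumed, typical value for optical glass

speed_in_glass = C / n_glass
print(round(speed_in_glass))  # 199861639 m/s, about two-thirds of c
```

The higher the refractive index, the more the light slows, and the more strongly it bends at the surface of the lens.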
Before we go too crazy here, let me issue a disclaimer: there are many things one can learn about the behavior of light and the physics of lenses. I will never pretend to have more than a casual understanding of the topic, and my college physics grades would indicate that you might want to forget what you just read and are about to read. But, for the purposes of this article, I am going to try to keep things basic and clear so that we can get to the subject at hand: focus. If you want to dig deeper, by all means, indulge yourself. Optics and light are super cool and fascinating, but I need to keep this relevant to the photographer. Doctoral-level knowledge of this topic is in no way guaranteed to make you a better photographer.
As anyone who has used a magnifying glass to try to burn holes in paper or leaves can attest, the point at which the light converges depends directly on the distance between the lens and the surface onto which you are projecting it. When you try to focus the light of the sun into a tiny spot to start a flame with a lens, you are focusing the light from a single light source. The camera, as well as your eye, is focusing the light from not only potentially many light sources, but an infinite number of light rays that are reflecting from objects in the scene. Moving the lens closer to or farther from the sensor or film is how the camera and lens work to channel the light and recreate the image clearly.
If you could not adjust the focus of the camera and lens, you would have to physically move closer to or farther from the subject—just like you did with your magnifying glass and the sun. Luckily for us, most cameras do the moving for us.
Let us get theoretical one more time to help cement this information. You are fundamentally against selfies and are taking a portrait of a friend so that they don’t have to take their own picture. Now, let’s look closely at our subject. Really closely… the tip of an eyelash. That eyelash tip is reflecting light from a light source (sun, strobe, light bulb, etc.) in all directions, not just back at the camera. Reflected light from that eyelash is entering the camera’s lens at different angles because it is reflecting at a nearly infinite number of angles. The lens’s job is to collect those light rays and make them converge onto the film or sensor at a single point so that we can reproduce the tip of that eyelash on our photograph exactly as it appears to our eye. If that light converges at a point before the sensor, that eyelash tip will appear blurry, as the light will converge to a point and then continue on its merry way, diverging from the point. Similarly, if that light tries to converge at a point beyond the film or sensor, the light impacting the plane will not yet be brought to a single point, and we have the same effect.
What is this effect? An out-of-focus image is created. The tip of that eyelash is reproduced as a fuzzy collection of reflected light that will resemble a blurry eyelash tip. Now, imagine that an infinite number of times from every point of light or reflection in a scene. Blurry!
To allow your image to be sharp, or to allow you to intentionally not focus, the camera and lens work together to change the distance of the lens from the sensor or film in order to control where the captured light converges. When the light converges precisely at the plane of the film or sensor, the image is in focus.
So, on a camera with a lens that has a rotating mechanical focus ring, turning that ring physically moves the focusing lens, or lens-focusing group, manually changing the distance between the lens and the sensor and controlling where inside the camera the light converges.
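The relationship between focal length, subject distance, and lens-to-sensor distance can be sketched with the idealized thin-lens equation, 1/f = 1/d_o + 1/d_i. This is a simplification—real camera lenses are multi-element designs—but it shows why the lens must move as the subject distance changes:

```python
# Idealized thin-lens equation: 1/f = 1/d_o + 1/d_i
# f = focal length, d_o = subject distance, d_i = lens-to-sensor distance.
# All distances in millimeters. A simplification: real camera lenses are
# multi-element designs, but the principle is the same.

def image_distance(focal_length_mm, subject_distance_mm):
    """Return the lens-to-sensor distance that brings the subject into focus."""
    return 1 / (1 / focal_length_mm - 1 / subject_distance_mm)

# A 50 mm lens focused on a subject 2 m (2000 mm) away:
print(round(image_distance(50, 2000), 2))  # 51.28
```

Notice that the in-focus position (about 51.3 mm) is slightly farther from the sensor than the focal length; as the subject moves toward infinity, the lens-to-sensor distance shrinks back toward exactly 50 mm. That small back-and-forth travel is precisely what the focus ring (or the autofocus motor) controls.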
Now that we have a basic understanding of how the lens works to focus the light onto the sensor or film, we can talk about the magic of autofocus. As technology advanced, camera companies figured out how to motorize the camera body and lenses to move the focusing elements or focusing group toward or away from the sensor or film. The vast majority of today’s cameras do not have autofocus motors inside the camera body, but rely on tiny motors built into the lenses, which are controlled from the camera itself.
Not really rocket science, right? But, how does the camera know when the subject is in focus? When we focus a lens manually, we look through a viewfinder or at an LCD screen and verify, with our eyes, that the subject looks sharp. Many film-era viewfinders had useful split-image and microprism aids at the center that assisted with manual focusing. The autofocus camera needs to calculate focus electronically as the lens moves toward and away from the sensor or film. And, luckily for us, especially if you do not have perfect vision, it can now do this extremely fast and accurately.
Active versus Passive
You won’t see Active AF systems much these days, but let us give a nod to the technology. Active AF systems were around in the early days of autofocus technology and relied on the camera transmitting an ultrasonic or infrared signal toward the subject. The subject would reflect the sound or light back to the camera’s focus sensor, and by combining the round-trip time with the speed of sound or the speed of light, the camera could calculate how far away the subject was. It actually sounds pretty cool and high tech, right? This is, basically, sonar and radar in a camera. Sonar and radar are cool. So is Active AF.
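The time-of-flight arithmetic behind an ultrasonic Active AF system is simple: the signal travels out and back, so the subject distance is the speed times the round-trip time, divided by two. A minimal sketch, assuming an ultrasonic (speed of sound) system:

```python
# Active AF distance from an ultrasonic echo (time-of-flight).
# The signal travels to the subject and back, so we halve the round trip.
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C (assumed)

def subject_distance_m(round_trip_seconds):
    """Distance to the subject, in meters, from the echo's round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_seconds / 2

# An echo that returns after 17.5 ms puts the subject about 3 m away:
print(round(subject_distance_m(0.0175), 2))  # 3.0
```

An infrared system works the same way, just with the speed of light in place of the speed of sound (and correspondingly tiny time intervals).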
Before you get all excited about having pioneering technology on your camera: if yours has what is known as an AF-assist lamp, that is not an Active AF system. It merely adds light to a dark scene to assist the passive system.
Passive AF is the choice of the vast majority of today’s cameras. In the Passive AF world we have two different systems: Phase Detection and Contrast Detection. We will wrap up this intriguing article by describing how each system works, again, keeping it relatively simple.
Phase detection is the system most commonly found on today’s DSLR cameras. As you know, light enters the lens of a DSLR and strikes a mirror that is angled in front of the sensor or film. That light is reflected up into a prism and then toward the viewfinder at the back of the camera. However, what you might not have known is that a very small amount of light passes through that mirror, strikes another mirror, and is reflected down toward the bottom of the camera, where the autofocus sensor lives.
The autofocus sensor contains two or more image sensors with microlenses above them. These tiny sensors create the camera’s autofocus points. The first autofocus cameras had a single, central focus point. Technology today gives us cameras with dozens of selectable focus points.
So, how does this autofocus sensor work? In simple terms, phase detection works by dividing that incoming light into pairs of images before comparing them. The light is divided as it passes through that transparent part of the main mirror, where that area acts like a beam splitter. The two distinct images are directed downward to the aforementioned autofocus sensor, where the two images are compared and their positional relationship evaluated. A computer inside the camera evaluates the signal from the autofocus sensor and commands the lens to adjust the focusing elements inside the lens until the two images appear identical. Once the two images match, the image is in focus.
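The comparison step can be pictured as sliding two one-dimensional intensity profiles against each other until they line up. This is a hypothetical toy sketch, not any camera’s actual firmware; the function name and the circular-shift simplification are my own:

```python
# Hypothetical sketch of the phase-detection comparison. The AF sensor sees
# two 1-D intensity profiles of the same detail; when the image is out of
# focus, the profiles are shifted apart. The shift that best aligns them
# tells the camera which way, and how far, to drive the focusing group.

def best_shift(left, right, max_shift):
    """Return the circular shift (in samples) that best aligns two profiles."""
    n = len(left)
    best, best_score = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Sum of absolute differences between left and right shifted by s
        score = sum(abs(left[i] - right[(i + s) % n]) for i in range(n))
        if score < best_score:
            best, best_score = s, score
    return best

# A bright edge as seen by the two halves of the AF sensor, 3 samples apart:
a = [0, 0, 0, 9, 9, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 0, 9, 9, 0, 0]
print(best_shift(a, b, 5))  # 3: a shift of zero would mean the image is in focus
```

The sign of the shift tells the camera whether the light is converging in front of or behind the sensor, which is why phase detection can drive the lens directly to the right position instead of searching for it.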
Early sensors just evaluated vertical details in the image. This had its limitations as the system struggled to focus on simple scenes with lots of horizontal components. I remember turning my old SLR camera sideways to trick the autofocus sensor! Now, many sensors, called cross-type points, read both horizontal and vertical information simultaneously. Ahhhh, technology!
Contrast detection is the system used commonly by mirrorless cameras, point-and-shoot cameras, DSLR cameras in live view, and smartphone cameras: basically, any camera that is not using a mirror.
As you may have noticed, phase detection systems are complex and have many components. Contrast detection is much simpler: it uses the light falling on the main sensor itself to determine focus. This gives contrast detection one advantage over phase detection: the number of autofocus points. With phase detection, the number of points is based on the design of the mirror and how many autofocus sensors live below that mirror. With contrast detection, the camera can have an almost unlimited number of focus points. Some modern cameras have touchscreens where the camera will focus on any point in the image that you designate, with the touch of a finger.
How does it work? Well, the camera commands the focusing element of the lens to move while it reads the contrast (the difference in intensity between neighboring pixels) in a region of the sensor. Maximum contrast indicates the region of sharpest focus. While simplicity is the advantage of this system, the downside is that the camera must constantly evaluate images in order to achieve focus. When the light hits the sensor for the first time, the camera has no idea whether the contrast is at its maximum until it changes the position of the lens to vary it. It is the equivalent of weighing something on a balance scale without knowing the weight: you could put a counterweight on the opposite end of the scale and find that it is just right, too heavy, or too light. The camera gets the initial image, which may be in focus, but, in order to verify, it has to start moving the lens to see if the image gets sharper or blurrier.
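That trial-and-error search is essentially a hill climb: nudge the lens, compare readings, keep stepping while sharpness improves, stop when it starts to fall. A toy sketch under assumed conditions (the `contrast_at` function here is a stand-in for a real sensor reading, with a made-up peak position):

```python
# Toy sketch of contrast-detection focusing as a hill climb (hypothetical,
# not real firmware). The camera cannot tell from a single reading whether
# contrast is at its peak, so it probes in both directions, then keeps
# stepping while contrast improves.

def contrast_at(lens_position, sharpest=42):
    """Stand-in for a sensor contrast reading: peaks at the in-focus position."""
    return 100 - abs(lens_position - sharpest)

def hunt(start):
    pos, step = start, 1
    here = contrast_at(pos)
    # Pick a direction by probing one step each way
    if contrast_at(pos + step) < here and contrast_at(pos - step) > here:
        step = -1
    # Keep stepping while contrast keeps increasing
    while contrast_at(pos + step) > contrast_at(pos):
        pos += step
    return pos

print(hunt(10))  # 42: the lens settles where contrast is maximal
```

Note that the search converges to the same position from either side (`hunt(60)` also returns 42), but only after stepping through every intermediate position, which is exactly why older contrast-detection cameras felt slow.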
This is called “hunting.” Those who have older point-and-shoot cameras may remember, not so fondly, waiting for the lens to find focus while the action in the scene passed you by. Luckily, technology surrounding contrast detection autofocus is always improving, and today’s mirrorless cameras and point-and-shoots have the ability to focus extremely fast.
So, now you know how focus works inside your camera. Or, at least, I hope you do. Thanks for reading!