The Gap is Getting Smaller

There’s always been a divisive gap between the two major forces in the art of photography. For decades we’ve seen and enjoyed the traditional camera style, now known as DSLR photography, and in recent years the growth of mobile photography has been impossible to ignore. Its popularity has grown to the point where it has all but obliterated the point-and-shoot market. Still, there are differences of opinion about the integrity of mobile photography.

DSLR makers like Canon and Nikon have some amazing cameras, but the pace at which they release new models seems slow to some of their customers. I’ve heard professional photographers on podcasts talk about features they want and wonder why the manufacturers can’t, or at least won’t, add them to their camera lineups. I’m sure it’s not easy to make the “best” camera on the market.

This is a very exciting time for photography in the mobile space. The cameras in the current line of phones are obviously the best they’ve ever been and the competition among phone makers is getting fierce. It’s not a megapixel war like we’ve seen in the DSLR space, but rather a battle to see who can get the best image quality from these very small lenses and sensors. I would guess that releasing a phone with a camera that has a larger lens opening than the others is one of the bigger checkboxes on the list of features for these companies, but that can’t be an easy endeavour technically because of the physics involved.

As an iPhone user and one who follows Apple more closely than I do any other tech company, I can’t fairly speak about the technology in devices by Samsung, HTC, LG, etc., but I can say that these phone makers do have their loyal customers who are passionate about their phones. Photography brings out some of that passion because photography is art, and art is an expression of one’s vision. For me, the iPhone produces images that best suit my vision and artistic style.

So, what is this gap that’s getting smaller? For one, it’s the ability to tell whether a photo was taken with a DSLR or a mobile device. People have been questioning me on this for a couple of years now, which is a testament to the iPhone’s ability to produce a good quality image. And I think it’s worth mentioning that these little cameras in our beloved phones have their limitations. Some of those limitations can be overcome thanks to the expert app developers out there, who have the creativity and intelligence to supply us with the right tools for the job.

The concept of “computational photography” has come to light recently, with some phones having the technical ability to read and perceive depth in an image. This is a huge advancement for mobile photographers. My research tells me there is far too much to discuss here, other than to say that one of the technologies used in this area is called “light field” or “plenoptic” photography, whereby the camera reads the light field of a scene, including the direction in which the light rays are travelling.¹

Apple introduced this technology in the iPhone 7 Plus with Portrait Mode, which uses both the wide-angle and telephoto lenses to gather enough data from a scene to create a depth map, then uses that information to produce a photo with a sharp foreground and nice bokeh in the background. The only other device on the market that uses a form of Portrait Mode, that I know of, is the Google Pixel 2. I believe Samsung has a feature where you can select the focus after the shot, but it isn’t promoted as a form of Portrait Mode. The Pixel 2 also performs its magic after the shot, most likely because it only has one lens, but it does an impressive job of creating a portrait with a soft background. I may be a bit biased, but I think the iPhone does the best job with Portrait Mode, and it does it all live, with a preview of the scene before you take the shot.
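
Conceptually, the depth map is what lets software decide how much to soften each part of the frame. Here is a minimal sketch in Python of that idea, my own approximation rather than Apple’s actual pipeline, with the file names and blur radius as assumptions: a sharp photo is blended with a blurred copy, using the depth map as the blend weight.

```python
# Sketch only: blend a sharp image with a blurred copy, weighted by depth,
# so the near subject stays crisp and the distant background goes soft.
# "portrait.jpg" and "depth_map.png" are hypothetical file names.
import numpy as np
from PIL import Image, ImageFilter

photo = Image.open("portrait.jpg").convert("RGB")
depth = Image.open("depth_map.png").convert("L").resize(photo.size)

# A heavily blurred copy stands in for the out-of-focus "bokeh" layer.
blurred = photo.filter(ImageFilter.GaussianBlur(radius=12))

# Normalize depth to 0..1 (0 = near, 1 = far) and use it as a per-pixel weight.
weight = np.asarray(depth, dtype=np.float32)[..., None] / 255.0

sharp = np.asarray(photo, dtype=np.float32)
soft = np.asarray(blurred, dtype=np.float32)
result = (1.0 - weight) * sharp + weight * soft

Image.fromarray(result.astype(np.uint8)).save("portrait_fake_bokeh.jpg")
```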

I mentioned app developers earlier and how they help us overcome some of the limitations of mobile photography. This brings me to what I see, at least in my experience, as the one app that closes the gap more than any other to this point: Focos. Yes, that’s how they spell it, and it does a fantastic job of letting us select a point of focus after the shot, as well as, get this, change the depth of field in a way that is similar to changing the aperture of a conventional camera lens. For this to work, the photo needs to be taken on an iPhone with the dual-lens system in Portrait Mode. There are third-party camera apps that shoot with the depth information available from the two-lens configuration, but I’ve found those files don’t work in Focos.

Focos has a lot more to offer as well to make the app more fun to explore and use, but you have to pay for those features, either through a subscription, which is reasonable until you find yourself renewing it year after year, or through a one-time fee that unlocks all the features of the app forever. I’m not a fan of the subscription model, so I went for the gusto and paid for the whole thing.

So let’s take a look at Focos and how it helps bridge the gap. I took a photo of a pair of Dwarf Alberta Spruce trees in front of my house after a fresh snowfall, using Portrait Mode on my iPhone 8 Plus. The image on the left is how it looked as it was taken, with the tree in the foreground in sharp focus and the background showing the nice bokeh that Portrait Mode offers. Before Portrait Mode, the iPhone could only give us an image with a very large depth of field, even with the small aperture housed in these little lenses, and that’s all thanks to physics, which is also something I couldn’t begin to talk about.

[Images: Foreground Focus | Distant Focus]

Changing the focal point of the photo is as simple as tapping the area you want in focus, and for the photo on the right, I tapped on the tree in the background.
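
To give a feel for what is probably going on under the hood, here is a rough Python sketch of the tap-to-refocus idea. It is my own approximation, not Focos’ actual code, and the file names, tap coordinates, and blur formula are all assumptions: read the depth under the tapped point, then blur each area in proportion to how far its depth sits from that focal depth.

```python
# Rough approximation of "tap to refocus" using a photo and its depth map.
import numpy as np
from PIL import Image, ImageFilter

def refocus(photo_path, depth_path, tap_xy, max_blur=12):
    photo = Image.open(photo_path).convert("RGB")
    depth = Image.open(depth_path).convert("L").resize(photo.size)

    d = np.asarray(depth, dtype=np.float32) / 255.0
    focal_depth = d[tap_xy[1], tap_xy[0]]  # depth under the tapped pixel (x, y)

    # Weight is 0 at the focal depth and rises toward 1 for areas far from it.
    spread = max(float(d.max() - d.min()), 1e-6)
    weight = np.clip(np.abs(d - focal_depth) / spread, 0.0, 1.0)[..., None]

    blurred = photo.filter(ImageFilter.GaussianBlur(radius=max_blur))
    sharp = np.asarray(photo, dtype=np.float32)
    soft = np.asarray(blurred, dtype=np.float32)
    out = (1.0 - weight) * sharp + weight * soft
    return Image.fromarray(out.astype(np.uint8))

# Tapping the background tree might look something like this (made-up coordinates):
# refocus("portrait.jpg", "depth_map.png", tap_xy=(850, 400)).save("refocused.jpg")
```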


[Image: Aperture Slider]

This next feature is where the real magic of Focos happens. The slider under the image is how the “aperture” can be adjusted. When I rest my finger on the slider, a graphic of an aperture ring appears with a value below it. I don’t know how the aperture value is calculated or how closely it resembles the aperture of any conventional lens, but as I slide my finger across the screen to adjust it, the value changes in increments of 0.1, so if anything, Focos gives us some very fine control over the depth of field. Fine adjustments like these are only possible with computational photography. On a conventional lens, each full stop of aperture adjustment lets in half or double the amount of light, which changes the dynamics of the exposure to where you have to adjust the shutter speed or the ISO to compensate in order to keep the same brightness in the photo. Focos, on the other hand, is merely altering the depth of field when you move this slider.
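
I can only guess at how Focos maps the slider to an amount of blur, but one plausible mapping, purely an assumption on my part, scales the blur radius inversely with the f-number, so each 0.1 step nudges the depth of field slightly while the exposure is left completely alone:

```python
# Hypothetical mapping from an f-number slider to a blur radius in pixels.
# Wider apertures (smaller f-numbers) get more background blur.
def blur_radius_for_f_number(f_number, widest_f=1.4, max_radius=12.0):
    # At f/1.4 use the full radius; at f/16 it shrinks to roughly a tenth,
    # so nearly the whole scene reads as sharp.
    return max_radius * (widest_f / f_number)

for f in (1.4, 2.8, 5.6, 16.0):
    print(f"f/{f}: blur radius ~ {blur_radius_for_f_number(f):.1f} px")
```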

Below are two versions of the photo: one with the slider all the way to the left, where the aperture bottoms out at f/16, and the other all the way to the right, where the maximum aperture is f/1.4. The left image shows, as it should with a small aperture opening, a large depth of field with most of the scene in focus. The image on the right has such a small depth of field that only a portion of the tree in the foreground is in focus, which is quite similar to the effect I used to get from my 50 mm Canon lens with its maximum aperture of f/1.8.

[Images: f/16 version (left) | f/1.4 version (right)]
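
Tying the two sketches together, recreating those two extremes would look something like this. Again, this is an approximation of the effect rather than what Focos actually does, it reuses the refocus and blur_radius_for_f_number functions from the earlier sketches, and the tap coordinates are made up.

```python
# Reuses refocus() and blur_radius_for_f_number() from the sketches above.
# f/16 leaves most of the scene sharp; f/1.4 isolates the foreground tree.
for f_number, name in ((16.0, "deep_focus.jpg"), (1.4, "shallow_focus.jpg")):
    radius = blur_radius_for_f_number(f_number)
    refocus("portrait.jpg", "depth_map.png",
            tap_xy=(400, 900),  # hypothetical point on the foreground tree
            max_blur=radius).save(name)
```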
