There’s always been a divisive gap between the two major forces in the art of photography. For decades we’ve seen and enjoyed the traditional camera style, now known as DSLR photography, and in recent years, the growth of mobile photography has been very difficult to ignore. Its popularity has grown to the point where it has all but obliterated the point-and-shoot market. But still, there are some differences of opinion with regard to the integrity of mobile photography.
DSLR makers like Canon and Nikon have some amazing cameras, but the pace at which they release new models seems slow to some of their customers. I’ve heard professional photographers on some podcasts talk about features they want, and they don’t understand why the manufacturers can’t, or at least won’t, add them to their camera lineups. I’m sure it’s not easy to make the “best” camera on the market.
This is a very exciting time for photography in the mobile space. The cameras in the current line of phones are obviously the best they’ve ever been and the competition among phone makers is getting fierce. It’s not a megapixel war like we’ve seen in the DSLR space, but rather a battle to see who can get the best image quality from these very small lenses and sensors. I would guess that releasing a phone with a camera that has a larger lens opening than the others is one of the bigger checkboxes on the list of features for these companies, but that can’t be an easy endeavour technically because of the physics involved.
As an iPhone user and one who follows Apple more closely than I do any other tech company, I can’t fairly speak about the technology in devices by Samsung, HTC, LG, etc., but I can say that these phone makers do have their loyal customers who are passionate about their phones. Photography brings out some of that passion because photography is art, and art is an expression of one’s vision. For me, the iPhone produces images that best suit my vision and artistic style.
So, what is this gap that’s getting smaller? For one, it’s the ability to tell whether a photo was taken with a DSLR or a mobile device. People have been questioning me on this for a couple of years now, which is a testament to the iPhone’s ability to produce a good-quality image. And I think it’s worth mentioning that these little cameras in our beloved phones have their limitations. Some of these limitations can be overcome thanks to the expert app developers out there who have the creativity and intelligence to supply us with the right tools for the job.
The concept of “computational photography” has come to light recently, with some phones now having the technical ability to read and perceive depth in an image. This is a huge advancement for mobile photographers. There is far too much to discuss here, other than to say that the technology used in this process is called “light field” or “plenoptic” photography, whereby the camera reads the light field of a scene, including the direction in which the light rays are travelling.¹
Apple introduced this technology in the iPhone 7 Plus with Portrait Mode, which uses both the wide-angle and telephoto lenses to gather enough data from a scene to create a depth map, then uses that information to produce a photo with a sharp foreground and a nice bokeh in the background. The only other device on the market that uses a form of Portrait Mode, that I know of, is the Google Pixel 2. I believe Samsung has a feature where you can select the focus after the shot, but it isn’t promoted as a form of Portrait Mode. The Pixel 2 also performs its magic after the shot, most likely because it only has one lens, but it does an impressive job of creating a portrait with a soft background. I may be a bit biased, but I think the iPhone does the best job with Portrait Mode, and it does it all live, with a preview of the effect before you take the shot.
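To give a rough sense of how a depth map becomes a portrait effect, here is a toy sketch in Python with NumPy. This is not Apple’s or Google’s actual pipeline — real portrait modes use far more sophisticated blurs and edge handling — just an illustration of the core idea: keep pixels near the focal plane sharp and blend everything else toward a blurred copy.

```python
import numpy as np

def box_blur(img, passes=8):
    """Crude blur: repeatedly average each pixel with its four neighbours."""
    out = img.astype(float)
    for _ in range(passes):
        out = (out
               + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

def synthetic_bokeh(img, depth, focus_depth, falloff=4.0):
    """Blend sharp and blurred copies based on distance from the focal plane.

    depth is a per-pixel map (0 = near, 1 = far); pixels whose depth is
    close to focus_depth stay sharp, the rest fade into the blurred copy.
    """
    blurred = box_blur(img)
    weight = np.clip(np.abs(depth - focus_depth) * falloff, 0.0, 1.0)
    return img * (1.0 - weight) + blurred * weight

# Tiny demo: a 2x2 "image" with a near subject (depth 0.1) and a far
# background (depth 0.9). Focusing at 0.1 keeps the top row sharp and
# pushes the bottom row fully into the blurred copy.
img = np.array([[100.0, 100.0], [200.0, 200.0]])
depth = np.array([[0.1, 0.1], [0.9, 0.9]])
result = synthetic_bokeh(img, depth, focus_depth=0.1)
```

Tapping a different point in Focos amounts to changing `focus_depth` here, and the “aperture” slider is roughly the `falloff` knob: how quickly sharpness falls away from the focal plane.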
I mentioned app developers earlier and how they help us overcome some of the limitations of mobile photography. This brings me to what I see, at least in my experience, as the one app that closes the gap more than any other to this point: Focos. Yes, that’s how they spell it, and it does a fantastic job with how it allows us to select a point of focus after the shot, as well as, get this, change the depth of field in a way that is similar to changing the aperture of a conventional camera lens. For this to work, the photo needs to be taken on an iPhone with the dual-lens system in Portrait Mode. There are third-party camera apps that shoot with the depth information available from the two-lens configuration, but I’ve found those files don’t work in Focos.
Focos has a lot more to offer to make the app fun to explore and use, but you have to pay for those features, either through a subscription, which is reasonable until you find yourself renewing it year after year, or through a one-time fee that enables all the features of the app forever. I’m not a fan of the subscription model, so I went for the gusto and paid for the whole thing.
So let’s take a look at Focos and how it helps bridge the gap. I took a photo of a pair of Dwarf Alberta Spruce trees in front of my house after a fresh snowfall using Portrait Mode on my iPhone 8 Plus. The image on the left is how it looked as it was taken with the tree in the foreground in sharp focus and the background showing the nice bokeh that Portrait Mode offers. Before Portrait Mode, the iPhone could only give us an image with a very large depth of field, even with the small aperture housed in these little lenses, and that’s all thanks to physics, which is also something I couldn’t begin to talk about.
Changing the focal point of the photo is as simple as tapping the area you want in focus, and for the photo on the right, I tapped on the tree in the background.
This next feature is where the real magic of Focos happens. The slider under the image is how the “aperture” can be adjusted. When I rest my finger on the slider, a graphic of an aperture ring appears with a value below it. I don’t know how the aperture value is calculated or how closely it resembles the aperture of any conventional lens, but as I slide my finger across the screen to adjust it, the value changes in increments of 0.1, so if anything, Focos gives us some very fine control over the depth of field. Fine adjustments like these are only possible with computational photography, because of the way an aperture ring works in a conventional lens: with every full-stop adjustment of the aperture, the lens lets in half or double the amount of light. This changes the dynamics of the exposure to where you have to adjust the shutter speed or the ISO to compensate for the aperture change in order to get the same brightness in the photo. Focos, on the other hand, is merely altering the depth of field when you move this slider.
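For anyone curious about the arithmetic behind that half-or-double relationship: the light a lens gathers scales with the area of the aperture opening, which is proportional to 1/N² for f-number N, so each full stop multiplies the f-number by √2. A quick sketch (the function names are mine, just for illustration):

```python
import math

def full_stops(start=1.4, count=8):
    """Full-stop f-number series: each stop multiplies N by sqrt(2)."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(count)]

def light_ratio(n1, n2):
    """How many times more light f/n1 gathers than f/n2 (area ratio)."""
    return (n2 / n1) ** 2

print(full_stops())           # close to the familiar 1.4, 2, 2.8, 4, 5.6, 8, 11, 16
print(light_ratio(1.4, 2.0))  # one stop apart: roughly 2x the light
```

The values marked on real lenses (11, 16, and so on) are conventionally rounded from this √2 series, which is why the computed numbers come out slightly off the engraved ones.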
Below are two versions of the photo: one with the slider all the way to the left, where the aperture bottoms out at f/16, and the other all the way to the right, where the maximum aperture is f/1.4. The left image shows, as it should with a small aperture opening, a large depth of field with most of the scene in focus. The image on the right has such a small depth of field that only a portion of the tree in the foreground is in focus, which is quite similar to the effect I used to get from my 50 mm Canon lens with its maximum aperture of f/1.8.
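The standard depth-of-field formulas make that f/16-versus-f/1.8 contrast concrete. Here is a small sketch using the usual thin-lens approximations; the 0.03 mm circle of confusion is a common full-frame assumption, and the numbers are illustrative, not measurements from my photos:

```python
def depth_of_field(f, N, s, c=0.03):
    """Near and far limits of acceptable sharpness (all lengths in mm).

    f: focal length, N: f-number, s: subject distance,
    c: circle of confusion -- 0.03 mm is a common full-frame value
       (an assumption; a phone sensor would use a much smaller c).
    """
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# A 50 mm lens focused at 2 m, wide open versus stopped down:
near_18, far_18 = depth_of_field(50, 1.8, 2000)   # sharp zone roughly 17 cm deep
near_16, far_16 = depth_of_field(50, 16, 2000)    # sharp zone well over a metre
```

At two metres, f/1.8 leaves only a sliver of the scene sharp while f/16 keeps most of it in focus, which is the same contrast Focos simulates with its slider.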
Focos has a number of other features that can change the way the bokeh looks in an image, and some are better suited for other photo subjects than the one I show here, but I’ll let you decide if that’s something you want to explore. The point of my writing this piece was to illustrate how I feel the differences between mobile and conventional photography are becoming smaller. Will mobile ever replace conventional photography? I seriously doubt it ever will, and I hope not because I have many friends who prefer their DSLRs over an iPhone any day, and I love what they produce with them. There are many things they can do with their cameras that I cannot do with my iPhone, and I’m ok with that. I knew that would be the case when I sold my DSLR gear and I was willing to accept the challenges of going mobile.
Seeing what I can achieve with an app like Focos has me wondering what is next for computational photography. Oh, I’m sure there will be some readers who say this isn’t real photography, just like there were those who said the same thing when digital entered the photography space, but look what happened. This technology is most likely here to stay. It will only grow and evolve like digital photography did, but at a faster pace. And don’t be surprised if computational photography works its way into the DSLR space — wait, I believe it’s already here. The Nikon D5500, for one, has a touch screen where you can simply tap the live preview on the pop-out screen where you want to focus and take the picture. Sure, it still uses a lens to actually set the true point of focus, and it happens before the shot, but there is definitely some computing going on to do the task. Let’s not forget the Lytro ILLUM camera, which enabled you to focus after the shot, much like Focos does here, and that came out in 2014.
Computational photography isn’t mainstream, but it’s here, and it’s doing its part to bridge the gap.
** Additional Content
A couple of days after I posted this article, I was listening to PhotoGeek Weekly, a podcast by a friend of mine named Don Komarechka, where he talks about the science and technology in today’s photography world. In Episode 7, entitled “The Future of Post Processing” with his guest Martin Bailey, they were about to wrap things up when Don mentioned something very interesting. Martin spoke about a couple of lenses he has that are similar, except one is a newer version of the other with a slightly smaller maximum aperture, and he wants to take some photos to compare them to each other. Below is a transcript of what Don said to Martin regarding a little test he can do:
Don Komarechka (DK): Here’s a test for you… Take both of these lenses, point it to a white wall, on a manual exposure, and it’ll end up being some shade of grey. Okay, take that picture, and then dismount the lens ever so slightly so that it’s still completely attached to the camera but is not making electrical connections, and make sure that both of these shots are at a maximum aperture. My experience with the 85-1.2 is when the camera is detecting the lens properly at f/1.2, the resulting image is brighter than if the camera cannot detect the lens properly.
Martin Bailey (MB): Huh! So the aperture actually closes down a little bit if it’s not connected I would imagine.
DK: No, it has nothing to do with the aperture. It has everything to do with the incident angle of light hitting a CMOS sensor.
DK: Yes. And they’re compensating for this in software and not telling you about it.
MB: Ha! Didn’t know that.
Don continues to say there are some white papers on the subject, and that the difference is only about 2 to 3 percent, but when comparing the two images, it is noticeable, and “the image is shifting its brightness when the camera can detect what the aperture is and then compensate for it.”
So, it appears that DSLR photography, at least with Canon, is always computational as long as the lens in use connects to the camera electronically to tell the camera what the aperture setting is. And to further clarify this, Don told me when I asked for his permission to post this, “The software in the camera adjusts the sensor gain to compensate for its inability to record light from extreme angles.”
This really floored me because I always thought DSLRs were, in a way, the purest form of “mainstream” photography out there. Now I’m not so sure.
I highly recommend listening to Don’s show, PhotoGeek Weekly. It’s entertaining and educational.