Reading Time: 6 minutes

Is truth in the art of photography now lost forever?

With the recent advent of computational photography, I seriously fear what this technology is doing to the aesthetic of photography.

Computational photography is now evolving in smartphones from Apple and Google, and I expect every other manufacturer to follow closely.  There has not been much innovation in smartphones recently, except for what is being developed for the cameras built into these devices.  Manufacturers are adding advanced algorithms, neural networks, and artificial intelligence to enhance and manipulate the photos you snap with your smartphone. This manipulation makes awesome pictures, no argument.  And, for the average Joe, it is welcome.  But for serious picture-takers like myself, it is a very troubling concept, one that I fear will propagate to all kinds of cameras beyond smartphones.  Will Nikon, Canon, Sony, and the others adopt computational algorithms to enhance their DSLRs?  Or are they already doing it?

[Image: iPhone X – Exact same picture in two modes, Photo (L) and Portrait (R)]
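
The Portrait shot's background blur is computed, not optical.  Below is a toy sketch of the idea in Python with OpenCV, assuming a precomputed subject mask (real phones estimate depth with dual pixels or neural networks; the file names are hypothetical):

```python
# Toy sketch of portrait-mode-style synthetic bokeh: blur the background
# and keep the subject sharp, using a precomputed subject mask.
# "photo.jpg" and "mask.png" are hypothetical file names.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")                      # the original photo
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = subject

# A heavy Gaussian blur stands in for true optical bokeh.
blurred = cv2.GaussianBlur(image, (51, 51), 0)

# Feather the mask so the subject/background transition is soft.
alpha = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]                             # broadcast over colour channels

composite = (alpha * image + (1.0 - alpha) * blurred).astype(np.uint8)
cv2.imwrite("portrait.jpg", composite)
```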

Wikipedia defines computational photography like this:

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes.  Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements.  Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras.  Light field cameras use novel optical elements to capture three-dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing (or “post focus”). Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.

The definition of computational photography has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics.  Examples of such techniques are image scaling, dynamic range compression (i.e. tone mapping), colour management, image completion (a.k.a. in-painting or hole filling), image compression, digital watermarking, and artistic image effects.  Omitted from this definition are techniques that produce range data, volume data, 3D models, 4D light fields, 4D, 6D, or 8D BRDFs (bidirectional reflectance distribution functions), or other high-dimensional image-based representations. Epsilon photography (image stacking) is a sub-field of computational photography.
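
To make one of those examples concrete: in-camera HDR can be approximated with exposure fusion, which blends a bracketed burst using per-pixel weights.  Here is a minimal sketch using OpenCV's Mertens fusion, assuming three bracketed shots of the same scene (the file names are hypothetical):

```python
# Minimal sketch of in-camera-style HDR via exposure fusion (Mertens et al.).
# Assumes three bracketed shots of the same scene; file names are hypothetical.
import cv2

exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# MergeMertens blends the bracket by per-pixel contrast, saturation,
# and well-exposedness weights; no camera response curve is needed.
fused = cv2.createMergeMertens().process(exposures)  # float32 in [0, 1]
cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```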

Now, some of these new capabilities are pretty impressive.  Sure, I have even used some of them, most notably the panorama features, and the results are actually pretty amazing.  But I must wonder: are we going too far in changing the fundamental concept of what makes photography an ‘art form’?
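
Consider how little of the photographer is left in that panorama process.  It now amounts to a few lines of library code; a minimal sketch using OpenCV's high-level stitcher (file names are hypothetical):

```python
# Sketch of an in-camera panorama feature using OpenCV's high-level
# stitcher, which detects features, aligns the frames, and blends the
# seams automatically. File names are hypothetical.
import cv2

frames = [cv2.imread(f) for f in ("pan_left.jpg", "pan_mid.jpg", "pan_right.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```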

We are now effectively removing the photographer from the photography. They are being replaced by a CPU, storage, and algorithms.  This idea disturbs me greatly.

[Image: Nikkor PC-E Micro 85mm f/2.8 Manual Lens]

There have always been tools to help photographers enhance their images.  In the past, I owned a Nikon lens that permitted the tilt-shift of perspective.  I even owned a 105mm lens that allowed the controlled and deliberate defocusing of an image using two aperture rings.  Are these mechanical, hardware-based devices any better or worse than these new software devices?

[Image: Nikon Tilt-Shift 85mm Lens]
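
The software equivalent of those mechanical tricks is nearly trivial.  Here is a sketch of a software ‘tilt-shift miniature’ effect that keeps a horizontal band sharp and blurs progressively toward the frame edges, as a rough stand-in for what a tilted lens plane does optically (file names are hypothetical):

```python
# Sketch of a software "tilt-shift miniature" effect: keep a horizontal
# band in focus and blur progressively toward the top and bottom edges.
# A rough stand-in for a tilted lens plane; file names are hypothetical.
import cv2
import numpy as np

image = cv2.imread("scene.jpg")
h = image.shape[0]
blurred = cv2.GaussianBlur(image, (31, 31), 0)

# Blend weight: 0 at the focus line (40% down the frame), rising to 1
# at the far edges, so blur increases with distance from the focus band.
rows = np.abs(np.arange(h) - int(0.4 * h)) / (0.6 * h)
weight = np.clip(rows, 0.0, 1.0)[:, None, None]      # per-row blend factor

miniature = ((1.0 - weight) * image + weight * blurred).astype(np.uint8)
cv2.imwrite("tilt_shift.jpg", miniature)
```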

Today, when I shoot with my digital Nikon D850s or Z7 cameras, I purposely set them to the ‘vivid’ setting to punch up the chroma in the images.  This practice would absolutely horrify many other serious photographers.  But for me, this setting recalls the vivid colours of my film days, when I chose Fuji film for landscapes and Sakura film for portraits over the classic Kodachrome, which I always found too cool and bluish.  I greatly preferred the enhanced colours of Fuji and Sakura films and applied their colour manipulations liberally.  So, I have been using technology to influence my images for over 40 years.
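
In software terms, a ‘vivid’ setting amounts to little more than a chroma boost.  A rough illustrative stand-in, not Nikon's actual Picture Control algorithm, is to scale saturation in HSV space (the file name is hypothetical):

```python
# Rough approximation of a 'vivid' picture setting: boost chroma by
# scaling saturation in HSV space. This is an illustrative stand-in,
# not Nikon's actual Picture Control algorithm; file name hypothetical.
import cv2
import numpy as np

image = cv2.imread("landscape.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[..., 1] = np.clip(hsv[..., 1] * 1.3, 0, 255)     # +30% saturation
vivid = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
cv2.imwrite("landscape_vivid.jpg", vivid)
```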

But, how far is too far?  Where do we draw the line?

[Image: Nikon F – ‘Old School’ Image Manipulation with Film Selection]

This revolutionary technology – encompassing imaging techniques that enhance or extend the capabilities of digital photography – could not only give us a different perspective on photography but also change how we view our world.

Marc Levoy, professor of computer science (emeritus) at Stanford University, principal engineer at Google, and one of the pioneers in this emerging field, has defined computational photography as a variety of “computational imaging techniques that enhance or extend the capabilities of digital photography [in which the] output is an ordinary photograph, but one that could not have been taken by a traditional camera.”

So, Levoy uses the words “enhance” and “extend”.  That sounds okay.  But how far do we enhance, and how far do we extend?  When is the art less about the person taking the image and more about the computer modifying the image?

According to Josh Haftel, principal product manager at Adobe, adding computational elements to traditional photography allows for new opportunities, particularly for imaging and software companies: “The way I see computational photography is that it gives us an opportunity to do two things. One of them is to try and shore up a lot of the physical limitations that exist within mobile cameras.”

Wow.  Haftel is suggesting that we break free of the limits of our physical world.  Does that prospect distress you?

So, what is next? Despite these concerns, many companies are forging ahead with new implementations of computational photography.  In some cases, they are blurring the line between what is considered photography and other types of media, such as video and VR (virtual reality).

[Image: Google Night Sight on the Pixel 3]

For example, Google will expand the Google Photos app using AI (artificial intelligence) for new features, including colourizing black-and-white photos.  Microsoft is using AI in its Pix app for iOS so users can seamlessly add business cards to LinkedIn.  Facebook will soon roll out a 3D Photos feature, which “is a new media type that lets people capture 3D moments in time using a smartphone to share on Facebook.”  And in Adobe’s Lightroom app, mobile-device photographers can utilize HDR features and capture images in the RAW file format.
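
Night Sight hints at what is happening under the hood: the camera captures a burst of short exposures and merges them to tame noise.  Here is a naive sketch of that idea, assuming the burst frames are already aligned, say, from a tripod (real pipelines align and merge far more robustly; file names are hypothetical):

```python
# Naive burst low-light merge in the spirit of Night Sight: average a
# stack of short exposures to reduce noise. Frames are assumed to be
# pre-aligned (e.g. tripod); file names are hypothetical.
import cv2
import numpy as np

frames = [cv2.imread(f"burst_{i}.jpg").astype(np.float32) for i in range(8)]

merged = np.mean(frames, axis=0)  # noise falls roughly as sqrt(N)
cv2.imwrite("night_merge.jpg", merged.clip(0, 255).astype(np.uint8))
```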

Since the dawn of photography, we have always manipulated our images, first in the darkroom and now in post-production on a computer.  Image manipulation is an art form in and of itself.  It is taught in schools, and almost all photographers tweak their pictures.  Where would wedding photographers be without the capabilities of Photoshop to fix rapid-fire, ‘run and gun’ style capture work and convert it into cherished lifelong memories?

The computational photography tools and features we have seen thus far are just the start.  I think these tools will become much more powerful, dynamic, and intuitive as mobile devices are designed with newer, more versatile cameras and lenses, more powerful onboard processors, and more expansive cellular networking capabilities.  In the very near future, you may begin to see computational photography’s true colours.


References:

Sullivan, T. (2018). Computational Photography Is Ready for Its Close-Up. PCMAG Digital Edition, Ziff-Davis, LLC. Retrieved January 6, 2019, from https://www.pcmag.com/article/362806/computational-photography-is-ready-for-its-close-up

Wikipedia. (2019). Computational Photography. Retrieved January 6, 2019, from https://en.wikipedia.org/wiki/Computational_photography


About the Author:

Michael Martin has more than 35 years of experience in systems design for broadband networks, optical fibre, wireless and digital communications technologies.

He is a Senior Executive with IBM Canada’s Office of the CTO, Global Services. Over the past 14 years with IBM, he has worked in the GBS Global Center of Competency for Energy and Utilities and the GTS Global Center of Excellence for Energy and Utilities. He was previously a founding partner and President of MICAN Communications and before that was President of Comlink Systems Limited and Ensat Broadcast Services, Inc., both divisions of Cygnal Technologies Corporation (CYN: TSX).

Martin currently serves on the Board of Directors for TeraGo Inc (TGO: TSX) and previously served on the Board of Directors for Avante Logixx Inc. (XX: TSX.V). 

He serves as a Member, SCC ISO-IEC JTC 1/SC-41 – Internet of Things and related technologies, ISO – International Organization for Standardization, and as a member of the NIST SP 500-325 Fog Computing Conceptual Model, National Institute of Standards and Technology.

He served on the Board of Governors of the University of Ontario Institute of Technology (UOIT) and on the Board of Advisers of five different Colleges in Ontario.  For 16 years he served on the Board of the Society of Motion Picture and Television Engineers (SMPTE), Toronto Section. 

He holds three master’s degrees, in business (MBA), communication (MA), and education (MEd). As well, he has diplomas and certifications in business, computer programming, internetworking, project management, media, photography, and communication technology.