The Ethics of Post Processing

Apparently everyone wanted to announce things prior to the Memorial Day holiday here in the US (the traditional kickoff to “summer”). Adobe’s big news this week was Generative Fill, a new AI function being beta-tested in Photoshop.

I’m not going to speak to whether it works or not, or all the things you might be able to do with it. Instead, we need to return to a subject that’s been out of the discussions for a while now: what is “normal” post processing, and what is “taking processing too far”?

I first encountered this issue at full scale back when I was editor of Backpacker magazine in the 1990s. A number of photographers, including my mentor Galen Rowell, were digitally scanning their slides. Indeed, Backpacker had to digitally scan any film that was submitted by a photographer. All of us faced a simple question: what constitutes an acceptable color and tonal manipulation on the scan, versus what starts to make the photograph “lie” to the viewer?

As I’ve written before, ethics carry over into the field, as well. And it’s from there that I started to develop my basic position about object removal/manipulation. NANPA (the North American Nature Photography Association), of which I was a founding member, has given members a basic ethics guideline card since 1994. That’s since expanded into multiple pages, but the key field ethics page is here. Couple that with their top-level statement: “The viewing public generally assumes that nature images are straightforward depictions of events and scenes that occurred without human interference.”

But even that is fraught with nuanced issues. For instance, say I’m in the middle of a wilderness photographing a beautiful scene, and I notice a beer can that shouldn’t be there. I actually encountered this situation, and it brought up dueling ethics: Leave No Trace says I should pick up the can and take it to where it belongs (recycling), while NANPA’s statement essentially says I should photograph with the can in place, as that would be the straightforward depiction. But the can won’t be in place the minute I leave! ;~) Worse still, what happens if technically I’m not allowed to walk over to where the can is (e.g. an area marked as being restored, do not enter)?

Yeah, ethics gets prickly real fast. 

That led to my generalized field position: leave things (and photograph them) the way they should be. Which, of course, is subjective. But then I’ve read every word of the Wilderness Act and quite a few other documents about natural preservation policies. I believe my subjective view is “informed” as opposed to “what I feel like.”

Which brings us to the next dilemma: you took that photograph but didn’t notice the beer can. Do you remove it in post processing? Here’s my answer: yes, I do, because that’s what I would have done had I noticed it in the field. It doesn’t belong there, and if I had seen it, it wouldn’t be there.

That’s the “easy” part. The problem just multiplies when we get out of the woods and into the city. One thing that Content-Aware Fill does well, for example, is remove power lines. Those lines are everywhere you don’t want them to be when photographing in towns. A beautiful building you want to capture has a line running through it that distracts from what you want to show the viewer.

And there we get into “photographic intent.” Imagine the building is a historical one, imagine that your client wants a “perfect” rendering of the building itself. There’s a power line in the way of you doing that. In a “perfect” world you’d have the client get the power company to come out and remove the line temporarily while you took your photo, then put it back up afterwards. That takes a lot of time, effort, and money to do, and it’s an absurd thing to even consider when you can effectively do the same thing virtually at no cost. If my intent is to show the building without distraction, I’d have no real issue taking out the power line in post processing.

However, we’re on a slippery slope here, and this is where Photoshop’s Generative Fill starts to come into the discussion. 

The examples I’ve given so far are reality-based. The thing about Generative Fill, at least as most have demonstrated it so far, is that it is equally good at fantasy as it is at reality. Maybe especially good. So in my first example I could have said “replace the beer can with a Corvette.” That would never have happened (remember, we’re in a designated Wilderness area), so the result is entirely fantasy.

I have no real problem with fantasy images other than one: I have to know it’s a fantasy image as I view it. Otherwise, I’m being deceived. 

Back when we had tons of magazines and newspapers doing (mostly) the right thing, pretty much everyone had a policy about photographs that went beyond reality into fantasy: they had to be labeled something along the lines of “photo montage” or “photo illustration” or something even more specific as to what was done by the creator. 

Which brings me to my real point, and it’s a point about AI in general, too: how do we know whether to treat something as “reality” or “fantasy”? We don’t. Because very few sources you get information from today reliably give you context for what you’re viewing. It’s not Photoshop that’s the problem, it’s the media you consume. It’s not reliable, simple as that.

Since I just mentioned it, perhaps I should offer some context for what you’re viewing on my sites: photographs with my color and tonal decisions made in post processing, and from which I’ll remove (via Content-Aware Fill) anything that I would have removed had I seen it in the field and been able to do so. I always process toward the “real scene” without distractions. On the few occasions when I don’t (sometimes I illustrate a possibility), I’ll try to let you know.

all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan. All Rights Reserved