The Permissible and the Prohibited: Image Manipulation in Landscape Photography

Not long after first taking up a camera, you will discover the rules of photography—someone will feel duty-bound to tell you. We don’t need an exhaustive list here, but look for the verb in the imperative form: “always fill the frame,” “remember the rule of thirds,” and “look for odd numbers.”

These are all common edicts. However, no matter how firmly stated, these directions are rarely essential to creating a pleasing photo. Once you’ve mastered your camera settings, put away the beginner’s guides and settled on a genre, you look for inspiration in the work of others, and you notice something: great images often don’t follow the rules, and when they do, they don’t contrive to; when the rule of thirds shows up in the work of a master, it can seem almost incidental. Rules might give you something to aim at, but once you have the basics of composition, exploring and experimenting are much more important.

I don’t want to talk about compositional rules in this blog; I’ve yet to take a photo that pushes the boundaries of composition, let alone rationalise and put into words what makes a photo appealing. Further, outside the toxic world of photography forums and Facebook groups, people aren’t overly rigid about these guidelines. Instead, I want to talk about another area of photography with many rules: the ethics of image manipulation in landscape photography. On this topic, more than any other, people can be absolutist, inflexible and reactive. When I hear people criticising post-production work, I often find myself disagreeing with their convictions. And, even where I do agree, I justify my feelings in different ways.

Image Manipulation in Landscape Photography

How much editing is too much? Every photographer must decide their own red lines. However, unlike in reportage, street or wildlife photography, digital manipulation is not a fringe activity in the landscape photography community. In fact, landscapers are perhaps second only to fashion photographers here: time blending, clone stamping, light painting, focus stacking and perspective blending are often part of a landscape photographer’s repertoire.

I am upfront about the work that goes into an image. On social media, I enjoy the back and forth in the comments section when people ask questions or express opinions about an edit or a composition. But I am bound to get pushback whenever I tell people about image manipulation. While the photography community is accepting of—if not comfortable with—Photoshop wizardry, most non-photographers are less accommodating. When I tell people about an editing process, I get comments like ‘why don’t you get it right in camera?’, ‘isn’t that cheating?’, and ‘telling me that kinda devalues your other images.’

There appear to be levels of a photographer’s blasphemy: people seem happy with focus stacking, and they don’t tend to care if I use Photoshop to remove a crisp packet from an otherwise pristine beach, but you are on shakier ground with time or perspective blending.

Let’s look at a few of these techniques as they illustrate people’s attitudes toward image editing—they will also serve as a background for my conclusion, where I will unpack my Photoshop guidelines.

The Permissible: Focus Stacking, HDR and Cloning

Focus Stacking

Depth of field is important for landscape photography purists. While some genres encourage out-of-focus, indistinct elements in front of a subject, as well as creamy blurred-out backgrounds, landscape photographers want crisp focus throughout the frame—from the nearest foreground to far-off mountains. Enter hyperfocal distance. The hyperfocal distance is the place to focus if you want most of your image in ‘acceptable’ focus. When landscape photography tutorials suggest you pick a spot a third of the way into your scene and focus there, it is the hyperfocal distance they are approximating. Focus at the hyperfocal distance, and everything from half that distance out to infinity will be acceptably sharp.
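For the curious, the hyperfocal distance has a standard formula, H = f²/(N·c) + f, where f is the focal length, N the aperture and c the circle of confusion. A minimal sketch in Python—the 0.03mm circle of confusion is a common full-frame assumption, and real depth-of-field calculators add plenty of caveats:

```python
def hyperfocal_distance_mm(focal_length_mm, aperture, coc_mm=0.03):
    """Standard hyperfocal approximation: H = f^2 / (N * c) + f,
    where c is the circle of confusion (~0.03mm on full frame)."""
    return focal_length_mm ** 2 / (aperture * coc_mm) + focal_length_mm

# A 17mm lens at f/11 on a full-frame body:
h = hyperfocal_distance_mm(17, 11)
print(f"Hyperfocal distance: {h / 1000:.2f} m")      # ~0.89 m
print(f"Near limit of sharpness: {h / 2000:.2f} m")  # ~0.45 m (H/2)
```

Focus at roughly 0.9 metres in this scenario, and everything from about 45 centimetres to infinity is acceptably sharp—which is why a wide lens stopped down often needs no stacking at all.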

In certain scenarios, you can focus somewhere between the key elements, and you only need to take one photo. If, however, you are low to the ground—or otherwise very close to a foreground object—this approximation will not work. When you can’t get everything in focus in a single frame, focus stacking is your best option.

In the image below, my lens is very close to the ammonite in the foreground. I was shooting at f/14, but even so, if I focused on the ammonite, the cliffs were somewhat soft. So instead, I took two shots: one focused on the ammonite and one around the hyperfocal distance of 2 metres. I then blended them in Photoshop:

The larger the camera sensor, the shallower the depth of field at equivalent aperture settings. If you shoot a full-frame or medium format camera, and you don’t have the option of flattening the focal plane with lens movements (front tilt), focus stacking is the only way of guaranteeing sharp focus on every element in challenging scenes. When you explain this to a layperson, they do not object. After all, the photos I blended above were taken moments apart and portray the scene faithfully. Perhaps it is hard to begrudge a photographer who is simply working around the shortcomings of their equipment.


Another limitation that photographers live with is dynamic range. Also referred to as tonal range, dynamic range describes the luminance range in a scene: that is, the contrast ratio between the lightest and darkest colour tones in an image. If a camera sensor—or film stock—has a low dynamic range, keeping shadow detail without losing data in the highlights is challenging, and vice versa. A camera with a higher dynamic range deals with high-contrast scenes better, as it records a bigger difference between the lightest and darkest values.

Dynamic range is measured in stops. Each stop increase or decrease represents a doubling or halving of luminance. Aperture, shutter speed and ISO are adjustable on modern digital cameras—usually in 1/3-stop increments. (Think back to photography 101 and the exposure triangle.) In the film days, tonal range was a limitation photographers had to work with. While some colour negative films offered more than ten stops of tonal range, slide films like Velvia have a relatively low 5–6 stops, and the resulting images are contrasty. Dynamic range is less of a limitation with digital sensors, which often boast 14+ stops.
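Because each stop is a doubling of luminance, converting a contrast ratio to stops is just a base-2 logarithm. A quick illustrative sketch—the example ratios are hypothetical, chosen only to land near the figures quoted above:

```python
import math

def stops(contrast_ratio):
    """Each stop doubles luminance, so stops = log2(lightest / darkest)."""
    return math.log2(contrast_ratio)

print(f"Slide-film-like 50:1 ratio: {stops(50):.1f} stops")    # ~5.6
print(f"Modern sensor at 16384:1:   {stops(16384):.0f} stops")  # 14
```

The same arithmetic runs in reverse: a 14-stop sensor can separate tones whose luminance differs by a factor of 2^14, or roughly 16,000 to 1.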

When photographers talk of HDR imaging, they could mean several things. At a basic level, HDR (high dynamic range) techniques let us widen the dynamic range of an image, protecting details in the highlights and shadows that might be lost if your camera only has a narrow tonal range. People wrongly assume that HDR editing is a modern invention, but it has been around since the 1850s, when Gustave Le Gray combined separate exposures—one for the sky, one for the sea—to capture seaside scenes faithfully. Throughout the 20th century, photographers used various techniques to extend dynamic range. Dodging and burning, for example, is just a way of reducing the contrast ratio of an image to render it faithfully in print.

In the digital age, ‘HDR’ is often a pejorative. Photography fads—especially in the social media age—have a tiring effect on their audience. There was a stage in the mid/late 2000s when every second image would be a hyperreal or painterly HDR. I only took up photography in 2015, so I caught the tail end of the HDR boom, but even so, I saw why people didn’t like it. More often than not, HDR landscapes look like a Thomas Kinkade-drawn acid trip. Yet, although hyperreal HDR is rarely well done, there is a place for it, and the cartoonish frankness of a well-executed HDR image can be an effective storytelling device.

Thankfully, HDR in landscape photography is applied more subtly nowadays. And tonal-mapping software allows you to selectively paint back details into the shadows and highlights.

Quite often, when I shoot into the sun, as in the image above, I will bracket my exposures: I take a neutral exposure and allow the camera to capture two stops under and two stops over. These three frames ensure that I have the whole dynamic range covered. Then, using luminosity masking in Photoshop, I can selectively open the shadows without introducing noise. Conversely, I can paint back the highlights if I start to lose them.
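The idea behind a luminosity mask can be sketched in a few lines: build a per-pixel weight from the neutral frame’s brightness, then use it to pull highlight detail from the underexposed frame. This toy Python version works on a list of luminance values in 0–1; it is only a sketch of the principle, not Photoshop’s actual implementation, and the 0.8 threshold is an arbitrary assumption:

```python
def luminosity_mask_blend(neutral, under, threshold=0.8):
    """Where the neutral exposure is bright (likely clipped), favour the
    underexposed frame; elsewhere keep the neutral frame."""
    blended = []
    for n, u in zip(neutral, under):
        # Weight is 0 in the shadows/midtones, ramping to 1 near pure white.
        weight = max(0.0, (n - threshold) / (1 - threshold))
        blended.append((1 - weight) * n + weight * u)
    return blended

# Toy example: sky pixels near 1.0 take their detail from the -2 EV frame.
neutral = [0.20, 0.55, 0.95, 1.00]
under   = [0.05, 0.14, 0.70, 0.80]
print(luminosity_mask_blend(neutral, under))
```

Note that the shadows and midtones pass through untouched, which is why this approach opens highlights without dragging noise up from the dark end of the frame.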

Images such as the one above are prone to clipping, as the red channel is the first to blow out when you expose to the right (ETTR). So when I shoot scenes where the highlights are red or yellow, I still expose to the right, but I always bracket just in case. The image on the left above is a neutral exposure, and the histogram read fine, but in Lightroom, I could not recover all the details in the brightest parts of the sky.

Recent updates to Lightroom Classic allow more control with selective masks, and I bring fewer images into Photoshop to widen the dynamic range. I would only bother bracketing and luminosity masking nowadays if I were working on a portfolio piece with an extreme contrast ratio.

I guess the question to ask when assessing whether HDR blending is an acceptable editing technique might be this: how does the dynamic range of a photograph compare to what we see? Measuring the dynamic range of the human eye is complicated. Wikipedia puts the contrast ratio at 100:1, about 6.5 stops. However, it’s not that simple, and some scientists argue that we can resolve a 5000:1 contrast ratio in a single image. Moreover, our eyes move, and as they move, the pupils adjust, allowing us—in the right lighting conditions—to see details in both the deep shadows and bright highlights. Glance out your window on a bright sunny day, and you see the sky, clouds and trees outside as well as the pattern of your curtains. Now try the same thing with a 14-stop digital camera. You won’t be able to resolve both luminance ranges without blowing out the highlights or introducing a lot of noise when recovering the shadows. Accounting for this scanning ability, some estimates put the human eye at 21 stops of dynamic range.

Applied badly, HDR techniques leave nothing to the imagination, but selective use of tonal mapping can bring details back while retaining the drama of the original image. What’s more, a well-executed HDR image is often closer to the dynamic range we experienced in person, out in the landscape. Finally, it is worth noting that most of the resistance to HDR comes from within the photographic community. Artsy HDR goes down very well with a wider audience.

Clone Stamping

For any task in Photoshop, there are many ways to skin an onion. The clone stamp tool is perhaps the most famous for clearing up blemishes and imperfections, but the spot healing brush, the patch tool and content-aware fill all do similar things. Landscape photographers use these tools to remove items that distract from the composition. I follow a philosophy similar to Sean Tucker on cloning; when clearing red patches and blemishes on the skin, Sean removes anything that won’t be there in a month: pimples go, but moles stay.

Landscapes are fast-moving affairs, so I use a different time scale to Sean; I remove distracting items that would be gone in a few minutes. Above, the Thatcher’s can in Bristol harbour tells a story, but it distracted me from the sunrise I was documenting. In this application, removing objects in Photoshop achieves an outcome that, while possible, would be difficult to achieve at the time of shooting; clearing up an image at the time, you might miss the optimum colour, put ugly footprints across a pristine beach, or—in extreme cases—end up soaking wet from a swim in the harbour.

I’d venture to guess that I’m in the majority here, and most people don’t mind clone stamping to clean up an image. However, many would draw the line at larger manipulations. A few years ago, there was some internet controversy over ‘object’ removal in a few images by Steve McCurry.

Only a miserly purist would bemoan the contrast adjustments and colour shifts that give some three-dimensionality to the men on the rickshaw and make the boys playing football pop from the green background. Still, I can see why people might be upset by the idea of a documentary photographer removing people from their images. You will have your own opinion; maybe these edits are too interpretive a reading of the scene—but I see no foul. The subtractions (and additions) take excellent images with some distracting elements and make the story more concise. They may not be faithful to the scene, but if an image is for your portfolio rather than a reportage piece for National Geographic, is there a problem with manipulating a photo to make it simpler to read and more visually appealing?

Within limits, the techniques I outlined above—cloning, focus stacking and HDR-imaging—are acceptable to most people’s photographic morals. When applied respectfully, these three techniques are ways of overcoming the technical limitations of our photographic equipment or—with clone stamping—opportunity limitations. However, when overdone, we get into murky waters. With clone stamping, you can remove too much, and HDR can stretch the boundaries of what some people feel comfortable calling ‘landscape photography.’

The three techniques in the next section represent more extreme manipulations. But, as you may suspect from my defence of the reportage shots above, I am not a purist, and—in my conclusion—I will defend two and grudgingly accept that the third might be OK in the right hands!

The Dark Arts: Time Blending, Perspective Blending and Sky Replacements

Perspective Blending

With old large-format cameras, it was common to use movements to manipulate the scene. I mentioned lens movements—or front tilt—in passing above. Front tilt allows a photographer to flatten the focal plane so that it better matches the angle of the ground, allowing for sharp focus throughout an image. But some cameras with bellows also allow for rear tilt. With rear tilt, the lens stays in place, but the back, containing the film (or digital sensor), can be moved. This rear tilt allows photographers to subtly manipulate the foreground’s impact in the scene. You can do the same thing in the ‘Transform’ panel in Lightroom:

Transform panel to emulate camera movements

By tilting the frame and turning the image into a trapezoid (before cropping), I emphasise the background more and reduce the causeway’s dominance in the foreground.

I wouldn’t have used this technique for the above image. The moon gets too warped and becomes a distraction, but I would venture to say people—and photographers especially—don’t object much to this technique. This manipulation simply imitates digitally what the rear camera movements of view cameras did physically.

Of course, you can go further in Photoshop than mimicking rear camera movements. In the digital darkroom, you can selectively warp an image in ways that go well beyond the rear movements of a film camera. Perspective warping sits uncomfortably with some people. And yet some photographers go even further.

Perspective blending—also known as focal length blending—is where two images of the same subject are taken at two different focal lengths before being combined in Photoshop. For example, a photographer might do this to maintain an engaging, wide-angle foreground while ensuring that the subject is more prominent in the frame with a longer focal length. I have never used perspective blending in my images, but I have made an example below.

I took this image near the beginning of my autumn 2021 trip to Scotland. I visited this location on my first morning waking up north of the border, but I hadn’t got into the mode of taking images and—despite the obvious potential—I couldn’t make this location work. I posted this image to social media, and people fed back that the subject should be more prominent than the foreground. I admit that this image doesn’t have it—whatever it is—but I liked that Castle Stalker was part of the landscape rather than the main focal point of the picture. However, if I had taken the criticism to heart, perspective blending might have been a good approach:

A common refrain amongst people who engage in heavy digital manipulation is “sell the fake.” Of course, once you know that the image on the right is not ‘real,’ you’ll spot the telltale signs of a composite, but an uninitiated observer would be unlikely to question the image.

Time Blending

Time blending is a technique in which you leave the tripod set up in one place and take several images over a short or long period. Later, you choose elements you like from different frames and blend them in Photoshop. For example, in a landscape scene, we might like golden-hour sunlight on the subject, but the sky colours are better after the sun drops below the horizon; or, in a cityscape, the balance of ambient to artificial light is best during blue hour, but we like the stars that show themselves in the sky an hour later. Time blending solves these dilemmas.

Photographers also use time blending in creative ways. Stephen Wilkes uses this technique to produce his Day to Night images:

If you look closely at any of Wilkes’s images, they are like a page from Where’s Wally? (US: Where’s Waldo?); every character is interesting to look at.

Astrophotography brought me to time blending. Astrophotographers use blue-hour light to get low noise exposures of the foreground—stopping down their apertures for greater depth of field. They then wait until true darkness later in the evening when the Milky Way is more colourful for a sky frame. As discussed earlier, modern cameras have 14 stops of dynamic range. The higher you take your ISO, the worse the dynamic range gets.

For this use case—perhaps—you could argue that astrophotographers use time blending to overcome the shortcomings of their gear; once ISO performance is better, there might be no need to time blend. Maybe so. But the available light during the night is quite dull, and astrophotographers use the directionality of the blue hour light to make their foregrounds more jazzy. Astrophotographers even introduce light spills in Photoshop to explain the shape of the light. I don’t know whether it is because everyone is following the same tutorials online, but astrophotography has an established aesthetic. Images tend to be hyperreal and awe-inspiring. Even when a camera can shoot 20 stops of dynamic range at ISO12800, blue hour blending and this editing style will continue.

Sky Replacements

Unlike the techniques described above, sky replacement doesn’t work purely with the scene in front of you. You could shoot in any conditions, and if you are disappointed by the sunset colours, it doesn’t matter—you can simply replace them after the fact with a sky taken somewhere else, at a different time. As with the other extreme manipulations, astrophotography acted as a ‘gateway drug’ for me here; while learning the techniques you need to merge a blue hour foreground with a starry sky from later in the night, I tried a few sky replacements—transforming scenes taken on a grey day by adding a sky with an afterglow from sunset, for example.

Making a sky blend plausible is not an unskilled task, and learning to do so kept me occupied during the spring lockdowns of 2020. But I didn’t warm to this technique. This preference is not the result of any ethical consideration; it speaks more to the type of photographer I am—I much prefer the challenge of composing out in the field to making a perfected image at home on the computer.


Don’t stray too far from reality

Of what we’ve covered, people are happier with an edit—or shooting method—when a technique makes up for the shortcomings of camera technology; generally, a viewer of a landscape photograph wants to see the scene presented how a photographer witnessed it. This seems to be the conventional wisdom:

When a picture doesn’t contain enough of the original photograph, it is digital art or a composite.

By this logic, focus stacking is harmless, and who could begrudge a landscape photographer for protecting the highlight details by HDR blending? However, we get into murky waters with clone stamping, and most non-photographers will object vociferously to perspective warping and time blending.

Except for focus stacking, perhaps, we can take all these methods “too far,” and it is very easy to overengineer and contrive an image, but how much editing is too much? In my estimation, most people are drawn to landscape photography over other art because they value the aesthetics found in nature. For this reason, lovers of the landscape are less willing to call an image “photography” when it departs too far from reality. The phrase “the camera never lies” has penetrated deep into our psyches—and we don’t like being lied to.

The camera may never lie, but we tell stories

Yet, the idea that a camera tells the truth is misleading. Rather than documenting reality, Victorian Pictorialists contrived scenes to emphasise beauty, tonality and composition. Double exposures and dragging the shutter for artistic effect were also used as early as the 1860s. Suffice it to say, although photography has the veneer of a realist medium, from early in its existence, the camera was used—like any other tool—to express the artist’s will.

All visual art necessarily plays with perception, and ground-breaking artists play around the boundaries of conventional wisdom—often stepping beyond them in thought-provoking ways. Although on the surface, it seems that photographers have less creative freedom than artists wielding a paintbrush or chisel, the way we shoot and edit can make a clear statement. Burnham Arlidge stacks multiple exposures of a scene to make his memoryscapes. According to Burnham, memoryscapes emulate—more closely than can a single still image—the way he remembers locations and events.

And a single still frame can feel impotent to convey the complexity of experience. How often does a smartphone-wielding tourist look back at their photos and exclaim, “I guess you just had to be there?” Even if your intention as a camera operator was to ‘get it right’ in camera—and for a frame to need little editing—you would have trouble faithfully conveying a scene this way. Leaving aside that a RAW file has nowhere near the contrast and colour you witnessed, we don’t experience reality as a camera does. Our experience is a complex tapestry of sensations, perceptions and preconceptions. Moreover, by the time we come to the editing room, the vagueness of memory coupled with post hoc associations with a landscape means some details fade while we embellish others. The “naked eye” is a fallacy. The signals from the eye are quickly adulterated; our minds clothe them in preconceptions to make them fit with a narrative. Our vision of the world is heavily curated and a largely imagined place.

Of the threads from which we weave our experiences, memory, personal associations and preconceptions would make an easy case for greater freedom in the editing room; I believe this is the argument Burnham makes with his memoryscapes. However, I think there is also a case for a less rigid editing ethic from the slightly less subjective threads: sensation and perception. I argued earlier that people object less to editing techniques when you point out that they overcome limitations—technical or opportunity. But what about editing to overcome the camera’s limitations in emulating our experience?

The senses and experience

If our eyes were announced at Photokina, the spec sheet would read something like this:

  • Fixed-lens prime
  • Full frame 50mm equivalent
  • 10 megapixels
  • Maximum ISO1000
  • Dynamic range 14 stops
  • f/3.2 maximum aperture
  • Bit of a lucky dip—each copy will vary
  • Supply your own image processing chipset

Of course, this is an oversimplification. Each caveat to the specs listed above would fill a scientific journal. But it is helpful to look at the technical ‘specifications’ of the eyes—at least in passing; these organs provide data to our brains, and a cursory examination casts light on why editing breathes life—the energy you felt on location—into your sickly RAW file.

There are three main differences between how we see and how our cameras record the world (three that affect our discussion of the ethics of editing, that is). These differences result from the technical specs outlined above—sensation—and the way our brains interpret this sensory data—perception. The first of these differences is the simplest to deal with, as it impacts people’s photography morals the least; as I’ve noted, we usually get a free pass with contrast and colour adjustments.

Bringing back the dimensionality of stereo vision

Stereo vision shapes our perception of the world to a remarkable degree. With two eyes, we triangulate, and the resulting depth perception is something we take for granted. Our two eyes—coupled with head movement within a three-dimensional space—are one reason a landscape photo looks flatter than your memories.

For an image to have dimensionality on a two-dimensional surface—a screen or a sheet of paper—you need contrast. In a picture, contrast isn’t as simple as lights vs darks. Yes, luminosity contrast is one of the primary tools available to a photographer, but colour and textural contrast are just as important. If the elements in your picture differentiate themselves in brightness, hue, or pattern, they will stand out from the page. You can do this in the field while shooting, of course. But much of the work bringing life back to an image is done while editing. Split toning, using colour theory and selective contrast adjustments to your subject all help bring the viewers’ attention to where it should be in your image.

But I’m preaching to the choir—not many people begrudge a photographer for making a few colour and contrast adjustments in Lightroom and Photoshop, right?

Field of View and Subject Focus

Street photographers usually fall into two camps when choosing a standard focal length for a full-frame camera: arguments go back and forth over whether a 35mm or 50mm lens is more representative of the human visual experience. Both have a fair claim. If we consider one eye, our ‘vision’ has an angle of view as wide as that of a 17mm lens. However, only the centre of this field renders what is in front of us with good detail, colour and contrast. This high-definition area of the retina sees a field of view equivalent to that of a 50mm lens. Yet we have two eyes, and their detail-resolving fields overlap, giving a field of view similar to a 35mm lens.
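These focal-length comparisons follow from the standard angle-of-view formula, AOV = 2·arctan(w / 2f), for sensor width w and focal length f. A quick check in Python, assuming the 36mm width of a full-frame sensor:

```python
import math

def horizontal_aov_degrees(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view: 2 * arctan(sensor_width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (17, 35, 50):
    print(f"{f}mm lens: {horizontal_aov_degrees(f):.0f}\u00b0 horizontal")
# 17mm ≈ 93°, 35mm ≈ 54°, 50mm ≈ 40°
```

So the 17mm figure corresponds to a sweep of roughly 93 degrees, while the high-definition patch at a 50mm equivalent covers only about 40.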

Case closed? Not quite. To complicate matters yet further, your visual array constantly scans and creates a much larger composite of the scene in front of you. In concert, our eyes and brain build a wider context of the world. I think this has implications for the editing techniques we’ve explored above.

While you may be aware of a 17mm FoV (and beyond), your narrow focus when you look at a scene is more telephoto. What’s more, noteworthy things—a castle, a lone tree, an interesting rock formation—take more of your attention. It is tempting to approach a landscape with an ultra-wide-angle lens and include all that you can see in your frame, but the subject will look small and undemanding in the resulting image.

Of course, an experienced photographer could deal with this in the field and arrange the foreground and background elements to give the subject the prominence it deserves. But isn’t perspective blending a valid way to approach this challenge too? You are standing on Marazion Beach. Out to sea stands St Michael’s Mount. Your telephoto eyes see the castle at 50mm, and your brain gives greater prominence to this imposing manmade landmark. In your wider peripheral vision—attuned to movement—a wave breaks, threatening a soggy-footed return to the car. You step back. The water plays interesting patterns over the sand and pebbles as the waves withdraw. While it may be a measure too far for some people, combining the narrow subject focus with the broader awareness of the scene using perspective blending doesn’t seem to contravene the ethic “don’t stray too far from reality.” If perspective blending leaves a scene true to what you witnessed, is it not justifiable to do so and thereby tell the story better than a single frame could?

Stream of consciousness: how we connect the fragments of our experience

For the most part, our discussion in the last two sections has treated the eye like a camera lens. And I have discussed how editing techniques can bring back more of what you would have seen with your stereoscopic vision in a single moment. However, we don’t experience the landscape this way. We have already seen how your eyes scan a scene and your brain makes a panoramic collage from split-second visual impressions of a narrower field of view, but your brain holds together more than this. In addition to these split-second adjustments, time passes.

In his On Landscape talk “Point of Stillness”, David Ward describes photography as “sampling from the river of time.” Ward notes that photography does something alien to our stream of awareness; photography elongates a transient moment and makes it eternal. But perhaps points of stillness are not as alien to us as Ward suggests; could neuroscience explain photography’s appeal at a level beneath awareness? Following Christof Koch, experimental data from studies in neuroscience seem to indicate that our perception is “the result of a sequence of individual snapshots, a sequence of moments, like individual, discrete movie frames that, when quickly scrolling past us, we experience as continuous motion.” Koch suggests that these “snapshots” may explain why time seems relative to us. If you are aware of more snapshots in a shorter period, time would appear to slow, whereas it would speed up during periods of fewer frames. For patients with cinematographic vision, these snapshots can stretch out for long periods before the next snapshot replaces them.

All this being said, while individual impressions may intrude into our awareness in a fragmentary way, if we are healthy, we link them together and experience reality as a flow of time. And while the first apprehension of a view can strike you dumb, the impression a landscape leaves on me is not that of a sudden impact. The essence of a landscape filters into my awareness gradually—think of a photograph developing in a darkroom bath. When I spend time with a landscape taking photos, I can be there for hours. I might arrive well before sunset and still be there when the Milky Way rises. During this time, the landscape changes. The pushing or ebbing tide alters how the rocks look in the foreground, and clouds are endlessly dynamic. Even in a short window, the landscape can change radically as the light casts colour and contrast differently on the rocks and plants, and the air that contains them glows and grows dull.

Is a still frame enough to convey this? Sometimes. And perhaps the craft of photography is achieving this magic trick and freezing the decisive moment. But even without introducing the haziness of memory, your narrative experience of a landscape—how your awareness deals with the flow of sensory input—means a single frame can be a weak analogue of what is in front of you. So perhaps we could see time blending as a way to recreate the experience of a place.

And sky replacements too?

But what about sky replacements? If it is legitimate to condense the flow of time using frames taken over several minutes into a final image using time blending, why not paint in a sky from the following day?

No. For me, there is a boundary here. Time blending merges moments that you experienced in a particular place during a distinct sequence of time. The only common factor between the two photos used in a sky replacement is my presence with a camera. Whether the experiencer of different days forms a cohesive continuum is a philosophical debate that has run for thousands of years—and thankfully, we don’t need to get into those weeds here. But suffice it to say, I’m more inclined to call a work landscape photography if the frames used to craft the image come from a distinct sequence of time—a package of experiences with a narrative that coheres easily. This may be more a gut intuition than any concrete division I can point to. But a sky replacement seems a different order of deception to perspective warping and time blending. Both perspective manipulations and time blends interpret and curate the scene in front of the camera, while sky replacements bring in elements from unrelated scenes. If you are going to introduce a sky from a different image, why not also a figure standing on a cliff edge? You could even paint in some low-lying fog, some stars and a rainbow… while you’re at it, throw in a fairy-tale castle for good measure.

Exaggeration is a cheap rhetorical device, but here, after much meandering, I think we have come to the crux of what landscape photography is for me:

If a piece of art was taken with a camera and conveys the essence of a certain place in time, then it is landscape photography.

I am ever inspired by the works of Andrew S. Gray and Valda Baily. While their photographs are stylised and heavily edited, they fit my definition above, and their work is landscape photography, of the highest quality at that. You would have to work pretty hard to convince me otherwise.

Yet here I come to a dilemma. Would my opinion change if I learned that these artists used skies from a different day and a different place? Contrary to my gut feeling about sky replacement, probably not. Expressionist and painterly in the extreme, Baily and Gray’s work studies place and captures its essence; a feat all nature photographers aspire to—rules be damned.

What are my editing red lines?

People like rules for obvious reasons: guidelines categorise a chaotic universe, making it seem more manageable. Breaking down a vision into stepwise processes brings a goal, however lofty, within our power to achieve. But while rules can be useful as you approach image making, holding to them too tightly has two main pitfalls:

(1) It is much easier to fixate on rules and procrastinate than to learn the principle informing a rule and move on. Of course, procrastination is alluring. Why else do people hang around on internet message boards discussing camera specs and being snarky about best practice? Shouldn't they be out taking photos? Maybe it is easier to be 'right' on a safe forum than it is to put in the time, play with the rules and discover what works. Yes, you might make mistakes, but creativity lies just beyond conventional boundaries. Your job as an artist and a landscape photographer is to overcome resistance and face chaos head-on. Pick up your camera, go out of the walled city and make sense of nature, or, accepting it doesn't make sense, wrestle it into a frame anyway. If strict rules for composition and editing arm you in your quest, all the better: use them.

(2) However, as David Ward wryly observes, the acronym for the rule of thirds is ROT. Less pithily, perhaps, and more crudely, I pronounce ‘rule of thirds’ with an Irish accent when I read it to myself. Guy Tal gives a concise critique of reliance on rules in More Than a Rock:

“the greater risk of memorizing and consciously implementing templates, guidelines, or other ‘rules’ of visual composition is not that they may fail to impress viewers, but that they may inhibit or entirely suppress creativity. To follow rules is, literally, to not be creative, to not allow for possibilities outside of what’s already known or what’s been predetermined to be the only ‘correct’ or desirable outcome(s).”

Although Tal is speaking of composition, the same is true with editing. Photography should be playful and explorative. If you don’t experiment—if you hold yourself too rigidly to an ethic or aesthetic—stagnation is bound to follow. Your chosen artistic medium is photography, and you want to study landscape? Then take advantage of the digital darkroom. Try new things, see what works for you, and change things up when they no longer do.

I’ve been into landscapes for several years and have tried various editing techniques and shooting styles. How I edit now is dictated by the time I have and my preference for what to do in the field. I started out shooting anything and everything. For a time, I would have considered myself a dynamic photographer: I used to explore national parks in Russia with a monopod, and I was thrilled to capture a bird in flight or an antlered deer in a landscape scene.

Nowadays, I shoot 99% of my images locked down on a tripod. My chosen photographic craft is the careful arrangement of elements in the scene. Once I find a composition with potential, I can spend hours arranging the elements so that they work well together. Centimetres left, right, up or down make a huge difference to a final image. I haven’t got a geared head yet, but I am close.

I now prefer to spend more time and attention working in the field than on long editing sessions, and my editing “red lines” reflect this preference. I spend most of my effort on contrast adjustments and colour grading, and I dip into other techniques when they are needed. Editing has become as much about studying composition as producing a final image. This is especially true for the “dark arts” above. Heavy editing of an image that didn’t work can guide how I approach future shoots; when I edit like this, I ask, ‘was there anything I could have done to improve this image while on location?’ Let’s look at an example.

When I arrived at Ayrmer Cove, the tide wasn’t in my favour. No matter where I positioned myself, the jagged rock sat uncomfortably with the line of the horizon. Side note: when vertical lines intersect horizontal ones, I like the intersection to look deliberate; but in the image below (left), the tip of the rock sat only just above the horizon, and when I stepped back, I couldn’t get enough of a gap between the top of the stone and the line of the horizon. All the images I took that morning looked clumsy. I chalked it up as a scouting trip.

Back in front of the computer, I used the image for learning. In Photoshop, I cut out the rock and enlarged it—I guess we might think of this as perspective blending. With the tip of the stone well above the skyline and the subject slightly larger in the frame, the image above (right) is more pleasing. Of course, this edit informs my understanding of composition generally, but it also arms me for my next trip to Ayrmer; I know how high the tide was that day, and next time, I need it half a metre lower so I can get closer to the subject rock.

Ending thoughts

What started as a thought about the relation between the flow of consciousness and time blending turned into a micro-dissertation. If you’ve stuck with me, thank you.

I hope the discussion above convinces you—if you needed convincing—that all editing techniques are legitimate weapons in a landscape photographer’s arsenal. Above, I defined landscape photography: a piece of art taken with a camera that conveys the essence of a certain place in time. Used skillfully, any of the techniques above speak clearly about a landscape. But you need only scroll down your social media feed for ten minutes to see bad applications. So I guess this essay is a long way of saying that you must decide your guidelines based on how you want to shoot and your vision for a final image.

Allow me to end with a thought; it sits at the edge of conceptual clarity, but I will articulate it as best I can. People seem most resistant to heavy edits in landscape photography because the resulting image does not faithfully document reality. People like to treat the camera as an objective scientific instrument, capturing photons and conveying unadulterated reality. Although this theory collapses under examination, treating the camera this way keeps photography manageable: as a technical discipline, cameras are something we can argue about. Camera gear does matter, of course, but the endless heated forum discussions about equipment show how many of us prefer to talk about the technical and are unwilling to engage with photography as artistic expression. Your camera and how you shoot and edit are all tools to express yourself and your apprehension of the world around you.

When I look at the works of Valda Baily and Andrew S. Gray, I see Landscape in its pure essence; and I can’t help reflecting that, in the hands of an artist, the archetypal modernist instrument, the camera, coupled with the cutting-edge hardware and software that powers our world, actually serves to highlight the shortcomings of technology in conveying the majesty of human experience. The deification of science in our culture tempts us towards reductionist explanations wherever possible. But does the scientific method control for too much? Just as a scientific explanation of a phenomenon does not tell you how that event makes you feel, an objective image, a RAW file, can be a wan analogue of the world around us; and the still frame is often inadequate to capture the richness of experience and memory. Experience is a complex interaction of sensations, perceptions and other ways of knowing, and reductionism is not always appropriate. In a world increasingly run by technology and machine learning, our most important attribute is that found in Polanyi’s Paradox: our ability to “know more than we can tell.” The artistic endeavour is to play just outside your conceptual comfort zone. Articulating what you find there in an accessible way tells people something they know but could not have said.

You could argue the photographer’s craft is to capture the ‘magic’ in a still frame—and that is always my goal. But sometimes, a heavy edit brings more of my ‘knowing’ to an image. A friend who sells photography in galleries tells me that HDR, double exposures, and Intentional Camera Movement do better than straight landscape and wildlife shots. Maybe this is unconnected to my argument. I’m sure a lot depends on the gallery and its audience. Perhaps this popularity is purely about the aesthetics people value in modern art. But maybe people want more from a photograph than ‘objective reality.’