Camera Work

David Douglas Duncan, 102, Who Photographed the Reality of War, Dies

nytimes.com | 2018-06-07T23:15:29.000Z


Under the helmets, the faces are young and tormented, stubbled and dirty, taut with the strain of battle. They sob over dead friends. They stare exhausted into the fog and rain. They crouch in a muddy foxhole. This goddamn cigarette could be the last.

There are no heroes in David Douglas Duncan’s images of war.

Dark and brooding, mostly black and white, they are the stills of a legendary combat photographer, an artist with a camera, who brought home to America the poignant lives of infantrymen and fleeing civilians caught up in World War II, the Korean conflict and the war in Vietnam.

“I felt no sense of mission as a combat photographer,” Mr. Duncan, who was wounded several times, told The New York Times in 2003. “I just felt maybe the guys out there deserved being photographed just the way they are, whether they are running scared, or showing courage, or diving into a hole, or talking and laughing. And I think I did bring a sense of dignity to the battlefield.”



Mr. Duncan, who had lived since 1962 in Castelleras, France, died on Thursday in the South of France, his friend Joel Stratte-McClure said. He was 102.

He was among the most influential photographers of the 20th century, a Life magazine peer of Alfred Eisenstaedt, Margaret Bourke-White and Carl Mydans. In addition to his war work, Mr. Duncan spent years with Pablo Picasso, creating a pictorial record of the artist’s life, and roamed the world making photographic essays on the Kremlin, the city of Paris and the panorama of peoples in Asia, Africa and the Middle East.

A globe-trotting adventurer sometimes likened to Hemingway, he climbed mountains, crossed jungles and was a deep-sea diver, a marine zoologist, a fisherman, an aerial and undersea photographer, an archaeologist in Mexico and Central America and a connoisseur of Japanese art and culture.

His work filled more than 25 books, including eight on Picasso. “This Is War!” (1951), about Korea, was his best-known combat work and brought worldwide acclaim. The renowned photographer Edward Steichen called it “the greatest book of war photographs ever published.”

Mr. Duncan was a Marine officer and combat photographer in World War II, covering the American invasions of the Solomon Islands and Okinawa. He was aboard the battleship Missouri in Tokyo Bay in 1945 photographing the formal Japanese surrender under the stern gaze of Gen. Douglas MacArthur.

He joined Life after the war, and his assignments took him to conflicts in Palestine, Greece and Turkey and to India, Egypt, Morocco and Afghanistan. He was in Japan in 1950 when North Korean troops crossed the 38th parallel, igniting a United Nations police action that would leave 36,500 Americans dead.

Mr. Duncan was soon on the front lines, exposed to the same dangers as the allied troops and civilian refugees. He also flew on bombing missions, taking pictures from jets swooping over targets. He wrote the text for “This Is War!,” as he did for his other books, but critics said it was his pictures that captured the essence of war.

“My objective always is to stay as close as possible and shoot the pictures as if through the eyes of the infantryman, the Marine or the pilot,” he told an interviewer in 1951. “I wanted to give the reader something of the visual perspective and feeling of the guy under fire, his apprehensions and sufferings, his tensions and releases, his behavior in the presence of threatening death.”

In Vietnam, where he worked for Life and ABC News, Mr. Duncan again focused on the vulnerability of soldiers and civilians, often against backgrounds of lush jungles and burning villages. His most powerful images were made in the 1968 siege of Khe Sanh. But in contrast to the objectivity he showed in earlier wars, he was critical of the United States’ role in Vietnam, which he denounced in his book “I Protest!” (1968).

Mr. Duncan’s friendship with Picasso began in 1956, when, at the suggestion of a colleague, the war photographer Robert Capa, he went uninvited to Picasso’s home, the Villa La Californie, in the South of France. Admitted by Picasso’s wife at the time, Jacqueline Roque, he found his subject taking a bath. Mr. Duncan stayed for months, and they were simpatico for 17 years, until Picasso’s death in 1973.

Exploring the artist’s daily life and extraordinary creativity, Mr. Duncan’s pictures were collected in “The Private World of Pablo Picasso” (1958), “Picasso’s Picasso” (1961), “Goodbye Picasso” (1974), “The Silent Studio” (1976), “Viva Picasso” (1980) and other volumes.

“You cannot imagine how simple it was,” Mr. Duncan told Le Monde in 2012. “I was there, like someone belonging to the family, and I took pictures.”

David Douglas Duncan was born to Kenneth and Florence (Watson) Duncan on Jan. 23, 1916, in Kansas City, Mo., where he and three brothers and a sister grew up. He was fascinated with photography from an early age.

He studied archaeology at the University of Arizona in 1934, but dropped out to join expeditions to Mexico and Central America. He then majored in zoology and Spanish at the University of Miami, graduating in 1938.

Resolved to freelance, he began deep-sea fishing, diving and photographing aquatic life. On a schooner from Key West, Fla., to the Cayman Islands, he took pictures of giant sea turtles. In Mexico, he photographed Indians, Gila monsters and jaguars, and shot Mayan ruins in the Yucatan. Off Peru and Chile, he caught and photographed swordfish and marlin. His pictures ran in National Geographic magazine and many newspapers.

After World War II, he went to Palestine for Life and covered fighting between Arabs and Jews in 1946, before the creation of the State of Israel.

His marriage to Leila Khanki, in 1947, ended in divorce. He married Sheila Macauley in 1962. She is his only immediate survivor.

Mr. Duncan covered the Republican and Democratic National Conventions for NBC News in 1968. He was just back from Vietnam, and what might have been a hiatus from combat turned violent in Chicago, where National Guardsmen with rifles and police officers with nightsticks and tear gas clashed with antiwar demonstrators outside the convention hall where Democrats were meeting. His photographs showed helmeted troops on Michigan Avenue, protesters with gashed and bleeding heads, and a sobbing girl who pleaded with him, “Please, tell it like it was.” The grim scenes were published in his 1969 book, “Self-Portrait: U.S.A.”

Mr. Duncan’s archives — including thousands of combat photographs, works on Picasso and others for “The Kremlin” (1960), “Sunflowers for Van Gogh” (1986) and other books — were acquired in 1996 by the Harry Ransom Humanities Research Center at the University of Texas in Austin.

He went to war with only essential equipment: helmet, poncho, spoon, toothbrush, compass, soap and backpack containing two canteens, an exposure meter, film and two cameras. He used a Rolleiflex in World War II, but preferred a 35-millimeter. He took two Leica IIIc cameras into Korea, and said they stood up well in the rain and mud. He often used 50-millimeter f/2 and 135-millimeter f/3.5 Nikkor lenses.

A 1972 exhibit of his war photos at the Whitney Museum of American Art in New York was hailed in The New York Times. “Again and again,” the photography columnist Gene Thornton said of Mr. Duncan, “he approaches and crosses the line that divides the journalist’s interest in the here and now from the artist’s concern with the timeless and universal.”



© 2018 The New York Times Company
https://www.nytimes.com/2018/06/07/...column-region&region=top-news&WT.nav=top-news

http://www.hrc.utexas.edu/exhibitions/web/ddd/gallery/

 
Quantum Dot lighting.

Quantum dot white LEDs achieve record efficiency

July 12, 2018, Optical Society of America


https://phys.org/news/2018-07-quantum-dot-white-efficiency.html#jCp

"Efficient LEDs have strong potential for saving energy and protecting the environment," said research leader Sedat Nizamoglu, Koç University, Turkey. "Replacing conventional lighting sources with LEDs with an efficiency of 200 lumens per watt would decrease the global electricity consumed for lighting by more than half. That reduction is equal to the electricity created by 230 typical 500-megawatt coal plants and would reduce greenhouse gas emissions by 200 million tons."

The researchers describe how they created the high-efficiency white LEDs in Optica, The Optical Society's journal for high impact research. The new LEDs use commercially available blue LEDs combined with flexible lenses filled with a solution of nano-sized semiconductor particles called quantum dots. Light from the blue LED causes the quantum dots to emit green and red, which combines with the blue emission to create white light.

"Our new LEDs reached a higher efficiency level than other quantum dot-based white LEDs," said Nizamoglu. "The synthesis and fabrication methods for making the quantum dots and the new LEDs are easy, inexpensive and applicable for mass production."

Advantages of quantum dots

To create white light with today's LEDs, blue and yellow light are combined by adding a yellowish phosphor-based coating to blue LEDs. Because phosphors have a broad emission range, from blue to red, it is difficult to finely tune the properties of the generated white light.

Unlike phosphors, quantum dots generate pure colors because they emit only in a narrow portion of the spectrum. This narrow emission makes it possible to create high-quality white light with precise color temperatures and optical properties by combining quantum dots that generate different colors with a blue LED. Quantum dots also bring the advantage of being easy to make, and the color of their emission can be easily tuned by changing the size of the semiconductor particle. Moreover, quantum dots can be used to generate warm white light sources like incandescent light bulbs or cool white sources like typical fluorescent lamps by changing the concentration of incorporated quantum dots.

More at the link above.
 
Re: Quantum Dot lighting

Re: Quantum Dot lighting

Very interesting.

More efficient LEDs will be good for everyone.

By the way InVisage is developing an imaging sensor based on quantum dots. I have no idea if this technology will ever become important for still-photography.
 
Lawson Little Portraits From Key West FL

Lawson Little Portraits From Key West FL

Here's a short piece with portraits of artists and writers who resided in Key West during the 1970s.

I like the photographer's style.
 
Great thread. Thanks for all the info and links.

I had no knowledge of Lawson Little and those wonderful portraits taken in 70s Key West. Man that place has sure changed.
 
IPPA IPhone Awards 2018

If you look through this site you'll find a gallery of previous awards. While I don't like all of the photos, there are some real gems, done with a camera phone. The more people who explore the visual world of still imagery, the more their heightened awareness of quality photography benefits us all.

This concept has always stayed with me: "When we die, if we're lucky, we'll have a few images to take with us." The images remembered are Still images, not motion segments. Test your memory for things that are important. I'll bet the images are still images of family, important personal events, etc. It's the thing that's kept me making still photos, in a new world of easily made motion pictures.

https://www.ippawards.com/
 
Andre Kertesz in Paris

https://www.youtube.com/watch?v=zCr1r4boxdU

"Among his many honors and awards were a Guggenheim Fellowship and admission to the French Legion of Honor.
Kertész's work had widespread and diverse effects on many photographers, including Henri Cartier-Bresson, Robert Capa, and Brassaï, who counted him as a mentor during the late 1920s and early 1930s. His personal work in the 1960s and 1970s inspired countless other contemporary photographers. Kertész combined a photojournalistic interest in movement and gesture with a formalist concern for abstract shapes; hence his work has historical significance in all areas of postwar photography."


https://www.icp.org/browse/archive/constituents/andré-kertész?all/all/all/all/0

http://www.artnet.com/artists/andré-kertész/


http://www.phaidon.com/agenda/photo...melancholy-life-of-the-amazing-andre-kertesz/

 
Note: I generally don't archive technical articles, but I found this one good and important. My tech-photo skills/knowledge aren't the best. So, Willie, if you would read and comment on this lengthy piece, it would benefit all. Willie, I believe, is the most tech savvy of the CW family.

The graphics used in this piece are quite good. I encourage visiting the site. The link is at the bottom of part 3.
pkr

8, 12, 14 vs 16-Bit Depth: What Do You Really Need?!

By Greg Benz

petapixel.com

PART 1 OF 3

“Bit depth” is one of those terms we’ve all run into, but very few photographers truly understand. Photoshop offers 8, 16, and 32-bit file formats. Sometimes we see files referred to as being 24 or 48-bit. And our cameras often offer 12 vs 14-bit files (though you might get 16-bit with a medium format camera). What does it all mean, and what really matters?
What is bit depth?
Before we compare the various options, let's first discuss what the naming means. A “bit” is a computer’s way of storing information as a 1 or 0. A single bit isn’t really good for anything beyond “yes” or “no” because it can only have 2 values. If it were a pixel, it would be pure black or pure white. Not very useful.
To describe something complex, we can combine multiple bits. Every time we add another bit, the number of potential combinations doubles. A single bit has 2 possible values, 0 or 1. When you combine 2 bits, then you can have four possible values (00, 01, 10, and 11). When you combine 3 bits, you can have eight possible values (000, 001, 010, 011, 100, 101, 110, and 111). And so on. In general, the number of possible choices is 2 raised to the number of bits. So “8-bit” = 2^8 = 256 possible integer values. In Photoshop, this is represented as integers 0-255 (internally, this is binary 00000000-11111111 to the computer).
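The doubling rule is easy to verify with a few lines of plain Python:

```python
# Each added bit doubles the number of representable values: 2 ** bits
for bits in (1, 2, 3, 8):
    print(f"{bits}-bit -> {2 ** bits} possible values")
# 1-bit -> 2, 2-bit -> 4, 3-bit -> 8, 8-bit -> 256 (integers 0-255)
```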
So “bit-depth” determines the smallest changes you can make, relative to some range of values. If our scale is brightness from pure black to pure white, then the 4 values we get from a 2-bit number would include: black, dark midtones, light midtones, and white. This is a pretty lumpy scale and not very useful for a photograph. But if we have enough bits, we have enough gray values to make what appears to be a perfectly smooth gradient from black to white.

Here’s an example comparing a black to white gradient at different bit depths. The embedded image here is just an example; click here to see the full resolution image in the JPEG2000 format with bit depths up to 14-bits. Depending on the quality of your monitor, you can probably only display differences up to 8-10 bits.

How is bit-depth defined?
It would be convenient if all “bit-depths” could be compared directly, but there are some variations in terminology that are helpful to understand.
Note that the image above is a black and white image. A color image is typically composed of red, green, and blue pixels to create color. Each of these colors is handled by your computer and monitor as a “channel”. Photography software (such as Photoshop and Lightroom) refers to the number of bits per channel. So 8-bits means 8-bits per channel. Which means that an 8-bit RGB image in Photoshop will have a total of 24-bits per pixel (8 for red, 8 for green, and 8 for blue). A 16-bit RGB or LAB image in Photoshop would have 48-bits per pixel, etc.
You would assume that this then means 16-bits means 16-bits per channel in Photoshop. Well, it does and it doesn’t. Photoshop does actually use 16-bits per channel. However, it treats the 16th bit differently – it is simply added to the value created from the first 15 bits. This is sometimes called 15+1 bits. This means that instead of 2^16 possible values (which would be 65,536 possible values) there are only 2^15 + 1 possible values (which is 32,768 + 1 = 32,769 possible values).

So from a quality perspective, it would be very fair to say that Adobe’s 16-bit mode is actually only 15-bits. Don’t believe me? Look at the 16-bit scale for the Info panel in Photoshop, which shows a scale of 0-32,768 (which means 32,769 values since we are including 0). Why does Adobe do this? According to Adobe developer Chris Cox, this allows Photoshop to work much more quickly and provides an exact midpoint for the range, which is helpful for blend modes. Should you worry about this “loss” of 1 bit? No, not at all (15-bits is plenty, as we’ll discuss below).
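The value counts in that paragraph, worked out explicitly in plain Python:

```python
# Photoshop's "15+1 bit" mode vs a true 16-bit integer range
true_16_bit = 2 ** 16               # 65,536 values
adobe_16_bit = 2 ** 15 + 1          # 32,769 values: 0-32768 on the Info panel
midpoint = (adobe_16_bit - 1) // 2  # 16,384: an exact midpoint, handy for blends
print(true_16_bit, adobe_16_bit, midpoint)  # 65536 32769 16384
```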
Most cameras will let you save files in 8-bits (JPG) or 12 to 16-bits (RAW). So why doesn’t Photoshop open a 12 or 14-bit RAW file as 12 or 14 bits? For one, it would be a lot of work to develop both Photoshop and file formats to support other bit depths. And opening a 12-bit file as 16-bits is really no different than opening an 8-bit JPG and then converting to 16-bits. There is no immediate visual difference. But most importantly, there are huge benefits to using a file format with a few extra bits (as we’ll discuss later).
For displays, the terminology changes. Monitor vendors want to make their equipment sound sexy, so they typically refer to displays with 8-bits/channel as “24-bit” (because you have 3 channels with 8-bits each, which can be used to create roughly 16.7 million colors). In other words, “24-bits” (aka “True Color”) for a monitor isn’t super impressive, it actually means the same thing as 8-bits for Photoshop. A better option would be “30-48 bits” (aka “Deep Color”), which is 10-16 bits/channel, with anything over 10 bits/channel being overkill for display in my opinion.
For the rest of this article, I’ll be referencing bits/channel (the camera/Photoshop terminology).
How many bits can you see?
With a clean gradient (ie, worst case conditions), I can personally detect banding in a 9-bit gradient (which is 512 shades of gray) on both my 2018 MacBook Pro Retina display and my 10-bit Eizo monitor. A 9-bit gradient is extremely faint (barely detectable) on both displays. I would almost certainly miss it if I weren’t looking for it. And even when I am looking for it, I cannot easily tell exactly where the edges are in comparison to a 10-bit gradient. I’d almost say there is no banding at 9-bits. An 8-bit gradient is relatively easy to see when looking for it, though I might still potentially miss it if I weren’t paying attention. So, for my purposes, a 10-bit gradient is visually identical to 14-bits or more.

If we significantly brighten the shadows or darken the highlights, then we are expanding some portion of the range. We are making any minor errors or rounding error in the data more obvious. In other words, increasing the contrast in the image is like decreasing bit depth. If we manipulate the photograph enough, this will start to show up as “banding” in the image. Banding is obvious/discrete jumps from one color or tone to the next (instead of a smooth gradient). You’ve already seen a theoretical example with the low-bit gradients above. A typical real-world example would be various “bands” showing up in the clear blue sky or excess noise.
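The point that stretching the tonal range effectively throws away bits can be simulated directly: quantize a smooth ramp to 8 bits, stretch its darkest quarter across the full range (a 2-stop shadow push), and count the distinct tones that survive. A minimal pure-Python sketch with illustrative values:

```python
def quantize(values, bits):
    """Round each value in [0.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in values]

ramp = [i / 9999 for i in range(10000)]   # smooth 0.0-1.0 gradient
shadows = quantize(ramp, 8)[:2500]        # darkest quarter, captured at 8 bits

# "Brighten the shadows" by 2 stops: stretch that quarter to the full range
stretched = [min(v * 4, 1.0) for v in shadows]

print(len(set(stretched)))  # 65: far fewer than the 256 tones 8-bit can hold
```

Only about a quarter of the available levels remain after the push, which is exactly the discrete-jump banding the article describes.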
Why do 8-bit images look the same as 16-bit images?
If you convert a single layer 16-bit image to 8-bits, you will see something that looks exactly like the 16-bit image you started with. So why bother with 16-bits? As you apply Curves or other adjustments, you are expanding the tonal range of various parts of the image. This can start to make small gaps between values turn into large gaps. So even though the difference may not be initially visible, it can become a serious issue later as you edit the image.

So how many bits do you really need in camera?
A 4-stop change in exposure is on the order of losing a little over 4 bits. A 3-stop change in exposure is closer to only losing 2 bits. I rarely would adjust RAW exposure out to +/-4 stops, but it can happen with extreme situations or portions of poor exposures. So I’d suggest an extra 4-5 bits over the limits of visible banding to be safe. Adding that margin of safety on top of a goal of at least 9-10 bits to avoid visible banding gets you to roughly 14-15 bits as an ideal target.
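The stops-to-bits bookkeeping can be made concrete under the simplifying assumption of linear raw values (a sketch, not a model of any specific camera):

```python
# A +4 stop exposure push multiplies linear values by 2**4 = 16, so tones
# from the bottom 1/16 of the file must fill the entire output range.
raw_bits = 12
push_stops = 4

levels_available = 2 ** raw_bits // 2 ** push_stops  # 256 levels
effective_bits = raw_bits - push_stops               # behaves like 8-bit data
print(levels_available, effective_bits)  # 256 8
```

A 12-bit file pushed 4 stops is left with roughly 8-bit shadow data, which is why the article targets a few bits of headroom above the visible-banding threshold.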
In reality, you will probably never need that many bits for several reasons:
There aren’t that many situations where you would encounter a perfect gradient. Clear blue skies are probably the most likely. Anything else with detail makes it MUCH harder to see the difference in bit depth.
Color offers more bit-depth. My discussion here is limited to a single black and white channel. If you process for fine art black and white, then these numbers apply directly to you. But if you process in color, you probably have a little more wiggle room.
Your camera’s accuracy is not as high as its precision. In other words, there is noise in your image. This noise typically makes banding a little harder to see at a given bit-depth (ie, real-world images don’t typically show banding quite as easily as the smooth gradients I’ve used above.)
You can remove banding in post-processing using a combination of Gaussian blur and/or adding noise. Of course, you’ll need to be on the lookout for it so that it doesn’t sneak into a print.
The extra bits mostly only matter for extreme tonal corrections.
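The banding-removal point above relies on dither: noise breaks hard quantization steps into many intermediate tones. A pure-Python sketch using the standard `random` module (the gradient and noise level are made-up illustrative values):

```python
import random

random.seed(0)  # deterministic for the example

# A badly banded gradient: only 4 distinct tones across 1000 samples
banded = [round(i / 999 * 3) / 3 for i in range(1000)]

def quantize8(v):
    """Round a value in [0.0, 1.0] to the nearest 8-bit level."""
    return round(v * 255) / 255

# Add a little noise, then re-quantize to 8 bits: the hard steps
# dissolve into many intermediate tones (classic dithering)
dithered = [quantize8(min(max(v + random.gauss(0, 0.01), 0.0), 1.0))
            for v in banded]

print(len(set(banded)), len(set(dithered)))  # 4 tones vs dozens
```

A slight Gaussian blur works toward the same end from the other direction, smearing the step edges before requantization.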

Taking all of this into consideration, 12-bits sounds like a very reasonable level of detail that should allow for significant post-processing. However, cameras and the human eye respond differently to light. The human eye is more sensitive to shadows, and a “logarithmic” curve is applied to the RAW sensor data (not to TIF or other files after RAW conversion). The result is that the bits used for shadows are lower quality (see this DPReview article for an in-depth discussion of the topic). So there may be value in capturing extra bits of detail depending on your needs and camera.
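The "lower quality shadow bits" point follows from how a linear encoding distributes levels: each stop down contains half as many raw values as the one above it. A sketch assuming a purely linear 14-bit sensor encoding:

```python
# In a linear raw file, each stop down holds half as many levels,
# so shadow stops are encoded far more coarsely than highlights.
raw_bits = 14
total = 2 ** raw_bits  # 16,384 levels in a 14-bit file

for stop in range(6):
    levels = total // 2 ** (stop + 1)
    print(f"stop -{stop}: {levels:5d} raw levels")
# The brightest stop gets 8,192 levels; five stops down gets only 256,
# which is why hard shadow pushes expose posterization and color casts.
```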
To test the limits for my Nikon D850, I shot a series of exposures bracketed at 1 stop intervals using both 12 and 14-bit RAW capture with my D850 at base ISO under controlled lighting. My test scene included a gray card to help precisely evaluate white balance. I then processed the images in Lightroom (LR) using exposure and white balance adjustments.

I do not see notable differences in noise, but there are huge differences in color cast in deep shadows (with the 12-bit file shifting a bit yellow and quite a bit green) and some minor differences in shadow contrast (with the 12-bit file being a little too contrasty). The color cast starts at about 3 stops of underexposure (-3ev), is much more apparent at -4ev, and is a serious issue at -5 and -6. Results from other cameras are likely to vary, and the differences are ISO-dependent – so you should test with your own camera.
Furthermore, RAW processing software matters, so I also tried processing the same images in Capture One (testing auto, Film Standard, and Linear curves for the D850). LR and CO aren’t directly comparable since you can’t do more than 5 stops of exposure adjustment in LR or more than 4 stops of exposure adjustment in CO. So I set both to +4 exposure and then adjusted the RAW curve to bring in the white point to 50%.

What I found surprised me: the Capture One (CO) results fell off much more quickly in the deep shadows. CO was not as good as LR at -5ev and nearly unusable at -6ev, while the LR result was surprisingly usable at -6ev.
Ultimately, I find that at ISO 64 with a Nikon D850:
12-bit files can be pushed 3-4 stops in LR or CO
14-bit files can be pushed 5-6 stops in LR or 4-5 stops in CO
As I generally try to avoid more than 3 stops of shadow recovery due to noise, the color cast due to 12-bit files is rarely going to be an issue in my work. 12-bits is definitely a reasonable choice. That said, I care much more about quality than file size, so I just shoot at 14-bits all the time. This gives me more latitude to deal with extreme scenes or work with files that I may accidentally underexpose.
Below are some extreme examples from my testing. First is the original 14-bit RAW, which is about 6.5 stops underexposed. It probably looks pure black to you, but if you look closely, you’ll see there’s some detail. Clearly, this is massively underexposed throughout the image and about as extreme an example as you could ever imagine. I’m not posting the 12-bit original RAW as it looks the same before processing.

 