M Monochrom: is "significantly higher resolution" junk science?

Dante_Stella

Are popular discussions of the M Monochrom technically misinformed? I think that it is fairly uncontroversial that losing the Bayer pattern filter will do several things:

1. Increase light sensitivity by 2-4x by removing the array (hence ISO 10,000 over 2,500); according to Kodak technical materials related to its RGBW patent, the efficiency of a typical Bayer sensor is about 1/3. (A rough conversion to stops and ISO follows this list.)

2. Eliminate the blue-channel-noise-in-incandescent-light problem by eliminating the blue filter.

3. Allow the use of standard contrast filters originally designed for black and white film (inasmuch as the M-M spectral response is similar to known films).
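A rough back-of-the-envelope for item 1, in Python; the 2-4x range is the figure above, and treating usable ISO as scaling directly with the light gathered is my own simplification.

import math

baseline_iso = 2500                      # the M9's practical ceiling, as above
for gain in (2, 3, 4):                   # plausible light-gathering gain from dropping the CFA
    stops = math.log2(gain)              # each doubling of light is one stop
    print(f"{gain}x gain = {stops:.1f} stops -> usable ISO around {baseline_iso * gain}")
# 4x gain = 2.0 stops -> usable ISO around 10000, which lines up with ISO 10,000 vs 2,500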

I am frankly suspicious of any claim that a monochrome sensor has twice the resolution of a Bayer one (or anything near that big a difference). I don't think anyone is going to try to argue against how resolution is measured: so many cycles at so much contrast. But my suspicions are based on the following:

1. I have not seen any organized literature that chalks up differences in resolving power to different demosaicing algorithms. The discussion is primarily of aliasing and artifacts, which are problems that occur when adjacent colors in an image interact badly with color reconstruction. Maybe I have not read enough patents and white papers. Does anything actually discuss resolving power?

2. If Bayer decoding led to any significant loss in resolving power on the sensor, we would have seen a test by now comparing aerial resolution of lenses, the theoretical resolution of a sensor, and the resulting output.

3. But Dr. Evil, that has come to pass! According to Erwin Puts' tests, even including a lens (always a drag on an optical system), the M9 resolves 60 lp/mm, about 82% of its Nyquist limit of 73 lp/mm. Because you have the same sensor in the M-M with the same pixel density, the limit is the same. Even if the M-M reached its Nyquist limit, that would still be only about 20% better than the color model.

3a. Even absent the above, it is well documented (and formulaic) that doubling the resolution of one component of an optical system would not double the end resolution.

4. In the interpolation fury, there seems to be a considerable amount of confusion between an on-chip pixel and the resulting effective photosite. A simple way to understand a photosite is as a virtual pixel that overlaps a 2x2 R-G-G-B square of photodiodes. Except at the edges, the diodes themselves belong to more than one photosite (hence the two pixel counts - actual and effective - you see in specifications). Real interpolation neighborhoods, depending on the algorithm used, can draw on many more than four photodiodes.

5. The real math is a lot more complex - and getting ticked off about "interpolation" by thinking of averaging is simplistic. It's not averaging, and the mathematical model can be very accurate. As a result, interpolation does not necessarily mean that the interpolated result is bad. At a very basic level, if you have 5, then an unknown value, then 10, using the mathematical mean for the middle value is exactly right if these are points on a straight line. Even if they are not, formulae can still approximate the correct values very closely (witness JPEG compression, which reconstructs smooth gradients from a handful of coefficients). If your method of interpolation is appropriate to the task, the errors are quite small. And cameras use multiple algorithms simultaneously. The upshot: the M9's color sensor and processing, measured through a real lens focused perfectly, delivers 82% of what a theoretically perfect sensor of its size could. Not so bad.

6. Resolving power is based on the ability to detect changes from photosite to photosite. On a mono sensor, the actual photosites and the virtual ones are perfectly aligned. On a color sensor, they do not have that kind of relationship, and it seems at least possible that a color sensor can catch transitions a mono sensor would miss, since two objects of the same brightness (or responding similarly on the camera's curve) but different colors register as the same tone in black and white (a small sketch of this follows the list).

7. Which brings us to contrast filters. One of the reasons the sample pictures posted on various blogs look so bad is poor [color] contrast control (poor control of scene lighting, overall exposure, and composition are also at play). People who grew up in the old days know that panchromatic film can lead to very blah results (particularly skin tones in artificial light). You basically get one shot when shooting on b/w film or a b/w sensor: unless you have an intuitive idea of what you are trying to do with color accentuation and suppression, you're going to get stuck. And unless you are using APO lenses, some of those contrast control tools - especially orange, deep orange, and red filters - will put a serious hurt on sharpness on a thin, mono-sensitive surface. I would speculate that a color array, channel-mixed later, gives you better sharpness, since a non-APO lens can get at most two of the three colors (red, green, blue) focused at the same place at the same time. Simulating filters that way may not cut haze and may add noise, though.
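To make points 6 and 7 concrete, here is a tiny Python sketch (the patch values and mixer weights are made up, and the luma weights are the common Rec. 601 ones - nothing here is Leica's actual processing): two patches of equal luminance but different color leave no edge for a mono sensor to find, while a color file, channel-mixed afterwards, can still pull them apart.

# Two patches with clearly different color but (nearly) identical luminance.
patch_a = (200, 100, 50)     # orange-ish (R, G, B) - values made up for illustration
patch_b = (63, 164, 80)      # green-ish, hand-picked to match patch_a's luma

def luma(rgb, weights=(0.299, 0.587, 0.114)):        # common Rec. 601 weights
    return sum(c * w for c, w in zip(rgb, weights))

def orange_mix(rgb, weights=(0.78, 0.22, 0.00)):     # crude "orange filter" channel mix
    return sum(c * w for c, w in zip(rgb, weights))

print(luma(patch_a), luma(patch_b))              # ~124 vs ~124: a mono capture records no edge here
print(orange_mix(patch_a), orange_mix(patch_b))  # ~178 vs ~85: the color file still separates them

With the M-M, the equivalent choice has to be made with a glass filter before the exposure.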

None of this is to say the M-M is a bad camera; the sensitivity and low noise make it compelling as a concept. But the idea that it's somehow massively better in resolving power than the M9 just doesn't seem to wash.

Granted, I am not an optical engineer - and my last training in combinatorics and matrix theory was contemporaneous with the QuickTake 100. But I have this nagging feeling that the people spouting conventional wisdom about the M-M are confusing Foveon propaganda about Bayer sensors with misinformation from videography discussion forums.

Thoughts?

Dante
 
Thoughts?

The Bayer filter is equivalent to what happens when you downsize an image: it interpolates one pixel from several of its surrounding pixels. The problem is that it can't ever get it right; it's always an approximation. So with a mono sensor, where this interpolation doesn't take place, the accuracy of each resulting pixel will be higher. That equates to higher resolving power, because each pixel is of a higher quality.
Note the difference between sensor resolution, i.e. the pixel array's x, y dimensions, and resolving power, which is its ability to reproduce detail. They are not the same thing.
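For what it's worth, a toy version of that interpolation step in Python (numbers made up): on a smooth ramp the guess is exact, on a hard edge it is not, and that is where the real losses hide.

import numpy as np

# One row of "true" scene luminance, and the subset one Bayer colour channel actually samples.
truth = np.array([10, 12, 14, 16, 18, 20, 22, 24, 26], dtype=float)   # a smooth ramp
sampled = truth.copy()
sampled[1::2] = np.nan                      # positions this channel never sees

# Fill the gaps by averaging the two neighbours - the crudest possible "demosaic".
filled = sampled.copy()
for i in range(1, len(filled) - 1, 2):
    filled[i] = (sampled[i - 1] + sampled[i + 1]) / 2

print(np.max(np.abs(filled - truth)))       # 0.0 - on a smooth ramp the guess is exact
# Swap the ramp for a hard edge (say ...16, 18, 60, 62...) and the same averaging misses
# badly; real demosaicers are edge-aware precisely to keep that error small.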
But this is all nothing to do with what most will see as the problem, which is that the resulting tonal distribution is different from what film produces, and that is what a lot of film users want. The subsequent processing in Silver Efex (SEF) is a destructive process to arrive at a desired result. So what's the point of a B+W sensor if you can get the equal from a colour sensor with SEF processing?
Until someone does a direct comparison between an M9 image converted to B+W using SEF and an M Monochrom image processed through SEF to the same film type, we won't really know if the M Monochrom is any better. I suspect any difference will be imperceptible in a print.
 
Boost in resolution should be a real thing, but twice as much is probably too much. Most numbers I've read put it around 1.3 times the resolution. It's probably hard to assign a fixed number to the resolution increase because it depends on the color patterns in the image, how they match up with the individual sensels, and the demosaicing functions. But the fact of the matter is that with a monochrome sensor, you don't have to interpolate anything from the Bayer data. In a Bayer image of, say, 1000x1000 pixels, there aren't 1000x1000 G samples - only half that many - and R and B each account for only a quarter. That halving of the G pixel density is probably where people are getting 'twice' the resolution from. However, the debayering math is pretty sophisticated and interpolates the missing data from neighboring pixels nicely. And you *do* have data from locations other than just the G pixels. So it stands to reason that your resolution reduction is less than two. But avoiding that step and actually sampling luminance at every single sensel location *will* increase resolution. Just not by a factor of 2.
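Counting it out for a hypothetical 1000x1000 array (just the arithmetic behind the paragraph above, not any particular demosaicer):

width, height = 1000, 1000
total = width * height                 # 1,000,000 sensels

green = total // 2                     # 2 of every 4 in an RGGB block
red = blue = total // 4                # 1 of every 4 each
print(green, red, blue)                # 500000 250000 250000

# Halving the green sample density costs roughly a factor of 2 ** 0.5 (about 1.41) in linear
# resolution, not 2 - and real demosaicers also use the R and B samples, which is why
# figures like 1.2-1.4x are more plausible than "twice".
print(2 ** 0.5)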
 
Sorry to barge in, just one question: how will the sensor without a Bayer pattern filter be "panchromatic"? As far as I understood, conversion of digital images to grey scale was done by analogy to how panchromatic film would render a given colour. How will it work now?
 
Sorry to barge in, just one question: how will the sensor without a Bayer pattern filter be "panchromatic"? As far as I understood, conversion of digital images to grey scale was done by analogy to how panchromatic film would render a given colour. How will it work now?

Now it will be based on the intrinsic spectral sensitivity of the sensor, any overlying coverglass, and the lens configuration and coatings. Nearly all scientific CCDs and CMOS devices are monochrome. Depending on the application they may be optimized to deliver different spectral sensitivity curves. Here are some examples.
 
By the way, great post, Dante, and superior discussion from others, too. An RFF tech discussion at its best.
 
Good post Dante. I've not got much time now (off to develop a roll of film and take the children swimming), but a quick couple of thoughts.

1. I agree that Leica are overstating the increase in resolution when they suggest double.
  • The doubling claim implies that Bayer demosaicing only takes luminance data from the green-sensitive photosites - I believe that is not the case
  • Erwin's test is pretty impressive, but the results are very dependent on what goes in. The M9 also produces lots of local moire and artifacts in real-world use, depending on what you are photographing, which can confuse resolution testing
  • I understand that effective Bayer demosaicing can resolve around 75% of the total photosite count on a Bayer sensor
  • Leica seem to be relying on most people confusing pixel count with linear resolution - doubling pixel count only increases linear resolution by 41% (ish); see the one-line calculation after this list
  • Demosaicing fills in the gaps with interpolated numbers - these (obviously, based on experience) can look pretty good anyway, even if they are not known to be accurate
  • Good lenses at mid apertures are not that big a drag on the resolution of 35mm equivalent sensors, though they do have an impact
  • Overall, I would expect the MM to resolve a bit more than the M9, but not to deliver twice the resolution
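The one-line calculation mentioned above (plain arithmetic, nothing camera-specific):

print(2 ** 0.5)    # ~1.41: doubling the pixel count buys only ~41% more linear resolution
print(2 ** 2)      # 4: doubling linear resolution would need four times the pixels
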
Whilst colour moire is an issue with non-AA Bayer-filtered sensors, a mono sensor can also produce moire and sampling artifacts. I'm sure these will appear at some point once the initial gushing has finished. Whether they will be an issue remains to be seen.


I think tonality will be an interesting question. I read that Leica suggests adding a light yellow filter to mimic the spectral sensitivity of panchromatic film (which varies from film to film anyway). That suggests increased sensitivity towards the blue end of the spectrum, and might account for some of the images seen so far. One of my concerns is that by requiring the use of Silver Efex, the decisions on look are still left until after exposure, which may be important - and either positive or negative depending on the photographer's outlook. It seems possible, though, that if you want more post-exposure control, you might be better off with an M9 or D800E and doing the conversion from the colour data.


Best


Mike
 
Maybe a comparison of Fourier-transform plots in the frequency domain of the data from both sensors, recorded from a test pattern, would give the answer? I would imagine that Leica did this during the development phase and drew their conclusions.
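If anyone wants to try it, here is a bare-bones Python version of that idea (the row number and the variable names for the two captures are placeholders, and a proper test would use something like a slanted-edge target rather than a single row):

import numpy as np

def row_spectrum(gray_image, row):
    """Magnitude spectrum of one image row; more energy left at high spatial
    frequencies means more fine detail survived the capture chain."""
    line = np.asarray(gray_image, dtype=float)[row]
    line -= line.mean()                    # drop the DC component
    return np.abs(np.fft.rfft(line))

# Hypothetical use, with both captures loaded as 2-D grayscale arrays:
#   spec_m9 = row_spectrum(m9_gray, 1200)
#   spec_mm = row_spectrum(mm_gray, 1200)
# Plotting spec_mm / spec_m9 against frequency would show where (and by how much)
# the Monochrom file actually holds extra detail.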
 
Thoughts?

The Bayer filter is equivalent to what happens when you downsize an image: it interpolates one pixel from several of its surrounding pixels. The problem is that it can't ever get it right; it's always an approximation.

But the interpolation due to the Bayer filter is for the pixel colour, not the luminosity; presumably you're seeing more or less the same content as without the filter, just sans colour.
 
The story is different; there is nothing added, there are simply fewer losses.

Bayer-filtered sensors lose resolution in two ways: first to the filtering array, and secondly to the interpolation. That amounts to about 30% and is a well-known phenomenon. On top of that, there are moire-like errors introduced by the filter itself that cause further resolution loss. Removing those losses results in the MM sensor giving a true 18 MP file.
To get a true 18 MP file from a Bayer-filtered sensor you need to start out from a 24-34 MP base sensor. So it is not resolution added, but less resolution lost.
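The rough arithmetic behind a figure like that, assuming the ~25-30% is meant as a loss of linear resolution (my reading; counted as a loss of pixel count instead, the numbers come out lower):

true_mp = 18.0
for linear_loss in (0.25, 0.30):
    needed = true_mp / (1 - linear_loss) ** 2    # pixel count scales with the square of linear resolution
    print(f"{linear_loss:.0%} linear loss -> roughly {needed:.0f} MP Bayer sensor needed")
# 25% -> ~32 MP, 30% -> ~37 MP; read as a pixel-count loss instead, 18 / 0.7 is only ~26 MP,
# which is roughly where a 24-34 MP range comes from, depending on how the loss is counted.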
 
It cannot really be shown on the Internet, but this M Monochrom shot I took Thursday on a pre-production camera is mind-blowing in detail, despite being taken in grotty light. The post-processing was done simply on a MacBook Air with the Photoshop app, hitting the auto buttons.




[Attached image: L9033989.jpg]
 
This one shows the highlight situation (and reminds me why I never upload photographs to this forum - they must be so small as to be practically worthless). I should have exposed more for the highlights and pulled up the shadows, which hold a wealth of detail and no noise to speak of (at ISO 5000!!!).



[Attached image: L9033986.jpg]
 
It cannot really be shown on the Internet, but this M Monochrom shot I took Thursday on a pre-production camera is mind-blowing in detail, despite being taken in grotty light. The post-processing was done simply on a MacBook Air with the Photoshop app, hitting the auto buttons.


Just upload the full file somewhere. If something can't be shown, it's just as likely that it simply isn't there.
 
I’ll be back from Berlin in a few days. I’ll put the DNGs in Dropbox then.

Yeah, or just post a couple of crops in the meantime - looking at the detail on screen in a particular area should be enough.
 
Sorry, mate, I have just a MacBook Air 11” and an urge to go out and explore Berlin...;)
 
Having no idea what this "significantly higher resolution" actually means, I will try to comment in very simple terms:

Anything standing in the path of the light reaching the "wells" on the sensor, including transparent elements, causes a loss in resolution as well as in the sensitivity (quantum efficiency) of the sensor as a whole. An IR filter, a Color Filter Array, an antialiasing filter, a wave plate or any optical glass means a certain loss of resolution as well as a certain drop in sensitivity (for most of us, the number of photons collected in the "well" in a given time).

The red, green and blue color filters cause a substantial drop in the intensity of the light passing through them - each passes something like a third of it, going by the Kodak figure quoted earlier - which in conventional terms costs roughly one to two stops of ISO sensitivity. (Trick here: green sits around the middle of the visible spectrum, so our eyes see more sharply in green than in the "side" colors like red and blue. That is why Dr. Bayer employed two green photosites with one blue and one red in his pattern; never mind luminance or chrominance sensitivities.)
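Since percentages and stops get mixed up easily, here is a quick conversion (the transmission values are placeholders, not measured CFA data):

import math

for transmission in (0.70, 0.50, 0.33):     # fraction of light a filter passes (placeholder values)
    stops_lost = math.log2(1 / transmission)
    print(f"{transmission:.0%} transmitted -> {stops_lost:.1f} stops lost")
# 70% -> 0.5 stop, 50% -> 1.0 stop, 33% -> 1.6 stops; a full 4-stop loss would mean
# only about 6% of the light getting through.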

Assuming we employ the usual demosaicing algorithms, the elimination of the antialiasing filter is generally credited with improving the resolving power of a sensor by around 10%. This is the increase in actual resolving power; increases in contrast, crispness, acuity, etc. are different subjects. As you note, the use of the AA filter can be likened to closing a lens down further once diffraction starts to appear.

The elimination of the CFA, while helping the sensitivity to rise considerably, can also help the resolving power of the sensor reach its "native" value. What we are doing here is really nothing more than trying to retrieve the original resolving power and the original sensitivity, by removing the losses imposed by what we placed on the sensor to get color pictures and to stay away from moire.

How far can the elimination of the CFA help a sensor's resolution? It's usually around 25%, depending on the algorithm used. To get an idea, first check the sample comparisons between the D800 and D800E; they are about 10% apart from each other due to the elimination of the AA filter (a difference some Nikon users admit they cannot see, or can compensate for with just the post-processing sharpness slider). With crude calculations one can say that the resolution of the M9M could be equivalent to that of a 28MP color sensor.
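The 28MP figure is just that ~25% linear gain squared and applied to the pixel count; as a crude sanity check:

base_mp = 18.0        # M9 / M Monochrom sensor
cfa_gain = 1.25       # ~25% more linear resolution without the CFA, per the estimate above
print(base_mp * cfa_gain ** 2)    # ~28.1 - the "equivalent of a 28MP color sensor"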

The following chart illustrates a typical example of the same sensor's resolving power, with and without the CFA.



[Chart image: 7181416656_c6b4a09eed_b.jpg]
 
Does anyone remember the endless debate when digital was young about the negligible effect of using superior lenses, because sensors could not tell the difference? Engineers of every stripe assured us that lens quality was not a factor. Well, all I had to do to see the effect was mount a Leica lens on my first Canon 5D and see a world of difference with some focal lengths, like the 180mm Apo-Telyt f/3.4. Wait and see what happens in actual use. Theoretical science can only predict what might happen; the proof will be in the images.
 
(Side note: eliminating the "accessories" on an 18MP sensor will not turn it into a 28MP one; do not regard my calculations as a reference for actual enlargements either! A 5212 x 3472 sensor never becomes even a 19MP sensor. Do not assume that pictures from the M9M can be enlarged to 28MP sizes with the same "qualities" as the latter.)
 