What the imager has

Started Feb 11, 2014 | Discussions
Truman Prevatt
Veteran Member • Posts: 4,414
Re: What the imager has
In reply to Laurence Matson, Feb 12, 2014

One of the big issues with Bayer, and the reason color moiré is a problem, is that the spatial sampling rate differs between the channels. Hence the R and B channels will alias before the green.

This sensor has the same issue. How it shows up in practice is the question. Clearly the current Foveon does not suffer from color moiré, although aliasing is visible in the obvious places of high spatial frequency. It will be interesting to see how aliasing manifests itself in this sensor. Clearly the advantage of this design is that the larger lower-level detectors catch more light, giving better low-light performance. Of course there are always trade-offs in any design. It will be interesting to see how this sensor performs.
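Truman's sampling argument can be put in rough numbers. A minimal Python sketch, with an assumed 5 µm pixel pitch (made up for illustration; Bayer green actually sits on a quincunx, simplified here to full per-axis pitch):

```python
# Sketch: Nyquist frequency is 1/(2 * sample pitch). In a Bayer CFA the red
# and blue samples sit on a grid with twice the pixel pitch, so their
# Nyquist frequency is half that of a full-pitch channel -- they alias first.
# The 5 um pitch is hypothetical.

def nyquist_cycles_per_mm(sample_pitch_um):
    """Highest spatial frequency (cycles/mm) representable without aliasing."""
    return 1000.0 / (2.0 * sample_pitch_um)

pixel_pitch_um = 5.0
f_green = nyquist_cycles_per_mm(pixel_pitch_um)           # full-pitch sampling
f_red_blue = nyquist_cycles_per_mm(2.0 * pixel_pitch_um)  # every other pixel
```

With these numbers the R/B Nyquist limit lands at half the green one, which is the gap moiré lives in.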

Truman Prevatt's gear list:
Nikon D800E Nikon AF Nikkor 50mm f/1.4D Nikon AF Nikkor 85mm f/1.4D Nikon AF Nikkor 135mm f/2D DC Nikon AF Nikkor 180mm f/2.8D ED-IF +8 more
DickLyon
New Member • Posts: 7
Re: Once more with feeling
In reply to Laurence Matson, Feb 12, 2014

Laurence Matson wrote:

As with all previous Foveon imagers, there are layers where colors are detected. The "filtering" is done in the silicon, and the filtering process is really just counting electrons at each discrete location. The electrons are the "corpses" of the expired photons. Since photons carry energy in proportion to their frequency, the stronger ones will penetrate furthest and the weaker ones will penetrate least. So the "blue" layer is merely a device to count how many dead photon bodies are lying around; the same goes for the "red" layer and the "green" layer.

Yes, this is a simple explanation. Perhaps some engineers can parse it better. I just studied acting.

Laurence,

Checking dpreview after hearing the news, I was amused to see that not much has changed in recent years.

Your Shakespearean description is not bad.  The collected electrons are exactly the corpses of absorbed photons, which is the point that most people who talk about the filtering miss: absorption and filtering and detection being the same event.

But the "stronger" and "weaker" is not quite right.  The high-frequency blue photons are stronger (highest energy); but the way they interact with silicon makes them get absorbed soonest, near the surface.  The lowest energy photons, in the infrared, penetrate deep.  Low frequencies, at wavelengths greater than 1100 nm, where the photons are quite weak, are not able to kick electrons out of the silicon lattice at all, so the silicon is completely transparent to them.  In between, there's a nicely graded rate of absorption.

Understanding the spectral responses of the layers starts with understanding that at any wavelength, the light intensity decreases exponentially with depth in the silicon.  I think I've written about this some place... Anyway, the top layer is not white, not luminance, not blue, but a sort of panchromatic blueish that turns out to work well enough for getting a high-frequency luminance signal.  We did a lot of experiments and amazed ourselves how "well enough" the 1:1:4 worked; it was not obviously going to be a good thing, but turned out awesome.
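The exponential falloff Dick describes can be sketched in a few lines. The 1/e absorption lengths below are rough order-of-magnitude values for silicon, used only for illustration; they are not Foveon's layer design:

```python
import math

# Beer-Lambert-style sketch: intensity at depth d is I0 * exp(-d / L), where
# L is the 1/e absorption length at that wavelength. Lengths are approximate
# textbook-order values for silicon (assumption, not Foveon data).
ABSORPTION_LENGTH_UM = {
    "blue_450nm": 0.4,
    "green_550nm": 1.5,
    "red_650nm": 3.3,
}

def fraction_absorbed(depth_um, absorption_length_um):
    """Fraction of photons absorbed between the surface and depth_um."""
    return 1.0 - math.exp(-depth_um / absorption_length_um)

# Within a shallow top layer, blue is absorbed far more than red -- which is
# why the top layer reads as "panchromatic blueish" rather than pure blue:
top_layer = {k: L and fraction_absorbed(0.3, L) for k, L in ABSORPTION_LENGTH_UM.items()}
```

Because every wavelength is absorbed to *some* degree at every depth, each layer sees a graded mix rather than a clean band, exactly as described above.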

We had 1:1:4 sensors working, including very efficient and effective hardware and software processing pipelines, before I left Foveon back in '06, but the sensors didn't yet have the low-noise fully-depleted top-layer photodiodes of the "Merrill" cameras, and we were only targeting these for cell phones at the time.  I expect it will be a killer combination: fully-depleted blue plus 1:1:4.  I don't think the red and green are fully depleted, too; that was thought to be somewhere between hard and impossible, which is why they don't have the same low read noise, and one reason why aggregating red and green quads this way is a big win.

But understanding how it compares to the classic Foveon and to Bayer will keep many people busy for a long time.  Something for dpreviewers to do while waiting for cameras, and something to keep the marketing people tied up... Should be fun.

Dick

Kendall Helmstetter Gelner
Forum Pro • Posts: 19,257
Re: The Quattro Knows
In reply to DMillier, Feb 12, 2014

DMillier wrote:

This debate is interesting but a bit philosophical.

And would someone who thinks they know please explain how it is possible to discriminate colour properly when each group of 4 adjacent pixels shares exactly the same values for the second and third layers and only the top value can vary.

Because you can use other nearby quads as lookup tables for exact values.  And once you have a fixed color for one pixel from the quad, you get closer to exact known values for the remaining three.  It only takes one quad with a constant color to get the correct value for every single pixel of that color in any other quad.

That would seem to severely limit the colour values that the 4 pixels can read. I'm still puzzled by this. Unless it is doing a Bayer-like thing and measuring the values of additional pixels from different spatial locations and using these to infer the missing values (or something).

Not inferring.  It's a lookup.

And that's just one possibility.  Just think of it as a puzzle, the more you can figure out one part of the puzzle the more others fall into place.

Kendall Helmstetter Gelner's gear list:
Sigma 70-200mm F2.8 EX DG OS HSM Sigma 8-16mm F4.5-5.6 DC HSM Sigma 24-70mm F2.8 EX DG HSM Sigma 50-500mm F4.5-6.3 DG OS HSM Sigma 85mm F1.4 EX DG HSM +4 more
Laurence Matson
Forum Pro • Posts: 11,549
Re: Once more with feeling
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

Laurence Matson wrote:

As with all previous Foveon imagers, there are layers where colors are detected. The "filtering" is done in the silicon, and the filtering process is really just counting electrons at each discrete location. The electrons are the "corpses" of the expired photons. Since photons carry energy in proportion to their frequency, the stronger ones will penetrate furthest and the weaker ones will penetrate least. So the "blue" layer is merely a device to count how many dead photon bodies are lying around; the same goes for the "red" layer and the "green" layer.

Yes, this is a simple explanation. Perhaps some engineers can parse it better. I just studied acting.

Laurence,

Checking dpreview after hearing the news, I was amused to see that not much has changed in recent years.

Your Shakespearean description is not bad. The collected electrons are exactly the corpses of absorbed photons, which is the point that most people who talk about the filtering miss: absorption and filtering and detection being the same event.

But the "stronger" and "weaker" is not quite right. The high-frequency blue photons are stronger (highest energy); but the way they interact with silicon makes them get absorbed soonest, near the surface. The lowest energy photons, in the infrared, penetrate deep. Low frequencies, at wavelengths greater than 1100 nm, where the photons are quite weak, are not able to kick electrons out of the silicon lattice at all, so the silicon is completely transparent to them. In between, there's a nicely graded rate of absorption.

Understanding the spectral responses of the layers starts with understanding that at any wavelength, the light intensity decreases exponentially with depth in the silicon. I think I've written about this some place... Anyway, the top layer is not white, not luminance, not blue, but a sort of panchromatic blueish that turns out to work well enough for getting a high-frequency luminance signal. We did a lot of experiments and amazed ourselves how "well enough" the 1:1:4 worked; it was not obviously going to be a good thing, but turned out awesome.

We had 1:1:4 sensors working, including very efficient and effective hardware and software processing pipelines, before I left Foveon back in '06, but the sensors didn't yet have the low-noise fully-depleted top-layer photodiodes of the "Merrill" cameras, and we were only targeting these for cell phones at the time. I expect it will be a killer combination: fully-depleted blue plus 1:1:4. I don't think the red and green are fully depleted, too; that was thought to be somewhere between hard and impossible, which is why they don't have the same low read noise, and one reason why aggregating red and green quads this way is a big win.

But understanding how it compares to the classic Foveon and to Bayer will keep many people busy for a long time. Something for dpreviewers to do while waiting for cameras, and something to keep the marketing people tied up... Should be fun.

Dick

Well, I guess you should know too, 'cause you are also an engineer. But Merrill was just a janitor who liked trains, so he was a wannabe engineer. Sort of.

I was just looking for that stuff. I guess you just looked harder. But of course it makes sense, like the low-frequency signals whales use and that stuff.

Google really isn't much of a friend when you don't know what you're talking about.

Welcome back. More stories please.

And there is nothing more fun than tying up marketing people. Duct taping them too.

mike earussi
Veteran Member • Posts: 5,610
Re: What you pose as theory is in fact impossible
In reply to Kendall Helmstetter Gelner, Feb 12, 2014

Kendall Helmstetter Gelner wrote:

mike earussi wrote:

Kendall Helmstetter Gelner wrote:

mike earussi wrote:

<...>

The point is that it is an example. And you're free to call the top layer anything you want.

That doesn't change your misconception of the top layer being isolated from the bottom layers. You can't "call" it anything and change that.

Nor did I make up the concept, but rather got it here from the "boss":

http://translate.google.no/translate?sl=auto&tl=en&u=http%3A//blogs.yahoo.co.jp/ka_tate/64122571.html

Because we all know Google Translate is a bastion of accuracy!

So you think Google translate can screw up a color diagram?

I think your understanding of it can be wrong, and Translate making the text associated with the diagram hard to follow helps with that substantially.

Really? It looks very clear to me. The second and third layers are averaged out and combined with the top layer, and if the top-layer values for all four pixels are the same, then the color for all four pixels will also be the same.

Raist3d
Forum Pro • Posts: 35,268
Re: The Quattro Knows
In reply to DMillier, Feb 12, 2014

DMillier wrote:

This debate is interesting but a bit philosophical.

And would someone who thinks they know please explain how it is possible to discriminate colour properly when each group of 4 adjacent pixels shares exactly the same values for the second and third layers and only the top value can vary. That would seem to severely limit the colour values that the 4 pixels can read. I'm still puzzled by this. Unless it is doing a Bayer-like thing and measuring the values of additional pixels from different spatial locations and using these to infer the missing values (or something).

You get an average for the color region, but the upper layer paints in the detail, along with whatever wavelength it is color-biased to. The closer objects are to this color, the higher their color resolution; the further away, the lower. But since detail/luminance is very strongly sampled, it will still give the impression of a lot of detail. You lose some color accuracy in favor of detail (vs. an ideal X3 design with no noise).

The data seems to me still better than what a Bayer CFA sensor has to deal with.

DickLyon
New Member • Posts: 7
Re: Once more with feeling
In reply to Laurence Matson, Feb 12, 2014

Google found me the first paper listed at http://www.foveon.com/article.php?a=74

which goes into the exponential absorption stuff.

The stock price is also looking good.

Duct tape all around.

Dick

DickLyon
New Member • Posts: 7
Re: Once more with feeling
In reply to DickLyon, Feb 12, 2014

Hubel's paper there covers novel processing pipelines that may be relevant:

http://www.foveon.com/files/CIC13_Hubel_Final.pdf

but he and I left some years ago, so it's hard to know what they're doing by now.

Dick

Laurence Matson
Forum Pro • Posts: 11,549
Re: Once more with feeling
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

Hubel's paper there covers novel processing pipelines that may be relevant:

http://www.foveon.com/files/CIC13_Hubel_Final.pdf

but he and I left some years ago, so it's hard to know what they're doing by now.

Dick

All that stuff has way too many words. How are we supposed to figure out all of the angles when we have to look up more than we were supposed to learn in a lifetime?

Glad the stock price is up. Glad they dumped more of American technology on Lenovo. And when is Street View going to do Garvin Hill Rd.? My arm is getting tired of waving.

Hug to Peg.

victorgv
Senior Member • Posts: 1,254
Re: What the imager has
In reply to Truman Prevatt, Feb 12, 2014

Truman Prevatt wrote:

One of the big issues with Bayer, and the reason color moiré is a problem, is that the spatial sampling rate differs between the channels. Hence the R and B channels will alias before the green.

This sensor has the same issue. How it shows up in practice is the question. Clearly the current Foveon does not suffer from color moiré, although aliasing is visible in the obvious places of high spatial frequency. It will be interesting to see how aliasing manifests itself in this sensor. Clearly the advantage of this design is that the larger lower-level detectors catch more light, giving better low-light performance. Of course there are always trade-offs in any design. It will be interesting to see how this sensor performs.


I do not think there would be any issues with color moire.

I did a quick and dirty experiment: I scaled the color down to 50% and back up, converted the original to b&w, and combined the two layers. I bet Sigma has a much, much better algorithm, so its result would be much better.

http://www.dpreview.com/forums/post/53096876
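The experiment described above can be approximated in a few lines of numpy: keep luminance at full resolution, box-average the chroma 2x2 and upsample it back, then recombine. The Rec. 601 luma weights and this simple luma/chroma split are my assumptions for illustration, not Sigma's pipeline:

```python
import numpy as np

LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])  # Rec. 601-style weights (assumed)

def simulate_quattro(rgb):
    """rgb: float array (H, W, 3), H and W even. Full-res luma + 2x2-pooled chroma."""
    luma = rgb @ LUMA_WEIGHTS                 # full-resolution luminance
    chroma = rgb - luma[..., None]            # per-pixel color difference
    h, w, _ = rgb.shape
    # Pool chroma in 2x2 blocks, then nearest-neighbor upsample back:
    pooled = chroma.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
    upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)
    return luma[..., None] + upsampled

flat = np.full((4, 4, 3), 0.5)   # a flat gray patch should pass through unchanged
out = simulate_quattro(flat)
```

A nice property of this split: because the weights sum to 1, the luminance of the output matches the input exactly; only chroma is blurred.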

victorgv's gear list:
Sigma DP1 Sigma DP2 Merrill Sigma dp2 Quattro
Truman Prevatt
Veteran Member • Posts: 4,414
Re: What the imager has
In reply to victorgv, Feb 12, 2014

victorgv wrote:

Truman Prevatt wrote:

One of the big issues with Bayer, and the reason color moiré is a problem, is that the spatial sampling rate differs between the channels. Hence the R and B channels will alias before the green.

This sensor has the same issue. How it shows up in practice is the question. Clearly the current Foveon does not suffer from color moiré, although aliasing is visible in the obvious places of high spatial frequency. It will be interesting to see how aliasing manifests itself in this sensor. Clearly the advantage of this design is that the larger lower-level detectors catch more light, giving better low-light performance. Of course there are always trade-offs in any design. It will be interesting to see how this sensor performs.


I do not think there would be any issues with color moire.

I did a quick and dirty experiment: I scaled the color down to 50% and back up, converted the original to b&w, and combined the two layers. I bet Sigma has a much, much better algorithm, so its result would be much better.

http://www.dpreview.com/forums/post/53096876

I don't know how it will show, but the lower two layers of detectors have a much lower Nyquist frequency, and aliasing will be present in shots with high-frequency patterns of red and green from the lower layers. I expect you are right that it won't show up as traditional color moiré in the sense that it shows up in Bayer sensors. I am, however, interested to see how it presents itself.

Truman Prevatt's gear list:
Nikon D800E Nikon AF Nikkor 50mm f/1.4D Nikon AF Nikkor 85mm f/1.4D Nikon AF Nikkor 135mm f/2D DC Nikon AF Nikkor 180mm f/2.8D ED-IF +8 more
victorgv
Senior Member • Posts: 1,254
Re: What the imager has
In reply to Truman Prevatt, Feb 12, 2014

Truman Prevatt wrote:

victorgv wrote:

Truman Prevatt wrote:

One of the big issues with Bayer, and the reason color moiré is a problem, is that the spatial sampling rate differs between the channels. Hence the R and B channels will alias before the green.

This sensor has the same issue. How it shows up in practice is the question. Clearly the current Foveon does not suffer from color moiré, although aliasing is visible in the obvious places of high spatial frequency. It will be interesting to see how aliasing manifests itself in this sensor. Clearly the advantage of this design is that the larger lower-level detectors catch more light, giving better low-light performance. Of course there are always trade-offs in any design. It will be interesting to see how this sensor performs.


I do not think there would be any issues with color moire.

I did a quick and dirty experiment: I scaled the color down to 50% and back up, converted the original to b&w, and combined the two layers. I bet Sigma has a much, much better algorithm, so its result would be much better.

http://www.dpreview.com/forums/post/53096876

I don't know how it will show, but the lower two layers of detectors have a much lower Nyquist frequency, and aliasing will be present in shots with high-frequency patterns of red and green from the lower layers. I expect you are right that it won't show up as traditional color moiré in the sense that it shows up in Bayer sensors. I am, however, interested to see how it presents itself.


You would have to try really hard to create a processing algorithm that gets color moiré out of that type of sensor.

victorgv's gear list:
Sigma DP1 Sigma DP2 Merrill Sigma dp2 Quattro
Kendall Helmstetter Gelner
Forum Pro • Posts: 19,257
Re: What the imager has
In reply to Truman Prevatt, Feb 12, 2014

Truman Prevatt wrote:

<...>I don't know how it will show, but the lower two layers of detectors have a much lower Nyquist frequency, and aliasing will be present in shots with high-frequency patterns of red and green from the lower layers.

Yet another thing that the overlap of the top layer with the MIDDLE and BOTTOM (not green/red) layers will prevent. You are still acting as though the TOP layer, with its greater subdivision, is totally separated from the other layers.

I expect you are right that it won't show up as traditional color moiré in the sense that it shows up in Bayer sensors. I am, however, interested to see how it presents itself.

Pretty sure the answer is, not at all.  There may be some kinds of artifacts but that will not be one of them.

Kendall Helmstetter Gelner's gear list:
Sigma 70-200mm F2.8 EX DG OS HSM Sigma 8-16mm F4.5-5.6 DC HSM Sigma 24-70mm F2.8 EX DG HSM Sigma 50-500mm F4.5-6.3 DG OS HSM Sigma 85mm F1.4 EX DG HSM +4 more
Kendall Helmstetter Gelner
Forum Pro • Posts: 19,257
Recording?
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

Hubel's paper there covers novel processing pipelines that may be relevant:

http://www.foveon.com/files/CIC13_Hubel_Final.pdf

but he and I left some years ago, so it's hard to know what they're doing by now.

The paper mentions presenting some comparisons between the one- and two-stage (separate high-frequency luminance and low-frequency chroma) pipelines; I don't suppose you know if that presentation can be viewed anywhere?

Kendall Helmstetter Gelner's gear list:
Sigma 70-200mm F2.8 EX DG OS HSM Sigma 8-16mm F4.5-5.6 DC HSM Sigma 24-70mm F2.8 EX DG HSM Sigma 50-500mm F4.5-6.3 DG OS HSM Sigma 85mm F1.4 EX DG HSM +4 more
zodiacfml
Contributing Member • Posts: 520
Re: What the imager has
In reply to Roland Karlsson, Feb 12, 2014

Roland Karlsson wrote:

Laurence Matson wrote:

What the imager has is 19 million spatial locations. How the pixels are counted is once again a big deal for those discussion types. I am guessing that the G and R layers each have around 5 million pixels and the top B, 19 million. Or thereabouts.

Of course, some of our favorite negativists will argue that this is not really an X3 imager. That also, is nonsense. There are 3 layers (X3) each of which collects stuff to yield a full-color reading at each spatial location. The oh-so obvious - at least to Ricardo - interpolation that has to be going on is a moot point at best. Moot on, if you want.

Hi Laurence.

It is a 5 MP full Foveon sensor, giving 5 MP full color images, with a potential to get 20 MP luminance resolution.

And that is it.

+1 from me. That is just basically it.

It just gets complicated when we consider the IQ of the resulting 20MP image, which is probably as complicated as the algorithms a Bayer filter needs to restore a color image.

I reckon the 20MP color image quality will vary depending on how much information there is in the top channel.

zodiacfml's gear list:
Sigma DP2
DickLyon
New Member • Posts: 7
Re: What the imager has
In reply to Kendall Helmstetter Gelner, Feb 12, 2014

Kendall, long time...

You're right that there won't be much aliasing. A lot of people seem to have the idea that aliasing has something to do with different sampling positions or density, as in Bayer. But that's not the key issue. The problem with Bayer is that the red plane (for example) can never have more than 25% effective fill factor, because the sampling aperture is only half the size, in each direction, of the sample spacing. If you take the Fourier transform of that half-size aperture, you'll find it doesn't do much smoothing, so the response is still far too high well past the Nyquist frequency. That's why it needs an anti-aliasing filter to do extra blurring. But if the AA filter is strong enough to remove all the aliasing in red, it also throws away the extra resolution that having twice as many green samples is supposed to give. It's a tough tradeoff.

In the Foveon sensor, the reason no AA filter is needed is not because of where the samples are, or what the different spatial sampling densities are. It's because each sample is through an aperture of nearly 100% fill factor, that is, as wide each way as the sample pitch. The Fourier transform of this aperture has a null at the spatial frequencies that would alias to low frequencies; this combined with a tiny bit more blur from the lens psf is plenty to keep aliasing to a usually invisible level, while keeping the image sharp and high-res.
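The aperture argument can be checked directly: the MTF of a box sampling aperture of width a (in units of the pixel pitch) is |sinc(f·a)|. A minimal sketch, illustrative only:

```python
import math

# MTF of a box aperture: |sinc(f * a)|, sinc(x) = sin(pi x)/(pi x).
# At the sampling frequency f = 1 cycle/pitch -- the frequency that folds
# down to DC -- a 100% fill aperture (a = 1 pitch) has a null, while a
# half-width aperture (a Bayer color plane) still passes about 64%.

def aperture_mtf(freq_cycles_per_pitch, aperture_width_pitches):
    x = freq_cycles_per_pitch * aperture_width_pitches
    if x == 0.0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

mtf_full_fill = aperture_mtf(1.0, 1.0)   # ~0: aliasing to DC is nulled out
mtf_half_fill = aperture_mtf(1.0, 0.5)   # ~0.64: plenty of response left to alias
```

The null at f = 1/pitch for the full-width aperture is exactly the "ideal anti-aliasing filter" behavior described in the next paragraph.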

In the 1:1:4 arrangement, each sample layer has this property, but at different rates -- very unlike the Bayer's red and blue planes.  The large area of the lower-level pixels is the ideal anti-aliasing filter for those layers; the top layer is not compromised by the extra spatial blurring in the lower layers, so it provides the extra high frequencies needed to make a full-res image.

Another good way to think of the lower levels is that they get the same four samples as the top level, and then "aggregate" or "pool" four samples into one. This is easy to simulate by processing a full-res RGB image in Photoshop or whatever.

The pooling of 4 into 1 is done most efficiently in the domain of collected photo-electrons, before converting to a voltage in the readout transistor. The result is the same read noise, but four times as much signal, so about a 4X better signal-to-noise ratio. Plus with fewer plugs, transistors, wires, etc. to service the lower levels, the pixel fill factor is closer to 100% with easier microlenses, and the readout rate doesn't have to be as high. Wins all around -- except for the chroma resolution.
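The read-noise arithmetic behind the charge-domain pooling claim, with made-up numbers and shot noise ignored (the read-noise-limited, low-light case):

```python
import math

# Summing four pixels' charge before readout pays one dose of read noise;
# reading the four out separately and summing digitally pays sqrt(4) doses.
# Both numbers below are hypothetical.
signal_e = 100.0     # electrons collected per small pixel (assumed)
read_noise_e = 5.0   # read noise per readout, electrons RMS (assumed)

snr_single = signal_e / read_noise_e                             # one small pixel
snr_charge_pooled = 4 * signal_e / read_noise_e                  # sum charge, read once
snr_digital_sum = 4 * signal_e / (math.sqrt(4) * read_noise_e)   # read 4x, sum after
```

Charge-domain pooling gets the full 4x SNR gain over a single small pixel; summing after readout only gets 2x, which is why doing it in the collected-electron domain matters.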

The main claim of Bryce Bayer, and the fact that most TV formats and image and video compression algorithms rely on, is that the visual system doesn't care nearly as much about chroma resolution as about luma resolution. Unfortunately, trying to exploit that factor with a one-layer mosaic sensor has these awkward aliasing problems. Doing it with the Foveon 1:1:4 arrangement works better, requiring no AA filter, no filtering compromises. So, yes, the chroma resolution is less than the luma resolution, but you'd be hard pressed to see that in images.

If you throw out the extra luma resolution and just make 5 MP images from this new camera, you'll still have super-sharp super-clean versions of what the old DP2 or SD15 could do. Now imagine adding 2X resolution in each dimension, but with extra luma detail only, like in a typical JPEG encoding that encodes chroma at half the sample rate of luma. Whose eyes are going to be good enough to even tell that the chroma is less sharp than the luma? It's not impossible, but hard.

Speaking of stories from the old days, Foveon's first version of Sigma Photo Pro had a minor bug in the JPEG output, as you probably recall: our calls to the jpeg-6b library defaulted to encoding with half-res chroma. It took a while, but a user did eventually find an image where he could tell something was not perfect, by comparing to TIFF output, and another user told us how to fix it, so we did. If we could have gotten that original level of JPEG quality from the SD9 with 5 million instead of 10 million pixel sensors and data values, and could have gotten cleaner color as a result, would that have been a problem? I don't think so; except for marketing, and they had enough problems already. Same way with Sigma's new one, I expect; if 30 M values gives an image that will be virtually indistinguishable from what could be done with 60 M, but with cleaner color, will someone complain?

Probably so.

So, it's complicated.  Yes, reduced chroma resolution is a compromise; but a very good one, well matched to human perception -- not at all like the aliasing-versus-resolution compromise that the mosaic-with-AA-filter approach has to face.

Dick

disclaimer: I've been away from this technology too long to have any inside knowledge.  And give my apologies to Laurence for my too many words.

amdme127
Regular Member • Posts: 459
Re: Once more with feeling
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

But the "stronger" and "weaker" is not quite right. The high-frequency blue photons are stronger (highest energy); but the way they interact with silicon makes them get absorbed soonest, near the surface. The lowest energy photons, in the infrared, penetrate deep. Low frequencies, at wavelengths greater than 1100 nm, where the photons are quite weak, are not able to kick electrons out of the silicon lattice at all, so the silicon is completely transparent to them. In between, there's a nicely graded rate of absorption.

Understanding the spectral responses of the layers starts with understanding that at any wavelength, the light intensity decreases exponentially with depth in the silicon. I think I've written about this some place... Anyway, the top layer is not white, not luminance, not blue, but a sort of panchromatic blueish that turns out to work well enough for getting a high-frequency luminance signal. We did a lot of experiments and amazed ourselves how "well enough" the 1:1:4 worked; it was not obviously going to be a good thing, but turned out awesome.

We had 1:1:4 sensors working, including very efficient and effective hardware and software processing pipelines, before I left Foveon back in '06, but the sensors didn't yet have the low-noise fully-depleted top-layer photodiodes of the "Merrill" cameras, and we were only targeting these for cell phones at the time. I expect it will be a killer combination: fully-depleted blue plus 1:1:4. I don't think the red and green are fully depleted, too; that was thought to be somewhere between hard and impossible, which is why they don't have the same low read noise, and one reason why aggregating red and green quads this way is a big win.

But understanding how it compares to the classic Foveon and to Bayer will keep many people busy for a long time. Something for dpreviewers to do while waiting for cameras, and something to keep the marketing people tied up... Should be fun.

Dick

Dick,

So potentially the 4.9 MP middle and lower layers could produce a cleaner result (better color accuracy, more detail, less noise) than the Merrill sensor's middle and lower layers. Which would make the new sensor quite a bit superior in overall quality to the Merrill sensor in most situations?

Also, assuming the middle layer collects cleaner results than the bottom layer, when Sigma/Foveon make a bigger sensor, would a 1:4:16 (bottom:middle:top layer) arrangement make sense, to produce even better results once the sensor gets big enough to support that resolution?

Thank you for any insight you can give.

amdme127's gear list:
Sigma dp2 Quattro Sigma SD9 Sigma SD14 Sigma SD15 Sigma 50mm F1.4 EX DG HSM +12 more
Kendall Helmstetter Gelner
Forum Pro • Posts: 19,257
Color Resolution Zen
In reply to Roland Karlsson, Feb 12, 2014

Roland Karlsson wrote:

mike earussi wrote:
Which will vary throughout the image, dependent entirely on the value of the four top layer pixels. So some parts of the image could be as low as 5mp whereas other part could be as high as 19mp, color resolution of course.

Yes, and this is an unwanted characteristic. One of the more irritating properties of Bayer sensors is their varying resolution.

Now, the Quattro will not vary in resolution, but in color resolution. Not the same thing.

Yes, the irritating part about Bayer is the varying luminance resolution, because it suddenly drops off in detail before natural factors like DOF would cause decay.

The other annoying part is the color artifacts, which bring utterly unexpected colors into view where none should be simply based on a pattern.

The Quattro should be immune to simply "made up" colors; the worst I can possibly see happening is some kind of color bleeding. But because the top layer also picks up some of what the lower layers get, you can eliminate a lot of color-bleeding effects simply by having a clear picture of where the boundaries are.

In the end the result should be way closer to 19MP of color resolution than 5MP.  I don't even think it's possible to have any image where you would only get 5MP of color resolution.

Kendall Helmstetter Gelner's gear list:
Sigma 70-200mm F2.8 EX DG OS HSM Sigma 8-16mm F4.5-5.6 DC HSM Sigma 24-70mm F2.8 EX DG HSM Sigma 50-500mm F4.5-6.3 DG OS HSM Sigma 85mm F1.4 EX DG HSM +4 more
zodiacfml
Contributing Member • Posts: 520
thanks
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

Kendall, long time...

You're right that there won't be much aliasing. A lot of people seem to have the idea that aliasing has something to do with different sampling positions or density, as in Bayer. But that's not the key issue. The problem with Bayer is that the red plane (for example) can never have more than 25% effective fill factor, because the sampling aperture is only half the size, in each direction, of the sample spacing. If you take the Fourier transform of that half-size aperture, you'll find it doesn't do much smoothing, so the response is still quite high well past the Nyquist frequency. That's why it needs an anti-aliasing filter to do extra blurring. But if the AA filter is strong enough to remove all the aliasing in red, it also throws away the extra resolution that having twice as many green samples is supposed to give. It's a tough tradeoff.

In the Foveon sensor, the reason no AA filter is needed is not because of where the samples are, or what the different spatial sampling densities are. It's because each sample is through an aperture of nearly 100% fill factor, that is, as wide each way as the sample pitch. The Fourier transform of this aperture has a null at the spatial frequencies that would alias to low frequencies; this combined with a tiny bit more blur from the lens psf is plenty to keep aliasing to a usually invisible level, while keeping the image sharp and high-res.
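The aperture argument above can be checked numerically: the Fourier transform of a box aperture is a sinc, and a box as wide as the sample pitch has its first null exactly at the sampling frequency, which is where aliasing folds down to near DC. A minimal sketch (the unit pitch is arbitrary, not tied to any real sensor):

```python
import math

def aperture_mtf(f, w):
    """|MTF| of an ideal box sampling aperture of width w at spatial
    frequency f: |sinc(f*w)| with the normalized sinc sin(pi x)/(pi x)."""
    x = f * w
    if x == 0:
        return 1.0
    return abs(math.sin(math.pi * x) / (math.pi * x))

pitch = 1.0                  # sample spacing, arbitrary units
f_sampling = 1.0 / pitch     # frequencies near here alias down to near DC

# Bayer red/blue plane: aperture only half the sample spacing each way.
half_fill = aperture_mtf(f_sampling, 0.5 * pitch)   # ~0.64: strong aliasing
# ~100% fill factor: aperture as wide as the pitch -> exact null.
full_fill = aperture_mtf(f_sampling, 1.0 * pitch)   # ~0.0

print(f"response at sampling frequency, 50% linear fill:  {half_fill:.3f}")
print(f"response at sampling frequency, 100% linear fill: {full_fill:.3f}")
```

The half-width aperture still passes about 64% of the signal at the frequency that aliases to DC, which is why the mosaic sensor needs an extra AA filter; the full-width aperture nulls it out.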

In the 1:1:4 arrangement, each sample layer has this property, but at different rates -- very unlike the Bayer's red and blue planes. The large area of the lower-level pixels is the ideal anti-aliasing filter for those layers; the top layer is not compromised by the extra spatial blurring in the lower layers, so it provides the extra high frequencies needed to make a full-res image.

Another good way to think of the lower levels is that they get the same four samples as the top level, and then "aggregate" or "pool" four samples into one. This is easy to simulate by processing a full-res RGB image in Photoshop or whatever.

The pooling of 4 into 1 is done most efficiently in the domain of collected photo-electrons, before converting to a voltage in the readout transistor. The result is the same read noise, but four times as much signal, so about a 4X better signal-to-noise ratio. Plus with fewer plugs, transistors, wires, etc. to service the lower levels, the pixel fill factor is closer to 100% with easier microlenses, and the readout rate doesn't have to be as high. Wins all around -- except for the chroma resolution.
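The noise arithmetic behind "pool in the charge domain, before readout" can be sketched with a simple model: shot noise grows as the square root of the collected electrons, while read noise is added once per readout. The electron counts below are assumed round numbers for illustration, not measured sensor values:

```python
import math

def snr(signal_e, read_noise_e):
    """SNR of one readout: shot noise sqrt(S) and read noise in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

S = 10.0        # electrons per small pixel (assumed low-light level)
sigma_r = 10.0  # read noise in electrons (assumed)

single  = snr(S, sigma_r)        # one small pixel, read on its own
pooled  = snr(4 * S, sigma_r)    # 4 pixels binned in charge, one readout
digital = 4 * S / math.sqrt(4 * S + 4 * sigma_r ** 2)  # 4 readouts, summed after

print(f"single pixel SNR:  {single:.2f}")
print(f"charge-binned SNR: {pooled:.2f}")
print(f"digital-sum SNR:   {digital:.2f}")
```

Charge-domain binning pays the read noise once for four pixels' worth of signal, so in the read-noise-limited regime it approaches the 4X SNR gain Dick mentions, while summing after four separate readouts only gains about 2X.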

The main claim of Bryce Bayer, and the fact that most TV formats and image and video compression algorithms rely on, is that the visual system doesn't care nearly as much about chroma resolution as about luma resolution. Unfortunately, trying to exploit that factor with a one-layer mosaic sensor has these awkward aliasing problems. Doing it with the Foveon 1:1:4 arrangement works better, requiring no AA filter, no filtering compromises. So, yes, the chroma resolution is less than the luma resolution, but you'd be hard pressed to see that in images.

If you throw out the extra luma resolution and just make 5 MP images from this new camera, you'll still have super-sharp super-clean versions of what the old DP2 or SD15 could do. Now imagine adding 2X resolution in each dimension, but with extra luma detail only, like in a typical JPEG encoding that encodes chroma at half the sample rate of luma. Whose eyes are going to be good enough to even tell that the chroma is less sharp than the luma? It's not impossible, but hard.
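The half-res-chroma point can be simulated directly: keep luma at full resolution and average the chroma planes over 2x2 blocks, roughly what JPEG's 4:2:0 mode does (real JPEG adds DCT quantization on top; the BT.601 matrix here is one common convention, and even image dimensions are assumed):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr, float arrays in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 + (b - y) / 1.772
    cr = 0.5 + (r - y) / 1.402
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    r = y + 1.402 * (cr - 0.5)
    b = y + 1.772 * (cb - 0.5)
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def subsample_chroma(rgb):
    """Keep luma at full resolution; average chroma over 2x2 blocks and
    replicate it back, as in 4:2:0 subsampling."""
    ycc = rgb_to_ycbcr(rgb)
    for c in (1, 2):
        plane = ycc[..., c]
        h, w = plane.shape
        blocks = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        ycc[..., c] = np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)
    return ycbcr_to_rgb(ycc)

# A gray-scale gradient has full luma detail and flat chroma, so 4:2:0
# subsampling leaves it untouched -- which is the whole point.
gray = np.repeat(np.linspace(0.0, 1.0, 16).reshape(4, 4, 1), 3, axis=2)
print(np.allclose(subsample_chroma(gray), gray))   # True
```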

Speaking of stories from the old days, Foveon's first version of Sigma Photo Pro had a minor bug in the JPEG output, as you probably recall: our calls to the jpeg-6b library defaulted to encoding with half-res chroma. It took a while, but a user did eventually find an image where he could tell something was not perfect, by comparing to TIFF output, and another user told us how to fix it, so we did. If we could have gotten that original level of JPEG quality from the SD9 with 5 million instead of 10 million pixel sensors and data values, and could have gotten cleaner color as a result, would that have been a problem? I don't think so; except for marketing, and they had enough problems already. Same way with Sigma's new one, I expect; if 30 M values give an image that will be virtually indistinguishable from what could be done with 60 M, but with cleaner color, will someone complain?

Probably so.

So, it's complicated. Yes, reduced chroma resolution is a compromise; but a very good one, well matched to human perception -- not at all like the aliasing-versus-resolution compromise that the mosaic-with-AA-filter approach has to face.

Dick

disclaimer: I've been away from this technology too long to have any inside knowledge. And give my apologies to Laurence for my too many words.

Excellent post.  Only this could end all the discussion about the new sensor, and yet, who wants that?  

(unknown member)
Many thanks for going to all the trouble
In reply to DickLyon, Feb 12, 2014

DickLyon wrote:

Laurence Matson wrote:

As with all previous Foveon imagers there are layers where colors are detected. The "filtering" done in the silicon and the filtering process is really just counting electrons at each discrete location. The electrons are the "corpses" from the expired photons. Since photons carry energy in proportion to their frequency, the stronger ones will penetrate furthest and the weaker ones will penetrate least. So the "blue" layer is merely a device to count how many dead photon bodies are lying around; the same goes for the "red" layer and the "green" layer.

Yes, this is a simple explanation. Perhaps some engineers can parse it better. I just studied acting.

Laurence,

Checking dpreview after hearing the news, I was amused to see that not much has changed in recent years.

Your Shakespearean description is not bad. The collected electrons are exactly the corpses of absorbed photons, which is the point that most people who talk about the filtering miss: absorption, filtering, and detection are the same event.

But the "stronger" and "weaker" is not quite right. The high-frequency blue photons are stronger (highest energy); but the way they interact with silicon makes them get absorbed soonest, near the surface. The lowest energy photons, in the infrared, penetrate deep. Low frequencies, at wavelengths greater than 1100 nm, where the photons are quite weak, are not able to kick electrons out of the silicon lattice at all, so the silicon is completely transparent to them. In between, there's a nicely graded rate of absorption.

Understanding the spectral responses of the layers starts with understanding that at any wavelength, the light intensity decreases exponentially with depth in the silicon. I think I've written about this some place... Anyway, the top layer is not white, not luminance, not blue, but a sort of panchromatic blueish that turns out to work well enough for getting a high-frequency luminance signal. We did a lot of experiments and amazed ourselves how "well enough" the 1:1:4 worked; it was not obviously going to be a good thing, but turned out awesome.
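The exponential-decay picture can be sketched with Beer-Lambert arithmetic: a layer between two depths absorbs the difference of two exponentials. The 1/e absorption depths below are rough, illustrative numbers for silicon, and the layer boundaries are hypothetical, not published Foveon or Quattro values:

```python
import math

# Illustrative, order-of-magnitude 1/e absorption depths in silicon (microns);
# real values vary with temperature and doping.
absorption_depth_um = {450: 0.4, 550: 1.5, 650: 3.5}

def layer_response(wavelength_nm, d_top_um, d_bottom_um):
    """Fraction of incident photons absorbed between two depths, from
    Beer-Lambert exponential decay I(d) = I0 * exp(-d / L)."""
    L = absorption_depth_um[wavelength_nm]
    return math.exp(-d_top_um / L) - math.exp(-d_bottom_um / L)

# Assumed layer boundaries in microns -- purely for illustration.
layers = {"top ('blue')":     (0.0, 0.4),
          "middle ('green')": (0.4, 1.6),
          "bottom ('red')":   (1.6, 5.0)}

for name, (d1, d2) in layers.items():
    fracs = {wl: round(layer_response(wl, d1, d2), 2)
             for wl in absorption_depth_um}
    print(name, fracs)
```

Even this toy model shows why the top layer is "a sort of panchromatic blueish": it absorbs blue most strongly but still catches a meaningful fraction of green and red light, while each deeper layer sees a progressively redder mix.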

We had 1:1:4 sensors working, including very efficient and effective hardware and software processing pipelines, before I left Foveon back in '06, but the sensors didn't yet have the low-noise fully-depleted top-layer photodiodes of the "Merrill" cameras, and we were only targeting these for cell phones at the time. I expect it will be a killer combination: fully-depleted blue plus 1:1:4. I don't think the red and green are fully depleted, too; that was thought to be somewhere between hard and impossible, which is why they don't have the same low read noise, and one reason why aggregating red and green quads this way is a big win.

But understanding how it compares to the classic Foveon and to Bayer will keep many people busy for a long time. Something for dpreviewers to do while waiting for cameras, and something to keep the marketing people tied up... Should be fun.

Dick

and joining DPR and finally giving reliable insight about what's really going on with the Quattro, Mr Lyon!

Awesome, I believe, might be the appropriate word for this
