BetterPhoto Q&A
Category: New Answers

Photography Question 

NAVNITH KRISHNAN
 

effective megapixel


What exactly is meant by "effective megapixels" in a digital camera? For example, the Nikon D70s comes with 6.1 megapixels. At 300 dpi, I get only a maximum print size of 8x10. Does it have any relevance to the size of the image?
navnith
navnith41@yahoo.co.in


April 05, 2006

 

robert G. Fately
  A digital camera has a microchip that consists of rows and columns of teeny tiny light sensors. Think of a very very VERY fine screen mesh, with each hole in the mesh having a little light reader, okay?

Now, the light sensors only read the amount of light - they don't sense color. So, 1/4 of these sensors have red filters on them, 1/4 have blue, and 1/2 have green (the reasons for that split we need not get into here). These filters are spread out evenly, and the computer in the camera body gathers the readings from adjoining sensors to interpolate a single picture element, or pixel.

So, a 6MP chip essentially has an array of 2000 by 3000 sensors. The "effective" megapixel rating counts only the photosites actually used to form the image; the chip's total count is slightly higher, because some sensors around the edges are masked off or used for calibration rather than image data, so it's not a straight 1:1 relation.

As for output - this is highly flexible. First, different printers have different ideal resolution requirements... Epsons are usually 360 ppi, dye-sub printers are often 300 ppi, and some printers only require 240 ppi. You can use other resolutions, though, with hardly any noticeable difference - for example, a 144 ppi image file printed on a dye-sub printer idealized for 300 ppi will still come out looking very good.
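To make the arithmetic concrete, here is a small sketch of the pixels-to-print-size math. The 3000 x 2000 dimensions match the 6MP example above, and the ppi targets are the ones mentioned for the various printers:

```python
# Print size in inches = pixel dimension / printer resolution (ppi).
width_px, height_px = 3000, 2000  # a 6 MP image

for ppi in (360, 300, 240, 144):
    print(f"{ppi:>3} ppi -> {width_px / ppi:.1f} x {height_px / ppi:.1f} inches")
# 360 ppi ->  8.3 x 5.6 inches
# 300 ppi -> 10.0 x 6.7 inches
# 240 ppi -> 12.5 x 8.3 inches
# 144 ppi -> 20.8 x 13.9 inches
```

This is where the 8x10 limit in the original question comes from: at 300 ppi, 3000 pixels is exactly 10 inches.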

On top of that, digital images are highly up-sizeable; that is, you can increase the total file size with Photoshop (or other editing packages) with hardly any ill effect. In fact, using specialized software like Genuine Fractals, you can upsize quite a bit - that 6MP original image file could get to 20x30 inches and still look great. Assuming, of course, that the original photo was great...


April 05, 2006

 

John G. Clifford Jr
  Well... the previous poster is almost right.

'Pixels', short for 'PIcture ELementS', are the RGB values that you see in the finished, processed image file on the computer. Raw sensor data does not directly correlate to the image pixels, because image processing software, either on the camera (for JPEG images) or on the PC (for raw image files) calculates each pixel from multiple photosensor values.

Bayer CFA (color filter array) sensors, the sensor type used in most digital cameras, have a pattern of color filters (the color filter array) laid over the photosites. In the Bayer pattern, each 2x2 block of photosensors (the individual light-sensing elements on a digital sensor, informally and incorrectly referred to as 'pixels') contains two green-detecting sensors on a diagonal, one red-detecting sensor, and one blue-detecting sensor. This is done because photosensors are monochromatic (they can only sense the intensity of light, not its color), so each filter allows only light of its own color through to the sensor beneath it.

A sophisticated algorithm that 'knows' what color each individual photosensor sees then takes the readings from all of the sensors on the chip and compares and correlates them to arrive at a single RGB value per pixel. More advanced algorithms do this several times, applying the 2x2 matrix pattern over the raw sensor data repeatedly to calculate pixel values that are as close as possible to what was actually in the scene. Think of a 'pixel' as the computed value at the intersection of four adjacent photosensors in that 2x2 matrix.
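A toy sketch of the simplest such algorithm (bilinear demosaicing) may help. This assumes an RGGB layout and made-up intensity numbers; real cameras use far more sophisticated methods, but the principle of averaging same-colored neighbors is the same:

```python
# Each cell of `mosaic` holds one intensity; its color is implied by
# its position in the RGGB layout.

def bayer_color(r, c):
    """Which color filter sits at row r, column c (RGGB layout)."""
    if r % 2 == 0 and c % 2 == 0:
        return "R"
    if r % 2 == 1 and c % 2 == 1:
        return "B"
    return "G"

def demosaic_pixel(mosaic, r, c):
    """Estimate the full (R, G, B) value at an interior photosite by
    averaging same-colored readings in the 3x3 window around it."""
    readings = {"R": [], "G": [], "B": []}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            readings[bayer_color(rr, cc)].append(mosaic[rr][cc])
    return tuple(sum(v) / len(v)
                 for v in (readings["R"], readings["G"], readings["B"]))

# 4x4 raw mosaic of intensities (illustrative numbers only).
mosaic = [
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [14, 24, 16, 26],
    [34, 44, 36, 46],
]

print(demosaic_pixel(mosaic, 1, 1))  # -> (13.0, 26.5, 40.0)
```

Note how the blue reading at (1, 1) is kept as-is, while the red and green values there are borrowed from neighbors - which is exactly why edge photosites, lacking a full set of neighbors, cannot be resolved into pixels.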

You can see that the outer row of photosensors on the digital sensor cannot be resolved into pixel information, because those edge photosensors do not have a full set of neighbors available for the calculation. Hence, all Bayer-sensor cameras (every dSLR and digicam except for the Sigma SD9/SD10 and the Polaroid x530) have fewer image pixels than photosensor locations.

(BTW, that is why I like the Sigma SD10... every image pixel directly reflects what was seen by the sensor, there is no interpolation, and the resultant 3.4 MP image is as sharp as that from 6 to 8 MP Bayer-sensor dSLRs.)

Concerning megapixels and the largest print you can get from a camera: as Bob mentioned, you can interpolate your image to at least twice its original dimensions. A 3000 x 2000 pixel image can be interpolated to 6000 x 4000 without a noticeable quality loss at a viewing distance of more than a few inches, so you could produce a 13x20-inch print with excellent quality. In practice, you can often go up to 250% or more before quality drops off noticeably.
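The interpolation itself is conceptually simple. Here is a minimal sketch of 2x bilinear upsampling on a grayscale grid - the same idea editing software applies to each color channel (real packages like Photoshop or Genuine Fractals use more elaborate resampling, so this is only an illustration of the principle):

```python
def upscale_2x(img):
    """Return a grid with twice the rows and columns, filled in by
    bilinear interpolation (edges clamp to the nearest source pixel)."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(2 * h):
        sr = min(r / 2, h - 1)               # fractional source row
        r0, r1 = int(sr), min(int(sr) + 1, h - 1)
        fr = sr - r0
        row = []
        for c in range(2 * w):
            sc = min(c / 2, w - 1)           # fractional source column
            c0, c1 = int(sc), min(int(sc) + 1, w - 1)
            fc = sc - c0
            top = img[r0][c0] * (1 - fc) + img[r0][c1] * fc
            bot = img[r1][c0] * (1 - fc) + img[r1][c1] * fc
            row.append(top * (1 - fr) + bot * fr)
        out.append(row)
    return out

small = [[0, 10], [20, 30]]
big = upscale_2x(small)
print(len(big), len(big[0]))  # 4 4
```

The new in-between pixels are weighted blends of their neighbors, which is why modest upsizing looks smooth rather than blocky - and also why no amount of interpolation adds real detail.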


April 07, 2006

 

NAVNITH KRISHNAN
  Many thanks to Bob and John for the excellent clarification.
navnith


May 25, 2006

 
wildlifetrailphotography.com - Donald R. Curry

  John,

I use a Coolscan V for scanning transparencies. Does this same concept apply to scanning?


November 08, 2006

 