
How much resolution does the human eye see?


JFM

Recommended posts


Much more than 576 megapixels, if the source is right.

 

Hidden text (select the content of the field to see it):

Notes on the Resolution of the Human Eye

 

What is the resolution of the human eye, or eye plus brain combination in people? There seems to be a lot of different numbers quoted.

 

Visual acuity is defined as 1/a, where a is the angular size, in arc-minutes, of the finest detail x that can be resolved. The problem is that various researchers have defined x to be different things. However, when the different definitions are normalized to the same thing, the results agree. Here is the problem:

 

Usually a grating test pattern is used, so x refers to some feature of the grating. Different researchers have used a single line, a line pair, and a full cycle as the definition of x, and thus report seemingly different values for visual acuity and resolution. It is easy to recompute the acuity to a common standard when the study states which definition was used.

 

So when we define x to be a line pair, as is normally done in modern optics, the 1/a value is 1.7 under good lighting conditions. This was first determined by König (1897 [yes, that's 1897], 'Die Abhängigkeit der Sehschärfe von der Beleuchtungsintensität,' S. B. Akad. Wiss. Berlin, 559-575). See also Hecht (1931, 'The Retinal Processes Concerned with Visual Acuity and Color Vision,' Bulletin No. 4 of the Howe Laboratory of Ophthalmology, Harvard Medical School, Cambridge, Mass.). A summary plot of visual acuity as a function of brightness for numerous subjects appears in Pirenne (1967, "Vision and the Eye," Chapman and Hall, London, page 132).

 

The acuity = 1.7 when the light level is greater than about 0.1 Lambert. A Lambert is a unit of luminance equal to 1/pi candela per square centimeter. A candela is one sixtieth the intensity of one square centimeter of a blackbody at the solidification temperature of platinum. A point source of one candela intensity radiates one lumen into a solid angle of one steradian according to the photonics dictionary http://www.photonics.com/dictionary.

 

The acuity of 1.7 corresponds to 0.59 arc minute PER LINE PAIR. I can find no other research that contradicts this in any way.

 

Thus, one needs two pixels per line pair, and that means pixel spacing of 0.3 arc-minute!
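As a quick sanity check on the numbers above, here is a small Python sketch of the same arithmetic (the 1.7 acuity figure is from the text; the variable names are just my own):

```python
# From visual acuity (line pairs per arc-minute) to the pixel pitch used below.
ACUITY_LP_PER_ARCMIN = 1.7                            # Konig/Hecht value in good light

arcmin_per_line_pair = 1.0 / ACUITY_LP_PER_ARCMIN     # ~0.59 arc-minute per line pair
pixel_pitch_arcmin = arcmin_per_line_pair / 2.0       # two pixels per line pair -> ~0.29 (~0.3 with the text's rounding)

print(f"{arcmin_per_line_pair:.2f} arc-minutes per line pair")
print(f"{pixel_pitch_arcmin:.2f} arc-minutes per pixel")
```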

 

A related figure, 0.7 arc-minute, corresponds to the resolution at which a spot is seen as a non-point source. Again you need two pixels to say it is not a point, so the pixels must be 0.35 arc-minute (or smaller) at the limit of visual acuity, in close agreement with the line-pair result. Line pairs are easier to detect than spots, so this too is consistent, but it is closer than I thought it would be.

 

Consider a 20 x 13.3-inch print viewed at 20 inches. The print subtends an angle of 53 x 35.3 degrees, thus requiring 53*60/0.3 = 10600 by 35*60/0.3 = 7000 pixels, for a total of ~74 megapixels to show detail at the limit of human visual acuity. The 10600 pixels over 20 inches corresponds to 530 pixels per inch, which would indeed appear very sharp. Note that in a recent printer test I showed a 600 ppi print had more detail than a 300 ppi print on an HP1220C printer (1200x2400 print dots). I've conducted some blind tests where a viewer had to sort four photos (150, 300, 600 and 600 ppi prints); the two 600 ppi prints were made at 1200x1200 and 1200x2400 dpi. So far everyone has gotten the correct order from highest to lowest ppi (including people up to age 50). See: http://www.clarkvision.com/imagedetail/printer-ppi
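The print example can be reproduced in a few lines of Python; this is just a sketch of the calculation described above, using the angles and the 0.3 arc-minute pixel pitch quoted in the text:

```python
def pixels_for_angle(angle_deg, pixel_pitch_arcmin=0.3):
    """Pixels needed to span a given angle at the stated angular pixel pitch."""
    return angle_deg * 60 / pixel_pitch_arcmin

# The 20 x 13.3 inch print at 20 inches subtends roughly 53 x 35 degrees (values from the text).
w_px = pixels_for_angle(53)   # ~10600
h_px = pixels_for_angle(35)   # ~7000
print(f"{w_px:.0f} x {h_px:.0f} pixels, ~{w_px * h_px / 1e6:.0f} megapixels")
print(f"{w_px / 20:.0f} pixels per inch across the 20-inch width")   # ~530 ppi
```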

 

How many megapixels equivalent does the eye have?

 

The eye is not a single frame snapshot camera. It is more like a video stream. The eye moves rapidly in small angular amounts and continually updates the image in one's brain to "paint" the detail. We also have two eyes, and our brains combine the signals to increase the resolution further. We also typically move our eyes around the scene to gather more information. Because of these factors, the eye plus brain assembles a higher resolution image than possible with the number of photoreceptors in the retina. So the megapixel equivalent numbers below refer to the spatial detail in an image that would be required to show what the human eye could see when you view a scene.

 

Based on the above data for the resolution of the human eye, let's try a "small" example first. Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be

90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).

At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let's be conservative and use 120 degrees for the field of view. Then we would see

120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.

The full angle of human vision would require even more megapixels. This kind of image detail requires a large format camera to record.
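The 324 and 576 megapixel figures above follow from the same arithmetic; here is a small self-contained Python sketch, assuming the 0.3 arc-minute pixel pitch from the text (the function name is my own):

```python
def fov_megapixels(h_fov_deg, v_fov_deg, pixel_pitch_arcmin=0.3):
    """Megapixels needed to cover a field of view at the given angular pixel pitch."""
    h_px = h_fov_deg * 60 / pixel_pitch_arcmin
    v_px = v_fov_deg * 60 / pixel_pitch_arcmin
    return h_px * v_px / 1e6

print(f"90 x 90 degrees:   {fov_megapixels(90, 90):.0f} megapixels")    # 324
print(f"120 x 120 degrees: {fov_megapixels(120, 120):.0f} megapixels")  # 576
```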

Edited by MrLee

I only skimmed parts of the text. It seems the author only considers the optical limitations of the lens, i.e. the diffraction that determines the resolution of the eye. He/she then takes this resolution and multiplies it by the "area" the eye sees to find how many "megapixels" the eye can register.

 

This is a rather crude simplification of the real picture. There are many things that affect how small the points are that one can still tell apart. The fundamental one is set by diffraction as the light passes through the pupil. This limit is approximately:

a ≈ λ·L / D

where λ is the wavelength, L is the distance between the point and the aperture (the pupil), and D is the diameter of the pupil. In daylight the pupil diameter is about 2 mm, and the wavelength the eye is most sensitive to is about 550 nm. We can therefore distinguish two points that are about 225 µm apart (225 millionths of a metre). In the dark the pupil is larger and the fundamental limit goes down.
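To illustrate the formula numerically, here is a minimal Python sketch. The post does not state which distance L lies behind its 225 µm figure, so L is left as an input and the 1 m used below is only an example value; the Rayleigh criterion would also add a factor of about 1.22 that the approximate formula omits.

```python
def diffraction_limited_separation(wavelength_m, distance_m, pupil_diameter_m):
    """Smallest separation a ~ lambda * L / D of two points that can still be resolved
    (approximate formula from the post; Rayleigh adds a factor of ~1.22)."""
    return wavelength_m * distance_m / pupil_diameter_m

# Daylight values from the post: ~550 nm light, ~2 mm pupil.
# The post does not give the distance L behind its 225 um figure; 1 m here is just an example.
a = diffraction_limited_separation(550e-9, 1.0, 2e-3)
print(f"~{a * 1e6:.0f} micrometres at 1 m viewing distance")   # ~275 um
```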

 

Furthermore, the lens is best at the centre. Towards the edges the diffraction pattern becomes blurred and becomes the limiting factor.

 

Furthermore, the density of rods/cones (the light-sensitive sensors in the retina) determines how much resolution one can perceive. This is what sets the limit on how close together two points can be and still be distinguished. These rods/cones are packed most densely at the centre of the visual field.

 

To do this really scientifically, one would have to integrate a complex function describing the "pixel density" across the whole visual field to find how many pixels we can see at the same time. As far as I remember, there are a few million rods/cones in a healthy eye. We therefore cannot register more points than that number...
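As a sketch of what such an integration could look like, here is a toy Python example. The density function below is purely illustrative (a made-up exponential fall-off, not anatomical data); only the structure of the calculation is the point.

```python
import math

def toy_receptor_density(ecc_deg, peak_per_deg2=5_000.0, scale_deg=15.0):
    """Placeholder 'pixel density' (receptors per square degree) falling off with
    eccentricity. The numbers are made up for illustration, not anatomical data."""
    return peak_per_deg2 * math.exp(-ecc_deg / scale_deg)

def integrate_receptors(max_ecc_deg=60.0, steps=600):
    """Numerically integrate the density over a circular visual field (flat-field approximation)."""
    total = 0.0
    d_ecc = max_ecc_deg / steps
    for i in range(steps):
        ecc = (i + 0.5) * d_ecc                  # midpoint eccentricity of this annulus
        ring_area = 2 * math.pi * ecc * d_ecc    # annulus area in square degrees
        total += toy_receptor_density(ecc) * ring_area
    return total

print(f"Toy model total: {integrate_receptors():,.0f} receptors")
```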


Analogue light signals are not based on a minimum resolution, unless it is a display built from smallest units (measured in pixels) that reproduces an analogue image. As far as I know, the eye does not divide an image into minimum pixels, but then again I don't know much about those rods.

