New research says that if you want to see something better, you shouldn’t look directly at it. At least, that’s what our eyes seem to believe.
Researchers at the University of Bonn, Germany, report that when we look directly at something, we’re not using our eyes to their full potential. When we do this, they explain, light doesn’t hit the center of our foveas, where photoreceptors (light-sensitive cells) are most densely packed. Instead, the light (and thus the area where images are perceived) is shifted slightly upwards and towards the nose relative to this central, highly sensitive spot.
While this shift doesn’t seem to impair our perception in any meaningful way, the findings will help improve our understanding of how our eyes work, and of how to fix them when they don’t.
I spy with my little eye
“In humans, cone packing varies within the fovea itself, with a sharp peak in its center. When we focus on an object, we align our eyes so that its image falls exactly on that spot — that, at least, was the general assumption so far,” says Dr. Wolf Harmening, head of the adaptive optics and visual psychophysics group at the Department of Ophthalmology at the University Hospital Bonn and corresponding author of the paper.
The team worked with 20 healthy subjects from Germany, who were asked to fixate on (look directly at) different objects while monitoring how light hit their retinas using “adaptive optics in vivo imaging and micro-stimulation”. An offset between the point of highest photoreceptor density and where the image formed on the retina was observed in all 20 participants, the authors explain. They hypothesize that this shift is a natural adaptation that helps to improve the overall quality of our vision.
Our eyes function similarly to a camera, but they’re not quite the same. In a digital camera, the light-sensitive elements are distributed evenly across the sensor, each with the same size, properties, and operating principle. Our eyes instead use two types of cells to pick up light: rod and cone photoreceptors. Rods are useful for seeing motion in dim light, while cones are suited to picking out colors and fine detail in good lighting conditions.
Unlike in a camera, however, the photosensitive cells in our retinas aren’t evenly distributed; they vary significantly in density, size, and spacing. The fovea, a specialized central area of the retina that produces our sharpest vision, packs around 200,000 cone cells per square millimeter. At the edges of the retina, this can fall to around 5,000 per square millimeter, 40 times less dense. In essence, our eyes produce high-definition images in the middle of our field of view and progressively less-defined images towards the edges. Our brains fill in the missing information around the edges to make it all seem seamless, but if you try to pay attention to something at the edge of your vision, you’ll notice how little detail you can actually make out there.
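To make those numbers concrete, here is a quick back-of-the-envelope sketch in plain Python. It uses only the density figures quoted above; the square-root relation between density and cone spacing is a standard geometric approximation for roughly uniform tilings, not something computed in the paper itself:

```python
import math

# Density figures quoted in the article (cones per square millimeter).
foveal_density = 200_000      # center of the fovea
peripheral_density = 5_000    # edge of the retina

# How many times denser the foveal center is than the periphery.
density_ratio = foveal_density / peripheral_density
print(f"Density ratio: {density_ratio:.0f}x")          # prints "Density ratio: 40x"

# If cones tile the surface roughly uniformly, center-to-center spacing
# scales as 1/sqrt(density), so peripheral cones sit about sqrt(40) = 6.3x
# farther apart than foveal ones.
spacing_ratio = math.sqrt(density_ratio)
print(f"Spacing ratio: {spacing_ratio:.1f}x coarser")  # prints "Spacing ratio: 6.3x coarser"
```

The spacing ratio is why peripheral vision feels blurry rather than merely dimmer: a 40-fold drop in density translates into cones sitting only about six times farther apart, which still coarsens the finest detail the periphery can resolve.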
It would, then, seem very counterproductive to have the image of whatever we’re looking at form away from the point of peak cone density. Wouldn’t we want the best view of whatever we’re, you know, viewing? The team explains that this is likely an adaptation to the way human sight works: both eyes, side by side, peering out in the same direction.
All 20 participants in the study showed the same shift: slightly upwards and towards the nose relative to the point of peak cone density. The offset was larger for some participants and smaller for others, but its direction was the same in all of them, and it was symmetric between each participant’s two eyes. Follow-up examinations carried out one year after the initial trials showed that these focal points had not moved in the meantime.
“When we look at horizontal surfaces, such as the floor, objects above fixation are farther away,” explains Jenny Lorén Reiniger, a co-author of the paper. “This is true for most parts of our natural surrounds. Objects located higher appear a little smaller. Shifting our gaze in that fashion might enlarge the area of the visual field that is seen sharply.”
“The fact that we were able to detect [this offset] at all is based on technical and methodological advances of the last two decades,” says Harmening.
One other interesting conclusion the authors draw is that, despite the huge number of light-sensitive cells our retinas contain, we only use a small fraction of them (around a few dozen) when fixating on a single point. What’s more, it’s probably the same cells throughout our lives, as the focal point doesn’t seem to move over time. Beyond being a fun bit of trivia, this is valuable for researchers trying to determine how best to repair eyes and restore vision following damage or disease.
The paper “Human gaze is systematically offset from the center of cone topography” has been published in the journal Current Biology.