The contrast between how we, as humans, perceive the world with the senses readily available to us and how the technology we create does the same thing greatly interests me. Here is a comparison of both methods of viewing and ‘capturing’ images.
SENSATION & PERCEPTION
Sensation: Awareness of properties of an object or event; it only happens when specific receptors are stimulated.
Physical energy hits a sense organ, triggering receptor cells to send impulses to the brain and causing you to become aware of the world around you.
Perception: The second step of vision; occurs after organization and interpretation of sensory signals as part of an event.
Organize essential visual elements into units; recognition and identification of objects/images.
Actual physical events vs. experience of events (differences/changes within a stimulus).
Threshold: The point where a stimulus is strong enough to be noticed.
Absolute Threshold: The smallest amount of a stimulus needed to detect that the stimulus is present.
JND (Just Noticeable Difference): The smallest difference in a stimulus’ property needed for the observer to notice a change.
Weber’s Law: A constant percentage of a magnitude change is necessary to detect a change.
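As a rough sketch, Weber’s Law says the just noticeable difference grows in proportion to the stimulus magnitude, ΔI = k · I. The 2% Weber fraction below is an illustrative value of my own, not one from these notes:

```python
def jnd(intensity, weber_fraction):
    """Smallest detectable change for a stimulus of a given magnitude,
    per Weber's Law: delta_I = k * I."""
    return weber_fraction * intensity

# Illustrative Weber fraction of 2% (an assumed value).
k = 0.02
print(jnd(100, k))   # small weight -> small detectable change
print(jnd(1000, k))  # heavier weight -> proportionally larger change
```

The point of the constant percentage: on a 100 g weight you need about a 2 g change to notice, but on a 1000 g weight you need about 20 g.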
Signal Detection Theory: This theory explains how we detect signals that are embedded in noise in some situations.
Sensitivity: In Signal Detection Theory, the threshold level for distinguishing between a stimulus and noise; the lower the threshold, the greater the sensitivity.
Bias: In Signal Detection Theory: A person’s willingness to report noticing a stimulus.
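Sensitivity and bias are commonly estimated from an observer’s hit rate and false-alarm rate. The d′ and criterion formulas below are the standard textbook measures, not something taken from these notes:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: separation between the signal and noise distributions,
    in standard-deviation units (d' = z(hits) - z(false alarms))."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Bias: positive values mean a conservative observer
    (reluctant to report a stimulus), negative means liberal."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(false_alarm_rate))

print(d_prime(0.84, 0.16))   # roughly 2: signal is fairly easy to detect
print(criterion(0.84, 0.16)) # roughly 0: no bias either way
```

Two observers with the same sensitivity can still give different answers if their bias differs, which is why the theory separates the two.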
Amplitude: The height of the peaks in a light wave.
Frequency: The rate at which light waves move past a given point.
Wavelength: The distance between successive peaks of a light wave (shorter wavelengths have higher frequencies).
STRUCTURES OF THE EYE
Transduction: The process where physical energy is converted (by a sensory neuron) into neural impulses.
Pupil: The opening in the eye through which light passes.
Iris: The circular muscle that adjusts the size of the pupil.
Cornea: Transparent covering of the eye which serves partly to focus the light onto the back of the eye.
Accommodation: Occurs when muscles adjust the shape of the lens so that it focuses light on the retina from objects at different distances.
Retina: A sheet of tissue at the back of the eye containing cells that convert light into neural impulses.
Fovea: The small, central region of the retina with the highest density of cones and the highest resolution.
Rods: Retinal cells that are very sensitive to light but register only shades of gray.
Cones: Retinal cells that each respond most strongly to one of three wavelength ranges of light and play a key role in producing color vision.
Optic Nerve: The large bundle of nerve fibers carrying impulses from the retina into the brain.
TRICHROMATIC THEORY OF COLOR VISION:
The theory that color vision arises from the combinations of neural impulses from three different kinds of sensors, each of which responds maximally to a different wavelength.
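A toy sketch of this idea: each of the three sensor types responds maximally near one wavelength, and the brain receives only the resulting triplet of responses. The Gaussian curves are a simplification of my own, and the peak wavelengths are only approximations of real S, M, and L cone peaks:

```python
import math

# Approximate peak sensitivities (~420, 530, 560 nm for S, M, L cones).
PEAKS = {"S": 420, "M": 530, "L": 560}

def cone_response(wavelength_nm, peak_nm, width=60.0):
    """Gaussian sensitivity curve centered on the cone's preferred
    wavelength -- a simplification, not a real cone spectrum."""
    return math.exp(-(((wavelength_nm - peak_nm) / width) ** 2))

def encode(wavelength_nm):
    """The brain sees only this triplet of responses, not the wavelength."""
    return {name: round(cone_response(wavelength_nm, peak), 2)
            for name, peak in PEAKS.items()}

print(encode(560))  # strong L, moderate M, almost no S response
```

A single wavelength produces a distinctive pattern across the three cone types, and color vision arises from comparing those responses.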
OPPONENT PROCESS THEORY OF COLOR VISION:
The theory that if a color is present, it causes the cells that register it to inhibit perception of its complementary color (e.g., red vs. green).
Opponent cells: Cells that pit the colors in a pair against each other (blue/yellow & red/green).
Most color blindness is present from birth.
Some people cannot distinguish between certain hues; in more severe cases, they cannot see hue at all.
Researchers have found that people with the more common types of color blindness possess genes that produce similar photopigments in their cones, so their cones do not work as they should.
In rare cases, people become color blind after severe damage to certain regions of the brain.
How it Works: While different, digital cameras work based on many of the same principles as human vision. When you push the button to take a digital picture, an aperture opens and allows light to stream in. Instead of exposing film as outdated cameras did, a Charge-Coupled Device (CCD) or a CMOS image sensor converts the light into electrical signals.
The sensor breaks the light up into millions of pixels and measures the brightness and color of each one. Each pixel is then stored as a number; essentially, a digital photograph is simply a massive string of numbers.
Once a picture is converted and stored in digital form, numerous options open up. You can download and transfer images, edit them, upload them to websites or storage devices, and text them to your friends. When you edit a picture, you are merely adjusting the numbers that represent the pixels. For example, to increase the brightness of an image, a program scans all the numbers and increases them by, say, 20%.
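That brightness adjustment can be sketched in a few lines. The 8-bit 0–255 pixel range and the clamping step are my assumptions about a typical image format, not details from these notes:

```python
def brighten(pixels, factor=1.2):
    """Increase brightness by scaling every pixel value by `factor`,
    clamping results to the valid 0-255 range of an 8-bit channel."""
    return [min(255, round(p * factor)) for p in pixels]

# One row of pixel values; note the 250 clamps at 255 instead of hitting 300.
row = [10, 100, 200, 250]
print(brighten(row))  # [12, 120, 240, 255]
```

Editing really is just arithmetic on the stored numbers; the clamp is what keeps bright areas from wrapping around or overflowing.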
Components: Digital cameras are typically composed of the same general parts. Apart from the typical battery, memory card slot, and USB connectors, there are flash capacitors, LEDs (indicating operations), a lens (similar to the lens of the human eye), focusing mechanisms, image sensors (light-detecting microchip), and a processor chip (digital brain).
Comparison: When comparing human vision with the vision of digital cameras, it is vital to know how each adjusts. While human vision can dynamically adjust based on the subject matter, a camera captures a single fixed image. It is also interesting that what we see is our brain’s reconstruction of objects based on the input from our eyes, not the actual light arriving from the scene.
The three main differences between human and digital vision are the angle of view, resolution and detail, and sensitivity and dynamic range.
With cameras, the angle of view is governed by the focal length and sensor size, whereas the human eye has a focal length of around 22 mm. However, other factors are involved, such as the curvature of the back of the eyeball and the fact that we typically see with two eyes.
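For the camera side, the relationship between focal length, sensor size, and angle of view can be sketched with the standard thin-lens formula AOV = 2 · atan(w / 2f). The 36 mm full-frame sensor and 50 mm lens below are illustrative values of my own:

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees: AOV = 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A full-frame sensor (36 mm wide) behind a 50 mm lens (assumed example).
print(round(angle_of_view(36, 50), 1))  # ~39.6 degrees
```

A shorter focal length or a larger sensor widens the view; the eye has no such interchangeable parts, which is why its curved retina and paired eyes matter so much to its effective field of view.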
Closing Thoughts: As technology grows, digital vision advances as well. However, even with 4K and 8K resolutions, a still image does not let the human eye choose in advance which regions to focus on; because of this, the image must contain the maximum detail available everywhere so that we can focus on and perceive all of it. Although the human eye is a marvel of evolution, the human brain can devise, and probably already has devised, ways to surpass the eye through new advancements in camera and display technology.
Categories: Random Thoughts