OEMs have been competing to build better mobile cameras ever since the first camera phone arrived on American soil in 2002. And since the advent of the smartphone era in 2008 and the rise of social media, a good phone camera has become essential for anyone trying to keep up with the Joneses.
But before mobile cameras became widespread, most photographs were taken with equipment far larger than the mobile devices that replaced it. How is that even possible? What is going on inside our phones to enable 100MP photos?
Three fundamental components make up any smartphone camera. The camera’s lens, which directs light into the device, is the first. The second component is the sensor that transforms the concentrated light photons into an electrical signal. The software that transforms those electrical signals into Instagram-ready photos is the third component.
Let us now take a closer look at each of these components.
The aperture, the size of the opening that lets light into the lens, controls how much light reaches the camera’s sensor. A bigger aperture is generally a good thing for a mobile camera because it gives the sensor more light to work with.
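To put rough numbers on it: aperture is usually quoted as an f-number (focal length divided by the diameter of the opening), and the light gathered scales with the inverse square of that number. Here is a quick sketch of the math; the f-numbers below are just illustrative:

```python
# Rough comparison of light-gathering ability from f-numbers.
# Illustrative only: real exposure also depends on sensor size,
# shutter time, and lens transmission.

def relative_light(f_number: float, reference: float = 2.0) -> float:
    """Light gathered relative to a reference f-number.

    The aperture area scales with 1 / f_number**2, so a smaller
    f-number (a bigger opening) lets in more light.
    """
    return (reference / f_number) ** 2

print(relative_light(1.8))  # ~1.23x more light than f/2.0
print(relative_light(2.8))  # ~0.51x, about half the light of f/2.0
```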
A smartphone camera lens is actually made up of several plastic lenses called elements. Because of how light behaves, different wavelengths (colors) are refracted (bent) at different angles as they pass through a lens, an effect known as chromatic aberration. This means the colors in your scene would be projected onto the camera sensor slightly misaligned.
All cameras need a series of lens elements to compensate for this and similar effects and to deliver a clear image to the sensor.
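If you want to see the effect in numbers, Snell’s law describes how much a ray bends as it enters the lens material. The refractive indices below are illustrative, not measured values for any real lens plastic:

```python
import math

def refraction_angle_deg(n_lens: float, incidence_deg: float) -> float:
    """Snell's law: how strongly a ray bends entering a lens.
    Because the refractive index n varies slightly with
    wavelength, each color exits at a slightly different angle,
    which is the misalignment described above."""
    theta1 = math.radians(incidence_deg)
    theta2 = math.asin(math.sin(theta1) / n_lens)
    return math.degrees(theta2)

# Blue light typically sees a slightly higher index than red
# light in the same material, so it bends slightly more.
print(refraction_angle_deg(1.501, 30.0))  # red  -> ~19.46 degrees
print(refraction_angle_deg(1.515, 30.0))  # blue -> ~19.27 degrees
```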
Focus is a crucial lens feature that has historically been abstracted away from the user. Although some camera apps let you adjust focus manually, most focus for you, using the image sensor, additional hardware such as a laser range finder, or a combination of the two.
Software-based, or passive, autofocus analyzes data from the image sensor to judge whether the image is in focus, often by measuring contrast, and adjusts the lens until it is.
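As a rough sketch of how contrast-based passive autofocus works, imagine sweeping the lens through its range and scoring each frame’s sharpness. Note that `capture_frame` below is a hypothetical camera-driver call, not a real API:

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Score focus quality as the variance of a Laplacian
    (second-derivative) filter: an in-focus frame has strong
    local contrast, so the variance is high."""
    img = image.astype(np.float64)
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def autofocus(capture_frame, lens_positions):
    """Sweep the lens through candidate positions and keep the
    one whose frame scores sharpest. capture_frame(pos) is a
    hypothetical driver call returning a grayscale 2-D array."""
    return max(lens_positions, key=lambda p: sharpness(capture_frame(p)))
```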
Active autofocus, on the other hand, uses that additional hardware to measure the distance between the phone and your subject.
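Laser autofocus is a good example: the phone fires a pulse of light and times its reflection, and since the speed of light is known, the distance falls out of the round-trip time. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """A time-of-flight sensor emits a pulse and times its
    reflection; the light travels to the subject and back,
    so the subject distance is half the round trip."""
    return SPEED_OF_LIGHT * seconds / 2

# A round trip of ~6.7 nanoseconds corresponds to a subject
# about one meter away.
print(distance_from_round_trip(6.7e-9))  # ~1.0 m
```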
The sensor is a tiny silicon wafer with a single job: converting photons (light) into electrons (electrical signals). Millions of photosites on the sensor’s tiny surface perform this photoelectric conversion.
If no photons reach a photosite, the sensor reads that pixel as black; if a lot of photons arrive, the pixel reads as white. How many distinct shades of gray a sensor can register in between is its bit depth.
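As a back-of-the-envelope sketch, you can think of each photosite as a bucket with a maximum capacity (its “full well”), and the bit depth as how finely the fill level is graded. The capacity figure below is made up for illustration:

```python
def quantize(electrons: int, full_well: int, bit_depth: int) -> int:
    """Map an electron count to a digital gray level.

    A photosite that collects no electrons reads as 0 (black);
    one filled to its full-well capacity reads as the maximum
    level. An N-bit sensor can distinguish 2**N levels.
    """
    levels = 2 ** bit_depth
    fraction = min(electrons, full_well) / full_well
    return round(fraction * (levels - 1))

# A 10-bit sensor distinguishes 1024 shades between black and white.
print(quantize(0, full_well=5000, bit_depth=10))     # 0    (black)
print(quantize(2500, full_well=5000, bit_depth=10))  # 512  (mid-gray)
print(quantize(5000, full_well=5000, bit_depth=10))  # 1023 (white)
```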
Each photosite is topped with a color filter that lets only red, green, or blue light through. The result is an image made up of red, green, and blue pixels of varying brightness, which complex algorithms must then turn into a full-color image.
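Most phone sensors arrange these filters in a Bayer pattern, with green sampled twice as often as red or blue because our eyes are most sensitive to green. A small sketch of the classic RGGB layout:

```python
def bayer_color(row: int, col: int) -> str:
    """Which color filter sits over a given photosite in a
    classic RGGB Bayer mosaic: green on every other site, with
    red and blue filling alternate rows."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Top-left 4x4 corner of the mosaic:
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```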
Because it’s unlikely that you carry a tripod around with you, smartphone manufacturers cram as much technology as they can into their devices to reduce camera shake. There are two main types of image stabilization: optical and electronic.
Optical image stabilization (OIS) uses a gyroscope to detect phone movement and tiny motors or electromagnets to shift the lens and sensor to compensate. OIS shines when there is less light available, because the image sensor needs more time to gather light and is therefore more vulnerable to shake.
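As a toy sketch of the control idea (not how any real OIS module is driven): integrate the gyroscope’s angular rate over one control tick, then shift the lens so the projected image stays put on the sensor.

```python
import math

def lens_shift_mm(angular_rate_rad_s: float, dt_s: float,
                  focal_length_mm: float) -> float:
    """Toy OIS correction: integrate the gyro's angular rate
    over one control interval to get a tilt angle, then shift
    the lens so the projected image stays put. For a rotation
    by angle theta, the image moves roughly
    focal_length * tan(theta) on the sensor."""
    angle = angular_rate_rad_s * dt_s
    return focal_length_mm * math.tan(angle)

# A 0.5 rad/s shake sampled at 1 kHz on a 5 mm lens calls for a
# shift of roughly 2.5 micrometers this control tick.
print(lens_shift_mm(0.5, 0.001, 5.0))  # ~0.0025 mm
```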
Electronic image stabilization (EIS) uses the phone’s accelerometer to detect motion, but instead of moving the camera’s physical components it shifts the individual frames or exposures. The final image or video comes out at a slightly lower resolution, because the exposures are anchored to the image’s content rather than to the sensor’s full frame, which requires cropping.
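A heavily simplified sketch of the idea: reserve a margin of pixels around a cropped window, then slide the window against the measured shake. The margin is exactly the resolution you give up.

```python
import numpy as np

def stabilize_frame(frame: np.ndarray, dx: int, dy: int,
                    margin: int = 32) -> np.ndarray:
    """Toy EIS: slide a cropped window against the measured
    shake (dx, dy in pixels, e.g. estimated from the gyro and
    accelerometer). The crop is why EIS costs resolution: the
    margin pixels exist only to give the window room to move."""
    h, w = frame.shape[:2]
    # Clamp the correction so the window stays inside the frame.
    dx = max(-margin, min(margin, dx))
    dy = max(-margin, min(margin, dy))
    top, left = margin + dy, margin + dx
    return frame[top:top + h - 2 * margin,
                 left:left + w - 2 * margin]
```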
Many recent phones employ both methods at once, a combination known as hybrid image stabilization. Pairing the two technologies gives you the best of both worlds, especially for video footage.
The image signal processor (ISP) transforms those 1s and 0s into an Instagram-worthy image after the image sensor has completed its task of converting the light given to it by the lenses into an electrical signal.
For all you shutterbugs out there, the data handed to the ISP is essentially a black-and-white image: a RAW image. The ISP must first restore the color data using the known configuration of the color filter matrix we discussed earlier. At that point we have an image, but each of its pixels is a shade of only red, green, or blue.
Next comes demosaicing. Here, the ISP adjusts each pixel’s color based on its neighbors. If an area contains more green and red pixels than blue ones, for instance, the demosaicing algorithm will render it yellow.
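Here is a heavily simplified bilinear demosaic for the RGGB layout sketched earlier; real ISPs use far smarter, edge-aware algorithms:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Fill in the two missing colors at every photosite by
    averaging the nearest neighbors that did sample them.
    Assumes the RGGB Bayer mosaic from the earlier sketch."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    # Scatter each sample into its own color plane.
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red sites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green sites, red rows
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green sites, blue rows
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue sites
    # Interpolate the gaps from each plane's known neighbors.
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4
    for c, k in ((0, k_rb), (1, k_green), (2, k_rb)):
        rgb[..., c] = convolve2d(rgb[..., c], k, mode="same",
                                 boundary="symm")
    return rgb
```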