When it comes to photography, the old saying that the camera never lies has never been entirely true. Today, with the prevalence of smartphones, editing photos on the go, from adjusting colors to fixing lighting, has become common practice.
Now, thanks to artificial intelligence (AI), a new generation of smartphone tools is pushing the boundaries of what it means to capture reality. Google’s latest smartphones, the Pixel 8 and Pixel 8 Pro, go even further than other devices by using AI to alter expressions in photographs.
We’ve all been there: in a group photo, someone looks away from the camera or forgets to smile. With Google’s phones, you can sift through your photos and seamlessly merge different expressions from past photographs of the same person using machine learning.
This feature, called Best Take, is powered by AI. The same devices also let users remove, move, and resize unwanted elements in a photo, from people to buildings, with the Magic Editor tool. The software uses deep learning, a machine-learning technique in which an algorithm, trained on millions of other photos, analyzes the surrounding pixels to determine what textures should fill the gaps left behind.
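The core idea of filling a gap from its surrounding pixels can be illustrated with a toy classical technique: iteratively replacing each missing pixel with the average of its known neighbors. This is only a crude stand-in for what a trained deep-learning model does far better; the grid values, function name, and iteration count below are invented for the demonstration.

```python
def inpaint(img, mask, iterations=50):
    """img: 2D list of floats; mask[r][c] is True where the pixel is missing."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy
    for _ in range(iterations):
        new = [row[:] for row in img]
        for r in range(h):
            for c in range(w):
                if not mask[r][c]:
                    continue
                # average the 4-connected neighbors inside the image bounds
                vals = [img[nr][nc]
                        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= nr < h and 0 <= nc < w]
                new[r][c] = sum(vals) / len(vals)
        img = new
    return img

# A flat gray patch (value 100) with one "hole" in the middle:
patch = [[100.0] * 5 for _ in range(5)]
patch[2][2] = 0.0
hole = [[False] * 5 for _ in range(5)]
hole[2][2] = True

filled = inpaint(patch, hole)
print(round(filled[2][2]))  # the hole converges to its surroundings: 100
```

Real systems like Magic Editor go much further, synthesizing plausible textures rather than merely smoothing, but the principle of inferring missing content from context is the same.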
A key aspect of these features is that they can be applied not only to pictures taken on the device but also to any pictures in the user’s Google Photos library. However, some observers have raised concerns about the ethical implications of such AI manipulation. Tech commentators and reviewers have called the features “icky” and “creepy,” and suggested they could further erode trust in online content.
Andrew Pearsall, a professional photographer and senior lecturer in Journalism at the University of South Wales, agrees that AI manipulation carries dangers, even from an aesthetic perspective. He warns of the risks of crossing the line into a realm where reality is no longer authentic.
Isaac Reynolds, the leader of the camera systems team at Google, emphasized the company’s commitment to the ethical consideration of its consumer technology. Reynolds countered the concerns by stating that features like Best Take are not “faking” anything. He explained that the final image represents a moment that may not have actually occurred but is the desired image assembled from multiple real moments. Reynolds claims this technology provides something unprecedented for smartphone cameras, allowing users to capture the expressions they wanted, realistically.
According to Professor Rafal Mantiuk from the University of Cambridge, the use of AI in smartphones should not be interpreted as an attempt to mimic real life. Rather, he argues, people want aesthetically pleasing images, not absolute realism, when they take pictures.
Because of the physical limitations of smartphone camera hardware, devices rely on machine learning algorithms to “fill in” information the sensor never captured. This underpins enhancements like zoom, low-light photography, and Google’s Magic Editor feature, which either adds elements to photos that were never there or swaps in elements from other photos, such as replacing a frown with a smile.
Image manipulation is not a new phenomenon; it has existed as long as photography itself. Nonetheless, the ease with which AI can augment reality is unprecedented. Samsung faced criticism earlier this year for utilizing deep learning algorithms to improve the quality of Moon photos taken with their smartphones.
The algorithm could generate usable images regardless of the quality of the original photo, further challenging the notion that a photo produced by a smartphone is an accurate representation of reality.
Reynolds states that the ethics of using AI in photography are complex and cannot be reduced to a single “line in the sand.” However, Google is proactive in addressing the topic: it adds metadata to its photos clarifying when AI functions have been employed, a move considered an industry standard.
Reynolds asserts that it is an ongoing conversation, with Google actively listening to user feedback. Google’s confidence in the acceptance of AI features is reflected in the prominent role they play in the company’s advertising campaign.
While these new AI technologies raise concerns about what counts as reality, Professor Mantiuk underscores the limitations of our own eyes. He points out that the human brain, too, reconstructs and infers missing information, which is why we perceive sharp and colorful images. So even though cameras may be accused of “faking stuff,” our brains perform a similar process in their own way.