A portrait of Abraham Lincoln, which ranks among the earliest known photo fakes, was created only a few decades after photography’s invention. Ever since, people have debated what actually makes a photo: what is genuine, what is phoney, and how much manipulation is too much. That conversation is about to get messier than ever as we enter an era when AI-powered products are pervasive and simple to use. And with the Pixel 8, Google has completely reframed the question of “what is a photo.”
Google has been steering smartphone photography in this direction for years. The company pioneered computational photography, in which smartphone cameras do extensive back-end processing to produce images with more information than the camera sensor can capture in a single shot. Most contemporary smartphones use a technique similar to Google’s HDR Plus: they snap a burst of photographs and merge them into one computationally constructed picture, combining highlights, shadows, detail, and other data into a more perfect image. It’s standard practice now, but it also means a standard smartphone shot is already more than simply “a photo” – it’s a combination of several of them.
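The burst-merge idea at the heart of HDR Plus can be sketched in a few lines. This is a deliberately naive illustration, not Google’s pipeline: real implementations align tiles across frames and weight each pixel to reject motion, but plain averaging already shows why merging a burst beats a single exposure.

```python
import numpy as np

def merge_burst(frames):
    """Naively merge a burst of aligned exposures by per-pixel averaging.

    Averaging N frames shrinks random sensor noise by roughly sqrt(N),
    which is the core reason burst photography recovers detail that a
    single shot cannot.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# A synthetic flat "scene" plus independent per-frame sensor noise:
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128.0)
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]
merged = merge_burst(burst)
```

Real pipelines also merge in the raw sensor domain before tone mapping, which is why the combined result can hold both highlights and shadows.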
The Pixel 8 series muddies matters further by changing how easily a shot can be altered after it is taken. It offers simple editing features powerful enough to produce an entirely different image from the one you captured when you pressed the shutter, and those tools are promoted as an essential part of the phone and its camera. Photo editing tools have always existed, but the Pixel 8 blurs the line between taking and altering photos in novel and significant ways.
Best Take, Magic Editor, and Magic Eraser
Start with Magic Eraser, a two-year-old feature that Google has updated with generative AI for the Pixel 8 Pro. The original version could remove unwanted elements from photographs by “blending the surrounding pixels” – taking what’s already there and smearing it over small objects and blemishes. The updated version, according to Google hardware chief Rick Osterloh, “generates completely new pixels” using generative AI; the result is no longer just your photo but your photo plus some AI-assisted painting. In one example, Google showed the tool removing an entire car and filling in details like the wooden slats behind it. In another image, Google essentially Thanos-snapped two people into nothingness, then used the new Magic Eraser to fill in the blanks.
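The original “blending the surrounding pixels” approach can be approximated with a toy inpainter. This is a minimal sketch, not Google’s algorithm: it fills each erased pixel with the average of its known neighbours, repeating until the hole is covered. Generative inpainting differs precisely in that it synthesises new content instead of reusing nearby pixels.

```python
import numpy as np

def blend_out(image, mask):
    """Toy content-aware fill: replace masked pixels with the mean of
    their unmasked 3x3 neighbours, iterating inward until the hole is
    filled. A stand-in for the pre-AI Magic Eraser idea, not the real
    implementation."""
    out = image.astype(np.float64).copy()
    hole = mask.astype(bool).copy()
    while hole.any():
        for y, x in zip(*np.nonzero(hole)):
            ys = slice(max(y - 1, 0), y + 2)
            xs = slice(max(x - 1, 0), x + 2)
            known = ~hole[ys, xs]
            if known.any():
                out[y, x] = out[ys, xs][known].mean()
                hole[y, x] = False
    return out
```

The smearing artefacts this produces on larger holes are exactly why Google moved to a generative model that invents plausible new pixels instead.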
The Pixel 8 also introduces a reality-bending feature called Best Take, which aims to fix the problem of someone blinking in a picture by letting you swap in their face from a recent shot. From what I saw during tests at Google’s event, it shows promise: it can pull off some natural-looking face swaps.
The big one, though, is Magic Editor. First unveiled at Google I/O in May, Magic Editor uses generative AI to make significant changes to large portions of an image. Simply tap and drag a person to reposition them. You can easily resize them. Magic Editor can even change the colour of the sky.
Where Magic Eraser and Best Take focus on ‘correcting’ photographs by removing blinks and intruders, Magic Editor fully embraces ‘changing’ them, turning an unsatisfying version of reality into a far cooler one. Two examples from a Google video: in one, a photo of a father tossing a baby into the air is altered to lift the child higher. In another, a person leaps for a slam dunk at a basketball hoop, and the edit removes the bench they used to boost themselves.
Editing your own photos is not necessarily bad; people have been doing it for a very long time. But Google’s tools encourage widespread use without any particular restrictions or thought about what that might entail. They give everyone access to powerful photo manipulation that was previously possible only with Photoshop expertise and hours of work. All of a sudden, practically every photo you take can be quickly altered into something false.
Others will be able to identify Pixel photographs that have been altered, but they’ll have to look for it. According to Google spokesperson Michael Marconi, photos modified with Magic Editor will carry metadata. That metadata is based on International Press Telecommunications Council (IPTC) technical standards, Marconi says, and Google is following the IPTC’s guidance for tagging photographs altered with generative AI.
In theory, this all means you will be able to inspect a Pixel image’s metadata to see whether AI was involved in making that baby appear so high in the air. (Marconi did not respond to questions about where this metadata is stored or whether it can be modified or removed, as conventional EXIF data can be.) Marconi says Google also adds metadata to images edited with Magic Eraser, including on earlier Pixel devices that support the feature.
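Google hasn’t said exactly which fields it writes, but the IPTC standards Marconi references define a “Digital Source Type” vocabulary with terms for AI-generated and AI-composited media. Here is a hedged sketch of what a checker might look like, assuming the metadata has already been parsed into a dictionary; the field name and the specific values Pixel photos use are assumptions.

```python
# IPTC Digital Source Type terms that indicate generative-AI involvement.
# These CV URIs are entries in the IPTC vocabulary; whether Pixel photos
# write exactly these values is an assumption here.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def flags_generative_ai(xmp_fields: dict) -> bool:
    """Return True if parsed XMP/IPTC metadata labels the image as made
    or edited with generative AI. `xmp_fields` stands in for the output
    of a real metadata reader such as exiftool (hypothetical field name)."""
    value = xmp_fields.get("Iptc4xmpExt:DigitalSourceType", "")
    return value in AI_SOURCE_TYPES
```

The catch, as the article notes, is that metadata like this is only as durable as the file that carries it.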
Using Best Take does not add metadata to images, according to Marconi, but several limitations on the tool may keep it from being abused. Best Take “uses an on-device face detection algorithm to match up a face across six photos taken within seconds of each other,” Marconi says, and it does not produce new facial expressions. The source photographs “require metadata that shows they were taken within a 10-second window,” so Best Take cannot pull facial expressions from images taken outside that window.
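Marconi’s 10-second constraint amounts to a simple timestamp filter. A sketch of the idea under stated assumptions – the function name and data layout are invented, and Google’s actual on-device implementation is undocumented:

```python
from datetime import datetime, timedelta

# The window Marconi describes for Best Take source photos.
WINDOW = timedelta(seconds=10)

def best_take_candidates(base_time: datetime, shots: dict) -> list:
    """Return the names of shots whose capture timestamps fall within
    10 seconds of the base photo: the only ones Best Take could draw
    faces from. `shots` maps shot name -> capture datetime."""
    return [name for name, t in shots.items() if abs(t - base_time) <= WINDOW]
```

A constraint like this keeps the feature closer to “picking a better frame” than to splicing a face in from an unrelated day.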
Small adjustments can genuinely improve a photograph and help convey the subject you’re trying to portray. And organisations that place a high priority on photo authenticity have already established very clear guidelines for what modifications are acceptable. The Associated Press, for instance, accepts “minor adjustments” such as cropping and cleaning up dust from camera sensors but forbids red-eye correction. Getty Images’ editorial coverage policy is to “strictly avoid any modifications to the image,” according to CEO Craig Peters. Groups like the Content Authenticity Initiative are developing cross-industry solutions for content provenance, which may make material produced by AI simpler to identify. Google, on the other hand, makes its tools extremely easy to use, and the simplicity of generative AI can be detrimental. Peters argues that in a world where generative AI can produce content at scale and distribute it with unprecedented breadth, reach, and speed, the squeeze on authenticity is immense. He also thinks the solution has to go beyond metadata, which today is easily stripped out, and that the makers of generative tools should invest in building the right answers into their products.
We are still in the early days of the AI photography era, and the tools are easy to use and their edits easy to conceal. Google’s latest upgrades make photo manipulation simpler than ever, and I expect Apple and Samsung to follow suit with comparable tools, which could fundamentally alter the notion of “what is a photo?” Increasingly, the question will become “is anything a photo?”