The Second Gaze
A synthetic portrait of one of the most iconic faces in modern photography, and the questions it raises in 2025
Amsterdam, July 7, 2025
What happens when an image that shaped our collective imagination is seen from another angle? In this visual essay, I revisit the iconic portrait of Sharbat Gula using generative AI, not to recreate but to interrogate.
In 2020, my AI-generated portrait of Jesus sparked a global conversation about realism, mythology, and historical imagination. Around the same time, my reinterpretation of Lady Liberty was fact-checked by Reuters and Snopes as part of a broader public debate on synthetic imagery and misinformation, while my portrait of Napoleon appeared on the cover of Le Figaro magazine, stirring further discussion about visual history and representation. These early works reached wide audiences at a time when generative image-making was still unfamiliar to most, and they helped shape the emerging cultural discourse around AI and authenticity.
I have spent the past six years working intensively with generative AI, watching the technology evolve rapidly. Working under the name Ganbrood, I have developed a visual practice that blends synthetic realism with conceptual storytelling. In 2025, generative image-making is no longer fringe or speculative. It is everywhere. The public has not only seen the medium, it has been shaped by it. The conversation around synthetic images has grown more urgent, layered, and critical.
What once felt like a fascinating tool for exploring new creative territory is now embedded in discussions about authorship, appropriation, power dynamics, and the increasingly porous boundary between fiction and documentation. The sense of wonder has given way to more difficult questions: how do we make images, why do we make them, and who gets to decide what they mean?
As this conversation deepened, so did my approach. The images alone no longer carry the full weight of the work. Intention and context now form the backbone of my practice. My goal is no longer just to make striking visuals, but to stage visual arguments: deliberate spaces where uncomfortable ideas can be held.
My most recent work, The Second Gaze, continues in that direction. It takes as its source material one of the most recognized portraits in photography: the 1984 image of Sharbat Gula, known widely as “the Afghan Girl.” Using generative AI, I have recreated her likeness from a shifted perspective. This is not homage. It is a confrontation. A deliberate, constructed response that challenges how we look and why we look.
As someone who spent years working as a documentary photographer, I do not take this lightly. This image is not neutral, and it is not safe.
The Second Gaze – Ganbrood (2025)
This portrait is not a replica. It is a visual echo.
It references one of the most iconic photographs of the twentieth century: the green-eyed girl captured by Steve McCurry in 1984 in a Pakistani refugee camp. That image has come to symbolize both conflict and beauty, the haunting intersection of human suffering and visual myth-making. But it also reveals the power of a photograph to consume and define the person it captures.
Sharbat Gula’s face, shown without her knowledge or permission, became a global icon. According to a 2002 interview, she was angered by its publication. In her Pashtun culture, it is considered highly inappropriate for a young woman to reveal her face publicly, much less be photographed by an outsider. Her identity was subsumed by a story she had no say in telling.
As a former documentarian, I am familiar with these ethical fault lines. Taking a person’s image, especially in situations of vulnerability, can teeter between storytelling and exploitation. The interests at stake are rarely limited to the subject. The photographer and the institutions that circulate the work are also implicated.
My generative reimagining carries its own ethical baggage. This image was not created with a camera, but with a neural network trained on countless photographs, many sourced without the consent of those pictured. The result is a new kind of appropriation. It inherits the complications of the original and adds another layer of complexity.
I chose to render her from a changed point of view, as if she has turned her face away from that first, invasive gaze. She is no longer the silent, front-facing subject that fixed her identity in our minds. She is now something more ambiguous. Less illustrative, more unresolved.
Working with AI every day, I constantly question my role in this process. The tools I use are powerful, compelling, and filled with contradictions. Where does artistic influence end and digital mimicry begin? What happens to authorship when cultural memory is reassembled by an algorithm?
The unresolved questions of art history and representation do not disappear with new tools. They shift shape. In the age of generative image-making, they return with more urgency.
We live in a moment when the foundations of art, journalism, and visual culture are under review. Feminist theory, post-colonial critique, and digital rights activism have called into question many assumptions that once passed as neutral. Generative AI does not resolve these tensions. It intensifies them. It mirrors our past and forces a reexamination of how meaning, authorship, and creative control are distributed.
There is something undeniably seductive about these tools. They let us peer into imagined moments, offering images that feel plausible even if they are entirely speculative. In that sense, generative AI allows us to adopt the perspective of a ghostly, time-traveling photographer. Someone who returns from an impossible past with images that blur memory and invention.
But this image is not an attempt at revision. It is a response.
I am fully aware of the implications of reworking Sharbat Gula’s likeness. I do not deny the ethical conflict it carries. I take responsibility for that choice. Artists should be the ones who step into discomfort first. That is our role: to work at the edge of certainty, to engage with tension, to provoke where necessary. Like documentary filmmakers or war photographers, we are called to seek the friction others avoid.
Many legitimate voices are shaping this conversation. Technologists, ethicists, artists, and journalists all contribute crucial insights. My role is not to watch from the sidelines, but to engage directly with the visual language of this moment.
Her face, once circulated across the globe without her consent, returns here not to be admired but to pose a question.
Note to readers: I welcome reflection, critique, and conversation. If this piece resonates with you, challenges you, or raises further questions, feel free to reply, comment, or share. The ethics of synthetic imagery is not a closed book. It is one we are all writing together.
I have the National Geographic magazine with that original photo on the cover. It marked me in many ways. My memory of that image is simple and explicit: beauty and defiance. What has happened outside my brief experience of it is different. Context is key, and the narrative has been heavily contrived over the years.
Fascinating. I heard a very knowledgeable researcher recently say that he felt the word “photography” was no longer useful in the age of AI. The border between photographs and generative images is fuzzy, to say the least. Do you consider AI images that look like they might be photographs (that are ‘photographic’) to be a separate category of image or part of the photomontage tradition?