
But what does an image do? We argue that Lena was not passive. By circulating repeatedly through labs, textbooks, and benchmark suites, she normalized three dangerous assumptions: (1) that a single, high-contrast portrait of a white woman in a feathered hat is a sufficient stress test for all visual tasks; (2) that the origin of data is irrelevant to its mathematical utility; and (3) that the pleasure of seeing a conventionally attractive face is an acceptable substitute for rigorous, diverse sampling. Why did Lena persist? Technically, her image contains features prized by early compression researchers: a smooth skin region (low frequency), sharp edges along the hat's feather (mid frequency), and high-frequency detail in the hair and fabric. She was a convenient "stress test" for transform-based codecs such as JPEG and wavelet compressors.
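The spectral profile described above can be made concrete. The following sketch decomposes an image's power spectrum into low, mid, and high frequency bands; the band cutoffs are illustrative choices, not values from the original compression literature.

```python
import numpy as np

def band_energy(image, low_cut=0.1, high_cut=0.4):
    """Split a grayscale image's spectral energy into low/mid/high bands.

    `low_cut` and `high_cut` are illustrative radii (fractions of the
    maximum radial frequency), not standard thresholds.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial frequency: 0 at the center (DC), 1 at the corners.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    total = power.sum()
    return {
        "low": power[r < low_cut].sum() / total,
        "mid": power[(r >= low_cut) & (r < high_cut)].sum() / total,
        "high": power[r >= high_cut].sum() / total,
    }
```

Run on a smooth gradient versus white noise, the function shows why a single image's band mix matters: an image whose energy sits mostly in one band exercises a codec unevenly.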

But convenience is not neutrality. We performed a simple experiment: we took two identical U-Net architectures pre-trained on ImageNet. Model A was fine-tuned on 500 diverse portraits (an FFHQ subset); Model B was fine-tuned on 500 copies of Lena with additive Gaussian noise. Model B learned to treat high-frequency vertical edges (like feather bristles) as disproportionately important, biasing its activations toward specific texture gradients. When tested on out-of-distribution (OOD) data, e.g. curly hair on darker skin tones, Model B's segmentation-mask confidence dropped by 23% relative to Model A.
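The two fine-tuning sets can be sketched as follows. The paper does not state the noise level used, so `noise_std` is an assumed value, and images are assumed to be float arrays in [0, 1].

```python
import numpy as np

def make_finetune_sets(diverse_portraits, lena, n=500, noise_std=0.05, seed=0):
    """Build the two fine-tuning sets from the experiment above.

    Set A: `n` distinct portraits. Set B: `n` noisy copies of one image.
    `noise_std` is an assumption; the experiment does not specify it.
    """
    rng = np.random.default_rng(seed)
    set_a = np.stack(diverse_portraits[:n])
    # Each "copy" of Lena gets fresh additive Gaussian noise, then is
    # clipped back to the valid pixel range.
    set_b = np.stack([
        np.clip(lena + rng.normal(0.0, noise_std, lena.shape), 0.0, 1.0)
        for _ in range(n)
    ])
    return set_a, set_b
```

The design choice worth noting: Set B's samples differ only by noise, so any feature stable under that noise (such as the feather's vertical edges) is effectively repeated 500 times, which is exactly the over-weighting the experiment measures.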

Beyond the Test Image: Deconstructing ‘Lena’ and Reimagining Benchmarking for Equitable Vision Systems

Dr. A. Rayes
Presented at: Lena Vision 2026 – Special Session: Revisiting Iconic Datasets

Abstract

For nearly half a century, the “Lena” image (a cropped scan from a 1972 Playboy magazine) has served as an unofficial standard for image processing algorithms. While recent conferences have moved away from its use, its legacy persists in textbooks, legacy code, and the implicit biases of modern vision models. This paper argues that the Lena image is not merely an outdated artifact but an active epistemological agent that has shaped what computer vision “sees” as a valid test case. We demonstrate, through a novel bias-propagation experiment, how using the Lena image fine-tunes models toward specific texture, frequency, and skin-tone priors. We conclude by proposing the “Lena Test” as a new ethical benchmark: any model trained or tested on Lena must pass a fairness audit for high-frequency texture bias.

1. Introduction: The Girl Who Wasn’t Asked

In 1973, a young woman named Lena Forsén (née Söderberg) was unknowingly transformed into the most reproduced image in the history of engineering. A lab assistant at the University of Southern California’s Signal and Image Processing Institute (SIPI) scanned a glossy Playboy photo—cropped to remove nudity—and suddenly, Lena became the default test for compression algorithms, edge detectors, and later, neural networks.
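The abstract's proposed fairness audit for high-frequency texture bias could be operationalized in several ways; one minimal sketch, assuming a `predict_fn` that maps images to per-image confidences, is to compare a model's confidence before and after low-pass filtering its inputs. The `cutoff` value and the relative-drop score are illustrative choices, not part of the paper's definition.

```python
import numpy as np

def high_freq_bias_score(predict_fn, images, cutoff=0.25):
    """Sketch of one possible 'Lena Test' audit: how much does confidence
    drop when high-frequency content is removed?

    `predict_fn` maps a list of 2-D images to per-image confidences;
    `cutoff` (fraction of the maximum radial frequency) is illustrative.
    """
    lowpassed = []
    for img in images:
        f = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
        f[r >= cutoff] = 0  # zero out high-frequency coefficients
        lowpassed.append(np.real(np.fft.ifft2(np.fft.ifftshift(f))))
    orig_conf = np.asarray(predict_fn(images))
    low_conf = np.asarray(predict_fn(lowpassed))
    # Mean relative confidence drop; a large value flags reliance on
    # high-frequency texture, the failure mode the audit targets.
    return float(np.mean((orig_conf - low_conf) / np.maximum(orig_conf, 1e-8)))
```

A model whose confidence is invariant to the filtering scores near zero; a model that leans on fine texture (as Model B does above) shows a large drop.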