Most of the tools, including Fawkes, take the same basic approach. They make tiny changes to an image that are hard to spot with a human eye but throw off an AI, causing it to misidentify who or what it sees in a photo. This technique is very close to a kind of adversarial attack, where small alterations to input data can force deep-learning models to make big mistakes.
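That kind of attack has a textbook form: nudge every pixel by a tiny, bounded amount in whichever direction most increases the model’s error. The sketch below illustrates the idea with the fast gradient sign method (FGSM) in PyTorch. It is a generic illustration, not the Fawkes algorithm itself, and `model`, `true_label`, and the `epsilon` value are placeholder assumptions.

```python
# Minimal FGSM-style adversarial perturbation: each pixel moves by at
# most `epsilon`, so the change is hard to see, but every pixel moves in
# the direction that most increases the model's loss.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """`image`: float tensor in [0, 1], shape (N, C, H, W);
    `true_label`: long tensor of class indices, shape (N,)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

Fawkes itself works somewhat differently: as described in its paper, it optimizes a “cloak” against face-embedding models under a perceptual-similarity budget, pulling your features toward another identity’s, rather than attacking a single classifier’s label as in this sketch.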
Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images apparently unchanged to humans.

Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology. In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on tweaked images of people from later recognizing those people in fresh images. The doctored training images had stopped the tools from forming an accurate representation of those people’s faces.
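To make that test protocol concrete, here is a toy, fully synthetic re-creation of the setup in Python (NumPy and scikit-learn assumed): a classifier is trained on “cloaked” photos of person A, then asked to recognize fresh, unprotected ones. The constant feature shift standing in for a cloak is a deliberate caricature; real cloaks are subtle, optimized perturbations, and the real experiments ran against commercial APIs.

```python
# Toy version of the experiment: train a recognizer on cloaked photos of
# person A, then test it on clean ones. All data here is synthetic
# feature vectors, not real face embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64

# Pretend embeddings: person A clusters around +2, person B around -2.
person_a = rng.normal(+2.0, 0.5, size=(50, dim))
person_b = rng.normal(-2.0, 0.5, size=(50, dim))

# Caricature of cloaking: shift A's *training* features far away from
# A's true region, so the model learns the wrong face for label A.
cloaked_a = person_a - 8.0

model = LogisticRegression(max_iter=1000)
X = np.vstack([cloaked_a, person_b])
y = np.array([0] * 50 + [1] * 50)  # 0 = person A, 1 = person B
model.fit(X, y)

# Fresh, unprotected photos of person A sit in A's true region, which
# the model never saw under label A, so it misidentifies them.
fresh_a = rng.normal(+2.0, 0.5, size=(20, dim))
print("fraction of fresh photos recognized as A:",
      (model.predict(fresh_a) == 0).mean())  # ~0.0 in this toy
```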
Fawkes may keep a new facial recognition system from recognizing you (the next Clearview, say). But it won’t sabotage existing systems that have been trained on your unprotected images already. The tech is improving all the time, however.

Wenger thinks that a tool developed by Valeriia Cherepanova and her colleagues at the University of Maryland, one of the teams at ICLR this week, might address this issue.

Called LowKey, the tool expands on Fawkes by applying perturbations to images based on a stronger kind of adversarial attack, which also fools pretrained commercial models. Like Fawkes, LowKey is available online.

Erfani and her colleagues have added an even bigger twist. Together with Daniel Ma at Deakin University, and researchers at the University of Melbourne and Peking University in Beijing, Erfani has developed a way to turn images into “unlearnable examples,” which effectively make an AI ignore your selfies entirely. “Fawkes trains a model to learn something wrong about you, and this tool trains a model to learn nothing about you.”

Images of me scraped from the web (top) are turned into unlearnable examples (bottom) that a facial recognition system will ignore. (Credit: Sarah Erfani, Daniel Ma, and colleagues)

Unlike Fawkes and its followers, unlearnable examples are not based on adversarial attacks. Instead of introducing changes to an image that force an AI to make a mistake, Ma’s team adds tiny changes that trick an AI into ignoring it during training. When presented with the image later, its evaluation of what’s in it will be no better than a random guess.

Unlearnable examples may prove more effective than adversarial attacks, since they cannot be trained against. The more adversarial examples an AI sees, the better it gets at recognizing them. But because Erfani and her colleagues stop an AI from training on images in the first place, they claim this won’t happen with unlearnable examples.
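The paper behind this work describes the trick as “error-minimizing” noise: where an adversarial attack raises the training loss, this perturbation lowers it, so the image looks as if it has already been learned and contributes almost nothing during training. Below is a minimal single-image sketch of that inner optimization, assuming a differentiable PyTorch classifier; the published method alternates steps like this with normal model updates (a min-min optimization), and every hyperparameter here is illustrative.

```python
# Error-minimizing noise, sketched for one image: find a small delta
# that makes the model's loss on (image + delta, label) as LOW as
# possible, bounded by epsilon per pixel.
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, image, label, epsilon=0.03,
                           steps=20, step_size=0.005):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta), label)
        loss.backward()
        with torch.no_grad():
            # Descend the loss: note the minus sign, the opposite of an
            # adversarial attack, which would ascend it.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Because the noisy image already produces near-zero loss, gradient descent has nothing left to extract from it, which is the sense in which the model “learns nothing about you.”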
Wenger is resigned to an ongoing battle, however. Her team recently noticed that Microsoft Azure’s facial recognition service was no longer spoofed by some of their images. “It suddenly somehow became robust to cloaked images that we had generated,” she says. Microsoft may have changed its algorithm, or the AI may simply have seen so many images from people using Fawkes that it learned to recognize them. Either way, Wenger’s team released an update to their tool last week that works against Azure again.