
Big Brother-2020




The New Yorker, March 16, 2020, pp. 44-51. ANNALS OF TECHNOLOGY: “Adversarial Man” / “Dressing for the surveillance age” / “Is there anything fashion can do to counter the erosion of public anonymity?” by John Seabrook.


The author, like myself, grew up in a small town, and in small towns, much to your chagrin at times, everyone knows you. I remember moving to Los Angeles and discovering total anonymity. Now, in case you hadn't noticed, the window for being “lost in a crowd” is closing. The question arises: do we as citizens have methods for thwarting the onslaught of closed-circuit television (CCTV), artificial intelligence (AI), and the like?


Our surrender of privacy has happened, and is still happening, with our eyes wide open. We learned to love the convenience and security of facial recognition on our cell phones, came to appreciate the value of CCTV in reducing crime, and have come to realize that AI can read medical images better than humans can. Reluctantly, we might even admit to knowing how these systems could undermine our freedom, as they have, in prospect or in reality, in China and Israel. Suppose we have good rather than nefarious reasons to avoid recognition: what methods are at our disposal to become an “Adversarial Man”?


Although it has been reported elsewhere, we are now starting to understand that AI training for interpreting medical images is fickle. Such machine learning happens by running one or more training sets; after that, the training is validated against a set of previously classified images. Recent reports indicate that machine learning and algorithm-building may not translate accurately to images captured on another instrument. The emerging argument is that each system must be trained and validated independently. This article makes the same point about reading everything from license plates (via Automatic License Plate Readers, or ALPRs) to human faces. Systems are only as good as their unique training. Were all the images of the same quality? Did they come from a driver's license photo or from a social-media image? Will the quality of real-time images be comparable? Was the system trained to deal with facial masks, hats, and so on?
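As a rough illustration of that point, here is a minimal sketch, entirely my own construction using synthetic data and scikit-learn (not anything from the article): a model that validates well on data from one “instrument” can score far worse on data from another, which is the argument for training and validating each system independently.

```python
# A minimal sketch of the train-then-validate pattern described above, using
# scikit-learn and synthetic data. The "instrument shift" is simulated by a
# systematic offset in the second dataset; real medical or CCTV images
# differ in far more complex ways.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "images" (feature vectors) captured on instrument A.
X_a = rng.normal(size=(1000, 20))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(int)

# Instrument B records the same phenomenon with a systematic sensor offset.
X_b = rng.normal(size=(1000, 20)) + 0.8
y_b = (X_b[:, 0] + X_b[:, 1] > 1.6).astype(int)

# Train on instrument A, validate on held-out images from instrument A.
X_train, X_val, y_train, y_val = train_test_split(X_a, y_a, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy (same instrument):", model.score(X_val, y_val))

# The same model typically scores much lower on instrument B's images,
# which is why each system may need its own training and validation.
print("accuracy on the other instrument:", model.score(X_b, y_b))
```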


Currently, many of these systems assume facial symmetry to efficiently identify us as individuals. So being pictured with makeup that alters that symmetry, or that darkens skin tone, is one maneuver that may undermine the effectiveness of some systems. As a real-life example, it has been reported that many systems are better at identifying light-skinned males than dark-skinned females. A different approach to confusing or circumventing these systems is to feed them fake information: for example, garments embossed with figures of fake license plates that don't link to any database.
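A toy illustration, again my own construction rather than anything from the article, of why asymmetric makeup can matter: if a pipeline exploits the rough left/right symmetry of faces, a crude “symmetry score” drops when only one side of the face is altered.

```python
# Toy demonstration: a stand-in "face" built to be left/right symmetric,
# scored by correlating the image with its horizontal mirror. Darkening a
# patch on one cheek only (crude "makeup") breaks the symmetry the score
# relies on.
import numpy as np

rng = np.random.default_rng(1)

# Symmetric pattern plus mild noise, standing in for a grayscale face.
half = rng.random((64, 32))
face = np.hstack([half, half[:, ::-1]]) + 0.05 * rng.random((64, 64))

def symmetry_score(img):
    """Correlation between the image and its horizontal mirror."""
    mirrored = img[:, ::-1]
    return np.corrcoef(img.ravel(), mirrored.ravel())[0, 1]

print("original:", round(symmetry_score(face), 3))

# "Makeup": darken a patch on one cheek only.
altered = face.copy()
altered[30:50, 5:20] *= 0.3
print("with asymmetric patch:", round(symmetry_score(altered), 3))
```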


Unfortunately, for every action there is a reaction, if that action creates value. Indeed, many groups in the private and public sectors are working to deconstruct these systems as a way to understand their current limitations and to guide the development of more accurate and precise identification.


Some interesting notes from this article:


YOLO means You Only Look Once, “a vision system widely employed in robots and CCTV.”
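For the curious, here is a hedged sketch of running a YOLO-style detector using the open-source ultralytics package (one popular implementation; the article does not name a particular version or library, and the image path below is hypothetical).

```python
# Requires: pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained model, downloaded on first use
results = model("street_scene.jpg")  # hypothetical image path

# Every object is detected in a single forward pass over the image --
# hence "You Only Look Once" -- each with a class, confidence, and box.
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]
    conf = float(box.conf)
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"{cls_name}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```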


Nuisance variables (NV). The human brain is so sophisticated that we are able to eliminate NV “noise,” like lighting and shadows, to recognize a human face, and we can even recognize a face from just pieces of one's image. AI systems will soon approach this ability by leveraging deep learning and improved computing speed. “Billions of trial and error cycles…might be required…to figure…what a cat looks like but what kind of cat it is.”
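One common way those trial-and-error cycles are made robust to nuisance variables is data augmentation: the model is shown many randomly perturbed versions of each training image. A minimal sketch with torchvision (my own illustration, not from the article):

```python
# Random lighting, cropping, and flipping simulate the "noise" a human
# brain ignores, so the network learns features that survive it.
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5),  # lighting/shadow noise
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),   # partial views of a face
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Applied to a PIL image anew each training epoch, e.g.:
# from PIL import Image
# x = augment(Image.open("face.jpg").convert("RGB"))
```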


Progress in computer vision has moved much faster than “local and national privacy policies.” “What aspects of your face and body should be protected by law from surveillance machines…”? Under Xi Jinping, China has a policy of “stability maintenance” that leverages the fear, and/or the reality, of these technologies. So far, the U.S. government has not “created governing structures to safeguard citizens,” but “last May San Francisco banned city agencies from using facial-recognition technologies.” In New York City, “no one can be arrested on the basis of the computer match alone,” and human investigators must confirm any matches that machines suggest.

ALPRs, using optical character recognition (OCR), are “mounted on street lights, highway overpasses, freeway exits, toll booths, digital speed-limit signs and tops of police cars.” “They are also found in parking garages, schools, and malls.” PlateSmart Technology offers software that can convert any digital camera into an ALPR. Using various software, these data systems can guess where your vehicle is likely to be at any given time. Law enforcement uses “hotlists” of plate information, allowing officers to track vehicles of interest. Interestingly, “99.9 per cent of the three hundred and twenty million plate images…in the [California] database had not been involved in criminal investigations.” Casinos, and reportedly Taylor Swift, use facial recognition to spot unwanted patrons and stalkers, respectively. Retailers would like to identify VIPs, use data to understand “dwell time” within a store, and even read facial signals. Someday our cell phones may allow us to “snap a picture of someone across the subway…run the face through a reverse search.” “Big Brother is us.”
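As a sketch of just the OCR step in such an ALPR pipeline, here is how the open-source Tesseract engine can be pointed at a cropped plate image via pytesseract (PlateSmart's actual software is proprietary; this only illustrates the general idea, and the file name is hypothetical).

```python
# Requires: pip install pytesseract pillow, plus a Tesseract install.
# Assumes a cropped, roughly rectified plate image; real systems first
# detect and straighten the plate region in the camera frame.
import pytesseract
from PIL import Image

plate = Image.open("cropped_plate.jpg").convert("L")  # hypothetical image path

# Treat the crop as one line of text (page-segmentation mode 7) and
# restrict recognition to the characters that appear on plates.
config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
text = pytesseract.image_to_string(plate, config=config).strip()
print("plate read:", text)

# Downstream, a string like this is timestamped, geotagged, and checked
# against "hotlists" or logged to a database, as the article describes.
```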
