Font to defeat NSA's OCR Techniques?

DfgDfg Admin
edited June 2013 in Tech & Games
Well, honestly, it's useless, but props to the guy.

Here is his website:

[Image: ZXX Type Specimen Photograph]

[h=1]“We feel free because we lack the language to articulate our unfreedom.” —Slavoj Žižek[/h]
For me, Žižek’s words are even more potent in light of recent news about domestic surveillance programs. As a former contractor with the US National Security Agency (NSA), I find these issues hit especially close to home. During my service in the Korean military, I worked for two years as special intelligence personnel for the NSA, learning first-hand how to extract information from defense targets. Gathering vital SIGINT (signals intelligence) was remarkably easy. But these skills were only applied outwards, for national security and defense purposes — not for monitoring American citizens. It appears that this has changed. Now, as a designer, I am influenced by these experiences, and I have become dedicated to researching ways to “articulate our unfreedom” and to continuing the evolution of my own thinking about censorship, surveillance, and a free society.
[h=1]“What does censorship reveal? It reveals fear.” —Julian Assange[/h]
[Image: ZXX Type Specimen Posters]

Over the course of a year, I researched and created ZXX, a disruptive typeface which takes its name from the Library of Congress’ listing of three-letter codes denoting which language a book is written in. Code “ZXX” is used when there is: “No linguistic content; Not applicable.” The project started with a genuine question: How can we conceal our fundamental thoughts from artificial intelligences and those who deploy them? I decided to create a typeface that would be unreadable by text scanning software (whether used by a government agency or a lone hacker) — misdirecting information or sometimes not giving any at all. It can be applied to huge amounts of data, or to personal correspondence. I drew six different cuts (Sans, Bold, Camo, False, Noise and Xed) to generate endless permutations, each font designed to thwart machine intelligences in a different way. I offered the typeface as a free download in hopes that as many people as possible would use it.
This short video shows how the typeface confuses Optical Character Recognition (OCR) software.

[Image: ZXX Bold (readable by OCR software) and ZXX Combination (not readable by OCR software)]

[Image: Process sketches testing various OCR software’s readability]

[Image: Screenshot of PDF OCR X software’s conversion of ZXX]
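The principle can be seen in miniature with a toy recognizer. This is a sketch only, not any real OCR engine: a naive nearest-template matcher over invented 5×5 glyph bitmaps, meant to show how extra "camouflage" strokes (the idea behind the Camo and Noise cuts) push a glyph toward the wrong template even though a human still reads it correctly.

```python
# Toy illustration of how ZXX-style visual noise defeats a recognizer.
# This is NOT a real OCR engine: it is a naive template matcher over
# tiny 5x5 glyph bitmaps, used only to show the principle that extra
# strokes push a glyph closer to the wrong template.

TEMPLATES = {
    "I": [
        "..#..",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ],
    "T": [
        "#####",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ],
}

def distance(a, b):
    """Hamming distance between two same-sized bitmaps."""
    return sum(ra[i] != rb[i] for ra, rb in zip(a, b) for i in range(len(ra)))

def recognize(glyph):
    """Return the template letter nearest to the glyph."""
    return min(TEMPLATES, key=lambda k: distance(glyph, TEMPLATES[k]))

# A clean "I" is recognized correctly.
clean_i = TEMPLATES["I"]
print(recognize(clean_i))  # -> "I"

# Add a camouflage stroke across the top: a human still reads the
# vertical bar as an I, but the matcher now classifies it as a T.
camo_i = ["#####"] + clean_i[1:]
print(recognize(camo_i))  # -> "T"
```

Real OCR engines are far more robust than this matcher, which is why ZXX needs several different disruption strategies rather than one.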

[Image: Design 360° Magazine Issue No. 41]

ZXX is a call to action, both practically and symbolically, to raise questions about privacy. But it represents a broader urgency: How can design be used politically and socially for the codification and de-codification of people’s thoughts? What is a graphic design that is inherently secretive? How can graphic design reinforce privacy? And, really, how can the process of design engender a proactive attitude towards the future — and our present, for that matter? After I released the project in May 2012, I was pleased by the fruitful responses, which I shared with the public. I’ve seen the typeface circulate in publications, web environments, and banners, and it was prophetically featured on the cover of China’s Design 360° Magazine — amusingly censoring Sagmeister & Walsh’s self-expressive nudity.

[h=1]“I don’t have to write about the future. For most people, the present is enough like the future to be pretty scary.” —William Gibson[/h]
Our lives in cyberspace are overloaded with impalpable and extensive personal information that is gathered, intercepted, deciphered, analyzed, and stored. With this information, governments and corporations can easily create an informational architecture that traps us in the structures of the World Wide Web and social media. Restricting and repressing our communication tools under the name of “homeland security” is only a small step toward a totalitarian society. This non-physical yet ideological violence is what allows us to lapse into lethargic silence. But really, we shouldn’t be afraid to question the authorities’ continual intrusions.
[Image: The National Security Agency’s headquarters in Fort Meade, Maryland]

[Image: Leaked PRISM presentation slide]

Edward Snowden, the former CIA employee who blew the whistle on the NSA’s PRISM program, wasn’t the first man to reveal the excesses of the world’s biggest intelligence agency. William Binney, an ex-NSA employee, had already disclosed the agency’s perpetual inspections a year earlier. The increasing activity of whistleblowers is a significant signal of the urgency of our diminishing privacy. When surveillance becomes a quotidian exercise, our lives in the network will be completely destroyed. This growing invasion of privacy and militarization of cyberspace dehumanizes us. Governments’ and corporations’ physical, mental, and technological intrusions must stop in order to halt the surveillance state.
[h=1]“Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety.” —Benjamin Franklin[/h]
[Image: ZXX ver. 02, currently in development]

Project ZXX is my humane contribution and homage to the activists, artists, and designers who have been actively fighting for our civil liberties. One such activist is Jacob Appelbaum, an independent computer security researcher and hacker who co-developed the Tor Project to keep our online activities anonymous. Tor bounces traffic through a distributed network of relays, which renders the accumulated metadata useless. Adam Harvey is a New York–based artist with a wide range of inventive counter-surveillance projects; his work is vital in the way it folds privacy concerns into provocative fashion aesthetics, such as anti-drone hoodies. Metahaven, an Amsterdam-based design and research studio, may be at the vanguard of critical and social design today, mapping the nexus of corporate branding, social media, and government with challenging contemporary graphic design strategies. Hito Steyerl’s How Not to Be Seen: A Fucking Didactic Educational .MOV File, a piece shown at the Venice Biennale, humorously depicts the dark side of our visual culture through deliberately silly DIY educational videos. The Electronic Frontier Foundation (EFF) launched a website offering netizens ways to opt out of PRISM. People with a creative conscience will be the ones to provoke these discussions.
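The relay structure described above can be sketched in a few lines. This is a toy illustration only: XOR with a per-relay key stands in for Tor's real layered public-key encryption (it is not secure), and the relay names are invented. The point is the shape of the scheme: the sender wraps the message once per relay, each relay peels exactly one layer, and no single relay sees both the sender and the plaintext.

```python
import itertools

# Toy sketch of layered ("onion") routing: the sender wraps the message
# in one layer per relay; each relay can remove only its own layer.
# XOR with a repeating key stands in for real encryption here -- it is
# NOT secure, just illustrative.

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove -- XOR is its own inverse) one keyed layer."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def wrap(message: bytes, relay_keys: list) -> bytes:
    """Wrap in reverse order so the first relay peels the outermost layer."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

relay_keys = [b"guard", b"middle", b"exit"]  # hypothetical relay keys
onion = wrap(b"hello", relay_keys)

# Each relay in turn removes exactly one layer.
for key in relay_keys:
    onion = xor_layer(onion, key)
print(onion)  # -> b"hello"
```

In real Tor, each layer uses the corresponding relay's public key, so a relay learns only its predecessor and successor, never the whole route.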

What Snowden disclosed is nothing new. The stakes for our democracy have always been high. But now there needs to be robust action and discussion about the current state of affairs. Many suggest that we’ve already lost our privacy and are indifferent to the status quo. But I believe that stripping humanity of its freedoms can never be justified as a natural evolution. It’s our duty to call out crimes against democracy.

I’m more interested in this comment:

Depends on the steganography method used, and on how many images are sent using that method. If you're a spook and you see somebody suddenly sending lots of images to someone else, you might grow suspicious, at which point you'll start performing analysis to see if there are patterns emerging across the entire set of images, such as certain pixels that are always higher than the adjacent pixels by a certain amount. Granted, such patterns can just as easily be caused by sensor flaws, but some fairly primitive steganography techniques could be detectable in this way.
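The across-many-images analysis described in that paragraph can be sketched roughly as follows. Everything here is invented for illustration: synthetic random frames stand in for real photographs, and the "embedder" is a deliberately crude scheme that always raises the same pixel positions by a fixed amount, leaving exactly the kind of always-higher-than-neighbours fingerprint the comment describes.

```python
import random

# Sketch of the "patterns across the entire set of images" idea: a
# crude embedder that always bumps the same positions by a fixed amount
# makes those positions beat their neighbours far more often than
# chance. Synthetic frames stand in for real photos; all numbers are
# invented for the sketch.

random.seed(42)
WIDTH, HEIGHT, N_IMAGES = 16, 16, 200
STEGO_PIXELS = {(3, 7), (9, 2), (12, 12)}  # positions the embedder perturbs

def make_image(stego: bool):
    img = [[random.randint(0, 127) for _ in range(WIDTH)] for _ in range(HEIGHT)]
    if stego:
        for y, x in STEGO_PIXELS:
            img[y][x] += 120  # naive embedder: fixed additive marker
    return img

def suspicious_positions(images, threshold=0.9):
    """Flag positions that exceed their right-hand neighbour far more
    often than the ~50% expected of independent noise."""
    flagged = set()
    for y in range(HEIGHT):
        for x in range(WIDTH - 1):
            wins = sum(img[y][x] > img[y][x + 1] for img in images)
            if wins / len(images) > threshold:
                flagged.add((y, x))
    return flagged

clean = [make_image(False) for _ in range(N_IMAGES)]
stego = [make_image(True) for _ in range(N_IMAGES)]
print(suspicious_positions(clean))  # expected: empty set
print(suspicious_positions(stego))  # expected: the embedder's positions
```

A sensible embedder would avoid fixed positions and fixed offsets, which is exactly why the comment hedges: only "fairly primitive" techniques fall to this test.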

Second, because subpixel noise in cameras isn't random—it tends to obey a Gaussian distribution, and thermal noise can vary considerably from frame to frame depending on the length of the exposure—when spread over a large enough number of sequential or nearly sequential photos taken by the same camera, the steganography might be detectable by using a model of the predicted levels of noise that the image sensor should produce for a shot of a given duration and the elapsed time since the previous shot. This won't tell you what is embedded in the image, but if you're lucky, it might tell you that with a high probability, something is embedded. Depending on the circumstances, that might be enough to get a warrant. Then again, it could just be Digimarc.
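A rough sketch of that model-based check, with invented numbers throughout: assume we know the Gaussian noise level the sensor should produce at a given exposure, and test whether a frame's observed variance exceeds the model's prediction. LSB replacement quantizes each pixel and adds a random bit, contributing a small extra variance that is invisible per-pixel but measurable over enough samples. The noise model and thresholds are hypothetical.

```python
import random

random.seed(7)

# Sketch of detection against a sensor-noise model: LSB-replacement
# embedding adds a small extra variance on top of the (assumed known)
# Gaussian read noise. No single pixel looks wrong, but over many
# pixels the excess is measurable. All numbers are synthetic.

def predicted_sigma(exposure_s: float) -> float:
    # Hypothetical noise model: base read noise plus a term growing
    # with exposure time (standing in for thermal noise).
    return 1.5 + 0.5 * exposure_s

def capture(exposure_s: float, n_pixels: int, embed: bool) -> list:
    sigma = predicted_sigma(exposure_s)
    pixels = [128 + random.gauss(0, sigma) for _ in range(n_pixels)]
    if embed:
        # LSB replacement: force each pixel's low bit to a message bit.
        pixels = [(int(p) & ~1) | random.getrandbits(1) for p in pixels]
    return pixels

def excess_variance(pixels, exposure_s: float) -> float:
    """Observed variance minus what the noise model predicts."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return var - predicted_sigma(exposure_s) ** 2

N = 200_000
clean = capture(0.01, N, embed=False)
stego = capture(0.01, N, embed=True)
print(excess_variance(clean, 0.01))  # ~0: matches the model
print(excess_variance(stego, 0.01))  # clearly positive: excess variance
```

As the comment says, this only flags that something is probably embedded; it reveals nothing about the payload.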

Finally, there's the question of the randomness of the source material (or, more to the point, the lack thereof). If the base image is at the native sensor resolution of the camera, the nature of the image sensors themselves could potentially be exploited to detect some types of steganography. In a real-world image sensor (except for Foveon sensors), there's no such thing as a pixel; there are only subpixels that produce a value for a single color. The camera must combine these values (a process called "demosaicing") to compute the color for a pixel in the final image. Because the subpixels that make up a pixel are not physically on top of one another, the camera typically computes the estimated value for the color at a given physical point on the sensor by combining adjacent subpixel values in differing percentages. For example, if the green subpixel is chosen as the "center" of the pixel and the red subpixel is to the left and the blue is above, it might mix a bit of the red from the "pixel" to its right and a bit of the blue from the "pixel" below it. (This explanation is overly simplistic, but you get the basic idea.)
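A minimal sketch of the interpolation step just described, assuming an RGGB Bayer layout and plain bilinear averaging (real cameras use more sophisticated variants): each sensor site records one colour, and the missing channels are estimated from the nearest neighbours that recorded them.

```python
# Minimal sketch of Bayer-pattern demosaicing: each sensor site records
# one colour; the missing channels are interpolated from neighbours.
# Bilinear averaging and an RGGB layout are assumed for simplicity.

def bayer_color(y: int, x: int) -> str:
    """Colour recorded at sensor site (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_pixel(mosaic, y, x):
    """Estimate (R, G, B) at site (y, x): use the measured value for the
    site's own colour, and average the nearest neighbours for the rest."""
    h, w = len(mosaic), len(mosaic[0])

    def avg(color):
        vals = [mosaic[y][x]] if bayer_color(y, x) == color else [
            mosaic[ny][nx]
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))
            if (ny, nx) != (y, x) and bayer_color(ny, nx) == color
        ]
        return sum(vals) / len(vals)

    return tuple(avg(c) for c in "RGB")

# A uniform grey scene: every site records 100, so interpolation
# reproduces (100, 100, 100) everywhere.
mosaic = [[100] * 4 for _ in range(4)]
print(demosaic_pixel(mosaic, 1, 1))  # -> (100.0, 100.0, 100.0)
```

The key property for the detection argument that follows is that the interpolated channels are deterministic functions of neighbouring sensor values.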

Unfortunately for steganographers, the way that particular cameras construct a pixel value from adjacent subpixel values is predictable and well understood. If a steganographic technique does not take that into consideration, it is highly likely that, given knowledge of the camera and its particular mixing algorithm, the steganographic data can be detected simply by determining whether there is any plausible set of subpixel values that could result in the final computed pixel values for the entire image. For that matter, given that most of the algorithms for subpixel blending are straightforward, even without knowledge of the particular camera, it is highly likely that steganography can be detected, because portions of the image that contain no hidden data will likely only be producible by a single algorithm, and portions of the image that contain hidden data likely will not be.
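That consistency argument can be demonstrated on a 1-D toy version of bilinear demosaicing. The pipeline here is invented, but the logic mirrors the argument above: alternating sites measure green directly, the camera writes the exact average of the two neighbouring measurements at the sites in between, and an analyst who knows the algorithm can check whether any plausible sensor values could have produced a given output row.

```python
# Toy 1-D consistency check against a known demosaicing algorithm.
# Measured greens sit at even indices; odd indices are written by the
# camera as the exact average of their two measured neighbours.

def demosaic_green_row(sensor_greens):
    """Camera side: measured values at even indices, interpolated
    averages at odd indices."""
    row = []
    for i, g in enumerate(sensor_greens[:-1]):
        row.append(g)
        row.append((g + sensor_greens[i + 1]) / 2)
    row.append(sensor_greens[-1])
    return row

def is_consistent(row):
    """Analyst side: every odd index must equal the exact average of
    its neighbours if the row came from this algorithm."""
    return all(row[i] == (row[i - 1] + row[i + 1]) / 2
               for i in range(1, len(row) - 1, 2))

row = demosaic_green_row([100, 104, 96, 120])
print(is_consistent(row))   # True: plausible camera output

row[3] += 1                 # steganographic tweak to an interpolated value
print(is_consistent(row))   # False: no sensor values could produce this
```

Real 2-D demosaicing gives looser constraints than this exact-average toy, which is why the comment says detection is "highly likely" rather than certain.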

Those are just a couple of types of analysis off the top of my head that might potentially be used against some types of steganography, given some types of source material, etc. It is entirely possible that there are steganographic techniques that are resistant to these sorts of analysis, and there are likely many other interesting types of analysis that I have not mentioned. I have not kept up with steganographic research personally, so I can't say with any certainty.
