If a tree falls in the woods, and no one hears it…? What if a real photograph is cropped, framed, and exposed in just such a way as to convey a narrative that is counter to the “truth” of the “reality”? What if a deep fake is created that is “true” to the event in a way no camera on site could be? Indeed. This is but one quandary we face. Deep Fake AI has arrived. The lines between fact and fiction are unalterably blurred. Are you ready for the Flood?
Indulge me in this thought experiment: What if there were no cameras at the scene of a historically critical event, but a reasonable, historically accurate simulation of the actual event was constructed, and photo stills and video representations were rendered from it? It is a representation of the truth, as one might recall something from memory. The event actually happened… but the rendering, like the conflicting eyewitness accounts of several observers, is inaccurate in many small ways. So is it real, or is it fake?
Deep Fake AI: Question Everything(!?)
We all saw deep fakes coming from a mile (a decade) away. What we didn’t see was just how trivial producing them en masse would be. The generative-content revolution started last year (2022) with OpenAI’s DALL-E, and a spark turned into an internet-consuming bonfire within 12 months. In a launch not quite as hyped as ChatGPT/GPT-4’s, a 12-person company named Midjourney released the version 5.0 upgrade to its AI art engine.
What’s the big deal with v5? Essentially: text-to-photorealistic imagery in under 30 seconds. The AI synthesizes the image… ANY image you can describe (or, if you’re at a loss for words, you can just upload a drawing) faster than you can type its description. And whereas its predecessors were beautifully “artistic” and often “painterly,” the v5 engine, without additional prompting, is unapologetically, totally photorealistic.
[example 1: Will Smith eating spaghetti]
[example 2: Lex Fridman and Eliezer Yudkowsky]
[example 3: UnRecord game footage]