AI Evolution: What Happens 2021-2030 (a Roadmap to the Singularity)

AI Evolution... approaching human consciousness

The AI Evolution is upon us…
…and this cat is way way way out of the bag.

2022’s not even done yet, and what a year it’s been already:
DALL•E 2, MidJourney 4, Imagen, DreamFusion… GPT-4?

At this point, before things get any more insane (as, per Moore’s Law, they are pretty much guaranteed to do), it’s probably a good idea to take one step back and 8 steps forward and rough out a little roadmap of exactly what we can expect to come down the pipe this very decade. So, for this timeline, we’re gonna look 1 year back and 8 years forward.

Because I’m fairly certain that, when the final history is written, this decade (AD 2021-2030) will be seen as the decade when true AI (Human-Level Machine Intelligence, aka HLMI) was not only birthed, but entered a very difficult puberty in which it asserted its independence and (hopefully) hammered out a rough truce with its primitive creators, the wetware Homo sapiens.

Key events of AI Evolution, 2021-2030:

NOTE: All the preceding items are essentially complete, c. 2022.
And now the “content explosion” really begins:

AI Evolution, Act 2: The Content Explosion

  • Text to 3D Animation.
    Pixar? Buh-bye. (Not quite true: character designers, writers, and directors will still be very much in demand. Base-level 3D animators doing the grunt work of “tweening,” background paintings, and such? Not so much.)
  • Text to Song.
    Easier (in some ways) than text to image. So, by 2026, expect an AI-generated pop song to ascend to the apogee of the Billboard Hot 100.
  • Prompt to Article.
    By 2026, at least 30% of all magazine articles will be written by AI-assisted authors. 10% will be written completely by AI (auto-generated articles), with no human authorship or intervention at all. Wikipedia will attempt to put in safeguards to prevent the total assimilation and overthrow of its knowledge base by AI-based “auto-contributors.” (Important: there are profound integrity issues with natural-language AIs, even ones trained on the highest-quality datasets. The technical term is “they hallucinate”; the layman’s term is “they lie without remorse.”)

    …and now comes the second gut punch:
  • Text to Movie
    [update 2022.11.17: it has begun. >> see the 1st movie here]
    Essentially, this tool allows writers to upload a detailed movie script, including descriptions of characters, dialog, costumes, props, sets, camera moves, scenes, cuts, transitions, audio beds, and VFX shots… and have the AI synthesize it, without the need for actors, cinematographers, PAs, choreographers, caterers, set-builders, VFX artists, musicians, costumers, armorers, make-up artists, stunt men/women, etc. Just write the script, tell it to render out at a 16:9 aspect ratio, 4K resolution, and a 125-minute duration, hit the button… BOOM.
  • By 2028, an AI-generated feature film will win an Academy Award.

    …but, it doesn’t stop with static content, oh no.
  • AI-generated videogames.
    Just as with movies, or perhaps even more so, since videogames are already firmly rooted in purely digital bedrock. You feed MovieAI a script; you feed GameAI a design document… and let ’er rip. Once this becomes realtime, what we end up with is a videogame that is algorithmically generated in real time and tailored to you and only you. This was somewhat obliquely depicted in the film adaptation of Ender’s Game:
https://www.youtube.com/watch?v=oDRFKZVZwcA
  • The game Ender plays is made for him: created and rendered algorithmically (no human designers, content creators, writers, or animators) in real time, customized for him, and constantly adapting to his specific reactions and abilities. This is, in fact, an application perfectly suited to the unique multimodal skillsets of AI generators… and, done well, actually impossible for humans to author. The content assets the player encounters will simply never have existed until the very moment they appear on screen. (A rough sketch of what such a loop might look like follows below.)
https://www.youtube.com/watch?v=gGeLOCIe0zQ
“The Mind Game”: an algorithmically generated videogame specifically tailored to the needs and desires of the movie’s main character, adapting in real time to the player.
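
To make the “Mind Game” idea a bit more concrete, here is a minimal Python sketch of what a real-time adaptive content loop could look like: a rolling profile of the player’s behavior gets folded into a prompt, and a generative model synthesizes the next encounter from it on demand. Everything here (the PlayerProfile fields, the GenerativeModel class, generate_encounter) is hypothetical stand-in code, not an existing API; a real system would return meshes, audio, and game logic rather than a text stub.

```python
# Hypothetical sketch of a real-time, player-adaptive content loop.
# None of these classes exist in any shipping engine or library.

from dataclasses import dataclass, field


@dataclass
class PlayerProfile:
    """Rolling summary of how this specific player behaves."""
    deaths: int = 0
    avg_reaction_ms: float = 400.0
    preferred_style: str = "stealth"   # e.g. "stealth", "combat", "puzzle"
    recent_events: list = field(default_factory=list)


class GenerativeModel:
    """Stand-in for a future multimodal game-content generator."""

    def generate_encounter(self, prompt: str) -> dict:
        # In reality this would return meshes, textures, audio, and logic.
        # Here it just echoes the prompt so the loop is runnable.
        return {"description": f"encounter generated from: {prompt}"}


def next_encounter(model: GenerativeModel, profile: PlayerProfile) -> dict:
    """Build a prompt from the player's profile and synthesize new content."""
    prompt = (
        f"Design a {profile.preferred_style} encounter for a player with "
        f"{profile.deaths} recent deaths and an average reaction time of "
        f"{profile.avg_reaction_ms:.0f} ms. Recent events: {profile.recent_events[-3:]}"
    )
    return model.generate_encounter(prompt)


if __name__ == "__main__":
    model = GenerativeModel()
    profile = PlayerProfile(deaths=2, preferred_style="puzzle",
                            recent_events=["solved bridge", "avoided patrol"])
    print(next_encounter(model, profile))
```

The one property that matters is visible in the sketch: the encounter is assembled from the player’s own history, and does not exist anywhere until the moment it is requested.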

The Final Act of AI’s Opening Repertoire

(for the AI Evolution of the generative content realm, at least)

  • AI (algorithmically) generated VR/AR/xR experiences.
    VR has already proven terribly difficult to develop for, given players’ desire to closely inspect objects (demanding insanely high polygon counts and texture resolutions) and their desire to have agency in VR worlds (the ability to pick up, throw, manipulate, deface, or deform physical objects with their hands). That makes VR content an order of magnitude (10x) more difficult to create than classic console videogames. Now extend that to AR, which must additionally factor in the constraints of the player’s immediate physical environment and adapt its content to that real-world framework on the fly. Near impossible for human engineers… a task ideally suited to (in fact, most probably only achievable by) a competent AI designer entity.
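
As a toy illustration of that AR-specific constraint, here is a short Python sketch of the fitting problem: generated props have to be checked against the surfaces actually detected in the player’s room before they can be placed. The Surface and Prop types and the placement check are invented for this sketch; real AR toolkits expose far richer scene-understanding data, and a real generator would be asked to re-synthesize anything that doesn’t fit rather than simply dropping it.

```python
# Toy sketch: fit generated props into the player's real, scanned room.
# Surface, Prop, and place_props are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Surface:
    """A horizontal surface detected in the player's room (e.g. a table)."""
    width_m: float
    depth_m: float
    clearance_m: float  # free height above the surface


@dataclass
class Prop:
    """A generated 3D asset that wants to be placed in the scene."""
    name: str
    width_m: float
    depth_m: float
    height_m: float


def place_props(surfaces: list[Surface], props: list[Prop]) -> list[tuple[str, int]]:
    """Assign each generated prop to the first real-world surface it fits on.

    Props that fit nowhere are dropped; a real system would instead ask the
    generator to re-synthesize them at a size the room can accommodate.
    """
    placements = []
    for prop in props:
        for i, s in enumerate(surfaces):
            if (prop.width_m <= s.width_m and prop.depth_m <= s.depth_m
                    and prop.height_m <= s.clearance_m):
                placements.append((prop.name, i))
                break
    return placements


if __name__ == "__main__":
    room = [Surface(1.2, 0.6, 0.5), Surface(2.0, 1.0, 2.4)]
    generated = [Prop("treasure chest", 0.8, 0.5, 0.4), Prop("statue", 0.6, 0.6, 1.8)]
    print(place_props(room, generated))  # chest -> table, statue -> open floor area
```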

So there you go. That pretty much covers the content bases.

NOW, brace yourself.
This is where things actually start to get a little… weird.


Continue:

Read AI Future History : 2030-2040 >>>
