
Diffusion models will bring the Diamond Age’s Primer to life

Neal Stephenson’s 1995 novel “The Diamond Age: Or, A Young Lady’s Illustrated Primer” is a favorite among science fiction lovers, and if you haven’t read it, I highly recommend it. It is a story about what artificial intelligence, nanotechnology, and social class may look like in the not-too-distant future. I won’t go into the plot, but its central piece of technology is the Primer, a combination of hardware and software that acts as an interactive book to educate and raise children. The Primer holds a database of educational topics and interacts with its reader through immersive, lifelike characters that bond with her, all with the goal of guiding her toward educational outcomes. As Nell, the book’s protagonist, progressed and learned, she would talk and interact with the book, and based on those interactions it would adapt its story and content.

The idea of the Primer has drawn considerable interest from technology companies. The Amazon Kindle’s original codename, Fiona, came straight from the book, and Facebook has referenced it multiple times in relation to its metaverse. Many technology companies have toyed with the idea, but the technology to actually build a Primer has never existed. That seems likely to change very soon with Stable Diffusion and generative AIs like GPT-3. From simple text strings, we can generate images of anything we can imagine:

“A boy eating an apple”

“A smiling hippo”

“An anime cat eating a carrot”

The technology is brand new and still a little rough; you will often see extra hands, feet, or other strange artifacts in the images. But now that it has been released to the public and anyone can modify it, there are tons of people working to make it generate better images, faster, and others extending it to video and 3D. So the core technologies are here. We even have the technology to take a few pictures of you and then generate images of you doing anything.
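For readers curious how a text string becomes an image at all: a diffusion model starts from pure random noise and repeatedly denoises it, step by step, until an image emerges. In a real system like Stable Diffusion, a trained neural network (conditioned on your text prompt) predicts the noise to remove at each step. The toy sketch below illustrates only that reverse-denoising loop on a tiny 1-D “image”, cheating with an oracle noise predictor in place of a trained network; the step count and schedule are illustrative, not the real model’s.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "clean image" -- in a real model, the network learns to recover
# images like this from training data, guided by a text prompt.
x0 = np.linspace(-1.0, 1.0, 8)

# Toy linear noise schedule (real models use ~1000 steps and tuned betas).
T = 50
betas = np.linspace(1e-4, 0.2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def oracle_eps(xt, t):
    """Stand-in for the trained denoiser: returns the exact noise that maps
    x0 to xt under the forward process. A real network only estimates this."""
    return (xt - np.sqrt(alpha_bars[t]) * x0) / np.sqrt(1.0 - alpha_bars[t])

# Reverse process: start from pure noise and denoise one step at a time.
x = rng.normal(size=x0.shape)
for t in reversed(range(T)):
    eps_hat = oracle_eps(x, t)
    # Posterior-mean update (noise-injection terms omitted for a
    # deterministic toy sketch).
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])

print(np.max(np.abs(x - x0)))  # the denoised result lands back on the signal
```

With a perfect noise predictor the loop recovers the original signal exactly; with a learned, prompt-conditioned predictor it instead lands on a brand-new image that matches your text.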

Here are some examples from a model we trained on Dave Chappelle.

Chappelle as a rabbi:

Chappelle as a king:

With the Primer, when you are learning about a subject like physics, whole movies and scenes can be generated that are customized to you and what you have learned. So you might get a custom lesson showing how gravity works on a farm because you like farms. Or if you just finished a lesson about the ocean, your physics lesson may take place on a synthetic beach. The founder of Stability AI, the group that brought us Stable Diffusion, has already suggested that this technology could power holodecks like the ones we have seen in Star Trek.

We love the idea of bringing the Primer and the holodeck to life. We are doing our part by making it as easy as possible for anyone to create and use their own diffusion models: just upload some images, click submit, and use your custom image generator about an hour later. We hope to be helping people work out of their own holodecks soon :)