
The Diffusion Pulse Newsletter #001

Someone on Reddit used Stable Diffusion to print out custom Magic: The Gathering cards!

An analysis of Stable Diffusion’s porn filter. Hint: it’s biased and doesn’t work very well.

Stable Diffusion running on a Raspberry Pi; it takes ~45 minutes to generate one image!

Stable Diffusion running on the iPhone; all source code has been released on GitHub.

A researcher has gotten Stable Diffusion to learn a style from a single image; the code is open source, with a paper coming soon.

Article from Wired: “Stable Diffusion feels like a miracle”

Versatile Diffusion has been released, with both code and a paper. The model can handle image-to-text, image variation, text-to-image, text variation, and other use cases, and is dubbed “Steps Towards a Universal Generative AI.”

An interview with Emad Mostaque, CEO and co-founder of stability.ai

Stable Diffusion integrated into VR; it looks amazing.

A tutorial on generating images of futuristic bedrooms

A project that turns music into real-time AI-generated visuals

A review of the porn-generating group Unstable Diffusion