Starting with the announcement of OpenAI’s DALL·E 2 in April of this year, enthusiasts and research engineers alike have generated a tidal wave of interest in the latest breakthroughs in Generative AI, particularly for forms of human expression. DALL·E 2 brought an AI technique known as diffusion models into the mainstream: the model learns from a huge range of photography and visual expression on the open web, then “denoises” an initial input of random noise, using the power of Transformers to steer how the resulting image evolves based on an input prompt.
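The denoising idea at the heart of diffusion models can be sketched in a few lines. The toy loop below is an illustrative simplification, not DALL·E 2’s actual architecture: the `predict_noise` stub stands in for a trained Transformer or U-Net, and the step sizes are a deliberately naive schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x, t, target):
    # Stand-in for a trained network's noise estimate. A real model never
    # sees the target; here we cheat so the toy example converges.
    return x - target

def sample(target, steps=50):
    """Toy reverse-diffusion loop: start from pure random noise, then
    repeatedly subtract a fraction of the predicted noise."""
    x = rng.standard_normal(target.shape)  # initial input of random noise
    for t in range(steps):
        eps = predict_noise(x, t, target)
        x = x - eps / (steps - t)          # remove a little noise each step
    return x

target = np.zeros((4, 4))                  # the "image" the model is guided toward
out = sample(target)                       # noise gradually resolves into target
```

In a real system, the prompt conditions the noise-prediction network at every step, which is how the text guides what the final image becomes.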
Over the last six months, DALL·E 2 has attracted enthusiastic supporters (and, as with any disruptive new technology, a few critics) in the artistic, engineering, and research communities. That’s because these models power new forms of expression, including techniques that work “backwards” from images to reveal the concepts the model “sees”.
In this panel, we’ll compare and contrast the impact Generative AI has had across a wide range of products and tasks, from the practice of software engineering to the workflows adopted in creative fields. We’ll also discuss how Generative AI is reshaping curation and community on the web, and how the acts of discovery and inspiration might change with workflows it powers.