SFSCON23 - Markus Pobitzer - Image Generation with Diffusion Models
About This Presentation
Recent developments in machine learning have produced a breakthrough in image generation: so-called Diffusion Models can create photo-realistic images from pure noise. With the help of an input text (a prompt) we can guide the generation process and produce matching images.
This technology has opened new doors for creating digital art, modifying existing images, and building stunning visual experiences. In this talk we will look at how these algorithms work, introduce Stable Diffusion (a concrete implementation), and explore its use cases. We will see how text can be used to generate matching outputs, and also take a look at more experimental features such as creating images from edges, outlines, or depth maps.
We will mainly focus on the open source text-to-image model Stable Diffusion, which has set new standards in image generation. It is backed by an active community that keeps it open source and accessible to everyone.
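To make the text-to-image workflow concrete, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name, step count, and output file name are illustrative assumptions, not details taken from the talk.

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (illustrative model ID, assumed here).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move the model to a GPU if one is available

# The prompt guides the denoising process that turns pure noise into an image.
prompt = (
    "Standing on top of the highest mountain, looking down to the other peaks, alps. "
    "An award-winning landscape photo of South Tyrol"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("south_tyrol.png")  # hypothetical output path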
Size: 1.45 MB
Language: en
Added: Dec 04, 2023
Slides: 11 pages
Slide Content
“Standing on top of the highest mountain, looking down to the other peaks, alps. An award-winning landscape photo of South Tyrol”
Image Generation with Diffusion Models: How computers imagine our world
Markus Pobitzer
Overview
Diffusion Models
Stable Diffusion (SD)
(Text) Guidance: Stable Diffusion output for the prompt
“Majestic royal ship on a calm sea, oil painting, …”
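As a rough sketch of how text guidance works in practice: diffusers pipelines such as the one in the earlier sketch (reused here as an assumption) expose a guidance_scale argument for classifier-free guidance, which controls how strongly the prompt steers the denoising; the exact values below are illustrative.

# Reuses the `pipe` object from the text-to-image sketch above (an assumption).
# Higher guidance_scale follows the prompt more closely; lower values give the
# model more freedom. Values around 7 to 8 are a common default.
ship_prompt = "Majestic royal ship on a calm sea, oil painting"  # visible part of the slide's prompt
loose = pipe(ship_prompt, guidance_scale=3.0).images[0]
strict = pipe(ship_prompt, guidance_scale=9.0).images[0]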
Inpainting with Stable Diffusion
Input, Mask, Output
Original image from: https://unsplash.com/@overture_creations
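For the inpainting slide, here is a minimal sketch of how such an input/mask/output workflow could look with the diffusers inpainting pipeline; the checkpoint, file paths, and fill prompt are illustrative assumptions, not taken from the talk.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Illustrative inpainting checkpoint (assumed, not specified in the talk).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("input.png").convert("RGB")  # original photo (hypothetical path)
mask_image = Image.open("mask.png").convert("RGB")   # white = region to repaint, black = keep

result = pipe(
    prompt="a small wooden bench in the park",  # hypothetical description of what to paint in
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")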