# AI RESOURCES

## Tools and Resources for AI Art

If you're looking to get started with AI art, a good place to begin is one of the [popular apps](https://pharmapsychotic.com/tools.html#sec-ca80) like [DreamStudio](https://beta.dreamstudio.ai/), [Midjourney](https://www.midjourney.com/), [Wombo](https://www.wombo.art/), or [NightCafe](https://nightcafe.studio/). They give you a quick sense of how words and phrases can guide image generation. Read up on [prompt engineering](https://pharmapsychotic.com/tools.html#sec-1f7f) to improve your results, then move on to the Google Colab notebooks linked below, like Deforum. If you have a good NVIDIA GPU of your own, you can also use [NMKD Stable Diffusion GUI](https://nmkd.itch.io/t2i-gui) or [Visions of Chaos](https://softology.pro/voc.htm) to run the most popular notebooks locally. To train your own AI models, check out the [AI art model training](https://pharmapsychotic.com/training.html) page; for animations, see [Stable Diffusion animations](https://pharmapsychotic.com/animation.html).

You can follow me on Twitter: [@pharmapsychotic](https://twitter.com/pharmapsychotic)

### Active AI Art Competitions

* Ending June 8: Thomas & GPB art contest ([tweet](https://twitter.com/Th0mas_Art/status/1664262643513970689?s=20)) ([link](https://app.joyn.xyz/contest/globetrotter-polar-bears-art-contest-9a3e61b6bcc8))
* Ending June 22: AI Art Weekly theme: censorship, $50 prize ([tweet](https://twitter.com/dreamingtulpa/status/1669645541192376323))

## Google Colab notebooks

### Text to Image

There are a TON of shared Google Colab notebooks floating around for doing text-to-image with pre-trained GAN and diffusion models. I've been compiling the ones I come across, try out, and find interesting. Please hit me up on Twitter ([@pharmapsychotic](https://twitter.com/pharmapsychotic)) if you know a cool notebook that I'm missing! Stable Diffusion is the most popular right now.

* [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) by automatic1111 - run SD local with lots of features and extensions
* [Deforum Stable Diffusion 0.7](https://colab.research.google.com/github/deforum-art/deforum-stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb) - group effort for ultimate SD notebook ([discord](https://discord.com/invite/upmXXsrwZc)) ([youtube tutorial](https://youtu.be/MR7M1HSXgos)) ([guide](https://docs.google.com/document/d/1RrQv7FntzOuLg4ohjRZPVL7iptIyBhwwbcEYEW2OfcI/edit))
* [Disco Diffusion v5.6](https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb) by Somnai, gandamu, zippy721 ([guide](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit)) ([new guide](https://sweet-hall-e72.notion.site/A-Traveler-s-Guide-to-the-Latent-Space-85efba7e5e6a40e5bd3cae980f30235f)) ([youtube tutorial](https://www.youtube.com/watch?v=kRhd1xEH6bQ))
* [Huemin Jax Diffusion 2.7](https://colab.research.google.com/github/huemin-art/jax-guided-diffusion/blob/v2.7/Huemin_Jax_Diffusion_2_7.ipynb) by nshepperd, huemin\_art ([guide](https://docs.google.com/document/d/11HWN5e57taWdpyZlW5s6gqzrwMsLlmOQivyJncOPPhE)) ([stitching guide](https://dreamingcomputers.com/ai-articles/huemin-jax-diffusion-2-7-stitching/))
* [pytti-tools v0.10](https://colab.research.google.com/github/pytti-tools/pytti-notebook/blob/main/pyttitools-PYTTI.ipynb) by DigThatData and sportsracer
* [VQGAN+CLIP](https://colab.research.google.com/drive/1peZ98vBihDD9A1v7JdH5VvHDUuW5tcRK) by remi\_durant
* \[2023/04/28] [DeepFloyd IF](https://deepfloyd.ai/) ([huggingface](https://huggingface.co/spaces/DeepFloyd/IF)) ([github](https://github.com/deep-floyd/IF))
* \[2023/04/05] [Kandinsky 2.1 Batching+Dynamic prompting](https://colab.research.google.com/drive/1bJSxeadteHXQsbxoYDfrB4o4rA6OB9XC?usp=sharing) Colab by @jrobocat
* \[2023/04/03] [Kandinsky 2.1](https://huggingface.co/spaces/ai-forever/Kandinsky2.1) ([huggingface](https://huggingface.co/spaces/ai-forever/Kandinsky2.1)) ([site](https://fusionbrain.ai/diffusion))
* \[2023/03/23] [Image-to-text-to-image](https://colab.research.google.com/drive/1zsG6ruSF_Rk8Yw8OeqA0746_GVjV8Y18?usp=sharing) Colab by @jrobocat  - batch CLIP Interrogator + SD generations
* \[2023/03/14] [Unidiffuser](https://huggingface.co/spaces/thu-ml/unidiffuser) - unified diffusion framework ([github](https://github.com/thu-ml/unidiffuser))
* \[2023/02/20] [Stable Diffusion Auto Stitching](https://colab.research.google.com/drive/1AoDb6idjOWpGdJT-CbKc9kwYZm4okt2k) by @oleg\_ai\_art ([guide](https://docs.google.com/presentation/d/1nt7FnzEYdXgybkUBOjNmDGDwPugaNiQoWSRFU6v0mnA/edit#slide=id.p))
* \[2023/02/15] [ControlNet](https://github.com/Mikubill/sd-webui-controlnet) - control Stable Diffusion with extra conditioning ([youtube](https://www.youtube.com/watch?v=YephV6ptxeQ)) ([huggingface](https://huggingface.co/spaces/hysts/ControlNet)) ([github](https://github.com/lllyasviel/ControlNet)) ([models](https://civitai.com/models/9251/controlnet-pre-trained-models))
* \[2023/02/14] [Pix2Pix video with coherence](https://colab.research.google.com/drive/1inQJPKLOpjB_Bpo0GmboqJWJ1AxzW5Xa?usp=sharing) by @johnowhitaker - stylize video inputs!
* \[2023/01/30] [Tune-a-Video](https://huggingface.co/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI) - create short text2video sequences ([github](https://github.com/showlab/Tune-A-Video)) ([paper](https://tuneavideo.github.io/))
* \[2023/01/21] [KLMC2 Animation](https://colab.research.google.com/github/dmarx/notebooks/blob/main/Stable_Diffusion_KLMC2_Animation.ipynb) - @DigThatData's fork with lots of additions
* \[2023/01/20] [InstructPix2Pix](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/InstructPix2Pix_using_diffusers.ipynb) - use text instructions to modify images ([huggingface](https://huggingface.co/spaces/timbrooks/instruct-pix2pix))
* \[2023/01/19] [Image Mixer](https://huggingface.co/spaces/lambdalabs/image-mixer-demo) by @Buntworthy - mix up to 5 images together with SD
* \[2023/01/14] [Latent Blending](https://colab.research.google.com/drive/1I77--5PS6C-sAskl9OggS1zR0HLKdq1M?usp=sharing) by @j\_stelzer - smooth transition between SD latents ([github](https://github.com/lunarring/latentblending))
* \[2023/01/10] [Custom Diffusion](https://huggingface.co/spaces/nupurkmr9/custom-diffusion) - fast SD finetune with multiple concepts ([github](https://github.com/adobe-research/custom-diffusion))
* \[2022/12/22] [Karlo](https://huggingface.co/spaces/kakaobrain/karlo) - unCLIP architecture like DALLE-2 ([huggingface](https://huggingface.co/spaces/kakaobrain/karlo)) ([github](https://github.com/kakaobrain/karlo))
* \[2022/12/08] [Stable Diffusion KLMC2 Animation](https://colab.research.google.com/drive/1m8ovBpO2QilE2o4O-p2PONSwqGn4_x2G) by @RiversHaveWings
* \[2022/11/30] [BAOAB-limit sampler](https://colab.research.google.com/drive/17kesyBVqubV_Zzchf2XoR-7MHk5jxTuo?usp=sharing) - new SD sampler that can also make anims hella fast ([paper](https://www.ajayjain.net/journey/))
* \[2022/11/25] [Stable Diffusion 2.0 Web UI](https://colab.research.google.com/github/qunash/stable-diffusion-2-gui/blob/main/stable_diffusion_2_0.ipynb) - by @anzorq (run SD 2.0 in colab using Diffusers)
* \[2022/11/24] [Stable Diffusion 2.0 w Diffusers](https://colab.research.google.com/github/amrrs/stable-diffusion-v2-colab-ui/blob/main/How_to_use_Stable_Diffusion_2_0_with_Diffusers.ipynb) - by @amrrs ([youtube](https://www.youtube.com/watch?v=9v3jABCh_L4))
* \[2022/11/08] [Midjourney v4 Style](https://colab.research.google.com/drive/1vkuxKKeSYNYI2OLZm8mR-WqcokQtSURM?usp=sharing) - (dreambooth SD finetune on midjourney v4 outputs)
* \[2022/11/03] [All-in-one Private Diffusions Colab](https://colab.research.google.com/drive/139WqHPDrrXhgtMSNANkWBhK0H_qC7kGO?usp=sharing) - fork and upgrades to WD notebook ([website](https://rentry.org/nocrypt))
* \[2022/10/25] [Fast Dreambooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) by TheLastBen (easy fast finetune of stable diffusion in colab)
* \[2022/10/08] [Stable Worlds](https://colab.research.google.com/drive/1RXRrkKUnpNiPCxTJg0Imq7sIM8ltYFz2?usp=sharing) by @NaxAlpha (create panoramas with SD!)
* \[2022/09/29] [MathRockDiffusion](https://colab.research.google.com/github/ethansmith2000/MathRock-Diffusion/blob/main/ES2000_MathRock_Diffusion.ipynb) by ethansmith2000 (mods and improvements on Disco) ([guide](https://docs.google.com/document/d/1C5wt-q6i1JVb2zGsTCGcZev4NayFKC0p-ejbdZx4AkM/edit)) ([cuts](https://docs.google.com/document/d/1oHWoP3i0NFImekinqaaiWXfREkkK_rbsFMz_VTWDcRw/edit))
* \[2022/09/29] [robo\_diffusion\_v1](https://colab.research.google.com/github/nousr/robo-diffusion/blob/main/robo_diffusion_v1.ipynb) by @nousr (a DreamBooth fine tune of stable diffusion)
* \[2022/09/27] [Video Killed The Radio Star Diffusion](https://colab.research.google.com/github/dmarx/video-killed-the-radio-star/blob/main/Video_Killed_The_Radio_Star_Defusion.ipynb) by @DigThatData (transform music videos from YouTube)
* \[2022/09/25] fast-stable-diffusion - [automatic111 ui](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb), [hlky ui](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_hlky.ipynb), [github](https://github.com/TheLastBen/fast-stable-diffusion) (+25% speed and low VRAM)
* \[2022/09/18] [Doohickey Diffusion](https://colab.research.google.com/github/aicrumb/doohickey/blob/main/Doohickey_Diffusion.ipynb) by aicrumb (stable diffusion with CLIP guidance, perlin init, lots more)
* \[2022/09/18] [optimized colab](https://colab.research.google.com/github/neonsecret/stable-diffusion/blob/main/optimized_colab.ipynb) by neonsecret (stable diffusion with nice gradio gui in colab)
* \[2022/09/13] [Stable Diffusion Batch](https://colab.research.google.com/github/visoutre/ai-notebooks/blob/main/Stable_Diffusion_Batch.ipynb) by visoutre (includes tiled upscaling!) ([tutorial](https://www.youtube.com/watch?v=negxfEteqDc))
* \[2022/09/11] [Easy Diffusion](https://colab.research.google.com/github/WASasquatch/easydiffusion/blob/main/Stability_AI_Easy_Diffusion.ipynb) by WASasquatch and NOP (stable diffusion with lots of still image features)
* \[2022/09/07] [NMKD Stable Diffusion GUI](https://nmkd.itch.io/t2i-gui) (nice easy Windows GUI for stable by Noomkrad)
* \[2022/08/30] [Simple Stable Diffusion](https://colab.research.google.com/drive/1BRvQ6sseZxDDOv_b-ngVR0UdRG8P0Qd4?usp=sharing) by @ai\_curio (supports prompt weighting)
* \[2022/08/29] [Stable Diffusion WebUi](https://colab.research.google.com/github/altryne/sd-webui-colab/blob/main/Stable_Diffusion_WebUi_Altryne.ipynb) by @altryne (fancy Gradio UI for stable diffusion)
* \[2022/08/28] [Prompt Parrot v2.0](https://colab.research.google.com/drive/1GtyVgVCwnDfRvfsHbeU0AlG-SgQn1p8e?usp=sharing) by @KyrickYoung (train gpt2 on prompt list then generate with stable-diff)
* \[2022/08/23] [Stable Diffusion Interpolation](https://colab.research.google.com/drive/1EHZtFjQoRr-bns1It5mTcOVyZzZD9bBc?usp=sharing) by @ygantigravity (animate from own prompt to another!)
* \[2022/08/23] [Deforum Stable Diffusion](https://colab.research.google.com/github/deforum/stable-diffusion/blob/main/Deforum_Stable_Diffusion.ipynb) ([discord link](https://discord.com/invite/upmXXsrwZc))
* \[2022/08/23] [FunkyHorses Stable Diffusion](https://colab.research.google.com/drive/1xghRe23MFDTF_nZaE423V9x5S8F3Z4Ri) by  Coskaiy/Corran (has neat import from spreadsheet)
* \[2022/08/23] [NOP's Stable Diffusion Colab v0.19](https://colab.research.google.com/drive/1jUwJ0owjigpG-9m6AI_wEStwimisUE17?usp=sharing\&fbclid=IwAR1uofFGi7GeZi4BX2po14DjEFE2FpnGvIdUGaudPFJxQ9Tm0KHtciqwpWQ) by NOP#1337
* \[2022/08/23] [Stable Diffusion Lite](https://colab.research.google.com/drive/1cJPmCCUFqVMaF--ee51RVFDCOF09Epbc?usp=sharing) by @future\_\_art (prompt queueing and seed mining)
* \[2022/08/23] [Interactive notebook for Stable Diffusion](https://colab.research.google.com/github/cpacker/stable-diffusion/blob/interactive-notebook/scripts/stable_diffusion_interactive_colab.ipynb)
* \[2022/08/22] [Stable Diffusion HuggingFace space](https://huggingface.co/spaces/stabilityai/stable-diffusion) by stabilityai
* \[2022/08/22] [Stable Diffusion notebook](https://colab.research.google.com/github/pharmapsychotic/ai-notebooks/blob/main/pharmapsychotic_Stable_Diffusion.ipynb) by @pharmapsychotic  (easy to use and batch to gdrive) ([tutorial](http://brainartlabs.com/2022/08/28/stablediffusion-notebook-by-pharmapsychotic-setup-tutorial/))
* \[2022/08/22] [Official Stable Diffusion notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) - requires hugging face account
* \[2022/08/22] [DiscoStream v1.1](https://colab.research.google.com/github/WASasquatch/discostream/blob/main/DiscoStream.ipynb) by @WASasquatch
* \[2022/08/20] [Disco Diffusion v5.6 with Inpainting](https://colab.research.google.com/github/kostarion/disco_diffusion_inpainting_colab/blob/main/Disco_Diffusion_v5_6%2C_Inpainting_mode_by_cut_pow.ipynb) by @cut\_pow
* \[2022/08/18] [DiscoArt \[w/ Batch Prompts + GPT3 generator\]](https://colab.research.google.com/github/Skquark/structured-prompt-generator/blob/main/DiscoArt_\[_w_Batch_Prompts_%26_GPT_3_Generator].ipynb) by Skquark
* \[2022/08/16] [WAS's Disco Diffusion v5.6-9](https://colab.research.google.com/github/WASasquatch/disco-diffusion-portrait-playground/blob/main/WAS's_Disco_Diffusion_v5_6_9_\[Portrait_Generator_Playground].ipynb) Portrait Generator Playground by WASasquatch
* \[2022/08/08] [Paint Pour Diffusion](https://colab.research.google.com/github/spacerockzero/EclecticBeams-AI-notebooks/blob/main/Paint_Pour_Diffusion_v1_0_\[DD_5_6].ipynb) by @EclecticBeams (diffusion trained on paint pour art)
* \[2022/07/31] [Huemin Jax Diffusion 2.7 August 2022](https://colab.research.google.com/drive/1jvrFECJeaCTeR52acsmp5CJalOStl-OS?usp=sharing) by @huemin\_art
* \[2022/07/30] [CLIP Prior + VQGAN](https://colab.research.google.com/drive/1yOpCY9eXvzELHppvh-o0DevhxVYOGr5i) by @RiversHaveWings and @jd\_pressman (a new VQGAN notebook)
* \[2022/07/23] [Textile Diffusion](https://colab.research.google.com/github/KaliYuga-ai/Textile-Diffusion/blob/main/Textile_Diffusion_v1_0.ipynb) by @KaliYuga (diffusion trained on textiles)
* \[2022/07/21] [Floral Diffusion](https://colab.research.google.com/github/jags111/floral-diffusion/blob/main/Floral_Diffusion_V1_DD_v5_6.ipynb) by @jags111 (fine tunes for floral)
* \[2022/07/18] [Liminal Diffusion v1](https://colab.research.google.com/drive/1pHBrk8FsSmvu_TZREhZpK4jwmRBLebS_) by @BrainArtLabs (diffusion trained on liminal photographs)
* \[2022/07/18] [DifNESfusion 1.35](https://colab.research.google.com/github/0xLufiQ/DifNESFusion-1.0/blob/main/DifNESfusion_1_35.ipynb) by @LufiQ (fork of PixelArtDiffusion with NES dataset)
* \[2022/07/18] [Medieval Diffusion](https://colab.research.google.com/github/KaliYuga-ai/Medieval-Diffusion/blob/main/Medieval_Diffusion_v1_0.ipynb) by @KaliYuga (diffusion trained on medieval art)
* \[2022/07/17] [FeiArt\_Handpainted CG Diffusion](https://colab.research.google.com/github/FeiArt-Ai/Handpainted-CG-Diffusion/blob/main/FeiArt_Handpainted_CG_Diffusion.ipynb) by @FeiArt\_AiArt
* \[2022/07/17] [Fantasy Diffusion](https://colab.research.google.com/github/lavista9008/fantasydiffusion/blob/main/Fantasy_Diffusion.ipynb) by @LaVista (diffusion trained on fantasy art)
* \[2022/07/15] [Ukiyo-e Portrait Diffusion](https://colab.research.google.com/github/avantcontra/ukiyoe-portrait-diffusion/blob/main/Ukiyoe_Portrait_Diffusion.ipynb) by @avantcontra
* \[2022/07/15] [Lithography Diffusion](https://colab.research.google.com/github/KaliYuga-ai/Lithography-Diffusion/blob/main/Lithography_Diffusion_v1_0.ipynb) by @KaliYuga (diffusion trained on lithographic landscapes and portraits)
* \[2022/07/06] [Disco v5.2 Dynamic Prompting](https://colab.research.google.com/drive/1D-PX1x0rKY3c5jL8L215n_PyjzF4dJn_?usp=sharing) (dynamic prompt variations - [tutorial video](https://www.youtube.com/watch?v=UOJ8LiNr1_c))
* \[2022/07/06] [Watercolor Diffusion](https://colab.research.google.com/github/KaliYuga-ai/Watercolor-Diffusion/blob/main/Watercolor_Diffusion_v1_0.ipynb) by @KaliYuga (diffusion trained on watercolor paintings)
* \[2022/07/05] [EnzymeZoo edits to Huemin Jax Diffusion](https://colab.research.google.com/drive/1z1RF3hcqEQLNue_lJ_63ipemHXUYwxHi?usp=sharing) by @EnzymeZoo (brought over masking from Majesty)
* see older notebooks in [the archive](https://pharmapsychotic.com/archive.html)
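Several of the animation notebooks above (for example, Stable Diffusion Interpolation and Latent Blending) work by interpolating between two points in latent space and decoding each intermediate step. Spherical linear interpolation (slerp) is commonly preferred over a straight linear blend because diffusion latents are roughly Gaussian, and slerp keeps intermediate points at a plausible magnitude. A minimal NumPy sketch of slerp, illustrative only and not code from any particular notebook:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical interpolation between two latent vectors.

    t: blend factor in [0, 1]; v0, v1: flat latent arrays.
    Falls back to linear interpolation when the vectors are
    nearly parallel and slerp becomes numerically unstable.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    if np.abs(dot) > 1.0 - eps:
        return (1 - t) * v0 + t * v1           # nearly parallel: plain lerp
    theta = np.arccos(dot)                      # angle between the latents
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# An 8-step animation path between two random stand-in "latents":
rng = np.random.default_rng(0)
a, b = rng.standard_normal(16), rng.standard_normal(16)
frames = [slerp(t, a, b) for t in np.linspace(0, 1, 8)]
```

A straight lerp between two Gaussian samples passes through points with smaller norm than either endpoint, which tends to decode to washed-out frames; slerp avoids that.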

### StyleGAN

* \[2022/08/23] [Painting with StyleGAN](https://colab.research.google.com/github/jmoso13/Painting_with_StyleGAN/blob/main/Painting_with_StyleGAN.ipynb) by @jmoso13 ([tutorial](https://www.youtube.com/watch?v=pkYHMPoZrkg)) - use VAE to navigate and animate!
* \[2022/04/25] [StyleGAN-Humans + CLIP](https://colab.research.google.com/drive/1H-rGlKbILaZbDfTsrQqSxw2fH7y2elg6?usp=sharing) modified by Diego Porres to use StyleGAN3
* [StyleGAN2-ADA](https://colab.research.google.com/github/dvschultz/stylegan2-ada-pytorch/blob/main/SG2_ADA_PyTorch.ipynb) - train your own StyleGAN2 model from an image set you create
* [StyleCLIP](https://colab.research.google.com/github/orpatashnik/StyleCLIP/blob/main/notebooks/StyleCLIP_global.ipynb#scrollTo=deFVuu4drKHp) - text-driven manipulation of StyleGAN imagery
* [Structured Dreaming](https://colab.research.google.com/drive/1tf-xUjhYm0p4pQSyKD-jq6EAGl_L5Al0#scrollTo=XYNwvEy3HH49) - Styledreams with helpers
* [Structured Dreaming](https://colab.research.google.com/github/ekgren/StructuredDreaming/blob/main/colabs/Structured_Dreaming_Styledreams.ipynb) (CLIP+StyleGAN) by @ArYoMo ([tweet](https://twitter.com/ArYoMo/status/1444398963571019783?s=20))
* [StyleGAN 2 pretrained models](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/) - can use these with Structured Dreaming
* [StyleGAN 2 awesome pretrained models](https://github.com/justinpinkney/awesome-pretrained-stylegan2) - BIG collection of models
* [StyleGAN 3 training](https://colab.research.google.com/github/dvschultz/stylegan3/blob/main/SG3.ipynb) - train a StyleGAN and do interpolation video by @dvsch (currently busted)
* [StyleGAN 3 music video generation](https://colab.research.google.com/drive/1BXNHZBai-pXtP-ncliouXo_kUiG1Pq7M?usp=sharing) - ([tweet](https://twitter.com/EarthML1/status/1449776224222523397?s=20\&t=oPLA241CLsGzifdwWcFgWQ))
* [StyleGAN 3 + CLIP](https://colab.research.google.com/drive/1ZHg3aaKNts1ZWtIyVFpI_f8khMsNZe1q) by Annas
* [StyleGAN3 + CLIP](https://colab.research.google.com/github/ouhenio/StyleGAN3-CLIP-notebook/blob/main/StyleGAN3%2BCLIP.ipynb) by @nshepperd1 and @RiversHaveWings
* [StyleGANXL + CLIP](https://colab.research.google.com/drive/1ZEnJE-EUnh-aCXJbu0kVhi8_Qdi2BV-S) by Eugenio Herrera and Rodrigo Mello
* [Lucid Sonic Dreams](https://colab.research.google.com/drive/1Y5i50xSFIuN3V4Md8TB30_GOAtts7RQD?usp=sharing) - animate path through StyleGAN latent space with music ([github](https://github.com/mikaelalafriz/lucid-sonic-dreams))

### Video

**Text to video**

* ModelScope ([colab](https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing)) ([huggingface](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis)) - super fun, but prominent Shutterstock watermarks
* \[2023/03/20] [ModelScope text-to-video](https://colab.research.google.com/github/camenduru/text-to-video-synthesis-colab/blob/main/text_to_video_synthesis.ipynb) Colab by @camenduru ([youtube](https://www.youtube.com/watch?v=b8D4am73e6I)) ([github](https://github.com/camenduru/text-to-video-synthesis-colab))
* \[2023/03/18] [ModelScope text-to-video](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis) huggingface space
* Text2Video-zero ([colab](https://colab.research.google.com/github/camenduru/text2video-zero-colab/blob/main/text2video_custom.ipynb)) ([github](https://github.com/Picsart-AI-Research/Text2Video-Zero)) ([huggingface](https://huggingface.co/spaces/PAIR/Text2Video-Zero)) ([webui ext](https://github.com/SHI-Labs/Text2Video-Zero-sd-webui)) - zero shot video from Stable Diffusion

**Interpolation**

* [Video Enhance AI](https://www.topazlabs.com/video-enhance-ai/ref/1354/) by Topaz Labs - commercial upscaling and frame interpolation <- excellent
* [AnimationKit AI](https://colab.research.google.com/github/sadnow/AnimationKit-AI_Upscaling-Interpolation_RIFE-RealESRGAN/blob/main/AnimationKit_Rife_RealESRGAN_Upscaling_Interpolation.ipynb) - video upscaling and interpolation tool <- great
* [FILM colab](https://colab.research.google.com/drive/1tbbbnQge0yb0LmnWNchEKNhjtBNC6jX-) - by @KyrickYoung has pause, loops, reverse <- my fave FILM
* [3D Ken Burns Effect from single image](https://colab.research.google.com/drive/1hxx4iSuAOyeI2gCL54vQkpEuBVrIv1hY) - animated video from 2D image
* [3D Photo Inpainting](https://colab.research.google.com/github/fzantalis/colab_collection/blob/master/3D_Photo_Inpainting.ipynb) - cool 3D effects for 2D images
* [Animating Pictures with Eulerian Motion Fields](https://eulerian.cs.washington.edu/) - code not out yet, looks like it'll be awesome
* [DAIN colab](https://colab.research.google.com/github/baowenbo/DAIN/blob/master/Colab_DAIN.ipynb) - depth aware interpolation
* [EbSynth](https://ebsynth.com/) - stylize video by giving it AI-generated or hand-painted keyframes
* [ESRGAN 4 Video](https://colab.research.google.com/github/MSFTserver/AI-Colab-Notebooks/blob/main/ESRGAN_4_Video.ipynb) - increase resolution of video with ESRGAN
* [FILM: Frame Interpolation for Large Motion](https://github.com/google-research/frame-interpolation) - ([replicate link](https://replicate.com/google-research/frame-interpolation)) smooth interpolation/morphing
* [Flowframes](https://nmkd.itch.io/flowframes) - free Windows tool with patreon option, uses RIFE and other models
* [PyTTI-Tools: FILM](https://colab.research.google.com/github/pytti-tools/frame-interpolation/blob/main/PyTTI_Tools_FiLM-colab.ipynb) - @DigThatData's version of FILM for video frames
* [RIFE](https://colab.research.google.com/github/HeylonNHP/RIFE-Colab/blob/main/RIFE_Colab.ipynb) - smooth interpolation of video to increase frame rate
* [Sequence Frame Interpolation](https://colab.research.google.com/drive/1VA3Mw2Cr3FoChBE7kQlqbS2W2z8DCdBB?usp=sharing) - batch version of FILM
* [Super Slomo](https://colab.research.google.com/github/tugstugi/dl-colab-notebooks/blob/master/notebooks/SuperSloMo.ipynb) - another way to increase frame rate of video
* [Video Art and Styling Tools](https://colab.research.google.com/drive/1Y3a76IS_dqqVqjVGuIFPu91dCUmASkB2) - by @Coskaiy (style transfer, interpolation, superres, and more)
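The interpolation tools above (FILM, RIFE, DAIN) synthesize motion-aware in-between frames. The naive baseline they improve on is a plain cross-fade, which blends pixel values directly and produces ghosting on anything that moves. A toy NumPy version of that baseline, for comparison only:

```python
import numpy as np

def crossfade(frame_a, frame_b, n_inbetween):
    """Generate n_inbetween frames by linear pixel blending.

    Unlike FILM/RIFE/DAIN, this ignores motion entirely, so moving
    objects "ghost" (appear semi-transparent in both positions).
    """
    frames = []
    for i in range(1, n_inbetween + 1):
        t = i / (n_inbetween + 1)              # 0 < t < 1, evenly spaced
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Doubling the frame rate of a two-frame, 4x4 grayscale "video":
a = np.zeros((4, 4))
b = np.ones((4, 4))
mid = crossfade(a, b, 1)[0]                    # single in-between frame
```

The motion-aware tools instead estimate optical flow between the two frames and warp pixels along it before blending, which is why they handle large motion so much better.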

### Prompt Engineering

To get good results with CLIP-guided diffusion and VQGAN+CLIP, you need to find the right words and phrases to direct the neural network toward the content and style you're looking for.
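A prompt convention that works well with these models is: subject first, then medium or style, artist references, and quality modifiers, separated by commas. The tiny helper below just assembles strings in that shape; the structure is a community convention rather than a requirement of any model, and the names in the example are arbitrary:

```python
def build_prompt(subject, medium="", artists=(), modifiers=()):
    """Assemble a text-to-image prompt: subject, then medium,
    artist references, and extra modifiers, comma-separated."""
    parts = [subject]
    if medium:
        parts.append(medium)
    parts += [f"by {a}" for a in artists]
    parts += list(modifiers)
    return ", ".join(parts)

prompt = build_prompt(
    "a lighthouse on a cliff at sunset",
    medium="oil painting",
    artists=["Claude Monet"],
    modifiers=["golden hour lighting", "highly detailed"],
)
# -> "a lighthouse on a cliff at sunset, oil painting,
#     by Claude Monet, golden hour lighting, highly detailed"
```

The Image-to-Text tools below run this process in reverse: they recover a prompt in roughly this shape from an existing image.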

**Image to Text**

* [Antarctic-Captions](https://colab.research.google.com/drive/1FwGEVKXvmpeMvAYqGr4z7Nt3llaZz-F8) by @dzryk
* [BLIP image captioning](https://huggingface.co/spaces/Salesforce/BLIP) HuggingFace space
* [CLIP Interrogator](https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb) by @pharmapsychotic - image to prompt! ([huggingface](https://huggingface.co/spaces/pharma/CLIP-Interrogator)) ([lambda](https://cloud.lambdalabs.com/demos/ml/CLIP-Interrogator)) ([replicate](https://replicate.com/pharmapsychotic/clip-interrogator))
* [CLIP prefix captioning](https://colab.research.google.com/drive/1tuoAC5F4sC7qid56Z0ap-stR3rwdk0ZV?usp=sharing#scrollTo=pohtQ8AfWNk_) inference notebook ([github](https://github.com/rmokady/CLIP_prefix_caption))
* [LLaVa: Large Language and Vision Assistant](https://llava-vl.github.io/) - ask vision model to describe image
* [personality-clip](https://colab.research.google.com/drive/171GirNbCVc-ScyBynI3Uy2fgYcmW3BB9?usp=sharing) by @dzryk
* PEZ: Prompts made EZ - generate a prompt from an image, or shorten a long prompt ([huggingface](https://huggingface.co/spaces/tomg-group-umd/pez-dispenser)) ([colab](https://colab.research.google.com/drive/1VSFps4siwASXDwhK_o29dKA9COvTnG8A?usp=sharing))

### Other

* [sdtools.org](https://sdtools.org/) - cool wiki covering tools and methods related to Stable Diffusion
* [JAX CLIP Guided Diffusion 2.7 Guide](https://docs.google.com/document/d/11HWN5e57taWdpyZlW5s6gqzrwMsLlmOQivyJncOPPhE) - Google doc from huemin
* [Zippy's Disco Diffusion Cheatsheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit) - Google Doc guide to Disco and all the parameters
* [EZ Charts](https://docs.google.com/document/d/1ORymHm0Te18qKiHnhcdgGp-WSt8ZkLZvow3raiu2DVU/edit) - Google Doc Visual Reference Guides for CLIP-Guided Diffusion (see what all the parameters do!)
* [Hitchhiker's Guide To The Latent Space](https://docs.google.com/document/d/1ON4unvrGC2fSEAHMVb4idopPlWmzM0Lx5cxiOXG47k4/edit)  - a guide that's been put together with lots of colab notebooks too
* [Resources for GAN Artists](https://docs.google.com/document/d/18BrtW9RzI9rRAAYnmxES59HOxeC_QH1_BT7VDcP-32E/mobilebasic) - another big Google Doc with notebooks and resources for AI art
* [Way of the TTI Artist](https://docs.google.com/document/d/1EvkiHa12ButetruSBr82MJeomHfVRkvczB9-FgqtJ48/mobilebasic) - pytti guide
* [Guide to install Disco Diffusion 5 on Windows with WSL](https://gist.github.com/MSFTserver/6212f85d79058a024b0e49f3d19a1115#file-wsl-disco-v5-tutorial-md) - haven't tried this yet; the challenge is pytorch3d
* Great explanation of VQGAN+CLIP - <https://ljvmiranda921.github.io/notebook/2021/08/08/clip-vqgan/>
* Nice [overview of lots of different optimization algorithms](https://ruder.io/optimizing-gradient-descent/) SGD, Adam, RMSProp etc and their differences (also covered in this [lecture](https://www.youtube.com/watch?v=_JB0AO7QxSA\&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv\&index=7))
* Stanford's Convolutional Neural Networks class on YouTube - <https://www.youtube.com/playlist?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv>
* [ClipMatrix](https://colab.research.google.com/drive/1rT_NIYryAC1UNBsETm6XbgW3DWqIJnmf?usp=sharing) - text controlled 3D mesh deformation and stylization
* [CLIP-Mesh](https://colab.research.google.com/drive/15Fm4EhLlB20EugLUnTdhSJElvGVCU7Ys?usp=sharing) - text to 3D mesh with texture and normal map (still pretty simple and mixed results)
* [DreamFields](https://colab.research.google.com/drive/1u5-zA330gbNGKVfXMW5e3cmllbfafNNB?usp=sharing) - latest text to 3D ([github](https://github.com/shengyu-meng/dreamfields-3D))
* [ImageSorter](https://colab.research.google.com/github/pharmapsychotic/ai-notebooks/blob/main/pharmapsychotic_ImageSorter.ipynb) by @pharmapsychotic - sort images by similarity (nice for StyleGAN/FiLM animated loops)
* [PIFuHD Colab](https://colab.research.google.com/drive/11z58bl3meSzo6kFqkahMa35G5jmh2Wgt) - Human photo to 3D mesh of the human
* [Point-E](https://huggingface.co/spaces/openai/point-e) - OpenAI's text to 3d point clouds ([github](https://github.com/openai/point-e))
* [text2mesh](https://www.kaggle.com/code/neverix/text2mesh/notebook) - Kaggle notebook for text to 3D mesh
* [Watermark images](https://colab.research.google.com/drive/1OjKvOEYUOA8d1sMPL3hBVeCryGxZW-e2?usp=sharing) - little notebook to add text watermark to images
* [Zero-Shot Text-Guided Object Generation with Dream Fields](https://colab.research.google.com/drive/17GtPqdUCbG5CsmTnQFecPpoq_zpNKX7A?usp=sharing) - text to 3D render

### AI Art Discord Servers

There are now quite a few Discord servers dedicated to AI artists and to discussing text-to-image techniques.

* [Ai NFT Discord](https://discord.com/invite/YagHZJjzsm) - AI NFT Consortium. Has especially useful StyleGAN training resources
* [Disco Diffusion Discord](https://discord.com/invite/V9SeW7GMgZ) - chat and tech support for the Disco notebook
* [EleutherAI Discord](https://www.eleuther.ai/get-involved/) - researchers and good art room with more technical discussions
* [Jukebox Community Discord](https://discord.gg/aEqXFN9amV) - server for using OpenAI Jukebox for music generation
* [LAION Discord](https://discord.com/invite/e2GFUEfK) - group working on replicating a full DALL-E
* [NeuralismAI Discord](https://discord.gg/6Tgu7d66Eu) -  AI art competitions and knowledge exchange
* [Prompt Sharing Discord](https://discord.com/invite/ErxMhB3Qkf) - community for sharing text to image prompts
* [VQGAN+CLIP Discord](https://discord.com/invite/CDUM5V54PC) - home of Instagram #vqganclipcommunitycolab
* [Zoetrope Central Spoke Discord](https://discord.com/invite/QPxEB8fcrh) - support and discussion of the Looking Glass notebook

**Online Galleries to Showcase Art**

* OnCyber art galleries - [https://oncyber.io](https://oncyber.io/) - Cool 3D art gallery to showcase your art with links to NFT market
* Spatial - [https://spatial.io](https://spatial.io/)

