Text Generation Web UI

A Gradio web UI for Large Language Models.

Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of Text Generation.



Features

  • 3 interface modes: default (two columns), notebook, and chat.

  • Dropdown menu for quickly switching between different models.

  • Large number of extensions (built-in and user-contributed), including Coqui TTS for realistic voice outputs, Whisper STT for voice inputs, translation, multimodal pipelines, vector databases, Stable Diffusion integration, and a lot more. See the wiki and the extensions directory for details.

  • Precise chat templates for instruction-following models, including Llama-2-chat, Alpaca, Vicuna, and Mistral.

  • LoRA: train new LoRAs with your own data, load/unload LoRAs on the fly for generation.

  • Transformers library integration: load models in 4-bit or 8-bit precision through bitsandbytes, use llama.cpp with transformers samplers (llamacpp_HF loader), and run CPU inference in 32-bit precision using PyTorch.

  • OpenAI-compatible API server with Chat and Completions endpoints -- see the examples.
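The chat templates mentioned above wrap each conversation turn in the markers a given model was trained on. As a rough illustration only (not the UI's actual implementation), the Llama-2-chat format looks like this:

```python
def build_llama2_prompt(system_msg, user_msg):
    """Format one turn in the Llama-2-chat style.

    The [INST]/<<SYS>> markers are the ones Llama-2-chat was trained on;
    other models (Alpaca, Vicuna, Mistral) use different markers, which is
    why picking the matching template matters.
    """
    return (
        f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = build_llama2_prompt("You are a helpful assistant.", "What is Gradio?")
```

Sending a prompt with the wrong markers usually still generates text, but instruction-following quality degrades noticeably.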
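Because the API follows the OpenAI schema, any OpenAI-style client can talk to it. Below is a minimal sketch using only the standard library; the URL assumes the server's default API port of 5000, and the server must be started with the API enabled:

```python
import json
import urllib.request

# Assumed default: the API server listens on port 5000 with an
# OpenAI-compatible /v1/chat/completions endpoint.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_chat_request(messages, max_tokens=200, temperature=0.7):
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def chat(messages):
    """POST the payload to the local server and return the reply text."""
    payload = build_chat_request(messages)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    # OpenAI-style responses put the reply under choices[0].message.content.
    return data["choices"][0]["message"]["content"]

# Example (requires the server to be running with the API enabled):
# chat([{"role": "user", "content": "Hello!"}])
```

Since the request and response shapes match the OpenAI schema, the official openai Python client can also be pointed at the local server instead of hand-rolling HTTP calls.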

How to install

  1. Clone or download the repository.

  2. Run the start_linux.sh, start_windows.bat, start_macos.sh, or start_wsl.bat script depending on your OS.

  3. Select your GPU vendor when asked.

  4. Once the installation ends, browse to http://localhost:7860/?__theme=dark.

  5. Have fun!

To restart the web UI in the future, just run the start_ script again. This script creates an installer_files folder where it sets up the project's requirements. In case you need to reinstall the requirements, you can simply delete that folder and start the web UI again.

The script accepts command-line flags. Alternatively, you can edit the CMD_FLAGS.txt file with a text editor and add your flags there.
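For example, a CMD_FLAGS.txt that exposes the UI on the local network and enables the API server might look like this (flag names shown as an illustration; consult the list of command-line flags for the full set):

```text
--listen --api
```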

To get updates in the future, run update_linux.sh, update_windows.bat, update_macos.sh, or update_wsl.bat.

Setup details and information about installing manually

List of command-line flags
