DIY AI Assistant: A Guide to Running Your Own Private Open Assistant on Genesis Cloud

1. Introduction

In this blog post, we will guide you through the process of setting up and running your own private Open Assistant. Open Assistant is a chat-based assistant capable of understanding tasks, interacting with third-party systems, and retrieving information dynamically. It is a genuinely open-source project, unlike other popular offerings that merely include the word “open” in their names.

By following this tutorial, you can harness the power of Open Assistant without relying on third-party inference APIs or exposing your conversations to external entities.

Requirements

To follow this tutorial, you will need:

  1. A Genesis Cloud account (sign up here if you do not have one yet)
  2. An SSH key added to your Genesis Cloud account
  3. An SSH client on your local machine

2. Step-by-Step Guide

Set Up a Genesis Cloud Instance

  1. Sign in to the Genesis Cloud console using your account.
  2. Click Create New Instance.
  3. Choose the location for your new instance: 🇮🇸 / 🇳🇴

    If you’re unsure, select the location closer to you to reduce latency; this will pay off later when you interact with the web UI.

  4. Assign a descriptive hostname to the instance, such as openassistant-tutorial or simply oa.
  5. Select the GPU type. To avoid running out of video memory (VRAM) for your model later on, opt for an NVIDIA® GeForce™ RTX 3090 instance. The “CPU & Memory optimized” instance types are a good choice, as they provide double the system memory, which is sensible for this use case.
  6. Make sure the driver installation toggle is inactive. We’ll manually install the correct drivers to ensure compatibility with all software.
  7. Choose Ubuntu version 20.04 as the system image for your instance.
  8. The SSH key you added to your account earlier should be pre-selected.
  9. Click Create Instance.

Your instance will be created and will appear on the console dashboard. A message will be displayed, and the public IPv4 address will become visible. This process usually takes 1-2 minutes.

Preparing the Base System

Unless stated otherwise, execute all the following steps via SSH on your instance.

Windows users can use PuTTY (guide available here), while Linux and macOS users can use the command line SSH client as described in this knowledge base entry.
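
For example, on Linux or macOS (assuming the instance's default ubuntu user and the public IPv4 address shown on the dashboard):

ssh ubuntu@<your-instance-ip>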

Install CUDA 11.8

At the time of writing, CUDA 12 has just been released. Since compatibility of many software packages with it is not yet a given, we install CUDA 11.8 to avoid unexpected issues.
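
One common way to install it on Ubuntu 20.04 is via NVIDIA's apt repository. The following is a sketch based on NVIDIA's published instructions; check the CUDA 11.8 download page for the exact commands for your setup:

# Register NVIDIA's CUDA apt repository (Ubuntu 20.04, x86_64)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
# Install CUDA 11.8 (this also pulls in a compatible NVIDIA driver)
sudo apt-get install -y cuda-11-8
# Reboot so the new driver is loaded, then verify with nvidia-smi
sudo reboot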

Setup of oobabooga/text-generation-webui

We will use the text-generation-webui to interface with the Open Assistant model.
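
The project's README describes the installation in detail. As a condensed sketch of the usual steps (assuming Miniconda is already installed; the environment name textgen matches the one used later in this guide):

# Create and activate a dedicated conda environment
conda create -n textgen python=3.10
conda activate textgen
# Install PyTorch with CUDA 11.8 support
pip3 install torch --index-url https://download.pytorch.org/whl/cu118
# Fetch the web UI and install its dependencies
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt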

Downloading the Open Assistant Model

As outlined in the README of the text-generation-webui, we have to place the models in the aptly named models folder. Luckily, this is mostly automated. Execute the following command in the text-generation-webui directory to take care of it:

python3 download-model.py OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5

This will download the ~23 GB of open/free model data from the Hugging Face servers. There are other variants of questionable legality floating around; use those at your own risk. Your Genesis Cloud instance can (by default) download at up to 1 Gbit/s, so you can expect this step to take 4-5 minutes depending on the load and connectivity of the servers.

Starting the Web UI

Now that we have the model in place, we can start the web UI:

python3 server.py --chat --model OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5 --gpu-memory 22 --share

If you use a GPU other than an RTX 3090, you need to adjust the --gpu-memory parameter. The same is true if you want to use multiple GPUs. Running python3 server.py -h will provide more details and examples. If you disconnected your SSH session (for example, to set up port forwarding), you need to re-activate the conda environment, switch to the text-generation-webui directory, and start the server again:

conda activate textgen
cd text-generation-webui
python3 server.py --chat --model OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5 --gpu-memory 22 --share
# Give it a few seconds to load the model and start up

You can now access the web UI at the displayed URL (https://….gradio.live) 🎉

We recommend not relying on the public Gradio proxy service (--share) to access the web UI, but using another access path instead. There are many ways to skin this cat (SSH port forwarding, a local proxy with TLS termination, (free) Cloudflare fronting, …), so the details are out of scope for this article; note that avoiding the public Gradio proxy also makes the UI much more responsive. One option is sketched below.
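
As an example, SSH local port forwarding is quick to set up. A minimal sketch, assuming the web UI listens on its default port 7860 and the instance uses the ubuntu user:

# On your local machine: forward local port 7860 to the instance
ssh -L 7860:localhost:7860 ubuntu@<your-instance-ip>
# On the instance: start the server without --share
python3 server.py --chat --model OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5 --gpu-memory 22
# The web UI is now reachable at http://localhost:7860 on your machine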

3. Using the Web UI

Now that everything is up and running, we want to use the web UI. It is as easy as it gets:

[Screenshot: Open Assistant in the oobabooga chat interface]

If you only get truncated responses, check your console output for OutOfMemoryError messages. You can work around those by using an instance with multiple GPUs (e.g., 2x RTX 3090). If you use multiple GPUs, make sure to adjust the --gpu-memory parameter appropriately by listing the amount of VRAM to allocate per GPU, separated by spaces (e.g., --gpu-memory 23 23 for 2x RTX 3090).

Keep accelerating 🚀

The Genesis Cloud team

Never miss out again on Genesis Cloud news and our special deals: follow us on Twitter, LinkedIn, or Reddit.

Sign up for an account with Genesis Cloud here and benefit from $15 in free credits. If you want to find out more, please write to [email protected].