
Dilyara Zharikova (Baymurzina) edited this page May 9, 2023 · 1 revision

What Is Deepy?

Deepy is a free, open-source Multiskill AI Assistant built with the DeepPavlov Conversational AI Stack. It is a set of several distributions located within the larger DeepPavlov Dream Platform, and it is built on top of DeepPavlov Agent running as a Docker container. It runs on x86_64 machines and works best with NVIDIA GPUs. You can use it to study how multiskill AI assistants are built, as a base for research projects, or for commercial systems where running an open-source system on-premises (or in the cloud) is a requirement.

Deepy is based on DeepPavlov Agent, a free, open-source conversational orchestrator that runs in a Docker container. The rest of the AI Assistant runs as a collection of Docker containers. These containers include annotators, skills, skill and response selectors, as well as a number of supporting services used by these components.

History of Deepy

Deepy was originally built in October 2020 as a demo of a simple multiskill AI Assistant to be shown at DeepPavlov's talk at the NVIDIA GTC Fall 2020 conference. However, Deepy is based on the 2020 DeepPavlov Dream AI Assistant Demo, which in turn is an adaptation of the original DREAM Socialbot created by DeepPavlov's student team for the Alexa Prize Socialbot Grand Challenge 3 (2019-2020).

Deepy Architecture

[Deepy architecture diagram]

Running Deepy on a PC

Getting Deepy up and running is quite simple, but it might require one or more GPUs depending on the number of GPU-heavy components you want to use in your solution.

  1. Clone the repository
  2. Change directory to it
  3. Pick the distribution you want to run from /assistant_dists
  4. Copy docker-compose.yml from it to the root directory of your repository (the system will ask you to confirm overwriting the existing one; confirm it)
  5. Copy pipeline_conf.json from it to the /agent directory of your repository (the system will ask you to confirm overwriting the existing one; confirm it)
  6. Type the command docker-compose -f docker-compose.yml build to build your distribution, but don't press ENTER just yet
  7. [Optional] If you want to use the ASR & TTS modules, add their compose file to the command; in this case the command above becomes: docker-compose -f docker-compose.yml -f asr_tts.yml build
  8. For GPU-intensive services, edit the corresponding lines in your docker-compose.yml and asr_tts.yml (if used) to specify the GPUs you want to run them on. A typical GPU-intensive service that uses a BERT model needs ~4 GB of GPU RAM, so plan accordingly
  9. Build your system by running the command you've formed above
  10. Once done, run the same command again, replacing build at the end with up
  11. Once the agent is up and running, use your favorite tool (e.g., curl or Postman) to talk to Deepy via the http://localhost:4242/ endpoint by sending the following JSON:
{
   "user_id" : "24424252524525",
   "payload" : "Hello!"
}
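As a sketch of step 11, the same request can also be sent from Python's standard library. This assumes the agent is listening on the default http://localhost:4242/ endpoint from the steps above; the user_id value is arbitrary:

```python
import json
import urllib.request

AGENT_URL = "http://localhost:4242/"  # default Deepy agent endpoint

def build_request(user_id: str, payload: str) -> bytes:
    """Encode the JSON body the agent expects."""
    return json.dumps({"user_id": user_id, "payload": payload}).encode("utf-8")

def talk(user_id: str, text: str) -> dict:
    """POST one utterance to the agent and return its parsed JSON reply."""
    req = urllib.request.Request(
        AGENT_URL,
        data=build_request(user_id, text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the agent to be up and running):
# reply = talk("24424252524525", "Hello!")
# print(reply["response"])
```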

If everything works correctly, you'll get a response like this:

{
    "dialog_id": "773dc35a567072c142b9d6d8bdea00fb",
    "utt_id": "dfd1c751875439c6d98e3bb1410448f7",
    "user_id": "r234242343",
    "response": "Hello, I'm a lunar assistant Deepy! How are you?",
    "active_skill": "program_y",
    "debug_output": [
        {
            "skill_name": "harvesters_maintenance_skill",
            "annotations": {
                "emotion_classification": [
                    {
                        "anger": 0.46746790409088135,
                        "fear": 0.3528013229370117,
                        "joy": 0.3129902184009552,
                        "love": 0.2804321050643921,
                        "sadness": 0.35413244366645813,
                        "surprise": 0.19576209783554077,
                        "neutral": 0.9979490041732788
                    }
                ]
            },
            "text": "I don't have this information.",
            "confidence": 0.5
        },
        {
            "skill_name": "program_y",
            "annotations": {
                "emotion_classification": [
                    {
                        "anger": 0.38495343923568726,
                        "fear": 0.22263416647911072,
                        "joy": 0.4415707588195801,
                        "love": 0.4192220866680145,
                        "sadness": 0.21526440978050232,
                        "surprise": 0.19943127036094666,
                        "neutral": 0.998478353023529
                    }
                ]
            },
            "text": "Hello, I'm a lunar assistant Deepy! How are you?",
            "confidence": 0.98,
            "ssml_tagged_text": "Hello, I'm a lunar assistant Deepy! How are you?"
        }
    ],
    "human_utt_annotations": {
        "sentseg": {
            "punct_sent": "hello!",
            "segments": [
                "hello!"
            ]
        },
        "spelling_preprocessing": "hello!"
    }
}
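For illustration, here is a minimal sketch of how a client might read such a response: the final reply text comes from the skill named in active_skill, while debug_output lists every candidate with its confidence. (In this sample the winning skill is simply the highest-confidence candidate; the agent's actual response selector may use more than raw confidence, so treat that as an assumption here.)

```python
import json

# A trimmed version of the sample response above.
sample = json.loads("""{
  "response": "Hello, I'm a lunar assistant Deepy! How are you?",
  "active_skill": "program_y",
  "debug_output": [
    {"skill_name": "harvesters_maintenance_skill",
     "text": "I don't have this information.", "confidence": 0.5},
    {"skill_name": "program_y",
     "text": "Hello, I'm a lunar assistant Deepy! How are you?", "confidence": 0.98}
  ]
}""")

# Pick the highest-confidence candidate and check it matches active_skill.
best = max(sample["debug_output"], key=lambda cand: cand["confidence"])
assert best["skill_name"] == sample["active_skill"]
print(sample["active_skill"], "->", sample["response"])
```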

Current Status

Deepy is, like all software, under development. However, you can pick any of the distributions (as we call them) from the /assistant_dists directory to use in your own system. We use one of these configs (currently /assistant_dists/deepy_ai_adv/) on our Demo Web Site. Development is ongoing, and we hope you will join the community and help out.

Deepy Documentation

Nearly all the documentation is in this wiki. It is a collection of pages with information on many topics relating to Deepy. When you are starting out, you should consult it often. Once you become more experienced, you can edit it, updating pages or adding new ones, just like Wikipedia.

Deepy Videos

Our team has made some videos about Deepy. You can watch them on YouTube.

Deepy on the Web

Deepy has appeared on the Web in various places. Here are the references: