ChatGPT support

The DialoX platform has built-in support for ChatGPT and other large language models.

Prompt example

By creating a script of the GPT Prompt type, it is possible to define one or more prompts that can be used at runtime in the bot.

At a minimum, a prompt file looks like this:

  - id: rhyme
    text: |
      make a sentence that rhymes with: {{ text }}

In Bubblescript this exposes a constant called @prompts.rhyme, which can then be used like this:

dialog main do
  ask "Enter a sentence and I will make it rhyme for you"

  _result = GPT.complete(@prompts.rhyme, text: answer.text)
  say _result.text
end

resulting in a conversation like this:

bot:  Enter a sentence and I will make it rhyme for you
user: I want to fly away!
bot:  Today is Sunday Funday, let's go play!

Full prompt YAML

The full specification of a prompt YAML file looks like this:

  - # exposed in Bubblescript as @prompts.[id], so @prompts.summarize in this case:
    id: summarize

    # English label, used when the prompt is included in the CMS or Inbox widget
    label: Summarize

    # LLM provider, currently 'openai' is the only supported one.
    provider: openai

    # the LLM model to use. The `/v1/chat/completions` OpenAI endpoint is
    # used to execute the prompt. For supported models, see the OpenAI documentation.
    model: gpt-3.5-turbo

    # the actual text of the prompt. Can be a simple string or a $i18n structure, so
    # that the prompt is translated for the conversation's locale. The prompt text is
    # actually a Liquid template, so `{{ }}` bindings can be specified which then
    # need to be passed in when calling `GPT.complete()`.
    text:
      $i18n: true
      nl: |
        system: Gegeven de volgende tekst, maak een korte en bondige samenvatting die
        alleen de meest noodzakelijke punten teruggeeft. Gebruik hooguit 50 woorden:

        user: {{text}}
      en: |
        system: Given the following text, create a short summary that only highlights the
        most relevant parts of the text. Use at most 50 words:

        user: {{text}}

    # additional request parameters passed to the OpenAI /v1/chat/completions endpoint.
    # See the OpenAI API reference for possible values.
      temperature: 1.2

Executing prompts

By executing the GPT.complete(prompt, bindings) function, a call to the GPT API is made with the given prompt and its bindings. The prompt argument typically comes from a constant defined in a prompt YAML file, for instance @prompts.summarize.

The bindings argument is a map or keyword list that must contain the bindings the prompt requires; in the summarize example there is only one binding, named text. So a call to that prompt looks like this:

  _result = GPT.complete(@prompts.summarize, text: "this is a long article ...")

The full result of the GPT.complete call is a map which contains the following keys:

  • text - The output text that GPT produced
  • json - A deserialized version of any JSON found in the text; the runtime detects whether the result contains JSON and, if so, parses it. The JSON payload may be surrounded by arbitrary other text.
  • usage - The total number of tokens that were used for this API call
  • request_time - The number of milliseconds this request took
  • raw - The raw OpenAI API response
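
For example, the individual fields of the result map can be used like this (a minimal sketch that reuses the summarize prompt from above; the dialog name is made up, and Elixir-style string interpolation is assumed to be available):

dialog summarize_example do
  ask "Paste the text you want summarized"

  _result = GPT.complete(@prompts.summarize, text: answer.text)

  # the model's output text
  say _result.text

  # bookkeeping fields from the same result map
  say "(used #{_result.usage} tokens in #{_result.request_time} ms)"
end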

User / bot / assistant roles

The prompt text can contain user:, assistant: or system: strings, which are used to determine the different parts of the prompt (e.g. to construct the messages part of the OpenAI request payload).
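
As an illustration, a prompt that uses all three roles could be defined like this (a sketch: the follow_up id and its question and previous_answer bindings are hypothetical, and the assistant: part only shows how an earlier model reply would be encoded):

  - id: follow_up
    text: |
      system: You are a friendly assistant that answers in one short sentence.

      user: {{ question }}

      assistant: {{ previous_answer }}

      user: Now rephrase that answer so a five year old can understand it.

Each role prefix presumably starts a new message, so the prompt above would be sent as separate entries in the messages array.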

Automatic bindings

Some prompt bindings are done automatically.

In the case of Bubblescript GPT.complete calls, the following bindings are filled automatically:

  • locale - The conversation's locale
  • transcript - The last 5 turns of the bot / user. This is typically used to make a generic chatbot that responds to the previous conversation in a natural way.
  • bot - The metadata of the bot, for instance {{ bot.title }} is exposed.
  • conversation - The metadata of the conversation is exposed in the same way.

The transcript binding is an array binding and needs to be specified as [[ transcript ]] (with square brackets), on a line by itself.
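
Put together, a generic chatbot prompt that relies only on these automatic bindings could look like this (a sketch; the small_talk id and the wording are made up, but {{ bot.title }}, {{ locale }} and [[ transcript ]] are the automatic bindings listed above):

  - id: small_talk
    text: |
      system: You are {{ bot.title }}, a friendly assistant. Answer in the language
      that belongs to the locale {{ locale }} and continue the conversation below in
      a natural way.

      [[ transcript ]]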


For every GPT.complete call, a charge event (of type gpt.complete) is created and is taken into account in the customer's billing cycle.