Tool calling¶
Bubblescript tasks can be integrated with Large Language Models (LLMs) to enable dynamic function calling. When an LLM makes a tool call, it executes a Bubblescript task that has been declared with input/output schemas: the LLM provides the input parameters, and the task returns a result that the LLM uses to continue the conversation. This makes the bot capable of executing Bubblescript tasks as tools, automatically, during a conversation.
Declaring Tasks for LLM Tool Calls¶
To make a declared Bubblescript task available as an LLM tool, you need to:
- Declare the task with input/output schemas
- Define the task implementation
- Configure the LLM prompt to use the task as a tool
Here's an example:
# Declare the task with schemas
declare task get_weather,
  description: "Get weather forecast information for a city",
  in: @schemas.task_get_weather,
  out: %{type: "string", description: "The forecast"}

# Implement the task
task get_weather do
  _city = _args.city
  return random(["sun", "rain", "cloud", "tornado"])
end
Declared tasks used as LLM tools require a clear description explaining the task's purpose, an input schema (in:) specifying the expected parameters, and an output schema (out:) defining the return type.
Example schema definitions:
# schemas.yml
task_get_weather:
  type: object
  properties:
    city: { type: string }
  required: [city]
Configuring LLM Prompts with Tools¶
In your prompts.yml file, you can specify which tasks are available as tools for the LLM:
prompts:
  - id: get_weather
    provider: microsoft_openai
    temperature: 1
    text: |
      system: You are a weather bot. You can tell the user what weather it is in a city.
      [[transcript]]
    tools:
      - type: task
        task: get_weather
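A prompt does not have to expose a single tool: listing several declared tasks under tools: should make all of them available for the LLM to call. A sketch, assuming a second declared task named get_time (hypothetical, not part of this example):

tools:
  - type: task
    task: get_weather
  - type: task
    task: get_time  # hypothetical second declared task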
Using Tasks in Dialogs¶
When using LLM tool calls in dialogs, you handle the results as follows. A typical example is a weather bot that uses the get_weather task:
dialog __main__ do
  prompt dialog: llm_prompter

  dialog llm_prompter do
    _result = LLM.complete(@prompts.get_weather, %{})
    if _result.finish_reason == "tool_calls" do
      dispatch_llm_tasks(_result)
      goto llm_prompter
    end
    say _result.text
  end

  dialog trigger: :text, do: _nothing
end
The dispatch_llm_tasks/1 function is a platform-defined function that checks whether the LLM response contains tool calls and executes the corresponding declared tasks, feeding each task's result back into the conversation transcript so that the next LLM call can use it.
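Because dispatch_llm_tasks/1 executes whichever declared task the LLM called, adding another tool amounts to declaring the task and listing it under tools: in the prompt. A minimal sketch, assuming a hypothetical get_time task whose schema task_get_time is defined in schemas.yml analogous to task_get_weather:

declare task get_time,
  description: "Get the current local time in a city",
  in: @schemas.task_get_time,  # hypothetical schema, same shape as task_get_weather
  out: %{type: "string", description: "The local time"}

task get_time do
  _city = _args.city
  return "12:00"  # placeholder implementation
end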
Best Practices¶
Task declarations deserve care. Tasks should have clear, descriptive documentation that explains their purpose and usage; their input and output schemas should be properly defined to ensure type safety and validation; and each task should focus on a single responsibility rather than trying to do too many things at once.
Schema design is another critical aspect of implementing tool calls effectively. The schemas that define the task inputs and outputs should be as specific as possible to the use case, clearly indicating which fields are required versus optional. Each field in the schema should have a clear description that helps developers and LLMs understand its purpose and expected values. Well-designed schemas help prevent errors and make the tools more reliable.
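For instance, a more descriptive variant of the weather schema could document each field and mark one as explicitly optional (the unit field is a hypothetical addition, not part of the example below):

# schemas.yml
task_get_weather:
  type: object
  properties:
    city:
      type: string
      description: Name of the city to get the forecast for
    unit:
      type: string
      description: Optional temperature unit, either "celsius" or "fahrenheit"
  required: [city]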
Example Implementation¶
Here's a complete example showing how to implement a weather bot using LLM tool calls:
# bubblescript
dialog __main__ do
  prompt dialog: llm_prompter

  dialog llm_prompter do
    _result = LLM.complete(@prompts.get_weather, %{})
    if _result.finish_reason == "tool_calls" do
      dispatch_llm_tasks(_result)
      goto llm_prompter
    end
    say _result.text
  end

  dialog trigger: :text, do: _nothing
end

declare task get_weather,
  description: "Get weather forecast information for a city",
  in: @schemas.task_get_weather,
  out: %{type: "string", description: "The forecast"}

task get_weather do
  _city = _args.city
  return random(["sun", "rain", "cloud", "tornado"])
end
# prompts.yml
prompts:
  - id: get_weather
    provider: microsoft_openai
    temperature: 1
    text: |
      system: You are a weather bot. You can tell the user what weather it is in a city.
      [[transcript]]
    tools:
      - type: task
        task: get_weather
# schemas.yml
task_get_weather:
  type: object
  properties:
    city: { type: string }
  required: [city]
This implementation allows the LLM to understand when to use the weather tool, provide the correct input parameters, handle the response appropriately, and continue the conversation naturally.