Creating Tools

Tools in Newelle allow the LLM to execute functions and display custom widgets. They are the primary way to extend Newelle's capabilities beyond just text generation.

Tool System Overview

The tools system has several key classes:

  • Tool - Defines a tool that the LLM can invoke
  • ToolResult - The result returned by a tool execution (includes output text, widget, and interaction handling)
  • ToolRegistry - Manages all registered tools
  • InteractionOption - Defines an option for tools that require user interaction
  • Command - A slash command the user can execute from the chat input
  • create_io_tool() - Helper to quickly create simple input→output tools

ToolResult

ToolResult is what your tool function must return. It carries the output for the LLM and optionally a GTK widget to display.

from .tools import ToolResult

class ToolResult:
    output: Any = None           # Text sent to the LLM (set None for widget-only tools)
    widget: Any = None           # Optional GTK widget shown in the chat
    is_cancelled: bool = False   # Whether execution was cancelled
    requires_interaction: bool = False  # Whether user input is needed
    interaction_options: list    # InteractionOption list for user choices
    display_text: str | None     # Text shown to the user instead of the raw output

Key Methods

result = ToolResult()

# Set the GTK widget to display in chat
result.set_widget(my_widget)

# Set the text output given to the LLM (call this to unblock the tool)
result.set_output("The command returned: ...")

# Set display text shown to the user (e.g., formatted code)
result.set_display_text("```bash\nls -la\n```")

# Set interaction options for tools that need user decisions
result.set_intreaction_options([
    InteractionOption("Accept", lambda: execute()),
    InteractionOption("Skip", lambda: skip())
])

# Cancel the tool execution
result.cancel()

# Block until set_output() is called, then return the output value
output = result.get_output()

get_output() blocks the calling thread until set_output() is called (or the tool is cancelled). This is useful when you need to wait for the tool result from another thread.

Critical: always call set_output(), even if the value is None, or the tool will block forever. If producing the output takes time, do the work on another thread and call set_output() when it finishes.
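Conceptually, the blocking behavior can be pictured with a plain threading.Event. The class below is only an illustrative stand-in for this pattern, not Newelle's actual ToolResult implementation:

```python
import threading

class BlockingResult:
    """Illustration only: mimics how ToolResult.get_output() can block
    until set_output() is called from another thread."""
    def __init__(self):
        self._event = threading.Event()
        self._output = None

    def set_output(self, value):
        self._output = value
        self._event.set()  # unblock any waiting get_output() call

    def get_output(self):
        self._event.wait()  # block until set_output() runs
        return self._output

result = BlockingResult()
# Produce the output on a worker thread so the caller is not stalled
threading.Thread(target=lambda: result.set_output("done")).start()
print(result.get_output())  # blocks briefly, then prints "done"
```

This is why forgetting set_output() stalls the tool: the waiting side never gets woken up.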

InteractionOption

For tools that need user confirmation or choice:

from .tools import InteractionOption

InteractionOption(
    title="Accept",       # Display name
    callback=my_callback  # Function to run when selected
)

When requires_interaction is True on a ToolResult, the chat UI will show the interaction options and wait for the user to choose before continuing.
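The interaction flow can be sketched with plain callables. The InteractionOption below is a simplified stand-in for the Newelle class, and the "selection" step is a hypothetical substitute for the real chat UI:

```python
# Illustration only: choosing an option simply runs its callback.
class InteractionOption:
    def __init__(self, title, callback):
        self.title = title
        self.callback = callback

chosen = []
options = [
    InteractionOption("Accept", lambda: chosen.append("accepted")),
    InteractionOption("Skip", lambda: chosen.append("skipped")),
]

# The chat UI would present the titles and invoke the selected callback:
selected = options[0]  # pretend the user clicked "Accept"
selected.callback()
print(chosen)  # ['accepted']
```

In the real flow, the callback is typically what ends up calling set_output() (or cancel()) on the pending ToolResult.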


Tool

Defines a tool the LLM can call. You can either instantiate it directly or use the create_io_tool() helper.

class Tool:
    def __init__(
        self,
        name: str,                  # Name the LLM uses to call this tool
        description: str,           # What the tool does (given to the LLM)
        func: Callable,             # Function that executes the tool
        schema: Dict = None,        # JSON schema of arguments (auto-detected if None)
        run_on_main_thread: bool = False,  # Run on main thread (default: worker thread)
        title: str = None,          # Display name in settings
        prompt_editable: bool = True,      # Whether user can edit the tool prompt
        restore_func: Callable = None,     # Function called when restoring from history
        default_on: bool = True,           # Enabled by default
        tools_group: str = None,           # Group name for settings organization
        icon_name: str = None,             # Icon shown in settings
        default_lazy_load: bool = False,   # Don't send schema to LLM by default
    ):

Schema Auto-Detection

If you don't provide a schema, it's auto-detected from your function's type hints:

def my_tool(path: str, count: int, verbose: bool = False) -> ToolResult:
    ...

# Auto-generates:
# {
#   "type": "object",
#   "properties": {
#     "path": {"type": "string"},
#     "count": {"type": "integer"},
#     "verbose": {"type": "boolean"}
#   },
#   "required": ["path", "count"]
# }

Note: parameters named self, msg_uuid, tool_uuid, or chat_id are automatically filtered out.
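A simplified sketch of what this auto-detection amounts to (Newelle's actual implementation may differ): map Python type hints to JSON-schema types, skip the filtered parameter names, and mark parameters without defaults as required.

```python
import inspect

TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}
FILTERED = {"self", "msg_uuid", "tool_uuid", "chat_id"}

def auto_schema(func):
    """Build a JSON-schema-like dict from a function's signature."""
    props, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        if name in FILTERED:
            continue
        props[name] = {"type": TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def my_tool(path: str, count: int, verbose: bool = False):
    ...

schema = auto_schema(my_tool)
# schema["required"] == ["path", "count"]; "verbose" is optional
```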

Tool Lazy Loading

When default_lazy_load is True, the tool's full schema is NOT sent in the system prompt. Instead, only the name and description are included. The LLM must call tool_search to retrieve the full schema before using the tool. This reduces prompt size for tools that are rarely used.

Users can also toggle lazy loading per tool in settings.
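The prompt-size saving is easy to see by comparing what each mode would send. This is an illustrative sketch with a hypothetical tool definition, not Newelle's exact serialization:

```python
import json

full_def = {
    "name": "render_chart",  # hypothetical tool, for illustration
    "description": "Render a chart from CSV data",
    "parameters": {  # full JSON schema: sent only when NOT lazy-loaded
        "type": "object",
        "properties": {"csv": {"type": "string"},
                       "kind": {"type": "string"},
                       "title": {"type": "string"}},
        "required": ["csv", "kind"],
    },
}
# Lazy loading keeps only name and description in the system prompt:
lazy_def = {k: full_def[k] for k in ("name", "description")}

saved = len(json.dumps(full_def)) - len(json.dumps(lazy_def))
# saved > 0: the schema is fetched later via tool_search if needed
```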

Restore Functions

A restore_func is called when a message using this tool is loaded from chat history. It should recreate the tool's widget with the saved output:

def restore_my_tool(self, tool_uuid: str, path: str):
    output = self.ui_controller.get_tool_result_by_id(tool_uuid)
    widget = MyWidget(output)
    result = ToolResult()
    result.set_widget(widget)
    result.set_output(output)
    return result

create_io_tool() — Simple Tools

For simple tools that take arguments and return text (no widget needed):

from .tools import create_io_tool

def read_file(self, path: str) -> str:
    with open(path, "r") as f:
        return f.read()

def get_tools(self) -> list:
    return [
        create_io_tool(
            name="read_file",
            description="Read a file and return its contents",
            func=self.read_file,
            title="Read File",
            default_on=True,
            tools_group="Files",
            icon_name="document-open-symbolic",
            default_lazy_load=False,
        ),
    ]

This wraps your function so it runs on a background thread and returns a ToolResult automatically.


ToolRegistry

ToolRegistry manages all tools and generates the prompt for the LLM.

from .tools import ToolRegistry

registry = ToolRegistry()
registry.register_tool(my_tool)    # Add a tool
registry.remove_tool("tool_name")  # Remove a tool
tool = registry.get_tool("name")   # Get a tool by name
all_tools = registry.get_all_tools()  # List all tools

Tool Prompt Generation

# Generate prompt with enabled tools
prompt = registry.get_tools_prompt(enabled_tools_dict, tools_settings)
# Returns: "<tools>\n[{...tool definitions...}]\n</tools>"
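The shape of that prompt can be sketched as a JSON list of tool definitions wrapped in <tools> tags. The helper and definition below are illustrative assumptions, not the registry's real code:

```python
import json

def build_tools_prompt(tool_defs):
    """Illustration of the prompt shape: a JSON list of tool
    definitions wrapped in <tools> tags (details may differ)."""
    return "<tools>\n" + json.dumps(tool_defs) + "\n</tools>"

prompt = build_tools_prompt([
    {"name": "read_file",
     "description": "Read a file and return its contents",
     "parameters": {"type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"]}},
])
# prompt begins with "<tools>" and ends with "</tools>"
```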

Command — Slash Commands

Commands are user-facing slash commands (like /help, /new) that can be typed in the chat input.

from .tools import Command

class Command:
    def __init__(
        self,
        name: str,              # Command name (e.g., "help")
        description: str,       # What the command does
        func: Callable,         # Function to execute
        icon_name: str = None,  # Icon for the command
        schema: Dict = None,    # Argument schema (auto-detected)
        restore_func: Callable = None,  # Restore from history
    ):

Commands can be added by extensions via get_commands():

def get_commands(self) -> list[Command]:
    return [
        Command("mycommand", "Does something useful", self.do_something, icon_name="star-symbolic"),
    ]

The @tool Decorator

You can also register tools using a decorator:

from .tools import tool

@tool(
    name="my_tool",
    description="Does something useful",
    title="My Tool",
    default_on=True,
    icon_name="applications-utilities-symbolic"
)
def my_tool(path: str) -> ToolResult:
    result = ToolResult()
    result.set_output(f"Processed: {path}")
    return result

Tool Groups

Tools can be organized into groups for better settings UI:

Tool(
    name="execute_command",
    description="Execute a bash command",
    func=self.execute_command_widget,
    tools_group="Shell",  # Groups in settings UI
)

create_io_tool(
    "text_to_speech", "...",
    self.text_to_speech,
    tools_group="Audio",
)

Full Example: A Tool with Widget and Interaction

from .tools import Tool, ToolResult, InteractionOption
from .extensions import NewelleExtension
from gi.repository import Gtk

class FileOpsExtension(NewelleExtension):
    id = "fileops"
    name = "File Operations"

    def delete_file_tool(self, path: str):
        result = ToolResult(requires_interaction=True)

        # Create a widget showing what will be deleted
        label = Gtk.Label(label=f"Delete {path}?")
        result.set_widget(label)

        # Set interaction options
        result.set_intreaction_options([
            InteractionOption("Confirm Delete", lambda: self._really_delete(path, result)),
            InteractionOption("Cancel", lambda: result.cancel()),
        ])

        # Set display text
        result.set_display_text(f"⚠️ Delete: `{path}`")

        # Don't call set_output yet — wait for user interaction
        return result

    def _really_delete(self, path, result):
        import os
        os.remove(path)
        result.set_output(f"Deleted {path}")

    def get_tools(self) -> list:
        return [
            Tool(
                name="delete_file",
                description="Delete a file at the given path (asks for confirmation)",
                func=self.delete_file_tool,
                title="Delete File",
                icon_name="user-trash-symbolic",
                tools_group="Files",
            ),
        ]

Tips

  • Thread safety: Tool functions run on worker threads by default. Set run_on_main_thread=True if you need to manipulate GTK widgets directly.
  • Updating tools: Call self.ui_controller.require_tool_update() to refresh the tool list after adding/removing tools dynamically.
  • Getting message context: Use self.ui_controller.get_current_message_id() inside a tool call.
  • TTY-like output: For long command output, consider using CopyBox with execution_request=True (see default_tools.py for examples).