Creating Interfaces
Interfaces allow third-party applications, messaging services, and custom UIs to interact with Newelle programmatically. This guide explains how to create custom interfaces.
Architecture Overview
Interfaces extend the Interface base class. There are two tiers:
- Interface — Minimal base class with lifecycle management (start/stop, state files, cross-process detection)
- ChatInterface — Extends Interface with per-user persistent chats, slash command dispatch, LLM-with-tools execution, and tool interaction tracking
Most chat-style interfaces (Telegram, web services) should extend ChatInterface.
classDiagram
Handler <|-- Interface
Interface <|-- ChatInterface
ChatInterface <|-- APIInterface
ChatInterface <|-- TelegramInterface
Interface <|-- GUIAPIInterface
class Interface {
+start()
+stop()
+is_running()
+is_locally_running()
+set_controller(controller)
+_write_state_file()
+_clear_state_file()
+check_external_running(key, path)
+stop_external(key, path)
}
class ChatInterface {
+get_or_create_chat(user_id)
+process_message(user_id, text, on_chunk, on_tool_event)
+try_handle_command(user_id, text)
+run_command(name, user_id, args)
+handle_tool_interaction(user_id, tool_name, result, interaction_id)
+resolve_pending_interaction(interaction_id, option_index)
}
Interface Base Class
Minimal implementation example:
from .handlers.interfaces.interface import Interface
from .handlers.extra_settings import ExtraSettings


class MyInterface(Interface):
    key = "my-interface"
    name = "My Custom Interface"

    @staticmethod
    def get_extra_requirements() -> list:
        return ["some-package"]

    def get_extra_settings(self) -> list:
        return [
            ExtraSettings.EntrySetting(
                key="api_key",
                title="API Key",
                description="API key for the service",
                default="",
                password=True,
            ),
            ExtraSettings.SpinSetting(
                key="port",
                title="Port",
                description="Port to listen on",
                default=8080,
                min=1,
                max=65535,
                step=1,
            ),
        ]

    def start(self):
        """Called when the interface is started."""
        if self.controller is None:
            return
        if not self.is_installed():
            print("Dependencies not installed")
            return
        # Start your server/service here
        self._write_state_file()
        print("Interface started")

    def stop(self):
        """Called when the interface is stopped."""
        # Stop your server/service here
        self._clear_state_file()
        print("Interface stopped")

    def _is_locally_running(self):
        """Return True if this instance is running in the current process."""
        return False  # Replace with an actual check
Interface Lifecycle
- start() — Called when the user enables the interface. Start your server/service here, then call self._write_state_file().
- stop() — Called when the user disables the interface. Call self._clear_state_file() to clean up.
- is_running() — Checks whether the interface is running (locally or in another process).
- is_locally_running() — Override _is_locally_running() to check the current process.
State Files & Cross-Process Detection
Newelle uses JSON state files to track interface processes across different instances:
# State file is automatically written to:
# {path}/interface_states/{key}.json
# File format:
{
    "pid": 12345,
    "key": "my-interface",
    "started_at": "2024-01-01T12:00:00"
}
# Check if running in another process:
Interface.check_external_running("my-interface", path)
# Stop an external interface:
Interface.stop_external("my-interface", path)
Stale state files are cleaned up automatically once their recorded process is no longer alive.
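The cross-process check boils down to reading the state file and testing whether its recorded PID still belongs to a live process. A minimal standalone sketch of that logic, matching the file format above (`pid_alive` and `check_state` are illustrative helpers, not Newelle's actual API):

```python
import json
import os
from pathlib import Path


def pid_alive(pid: int) -> bool:
    """Check whether a process with this PID exists (POSIX)."""
    try:
        os.kill(pid, 0)  # signal 0 probes existence without sending anything
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but is owned by another user
    return True


def check_state(state_dir: str, key: str) -> bool:
    """Return True if the interface's state file points at a live process."""
    state_file = Path(state_dir) / f"{key}.json"
    if not state_file.exists():
        return False
    state = json.loads(state_file.read_text())
    if pid_alive(state["pid"]):
        return True
    state_file.unlink()  # stale file: the owning process died, clean it up
    return False
```

The same probe-then-clean pattern is what makes `check_external_running()` safe to call even after a crash left a state file behind.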
ChatInterface — Chat-Style Interfaces
ChatInterface is the recommended base class for text-channel interfaces (bots, API endpoints, messaging apps). It provides:
- Per-user persistent chats — Each user gets a dedicated chat under a named folder
- Slash command dispatch — Built-in commands (/new, /models, /model, /profile, /tools, etc.)
- LLM execution with tools — Full agent pipeline with streaming and tool callbacks
- Tool interaction tracking — Pause/resume for tools that need user input
Subclass Requirements
Your subclass must set class-level attributes:
class MyBotInterface(ChatInterface):
    key = "my-bot"
    name = "My Bot Interface"

    # ChatInterface folder/chat config
    folder_name = "My Bot Chats"    # Folder name for this interface's chats
    folder_color = "#3584e4"        # Folder color
    folder_icon = "folder-symbolic"
    chat_name_prefix = "🤖 MyBot"   # Prefix for auto-generated chat names
Processing Messages
The main entry point is process_message():
def process_message(
    self,
    user_id,             # Identifies the per-user persistent chat
    text,                # The user's message text
    on_chunk=None,       # Optional: called with each text delta during streaming
    on_tool_event=None,  # Optional: called when a tool result arrives
) -> str:
    """Run text through the full LLM-with-tools pipeline. BLOCKING — call from a thread."""
Callbacks:
# on_chunk: receives incremental text as it is generated
def on_chunk(delta: str):
    # delta is the new text since the last callback
    send_to_user(delta)

# on_tool_event: receives structured events for tool results
def on_tool_event(event: dict):
    # event format depends on type:
    # {"type": "tool_result", "tool_name": "...", "display_text": "..."}
    # {"type": "tool_interaction", "tool_name": "...", "interaction_id": "...", "options": [...]}
    if event.get("type") == "tool_result":
        send_status(f"Ran tool: {event['tool_name']}")
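Because process_message() blocks until the full response is ready, a common pattern is to run it on a worker thread and drain the streamed chunks through a queue back on your I/O side. A standalone sketch of that pattern (generate_reply stands in for the blocking call and is purely illustrative):

```python
import queue
import threading


def generate_reply(text, on_chunk):
    """Stand-in for the blocking LLM call: streams a few chunks, returns the full text."""
    chunks = ["Hello", ", ", "world!"]
    for c in chunks:
        on_chunk(c)
    return "".join(chunks)


def ask(text):
    """Run the blocking call on a worker thread; yield chunks as they arrive."""
    q = queue.Queue()
    done = object()  # sentinel marking end of stream

    def worker():
        generate_reply(text, on_chunk=q.put)
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while (chunk := q.get()) is not done:
        yield chunk
```

The consumer can then forward each yielded chunk to the user (a Telegram draft edit, an SSE event, a terminal write) while the LLM is still generating.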
Tool Interaction Handling
When a tool requires user input (requires_interaction=True), the LLM thread pauses. You need to:
- Present options to the user (buttons, keyboard, etc.)
- Override handle_tool_interaction() to deliver those options:
def handle_tool_interaction(self, user_id, tool_name, result, interaction_id):
    """Called when a tool needs user input. Deliver options to the user."""
    options = self._pending_interactions.get(interaction_id, {}).get("options", [])
    # Send options to the user via your interface's UI
    for i, opt in enumerate(options):
        send_button(user_id, opt.title, callback_data=f"choice_{interaction_id}_{i}")
- Call resolve_pending_interaction() when the user chooses:
def on_user_choice(self, interaction_id: str, option_index: int):
    success = self.resolve_pending_interaction(interaction_id, option_index)
    if success:
        # The LLM thread unblocks and continues
        pass
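Conceptually, this pause/resume handshake can be modeled with one threading.Event per interaction: the tool's thread blocks on wait(), and the UI thread stores the chosen index before calling set(). A standalone sketch of that mechanism (an illustration of the idea, not Newelle's actual implementation):

```python
import threading


class InteractionBroker:
    """Blocks a worker thread until the user picks an option for an interaction."""

    def __init__(self):
        self._pending = {}  # interaction_id -> {"event": Event, "choice": int | None}

    def wait_for_choice(self, interaction_id, timeout=None):
        """Called on the tool/LLM thread. Blocks until resolved; returns the chosen index."""
        entry = {"event": threading.Event(), "choice": None}
        self._pending[interaction_id] = entry
        entry["event"].wait(timeout)
        self._pending.pop(interaction_id, None)
        return entry["choice"]

    def resolve(self, interaction_id, option_index):
        """Called on the UI thread. Returns False if the id is unknown or already resolved."""
        entry = self._pending.get(interaction_id)
        if entry is None:
            return False
        entry["choice"] = option_index
        entry["event"].set()
        return True
```

The timeout on wait() matters in practice: without one, a user who never answers would leave the LLM thread blocked forever.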
Built-in Commands
ChatInterface provides these commands out of the box:
| Command | Description |
|---|---|
| /start | Welcome message with available commands |
| /new [name] | Create a new persistent chat |
| /models | List available LLM providers and models |
| /model [provider:]model | Switch model |
| /profile <name> | Switch profile |
| /prompts | List available prompts |
| /tools [toggle <name>] | List/manage tools |
| /scheduled | View scheduled tasks |
| /skill <name> | Execute a skill command |
| /cd [path] | Change working directory |
| /list_chats | List all chats |
| /peek <chat_id> | Preview a chat |
| /resume <chat_id> | Switch to a different chat |
| /autoexec | Toggle auto command execution |
| /option <n> | Choose a pending interaction option |
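The contract of try_handle_command() is what makes the table above composable with process_message(): it returns a response string when the text is a slash command, and None otherwise, so your loop can fall through to the LLM. A standalone sketch of that dispatch shape (the command table here is illustrative, not Newelle's):

```python
def try_handle_command(text, commands):
    """Return the command's response, or None if text is not a slash command."""
    if not text.startswith("/"):
        return None
    name, _, args = text[1:].partition(" ")
    handler = commands.get(name)
    if handler is None:
        return f"Unknown command: /{name}"
    return handler(args.strip())


# Illustrative command table
COMMANDS = {
    "start": lambda args: "Welcome! Try /new or /models.",
    "new": lambda args: f"Created chat {args or '(unnamed)'}",
}
```

A caller checks the return value: non-None means the text was consumed as a command; None means it should go to the LLM pipeline instead.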
Per-User Chat Management
# Get or create a chat for a user (auto-creates folder if needed)
chat_id = self.get_or_create_chat(user_id)
# The chat is saved automatically and persists across restarts
Full Example: A Simple CLI Chat Interface
from .handlers.interfaces.chat_interface import ChatInterface
import threading


class CLIChatInterface(ChatInterface):
    key = "cli-chat"
    name = "CLI Chat Interface"
    folder_name = "CLI Chats"
    folder_color = "#33d17a"
    chat_name_prefix = "⌨️ CLI"

    def __init__(self, settings, path):
        super().__init__(settings, path)
        self._running = False
        self._thread = None

    def start(self):
        if self.controller is None:
            return
        self._running = True
        self._thread = threading.Thread(target=self._run_loop, daemon=True)
        self._thread.start()
        self._write_state_file()
        print("CLI Chat interface started")

    def _run_loop(self):
        print("Type /help for commands, /quit to exit")
        while self._running:
            try:
                text = input("> ")
                if text == "/quit":
                    break
                if text.startswith("/"):
                    resp = self.try_handle_command("cli-user", text)
                    if resp:
                        print(resp)
                else:
                    print("Thinking...", end="\r")
                    result = self.process_message("cli-user", text)
                    print(result)
            except (EOFError, KeyboardInterrupt):
                break

    def stop(self):
        self._running = False
        self._clear_state_file()
        print("CLI Chat interface stopped")

    def _is_locally_running(self):
        return self._running and self._thread is not None and self._thread.is_alive()
Adding Interfaces as Extensions
Extensions can provide interface handlers:
from .extensions import NewelleExtension
from .handlers.descriptors import HandlerDescription


class MyCustomExtension(NewelleExtension):
    def get_interface_handlers(self) -> list:
        return [
            HandlerDescription(
                key="my-interface",
                title="My Interface",
                description="Description of my interface",
                handler_class=MyInterfaceHandler,
            )
        ]
Or if contributing to Newelle directly, add the handler to AVAILABLE_INTERFACES in constants.py.
Built-in Interfaces Reference
OpenAI Compatible API (api)
Exposes the current LLM as an OpenAI-compatible API server (FastAPI + uvicorn).
Endpoints:
- POST /v1/chat/completions — Standard chat completions
- POST /v2/chat/completions — Agent endpoint with tools, commands, per-user persistent chat
- GET /v1/models — List available models
- POST /v1/audio/speech — TTS endpoint
- POST /v1/audio/transcriptions — STT endpoint
- POST /v1/embeddings — Embedding endpoint
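Because the v1 endpoint is OpenAI-compatible, any OpenAI-style client can talk to it. A hedged sketch using only the standard library (the host, port, and model name are assumptions; adjust them to match your configured server):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumed host and port


def build_request(prompt, model="newelle"):
    """Build a minimal OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt, model="newelle"):
    """POST the payload to /v1/chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The /v2 agent endpoint accepts the same shape but routes through the tools and per-user chat pipeline on the server side.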
Newelle GUI API (gui-api)
Full REST API for building alternative UIs or integrating deeply with Newelle.
Key endpoints:
- GET/POST /api/chats — Chat management
- GET /api/chats/{id}/history — Message history
- POST /api/messages/run-llm — Run LLM with tools
- GET/POST /api/prompts — Prompt management
- GET/POST /api/tools — Tool management
- GET/POST /api/profiles — Profile management
- GET/PATCH /api/settings — Settings management
- GET/POST /api/folders — Folder management
- POST /api/interfaces/{key}/start — Interface control
- GET /api/chats/{id}/stream — SSE streaming with tool support
- POST /api/tts/play — TTS playback
- POST /api/stt/recognize — STT recognition
- POST /v1/chat/completions — OpenAI-compatible endpoint
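A thin client for the GUI API is mostly URL plumbing around these endpoints. A sketch of such a wrapper (the base URL and port are assumptions; only the endpoint paths come from the list above):

```python
import json
import urllib.request


class NewelleClient:
    """Minimal wrapper over a few of the GUI API endpoints listed above."""

    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def url(self, path):
        return f"{self.base_url}{path}"

    def _request(self, method, path, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(
            self.url(path), data=data, method=method,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def list_chats(self):
        return self._request("GET", "/api/chats")

    def chat_history(self, chat_id):
        return self._request("GET", f"/api/chats/{chat_id}/history")

    def run_llm(self, payload):
        return self._request("POST", "/api/messages/run-llm", payload)
```

The SSE streaming endpoint (GET /api/chats/{id}/stream) needs an incremental reader rather than json.load, so it is omitted here.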
Telegram Bot (telegram)
Chat with Newelle via Telegram. Supports:
- Text messaging with streaming
- Voice message transcription
- Photo/image analysis (via vision-capable LLMs)
- Inline keyboard for tool interactions
- All ChatInterface built-in commands
- Streaming via message drafts or edit-in-place mode
Tips
- Thread safety: process_message() is blocking — always call it from a worker thread.
- Controller access: self.controller gives you access to LLM handlers, tools, chats, profiles, and settings.
- Error reporting: Set self._error to surface errors in the UI.
- Dependencies: Use get_extra_requirements() to declare pip packages needed by your interface.