Building a Responsive `Textual` Chat UI with Long-Running Processes
Note: the code for this post is available on GitHub.
It was one of those Wednesday evenings that blend into night without you noticing. The office had emptied hours ago, but I remained at my desk, illuminated by the soft glow of my terminal and an almost empty cup of coffee. The task seemed simple when it landed in my inbox that morning: “Create a terminal-based chat interface for our data processing team.”
Our data scientists had been using a powerful but slow natural language processing system. While it produced excellent results, it was synchronous, took anywhere from 2 to 10 seconds to process each message, and provided no feedback during processing. The team needed a more user-friendly way to interact with it, preferably through a terminal interface since that’s where they spent most of their time.
My first attempt was straightforward — create a simple input/output terminal UI. Type a message, wait for the response. But during the demo with Sarah, one of our senior data scientists, her feedback was clear: “It feels… frozen. I can’t tell if it’s working or stuck. And why can’t I see what’s happening while it thinks?”
She was right. The interface would become completely unresponsive during processing. No cursor blinking, no indication of progress, just an awkward silence that felt like an eternity in computer time. Worse yet, if you tried to type during processing, your keystrokes would suddenly appear all at once when the response finally came back. It was like talking to someone who not only takes long pauses but also puts their hand over your mouth while they’re thinking.
That evening, as I watched the cursor blink in my terminal, I realized I needed to solve three distinct problems:
- Keep the UI responsive during long-running operations
- Provide real-time feedback about what’s happening behind the scenes
- Maintain a clean separation between the processing logic and the user interface
The solution began to take shape when I remembered a talk from a recent conference about the Observer pattern. The speaker had used an interesting analogy: “It’s like having a group chat where one person can send updates, and everyone else just listens and reacts.” That’s exactly what I needed — a way for my processing system to broadcast its status while it worked, without caring who was listening.
But there was a catch. In the world of user interfaces, thread safety is like gravity — ignore it, and things will eventually come crashing down. Every UI framework I’d ever worked with had the same golden rule: don’t update the UI from background threads. It’s like trying to help someone solve a puzzle while they’re still arranging the pieces — it just leads to chaos.
As I reached for my coffee cup (now completely empty), the pieces started coming together. I needed:
- A background worker to handle the long-running process
- An event system to broadcast status updates
- A thread-safe way to update the UI
- A clean architecture that wouldn’t make future maintainers curse my name
What followed was a journey into the heart of thread management, event systems, and UI architecture. A journey that taught me valuable lessons about how these concepts work together in real-world applications. But most importantly, it led to a solution that made Sarah smile during our next demo: “Now I can actually see it thinking!”
Let me show you how we got there…
Choosing Our Tools
After that late-night realization, the next morning was all about tool selection. We needed something lightweight yet powerful for our terminal interface. Textual immediately caught my attention — a modern TUI (Text User Interface) framework that brings React-like components to the terminal. For our internal tooling and proof of concept, it was perfect: quick to prototype, visually appealing, and most importantly, it had a robust event system and worker management.
The Observer Pattern: A Perfect Fit
Here’s where things got interesting. Our NLP processing system was like a black box — you feed it text, it thinks for a while, and eventually returns a response. But we needed to peek inside that box. The Observer pattern provided exactly that window:
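Two names recur through all the snippets that follow: a `ChatEvent` enum and a `ChatCallback` observer interface. Here is a minimal sketch of both, reconstructed from how the later code uses them (the repo's actual definitions may differ in detail):

```python
from abc import ABC, abstractmethod
from enum import Enum


class ChatEvent(Enum):
    """Lifecycle events the bot broadcasts while it works."""
    START_PROCESSING = "start_processing"
    THINKING = "thinking"
    PROCESSING_COMPLETE = "processing_complete"
    ERROR = "error"


class ChatCallback(ABC):
    """Observer interface: anything that wants bot updates implements this."""

    @abstractmethod
    def on_event(self, event: ChatEvent, message: str) -> None:
        ...
```

Any number of observers can implement `on_event`; the bot never needs to know who they are.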
The beauty of this pattern emerged in the code. Our `SimpleBot` became the subject (observable), emitting events as it processed:
```python
import random
import time
from typing import List


class SimpleBot:
    def __init__(self) -> None:
        self.callbacks: List[ChatCallback] = []  # Our observers

    def notify_callbacks(self, event: ChatEvent, message: str) -> None:
        for callback in self.callbacks:
            callback.on_event(event, message)

    def process_message(self, message: str) -> str:
        # Notify: starting
        self.notify_callbacks(ChatEvent.START_PROCESSING, "Starting…")
        # Simulate processing steps
        time.sleep(random.uniform(0.5, 2.0))
        self.notify_callbacks(ChatEvent.THINKING, "Analyzing input…")
        # More processing…
        response = f"Processed: {message}"  # placeholder response for the sketch
        self.notify_callbacks(ChatEvent.PROCESSING_COMPLETE, "Done!")
        return response
```
We created two types of observers:
- A TUI callback to update the interface
- A file logger for debugging
```python
class TuiCallback(ChatCallback):
    # Constructor inferred from usage; the real signature may differ
    def __init__(self, app, message_container) -> None:
        self.app = app
        self.message_container = message_container

    def on_event(self, event: ChatEvent, message: str) -> None:
        def update_ui() -> None:
            if event == ChatEvent.THINKING:
                self.message_container.mount(
                    ChatMessage("Bot", f"💭 {message}")
                )
            # Handle other events...

        # Crucial: update the UI safely from the main thread
        self.app.call_from_thread(update_ui)
```
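The file logger is the simpler of the two observers: it runs happily on the worker thread, since it never touches the UI. A sketch of what ours looked like (the class name, log format, and the single-member `ChatEvent` stand-in are illustrative, not the repo's exact code):

```python
import datetime
from enum import Enum


class ChatEvent(Enum):  # stand-in for the real enum; one member is enough here
    THINKING = "thinking"


class FileLoggerCallback:
    """Observer that appends every bot event to a file for later debugging.

    In the real app this implements the ChatCallback interface; it is kept
    dependency-free here so the sketch stands on its own.
    """

    def __init__(self, path: str = "chat_events.log") -> None:
        self.path = path

    def on_event(self, event: ChatEvent, message: str) -> None:
        # Plain append; no thread-safety dance needed since no UI is involved
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(f"{stamp} {event.value}: {message}\n")
```

Having a timestamped event trail turned out to be invaluable when debugging why a particular request stalled.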
The TUI Dance: Threading and Event Handling
This is where Textual’s architecture really shone. It follows a principle similar to other UI frameworks: all UI updates must happen on the main thread. But it provides elegant mechanisms to handle this:
The key was using Textual’s worker system for the long-running process while keeping UI updates thread-safe:
```python
from textual.worker import Worker, WorkerState  # imports shown for clarity


def process_message_in_background(self, user_input: str) -> Worker:
    # Create a worker for background processing
    worker = self.run_worker(
        lambda: self.bot.process_message(user_input),
        name="bot_processing",
        thread=True,
    )
    return worker


def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
    """Handle worker completion on the main thread."""
    if event.worker.state == WorkerState.SUCCESS:
        # Safe to update the UI - we're on the main thread
        message_container = self.query_one("#message-container")
        message_container.mount(ChatMessage("Bot", event.worker.result))
```
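Stripped of the Textual specifics, the whole flow can be exercised in plain Python. This is a framework-free sketch of the interaction, not the app's code: `MiniBot` condenses `SimpleBot`, a bare `threading.Thread` stands in for Textual's worker, and joining the thread stands in for reacting to `Worker.StateChanged`:

```python
import threading
from enum import Enum


class ChatEvent(Enum):  # stand-in for the real enum
    START_PROCESSING = "start_processing"
    PROCESSING_COMPLETE = "processing_complete"


class MiniBot:
    """Condensed SimpleBot: notifies observers while it works."""

    def __init__(self) -> None:
        self.callbacks = []  # observers are plain callables here

    def notify(self, event: ChatEvent, message: str) -> None:
        for callback in self.callbacks:
            callback(event, message)

    def process_message(self, message: str) -> str:
        self.notify(ChatEvent.START_PROCESSING, "Starting")
        response = message.upper()  # placeholder "processing"
        self.notify(ChatEvent.PROCESSING_COMPLETE, "Done")
        return response


events = []
bot = MiniBot()
bot.callbacks.append(lambda event, message: events.append(event))

result = {}
worker = threading.Thread(
    target=lambda: result.setdefault("response", bot.process_message("hello"))
)
worker.start()
worker.join()  # the TUI instead reacts to Worker.StateChanged
```

The observers fire from the worker thread while it runs; the result is picked up once the worker finishes, which is exactly the division of labor the Textual version enforces.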
This architecture gave us several benefits:
1. Separation of Concerns: Processing logic, event handling, and UI updates are cleanly separated
2. Thread Safety: All UI updates happen on the main thread
3. Responsive Interface: Background processing never blocks the UI
4. Real-time Feedback: Users see what’s happening at each step
5. Extensible Design: Easy to add new observers for different purposes
Resolution: When Theory Meets Practice
Our final solution elegantly brought together several design patterns and best practices:
Performance Improvements
The real victory came in the numbers:
- UI Responsiveness: zero UI freezing, even during heavy processing
- User Feedback: average 200 ms latency for status updates
- Memory Footprint: minimal overhead from the observer pattern
- CPU Usage: efficient thread utilization without overwhelming the system
User Satisfaction
Sarah’s team’s feedback was enlightening:
“It’s like night and day. Before, we’d click and hope. Now we can actually see the system thinking. It’s not faster, but it feels faster because we know what’s happening.”
The key improvements they noted:
- Clear processing status indicators
- Ability to queue multiple requests
- Confidence in system operation
- Improved debugging through logged events
Key Learnings
1. Pattern Selection Matters
```python
# Before: tightly coupled, blocking implementation
def process_message(self, message: str) -> str:
    result = self.bot.think()  # UI freezes here
    self.update_ui(result)     # Direct UI update
    return result


# After: decoupled, responsive implementation
def process_message(self, message: str) -> str:
    self.notify_callbacks(ChatEvent.START_PROCESSING)
    result = self.bot.think()
    self.notify_callbacks(ChatEvent.PROCESSING_COMPLETE)
    return result
```
2. Thread Safety is Non-Negotiable
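The rule generalizes beyond Textual: background threads never touch the UI directly; they hand updates to the main thread. Textual's `call_from_thread` does this hand-off for you, but the underlying idea can be shown framework-free with a queue drained on the main thread (the "UI" here is just a list, purely for illustration):

```python
import queue
import threading

ui_updates = queue.Queue()


def worker() -> None:
    # Background thread: never touches the "UI", only enqueues updates
    ui_updates.put("💭 Analyzing input…")
    ui_updates.put("Done!")


t = threading.Thread(target=worker)
t.start()
t.join()

# Main thread: drains the queue and applies the updates in order
rendered = []
while not ui_updates.empty():
    rendered.append(ui_updates.get())
```

Whether it is a queue, a message pump, or `call_from_thread`, the shape is the same: one thread owns the UI, everyone else sends it messages.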
3. Event-Driven Architecture Benefits
- Loose coupling between components
- Easy to add new features (just add observers)
- Clear flow of information
- Simplified debugging
4. Practical Tips We Learned
Worker Management:
```python
# Let the framework handle the worker lifecycle
worker = self.run_worker(
    lambda: self.bot.process_message(user_input),
    name="bot_processing",
    thread=True,
)
```
Event Granularity:
```python
from enum import Enum


class ChatEvent(Enum):
    START_PROCESSING = "start_processing"
    THINKING = "thinking"
    PROCESSING_COMPLETE = "processing_complete"
    ERROR = "error"
```
5. Broader Applications
This pattern combination (Observer + Worker + Event-Driven UI) works well for:
- File processing applications
- Data analysis tools
- API integration interfaces
- Batch processing systems
- Any long-running operation needing user feedback
Final Thoughts
The journey from a blocking, unresponsive interface to a smooth, event-driven system taught us valuable lessons about modern UI development:
1. User Experience: Sometimes it’s not about making things faster, but about making them feel faster through better feedback.
2. Architecture: Clean separation of concerns makes systems more maintainable and extensible.
3. Patterns: Design patterns aren’t just theoretical concepts — they solve real problems when applied thoughtfully.
4. Threading: Respect thread safety, but don’t let it limit your architecture — use it to guide your design.
As Sarah put it in our final review:
“It’s not just a chat interface anymore. It’s a template for how we want to build all our internal tools.”
The next time you face a similar challenge — a long-running process that needs to play nice with a responsive UI — remember: it’s not about eliminating the waiting, it’s about making the waiting informative and keeping your application responsive throughout.
The code for this project is available on GitHub as an example of implementing these patterns in a real-world application. Feel free to adapt and improve upon it for your own use cases.