HCI in the AI Era (1): The Rebirth of MVC
HCI in the AI era is not just a rethinking of tools and technologies; it is a reimagining of how humans interact with the digital world. The rebirth of MVC is just the beginning.

Introduction
Human-Computer Interaction (HCI) has undergone significant transformations over the past few decades, evolving from static, task-oriented systems to more intuitive and interactive designs. As we enter the era of artificial intelligence, the relationship between humans and software systems is being redefined. One of the most fundamental and enduring design patterns in software development, the Model-View-Controller (MVC) architecture, is experiencing a renaissance in this new AI-driven landscape. While MVC has traditionally been applied to numeric or structured data, the advancement of Large Language Models (LLMs) makes it possible to extend the pattern to unstructured text, opening up revolutionary ways to interact with and visualize information.
This essay explores the concept of MVC in the AI era, emphasizing how LLMs enable multiple perspectives on textual data, much like traditional MVC systems allow diverse views on numeric or structured data. Focusing on emerging tools and possibilities such as AI-powered mind maps, timelines, and visual-rich lists, it highlights the rebirth of MVC as a foundational pattern for next-generation HCI.
The MVC Design Pattern: A Brief Overview
The Model-View-Controller (MVC) pattern separates applications into three parts:
- Model: Manages the data and logic.
- View: Displays the model data in various formats.
- Controller: Processes user input and updates the model or view.
By decoupling these components, MVC allows multiple views to present the same data in different ways — like charts, graphs, or tables for numeric datasets.
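To make this concrete, here is a minimal Python sketch of the pattern: one model, two registered views, and a controller that routes input. The class names are illustrative, not taken from any particular framework.

```python
# Minimal MVC sketch: one numeric model, several registered views.
# All names (SalesModel, TableView, BarChartView) are illustrative.

class SalesModel:
    """Model: holds the data and notifies registered views when it changes."""
    def __init__(self):
        self._data = {}      # e.g. {"Q1": 120, "Q2": 95}
        self._views = []

    def register_view(self, view):
        self._views.append(view)

    def set(self, key, value):
        self._data[key] = value
        for view in self._views:      # every view re-renders the same model
            view.render(self._data)


class TableView:
    """View: renders the model as a plain table."""
    def render(self, data):
        for key, value in data.items():
            print(f"{key:>4} | {value}")


class BarChartView:
    """View: renders the same model as a crude text bar chart."""
    def render(self, data):
        for key, value in data.items():
            print(f"{key:>4} {'#' * (value // 10)}")


class Controller:
    """Controller: turns user input into model updates."""
    def __init__(self, model):
        self.model = model

    def handle_input(self, key, value):
        self.model.set(key, value)


if __name__ == "__main__":
    model = SalesModel()
    model.register_view(TableView())
    model.register_view(BarChartView())
    controller = Controller(model)
    controller.handle_input("Q1", 120)   # both views refresh from the same model
```

The point of the sketch is simply that several views consume one model; the rest of this essay asks what happens when that model is a body of text rather than a table of numbers.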
A fine example is the Virginia Bioinformatics Institute (VBI) system at Virginia Tech (2003–2005), which registered visualization tools for bioinformatics data. Upon receiving a dataset, it identified and presented the compatible tools, enabling users to explore the data from multiple perspectives and uncover insights. (I was one of the main developers.)
From Numeric Data to Textual Content: The Limitations of Traditional MVC
Traditional MVC excels with structured numeric data, enabling clear visualizations like charts or tables. However, unstructured text presents challenges, as it doesn’t naturally fit into a single visual format. For example:
- A news article contains timelines, relationships, and keywords.
- A research paper has both hierarchies (headings) and sequential flow.
- A forum thread requires relational views for interactions and timelines for conversation order.
Traditional MVC systems lack the capability to interpret text data in a meaningful way that supports diverse visualizations. The process of transforming text into structured, visualizable data has historically been manual, tedious, and incomplete.
The Role of Large Language Models (LLMs)
The emergence of Large Language Models (LLMs), such as GPT-4, has fundamentally changed how we process and understand text. LLMs are trained on massive amounts of textual data, enabling them to perform tasks such as:
- Semantic understanding of text.
- Generating structured outputs (e.g., extracting key points, creating hierarchies, summarizing timelines).
- Context-aware reasoning.
In the context of MVC, LLMs act as powerful transformation engines that can convert unstructured text into structured data suitable for visualization. This capability enables the registration of multiple views, each tailored to reveal different aspects of the same text model. Just as numeric data can be visualized through various chart types, LLMs make it possible to visualize textual data through the views below (a minimal sketch of the transformation layer follows the list):
- Hierarchical Views: Representing text as mind maps or tree structures.
- Sequential Views: Structuring content as ordered lists or timelines.
- Relational Views: Mapping relationships, such as connections between ideas, people, or themes.
- Sentiment and Contextual Views: Visualizing emotional tones, key phrases, or conceptual clusters.
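The sketch below illustrates this transformation layer under simple assumptions: a placeholder call_llm function stands in for whatever LLM client is available, and the prompts and JSON shapes are invented for illustration rather than taken from any specific product or API.

```python
# Sketch: an LLM as the transformation layer between a text model and its views.
# `call_llm` is a placeholder; the prompts and JSON shapes are assumptions.
import json

VIEW_PROMPTS = {
    "hierarchical": 'Extract a topic tree as JSON: {"topic": ..., "children": [...]}.',
    "sequential": 'Extract dated events as JSON: [{"date": ..., "event": ...}].',
    "relational": 'Extract relations as JSON: [{"source": ..., "relation": ..., "target": ...}].',
    "sentiment": 'Label each sentence positive/negative/neutral; return a JSON list.',
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned JSON so the sketch runs."""
    return '{"topic": "Sample document", "children": []}'

def transform(text: str, view_type: str):
    """Turn unstructured text into structured data for the requested view."""
    prompt = f"{VIEW_PROMPTS[view_type]}\n\nText:\n{text}"
    return json.loads(call_llm(prompt))  # every view consumes the same text model

print(transform("Some article text...", "hierarchical"))
```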
Practical Applications: Text-Centric Views in the AI Era
Let us explore how AI, powered by LLMs, is breathing new life into MVC through text-centric visualizations.
1. Mind Map Views: Hierarchical Representation of Text
Mind maps provide a tree-like structure for visualizing text hierarchies, such as outlines, headings, or conceptual relationships. LLMs can process long-form text and extract key themes, subtopics, and details to generate mind maps dynamically.
For example:
- A textbook chapter: LLMs can identify headings, subheadings, and bullet points to construct a hierarchical outline.
- A legal document: Key clauses and their relationships can be mapped visually.
- A long paragraph: the text can be broken into multiple pages and displayed in a slider view, using a library like the popular impress.js for smooth 3D transitions between pages.
In an educational setting, a mind map view enables students to see how concepts are organized, helping them form clearer mental models of the material.
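As a rough illustration, the following sketch renders a topic tree (in the JSON shape assumed in the transformation sketch above) as an indented outline, which a front end could turn into a graphical mind map. The sample tree is invented.

```python
# Sketch: rendering an LLM-extracted topic tree as an indented outline,
# a text stand-in for a graphical mind map. The {"topic", "children"} shape
# is the assumed contract from the earlier transform() sketch.

def render_mind_map(node: dict, depth: int = 0) -> None:
    """Print a topic tree as an indented outline."""
    print("  " * depth + "- " + node["topic"])
    for child in node.get("children", []):
        render_mind_map(child, depth + 1)

chapter_tree = {
    "topic": "Photosynthesis",
    "children": [
        {"topic": "Light reactions", "children": [{"topic": "ATP synthesis", "children": []}]},
        {"topic": "Calvin cycle", "children": []},
    ],
}
render_mind_map(chapter_tree)
```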
2. Timeline Views: Sequential Text Transformation
Text that contains temporal data (e.g., events, processes, or histories) can be transformed into a timeline view. LLMs can identify time markers and organize content chronologically.
For example:
- A historical article: Events can be extracted and mapped on a timeline.
- A project report: Milestones and deadlines can be visualized sequentially.
- A novel or screenplay: The plot can be represented as a timeline, revealing narrative flow.
This approach is already partially adopted by tools like Google’s Learning Now, which extracts and organizes text into list-based views. However, AI-driven systems can go further by detecting implicit temporal relationships within text and enriching timelines with contextual summaries and visuals.
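A minimal sketch of the timeline step, assuming events have already been extracted into the {"date", "event"} shape used earlier; the sample events are only illustrative.

```python
# Sketch: turning extracted events into a chronological timeline view.
from datetime import date

events = [
    {"date": "1969-07-20", "event": "Apollo 11 lands on the Moon"},
    {"date": "1961-04-12", "event": "First crewed spaceflight"},
    {"date": "1957-10-04", "event": "Sputnik 1 launched"},
]

# Sort by parsed date so the ordering implicit in the text becomes explicit.
for item in sorted(events, key=lambda e: date.fromisoformat(e["date"])):
    print(f"{item['date']}  {item['event']}")
```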
3. Relational Views: Mapping Connections
Many types of text involve relationships between ideas, entities, or people. LLMs can analyze these connections and generate network graphs or other relational views.
For example:
- Research papers: Citations, authors, and referenced works can be mapped as a knowledge graph.
- Discussion threads: Relationships between participants, replies, and themes can be visualized as networks.
- Story analysis: Characters and their interactions can be shown as relational graphs.
These views allow users to identify patterns, clusters, and key relationships that are otherwise difficult to discern in linear text.
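Here is a small sketch of the relational step, assuming relations arrive in the {"source", "relation", "target"} shape assumed earlier; the resulting adjacency map could feed any graph renderer.

```python
# Sketch: building an adjacency map from LLM-extracted relations.
from collections import defaultdict

relations = [
    {"source": "Paper A", "relation": "cites", "target": "Paper B"},
    {"source": "Paper A", "relation": "cites", "target": "Paper C"},
    {"source": "Paper C", "relation": "shares an author with", "target": "Paper B"},
]

graph = defaultdict(list)
for r in relations:
    graph[r["source"]].append((r["relation"], r["target"]))

for node, edges in graph.items():        # hand this structure to a graph view
    for relation, target in edges:
        print(f"{node} --{relation}--> {target}")
```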
4. Sentiment and Contextual Views
LLMs can analyze text for tone, sentiment, and context, enabling more abstract forms of visualization. For example:
- A customer feedback report could visualize emotional clusters (positive, negative, neutral).
- A speech or debate transcript could highlight emotional peaks, key phrases, and rhetorical shifts.
- A social media thread could summarize sentiment trends over time.
- Entity chains: clicking on an entity could dynamically highlight all related entities, enhancing contextual understanding.
These views add a layer of interpretation that goes beyond structure, helping users understand the emotional or thematic tone of textual content.
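As a simple sketch, per-sentence sentiment labels (assumed to come from an LLM pass like the one sketched earlier) can be aggregated into clusters before being visualized; the feedback lines below are invented sample data.

```python
# Sketch: aggregating per-sentence sentiment labels into a cluster summary.
from collections import Counter

labelled_feedback = [
    ("Checkout was fast and painless.", "positive"),
    ("The app crashed twice during payment.", "negative"),
    ("Delivery arrived on the promised date.", "neutral"),
    ("Support resolved my issue within an hour.", "positive"),
]

clusters = Counter(label for _, label in labelled_feedback)
for label, count in clusters.most_common():   # e.g. positive: 2, negative: 1, ...
    print(f"{label}: {count}")
```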
A New Paradigm: Extending MVC with AI-Driven Flexibility
The rebirth of MVC in the AI era introduces a new paradigm for software design — one that focuses not just on structured data but on the dynamic transformation of unstructured content. LLMs serve as the transformation layer, bridging the gap between text models and visual representations. This paradigm offers several key advantages:
- Dynamic Visualization: Users can switch between multiple views seamlessly, exploring the same text model through different perspectives (hierarchical, sequential, relational, etc.).
- Context-Aware Interaction: AI can infer user intent and recommend the most appropriate views or transformations based on the task at hand.
- Personalized Insights: By registering multiple tools for text analysis, AI-driven systems allow users to tailor visualizations to their individual needs and preferences (see the registry sketch after this list).
- Scalability: The approach scales across domains, from education and research to project management, legal analysis, and creative writing.
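Putting these pieces together, the sketch below shows one way such a system might be organized: views register themselves against view types (echoing the VBI tool-registration idea), a deliberately naive recommend_views function stands in for AI intent inference, and a stubbed transform represents the LLM layer. All names are illustrative assumptions, not an existing framework.

```python
# Sketch: a view registry for text models, with AI-driven view recommendation
# stubbed out. Everything here is illustrative.

VIEW_REGISTRY = {}

def register_view(view_type):
    """Register a renderer for one way of looking at the same text model."""
    def decorator(render_fn):
        VIEW_REGISTRY[view_type] = render_fn
        return render_fn
    return decorator

@register_view("hierarchical")
def render_outline(structured):
    print("outline:", structured)

@register_view("sequential")
def render_timeline(structured):
    print("timeline:", structured)

def transform(text, view_type):
    """Stub for the LLM transformation layer (see the earlier sketch)."""
    return {"view": view_type, "source_chars": len(text)}

def recommend_views(text):
    """Naive stand-in for AI intent inference: suggest views for this text."""
    return ["sequential", "hierarchical"]

def show(text):
    for view_type in recommend_views(text):
        VIEW_REGISTRY[view_type](transform(text, view_type))

show("A short project report with milestones and an outline.")
```

The design choice mirrors classic MVC: the registry decouples the text model from its renderers, so adding a new perspective means registering one more view rather than rewriting the pipeline.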
Future Implications: HCI and Beyond
The extension of MVC from numerical data to unstructured text represents a broader shift in HCI principles. Traditional interfaces were static and predefined, offering limited flexibility for exploring data. In contrast, AI-powered interfaces are dynamic, adaptive, and user-driven.
- Education: Students can interact with course materials through mind maps, timelines, or relational views, fostering active learning and deeper understanding.
- Knowledge Work: Researchers, analysts, and content creators can leverage multiple perspectives to extract insights from text-heavy sources.
- Collaboration: AI systems can visualize collaborative workspaces, mapping contributions, timelines, and relationships within teams.
- Accessibility: Dynamic views can cater to different learning styles and cognitive needs, making textual content more accessible.
Conclusion
The MVC design pattern, long a cornerstone of software architecture, is experiencing a profound rebirth in the AI era. By harnessing the power of LLMs, we can extend MVC from numeric data to textual content, enabling dynamic, multi-perspective visualizations. This evolution represents a fundamental shift in how we design software interfaces, transforming HCI to be more intuitive, adaptive, and insightful.
In the years to come, AI-driven MVC systems will become a foundational tool for exploring, analyzing, and interacting with information in ways that were previously unimaginable. Whether through hierarchical mind maps, sequential timelines, or relational graphs, the rebirth of MVC marks a significant milestone in the ongoing evolution of human-computer interaction — one that brings us closer to software that truly understands and augments human intent.
PS: Thanks to ChatGPT-4o for crafting this essay based on the main ideas and examples I provided.