GUIDE: LLM-Driven GUI Generation Decomposition for Automated Prototyping
Problem Statement
Current LLM-based GUI prototyping produces text or static image outputs that support neither visual editing nor direct manipulation, failing to match traditional prototyping workflows. Minor change requests force regeneration of the entire GUI prototype, making iterative design inefficient. This leaves a critical gap between LLM generative capabilities and the interactive, controllable workflow that professional GUI prototyping demands.
Key Novelty
- Hierarchical decomposition of high-level GUI descriptions into fine-granular GUI requirements, enabling targeted and efficient incremental updates rather than full regeneration
- RAG-based integration of Material Design component libraries into the LLM prompting pipeline, grounding generation in standardized, reusable UI components
- Seamless Figma plugin integration that bridges LLM-generated outputs with a widely-used professional prototyping environment, enabling direct visual editing post-generation
Evaluation Highlights
- Preliminary evaluation demonstrates that GUIDE effectively bridges the gap between LLM generation and traditional GUI prototyping workflows in terms of controllability and editability
- The decomposition approach enables more efficient adaptation to user-requested changes compared to direct LLM-based full prototype regeneration
Methodology
- Step 1 - Decomposition: Parse high-level natural language GUI descriptions using an LLM to extract fine-granular GUI requirements (individual UI components, layout constraints, interaction behaviors)
- Step 2 - RAG-enhanced Component Mapping: Use retrieval-augmented generation to query a Material Design component library, matching each fine-granular requirement to appropriate standardized components with contextually grounded prompts
- Step 3 - Figma Prototype Generation: Translate the mapped components and requirements into editable Figma prototype elements via a plugin, enabling direct visual inspection and manual refinement within the Figma environment
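The three-step pipeline above can be sketched as follows. This is a minimal illustration, not GUIDE's actual implementation: the LLM call in Step 1 is stubbed with a fixed result, the retrieval in Step 2 is approximated by keyword overlap rather than embedding search, and all class, field, and component names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical fine-granular requirement, as Step 1 might produce it;
# the field names are illustrative, not GUIDE's actual schema.
@dataclass
class GUIRequirement:
    element: str        # e.g. "search field", as extracted from the description
    component: str      # Material Design component type, filled in by Step 2
    properties: dict    # layout constraints, labels, interaction hints

def decompose(description: str) -> list[GUIRequirement]:
    """Step 1: an LLM call would parse the high-level description into
    fine-granular requirements. Stubbed here with a fixed result."""
    return [
        GUIRequirement("top app bar", "", {"title": "Recipes"}),
        GUIRequirement("search field", "", {"hint": "Search recipes"}),
        GUIRequirement("item list", "", {"rows": 10}),
    ]

def map_to_components(reqs: list[GUIRequirement], library: dict) -> list[GUIRequirement]:
    """Step 2: retrieve the best-matching Material Design component for
    each requirement (keyword overlap stands in for vector retrieval)."""
    for req in reqs:
        scores = {name: len(set(req.element.split()) & set(keywords))
                  for name, keywords in library.items()}
        req.component = max(scores, key=scores.get)
    return reqs

# Toy component index; the real library would be embedded and vector-searched.
LIBRARY = {
    "TopAppBar": ["top", "app", "bar", "header"],
    "TextField": ["search", "field", "input", "text"],
    "List": ["item", "list", "rows"],
}

reqs = map_to_components(decompose("A recipe app with search"), LIBRARY)
print([(r.element, r.component) for r in reqs])
```

Step 3 would then translate each mapped requirement into an editable element via the Figma plugin, so each requirement remains individually addressable for later changes.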
System Components
- LLM-driven module that breaks down high-level GUI descriptions into structured, fine-granular UI requirements for individual components and layout elements
- Retrieval-augmented generation pipeline that indexes the Material Design component library and retrieves relevant components to ground LLM prompts during prototype generation
- Plugin layer that interfaces with the Figma API to render LLM-generated Material Design prototypes as editable vector/UI elements within the Figma workspace
- Change management mechanism that maps user-requested modifications to specific decomposed requirements, enabling targeted regeneration of only the affected GUI segments
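The change management mechanism can be sketched as below. This is an illustrative assumption about how such a component might work, not GUIDE's actual code: requirement IDs and properties are invented, and a keyword match stands in for the LLM-based mapping of a change request to a requirement.

```python
# Hypothetical store of decomposed requirements keyed by ID.
requirements = {
    "req-1": {"element": "top app bar", "props": {"title": "Recipes"}},
    "req-2": {"element": "search field", "props": {"hint": "Search"}},
    "req-3": {"element": "item list", "props": {"rows": 10}},
}

def locate(change_request: str) -> str:
    """Map a user change request to the requirement it targets
    (keyword overlap stands in for an LLM-based mapping)."""
    return max(requirements, key=lambda rid: len(
        set(change_request.lower().split())
        & set(requirements[rid]["element"].split())))

def apply_change(change_request: str, new_props: dict) -> list[str]:
    """Regenerate only the affected requirement; all other
    requirement IDs (and their Figma elements) stay untouched."""
    rid = locate(change_request)
    requirements[rid]["props"].update(new_props)
    return [rid]  # only this segment would be re-rendered in Figma

regenerated = apply_change("rename the top app bar", {"title": "My Recipes"})
print(regenerated)
```

The key property is the return value: a direct LLM approach would regenerate the whole prototype, whereas here only the matched segment is re-rendered.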
Results
| Metric/Benchmark | Baseline (Direct LLM) | GUIDE | Delta |
|---|---|---|---|
| Editability of output | Non-editable text/image | Fully editable Figma components | Qualitative improvement |
| Change efficiency | Full prototype regeneration | Targeted component-level update | Reduced regeneration scope |
| Component grounding | Unconstrained generation | Material Design library-aligned | Improved consistency |
| Workflow integration | Standalone LLM output | Native Figma environment | Professional tool compatibility |
Key Takeaways
- Decomposing complex generation tasks (like full GUIs) into fine-granular sub-requirements before LLM prompting improves controllability and enables surgical updates — a transferable pattern for other structured generation domains
- RAG is an effective strategy for grounding LLM outputs in domain-specific component libraries (design systems, API specs, schema definitions), reducing hallucination and improving standard compliance in code/UI generation
- Integrating LLM generation pipelines directly into existing professional tools (Figma, IDEs, etc.) rather than building standalone interfaces dramatically lowers adoption friction and preserves human-in-the-loop editing workflows
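The RAG grounding pattern from the second takeaway can be sketched as follows: retrieved component documentation is injected into the generation prompt so that output stays aligned with the library. The component names and doc strings are illustrative stand-ins, and naive term overlap replaces the vector index a real pipeline would use.

```python
import re

# Toy component documentation; a real index would cover the full
# Material Design library and be searched via embeddings.
COMPONENT_DOCS = {
    "Button": "Material Button with variants filled outlined text",
    "TextField": "Material TextField with props label hint leadingIcon",
    "Card": "Material Card container with media title and actions",
}

def tokens(s: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", s.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank docs by term overlap with the query (vector search in practice)."""
    q = tokens(query)
    ranked = sorted(COMPONENT_DOCS.values(),
                    key=lambda doc: len(q & tokens(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(requirement: str) -> str:
    """Ground the generation prompt in the retrieved component docs."""
    context = "\n".join(retrieve(requirement))
    return (f"Using only the components documented below, generate the "
            f"element for: {requirement}\n\n{context}")

prompt = build_prompt("a TextField with a search hint")
print("TextField" in prompt)
```

Constraining generation to the retrieved context is what reduces hallucinated components and keeps output compliant with the design system.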
Abstract
Graphical user interface (GUI) prototyping serves as one of the most valuable techniques for enhancing the elicitation of requirements, facilitating the visualization and refinement of customer needs and closely integrating the customer into the development activities. While GUI prototyping has a positive impact on the software development process, it simultaneously demands significant effort and resources. The emergence of Large Language Models (LLMs) with their impressive code generation capabilities offers a promising approach for automating GUI prototyping. Despite their potential, there is a gap between current LLM-based prototyping solutions and traditional user-based GUI prototyping approaches, which provide visual representations of the GUI prototypes and direct editing functionality. In contrast, LLMs and related generative approaches merely produce text sequences or non-editable image output, which lack both of these aspects and therefore impede support for GUI prototyping. Moreover, minor changes requested by the user typically lead to an inefficient regeneration of the entire GUI prototype when using LLMs directly. In this work, we propose GUIDE, a novel LLM-driven GUI generation decomposition approach seamlessly integrated into the popular prototyping framework Figma. Our approach initially decomposes high-level GUI descriptions into fine-granular GUI requirements, which are subsequently translated into Material Design GUI prototypes, enabling higher controllability and more efficient adaptation of changes. To efficiently conduct prompting-based generation of Material Design GUI prototypes, we propose a retrieval-augmented generation (RAG) approach to integrate the component library. Our preliminary evaluation demonstrates the effectiveness of GUIDE in bridging the gap between LLM generation capabilities and traditional GUI prototyping workflows, offering a more effective and controlled user-based approach to LLM-driven GUI prototyping.
Video presentation of GUIDE is available at: https://youtu.be/C9RbhMxqpTU