
LLM-Powered Voice-Controlled Immersive Data Visualization in Unity

Abstract

A VR network visualization system using voice commands and LLM-powered agents to transform natural language into Neo4j queries, enabling interactive graph exploration through categorical coloring, shape encoding, and dynamic layouts.


Contents

This is an ongoing project led by a PhD student, with two main developers. I designed and implemented the multi-agent LangGraph pipeline that transforms voice input into visualization operations, including speech-error correction and ambiguity detection (sketched below). I am also responsible for the categorical encoding system for bivariate visualization, which maps node attributes to color and shape with automatic value assignment and legend generation. Finally, I built the end-to-end execution flow that integrates Whisper speech-to-text, the LangGraph pipeline, Neo4j, and Unity rendering.
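
A minimal sketch of that voice-to-query stage, assuming a three-node LangGraph StateGraph. The node bodies are placeholders standing in for the actual LLM calls, and every name here (PipelineState, correct_speech, and so on) is illustrative rather than the project's real identifiers:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class PipelineState(TypedDict):
    transcript: str   # raw speech-to-text output
    command: str      # transcript after speech-error correction
    ambiguous: bool   # True when the command needs user clarification
    cypher: str       # generated Neo4j query


def correct_speech(state: PipelineState) -> dict:
    # An LLM call would repair STT mistakes here
    # (e.g. "color nodes by jean rah" -> "color nodes by genre").
    return {"command": state["transcript"]}  # placeholder: pass through


def detect_ambiguity(state: PipelineState) -> dict:
    # An LLM call would flag underspecified commands here
    # (e.g. "show the important ones").
    return {"ambiguous": False}  # placeholder: always unambiguous


def generate_cypher(state: PipelineState) -> dict:
    # An LLM call would translate the command into Cypher.
    return {"cypher": "MATCH (n) RETURN n LIMIT 25"}  # placeholder query


graph = StateGraph(PipelineState)
graph.add_node("correct", correct_speech)
graph.add_node("disambiguate", detect_ambiguity)
graph.add_node("to_cypher", generate_cypher)
graph.add_edge(START, "correct")
graph.add_edge("correct", "disambiguate")
# Ambiguous commands exit early so the system can ask the user to rephrase.
graph.add_conditional_edges(
    "disambiguate", lambda s: END if s["ambiguous"] else "to_cypher"
)
graph.add_edge("to_cypher", END)
pipeline = graph.compile()

print(pipeline.invoke({"transcript": "color nodes by genre"})["cypher"])
```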

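The categorical encoding works in the same spirit as the sketch below: each distinct value of one attribute is assigned a color, each distinct value of a second attribute is assigned a shape, and the resulting mappings double as the legend. The palette, shape names, and build_encoding helper are hypothetical; the real system maps onto Unity materials and prefabs:

```python
from itertools import cycle

# Hypothetical palette and shape set.
PALETTE = ["#4e79a7", "#f28e2b", "#e15759", "#76b7b2", "#59a14f"]
SHAPES = ["sphere", "cube", "pyramid", "cylinder", "torus"]


def assign(values, options):
    """Map each distinct value to the next option, wrapping if needed."""
    mapping, pool = {}, cycle(options)
    for v in values:
        if v not in mapping:
            mapping[v] = next(pool)
    return mapping


def build_encoding(nodes, color_attr, shape_attr):
    """Return styled nodes plus the color/shape mappings used as the legend."""
    color_map = assign((n[color_attr] for n in nodes), PALETTE)
    shape_map = assign((n[shape_attr] for n in nodes), SHAPES)
    styled = [
        {**n, "color": color_map[n[color_attr]], "shape": shape_map[n[shape_attr]]}
        for n in nodes
    ]
    return styled, {"color": color_map, "shape": shape_map}


nodes = [
    {"id": 1, "genre": "rock", "role": "artist"},
    {"id": 2, "genre": "jazz", "role": "album"},
    {"id": 3, "genre": "rock", "role": "album"},
]
styled, legend = build_encoding(nodes, color_attr="genre", shape_attr="role")
print(legend)  # {'color': {'rock': '#4e79a7', 'jazz': '#f28e2b'}, 'shape': ...}
```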
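
The end-to-end execution flow can be pictured as follows, reusing the pipeline object compiled in the first sketch. The connection details, model size, audio path, and handle_voice_command helper are all assumptions, and the Unity hand-off is indicated only as a comment:

```python
import whisper
from neo4j import GraphDatabase

# Hypothetical connection details and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
stt = whisper.load_model("base")


def handle_voice_command(audio_path: str) -> list[dict]:
    # 1. Speech to text with Whisper.
    transcript = stt.transcribe(audio_path)["text"]
    # 2. Transcript to Cypher via the LangGraph pipeline compiled above.
    cypher = pipeline.invoke({"transcript": transcript})["cypher"]
    # 3. Execute the query against the Neo4j graph.
    with driver.session() as session:
        records = [record.data() for record in session.run(cypher)]
    # 4. The records would then be serialized and streamed to Unity
    #    (e.g. over a local socket) for rendering in VR.
    return records
```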