PhD Student in Computer Science
Singapore Management University
I’m Smitha, a PhD candidate in Human-Computer Interaction at Singapore Management University (SMU). My work focuses on developing solutions for indoor positioning using conversational interaction. I’m especially interested in how natural language, when combined with sensor data and behavioral models, can unlock new possibilities in spatial intelligence, a domain where language is still underused.
My current research explores how conversational input can drive concrete performance improvements in indoor positioning systems. By integrating natural language interaction into these systems, I aim to design low-cost, infrastructure-free approaches that are both practical and human-centered.
Before my PhD, I completed my master’s at the National University of Singapore (NUS), where I investigated how sensorimotor engagement through haptic touchscreen interfaces could enhance second-language vocabulary learning.
Across projects, I specialize in rapid prototyping (Python is my go-to), user-centered design with iterative pilots, and user behavior analysis. Outside research, I’m always up for discovering new food spots and never get tired of rewatching Jurassic Park 🦖.
We developed a low-cost indoor tracking system that improves inertial sensor accuracy by incorporating user-provided conversational cues. Our approach combines smartphone IMU data with natural language descriptions of known indoor locations, enabling accurate tracking without requiring pre-installed infrastructure or situated data collection.
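As a rough illustration of the general idea (a minimal sketch, not the published system's code), the Python snippet below shows how a single conversational cue about a known landmark might correct a drifting dead-reckoned position; the landmark map, cue matching, and confidence weighting here are all hypothetical simplifications:

# Illustrative sketch only: nudging an IMU dead-reckoning estimate toward a
# landmark the user mentions in conversation. The landmark map and weights
# are hypothetical, not taken from the published system.

LANDMARKS = {
    # Hypothetical landmarks a user might mention, with known (x, y) in metres.
    "main entrance": (0.0, 0.0),
    "information counter": (12.5, 4.0),
    "escalator": (25.0, 9.5),
}

def apply_conversational_cue(estimate, cue_text, confidence=0.7):
    """Blend a drifting (x, y) estimate with the position of a mentioned landmark.

    `confidence` controls how much the cue is trusted over dead reckoning.
    """
    for name, (lx, ly) in LANDMARKS.items():
        if name in cue_text.lower():
            ex, ey = estimate
            # Weighted blend of the dead-reckoned position and the landmark position.
            return (
                (1 - confidence) * ex + confidence * lx,
                (1 - confidence) * ey + confidence * ly,
            )
    return estimate  # No recognised landmark: keep the IMU estimate unchanged.

if __name__ == "__main__":
    drifted = (14.8, 6.3)  # Dead-reckoned position after some drift.
    corrected = apply_conversational_cue(drifted, "I'm right next to the information counter")
    print(corrected)  # Pulled toward (12.5, 4.0)

The real system handles far richer language understanding and fusion than this keyword lookup and linear blend; the sketch only conveys why a mentioned landmark can anchor an otherwise drifting estimate.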
We developed Conversational Localization, a sensorless indoor localization approach based on natural language interaction. Our system estimates a user's location from their described surroundings, using strategies to extract the most informative details from those descriptions.
Sensorimotor activation enhances learning, and we applied this principle to digital vocabulary learning by incorporating free-form digital annotation in mobile flashcard interfaces. By varying annotation type, presentation sequence, and haptic feedback, we identified a design that significantly improved retention, yielding a 24.21% increase in immediate recall and a 30.36% improvement in 7-day delayed recall.
Thrilled to share that our work was accepted to IMWUT / UbiComp 2025! We enabled smartphone-only tracking by strategically integrating conversational interaction, advancing continuous indoor localization without extra infrastructure.
Excited to be an invited speaker at FIT2025, presenting my work on Conversational Localization (September 2025)
We recently tested our indoor tracking system in a museum and a mall. It was a fun chance to see our research in action. Huge thanks to the wonderful participants! (July 2025)
I received the award for my research contributions to natural language interfaces for indoor spatial intelligence, with an emphasis on sensorless localization.
Met some lovely folks at the SG HCI Meetup 2025, where I demoed my work on using conversational interaction to improve indoor smartphone tracking.
I had a great time presenting our work at IMWUT / UbiComp 2024 in Melbourne.
At the Ministry of Education (MOE) Academic Research Council (ARC) Site Visit, I engaged with academic leaders and policymakers, discussing research and exchanging insights on human-centered technology.
I had the opportunity to share my work with young researchers from around the world at GYSS 2024, engaging in discussions and exchanging ideas with peers and experts.
The Singapore HCI Meetup is a national gathering of researchers, designers, and practitioners. At the 2024 event, I presented my ongoing work and engaged with experts in human-centered technology.