
Research Blog

Sumudu Samarakoon

Updated: Jan 13

LLM-aided Human-Robot Interfacing for A Prompt-to-Control Application - ArticAI2024


This work introduces an innovative solution for seamless human-robot interaction, enabling operators to control a mobile robot using natural language commands. A custom large language model (LLM), hosted on a server and equipped with contextual knowledge of the environment, processes free-form text commands provided through a user-friendly interface, eliminating the need for prior knowledge of machine-specific languages. The LLM translates these directives into high-level plans structured around three core questions: Go where?, Find what?, and Do what?. These plans are then transmitted to the robot, which autonomously navigates to the specified location and identifies target objects using an optimized, lightweight artificial intelligence (AI) model designed for real-time performance. Demonstrations validate the system's ability to generate precise, actionable plans from broad commands and execute tasks efficiently, highlighting its potential to enhance human-robot collaboration.
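The three-question plan structure can be sketched as a small parsing step on the server side. The schema, field names, and example command below are illustrative assumptions for this sketch, not the system's actual interface.

```python
from dataclasses import dataclass


@dataclass
class HighLevelPlan:
    """A plan structured around the three core questions (illustrative schema)."""
    go_where: str   # target location the robot should navigate to
    find_what: str  # object the robot should identify on arrival
    do_what: str    # action to perform on the target


def parse_llm_plan(response: str) -> HighLevelPlan:
    """Parse a hypothetical 'key: value' plan emitted by the LLM."""
    fields = {}
    for line in response.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    return HighLevelPlan(
        go_where=fields["go_where"],
        find_what=fields["find_what"],
        do_what=fields["do_what"],
    )


# Example: a broad operator command already translated by the LLM.
plan = parse_llm_plan("go_where: kitchen\nfind_what: red cup\ndo_what: pick up")
print(plan.go_where)  # kitchen
```

In a real deployment the robot would receive such a plan over the network and dispatch each field to its navigation, detection, and manipulation modules in turn.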




Relevant Resources


Paper: [to be included]

Sumudu Samarakoon

Updated: Jan 13

Real-Time Remote Control via VR over Limited Wireless Connectivity - ISCC2024


This work presents an innovative approach to improving human-robot interaction over constrained wireless networks. Leveraging a virtual reality (VR) interface, the solution enables remote control of a robot while ensuring smooth transitions to autonomous operation during connectivity interruptions.

The VR interface provides users with an immersive experience, featuring a dynamic 3D virtual map that updates in real time using sensor data collected and transmitted by the robot. To ensure reliability, the robot continuously monitors wireless connectivity, autonomously switching to self-navigation when connectivity is limited. By integrating real-time mapping, VR-based remote control, wireless connectivity monitoring, and autonomous navigation, this solution offers a robust and seamless framework for end-to-end human-robot interaction in dynamic network environments.
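The switching logic between VR teleoperation and self-navigation can be sketched as a per-cycle mode decision based on measured link quality. The latency threshold and function names below are assumptions for illustration; the paper's actual connectivity criterion may differ.

```python
LATENCY_THRESHOLD_MS = 200  # assumed cutoff for a usable teleoperation link


def select_mode(latency_ms):
    """Choose the control mode from the measured round-trip latency.

    `None` means the link is currently down. Returns either
    "teleoperation" (VR operator stays in control) or
    "autonomous" (robot falls back to self-navigation).
    """
    if latency_ms is not None and latency_ms < LATENCY_THRESHOLD_MS:
        return "teleoperation"
    return "autonomous"


# The robot would evaluate this each control cycle:
print(select_mode(45))    # teleoperation
print(select_mode(None))  # autonomous
```

Making the decision stateless and per-cycle, as sketched here, lets the robot hand control back to the operator as soon as connectivity recovers.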



Relevant Resources


Sumudu Samarakoon

Updated: Jan 13

Maze Discovery using Multiple Robots via Federated Learning - ISCC2024


This work explores the application of federated learning (FL) in a maze discovery scenario using robots equipped with LiDAR sensors. The objective is to train classification models capable of identifying grid area shapes within two distinct square mazes, each featuring irregularly shaped walls. A key challenge arises from the unique wall shapes in each maze, which prevent a model trained on one maze from generalizing to the other. To overcome this, FL enables robots exploring a single maze to collaboratively share and aggregate knowledge, enhancing their ability to accurately classify shapes in the unseen maze. This use case highlights the potential of FL in real-world applications, demonstrating its ability to improve classification accuracy and robustness in complex, dynamic tasks such as maze discovery.
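The knowledge-sharing step can be sketched with federated averaging (FedAvg), the standard FL aggregation rule, where each robot's model parameters are averaged with weights proportional to its number of local samples. The tiny two-robot example below is illustrative; the paper's actual models and aggregation details are not specified here.

```python
import numpy as np


def federated_average(client_weights, client_sizes):
    """FedAvg: average each robot's parameter arrays, weighted by its
    number of local training samples (illustrative sketch)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]


# Two robots, each holding one tiny parameter array of its local model:
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
avg = federated_average([w_a, w_b], client_sizes=[100, 100])
print(avg[0])  # [2. 3.]
```

Each robot would then continue local training from the averaged model, which is how a robot that only explored one maze gains classification ability on the other.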




Relevant Resources

