Researchers at Princeton University’s School of Engineering have used a large language model (LLM) to help a robotic manipulator interpret commands to organize a room.
Manipulators, or robotic arms, excel at well-defined tasks. In a factory setting, a manipulator can assemble machine parts, paint cars, and even carve sculptures. Put one in a home, though, and the robot is lost. Given a simple instruction like “tidy up the room,” it might turn the house completely upside down.
The Princeton researchers attribute this to personal preference. The hardest part of tidying is deciding where each item belongs. In an industrial setting, every object has a fixed location, but at home, where a particular object is stored depends on everything from personal taste to cultural background. Some people hang their shirts, for instance, while others fold them onto shelves.
How leveraging AI is making a difference
A home robot could be painstakingly programmed for each of these duties, but the advent of large language models makes that step unnecessary. Using only a few examples of prior placements, the Princeton researchers taught the robot particular preferences.
The team ran example user preferences through an LLM to derive general placement rules, which the robot then carried out using its three basic operations: picking, placing, and tossing. Following these rules, the robot sorted tools, wooden blocks, and fruits onto different shelves, and separated dark and light garments into different containers.
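The preference-generalization step can be sketched as few-shot prompting: the LLM sees a handful of example placements and is asked to extend the pattern to an unseen object. The following minimal sketch shows only the prompt-construction side; the example objects and receptacles are invented, and the actual LLM call (not shown) would be made with whatever completion API the system uses.

```python
# Hypothetical few-shot prompt builder illustrating the pattern described
# above. The example placements and receptacle names are invented for
# illustration; they do not come from the Princeton system.

EXAMPLES = [
    ("dark socks", "dark-clothes bin"),
    ("black t-shirt", "dark-clothes bin"),
    ("white shirt", "light-clothes bin"),
]

def build_prompt(examples, new_object):
    """Build a few-shot prompt asking an LLM to generalize placements."""
    lines = ["Place each object into the correct receptacle."]
    for obj, receptacle in examples:
        lines.append(f"object: {obj} -> receptacle: {receptacle}")
    # The LLM would complete this final line with its predicted receptacle.
    lines.append(f"object: {new_object} -> receptacle:")
    return "\n".join(lines)

prompt = build_prompt(EXAMPLES, "grey hoodie")
print(prompt)
```

Given enough such examples, the model can infer an implicit rule (here, "dark clothes go in the dark-clothes bin") and apply it to objects it has never seen, which is what lets a few demonstrations stand in for exhaustive programming.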
In a cluttered-room scenario, the robot separated plastic utensils from paper bags and cans, and placed each item in the appropriate garbage or recycling bin. The arm could open and close drawers and was effective at putting objects into them, whether spherical or rectangular. It also distinguished storage boxes by color and sorted objects according to the given instructions.
In real-world trials, the robot placed objects correctly 85% of the time. The researchers have posted the code for LLM evaluation and the real-robot implementation to GitHub.