Robotic hands could grab with a gentler, subtler touch.
There have been many attempts at teaching robots how to grab delicate objects, but they tend to rely on rough approximations that quickly fall apart in real life. MIT researchers may have a better solution: teach robots to predict how even the squishiest items will react to their touch. They've developed a "learning-based" particle simulation system that helps robots refine their approach. The new model captures how small pieces of a given material (the "particles" in question) react to touch, and learns from that information when the physics of a given interaction aren't clear. It's akin to how humans intuitively understand grip: we already have expectations for how objects will behave, drawn from our own internalized sense of physics.
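To make the idea concrete, here's a minimal sketch of what a learned particle-dynamics model can look like. It's illustrative only, not MIT's actual code: the class name, network sizes, and the interaction radius are all assumptions. The ingredients match the description above, though: each particle carries a state, nearby particles exchange learned "messages," and a small network turns those messages into predicted motion.

```python
# Hypothetical sketch of a learned particle-dynamics step (names and
# architecture are illustrative, not the researchers' actual code).
import torch
import torch.nn as nn

class ParticleDynamics(nn.Module):
    def __init__(self, state_dim=6, hidden=64):
        super().__init__()
        # Encodes a (particle, neighbor) pair into an interaction "message".
        self.interaction = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        # Decodes a particle's state plus summed messages into an acceleration.
        self.update = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, states, radius=0.1, dt=0.01):
        # states: (N, 6) tensor of per-particle [position, velocity].
        pos, vel = states[:, :3], states[:, 3:]
        dist = torch.cdist(pos, pos)
        # Build the neighborhood graph: particles within `radius` interact.
        src, dst = torch.nonzero((dist < radius) & (dist > 0), as_tuple=True)
        msgs = self.interaction(torch.cat([states[src], states[dst]], dim=-1))
        # Sum the incoming messages for each particle.
        agg = torch.zeros(states.shape[0], msgs.shape[-1], device=states.device)
        agg.index_add_(0, dst, msgs)
        accel = self.update(torch.cat([states, agg], dim=-1))
        # Step forward with semi-implicit Euler integration.
        new_vel = vel + accel * dt
        new_pos = pos + new_vel * dt
        return torch.cat([new_pos, new_vel], dim=-1)
```

The "learning" part comes from comparing the predicted next states against what actually happened and backpropagating the error, so the model keeps improving whenever its physics guess is off.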
The team demonstrated its system by tasking a two-fingered robot, RiceGrip, with reshaping deformable foam into a desired shape, much like you might shape sushi. The robot used a depth camera and object recognition to identify the foam, then used the model to represent it as a dynamic particle graph suited to deformable materials. While it already had an idea of how the particles would react, it adjusted its model whenever the "sushi" behaved in a way it didn't expect.
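A rough sketch of that perceive-predict-correct loop might look like the following. Every name here (perceive_particles, plan_pinch, the robot and camera objects) is a hypothetical placeholder rather than MIT's API; the point is the structure: plan with the learned model, act, then fine-tune the model whenever reality diverges from the prediction.

```python
import torch

# Hypothetical closed-loop controller in the spirit of the RiceGrip demo.
# perceive_particles() and plan_pinch() are illustrative placeholders,
# not the researchers' actual functions.
def reshape_foam(robot, camera, model, target_shape, steps=50, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        # Depth camera + object recognition yield the foam's current
        # particle states; the gripper fingers are represented as
        # particles too, so the planned action enters the prediction.
        states = perceive_particles(camera)
        # Choose the pinch the model predicts will move the particles
        # closest to the target shape.
        action = plan_pinch(model, states, target_shape)
        robot.execute(action)
        # If the "sushi" behaves unexpectedly, adapt the model online:
        # compare the prediction (in practice, rolled out over the whole
        # pinch) against what the camera actually observed.
        predicted = model(states)
        observed = perceive_particles(camera)
        loss = torch.nn.functional.mse_loss(predicted, observed)
        opt.zero_grad()
        loss.backward()
        opt.step()
```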
It's still early days, and the scientists want to improve their approach to handle partially observable scenarios (such as predicting how a pile of boxes will fall when only some of the boxes are visible). They'd also like it to work directly with imagery. If and when that happens, though, it could represent a breakthrough for robots. They'd have an easier time manipulating virtually any kind of object, even when liquids or soft solids might make the results difficult to determine in advance. While robots might not replace sushi chefs any time soon, MIT's learning method makes the prospect that much more realistic.