Robots dress humans without the full picture

To tackle the problem of obstructed vision, the team developed a “state estimation algorithm” that lets the robot make reasonably accurate educated guesses about where the elbow is at any given time and how the arm is bent – whether it is held straight, outstretched, or bent at the elbow, pointing up, down, or sideways – even when it is completely covered by clothing.

At each instant, the algorithm takes as input the robot’s measurement of the force applied to the fabric and then estimates the elbow’s position – not exactly, but placing it within a box, or volume, that encompasses all possible positions.
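The interface described above – a force reading goes in, a box of candidate elbow positions comes out – can be sketched as follows. This is a toy illustration, not the team's algorithm: the `Box` class, `estimate_elbow_box` function, and the `gain` parameter are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned box of candidate elbow positions: per-axis bounds in metres."""
    lo: tuple  # (x, y, z) lower corner
    hi: tuple  # (x, y, z) upper corner

    def contains(self, point):
        """True if the point lies inside the box on every axis."""
        return all(l <= c <= h for l, c, h in zip(self.lo, point, self.hi))

def estimate_elbow_box(force_reading, prior_box, gain=0.01):
    """Hypothetical sketch: shift the prior box using the force measured
    on the fabric. A pull along an axis suggests the elbow lies farther
    along that axis than the prior box assumed."""
    lo = tuple(l + gain * f for l, f in zip(prior_box.lo, force_reading))
    hi = tuple(h + gain * f for h, f in zip(prior_box.hi, force_reading))
    return Box(lo, hi)
```

The key design point is that the output is a set, not a single point: downstream motion planning can then be made safe against every elbow position the box still admits.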

That knowledge, in turn, tells the robot how to move, Li says. “If the arm is straight, the robot will follow a straight line; if the arm is bent, the robot will have to curve around the elbow.” Getting a reliable estimate is important, he adds. “If the elbow estimate is wrong, the robot could decide on a motion that would create an excessive, and unsafe, force.”

The algorithm includes a dynamic model that predicts how the arm will move in the future, and each prediction is corrected by a measurement of the force being exerted on the cloth at that particular time. While other researchers have attempted this kind of state estimation before, what distinguishes the new work is that the MIT investigators and their collaborators can place a clear upper bound on the uncertainty and guarantee that the elbow will be somewhere within a specified box.
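A predict-then-correct loop with a guaranteed bounding box is the core idea of set-membership estimation, and it can be sketched in a few lines. This is a simplified illustration under assumed names and a bounded-disturbance assumption, not the paper's actual method: boxes are represented as `(lo, hi)` coordinate lists.

```python
def predict(box, velocity, dt, disturbance):
    """Predict step: translate the box along the modeled arm motion,
    then inflate it by a bounded disturbance to cover model error.
    The true elbow cannot escape the inflated box if the disturbance
    bound holds."""
    lo = [l + v * dt - disturbance for l, v in zip(box[0], velocity)]
    hi = [h + v * dt + disturbance for h, v in zip(box[1], velocity)]
    return (lo, hi)

def correct(box, measurement_box):
    """Correct step: intersect the predicted box with the set of
    positions consistent with the force measurement. If both sets
    contain the true elbow, so does their intersection – which is
    how the upper bound on uncertainty is maintained."""
    lo = [max(a, b) for a, b in zip(box[0], measurement_box[0])]
    hi = [min(a, b) for a, b in zip(box[1], measurement_box[1])]
    if any(l > h for l, h in zip(lo, hi)):
        raise ValueError("empty intersection: a model assumption was violated")
    return (lo, hi)
```

Unlike a Kalman filter, which summarizes uncertainty as a covariance, this set-based style yields a hard guarantee – the elbow is provably inside the box – which is what allows the robot to certify that no planned motion produces unsafe force.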

Both the model used to predict arm movement and elbow position and the model used to gauge the force applied by the robot incorporate machine learning techniques. The data used to train the machine learning systems were obtained from people wearing “Xsens” suits with built-in sensors that accurately track and record body movements.
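To make the training setup concrete, here is a toy version of the idea: fit a model that maps a force reading on the fabric to an elbow position, using motion-capture-style paired data. Everything here is synthetic and hypothetical – a plain least-squares fit standing in for whatever learned models the researchers actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for motion-capture training data: each row pairs
# a 3-axis force reading on the fabric with the recorded elbow position.
true_map = np.array([[0.02, 0.0,   0.0 ],
                     [0.0,  0.015, 0.0 ],
                     [0.0,  0.0,   0.01]])
forces = rng.normal(size=(500, 3))
elbows = forces @ true_map + 0.001 * rng.normal(size=(500, 3))

# Fit a linear map from force to elbow position by least squares –
# a toy substitute for the learned measurement model.
learned_map, *_ = np.linalg.lstsq(forces, elbows, rcond=None)

def predict_elbow(force):
    """Predict an elbow position from a single force reading."""
    return force @ learned_map
```

In the real system the supervision comes from the Xsens suit: the suit's sensors provide ground-truth body poses, and the robot's force sensors provide the inputs, so the two streams only need to be recorded together and time-aligned.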

After the robot was trained, it was able to infer elbow pose while putting a jacket on a human subject – a person who moved their arm in various ways during the procedure, sometimes in response to the robot’s tugging on the jacket and sometimes engaging in random motions of their own volition.

That work focused strictly on estimation – determining the position of the elbow and arm as accurately as possible – but Shah’s team has already moved on to the next step: developing a robot that can continually adjust its movements in response to changes in arm and elbow orientation.

In the future, they plan to address the issue of “personalization” – developing a robot that can account for the idiosyncratic ways in which different people move. Likewise, they envision robots versatile enough to work with a wide variety of textile materials, each of which may respond somewhat differently to stretching.

While the researchers in this group are certainly interested in robot-assisted dressing, they recognize the technology’s potential for far broader utility. “We didn’t specialize this algorithm in any way to make it work only for dressing robots,” notes Li.

“Our algorithm solves the general state estimation problem and could therefore lend itself to a number of potential applications. The key to all of this is the ability to infer, or estimate, an unobservable state. A robot could, for instance, recognize its human partner’s intentions as the two work collaboratively to move blocks or set the dining table in an orderly manner.”

Here’s a conceivable scenario for the not-too-distant future: a robot could set the table for dinner and perhaps even help your child stack the blocks left on the dining room floor neatly in a corner of the room. It might then help you put on your dinner jacket to make you more presentable before the meal.

It could also carry the plates to the table and serve appropriate portions to the diners. One thing the robot won’t do is eat up all the food before it gets to the table. Fortunately, that’s one “app” – as in application rather than appetite – that is not on the drawing board.

Written by Steve Nadis

Source: Massachusetts Institute of Technology
