This AI can see people through walls. Here’s how

Researchers at MIT have built a neural network that estimates the pose and movement of people standing behind a wall. The team trained an AI system on radio signals paired with video footage of people moving around, which made it possible to generate stick figures showing what people were doing on the other side of a wall. The system is called RF-Pose.

X-ray visionary:

Dina Katabi, a researcher at MIT, has been developing technology that uses radio waves to detect people and their movements behind a solid wall. The approach relies on cutting-edge machine learning to interpret the signals. The technology can now reveal something more precise: it depicts the people in the scene as skeleton-like stick figures and can show them moving in real time as they go about normal activities, such as walking or sitting down. It focuses on key points of the body, including joints like elbows, hips, and feet. When a person takes a step, whether occluded by a wall or not, “you see that skeleton, or stick figure, that you created, takes a step with it,” she says. “If the person sits down, you see that stick figure sitting down.”

How it works

The radio signal they use is similar to Wi-Fi but substantially less powerful. The system works because those radio waves can penetrate objects like a wall, then bounce off a human body—which is mostly water, no friend to radio wave penetration—and travel back through the wall to the device. “Now the challenge is: How do you interpret it?” Katabi says. That’s where the AI comes in: specifically, a machine learning tool called a neural network.

The way that artificial intelligence researchers train a neural network—which can deduce its own rules from data in order to learn—is by feeding it annotated information, a process called supervised learning. Want to teach a self-driving car what a traffic light looks like? Show it images that include traffic lights, and annotate them to show the AI where in the image the light is. Neural networks are commonly used to interpret images, but they can also carry out complex tasks like translating from one language to another, or even generating new text by imitating the data they’re given. In this case, though, the researchers had a problem. “Nobody can take a wireless signal and label it where the head is, and where the joints are, and stuff like that,” she says. In other words: labeling an image is easy; labeling radio-wave data that has bounced off a person, not so much.
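To make the supervised-learning idea concrete, here is a deliberately tiny sketch (not the RF-Pose code, and a perceptron rather than a deep network): a model learns a rule purely from examples that a human has labeled, which is exactly the kind of annotation that is easy for images and hard for raw radio signals.

```python
# Minimal supervised learning: a perceptron learns to separate labeled
# 2-D points. The data and labels below are synthetic, standing in for
# "annotated examples" like images with traffic lights marked.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of (x, y) points; labels: +1 or -1 per point."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), t in zip(samples, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
            if pred != t:  # learn only from labeled mistakes
                w[0] += lr * t * x
                w[1] += lr * t * y
                b += lr * t
    return w, b

def predict(w, b, point):
    x, y = point
    return 1 if w[0] * x + w[1] * y + b > 0 else -1

# Human-supplied labels are the whole trick: each sample comes with
# the "right answer" the model is trained to reproduce.
samples = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, (1.5, 1.2)))    # a new point near the +1 cluster -> 1
print(predict(w, b, (-1.5, -0.8)))  # a new point near the -1 cluster -> -1
```

The point of the toy is the bottleneck the researchers describe: this only works when every training example carries a label, and nobody can hand-label a raw wireless signal.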

Their solution, used only during the training period, was to pair the radio device with a camera and then label the images the camera captured, so the neural network could correlate the two. This had to be done without a wall, so the camera could actually see. “We used those labels from the camera,” she says, “along with the wireless signal, that happened concurrently, and we used them for training.” After training, they were surprised to discover that even though the system had only ever seen people who were visible, not occluded, it could detect people who were hidden. “It was able to see and create the stick figure of the human behind the wall,” she says, “although it never saw such thing during training.” Not only that, it can even tell people apart by their gait. With the help of another neural network, the system was shown examples of people walking and then, in new instances involving the same people, identified individuals with an accuracy of more than 83 percent, even through walls.
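The training trick above is a form of cross-modal supervision: the camera acts as a teacher that supplies labels, and the radio-based model is the student. This toy sketch (an illustration under invented stand-in functions, not the actual RF-Pose pipeline) shows the shape of that idea with a one-parameter model fit by gradient descent.

```python
# Cross-modal supervision in miniature: a "camera" labels each keypoint
# position, a "radio" produces a concurrent measurement of the same scene,
# and a student model learns to recover the position from radio alone.
# Both modality functions below are hypothetical stand-ins.

def camera_label(true_position):
    """Stand-in for the vision pipeline: the annotated keypoint position."""
    return true_position

def radio_measurement(true_position):
    """Stand-in for the RF signal: here, just a scaled echo of position."""
    return 0.5 * true_position

def train_student(positions, lr=0.1, epochs=200):
    a = 0.0  # student model: estimated_position = a * radio_measurement
    for _ in range(epochs):
        for p in positions:
            x = radio_measurement(p)    # radio input (what the student sees)
            y = camera_label(p)         # label from the camera (the teacher)
            grad = 2 * (a * x - y) * x  # gradient of squared error
            a -= lr * grad
    return a

# Train with camera and radio running concurrently (no wall in the way).
a = train_student([1.0, 2.0, 3.0])

# At inference time the camera is gone: only the radio signal is used,
# which is why the system can keep working when a wall blocks the view.
estimate = a * radio_measurement(4.0)
print(round(estimate, 2))  # close to the true position, 4.0
```

The design choice this illustrates is the one in the article: labels are harvested from an easy modality (video) to supervise a hard one (radio), and the easy modality is discarded once training ends.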

How will it be used?

The researchers have already started using the system with Parkinson’s patients in a small study involving seven people over eight weeks. By placing the devices in the patients’ homes, they could monitor movement in a comfortable setting without using cameras—in that sense, it’s a less invasive way of learning about someone’s body movements than traditional video would be.

Author : Vineela Chalumuri
Source: Popular Science