The radio signal they use is similar to Wi-Fi but substantially less powerful. The system works because those radio waves can penetrate objects like a wall, then bounce off a human body—which is mostly water, no friend to radio wave penetration—and travel back through the wall to the device. “Now the challenge is: How do you interpret it?” Katabi says. That’s where the AI comes in: specifically, a machine learning tool called a neural network.
The way that artificial intelligence researchers train a neural network—which can deduce its own rules from data in order to learn—is by feeding it annotated information. It’s a process called supervised learning. Want to teach a self-driving car what a traffic light looks like? Show it images that include traffic lights, and annotate them to show the AI where in the image the light is. Neural networks are commonly used to interpret images, but they can also carry out complex tasks like translating from one language to another, or even generating new text by imitating the data they’re given. But in this case, the researchers had a problem. “Nobody can take a wireless signal and label it where the head is, and where the joints are, and stuff like that,” she says. In other words: labeling an image is easy; labeling radio-wave data that’s bounced off a person, not so much.
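To make the supervised-learning idea concrete, here is a minimal toy sketch (entirely hypothetical data, not the researchers' system): a single artificial neuron deduces a classification rule purely from labeled examples, the way a network learns "traffic light" from annotated images.

```python
# Toy supervised learning: a one-neuron "network" fits weights so its
# predictions match the labels attached to the training data.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Adjust weights w and bias b whenever a prediction disagrees with a label."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # the supervision signal: label vs. guess
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Annotated data: each point carries a human-supplied label
# (here the hidden rule is simply "1 when x1 + x2 > 1").
data = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((1.0, 1.0), 1),
        ((0.9, 0.8), 1), ((0.1, 0.7), 0), ((1.2, 0.2), 1)]
w, b = train_perceptron(data)
```

After training, the neuron classifies new points it has never seen, having inferred the rule only from the annotations.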
Their solution, just for the training period, was to pair the radio device with a camera, and then label the images the camera captured to help the neural network correlate the two. This had to be done without a wall, so the camera could actually see. “We used those labels from the camera,” she says, “along with the wireless signal, that happened concurrently, and we used them for training.” After the training, the researchers were surprised to discover that even though the system had only been trained on people who were visible, not occluded, it could detect people who were hidden. “It was able to see and create the stick figure of the human behind the wall,” she says, “although it never saw such thing during training.” Not only that, but the system can even tell people apart by their gait. With the help of another neural network, it was shown examples of people walking, and later, in new instances involving the same people, identified individuals with an accuracy of more than 83 percent, even through walls.
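The training setup described above can be sketched in miniature. In this hypothetical illustration (the function names, features, and labels are all invented for the example), a camera-based labeler acts as the "teacher" during recording, its labels are attached to the concurrent radio frames, and a simple "student" model then classifies radio frames alone, with no camera in sight.

```python
def camera_labeler(camera_frame):
    """Stand-in for a vision system that can label what it sees directly."""
    return camera_frame["pose"]

def nearest_neighbor_predict(train, radio_feat):
    """Toy student model: classify a radio frame by its closest labeled frame."""
    best = min(train,
               key=lambda fx: sum((a - b) ** 2 for a, b in zip(fx[0], radio_feat)))
    return best[1]

# Concurrent recordings: at each moment, a camera frame and a radio frame.
recordings = [
    ({"pose": "standing"}, (0.9, 0.1)),
    ({"pose": "standing"}, (0.8, 0.2)),
    ({"pose": "sitting"},  (0.1, 0.9)),
    ({"pose": "sitting"},  (0.2, 0.8)),
]

# Training: tag each radio frame with what the camera saw at the same instant.
train = [(radio, camera_labeler(cam)) for cam, radio in recordings]

# Deployment: the camera is gone; only the radio measurement is available.
guess = nearest_neighbor_predict(train, (0.85, 0.15))  # -> "standing"
```

The real system uses a neural network rather than nearest neighbors, but the cross-modal trick is the same: the camera supplies the labels the radio data could never carry on its own.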
The researchers have already started using the system, in a small study, with Parkinson’s patients. By putting the devices in the patients’ homes, the team could monitor their movements in a comfortable setting without using cameras—in that sense, it’s a less invasive way of learning about someone’s body movements than traditional video would be. That study involved seven people and lasted eight weeks.
Author: Vineela Chalumuri
Source: Popular Science