Abstract
This study presents two techniques for concealing objects. The first quantizes the background scene into a set of depth levels; for each location of the observer's eyes, one depth level is nominated. A 3D observation point (OP) is computed from the 3D location of the observer's eyes, the head rotation angles, and facial features. The precision is 90% for a planar dynamic background at 3 m and decreases as the number of levels or the distance increases. The second technique uses the computed OP and the locations of the display's corners to predict their corresponding 3D points in the background with neural networks, and then searches for their nearest neighbors in the constructed point cloud of the background. The results of this general technique show a very promising solution for concealing 3D objects that are covered with display devices, using RGB-D sensors.
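As a rough illustration of the two pipelines summarized above, the sketch below quantizes a depth map into levels and snaps predicted display-corner points to their nearest neighbors in a background point cloud. This is a minimal sketch, not the authors' implementation: the depth range, the number of levels, the synthetic point cloud, and the `predicted_corners` array (a stand-in for the neural network's output) are all assumptions, and the k-d tree is one common choice for the nearest-neighbor search rather than the paper's stated method.

```python
import numpy as np
from scipy.spatial import cKDTree

# --- Technique 1 (sketch): quantize the background into depth levels. ---
# Assumption: a per-pixel depth map from an RGB-D sensor, in metres,
# quantized into uniform bins over its range. The per-eye-location
# nomination of one level is not reproduced here.
depth_map = np.random.uniform(0.5, 3.0, size=(480, 640))
K = 8  # hypothetical number of depth levels
edges = np.linspace(depth_map.min(), depth_map.max(), K + 1)
levels = np.clip(np.digitize(depth_map, edges) - 1, 0, K - 1)

# --- Technique 2 (sketch): snap predicted corners to the point cloud. ---
# Assumption: the background point cloud is an (N, 3) array of 3D points;
# `predicted_corners` stands in for the network's regressed 3D background
# points, given the OP and the display's corner locations.
background_cloud = np.random.uniform(0.0, 3.0, size=(100_000, 3))
predicted_corners = np.array([
    [0.5, 0.5, 2.9],
    [1.5, 0.5, 2.9],
    [0.5, 1.2, 3.0],
    [1.5, 1.2, 3.0],
])

# Nearest-neighbor search in the constructed point cloud via a k-d tree;
# the matched points define the background patch rendered on the display.
tree = cKDTree(background_cloud)
_, idx = tree.query(predicted_corners, k=1)
snapped_corners = background_cloud[idx]
```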