Current tabletop systems are designed to sense 2D interactions taking place on the tabletop surface, such as finger touches and tangible objects. The ability to interact above the tabletop surface makes it possible to support 3D interactions. For example, an architect can examine a 2D blueprint of a building shown on the tabletop display while inspecting 3D views of the building by moving a mobile display above the tabletop. Recent approaches to localizing objects in 3D require visible markers or embedded sensors [Song et al. 2009]. Visible markers often interfere with the content users are focusing on, limiting their usefulness and applicability.