The view synthesis problem is to generate a virtual view from one or more given views and their associated depth maps. In this paper we adopt the depth image based rendering (DIBR) approach to synthesize new views; no explicit 3D modeling is involved. This study also builds on popular commodity RGB-D (color plus depth) cameras. The color and depth images captured by a pair of RGB-D cameras (Microsoft Kinect for Windows v2) serve as our inputs for synthesizing intermediate virtual views between the two cameras. Several methods, including depth-to-color warping, disocclusion filling, and color-to-color warping, are adopted and designed to achieve this goal. One of our major contributions is a new disocclusion detection algorithm that improves the disocclusion filling result. Furthermore, an improved camera calibration method is proposed that makes use of the additional depth information. Synthesized views of good quality are shown at the end.
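As an illustration of the DIBR idea, the following is a minimal sketch (not the paper's implementation) of forward 3D warping: each source pixel is back-projected using its depth, transformed into the virtual camera's frame, and re-projected. The function name and all camera parameters here are hypothetical.

```python
import numpy as np

def dibr_warp(depth, K_src, K_dst, R, t):
    """Warp source-view pixel coordinates into a virtual view.

    depth        : (H, W) depth map of the source view.
    K_src, K_dst : 3x3 intrinsic matrices of source and virtual cameras.
    R, t         : rotation (3x3) and translation (3,) from source
                   camera frame to virtual camera frame.
    Returns an (H, W, 2) array of warped pixel coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates (u, v, 1)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K_src).T      # back-project to camera rays
    pts = rays * depth[..., None]            # scale by depth -> 3D points
    pts = pts @ R.T + t                      # move into virtual camera frame
    proj = pts @ K_dst.T                     # project with virtual intrinsics
    return proj[..., :2] / proj[..., 2:3]    # perspective divide -> pixels
```

Pixels that map outside the image bounds, or regions newly exposed by the viewpoint change, are the disocclusions that the filling step must handle.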