For many years, interaction with computational devices was limited to the keyboard and mouse. More recently, touchscreens have become popular, especially in mobile computing. However, accurate three-dimensional gestures remain difficult to recognize. One approach is the use of depth cameras, which are again receiving much attention in both academic and commercial fields since the release of inexpensive consumer devices such as Microsoft's Kinect in 2010. This presentation reviews depth camera technologies and how gestures can be used in new interface designs. The first part establishes a framework to evaluate the suitability of depth cameras for gesture recognition. The second part focuses on interaction in three-dimensional virtual spaces. I developed a gesture interface for an existing application to navigate virtual spaces and construct LEGO models, and performed a user study comparing the gesture input with traditional mouse and keyboard input. Finally, I discuss the implications of the findings for future research on gesture interfaces.