Kinect Visual Studio





Kinect Virtual Game Pad Studio

Members

Monte Creasor

Background

The release of the Kinect for Xbox 360 in November 2010 has not only dramatically changed the face of computer gaming, but has also opened up many interesting possibilities for non-gaming applications. The device itself is relatively simple, consisting of an RGB camera, a depth sensor, a multi-array microphone, and a pivot motor. Although initially designed to function as a touch-free, gesture-based game controller, its potential was quickly recognized by researchers and software developers, who immediately attempted to reverse engineer a set of drivers for the device. PrimeSense soon released its own set of open source drivers, which facilitated the establishment of openkinect.org, a group describing itself as "a community of people dedicated to creating a useful set of open source libraries for Windows, Linux, and Mac". Their mission is to provide standards and software tools to facilitate the use of the Kinect with PCs and other devices.

Motivation

Most of the Xbox games released for the Kinect require either full body movement or broad hand gestures to control game-play. Players are typically required to stand about 2 meters from the output screen, and game-play usually involves a significant amount of body movement, as well as holding the arms outstretched in front of the body and making gestures to manipulate virtual objects or controls. On the one hand, it has been argued that this technology promotes physical health by requiring the user to perform athletic manoeuvres during game-play. Another less obvious benefit is the reduction of damaging physical strain produced by small repetitive keyboard and mouse movements. On the other hand, this type of input technology is problematic for a large number of people with physical disabilities, as well as for the older segment of the population. The exhausting nature of this type of game-play is difficult to sustain for any length of time: holding the arms out, unsupported, in front of the body is impractical for games which require longer sessions for a satisfying gaming experience. Also, some games become less visually immersive when players are required to be far away from the display device.

Goal

To avoid these constraints, I propose to build an application that will give gamers the ability to design and create their own custom two-dimensional virtual game pads or "touch surfaces", which they can then use to control any game which currently uses keyboard and mouse input. The player will be required to design a flat surface containing any number of optically distinguishable control artifacts, of any size or shape and at any location on the surface. These virtual game pads can either be drawn by hand on any flat surface, or designed in a graphics editor (for example CorelDRAW) and then printed. The user will then optically register the virtual game pad and its controls with the application, after which they can assign any combination of key and mouse actions to each of the virtual control artifacts.

(Note that the term "touch surface" is used here to indicate that, while the scope of this project will likely require a more modest two-dimensional implementation, there are no design constraints preventing the game pad from being a three-dimensional surface.)
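To make the registration and assignment step above more concrete, the following C++ sketch shows the kind of information the editor would have to store for each registered artifact. Everything here is hypothetical and illustrative; none of these names come from existing code.

 #include <string>
 #include <vector>
 
 // Hypothetical sketch of the data the game pad editor would keep for each
 // registered control artifact. Names and fields are illustrative only.
 enum ControlKind { CONTROL_BUTTON, CONTROL_SLIDER, CONTROL_DIAL };
 
 struct ControlRegion {            // location of the artifact on the surface,
     float x, y, width, height;    // in game-pad (surface) coordinates
 };
 
 struct InputBinding {             // keyboard/mouse action assigned by the user
     unsigned short virtualKey;    // e.g. a Win32 virtual-key code, 0 if unused
     bool mouseLeftClick;          // also (or instead) fire a left mouse click
 };
 
 struct ControlArtifact {
     std::string   name;           // label shown in the editor
     ControlKind   kind;
     ControlRegion region;
     InputBinding  binding;
 };
 
 struct GamePadLayout {            // one complete virtual game pad design
     std::string surfaceImagePath;             // printed or hand-drawn layout
     std::vector<ControlArtifact> controls;
 };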

There are many benefits to this approach. Users who suffer from repetitive stress injuries will be able to design their own ergonomic control surfaces according to their specific needs. They will also be able to design multiple surfaces which they can alternate between as input devices to avoid repetitive stress. Players with motor control difficulties will be able to create virtual controls with shapes and sizes that allow them more accurate control during game-play. Virtual game pads may also provide cost savings by reducing the need for different hardware controllers for different games.

Game-play

The project focus will be on building a game pad editor and driver which will work with any game that uses mouse and keyboard input. To achieve this goal within the required time frame, the application will be implemented and tested using an existing game or games, the choice of which will be made at a later stage of the project.

Game-play can best be described in terms of the types of virtual controls and control manipulations that will be supported. Virtual controls will likely include buttons or key-like artifacts, sliders, and possibly dials. Players will manipulate these controls with either their fingers or some other physical pointing device which represents a finger or fingers. The game pad surface will most likely lie flat on a table in front of the display, or on the player's lap. Note that there is no reason to restrict the surface to being horizontal other than ergonomic ease of use. The only requirement is that the surface and the Kinect be appropriately positioned to allow for the proper registration and manipulation of control artifacts.
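Once a fingertip has been located and converted into surface coordinates, deciding which control it is manipulating reduces to a hit test over the registered regions. A minimal sketch, reusing the hypothetical types from the Goal section:

 // Minimal hit test: given a fingertip position already converted into
 // game-pad surface coordinates, return the control it falls inside,
 // or NULL if it touches empty surface.
 const ControlArtifact* HitTest(const GamePadLayout& layout, float x, float y)
 {
     for (size_t i = 0; i < layout.controls.size(); ++i) {
         const ControlArtifact& c = layout.controls[i];
         const ControlRegion&   r = c.region;
         if (x >= r.x && x <= r.x + r.width &&
             y >= r.y && y <= r.y + r.height) {
             return &c;
         }
     }
     return 0;    // no control under the fingertip
 }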

During game-play, the application's device driver will run in the background; it will not obscure the game display or intrude on game immersion. The user will be able to register a special control artifact that gives access to the device driver and game-pad editor during game-play, allowing control settings to be adjusted without leaving the gaming environment.

All game pad registration and control information will be saved between sessions and will only require a quick one-step optical surface re-registration at the beginning of each session. This will involve calibrating the surface as a whole and not the individual control artifacts.
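One approach used by several existing Kinect touch-surface hacks, and a plausible starting point here, is to make that one-step re-registration a capture of a reference depth frame of the empty surface; during play, a point counts as a touch when its live depth reading sits within a thin band just above the reference. A simplified sketch of that test (the thresholds and units are assumptions that would need tuning on real hardware):

 #include <cstdint>
 #include <vector>
 
 // Simplified touch test against the calibrated surface. 'reference' is a
 // 640x480 depth frame captured once with the surface empty (the one-step
 // re-registration); 'live' is the current depth frame. Values are raw
 // Kinect depth units, so the band limits below are placeholders. Invalid-
 // depth sentinels (which depend on the depth format) would also need
 // filtering before this test.
 static const int      WIDTH     = 640;
 static const uint16_t MIN_TOUCH = 4;    // closer than the surface by at least this much
 static const uint16_t MAX_TOUCH = 20;   // but not so much closer that it is only hovering
 
 bool IsTouched(const std::vector<uint16_t>& reference,
                const std::vector<uint16_t>& live,
                int x, int y)
 {
     uint16_t ref = reference[y * WIDTH + x];
     uint16_t cur = live[y * WIDTH + x];
     if (cur >= ref)                      // at or behind the surface: no touch
         return false;
     uint16_t diff = ref - cur;           // how far above the surface the point is
     return diff >= MIN_TOUCH && diff <= MAX_TOUCH;
 }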

Technological Issues

The major issue the project addresses is "support for novel platform", in this case the Kinect 3D input device. It also addresses the topic of "game tool design and development" with respect to the game pad editor itself.

There will be a number of technical challenges in this project, most of which are due to the short time the Kinect has been available to the public. The device itself has only been out for a year, and the available support software is therefore limited. Microsoft has not yet released official drivers, an SDK, or documentation, although it is rumored to be releasing a support suite, integrated into XNA, in the summer of 2011. As a result of these factors, the project will be what all Kinect projects are currently called: a "hack".

Although unofficial device drivers are available, the Kinect hardware and embedded software were released before Microsoft had perfected the design, and as a result there is a known lag-time problem which is problematic for some game designs. For full body movement and broad hand gestures, the lag is acceptable. Using the Kinect to drive a virtual game pad, on the other hand, may feel "unresponsive" to the user when compared with a keyboard and mouse. This issue will have to be addressed once the project has a prototype with which to test this possible limitation.
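When a prototype exists, at least the PC-side share of the delay can be measured directly by timing the per-frame work between frame arrival and synthetic input injection. A rough instrumentation sketch using the Win32 high-resolution timer (the surrounding pipeline is hypothetical):

 #include <windows.h>
 #include <cstdio>
 
 // Rough instrumentation: measure how long the PC-side pipeline (finger
 // detection, hit test, synthetic input injection) takes per depth frame.
 // This does not capture the Kinect's internal capture/transfer latency,
 // only the part the project can directly optimize.
 LARGE_INTEGER g_freq;   // set once at start-up via QueryPerformanceFrequency(&g_freq)
 
 void ProcessDepthFrame(/* depth buffer, layout, ... */)
 {
     LARGE_INTEGER start, end;
     QueryPerformanceCounter(&start);
 
     // ... finger detection, HitTest(), input injection would run here ...
 
     QueryPerformanceCounter(&end);
     double ms = (end.QuadPart - start.QuadPart) * 1000.0 / g_freq.QuadPart;
     printf("pipeline time: %.2f ms\n", ms);
 }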

Perhaps the biggest challenge will be leveraging and working with open source software developed by a broad range of people, from untrained hackers to well-intentioned, unpaid open source software architects. It will be difficult to determine which software projects or packages will be easiest to adapt for the different components of the project design and implementation, and even in the best case this will likely involve a number of dead ends and some wasted time.

Tools

As mentioned earlier, there are no official drivers, SDK, or documentation for the Kinect. The project will therefore use the open source drivers and the "libfreenect" support libraries provided by OpenKinect. Where possible, existing Kinect hacks will be used as a base for developing the game-pad registration and control manipulation components. A number of touch surface projects have already been implemented in the last year, and these also offer open source code.
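To give a sense of what building on libfreenect looks like, here is a minimal sketch that opens the device and streams depth frames into a callback. It follows my reading of the libfreenect C API and its sample code; exact function names, constants, and the Windows build setup should be verified against the OpenKinect documentation.

 #include <libfreenect.h>
 #include <stdio.h>
 #include <stdint.h>
 
 // Called by libfreenect whenever a new depth frame arrives. For the 11-bit
 // mode, 'depth' points to 640x480 16-bit raw depth values.
 static void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp)
 {
     uint16_t* d = (uint16_t*)depth;
     // ... hand the frame to the surface/finger tracking code here ...
     printf("depth frame %u, centre pixel = %u\n", timestamp, d[240 * 640 + 320]);
 }
 
 int main(void)
 {
     freenect_context* ctx;
     freenect_device*  dev;
 
     if (freenect_init(&ctx, NULL) < 0) return 1;
     if (freenect_open_device(ctx, &dev, 0) < 0) return 1;   // first Kinect found
 
     freenect_set_depth_callback(dev, depth_cb);
     freenect_set_depth_mode(dev,
         freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
     freenect_start_depth(dev);
 
     // Pump USB events; each call may invoke depth_cb one or more times.
     while (freenect_process_events(ctx) >= 0) {
         /* game-pad tracking and input injection would be driven from here */
     }
 
     freenect_stop_depth(dev);
     freenect_close_device(dev);
     freenect_shutdown(ctx);
     return 0;
 }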

The project will target the Windows 7 platform and be built using Microsoft Visual Studio 2010. The user interface will be built using C++ and/or C#; the choice will depend on which examples and implementations prove most useful as a starting point (and on what language they are written in).

Another possible tool to consider is the CL NUI Platform, which aims to provide a framework for developing Kinect applications. It will be investigated both as an aid to understanding how to program the Kinect and as a possible tool to use directly.

Finally, XNA will also be explored in order to optimize the design of the keyboard and mouse hooking layer which will interface with DirectX based games.
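Whatever the hooking layer ends up looking like, the most likely low-level mechanism on Windows for turning a virtual button press into something a game can see is the Win32 SendInput call. This is an assumption on my part rather than a settled design decision; a minimal sketch:

 #include <windows.h>
 
 // Inject a press-and-release of one key, as if it came from the keyboard.
 // vk is a Win32 virtual-key code, e.g. 'W' or VK_SPACE.
 void SendKeyTap(WORD vk)
 {
     INPUT in[2];
     ZeroMemory(in, sizeof(in));
 
     in[0].type = INPUT_KEYBOARD;            // key down
     in[0].ki.wVk = vk;
 
     in[1].type = INPUT_KEYBOARD;            // key up
     in[1].ki.wVk = vk;
     in[1].ki.dwFlags = KEYEVENTF_KEYUP;
 
     SendInput(2, in, sizeof(INPUT));
 }

One known complication is that some DirectInput-based games ignore virtual-key events and require scan codes (KEYEVENTF_SCANCODE) instead, which is exactly the kind of detail the XNA/DirectX investigation will have to settle.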