Input/Output model for multiple simultaneous users
Description of the project: For decades, human-computer interaction has focused on a single user with one mouse, one keyboard, and one monitor. Many systems now include multiple monitors; at the desktop level, however, the dominant paradigm is still a single user with one keyboard and one mouse. Multi-user interaction at the desktop is slowly emerging: Synergy lets users share a keyboard and mouse across disparate systems; inexpensive input devices such as the PSMove and Kinect may replace or augment the mouse; and virtual/augmented-reality devices may replace the traditional monitor. Systems like MPX (Multi-Pointer X) allow multiple keyboards and mice to be used within X11 desktops, but most applications and toolkits are not yet prepared to work well with multiple simultaneous users.
There are multiple focus areas for separate GSoC projects. Below are some example projects that would fall under this topic:
- Improve toolkit support for multiple simultaneous pointing devices
- Improve support for multiple simultaneous keyboard devices (e.g. deconflicting concurrent typing; extending the keyboard focus model).
- Improve support for non-traditional input devices (e.g. PSMove, Kinect).
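To illustrate the "extend the keyboard focus model" idea above: instead of one global keyboard focus, each physical keyboard could carry its own focus, so two users can type into different windows at once. The sketch below is a minimal, hypothetical model of such per-device routing; the names (`FocusRouter`, events as tuples) are illustrative and not part of MPX or any toolkit API.

```python
class FocusRouter:
    """Route key events by the originating device's id, so each keyboard
    can have its own focused window instead of sharing one global focus."""

    def __init__(self, default_window):
        self.default = default_window   # fallback for unassigned keyboards
        self.focus = {}                 # device_id -> window name

    def set_focus(self, device_id, window):
        self.focus[device_id] = window

    def route(self, device_id, keysym):
        """Deliver a key event to the window focused by that keyboard."""
        window = self.focus.get(device_id, self.default)
        return (window, keysym)
```

For example, with `set_focus(7, "editor")`, key events from device 7 go to the editor while a second keyboard without an explicit focus still falls back to the default window.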
Confirmed Mentor: Klee Dienes
How to contact the mentor: email@example.com
Deliverables of the project: Debian package(s) that perform some or all of the above functions.
For example, we have developed a package based on the PSMoveAPI that lets a PSMove controller create a pointer anywhere on any screen in a multi-system computing environment, simply by pointing directly at the physical screen. Example projects might be to:
- Improve the configuration and integration of the PSMoveAPI packaging support.
- Improve the integration of the PSMove input stream into window managers and compositors (such as KWin, Mutter, or Weston).
- Create an API that lets "detail" data such as position and orientation augment mouse events as seen by an application or web browser (properly managing coordinate transformations as the window moves about the physical space).
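The coordinate transformation behind "pointing directly at the physical screen" can be sketched as a ray-plane intersection: cast a ray from the controller along its orientation, find where it hits the screen plane, then convert that hit point into pixel coordinates. The screen geometry (origin and axes) would come from a calibration step; all names below are illustrative assumptions, not PSMoveAPI calls.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_hit(ray_origin, ray_dir, plane_origin, plane_normal):
    """Point where the controller's pointing ray meets the screen plane,
    or None if it cannot hit the screen."""
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the screen
    offset = [p - o for p, o in zip(plane_origin, ray_origin)]
    t = dot(offset, plane_normal) / denom
    if t < 0:
        return None  # screen is behind the controller
    return [o + t * d for o, d in zip(ray_origin, ray_dir)]

def to_pixels(hit, plane_origin, u_axis, v_axis, width_px, height_px):
    """Map a 3-D point on the screen plane to pixel coordinates.
    u_axis/v_axis span the screen's physical width and height (metres)."""
    rel = [h - o for h, o in zip(hit, plane_origin)]
    u = dot(rel, u_axis) / dot(u_axis, u_axis)   # 0..1 across the width
    v = dot(rel, v_axis) / dot(v_axis, v_axis)   # 0..1 down the height
    return u * width_px, v * height_px
```

As a sanity check under these assumptions, a controller one metre in front of the centre of a 0.6 m x 0.34 m, 1920x1080 screen, pointing straight ahead, lands on pixel (960, 540).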
Desirable skills: Coding, packaging, and general engineering ability
What the student will learn: A deep understanding of USB human interface devices, input models and toolkits, and advanced gestural and spatial input devices.
Related projects: Various portions of this project have already been implemented and will be made available under the GPL.