Sunday, October 21, 2007

Visual input control

Webcams are increasingly ubiquitous these days. Many newer laptops have one integrated into the top of the screen. You could employ all those webcams not just for taking pictures or video chat, but also as another input device for controlling the computer.

The user could make visual gestures. E.g. they could point at a window to switch to it.

Such visual input might be best combined with voice control. E.g. pointing to a window and saying ‘close it’.

You could analyse the video input in more sophisticated ways, for example to determine where on the screen the user is looking. So instead of pointing at a window and saying “close it”, the user could just look at it and say “close it” (or something like that). Or they could look at a place in a document and say “move cursor there”.

(I have no idea whether it’s feasible to accurately determine where the user is looking. It mightn’t be possible. For instance, there’s a lot of processing that constructs what you see from what your eyes receive, and the specific thing you're fixating on may not be exactly in line with where your eyes are pointed.)

Here are a few things in this vein:


EyeTwig 'headmouse': "when you move your head left and right, up and down, the Windows cursor, typically controlled by your mouse, moves about the screen".

Camera Mouse: software to "track head or other body movements and to convert those movements into cursor movements on a computer screen" (there's a rough code sketch of this sort of head tracking after the list).

Some variations on the theme of getting a camera to pick up a laser pointer shone on the wall, in order to control things like your music player: here and here (a simple detection sketch along these lines also appears below).

Video of using head tracking for some control in an FPS game.
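To give a feel for how the head-tracking idea can work, here's a rough sketch in Python. It isn't how EyeTwig or Camera Mouse actually do it; it just detects a face with OpenCV's stock Haar cascade and maps the face's position in the camera frame to a cursor position via pyautogui. The library choices, smoothing factor and mapping are my own assumptions, purely for illustration.

```python
# Rough sketch: move the mouse cursor by moving your head in front of a webcam.
# Requires: opencv-python, pyautogui. Not how EyeTwig/Camera Mouse work internally,
# just the general idea: detect the face, map its position to screen coordinates.
import cv2
import pyautogui

pyautogui.FAILSAFE = False  # otherwise moving the cursor into a corner raises an exception

screen_w, screen_h = pyautogui.size()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # default webcam
smooth_x, smooth_y = screen_w / 2, screen_h / 2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)       # mirror, so moving your head left moves the cursor left
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

    if len(faces) > 0:
        # Use the largest detected face and take its centre point.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + w / 2, y + h / 2
        frame_h, frame_w = gray.shape
        # Map the face centre from camera coordinates to screen coordinates.
        target_x = cx / frame_w * screen_w
        target_y = cy / frame_h * screen_h
        # Exponential smoothing so the cursor doesn't jitter with every detection.
        smooth_x = 0.8 * smooth_x + 0.2 * target_x
        smooth_y = 0.8 * smooth_y + 0.2 * target_y
        pyautogui.moveTo(smooth_x, smooth_y)

    cv2.imshow("head tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In practice you'd probably want relative control (head movements nudge the cursor) rather than this absolute mapping, plus better filtering, which is presumably the kind of thing the tools above handle properly.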
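And the laser-pointer idea can be boiled down to: find the brightest spot in each frame and, if it's bright enough, treat it as the pointer. The sketch below does just that with OpenCV; the brightness threshold and the "left half of the frame = previous track, right half = next track" mapping are made up for illustration, not taken from the projects linked above.

```python
# Rough sketch: spot a laser pointer on a wall with a webcam and turn its position
# into a simple command (e.g. for a music player). Requires opencv-python.
# The threshold and the left/right command mapping are illustrative guesses.
import cv2

BRIGHTNESS_THRESHOLD = 240           # a laser dot tends to saturate the camera sensor

def pointer_command(frame):
    """Return 'previous', 'next' or None depending on where the laser dot is."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (9, 9), 0)     # smooth away single hot pixels
    _, max_val, _, max_loc = cv2.minMaxLoc(gray)
    if max_val < BRIGHTNESS_THRESHOLD:
        return None                              # no laser dot in view
    x, _ = max_loc
    return "previous" if x < frame.shape[1] // 2 else "next"

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        command = pointer_command(frame)
        if command:
            # Here you would tell your music player to skip tracks; just print for now.
            print("laser pointer says:", command)
        cv2.imshow("laser pointer", frame)
        if cv2.waitKey(50) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Hooking the printed command up to an actual music player (via whatever command-line or remote-control interface it offers) is left as an exercise.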
