Computer vision is an excellent complement to existing methods in human-computer interaction (HCI). Computer vision methods allow us to analyze and recognize human movements and behavior through digital images and video. Most computers still receive input from legacy, low-bandwidth input devices such as keyboards and mice, an interface that forces the user to adapt to the computer in an unnatural way. Stereo vision, voice recognition, face recognition, emotion recognition, and motion tracking are just a few techniques used to track how people interact with their surroundings in a natural way. These techniques supply promising human-computer interaction methods by analyzing and recognizing human movements. Detecting and tracking human body parts is one of the main steps toward a robust and natural interaction model, and some practical HCI applications already use this model.

Perceptual interfaces have recently emerged to motivate an increasingly large amount of research within the computer vision community. The focus of this effort is to provide highly interactive, multimodal interfaces that enable rich, natural, and efficient interaction with computers. A multimodal interface is a system that combines two or more input modalities in a coordinated manner; a modality refers to a particular sense. Humans interact with the world by sending and receiving information, primarily through the five major senses of sight, hearing, touch, taste, and smell. Perceptual interfaces are therefore inherently multimodal. The topic of perceptual interfaces is very broad, and in recent years several workshops and conferences have begun to focus on how computer vision, together with advances in other areas such as speech and sound recognition, can increase research opportunities.
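As a concrete illustration of one of the techniques above, motion tracking can in its simplest form be done by frame differencing: comparing consecutive video frames and flagging the pixels that changed. The sketch below is a minimal, hypothetical example (the function names and thresholds are ours, not from any particular library or the text above), using NumPy arrays as stand-ins for grayscale video frames.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Return a boolean mask of pixels that changed between two grayscale frames.

    Frames are 2-D uint8 arrays; `threshold` (an arbitrary choice here)
    sets the minimum intensity change counted as motion.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def motion_bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the moving region, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

# Synthetic example: a bright 10x10 "object" appears between two frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
curr[20:30, 30:40] = 200

mask = detect_motion(prev, curr)
print(motion_bounding_box(mask))  # (20, 30, 29, 39)
```

Real perceptual-interface systems build on far more robust methods (background modeling, optical flow, learned detectors), but the idea of localizing a moving body part from image differences is the same.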
Some Computer Vision Links:
Human-computer interaction through computer vision
Extending human-computer interaction by using computer vision and colour recognition
Emerging Topics in Computer Vision