Computer Vision is an excellent complement to existing methods in Human Computer Interaction. Computer vision methods allow us to analyze and recognize human movements and behavior through digital images and video. Most computers still receive input from legacy low-bandwidth devices such as keyboards and mice, an interface that forces the user to adapt to the computer in an unnatural way. Stereo vision, voice recognition, face recognition, emotion recognition, and motion tracking are just a few of the techniques used to track how people interact with their surroundings in a natural way. These techniques supply us with promising human-computer interaction methods by analyzing and recognizing human movements. Detecting and tracking human body parts is one of the main steps toward a robust and natural interaction model, and there are already some practical HCI applications built on it.
Perceptual interfaces have recently emerged as the motivation for an increasingly large amount of research within the computer vision community. The focus of this effort is to provide highly interactive, multimodal interfaces that enable rich, natural, and efficient interaction with computers. A multimodal interface is a system that combines two or more input modalities in a coordinated manner; a modality refers to a particular sense. Perceptual interfaces are inherently multimodal: humans interact with the world by sending and receiving information, primarily through the five major senses of sight, hearing, touch, taste, and smell. The topic of perceptual interfaces is very broad, and in recent years several workshops and conferences have begun to focus on how computer vision, together with advances in other areas such as speech and sound recognition, can increase research opportunities.
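To make this concrete, here is a minimal sketch (not from the original post) of the kind of vision input a perceptual interface can build on: detecting a face in a webcam stream with OpenCV. It assumes the opencv-python package is installed and a webcam is available at index 0; it only illustrates the detection step, not a full tracker.

# Minimal face-detection sketch with OpenCV.
import cv2

# Haar cascade for frontal faces, shipped with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # assumed webcam index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

A perceptual interface would feed detections like these into higher-level recognition (gaze, gesture, emotion) rather than drawing rectangles, but the input pipeline is the same.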
Some Computer Vision Links:
Human-computer interaction through computer vision
Extending human-computer interaction by using computer vision and colour recognition
Emerging Topics in Computer Vision
Sunday, August 9, 2009
Computer Vision
Labels: Computer Vision, HCI, Perceptional Interface
3D Tangible User Interfaces
A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. TUIs offer a natural means of interaction with computers, have already been shown to simplify existing computerized applications, and offer solutions for tasks that were considered to be “out of the scope” of human-computer interaction. People have natural tangible abilities that allow them to manipulate physical objects with very little cognitive effort. Current HCIs (human-computer interfaces) use very little of these abilities and provide little opportunity to manipulate physical objects. The Window-Icon-Menu-Pointer (WIMP) interaction metaphor is the predominant standard for current HCIs, and it blocks our innate natural abilities. Building HCIs as 3D virtual worlds can dramatically change the way we perform CAD or construction-oriented tasks and open the door to applications that are not currently supported by computers. Manipulating objects in a 3D space such as a virtual world provides interaction through a transparent interface that is in line with our physical qualities. TUIs are also a great tool for assisting children in learning because of their ability to take advantage of children's natural ability to manipulate objects. Children have always played with physical objects to learn a variety of skills, so a tangible interface appears as a “natural” form to them.
Some Tangible User Interfaces Links:
3D Computer Interaction Using Physical Objects: Exploration of Tangible User Interfaces
Tangible User Interface for Children An Overview
A Tangible User Interface as Interaction and Presentation Device to a Social Learning Software
Labels: HCI, Tangible user interface
Robotics (Human Robot Interaction)
Human-robot interaction, often referred to as HRI by researchers, is the study of interactions between humans and robots. Traditional graphical user interfaces and input devices (keyboards and mice) are commonly used in the field of HRI to control robots. These traditional user interfaces are currently the standard, but they do not offer an ideal mapping to many robotic tasks, and they do not take advantage of our innate skills of physical object manipulation and spatial perception. Tangible user interfaces (TUIs) offer a physical metaphor that could be used to manage the interaction between robots and physical objects. Past research has focused on improving robot development, while the operation of robots in different spatial situations and environments has often been overlooked. A 2D user interface (UI) limits a robot operator’s spatial perception when interacting with three-dimensional objects. Low-level robot control tasks can benefit greatly from TUIs. TUIs couple digital information and function with physical objects, allowing a virtual entity in the digital realm to be manipulated through a physical medium. Because TUIs convey spatial orientation, the position of a physical object in relation to its surroundings can expose additional information and provide interaction insight and task awareness to the robot operator.
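As a purely illustrative sketch (the tracked token and the send_target function below are hypothetical placeholders, not any specific robot API), a TUI-driven controller might simply map the tracked pose of a physical token on the tabletop into a target pose for the robot's end effector:

# Hypothetical TUI-to-robot mapping sketch; not a real robot interface.
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # metres in the tabletop frame
    y: float
    theta: float  # orientation in radians

def table_to_robot(p: Pose2D, scale: float = 0.5) -> Pose2D:
    # Scale the tabletop workspace into the (smaller) robot workspace.
    return Pose2D(p.x * scale, p.y * scale, p.theta)

def send_target(p: Pose2D) -> None:
    # Stand-in for a real robot command channel (e.g. a ROS publisher).
    print(f"move end effector to x={p.x:.3f} y={p.y:.3f} theta={p.theta:.3f}")

# Example: the TUI tracker reports that the physical token moved here.
token_pose = Pose2D(x=0.40, y=0.25, theta=1.57)
send_target(table_to_robot(token_pose))

The point of the sketch is the coupling: moving the token moves the robot, so the operator's spatial perception of the token directly carries over to the task.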
Some Human Robot Interaction Links:
Utilizing Physical Objects and Metaphors for Human Robot Interaction
Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another
Labels: HCI, HRI, Robotics, Tangible user interface
Surface Computing and Multi-Touch Interfaces
There is a strong focus on multi-touch interaction in HCI. Experimental multi-touch interfaces have been around since 1982, when the first multi-touch display was built on finger-pressure technology. A multi-touch table uses a fairly simple method for tracking finger and object touches and movement on a surface screen: it takes advantage of a camera's ability to see a greater spectrum of light than the human eye, extending into the infrared. There are four major multi-touch surface interface solutions: Microsoft Surface, Perceptive Pixel, Touchlib, and tbeta. These solutions implement designs that can host complex, high-performance applications and provide control of multi-screen projection as well as audio and visual events. Microsoft Surface is based on a standard IKEA table with a projector and light diffuser in the center of the table. One of the most famous solutions is Perceptive Pixel, designed by Jeff Han, who is arguably one of the major pioneers of multi-touch surface computing because of his ability to promote it. His solution uses the frustrated total internal reflection (FTIR) method. The two major open-source alternatives are Touchlib and tbeta, which track finger touches and translate the input into standard HID (Human Interface Device) events.
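The tracking step described above can be sketched in a few lines of OpenCV (a rough illustration, assuming OpenCV 4.x and an infrared camera exposed as an ordinary capture device; the threshold and blob-area values are guesses, not the parameters any real table uses):

# FTIR-style touch tracking sketch: subtract the background, threshold the
# bright finger blobs, and report their centroids as touch points.
import cv2

cap = cv2.VideoCapture(0)                    # IR camera watching the diffuser
ok, background = cap.read()
background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, background)     # remove the static scene
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) > 50:          # ignore sensor noise
            x, y, w, h = cv2.boundingRect(c)
            touches.append((x + w // 2, y + h // 2))
    # A real system (Touchlib, tbeta) would translate these points into
    # HID/TUIO events; here we just print and display them.
    print(touches)
    cv2.imshow("touch mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()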
Some Surface Computing and Multi-Touch Interface Links:
Labels: HCI, Multi-Touch, Surface Computing
Human Factors and Artificial Intelligence
Increasing machine intelligence leads to a shift from a merely interactive to a much more complex cooperative human-machine relation, which requires a multidisciplinary development approach. In current HCI research, personalization and adaptive interfaces are proposed to enable easy access to proliferating functions and services. The increase in machine intelligence leads to a shift from HCI to human-machine cooperation (HMC). Artificial intelligence can be used to build future machines that are designed to cooperate with humans, or that learn to do so. Humans and machines will need to understand and comprehend each other’s reasoning and behavior. The possibilities for HMC are extensive, but knowledge is lacking on how to integrate human factors (HF) and artificial intelligence (AI) into a successful machine design. Cognitive engineering is a methodology that provides insight into the cognitive factors of human performance and how to improve them.
Some Human Factors and Artificial Intelligence Links:
Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction
Integrating Human Factors and Artificial Intelligence in the Development of Human-Machine Cooperation
Labels: Artificial Intelligence, Cognitive Engineering, HCI, HMC, Human Factors