Tuesday, August 11, 2009

Collection of Posts by David Kabala

Input

Input for VR systems has primarily been concerned with tracking the position and orientation of users. This information is required for immersive stereoscopic display, where the correct viewpoint must be calculated for each eye. Six-degree-of-freedom (6-DOF) data, 3D position and orientation, are necessary for this calculation. Three main types of devices have been produced for 6-DOF tracking: magnetic, sonic, and image based. Recently there has been research into using low-cost cameras and visual fiducials in a fully enclosed cave for tracking (Reference VF paper). This technique has been shown to be accurate, with moderate latency. Using visual fiducials is currently limited by the requirement of a fully enclosed cave environment and by the additional processing needed on the image data from the camera.
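
As a rough illustration of why full 6-DOF data is needed for stereo viewpoints, here is a minimal sketch (my own, not taken from any particular VR toolkit; the 0.064 m interpupillary distance is an assumed typical value) that offsets a tracked head pose into left- and right-eye positions:

    import numpy as np

    def eye_positions(head_pos, head_rot, ipd=0.064):
        """Given a tracked 6-DOF head pose (position in meters and a 3x3
        rotation matrix), return the world-space positions of the left and
        right eyes, offset by half the interpupillary distance along the
        head's local x (right) axis."""
        right_axis = head_rot @ np.array([1.0, 0.0, 0.0])
        offset = right_axis * (ipd / 2.0)
        return head_pos - offset, head_pos + offset

    # Example: head at (0, 1.7, 0) meters, looking straight ahead (identity rotation).
    left, right = eye_positions(np.array([0.0, 1.7, 0.0]), np.eye(3))
    print(left, right)

The view and projection matrix for each eye would then be built from these positions, which is why both the position and the orientation of the head must be tracked.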

Other than tracking devices, VR environments use common computer interaction devices, the keyboard and mouse being the most prevalent. Others include touch-screen devices, gamepads, 3-degree-of-freedom trackers (Wiimote, PS3 controller), and microphones (audio processing). There are additional interfaces as well, including haptics, smell, taste, and even neural interfaces.

---

HCI of RTSs and console vs. PC interfaces (Mouse)

Real-time strategy (RTS) games have historically been confined to PCs. The main reason appears to be that a mouse and keyboard are superior for control to what console video game platforms provide. Notable attempts to bring RTSs to consoles are the Nintendo 64 (N64) version of Starcraft and the more recent Halo Wars. Halo Wars has been more successful, as it was designed from the ground up to use a control pad, whereas the N64 Starcraft was simply a port of the PC game.

So what is it about a mouse and keyboard that is superior to gamepads for RTSs? I argue that there are two principal ways in which the mouse and keyboard each provide an advantage. I will break the discussion up into two blog posts. First, the mouse:
1. RTSs require very accurate and precise selection of items on screen.
A mouse is better suited for this than the joystick(s) on a gamepad. RTSs require quick selection and deselection of friendly and enemy units; they also require accurately picking out small units within larger groups and accurately placing commands for selected units. A joystick axis is usually limited to 8 bits of precision (256 positions), while modern mice offer a much larger range of placement and greater control over the speed of movement; the quick calculation below illustrates how coarse an 8-bit axis is. In the future other interfaces may prove better at this than mice; touch devices, for example, could be very good at it because they directly connect the act of selection with the display itself.
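
To put a rough number on that precision gap, here is a toy calculation (my own illustration, assuming a 1920-pixel-wide display and a direct position mapping) of how coarse a signed 8-bit axis is:

    SCREEN_W = 1920  # assumed display width in pixels

    def joystick_to_screen_x(axis_value):
        """Map a signed 8-bit joystick axis reading (-128..127) directly
        onto a horizontal screen coordinate."""
        return int((axis_value + 128) / 255.0 * (SCREEN_W - 1))

    step = joystick_to_screen_x(1) - joystick_to_screen_x(0)
    print(step)  # about 7 pixels per axis tick: too coarse to pick out one small unit

In practice a console RTS treats the stick as a rate control rather than a direct position map, but the limited resolution still forces a trade-off between cursor speed and fine placement.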

---

HCI of RTSs and console vs. PC interfaces (Keyboard)

Now for the Keyboard.
2. RTSs require a wide range of commands to be quickly executable. Traditional US keyboards have at least 101 keys, in contrast to gamepads with ~16 buttons at most. When using a mouse and keyboard at the same time, as is done for RTSs, one hand covers the keyboard while the other covers the mouse. This means the whole keyboard cannot be covered at once; only ~30 keys are within reach at any one time, plus the 2-3 buttons on the mouse. However, the hand on the keyboard can be moved to cover different portions of it depending on the situation, so all of the keys remain readily available. Given the number of possible unit commands, magic commands, building commands, selection commands, and camera placement commands, the ~16 buttons of most gamepads are inadequate.

---

The software available for developing VR applications can be divided into two categories: code development libraries and end-user applications. The development libraries (CAVELib, VR Juggler, OpenSceneGraph, OpenSG, and vrTools) are targeted at application programmers. The end-user applications (Quest3D, Unigine, Alice, AgentSheets, and Vizard) are targeted at application designers.

End-user applications:

  • Alice
    - 3D programming environment
    - Focuses on teaching programming
  • AgentSheets
    - Create games and computational science applications with music, speech, and visualizations
    - License restrictions
    - No VR support
  • Unigine
    - Primarily a development library
    - The Unigine viewer allows modification of the scene
    - Licensing restrictions
  • Quest3D
    - Development tool for creating real-time 3D applications
    - Licensing restrictions
    - Expensive
    - Windows-only

---

Software Licenses

When considering the accessibility of software and libraries, it is important to understand how that accessibility is affected by licensing. Bruce Perens describes four main categories of software licenses: proprietary, “gift”, “sharing with rules”, and “in-between” licenses [Perens]. Proprietary software is licensed such that it may not be modified or used in another package; doing so would be copyright infringement. Open-source “gift” licenses like the Apache license [Apache] allow modification and use of the software in any derivative work, including proprietary software. Open-source “sharing with rules” licenses allow modification and use of the software as long as the derivative work is also shared; the General Public License version 3 (GPLv3) [GPL3] is an example. Open-source “in-between” licenses like the Lesser General Public License version 3 (LGPLv3) [LGPL] allow modification and use of the software in derivative work, including proprietary software, with the condition that the original software's code be made available with the derivative work.

There may be software that provides the functionality needed in a new application, but because of licensing, that software may not be legally usable. This makes proprietary software and systems inaccessible to many in practice. Under open-source licenses, by contrast, there is legally solid ground for users to use, modify, and share derivative work.

---

Expertise

The complexity of the hardware and software necessary for a VR system still requires specialized knowledge. The development of an application’s software affects accessibility more than the hardware does. VR hardware arrangements range from very complex to support (a 6-sided cave) to relatively simple to support (a single computer with an attached HMD), yet developing a VR application with VR software is complex across that entire range. Using the available software requires software engineering, graphics, and VR domain knowledge. Because of this, most VR projects carry an additional cost for personnel who have this specialized knowledge.

Sunday, August 9, 2009

Computer Vision

Computer vision is an excellent complement to existing methods in human-computer interaction. Computer vision methods allow us to analyze and recognize human movements and behavior through digital images and video. Most computers still receive input from legacy low-bandwidth input devices such as keyboards and mice. This type of interface forces a user to adapt to the computer in an unnatural way. Stereo vision, voice recognition, face recognition, emotion recognition, and motion tracking are just a few of the techniques used to track how people interact with their surroundings in a natural way. These techniques supply us with promising human-computer interaction methods by analyzing and recognizing human movements. Detecting and tracking human body parts is one of the main steps toward a robust and natural interaction model, and there are already some practical applications using this approach in HCI.

Perceptual interfaces have recently emerged to motivate an increasingly large amount of research within the computer vision community. The focus of this effort is to provide highly interactive, multimodal interfaces that enable rich, natural, and efficient interaction with computers. A multimodal interface is a system that combines two or more input modalities in a coordinated manner; perceptual interfaces are inherently multimodal. Humans interact with the world by sending and receiving information, primarily through the five major senses of sight, hearing, touch, taste, and smell, and a modality refers to a particular sense. The topic of perceptual interfaces is very broad, and in recent years several workshops and conferences have begun to focus on how computer vision, along with advances in other areas such as speech and sound recognition, can increase research opportunities.
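
To make the vision-as-input idea concrete, here is a minimal sketch (my own, assuming the opencv-python package, OpenCV 4.x, and a default webcam) that detects faces in live video; a perceptual interface would feed the detected positions into the application as just another input modality:

    import cv2

    # Detect faces in webcam frames with OpenCV's bundled Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(0)  # default webcam

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Each rectangle is a candidate "user position" an interface could react to.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    capture.release()
    cv2.destroyAllWindows()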

Some Computer Vision Links:

Human-computer interaction through computer vision

Extending human-computer interaction by using computer vision and colour recognition

Emerging Topics in Computer Vision

3D Tangible User Interfaces

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. TUIs offer a natural means of interaction with computers and have already been shown to simplify existing computerized applications and offer solutions for tasks that were considered “out of the scope” of human-computer interaction. People have natural tangible abilities that allow them to manipulate physical objects with very little cognitive effort. Current human-computer interfaces use very little of these abilities and do not provide much opportunity to manipulate physical objects. The Window-Icon-Menu-Pointer (WIMP) interaction metaphor is the predominant standard for current HCIs, and it blocks these innate abilities. Creating HCIs as 3D virtual worlds can dramatically change the way we perform CAD or construction-oriented tasks and open the door for other applications that are not currently supported by computers. Manipulating objects in a 3D space such as a virtual world can provide interactions through a transparent interface that is in line with our physical qualities. TUIs are also a great tool for assisting children in learning, because they take advantage of children's natural ability to manipulate objects. Children have always played with physical objects to learn a variety of skills, so a tangible interface would appear as a “natural” form to them.

Some Tangible User Interfaces Links:

3D Computer Interaction Using Physical Objects: Exploration of Tangible User Interfaces

Tangible User Interface for Children An Overview

A Tangible User Interface as Interaction and Presentation Device to a Social Learning Software

Robotics (Human Robot Interaction)

Human-robot interaction, often referred to as HRI by researchers, is the study of interactions between humans and robots. Traditional graphical user interfaces and input devices (keyboards and mice) are commonly used in HRI to control robots. These traditional user interfaces are currently the standard, but they do not offer an ideal mapping to many robotic tasks, and they do not take advantage of our innate skills in physical object manipulation and spatial perception. Tangible user interfaces (TUIs) offer a physical metaphor that could be used to manage the interaction between robots and physical objects. Past research has focused on improving robot development, while the operation of robots in different spatial situations and environments has often been overlooked. A 2D user interface limits a robot operator’s spatial perception when interacting with three-dimensional objects. Low-level robot control tasks can really benefit from TUIs. TUIs couple digital information and function with physical objects, allowing a virtual entity in the digital realm to be manipulated through a physical medium. Because TUIs convey the spatial orientation and position of a physical object in relation to its surroundings, they can expose additional information and provide interaction insight and task awareness to the robot operator.

Some Human Robot Interaction Links:

Utilizing Physical Objects and Metaphors for Human Robot Interaction

Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another

Surface Computing and Multi-Touch Interfaces

There is a strong focus on multi-touch interaction in HCI. Experimental multi-touch interfaces have been around since 1982, when the first multi-touch display was based on finger-pressure technology. A multi-touch table uses a fairly simple method for tracking finger and object touches and movement on a surface: it takes advantage of a camera's ability to see a greater spectrum of light than the human eye, extending into the infrared. There are four major multi-touch surface interface solutions: Microsoft Surface, Perceptive Pixel, TouchLib, and TBeta. These solutions implement designs that can host complex, high-performance applications, providing control of multi-screen projection and of audio and visual events. Microsoft Surface uses a design that is based on a standard IKEA table, with a projector light diffuser in the center of the table. One of the most famous solutions is Perceptive Pixel, designed by Jeff Han, who is arguably one of the major pioneers of multi-touch surface computing because of his ability to promote it. His solution uses the frustrated total internal reflection (FTIR) method. The two major open-source alternatives are TouchLib and TBeta, which track finger touches and translate the input into standard HID (Human Interface Device) events.
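
As a rough sketch of that camera-based tracking idea (my own illustration, assuming OpenCV 4.x, an IR camera exposed as capture device 0, and a brightness threshold of 200 that would need tuning for a real table): fingertips touching an FTIR surface show up as bright blobs in the infrared image, so thresholding plus contour detection yields touch points.

    import cv2

    capture = cv2.VideoCapture(0)  # assumed IR camera under the table

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        # Bright spots in the IR image are fingertips pressing the surface.
        _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) > 30:              # ignore noise specks
                x, y, w, h = cv2.boundingRect(c)
                touches.append((x + w // 2, y + h // 2))  # blob center = touch point
        # A tracker layer would translate `touches` into HID-style touch events.
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    capture.release()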

Some Surface Computing and Multi-Touch Interface Links:


Human Factors and Artificial Intelligence

Increasing machine intelligence leads to a shift from a merely interactive to a much more complex cooperative human-machine relationship, one that requires a multidisciplinary development approach. In current HCI research, personalization and adaptive interfaces are proposed to enable easy access to proliferating functions and services. As machines grow more intelligent, the field shifts from HCI to human-machine cooperation (HMC). Artificial intelligence can be used to build future machines that are designed, or that learn, to cooperate with humans. Humans and machines will need to understand and comprehend each other’s reasoning and behavior. The possibilities for HMC are extensive, but knowledge is lacking on how to integrate human factors (HF) and artificial intelligence (AI) into a successful machine design. Cognitive engineering is a methodology that provides insight into the cognitive factors of human performance.

Some Human Factors and Artificial Intelligence Links:

Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction

Integrating Human Factors and Artificial Intelligence in the Development of Human-Machine Cooperation

Service Blueprinting

While doing research for SD2, I ran across an article about how service blueprinting is a key to service innovation. The article discusses how service blueprinting offers a surprisingly flexible means of helping companies understand how well, or how poorly, they are serving their customers. Service blueprinting focuses primarily on the customer experience, and the idea behind it is simple: companies put themselves in their customers’ shoes to find out what’s working, what’s not, and what needs to change. The article goes on to discuss what sets this technique apart from other process techniques; the customer is the center and focus for service improvement and innovation. Companies that create blueprints first quickly identify the specific service process to be explored and can identify the customers that the service targets. Blueprinting is flexible because most businesses are services at some level. It has been used by companies across all industries, and the researchers expect the years to come to bring more widespread adoption of the technique. As that occurs, users will offer their own innovations to the process, making it all the more effective in the long run.

Futuristic Interfaces

SIGGRAPH 2009 just wrapped up in New Orleans this week. At the event, some interesting futuristic interfaces were displayed. You can check out other interfaces at the source link below.

  • Touchable Holography
  • Augmented Reality for Ordinary Toys
  • Hyper-realistic Virtual Reality
  • 3D Teleconferencing
  • Scratchable Input
These futuristic interfaces have been imagined and developed; some are very practical and some are not. I feel futuristic inputs are cool and fine, but something seems to be missing from their project descriptions. Having read Dourish’s “Where the Action Is”, I personally think designers should read this book and understand what embodied action is. It’s about shared awareness between users and the world that surrounds us. I think I mentioned in another post how engineers or developers create cool gadgets simply because they can. That should not be encouraged, or at least others should be asking questions. Designers must consider social awareness, intimacy, and emotions in the physical world, and any system should be designed to be human-centered. What do you think of designers considering embodied interaction? Is it practical, or is that just an idealistic view?

By the way, the guy who scratches surfaces for input will not be invited to my place. I don’t want him scratching my furniture. ;)

Source:
http://www.technologyreview.com/blog/editors/23940/

Why does your vehicle smell like “forest”?

This past July, Nissan Motor Co., Ltd. announced the development of a new air conditioning system called “Forest AC”. This new AC system creates an optimum cabin environment by controlling temperature, ventilation, aroma, and humidity. You say, aroma? Yes, I know. Nissan says the new system is based on research into the effects of aroma on human mental activity. You can read the rest yourself; I was curious about what aroma can really do. There are already systems out there that automatically control the interior temperature of your car and add pollen filters to keep you comfortable while you drive, and all of these fall under ubiquitous computing. This new system goes even further by blocking and removing odors using “ions”. Making you feel comfortable as you drive is a good idea, but you have to wonder about the process of designing a system like that.

Nissan says…

“Nissan develops technology to set new standards in terms of comfort, quality and other factors for every aspect of the automobile, putting human characteristics first, based on the Life on Board concept, which is designed to deliver more joy of driving.”

Where am I going with this? Well, I am wondering if engineers in general develop a system like this simply because they can. I wonder if they ever consider the users and the context of driving in general. You can install as many sensors in a vehicle as you like, but do they try to understand the context of driving? Making the driver “not think” may not apply to this context. Did they just forget about the “experience of driving”? Humidity and airflow control may be good ideas, but aroma has some issues. I think scent is another sense drivers use to detect that something is wrong with the car. Is it a good idea to take away a user's natural senses while driving? I thought drivers should be using all their senses to connect with the vehicle and the road to create the “driving experience”. I wonder what Dourish would say about the relationship between a driver, the car, and the road. Would he think systems like these ruin our interaction with the world that surrounds us?

Source:

http://www.engadget.com/2009/07/27/nissans-forest-ac-blows-wild-scents-through-your-ride-to-keep-y/

http://www.nissan-global.com/EN/NEWS/2009/_STORY/090723-01-e.html

Visual Search

Recently I started reading Ambient Findability by Peter Morville. So far it is an excellent read on information architecture and wayfinding (http://www.amazon.com/exec/obidos/tg/detail/-/0596007655/findability-20/). As I read the book, it strikes me how utterly dependent we have become on text, and on knowing the right words or combination of words, to find what we are searching for on the web. We have all experienced this: we search for something, find very little, and only later realize we did not have the best keywords to yield the results we wanted. If we focus on creating the best user experience possible, it will most likely include the ability for users to search with more than just the perfect keywords or a winding path of linked ideas. What about searching through key images or visual representations of our desired topic? I conducted a search (using text, of course) and found the following link with terrific references to the visual search engines available today.

http://www.masternewmedia.org/top-visual-search-engines-the-most-interesting-ways-to-visually-explore-search-engine-results/

After quickly finding this page, I sit with the question: why are these visual search engines not among the mainstream search options? Google, the godfather of search engines, has incorporated visual aids such as the Wonder Wheel and Timeline to assist searchers with visual and semantic information, but it has not yet created anything similar to the visual search engines on this page.

Happy searching!

Advertisement vs. Design

On July 27, 2009, a New York Times article talked about how Walt Disney cooked up the Disney Media and Advertisement Lab to identify how effectively ads are displayed on its sites.

The chosen tools are more advanced than you would think. As we have discussed various usability tests in class, these media groups are using similar methods to track how ads are viewed and which ones are effective. Besides eye tracking, the tools include heart-rate monitors, skin temperature readings, and facial expression monitoring. They are trying to figure out how small the ads can be while still being noticed by users, or how users feel about certain ads, by looking at their facial expressions.

Again, we are faced with corporate-driven $trategy vs. the functionality of information dissemination. This disturbs me greatly, as they have to figure out a way to effectively make sure users see ads. As a usability designer, how would you feel about your design being compromised? I understand if it is for entertainment. What annoys me the most is that ads are getting aggressive and are no longer just a part of the page. In the old days, ads had their own space on the page; now they are in your face and you have to click to stop them. From a usability standpoint, they are not a functional part of the site and are not friendly to users; they distract you from doing what you want to do. If you are an independent designer, you may purposely add spaces for ads. At larger corporate levels, is this how it works? Is the designer informed of how ads will be displayed, so the design needs to change to accommodate them, or is the designer simply told to design a site while the marketing department figures out how to put ads over the design?

What happened to user-centered design? I haven’t come across any design books or sites that talk about how to incorporate ads as a part of the design. I see the Google ads model, where you can simply select what type of ads to show and place a widget on your page, but that’s more for independent designers. I wonder about larger corporations that accept ads. What do you think?

Emergence of Interaction Design (section 2.2)

Since the first few weeks of HCI 596, I have been challenged. I have stated before that the readings and teachings in this class have challenged a lot of what I have been taught in my world of engineering. So I have been searching for a meeting place between the two - HCI and engineering. How can I apply this to systems engineering or electrical engineering? If I can't, why not? I cannot say that I have found the answers, but I did find a few interesting readings, one of which I will attach to this blog post.

http://hci.stanford.edu/~winograd/acm97.html See section 2.2

The article talks about the emerging area of interaction design. I am still trying to wrap my head around service design, and now interaction design too. Simply put, the author describes it as a shift from seeing the machinery to seeing the people using it. This sounds a lot like service design; is it the same thing? Or is service design one sector of interaction design? Wikipedia describes a service as the non-material equivalent of a good, and interaction as a kind of action that occurs as two or more objects have an effect on one another. I guess from those definitions one can have service interaction design: seeing the people using the service instead of the service itself... make sense? So if I am an engineer designing a vehicle display, I am designing based on the user who will be viewing the display and not on the technology. Help me here? What are your thoughts?

Usability Study Design Tips

While working on project 1, I had a few questions about how to complete usability studies. I sent an email asking/commenting the following:

"In your usability studies, how do you get the subjects to take the interview, survey, etc as serious as you do? Most people who take surveys fill it out without much thought. One time I created a survey to determine if a particular tool was useful; it seemed as if the participants just went through the motions, especially if the questions only required them to select from a list (i.e. agree..strongly agree). Is there a certain method of questioning or interviewing that is more effective for getting honest answers or well thought out answers?"

The response to my email was to do the following:

1. Build rapport with the people you are interviewing. Make sure they feel like their opinions are valued and put them at ease so they feel they can be completely honest.
2. Wording of questions is critical. Make sure you ask really open ended questions without leading.

I also did some browsing around, and a document that someone posted on the web was helpful in providing more insight as well as confirming the email response. The author gave some do's and don'ts of usability study design. For example, give the participant the option to end the study if they feel uncomfortable. Make sure the user understands that there is no right or wrong answer. Also, have them sign a form explaining that the results are anonymous; this helps them feel they can be honest in their answers. The author also described what to do and what to watch for during the study itself, for example, being sure to note non-verbal communication. The advice on question types was to ask open-ended questions as well as task-oriented ones.

The author also gave pointers on when to perform usability studies, the best person to conduct the study, etc.

http://www.highedweb.org/2007/presentations/PST6.pdf

Necessary Excise

Cooper's chapter 11 talks about eliminating excise. Cooper claims that software is often "top heavy", "requiring extra work". By eliminating excise you decrease the amount of cognitive and physical effort needed to use the software or product. My question is: is there a need for excise in certain situations? When it comes to user interaction, aren't there times when you need that confirmation or alert message? There may be information that needs to be relayed to the user before completing a task. For example, I have always appreciated the "Are you sure you want to delete this?" alert message. It has saved me plenty of times, but I do understand that these types of things irritate some people. So where is the balance? Or do we not need a balance? I guess in the end Cooper does imply that excise isn't necessarily bad if the effort to complete the excise task is kept to a minimum. Then how do we assess the effort? When you are in a rush, any extra click is too much. Your thoughts?

What is Banner Blindness? Is it real?

Banner blindness? I admit that I am new to HCI. In fact, to me HCI = human factors in a system design, period. While browsing around on the web trying to get my bearings in this new subject, I found an article about banner blindness. It is a phenomenon where website visitors ignore banners, or anything that looks like an advertisement, on a website. I thought, well, this is silly, considering the fact that a lot of websites make their money from advertisements. I mean, who would pay for something that's being totally ignored? On the other hand, I thought, maybe that's why Craigslist is so "boring" in appearance: because they know about this. Here is the article link:

http://www.useit.com/alertbox/banner-blindness.html

Having read through the article, I agree, based on my personal experience. I never actually pay much attention to the "happenings" off to the side. The article also states that when a user does engage with an ad, it is usually one that looks like a Windows command. I AGREE. I remember the first time I saw one of those... I CLICKED. That was the last time I was fooled, and nowadays I don't think those are popular anymore.

Now I understand why advertisers are just plain old bombarding web browsers with advertisements. Have you ever been reading an article and had a McDonald's ad pop up right in front of it? Then you have to search for the mystery "close" button. Now I know why! The ads want to be heard, or seen in this case. This is another reason why I am beginning to accept that this phenomenon is real.

Creating Effective User Personas

I found creating personas to be quite challenging. So I looked around the web for other resources that might make it clearer or at least give a different perspective. I found one article by Lene Nielsen. The article gave 10 steps to use when creating usability personas. Please find the link below
http://www.masternewmedia.org/interface_and_navigation_design/usability/how-to-create-effective-personas-20071004.htm
I found this article helpful because it lists steps to guide you in developing your personas. I thought Cooper (Chapter 5, p. 77) did a great job with this also. The article gives a different perspective on the step-by-step process for developing effective personas. One part I found interesting, as opposed to Cooper, is the verification step, where you find data to support your persona descriptions and scenarios. This made sense: when creating personas, you use the data collected to get an idea of what your users are like, but is what you came up with correct? Does it make sense? Even though the article seems to focus more on development as opposed to the “user”, I think you can pull different techniques and ideas from each process (Cooper's and the article's).

Saturday, August 8, 2009

“Trust” as service, or all about Benjamin?

I have been gathering materials to post here for a month or so, but I just couldn’t find the time to do it. The projects were killers. :(

I came across this Web 2.0 site called Smartypig.com. Smartypig.com is a service site where users can set their own savings goals; it’s a virtual piggy bank. At the same time, you can share your savings goal with others (family members too) so that they can contribute to it if they wish. Since this is Web 2.0, it integrates with Facebook and other social network sites.

I am a little baffled by this service model and the user behavior. I may be old school, but I don’t get the idea of sharing this kind of personal detail. Saving is a behavior worth encouraging, especially in this tough economic time, and everyone wants to save some, but I am not sure about doing it so publicly.

Then I remembered seeing other money-lending sites a few years ago. Instead of getting a loan from a bank, you ask your peers. These sites are even more popular now; some college financial offices even list them as an alternative way to get loans.

"Matching Borrower with Lender, Social-Network Style"
"Peer to Peer Education Loans"

Other sites are…
GreenNote.com
Lendingclub.com

Obviously, lenders and borrowers are both getting something out of this service. The needs of the two parties are different, but the service brings them together to provide what each user wants. To make the service legitimate, the site has to build “trust” with its users. I wonder how this kind of service gets started… Peer-to-peer services are common, but the amounts of money these users deal in are on a completely different level. Will this become a common service in the future? Is this part of the cloud services trend?

Broken Service. Can HCI save everyone?

As we wrapped up the service design course, I thought I would ask others about this issue. Digital copyright has been, and will continue to be, a messy experience for both content owners and users. No technology so far has seemed to work well for both content providers and customers.

Many industries, particularly music and movies, have been out of control in how they deal with copyright infringement. The general public is fed up, but the industries are not responding to what customers want. The RIAA is actively going after individuals who violate copyright. The recent case of a grad student from Boston who was ordered to pay over $22,000 per song (for a total of 30 songs) shows how broken the system is.

The AP thought they deserved to get paid for every word they publish, so they implemented iCopyright to charge by the number of words, starting at $12.50. The AP also hired a "copyright bounty hunter" to go after those who violate its copyright. Guess how this ended up working? James Grimmelmann, an associate professor at New York Law School, submitted an excerpt from a Thomas Jefferson letter, which was obviously not written by the AP. Of course, AP said he owed $12.

I understand that you want to get paid for what you create, but the overall service is broken; the service experience is not pleasant for either creators or users. Smaller music service sites are trying different models to accommodate both musicians and listeners. Are there any ways to design a workable and realistic service for these kinds of issues?

Monday, August 3, 2009

Mock up for touch?

For our first project, I found that the observations and feedback led me to a multi-touch interface, or at least touch. When it came time to create a running mock-up, I found the existing tools lacking in this area, because using a mouse in an environment intended for touch skews the usability benefits and issues. I don't see this as a software issue so much as a lack of test hardware. I could go buy an iPhone and create an application specific to it for testing. A tablet or one of the new touch monitors might work, but again, this is something I'd have to buy. I could potentially use a multi-touch device on a university campus if I lived near one. None of these options are really feasible for me at the moment, so is there a solution for individuals working on small touch-based projects? Is there a way to test touch-based interfaces with a mouse and gain more accurate interaction data (beyond cognitive measures)? Or is a paper prototype potentially the best way to test with users, since they can simulate dragging, touch objects with their fingers, and experience moving their whole arm?

I was pointed to a post about the SDK for Microsoft Surface, which permits multiple mice to simulate multi-touch interfaces. This would help verify that the manipulation features are correct, but it would not provide accurate data on ease of use or speed for this different form of interaction. It seems like this area is currently lacking in prototyping opportunities, but maybe that is the price of using interaction concepts with newer technology.
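
For what it's worth, one low-fidelity workaround I can imagine (a sketch of my own, using the pygame library and a made-up "anchor key" convention, not part of the Surface SDK or any touch toolkit) is to fake a second touch point with a single mouse: holding the space bar pins a virtual second finger at the last click position, so simple two-finger gestures like pinch distance can at least be exercised:

    import pygame

    # Simulate two "touch points" with one mouse. Holding SPACE pins a virtual
    # second finger at the last click position; the live mouse position acts as
    # the first finger, so a pinch distance can be computed and displayed.
    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    anchor = None          # pinned "second finger", or None
    running = True

    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                anchor = event.pos            # remember last click as the anchor
        keys = pygame.key.get_pressed()
        finger1 = pygame.mouse.get_pos()
        screen.fill((0, 0, 0))
        pygame.draw.circle(screen, (0, 255, 0), finger1, 10)
        if keys[pygame.K_SPACE] and anchor is not None:
            pygame.draw.circle(screen, (255, 0, 0), anchor, 10)
            dx, dy = finger1[0] - anchor[0], finger1[1] - anchor[1]
            pinch = (dx * dx + dy * dy) ** 0.5   # distance between the two "fingers"
            pygame.display.set_caption(f"pinch distance: {pinch:.0f}px")
        pygame.display.flip()

    pygame.quit()

As noted above, this kind of simulation can validate manipulation logic, but it says nothing about the ergonomics or speed of real touch input.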