Tuesday, August 11, 2009
Input for VR systems has primarily been concerned with tracking the position and orientation of users. This information is required for immersive stereoscopic display, where the correct viewpoint must be calculated for each eye. Full 6-degrees-of-freedom (6-DOF) data, 3D position and orientation, are necessary for this calculation. Three main types of devices have been produced for 6-DOF tracking: magnetic, sonic, and image based. The following table is a comparison of the cost of these systems. Recently there has been research into using low-cost cameras and visual fiducials in a fully enclosed cave for tracking (Reference VF paper). This technique has been shown to be accurate, with moderate latency. Using visual fiducials is currently limited by the requirement of a fully enclosed cave environment and by the additional processing needed on the image data from the camera.
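The per-eye viewpoint calculation mentioned above can be sketched in a few lines. This is a minimal illustration, not code from any of the systems named here: it assumes the tracker delivers head orientation as a unit quaternion (w, x, y, z), and the 64 mm interpupillary distance is an assumed average. A real renderer would go on to build full view and projection matrices from these positions.

```python
def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, ux, uy, uz = q
    # t = u x v + w*v
    t = (uy * v[2] - uz * v[1] + w * v[0],
         uz * v[0] - ux * v[2] + w * v[1],
         ux * v[1] - uy * v[0] + w * v[2])
    # v' = v + 2 * (u x t)
    return (v[0] + 2 * (uy * t[2] - uz * t[1]),
            v[1] + 2 * (uz * t[0] - ux * t[2]),
            v[2] + 2 * (ux * t[1] - uy * t[0]))

def eye_positions(head_pos, head_quat, ipd=0.064):
    """Left/right eye positions for stereo rendering from a 6-DOF head pose.

    head_pos:  (x, y, z) head position from the tracker
    head_quat: (w, x, y, z) unit quaternion head orientation
    ipd:       interpupillary distance in meters (assumed ~64 mm average)
    """
    # Offset each eye by half the IPD along the head's local x-axis.
    off = quat_rotate(head_quat, (ipd / 2.0, 0.0, 0.0))
    left = tuple(p - o for p, o in zip(head_pos, off))
    right = tuple(p + o for p, o in zip(head_pos, off))
    return left, right
```

With the identity orientation and the head at the origin, the eyes land at ±32 mm along x; rotating the head rotates the eye offset with it. This is exactly why both position and orientation (full 6-DOF) are needed, and why a 3-DOF orientation-only tracker is not enough for a correct stereo viewpoint.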
Other than tracking devices, VR environments use common computer interaction devices, the keyboard and mouse being the most prevalent. Others include touch screens, gamepads, 3-degree-of-freedom trackers (Wiimote, PS3 controller), and microphones (audio processing). There are additional interfaces as well, including haptics, smell, taste, and even neural interfaces.
HCI of RTSs and console vs pc interfaces(Mouse)
Real-time strategy (RTS) games have historically been confined to PCs. The main reason appears to be that the mouse and keyboard provide control superior to that of console video game platforms. Notable attempts to bring RTSs to consoles are the Nintendo 64 (N64) version of StarCraft and the more recent Halo Wars. Halo Wars has been more successful, as it was designed from the ground up to use a control pad, whereas the N64 StarCraft was simply a port of the PC game.
So what is it about a mouse and keyboard that is superior to gamepads for RTSs? I argue that the mouse and the keyboard offer two separate, principal advantages. I will break the discussion of these into two blog posts. First, the mouse:
1. RTSs require very accurate and precise selection of items on screen.
A mouse is better suited for this than the joystick(s) on a gamepad. RTSs require the quick selection and deselection of friendly and enemy units; they also require accurate selection of small units within larger groups and accurate placement of commands for selected units. A joystick axis is usually limited to 8 bits (256 levels) of precision. Modern mice, however, have a much larger range of placement and greater control over the speed of movement. In the future, other interfaces may prove better at this than mice; touch devices, for example, may excel because they directly connect selection with the display itself.
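The precision gap can be made concrete with a little arithmetic. This is an illustrative sketch using assumed numbers (an 8-bit absolute axis and a 1920-pixel-wide display), not measurements from any particular gamepad:

```python
def min_step_px(axis_bits, screen_px):
    """Smallest on-screen step an absolute joystick axis can address
    when its full range is mapped across the screen width."""
    return screen_px / (2 ** axis_bits)

# An 8-bit axis mapped across a 1920-px display moves in ~7.5 px jumps,
# larger than many unit hitboxes, while a mouse reports fine-grained
# relative counts and can settle on any individual pixel.
step = min_step_px(8, 1920)
```

In practice, console RTSs sidestep this by treating the stick as a rate control (the stick sets the velocity of an on-screen cursor), which trades the precision problem for a speed problem; either way, selection is slower than direct mouse positioning.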
HCI of RTSs and console vs pc interfaces(Keyboard)
Now for the Keyboard.
2. RTSs require a wide range of commands to be quickly executable. Traditional US keyboards have at least 101 keys, in contrast to gamepads, which have ~16 buttons at most. When using a mouse and keyboard at the same time, as is done for RTSs, one hand covers the keyboard while the other covers the mouse. This means the whole keyboard cannot be covered at once; only ~30 keys are covered at any one time, plus the 2-3 buttons on the mouse. However, the hand covering the keyboard can be moved to different portions of the keyboard depending on the situation, so all of the keys remain readily available. Given the number of possible unit commands, magic commands, building commands, selection commands, and camera placement commands, the ~16 buttons of most gamepads are inadequate.
The software available for developing VR applications can be divided into two categories: code development libraries and end-user applications. The development libraries, Cavelibs, vrjuggler, OpenSceneGraph, OpenSG, and vrTools, are targeted at application programmers. The end-user applications, Quest3D, Unigine, Alice, Agent Sheets, and Vizard, are targeted at application designers.
Alice:
- 3D programming environment
- Focuses on teaching programming
- Creates games and computational science applications with music, speech, and visualizations
- No VR support

Unigine:
- Primarily a development library
- Unigine viewer allows modification of scenes
- Development tool for creating real-time 3D applications
When considering the accessibility of software and libraries, it's important to understand how it is affected by licensing. Bruce Perens describes four main categories of software licenses: proprietary, “gift”, “sharing with rules”, and “in-between” licenses [Perens]. Proprietary software is licensed such that it may not be modified or used in another package; doing so would be copyright infringement. Open-source “gift” licenses like the Apache license [Apache] allow modification and use of the software in any derivative work, including proprietary software. Open-source “sharing with rules” licenses allow modification and use of software as long as the derivative work is also shared. The General Public License version 3 (GPL3) [GPL3] is an example of an open-source “sharing with rules” license. Open-source “in-between” licenses like the Lesser General Public License version 3 (LGPL3) [LGPL] allow modification and use of the software in derivative works, including proprietary software, with the condition that the original software code be made available with the derivative work.
There may be software that provides the functionality needed in a new application, but because of licensing, that software may not be legally usable. This makes the general use of proprietary software and systems inaccessible to many. Under open-source licenses, however, there is legally solid ground for users to use, modify, and share derivative works.
The complexity of utilizing the hardware and software necessary for a VR system still requires specialized knowledge. The development of an application's software has a greater effect on accessibility than the hardware does. VR hardware arrangements range from very complex to support (a 6-sided cave) to relatively simple (a single computer with an attached HMD), but developing a VR application with VR software is complex across that entire range. Utilizing the available software requires software engineering, graphics, and VR domain knowledge. Because of this, most VR projects carry an additional cost for personnel with this specialized knowledge.
Monday, August 10, 2009
To what extent should designers consider accessibility options during the prototype phase? Is this something that designers should be inherently doing and building into their everyday design practices? Are there design patterns to meet these standards?
For example, Section 508 states that “the standards aim to ensure that such information is also available in an accessible format…the use of text labels or descriptors for graphics and certain format elements”
I recently observed a usability review of a table displaying data. There was a page title and description directly over the table. Users generally know where they are from the navigation, so why have these extra titles? They repeated what the navigation (tabs) stated and added more clutter to the page. The reason? To support accessibility requirements. I feel designers are often given a false choice between “good usability” and supporting accessibility requirements.
What are some ways to work towards a more seamless integration of supporting accessibility standards and following traditional usability practices?
I looked at a few posts and sites about the differences, and to summarize:

Marketing research:
- Focuses on larger samples
- Focuses on what people say
- Focuses on what people will buy

UX research:
- Focuses on smaller samples
- Focuses on what people do
- Focuses on how people will use
So is it important to have both? I’d say yes, since they focus on different areas. UX research often feeds directly into design decisions, whereas marketing research feeds into business/strategic decisions.
Is it important to have coordination between the UX and Marketing teams? What I found was that marketing research can help drive specific areas for the UX team to focus on.
Another reason to coordinate internally is to avoid bothering users too much. I realized I had no visibility on customer surveys being conducted by the Marketing team. Since some of our users are contacted by both teams, I’d hate to send a survey out at the same time and bombard the user with questions.
An interesting point about Apple’s approach of not using market research from Jobs: “It’s not about pop culture, and it’s not about fooling people, and it’s not about convincing people that they want something they don’t. We figure out what we want. And I think we’re pretty good at having the right discipline to think through whether a lot of other people are going to want it, too. That’s what we get paid to do.”
This goes completely against UX principles of focusing on user experiences. But Apple is successful. Why do you think that is?
“The usability problem with non-linear content is crucial because it indicates a deeper issue: Kindle's user experience is dominated by the book metaphor. The idea that you'd want to start on a section's first page makes sense for a book because most are based on linear exposition. Unfortunately, this is untrue for many other content collections, including newspapers, magazines, and even some non-fiction books such as travel guides, encyclopedias, and cookbooks.
So, the design decisions that make Kindle good for reading novels (and linear non-fiction) make it a bad device for reading non-linear content. Sure, Amazon designers could fix simple UI stupidities, such as the interaction design for a newspaper ToC. But doing so would simply apply a band-aid. To truly optimize the non-linear user experience, they'd have to completely reconceptualize the Kindle design. “
During the initial stages of product design, I’m curious to know what role the designer plays in understanding and designing for long-term goals. For example, Kindle is obviously focused on the book metaphor and specifically novels for now, but is their larger vision to encompass more forms of media and books in the future? If so, do designers need to understand long-term goals of a product, and consider these when producing the first version? How can designers create designs that are flexible enough to allow for changes in the future without starting from scratch?
Or if the Kindle PM team really didn’t consider anything beyond a novel, should UX have investigated different usage patterns and persona types and considered these during the initial design?
Our agile consultant recently passed along these two articles about how UX can use Agile to its advantage:
Both of these articles had really good suggestions. Here are the ones that stood out to me and could be applied to projects not using agile as well:
“Best practice” suggests that designers should research iteration n+2, design iteration n+1, support iteration n and review iteration n-1. The iteration zero is deservedly becoming an accepted way of buying time, but some teams are extending this idea with an additional mid-project iteration zero, in which no user stories are delivered. Instead, developers can tidy up code and plan next steps, while designers can revisit the vision and check that brand, aesthetics, and experience are coherent across the site so far.
Bill Buxton remarked that a problem with Agile development was iterating without ideating. I'm paraphrasing here, but basically Bill asserted that Agile teams lock onto a solution and iterate to refine it without considering that there may be a better solution out there. He's right, but not just about Agile teams. I see a fair number of designers guilty of the same behavior.
Today it's easier to respond to Bill's call to action with some concrete practices like Adaptive Path's Sketchboarding and Jeff & Jim's Design Studio approach. Desiree Sy described using interns to prototype 10 or more design solutions to a possible design problem.
The smart people at salesforce.com have taken RITE and cranked the dials up to 11. They build html prototypes and iteratively test and repair them using remote usability testing. They'll complete several rounds of this on each chunk of work before it goes into a development time-box.
In an attempt to travel light, I often hear UX people describe their prototypes as their specification. It's common to deliver only the prototype, or ideally the prototype plus a discussion with the team building the software. During the discussion, annotate the prototype by hand if necessary. No need to produce detailed documentation.
Cultivate a user validation group for continuous user validation. Use customer time to do some contextual-inquiry-style observation and interviewing, then sit down and review a prototype for something that may be built in a future iteration, then review the working software to test features just built in a previous iteration. The trick here is to leverage that user face time for both research and validation. Don't segregate your work.
Sure, a well-designed, intuitive interface can reduce the need for help, but to completely eliminate all help assumes a perfect system and really, a perfect world (don’t think this is going to happen any time soon).
I recently began collaborating with our Documentation team on how to build a better help system in our products. While I’ve found a few articles around document presentation, readability, etc., I wasn’t very successful in finding overall approaches to Help usability. There were a few articles that focused on heuristics, like this one from the STC: http://www.stcsig.org/usability/newsletter/0401-heuristics.html
While heuristic evaluations can help, they don’t approach help by looking at the overall user experience, or help experience.
We decided to try and look at help more holistically. Here’s an example of what we found from a customer interview:
A particularly irate customer printed out the entire documentation (1000+ pages), thumped it down on the table, and then gave me a single error message they had received in the product. They were insistent that I myself try to use our help system, and they wanted to watch. The customer asked me to solve the problem (and remember their production line is down, and every second translates to money lost). I did the following:
1. Read the error message for any clues. It only gave the error type (error), an error ID, and a one-line message: Contact your system administrator.
2. Searched the documentation (online) for the error code
3. Searched the documentation (online) using various keyword searches
4. Browsed through the topics (online) in the documentation
5. Checked the support website (that lead me into other problems I won’t go into)
None of these approaches produced the reason for or a solution to the problem. Afterwards, he shared his own approaches, all of which had failed as well. The interface was well-designed, but the help wasn’t.
At first this seems like just an error message problem. If the error message had a better explanation, there would be no problem. Yes, this is one area that needs improvement, and there are numerous articles on creating effective error messages for UX designers.
But we took this a step further. Sometimes the reason cannot be given for the error, or the error is produced from customer-side reasons that engineers can’t predict.
We started to evaluate the overall help experience: How is someone using the help? Are they even using the help? What do they assume they can find in it? When do they turn to help? Etc. We then broke this down into concrete areas we could focus on improving collaboratively with the Documentation team. Here are some examples:
Messages (info, error, warning, etc.)
Search engine capabilities for documentation
How users call help in the system (help links, context-sensitive help, tooltips, etc.)
Grouping of information
What needs to be documented?
Readability of content
I think there is high potential for improving the overall user experience of systems by working with documentation teams to evaluate and implement help systems. What do you guys think?
“As web designers and information architects, we often dismiss deep consideration of content when we design interactive experiences. By content I’m not only referring to the various forms of text (e.g., headers, body copy, error messages) but also imagery, graphics, and videos or audio that make up the full interactive experience.
Sure, we have a sense of what content is available, and we’ve likely considered it to some extent when creating flows, wireframes, and prototypes. But the design artifacts that we create represent only part of the overall user experience that we’re designing. The content that sits inside of our design framework is often the final arbiter of success, yet we sometimes diminish its importance and separate ourselves from it. The more we separate our design activities from content development, the greater the risk of design failure.”
Recently I was designing a new navigation system and was asked by my UX manager to not use real content, so people could focus on the actual navigation system and not on the content. Also, since this was a navigation system for a set of products, the prototype needed to be generic.
I can see the pros and cons of this request. During the first round of feedback sessions I had fake content in my prototypes. I was able to focus on the actual mechanics of the system quite easily, but I received a lot of questions about whether I’d tried integrating real content to make sure it works. It took a lot of effort for people to imagine how it would really work, and there were questions about whether I had considered x, y, z content use cases.
For the second round of feedback I used real content. Having the content helped me find problems early on (can these products’ models work in this structure, spacing issues, size limits, etc.). During feedback sessions, this did help people to understand how the system would actually work, but it also added another level of distraction. The focus would move away from the navigation system and discussions cropped up around the content.
Though more time consuming, I felt having both versions was helpful. Starting with a prototype without real content allowed me to focus on the system itself, and then adding content acted as a first test, giving different but still useful feedback.
Sunday, August 9, 2009
Some Computer Vision Links:
Human-computer interaction through computer vision
Extending human-computer interaction by using computer vision and colour recognition
Emerging Topics in Computer Vision
Some Tangible User Interfaces Links:
3D Computer Interaction Using Physical Objects: Exploration of Tangible User Interfaces
Tangible User Interface for Children An Overview
A Tangible User Interface as Interaction and Presentation Device to a Social Learning Software
Some Human Robot Interaction Links:
Utilizing Physical Objects and Metaphors for Human Robot Interaction
Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another
Some Human Factors and Artificial Intelligence Links:
Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction
Integrating Human Factors and Artificial Intelligence in the Development of Human-Machine Cooperation
- Touchable Holography
- Augmented Reality for Ordinary Toys
- Hyper-realistic Virtual Reality
- 3D Teleconferencing
- Scratchable Input
By the way, a guy that does scratching surfaces will not be invited to my place. I don’t want him to scratch my furniture. ;)
Last July, Nissan Motor Co., Ltd. announced the development of a new air conditioning system called “Forest AC”. This new AC system creates an optimum cabin environment by controlling temperature, ventilation, aroma, and humidity. You say, aroma? Yes, I know. Nissan says the new system was based on research on the effects of aroma on human mental activity. You can read the rest; I was curious about what aroma can really do. There are already systems out there that automatically control the interior temperature of your car and add pollen filters to keep you comfortable while you drive. All these systems fall under ubiquitous computing. This new system goes even further by blocking and removing odor using “ions”. Making you feel comfortable as you drive is a good idea, but you have to wonder about the process of designing a system like that.
“Nissan develops technology to set new standards in terms of comfort, quality and other factors for every aspect of the automobile, putting human characteristics first, based on the Life on Board concept, which is designed to deliver more joy of driving.”
Where am I going with this? Well, I am wondering if engineers in general think that because they can develop a system like this, they should develop it. I wonder if they ever consider users and the context of driving in general. You can install as many sensors in a vehicle as you like, but do they try to understand the context of driving? Making the driver “not think” may not apply to this context. Did they just forget about the “experience of driving”? Humidity and airflow control may be a good idea, but aroma has some issues. I think scent is another sense drivers use to detect when something is wrong in a car. Is it a good idea to take away users’ natural senses while driving? I thought drivers should be using all their senses to connect with the vehicle and the road to create the “driving experience”. I wonder what Dourish would say about the relationship between a driver, the car, and the road. Would he think removing these senses ruins our interaction with the world around us?
After quickly finding this page, I sit with the question: why are these visual search engines not among the mainstream search options? Google, the godfather of search engines, has incorporated visual aids such as the Wonder Wheel and Timeline to assist searchers with visual and semantic information, but it has not yet created anything similar to the visual search engines on this page.
The chosen tools are more advanced than you would think. As we have discussed various usability tests in class, these media groups are using similar methods to track how ads are viewed and which ones are effective. Besides eye tracking, such tools include heart-rate monitors, skin temperature readings, and facial expression monitoring. They are trying to figure out how small ads can be while still being viewed by users, or how users feel about certain ads based on their facial expressions.
Again, we are faced with corporate-driven $trategy vs. the functionality of information dissemination. This disturbs me greatly, as they have to figure out a way to effectively make sure users see ads. As a usability designer, how would you feel about your design being compromised? I understand if it is for entertainment. What annoys me the most is that ads are getting aggressive and are no longer just a part of the page. In the old days, ads had their own space on the page. Now they are in your face, and you have to click to stop them. From a usability standpoint, they are not a functional part of the site and are not friendly to users. They distract you from doing what you want to do. If you are an independent designer, you may purposely add spaces for ads. At larger corporate levels, is this how it works? Would the designer be informed of how ads will be displayed so they can change the design to accommodate them, or simply told to design a site while the marketing department figures out how to put ads over the design?
What happened to user-centered design? I haven’t come across any design books/sites that talk about how to incorporate ads as part of a design. I see Google’s ads model, where you can simply select what type of ads you want and place a widget on your page. But that’s more for the independent designer. I wonder about larger corporations that accept ads. What do you think?
http://hci.stanford.edu/~winograd/acm97.html See section 2.2
The article talks about the emerging area of interaction design. I am still trying to wrap my head around service design; now interaction design. Simply put, the author describes it as a shift from seeing the machinery to seeing the people using it. This sounds a lot like service design; is it the same thing? Or is service design one sector of interaction design? Wikipedia describes a service as the non-material equivalent of a good. Interaction is a kind of action that occurs as two or more objects have an effect on one another. I guess from these definitions one can have service interaction design: seeing the people using the service instead of the service itself... make sense? So if I am an engineer designing a vehicle display, I am designing based on the user who will be viewing the display and not the technology. Help me here? What are your thoughts?
"In your usability studies, how do you get the subjects to take the interview, survey, etc as serious as you do? Most people who take surveys fill it out without much thought. One time I created a survey to determine if a particular tool was useful; it seemed as if the participants just went through the motions, especially if the questions only required them to select from a list (i.e. agree..strongly agree). Is there a certain method of questioning or interviewing that is more effective for getting honest answers or well thought out answers?"
The response to my email was to do the following:
1. Build rapport with the people you are interviewing. Make sure they feel like their opinions are valued and put them at ease so they feel they can be completely honest.
2. Wording of questions is critical. Make sure you ask really open ended questions without leading.
I did some browsing around, and a document someone posted on the web was helpful in providing more insight as well as confirming the email response. The author gave some do's and don'ts of usability study design. For example, give the participant the option to end the study if they feel uncomfortable. Make sure the user understands that there is no right or wrong answer. Also, have them sign a form explaining that the results are anonymous; this helps them feel that they can be honest in their answers. The author also gave techniques on what to watch for during the study; for example, be sure to note non-verbal communication. As for the types of questions, the advice was to use open-ended questions as well as task-oriented questions.
The author also gave pointers on when to perform usability studies, the best person to conduct the study, etc.
If you read through the article, I know from my personal experience that I agree. I never actually pay much attention to the "happenings" on the side. They also stated that when users do engage with an ad, it is usually one that looks like a Windows command. I AGREE. I remember the first time I saw one of those... I CLICKED. That was the last time I was fooled, so nowadays I don't think those are popular anymore.
Now I understand why advertisers are just plain old bombarding web browsers with advertisements. Have you ever been reading an article and had a McDonald's ad pop up right in front of it? Then you have to search for the mystery "close" button. Now I know why! The ads want to be heard, or seen in this case. This is another reason why I am beginning to accept that this phenomenon is real.
I found this article to be helpful because it lists steps to guide you in developing your personas. I thought Cooper (Chapter 5, p. 77) did a great job with this also. This article gives a different perspective on the step-by-step process for developing effective personas. One part I found interesting in the article, as opposed to Cooper, is the verification step. This is where you find data to support your persona descriptions and scenarios. This made sense because when creating personas, you use the collected data to get an idea of what your users are like. But is what you came up with correct? Does it make sense? Even though the article seems to focus more on development as opposed to the “user”, I think you can pull different techniques and ideas from each process (Cooper and the article).
Saturday, August 8, 2009
I came across a Web 2.0 site called Smartypig.com. Smartypig.com is a service site where users are able to set their own savings goal. It’s a virtual piggy bank. At the same time, you can share your savings goal with others (family members too) so that they can contribute to it if they wish. Since this is Web 2.0, it integrates with Facebook and other social network sites.
I am a little baffled by this service model and its user behaviors. I may be old school, but I don’t get the idea of sharing this kind of personal detail. Saving is a good behavior that should be encouraged, especially in this tough economic time. Everyone wants to save some, but I am not sure about doing it in public.
Then I remembered seeing money-lending sites a few years ago. Instead of getting a loan from a bank, you ask your peers. These sites are even more popular now. Some college financial aid offices are even listing them as an alternative way to get loans.
"Matching Borrower with Lender, Social-Network Style"
"Peer to Peer Education Loans"
Other sites are…
Obviously, lenders and borrowers are getting something out of this service. The needs of the two parties are different, but the service brings them together to provide what both users want. To make this service legitimate, the site has to create “trust” with its users. I wondered how this kind of service got started... Peer-to-peer services are common, but the amount of money these users deal with is on a completely different level. Will this be a common service in the future? Is this part of cloud services?
As we wrapped up the service design course, I thought I would ask others about this issue. Digital copyright has been, and will continue to be, a messy experience for both content owners and users. No technology so far has seemed to work well for both content providers and customers.
Many industries, particularly music and movies, have been out of control in how they deal with copyright infringement. The general public is fed up, but the industries are not responding to what customers want. The RIAA is actively going after individuals who violate copyright. The recent case of a grad student from Boston who was ordered to pay over $22,000 per song (for a total of 30 songs) shows how broken the system is.
The AP thought they deserved to get paid for every word they publish. So they implemented iCopyright to charge by word count, starting at $12.50. The AP also got copyright bounty hunters to go after those who violated its copyright. Guess how this ended up working? James Grimmelmann, associate professor at New York Law School, submitted an excerpt from a Thomas Jefferson letter, not written by the AP. Of course, the AP said he owed $12.
I understand you want to get paid for what you create. But the overall service is broken. The experience is not pleasant for either creators or users. Smaller music service sites are trying different models to accommodate both musicians and users. Is there any way to design a workable and realistic service for these kinds of issues?
Monday, August 3, 2009
I was pointed to this post about the SDK for Microsoft Surface, which permits multiple mice to simulate multi-touch interfaces. This would help verify correct manipulation features, but would not provide accurate data on ease of use or speed for this different form of interaction. It seems like this area is currently lacking in prototyping opportunities, but maybe that is the price of using interaction concepts with newer technology.