Tuesday, August 11, 2009

Collection of Posts by David Kabala

Input

Input for VR systems has primarily been concerned with tracking the position and orientation of users. This information is required for immersive stereoscopic display when calculating the correct viewpoint for each eye. Six-degree-of-freedom (6-DOF) data, 3D position and orientation, are necessary for this calculation. Three main types of devices have been produced for 6-DOF tracking: magnetic, sonic, and image-based. The following table is a comparison of the cost of these systems. Recently there has been research into using low-cost cameras and visual fiducials in a fully enclosed cave for tracking (Reference VF paper). This technique has been shown to be accurate, with moderate latency. Using visual fiducials is currently limited by the requirement of a fully enclosed cave environment and by the additional processing needed on the camera's image data.
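
To make the data concrete, here is a minimal sketch in Python of a 6-DOF sample and how a stereo renderer might derive per-eye viewpoints from it. The names and the yaw convention are illustrative, not drawn from any particular tracking API:

    import math
    from dataclasses import dataclass

    @dataclass
    class PoseSample:
        """One 6-DOF tracker reading: 3D position plus orientation."""
        x: float; y: float; z: float            # position (meters)
        yaw: float; pitch: float; roll: float   # orientation (degrees)
        timestamp: float                        # capture time (seconds)

    def eye_positions(head: PoseSample, ipd: float = 0.064):
        """Offset the head position along its right vector by half the
        interpupillary distance (IPD) to get per-eye viewpoints; assumes
        yaw about the vertical axis and ignores pitch/roll for brevity."""
        half = ipd / 2.0
        rx = math.cos(math.radians(head.yaw))
        rz = math.sin(math.radians(head.yaw))
        left = (head.x - half * rx, head.y, head.z - half * rz)
        right = (head.x + half * rx, head.y, head.z + half * rz)
        return left, right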

Other than tracking devices, VR environments use common computer interaction devices, the keyboard and mouse being the most prevalent, along with touch-screen devices, gamepads, 3-degree-of-freedom trackers (Wiimote, PS3 controller), and microphones (audio processing). There are additional interfaces as well, including haptics, smell, taste, and even neural interfaces.

---

HCI of RTSs and console vs. PC interfaces (Mouse)

Real-time strategy (RTS) games have historically been confined to PCs. The main reason appears to be that the mouse and keyboard are superior for control compared to what console video game platforms provide. Notable attempts to bring RTSs to consoles are the Nintendo 64 (N64) version of StarCraft and the more recent Halo Wars. Halo Wars has been more successful, as it was designed from the ground up to use a control pad, whereas the N64 StarCraft was simply a port of the PC game.

So what is it about a mouse and keyboard that is superior to gamepads for RTSs? I argue that the mouse and keyboard each offer a principal advantage. I will break the discussion up into two blog posts. First, the mouse:
1. RTSs require very accurate and precise selection of items on screen.
A mouse is better suited for this than the joystick(s) on a gamepad. RTSs require the quick selection and deselection of friendly and enemy units; they also require accurately selecting small units within larger groups and accurately placing commands for selected units. A joystick axis is usually limited to 8 bits (256 discrete positions) of precision, whereas modern mice have a much larger range of placement and give greater control over the speed of movement. In the future, other interfaces may prove better at this than mice; touch devices in particular may excel because they directly connect the act of selection to the display itself.
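
A quick back-of-the-envelope calculation (in Python, with illustrative numbers) shows why this matters:

    SCREEN_WIDTH = 1920        # horizontal pixels on a common display
    AXIS_LEVELS = 2 ** 8       # an 8-bit joystick axis: 256 positions

    step = SCREEN_WIDTH / AXIS_LEVELS
    print(f"Coarsest joystick step: {step:.1f} pixels")   # 7.5 pixels

    # A mouse reports relative motion at hundreds of counts per inch,
    # so every individual pixel is addressable; a small unit inside a
    # crowded group may span fewer pixels than one joystick step.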

---

HCI of RTSs and console vs. PC interfaces (Keyboard)

Now for the Keyboard.
2. RTSs require a wide range of commands to be quickly executable. Traditional US keyboards have at least 101 keys, in contrast to gamepads, which have at most ~16 buttons. When using a mouse and keyboard at the same time, as is done for RTSs, only one hand covers the keyboard while the other covers the mouse. This means the whole keyboard cannot be covered at once; only ~30 keys are within reach at one time, plus the 2-3 buttons on the mouse. However, the hand covering the keyboard can be moved to cover different portions of the keyboard depending on the situation, so all of the keys remain readily available. Given the number of possible unit commands, magic commands, building commands, selection commands, and camera-placement commands, the ~16 buttons of most gamepads are inadequate.
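
Rough arithmetic (illustrative counts, sketched in Python) makes the gap plain:

    KEYS_IN_REACH = 30        # keys one hand covers without repositioning
    MODIFIER_COMBOS = 4       # none, Shift, Ctrl, Alt (ignoring chords)
    MOUSE_BUTTONS = 3
    GAMEPAD_BUTTONS = 16

    keyboard_bindings = KEYS_IN_REACH * MODIFIER_COMBOS + MOUSE_BUTTONS
    print(keyboard_bindings, "vs", GAMEPAD_BUTTONS)   # 123 vs 16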

---

The software available for developing VR applications can be divided into two categories: code development libraries and end-user applications. The development libraries, CAVELib, VR Juggler, OpenSceneGraph, OpenSG, and vrTools, are targeted at application programmers. The end-user applications, Quest3D, Unigine, Alice, AgentSheets, and Vizard, are targeted at application designers.

End-user applications:

  • Alice: a 3D programming environment that focuses on teaching programming
  • AgentSheets: creates games and computational-science applications with music, speech, and visualizations; license restrictions; no VR support
  • Unigine: primarily a development library, though the Unigine viewer allows modification of a scene; licensing restrictions
  • Quest3D: a development tool for creating real-time 3D applications; licensing restrictions; expensive; Windows-only

---

Software Licenses

When considering the accessibility of software and libraries, it's important to understand how it is affected by licensing. Bruce Perens describes four main categories of software licenses: proprietary, “gift”, “sharing with rules”, and “in-between” licenses [Perens]. Proprietary software is licensed such that it may not be modified or used in another package; doing so would be copyright infringement. Open-source “gift” licenses like the Apache license [Apache] allow modification and use of the software in any derivative work, including proprietary software. Open-source “sharing with rules” licenses allow modification and use of the software as long as the derivative work is also shared; the General Public License version 3 (GPL3) [GPL3] is an example. Open-source “in-between” licenses like the Lesser General Public License version 3 (LGPL3) [LGPL] allow modification and use of the software in derivative works, including proprietary software, with the condition that the original software's code be made available with the derivative work.

There may be software that provides the functionality needed in a new application, but because of licensing, that software may not be legally usable. This makes the general use of proprietary software and systems inaccessible to many. Under open-source licenses, by contrast, there is legally solid ground for users to use, modify, and share derivative works.

---

Expertise

The complexity of the hardware and software necessary for a VR system still requires specialized knowledge. The development of an application's software has a greater effect on accessibility than the hardware does. This is because VR hardware arrangements range from very complex to support (a 6-sided cave) to relatively simple to support (a single computer with an attached HMD), yet developing a VR application with VR software is complex across that entire range. Utilizing the available software requires software engineering, graphics, and VR domain knowledge. Because of this, most VR projects bear the additional cost of personnel with this specialized knowledge.

Sunday, August 9, 2009

Computer Vision

Computer Vision is an excellent complement to existing methods in Human Computer Interaction. Computer vision methods allow us to analyze and recognize human movements and behavior through digital images and video. Most computers still receive input from legacy low-bandwidth input devices such as keyboards and mice. This type of interface forces a user to adapt to a computer in an unnatural way. Stereo vision, voice recognition, face recognition, emotion recognition, and motion tracking are just a few techniques used for tracking how people interact with their surroundings in a natural way. These techniques supply us with promising human-computer interaction methods by analyzing and recognizing human movements. Detecting and tracking human body parts is one of the main steps necessary to reach a robust and natural interaction or interface model, and there are already some practical applications using this model in HCI.

Perceptual interfaces have recently emerged to motivate an increasingly large amount of research within the computer vision community. The focus of this effort is to provide highly interactive, multimodal interfaces that enable rich, natural, and efficient interaction with computers. A multimodal interface is a system that combines two or more input modalities in a coordinated manner; a modality refers to a particular sense. Perceptual interfaces are inherently multimodal: humans interact with the world by sending and receiving information, primarily through the five major senses of sight, hearing, touch, taste, and smell. The topic of perceptual interfaces is very broad, and in recent years several workshops and conferences have begun to focus on how computer vision and advances in other areas, such as speech and sound recognition, can increase research opportunities.
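
As one concrete example of vision as an input channel, the sketch below (Python with the opencv-python package; the camera index and detector parameters are illustrative) detects a face in each webcam frame and treats its center as a pointer position:

    import cv2

    # The frontal-face Haar cascade ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cx, cy = x + w // 2, y + h // 2   # candidate "pointer" position
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("vision input", frame)
        if cv2.waitKey(1) == 27:              # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()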

Some Computer Vision Links:

Human-computer interaction through computer vision

Extending human-computer interaction by using computer vision and colour recognition

Emerging Topics in Computer Vision

3D Tangible User Interfaces

A tangible user interface (TUI) is a user interface in which a person interacts with digital information through the physical environment. TUIs offer a natural means of interaction with computers and have already been shown to simplify existing computerized applications and offer solutions for tasks that were considered “out of the scope” of human-computer interaction. People have natural tangible abilities that allow them to manipulate physical objects with very little cognitive effort. Current HCIs (human-computer interfaces) use very little of these abilities and do not provide much opportunity to manipulate physical objects. The Window-Icon-Menu-Pointer (WIMP) interaction metaphor is the predominant standard for current HCIs, and this interface blocks our innate natural abilities. Creating HCIs as 3D virtual worlds can dramatically change the way we perform CAD or construction-oriented tasks and open the door for other applications that are not currently supported by computers. Manipulating objects in a 3D space such as a virtual world can provide interactions through a transparent interface that is in line with our physical qualities. TUIs are also a great tool for assisting children in learning, because of their ability to take advantage of children's natural ability to manipulate objects. Children have long played with physical objects to learn a variety of skills; a tangible interface would therefore appear to them as a “natural” form.

Some Tangible User Interfaces Links:

3D Computer Interaction Using Physical Objects: Exploration of Tangible User Interfaces

Tangible User Interface for Children An Overview

A Tangible User Interface as Interaction and Presentation Device to a Social Learning Software

Robotics (Human Robot Interaction)

Human-robot interaction, often referred to as HRI by researchers, is the study of interactions between humans and robots. Traditional graphical user interfaces and input devices (keyboards and mice) are commonly used in the field of HRI to control robots. These traditional user interfaces are currently the standard, but they do not offer an ideal mapping to various robotic tasks: they do not take advantage of our innate skills of physical-object manipulation and spatial perception. Tangible user interfaces (TUIs) offer a physical metaphor that could be used to manage the interaction between robots and physical objects. Past research has focused on improving robot development, while the operation of robots in different spatial situations and environments has often been overlooked. A 2D user interface (UI) limits a human robot operator's spatial perception when interacting with three-dimensional objects. Low-level robot control tasks could really benefit from TUIs. TUIs couple digital information and function with physical objects, allowing a virtual entity in the digital realm to be manipulated through a physical medium. TUIs provide spatial orientation, and the position of a physical object in relation to its surroundings can expose additional information and give the human robot operator interaction insight and task awareness.

Some Human Robot Interaction Links:

Utilizing Physical Objects and Metaphors for Human Robot Interaction

Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another

Surface Computing and Multi-Touch Interfaces

There is a strong focus on multi-touch interaction in HCI. Experimental multi-touch interfaces have been around since 1982; the first multi-touch display was based on finger-pressure technology. A multi-touch table uses a very simple method for tracking finger and object touches and movement on a surface screen: it takes advantage of a camera's ability to see a greater spectrum of light than the human eye, extending into the infrared. There are four major multi-touch surface interface solutions: Microsoft Surface, Perceptive Pixel, TouchLib, and TBeta. These solutions implement a design that can host complex high-performance applications, providing control of multi-screen projection and of audio and visual events. Microsoft Surface uses a design that is based on a standard IKEA table and has a projector light diffuser in the center of the table. One of the most famous solutions is Perceptive Pixel, designed by Jeff Han. Jeff Han is arguably one of the major pioneers of multi-touch surface computing because of his ability to promote it; his solution uses frustrated total internal reflection (FTIR). The two major open-source alternative solutions are TouchLib and TBeta. These solutions can track finger touches and translate the input into standard HID (Human Interface Device) events.
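
The core of that tracking loop is simple enough to sketch. The Python/OpenCV fragment below is not code from any of the systems named above; the camera index, threshold, and area cutoff are illustrative. It thresholds the infrared image and reports each bright fingertip blob's centroid as a touch point:

    import cv2

    cap = cv2.VideoCapture(0)  # IR camera looking at the underside of the surface
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright IR blobs
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) > 30:          # ignore sensor noise
                m = cv2.moments(c)
                touches.append((int(m["m10"] / m["m00"]),
                                int(m["m01"] / m["m00"])))
        # 'touches' is where translation into HID-style events would happen
        print(touches)
        cv2.imshow("blobs", mask)
        if cv2.waitKey(1) == 27:                 # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()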

Some Surface Computing and Multi-Touch Interface Links:


Human Factors and Artificial Intelligence

Increasing machine intelligence leads to a shift from a merely interactive to a much more complex cooperative human-machine relation, requiring a multidisciplinary development approach. In current HCI research, personalization and adaptive interfaces are proposed to enable easy access to proliferating functions and services. The increase in machine intelligence leads to a shift from HCI to human-machine cooperation (HMC). Artificial intelligence can be used to design future machines that are built to cooperate with humans, or that learn to. There will be a need for humans and machines to understand and comprehend each other's reasoning and behavior. The possibilities for HMC are extensive, but knowledge is lacking on how to integrate human factors (HF) and artificial intelligence (AI) into a successful machine design. Cognitive engineering is a methodology that provides insight into the cognitive factors of human performance.

Some Human Factors and Artificial Intelligence Links:

Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction

Integrating Human Factors and Artificial Intelligence in the Development of Human-Machine Cooperation

Service Blueprinting

I ran across an article while performing research for SD2 that talked about how service blueprinting is a key to service innovation. The article describes how service blueprinting offers a surprisingly flexible means of helping companies understand how well, or how poorly, they are serving their customers. Service blueprinting focuses primarily on customer experience. The idea behind it is simple: companies put themselves in their customers' shoes to find out what's working, what's not, and what needs to be changed. The article goes on to discuss the uniqueness of this technique as compared to other process techniques; the customer is the center and focus for service improvement and innovation. Companies that create blueprints first quickly identify the specific service process to be explored and can identify the customers that the service targets. Blueprinting is flexible, because most businesses are services at some level. It has been used by companies across all industries, and the researchers expect the years to come to bring more widespread adoption of the technique. As that occurs, users will offer their own innovations to the process, making it all the more effective in the long run.

Futuristic Interfaces

SIGGRAPH 2009 just wrapped up in New Orleans this week. At the event, some interesting futuristic interfaces were displayed; you can check out others via the source link below.

  • Touchable Holography
  • Augmented Reality for Ordinary Toys
  • Hyper-realistic Virtual Reality
  • 3D Teleconferencing
  • Scratchable Input
These futuristic interfaces have been imagined and developed; some are very practical and some are not. I feel futuristic inputs are cool and fine, but something seems to be missing from their project descriptions. Having read Dourish's “Where the Action Is”, I personally think designers should read this book and understand what embodied action is. It's about shared awareness between users and the world around us. I think I mentioned in another post that engineers or developers create cool gadgets simply because they can. That should not be encouraged, or at least others should be asking questions. Designers must consider social awareness, intimacy, and emotions in the physical world. Any system should be human-centered. What do you think of designers considering embodied interaction? Is it practical, or is that just an idealistic view?

By the way, the guy who does scratching on surfaces will not be invited to my place. I don't want him scratching my furniture. ;)

Source:
http://www.technologyreview.com/blog/editors/23940/

Why does your vehicle smell like “forest”?

Last July, Nissan Motor Co., Ltd. announced the development of a new air-conditioning system called “Forest AC”. This new AC system creates an optimum cabin environment by controlling temperature, ventilation, aroma, and humidity. You say, aroma? Yes, I know. Nissan says the new system was based on research into the effects of aroma on human mental activity. You can read the rest; I was curious about what one can really do with aroma. There are already systems out there that automatically control the interior temperature of your car and add pollen filters to help keep you comfortable while you drive. All of these systems fall under ubiquitous computing. This new system goes even further by blocking and removing odor using “ions”. Making you feel comfortable as you drive is a good idea, but you have to wonder about the process of designing a system like that.

Nissan says…

“Nissan develops technology to set new standards in terms of comfort, quality and other factors for every aspect of the automobile, putting human characteristics first, based on the Life on Board concept, which is designed to deliver more joy of driving.”

Where am I going with this? Well, I am wondering if engineers in general think that because they can develop a system like this, they should. I wonder if they ever consider users and the context of driving in general. You can put as many sensors as you can install in a vehicle, but I wonder if they try to understand the context of driving. Making the driver “not think” may not apply in this context. Did they just forget about the “experience of driving”? Humidity and airflow control may be a good idea, but aroma has some issues. I think scent is another sense drivers use to detect that something is wrong in a car. Is it a good idea to take away users' natural senses while driving? I thought drivers should be using all their senses to connect with the vehicle and the road to create the “driving experience”. I wonder what Dourish would say about the relationship between a driver, the car, and the road. Would he think these systems ruin our interaction with the world around us?

Source:

http://www.engadget.com/2009/07/27/nissans-forest-ac-blows-wild-scents-through-your-ride-to-keep-y/

http://www.nissan-global.com/EN/NEWS/2009/_STORY/090723-01-e.html

Visual Search

Recently I started reading Ambient Findability by Peter Morville. So far it is an excellent read on information architecture and wayfinding (http://www.amazon.com/exec/obidos/tg/detail/-/0596007655/findability-20/). As I read the book, it strikes me how utterly dependent we have become on text, and on knowing the right word or combination of words, to find what we are searching for when interacting with the web. We have all experienced this: we search for something, find very little, and later realize that we did not have the best keyword to yield the results we wanted. If we focus on creating the best user experience possible, this will most likely include the ability for users to search for items with more than just the perfect keywords or through a winding path of linked ideas. What about searching through key images or visual representations of our desired topic? I conducted a search (using text, of course) and found the following link with terrific references to visual search engines available today.

http://www.masternewmedia.org/top-visual-search-engines-the-most-interesting-ways-to-visually-explore-search-engine-results/

After quickly finding this page, I sit with the question: why are these visual search engines not among the mainstream search options? Google, the godfather of search engines, has incorporated visual aids such as the Wonder Wheel and Timeline to assist searchers with visual and semantic information, but it has not yet created anything similar to the visual search engines on this page.

Happy searching!

Advertisement vs. Design

On July 27, 2009, a New York Times article talked about how Walt Disney cooked up the Disney Media and Advertising Lab to identify how effective the ads displayed on its sites are.

The chosen tools are more advanced than you would think. As we have discussed various usability tests in class, these media groups are using similar methods to track how ads are viewed and which ones are effective. Besides eye tracking, the tools include heart-rate monitors, skin-temperature readings, and facial-expression monitoring. They are trying to figure out how small the ads can be while still being noticed by users, and how users feel about certain ads based on their facial expressions.

Again, we are faced with corporate-driven $trategy vs. the functionality of information dissemination. This disturbs me greatly, as they have to figure out a way to effectively make sure users see ads. As a usability designer, how would you feel about your design being compromised? I understand if it is for entertainment. What annoys me the most is that ads are getting aggressive and are no longer part of the page. In the old days, ads had their own space on the page; now they are in your face and you have to click to stop them. From a usability standpoint, they are not a functional part of the site and are not friendly to users. They distract you from doing what you want to do. If you are an independent designer, you may purposely add space for ads. At larger corporate levels, is this how it works? Is the designer informed of how ads will be displayed, so the design can be changed to accommodate them, or simply told to design a site while the marketing department figures out how to put ads over the design?

What happened to user-centered design? I haven't come across any design books/sites that talk about how to incorporate ads as part of a design. I see the Google ads model, where you can simply select what type of ads you want and place a widget on your page, but that's more for independent designers. I wonder about larger corporations that accept ads. What do you think?

Emergence of Interaction Design (section 2.2)

Since the first few weeks of HCI 596, I have been challenged. I have said before that the readings and teachings in this class have challenged a lot of what I was taught in my world of engineering. So I have been searching for a meeting place between the two, HCI and engineering. How can I apply this to systems engineering or electrical engineering? If I can't, why not? I cannot say that I have found the answers, but I did find a few interesting readings. One in particular I will attach to this blog.

http://hci.stanford.edu/~winograd/acm97.html See section 2.2

The article talks about the emerging area of interaction design. Now, I am still trying to wrap my head around service design; now interaction design. Simply put, the author describes it as a shift from seeing the machinery to seeing the people using it. This sounds a lot like service design; is it the same thing? Or is service design one sector of interaction design? Wikipedia describes a service as the non-material equivalent of a good. Interaction is a kind of action that occurs as two or more objects have an effect on one another. I guess from those definitions one can have service interaction design: seeing the people using the service instead of the service itself... make sense? So if I am an engineer designing a vehicle display, I am designing based on the user who will be viewing the display and not on the technology. Help me here? What are your thoughts?

Usability Study Design Tips

While working on project 1, I had a few questions about how to complete usability studies. I sent an email asking/commenting the following:

"In your usability studies, how do you get the subjects to take the interview, survey, etc as serious as you do? Most people who take surveys fill it out without much thought. One time I created a survey to determine if a particular tool was useful; it seemed as if the participants just went through the motions, especially if the questions only required them to select from a list (i.e. agree..strongly agree). Is there a certain method of questioning or interviewing that is more effective for getting honest answers or well thought out answers?"

The response to my email was to do the following:

1. Build rapport with the people you are interviewing. Make sure they feel like their opinions are valued and put them at ease so they feel they can be completely honest.
2. Wording of questions is critical. Make sure you ask really open ended questions without leading.

I did some browsing around, and a document that someone posted on the web was helpful in providing more insight as well as confirming the email response. The author gave some do's and don'ts of usability study design. For example, give participants the option to end the study if they feel uncomfortable. Make sure the user understands that there is no right or wrong answer. Also, have them sign a form explaining that the results are anonymous; this helps them feel that they can be honest in their answers. The author also gave techniques on what to watch for during the study, for example, being sure to note non-verbal communication. The advice on types of questions was to ask open-ended questions as well as task-oriented questions.

The author also gave pointers on when to perform usability studies, the best person to conduct the study, etc.

http://www.highedweb.org/2007/presentations/PST6.pdf

Necessary Excise

Cooper chapter 11 talks about eliminating excise. Cooper claims that software is often "top heavy", "requiring extra work". By eliminating excise, you decrease the amount of cognitive and physical effort needed to use the software or product. My question is: is there a need for excise in certain situations? When it comes to user interaction, aren't there times when you need that confirmation or alert message? There may be information that needs to be relayed to the user before completing a task. For example, I have always appreciated the "Are you sure you want to delete this?" alert message. That has saved me plenty of times, but I do understand that these types of things irritate some people. So where is the balance? Or do we not need a balance? I guess in the end Cooper does imply that excise isn't necessarily bad if the effort to complete the excise task is kept to a minimum. Then how do we assess the effort? When you are in a rush, any extra click is too much. Your thoughts?

What is Banner Blindness? Is it real?

Banner blindness? I admit that I am new to HCI; in fact, to me HCI meant human factors in system design, period. While browsing around on the web trying to get some bearings in this new subject, I found an article about banner blindness. It is a phenomenon where website visitors ignore banners, or anything that looks like an advertisement, on a website. I thought, well, this is silly, considering the fact that a lot of websites make their money from advertisements. I mean, who would pay for something that's being totally ignored? On the other hand, I thought, maybe that's why Craigslist is so "boring" in appearance: they know about this. Here is the article link:

http://www.useit.com/alertbox/banner-blindness.html

Having read through the article, I know from my personal experience that I agree. I never actually pay much attention to the "happenings" on the side. The article also states that when users do engage with an ad, it is usually one that looks like a Windows command. I AGREE. I remember the first time I saw one of those... I CLICKED. That was the last time I was fooled, and nowadays I don't think those are popular anymore.

Now I understand why advertisers are just plain old bombarding web browsers with advertisements. Have you ever been reading an article when a McDonald's ad just pops up right in front of it? Then you have to search for the mystery "close" button. Now I know why! The ads want to be heard, or seen in this case. This is another reason why I am beginning to accept that this phenomenon is real.

Creating Effective User Personas

I found creating personas to be quite challenging, so I looked around the web for other resources that might make it clearer or at least give a different perspective. I found one article by Lene Nielsen that gives 10 steps to use when creating usability personas. Please find the link below:
http://www.masternewmedia.org/interface_and_navigation_design/usability/how-to-create-effective-personas-20071004.htm
I found this article helpful because it lists steps to guide you in developing your personas. I thought Cooper (Chapter 5, p. 77) did a great job with this also. The article gives a different perspective on the step-by-step process for developing effective personas. One part I found interesting in the article, as opposed to Cooper, is the verification step, where you find data to support your persona descriptions and scenarios. This made sense, because when creating personas you use the collected data to get an idea of what your users are like. But is what you came up with correct? Does it make sense? Even though the article seems to focus more on development as opposed to the “user”, I think you can pull different techniques and ideas from each process (Cooper's and the article's).

Saturday, August 8, 2009

“Trust” as service, or all about Benjamin?

As I try to find things to post here, I realize I have been gathering materials for a month or so; I just couldn't find the time to post them. The projects were killers. :(

I came across this Web 2.0 site called Smartypig.com, a service site where users are able to set their own savings goals. It's a virtual piggy bank. At the same time, you can share your savings goal with others (family members too) so that they can contribute to it if they wish. Since this is Web 2.0, it integrates with Facebook and other social network sites.

I am a little baffled by this service model and user behavior. I may be old school, but I don't get the idea of sharing this kind of personal detail. Saving is a good behavior to encourage, especially in these tough economic times; everyone wants to save some, but I am not sure about doing it in public.

Then I remembered seeing sites related to money lending a few years ago. Instead of getting a loan from a bank, you ask your peers. These sites are even more popular now; some college financial aid offices even list them as an alternative way to get loans.

"Matching Borrower with Lender, Social-Network Style"
"Peer to Peer Education Loans"

Other sites are…
GreenNote.com
Lendingclub.com

Obviously, lenders and borrowers are getting something out of this service. The needs of the two parties are different, but the service brings them together to provide what both users want. To make this service legitimate, the site has to create “trust” with its users. I wonder how this kind of service got started... Peer-to-peer service is common, but the amounts of money these users deal with are on a completely different level. Will this be a common service in the future? Is this part of the cloud-service trend?

Broken Service. Can HCI save everyone?

As we wrapped up the service design course, I thought I would ask others about this issue. Digital copyright has been and will be a messy experience for both content owners and users. No technology so far has seemed to work well for both content providers and customers.

Many industries, particularly music and movies, have been out of control in how they deal with copyright infringement. The general public is fed up, but the industries are not responding to what customers want. The RIAA is actively going after individuals who violate copyright. The recent case of a grad student from Boston who was ordered to pay over $22,000 per song (for a total of 30 songs) shows how broken the system is.

The AP thought they deserved to get paid for every word they publish, so they implemented iCopyright to charge by the number of words quoted, starting at $12.50. The AP also got copyright bounty hunters to go after those who violated its copyright. Guess how this ended up working? James Grimmelmann, an associate professor at New York Law School, submitted an excerpt from a letter by Thomas Jefferson, obviously not written by the AP. Of course, the AP said he owed $12.

I understand that you want to get paid for what you create, but the overall service is broken; the experience is not pleasant for either creators or users. Smaller music service sites are trying different models to accommodate both musicians and users. Are there any ways to design a workable and realistic service for these kinds of issues?

Monday, August 3, 2009

Mock up for touch?

For our first project, I found that the observations and feedback led me to a multitouch interface, or at least touch. When it came time to create a running mock-up, I found the existing tools lacking in this area, because using a mouse in an environment intended for touch skews the usability benefits and issues. I don't see this as a software issue, but more as a lack of test hardware. I could go buy an iPhone and create an application specific to it for testing. A tablet or one of the new touch monitors might work, but again, this is something I'd have to buy. I could potentially use a multitouch device on a university campus if I lived near one. None of these options are really feasible for me at the moment, so is there a solution for individuals working on small touch-based projects? Is there a way to test touch-based interfaces with a mouse and gain more accurate interaction data (beyond cognitive measures)? Or is a paper prototype potentially the best way to test with users, since they can simulate dragging, touch objects with their fingers, and experience moving their whole arm?

I was pointed to this post about the SDK for Microsoft Surface which permits multiple mice to simulate multi-touch interfaces. This would help verify the correct manipulation features, but would not provide accurate data for ease of use or speed for this different form of interaction. It seems like this area is currently lacking in prototyping opportunities, but then maybe that is the price of using interaction concepts with newer technology.

Friday, July 31, 2009

Direct Better than Clean

We often read that links embedded in a paragraph should be placed on words that describe what the user will see after clicking. A coworker recently shared this post from Dustin Curtis about his personal experience as he modified the text around a link to his Twitter account. He started with simple statements like "I'm on Twitter." with his Twitter account linked from the word "Twitter". He ended up with the link on the word "here" in a sentence, which isn't as elegant a presentation but seems to be more effective. He attributes the success of this presentation to the forcefulness of the phrase "You should follow me on Twitter here," and the way it directly addresses the person reading it.

I find this interesting because it is definitely not the most sophisticated solution, but if you're trying to get people to act on a link, sophistication may not be what you want. Instead, you need to take the route of advertising, address the broadest range of users, and tell them what they should do instead of hinting at it.

What do you think? Is it more important to cater to the masses even if the design is not as clean, or is there a way to marry the two and have good, effective design that pushes action?

Augmenting PC Reality

My last post on Augmented Reality showcased using mobile devices as the viewport. Other AR experiences are leveraging increasingly available webcams, particularly on laptop computers. Consider LevelHead, a game that projects the game environment onto cubes that serve as the game's control mechanism:

levelHead v1.0, 3 cube speed-run (spoiler!) from Julian Oliver on Vimeo.


From a developer standpoint, the biggest barriers to adopting these techniques are the complexities of computer vision and of manipulating 3D space. As a developer, I'm particularly interested in ARtisan, which tries to simplify the use of the Flash AR tools FLARToolKit and Papervision3D. Flash is uniquely positioned for the AR space because of the ease with which applications can be distributed and its access to the host computer's camera and microphone. If you have a webcam, I would definitely recommend checking out some of their examples.

Finally, this post has some other interesting examples of AR experiences.

Thursday, July 30, 2009

Design without research

This two-part article was in Interactions magazine. The gist of it is the author asking: how far do we take user feedback? What lengths should designers go to in order to get user feedback and incorporate usability results? The author makes the point to keep it in perspective; designers don't always have to go out and get extensive user feedback.

Part I of the article concludes:
"Like everything else in design and research (often overlapping terms that I’ve avoided specifying here), the answer to “design without research?” is, it depends. Among other factors, it depends on how much we already know about our customers (perhaps through our own experience). It depends on what we hope to learn and how we want to use that learning to create action. It depends on where we are in the development timeline of a product or service, and whether the product or service is new, me-too, innovative, or a redesign. It depends on business constraints like time to market, the maturity of the category, and the cost to evolve the design. No doubt it depends on other things as well. What do you think it depends on?"

Part II of the article makes a point I strongly agree with: when presenting design alternatives or business concepts to users in order to get feedback, it is important to give the concept the best presentation possible, meaning the design alternatives are the most accurate representation of the concept you are getting feedback on. He summarizes the article with the following:
"When we’re using research to understand whether or not a concept is going to address people’s needs, we need design to create the best representation of that concept, and we need design to translate the output from that research into the next iteration of that concept. We can conclude that research needs design, before and after. Rather than treat research and design as separate activities (sometimes performed by siloed departments or vendors), I would encourage all the stakeholders in the product development process to advocate for an integrated approach in which design activities and research activities are tightly coordinated and aligned."

For the full articles, visit the following links:
Part 1: http://interactions.acm.org/content/?p=1255
Part 2: http://interactions.acm.org/content/?p=1273

Saturday, July 25, 2009

Bear as user, part 2

Some of you might remember the popular videos a while back of a bear user-testing a bear-proof trash can. Janea Triplett highlighted the story here.

Now a most-popular NYTimes article is related: designing a food canister that a bear can't open. It's almost anti-user design: know your user so well that you make the product unusable to it. Or perhaps it's just an example of multiple stakeholders: bears and campers.

Looks Good Works Well

I recently came across the blog of Bill Scott, the Director of UI Engineering at Netflix (http://looksgoodworkswell.blogspot.com/). Bill has some terrific blog posts as well as embedded SlideShare presentations that are quite thought-provoking about design, functionality, and user interaction. In particular, the presentations by Stephen Anderson are exceptionally done and deliver solid messages. You can view them at http://www.slideshare.net/stephenpa. I especially appreciate slide #63 in this presentation: it illustrates how usability is about removing friction while psychology is about increasing motivation. That slide was created by Joshua Porter, another great source of presentations if you are interested. Joshua is the founder of Bokardo Design, a company that specializes in social web app design. You can also find his slides on SlideShare.

One more interesting quote from Stephen Anderson's site:
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” — R. Buckminster Fuller

Thursday, July 9, 2009

Some Great Resources for Project 3 and Service Design

After several days of not reading my Google Reader feed, I finally got around to catching up on some of my usability & UX news.

First off, for project 3 on encouraging people to go for walks, you may find Social by Social (a practical guide to using technologies for social impact) helpful, at least if you find that the lack of physical exercise, like walking, is essentially a social problem and not about the individual. I discovered this site from Experientia's blog. I, admittedly, have a bias in looking at things to see the social--even in creativity--but I haven't actually looked at the issue of why people walk (or don't) so I can't say whether or not this is a viable solution path (only your data can tell you that).

Also, and I wish I had known about this site earlier (so I could pass it on earlier), there is a new site on Service Design tools, which goes over various ways of communicating service design to clients (so a lot of what the last and current service design project deliverables covered).

Also, in case there are still any lingering doubts about the benefits for design-oriented folks of knowing about implementation, I recommend reading this blog post about a designer now wearing more of a developer hat ("programming for non-programmers", mostly, but the reflections are still very relevant).

Hope you all find some of this useful.

Tuesday, June 30, 2009

Civility in the Workplace

This will be more like a tweet than a blog post, but Jean McGuire, who was kind enough to attend class last week and provide some of the sociological side of the discussion, sent me this blog post on Civility in the Workplace by her former colleagues in Iowa State's extension offices. It discusses the role of agile software development and the importance of designing applications in partnership with those who will be using them.

Although, since I'm writing up a blog post anyway, I recommend checking out this hour-long interview with Don Norman, where he talks about service design, user experience vs. usability, and several other topics. Projects 1 and 2 focus primarily on what I refer to as 'classical usability', with a lot of focus on Cooper for design and Tullis for evaluation; it's about creating applications in which the user can complete a task because the design follows the cognitive concepts in the layout. The service design project is more about the holistic user experience, taking into account not just usability but also the emotional and other aspects that allow a user to forgive mishaps so long as there is that extremely salient service moment that leaves the customer happy with the overall experience (Norman uses the example of how at Disney everybody hates the lines--which are everywhere and make up the majority of the experience--but almost everybody would go back).

On a final point with service design, I am going to throw out this controversial idea (feel free to discuss in the comments):

Computers will never replace the best waiters and waitresses but very few waiters and waitresses excel past the mediocre level* and computers will always outperform the mediocre waiters and waitresses.

*I am defining "mediocre level" as taking an order and delivering food, i.e., doing the job at the bare minimum.

Tuesday, June 16, 2009

Augmented Reality

Two interesting augmented reality videos came to my attention tonight. The first, via this post on O'Reilly Radar, is a first-person shooter created by the Georgia Tech Augmented Environments Lab:

The second is Layar, an Android application from the Netherlands that overlays useful digital information over a real-time video from the phone's camera:

Imagine a shopping experience that shows furniture designs as they would look in your own home, or advertisements that render a three-dimensional view of a product when viewed "through" a mobile device. There's lots of potential for abuse, but as these composite interactions become more popular I expect we'll see some really interesting innovations.

Monday, June 15, 2009

A Few Articles to Check Out

Now that project one is done (or is being wrapped up by some of you), I found a couple of links in my morning RSS feed that I thought some of you might have an interest in. In particular, I think the report on Wikipedia's Usability Initiative will be helpful for all of you going forward with usability research, as it details all of the steps they took in their evaluation process. Most corporate white papers and academic reports on findings don't detail their entire process, so having a resource where they have put everything out there can be helpful in understanding the full scope of what goes on in usability research, if you haven't had any experience outside the limited scope of most classes (the scope for this project was slightly more realistic, but due to the limited time frame shortcuts, of course, had to be made).

On a related note, and something I didn't emphasize enough on this project, a good designer documents all decisions and should be able to provide some type of justification for each one. They might not like it, but that's the way the business world works (not just engineering); if you can't give a reason, then a firm hiring you is justified in dismissing your input, since nothing backs it up. There should be a reason why the button is placed on the lower left instead of the upper right, and you should have some type of reason behind choosing the line width and color (note: I only mean this to a degree; if the difference isn't perceptible to most people then there's no issue, but a thicker line creates more of a disconnect between different types of content, and different shades can convey different emotions). For this project I'm not making you justify minor aesthetic details like that, but be aware that some people will ask (and you'd better have an answer).

And here's something to get the marketing people all up in a tizzy (do people still use that word, really?): there's a piece written about how products are worthless. This gets into a lot of what we're talking about with the social theories, Where the Action Is, and service design, in that a product itself does nothing for a user until it becomes useful to the user in a given situation/context. For better or worse, we are transitioning into a service/knowledge economy in the Western world, and talking about the world in terms of products is so last century--you have to think in terms of 'value added', 'experience', etc.

Sunday, June 14, 2009

So what's the symbology there?

As I read the opening chapter of Paul Dourish's Where the Action Is, I couldn't help but pause to reflect on the phases through which interaction has evolved over the years: electrical, symbolic, textual, and graphical. Dourish characterizes the transition between phases as "a general trend that emerges in a number of different ways" (7). What he didn't mention is that each new interaction paradigm has served to supplement the former in a continual coevolution. This coevolution will be essential as we try to make the most of new devices and manifestations of computational power, particularly in relation to multi-core and distributed processing.

As a mathematician and language geek, I'm most drawn to interaction optimizations at the symbolic level. Through my programming career, I've been exposed to dozens of symbolic abstractions, ranging from Logo and my TI-85 to academic amusement with Scheme to "real-world" languages like C and Java. For the most part, however, the languages have been more similar than different. And with the exception of a course on functional languages, the style of programming was always the same: tell the computer how to push these bits around. For the majority of problems, that was simple enough, particularly with the steady march of CPU speeds in step with Moore's Law.

Well, physics decided to step in the way of our 20-GHz CPUs, leaving us instead with dual- and quad-core systems. There's still an increase in processing power, but taking advantage of it requires new ways of thinking. Among these are various new (or new-again) symbolic abstractions that will be essential to get developers back to the point where the electrical systems are sufficiently hidden that we can get on with the real work of building the next generation of textual and graphical interfaces. As a developer on the Microsoft platform, I find two in particular of interest: Axum and F#.

Axum


According to Microsoft, "Axum is a language that builds upon the architecture of the Web and principles of isolation, actors, and message-passing to increase application safety, responsiveness, scalability, and developer productivity." In other words, it's a domain-specific language (DSL) built specifically for scenarios involving distributed and concurrent computation. I haven't built anything with it yet, and at this point it remains very much experimental, but the concept has a lot of promise to facilitate the creation of applications and frameworks that can seamlessly handle the new architectures that have emerged. For more information, check out the Axum page on MSDN.
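
Axum itself is experimental and .NET-specific, but the isolation-plus-message-passing idea it builds on can be sketched in a few lines of Python (names are illustrative): the agent below owns its state and can only be affected through its queue, so no locks on shared data are needed.

    import threading
    import queue

    def counter_agent(inbox: queue.Queue):
        total = 0                    # state isolated inside the agent
        while True:
            msg = inbox.get()
            if msg is None:          # sentinel message: shut down
                break
            total += msg
            print("running total:", total)

    inbox = queue.Queue()
    worker = threading.Thread(target=counter_agent, args=(inbox,))
    worker.start()
    for n in (1, 2, 3):
        inbox.put(n)                 # messages are the only way in
    inbox.put(None)
    worker.join()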

F#


Functional languages have been around for a very long time, and they are based on the lambda calculus, which has been around even longer. But only recently have they started to come back into mainstream view, due in part to new functional languages for the major development platforms: Clojure and Scala for the Java Virtual Machine, and F# for .NET. The other reason for this comeback is the ease with which functional languages handle the problem of concurrency through the use of immutable values and data structures. These languages also provide other higher-level abstractions that move developers away from specifying "how to push these bits around" and more toward specifying what they are trying to accomplish.
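
F# syntax aside, the concurrency payoff is easy to illustrate in Python: because the function below is pure and its inputs are immutable tuples, mapping it across processes requires no locks. This is a sketch of the idea, not an F# translation:

    from concurrent.futures import ProcessPoolExecutor

    def magnitude_squared(sample):
        """Pure function: the result depends only on the immutable input."""
        return sum(x * x for x in sample)

    if __name__ == "__main__":
        samples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]    # immutable tuples
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(magnitude_squared, samples)))  # [14, 77, 194]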

In both cases, there is clear value in solving the difficult problems once and letting symbolic abstractions shield the rest of us from ever having to think about it at a low level. It will always be necessary to have some knowledge of what's happening behind the scenes, just as managed memory doesn't free the developer from memory considerations completely. But leaning on advanced symbolic abstractions is one more way to advance the state-of-the-art in HCI.

Thursday, June 11, 2009

Service Oriented Design Reality?

True Service Oriented Design vs. Large Corporate Politics

We have started to discuss service design and its concepts. I really like the idea of the experience as a whole interaction that focuses on you as a customer. I just recently read about some relevant service design topics on the web.

Dustin Curtis, a talented user interface designer, decided to redesign the American Airlines website after having a terrible time booking a flight through it. Dustin redesigned the front page in a few hours and suggested some very important things, most importantly the “customer experience”. After he posted his redesigned site plus some suggestions, he actually heard back from a user experience architect at AA! The architect explained a few things.

A letter to AA:
http://dustincurtis.com/dear_american_airlines.html

A reply from AA:
http://dustincurtis.com/dear_dustin_curtis.html

The letter from the AA architect touched on a few things about the front- and back-stage design process and how completely messed up the system is when people have to work in a bureaucratic/political corporate environment. A website, especially AA's, suffers tremendously when many departments are each in charge of an area of the site. Each department has its own objectives and goals to meet, and by the time all of the content is added to the site, it has become a large mess for customers. As a customer, you know what you need to do on their website; it is not a very long list of tasks, but the site makes it complicated.

Dustin responded with a letter and mentioned that “customer experience is the new brand”. I have to agree with the idea. Everything, and I mean everything, the company communicates to the customer, or the other way around, must be part of the corporate strategic plan. You have to shake your head at the airline companies. I am not sure about you, but I haven't had good experiences when I've flown. I don't care for the security procedures, booking, pricing, airports, scheduling, food, seating, and the list goes on. Flying is a “have-to” thing to get from one place to the next quicker. So you have to ask: why don't the airlines make the experience more pleasant, especially on the front end, where a customer has to interact before flying? The experience during and after the flight is important too.

The UI design on the web, and the services people have to go through to be served, are necessary evils that corporations must rethink. Sell the experience!

Another interesting read about painful process of corporate product development:
http://www.core77.com/blog/business/the_painful_process_of_corporate_product_development_13589.asp

New experience in search engines:
Another quick note about new developments with a few search engines that were just released to the public.

Microsoft Bing is a new search engine that displays related content and is a little more visually interesting to browse. It shows you related content for what you are looking for. Photo and video search displays results somewhat differently from what you'd expect from Google; I especially like the thumbnail views of videos, which actually play a few seconds when you mouse over them. The $100 million marketing campaign is not just hype. Bing has some minuses, but the concept is good. However, there is already talk about copyright infringement of music/movies in the thumbnail views.

Bing
http://www.bing.com/

Wolfram Alpha, from the maker of Mathematica, is an interesting knowledge-based computing release. They call it “a computational knowledge engine”. Is this the rise of the machines? We'll have to see...

As of now, Wolfram|Alpha contains 10+ trillion pieces of data, 50,000+ types of algorithms and models, and linguistic capabilities for 1000+ domains. Built with Mathematica—which is itself the result of more than 20 years of development at Wolfram Research—Wolfram|Alpha's core code base now exceeds 5 million lines of symbolic Mathematica code. Running on supercomputer-class compute clusters, Wolfram|Alpha makes extensive use of the latest generation of web and parallel computing technologies, including webMathematica and gridMathematica.

Wolfram Alpha
http://www.wolframalpha.com/

Are we seeing a new kind of search engine here? Google was loved by everyone when it came out after a series of ugly search engines that didn't help you actually find the information you needed; Google's simplicity made a difference. Is Google behind now? What do users want from a search engine? Are they looking for particular information, related information, or simple browsing? Users will be the judge of which search engine provides the value and experience they want.

Tuesday, June 2, 2009

Model-Driven Requirements vs. Document-Driven Requirements

The purpose of this blog post is to discuss the idea of model-driven requirements vs. document-driven requirements. One may say "this doesn't seem to have much to do with HCI", but it does, because obtaining and maintaining requirements requires a lot of customer interaction, internal and external. You have your internal customers, for example subsystem- and component-level groups; these are the groups that will use your requirements to develop their systems. Stakeholders, end-users, etc., can represent external customers; these are the groups that will (or should) deliver requirements to you. When it comes to usability studies or analyzing your customers' needs, which method is more effective? Sometimes in document-driven requirements (i.e., Microsoft Word) it is difficult for the requirement writer to express what they really mean. Would it be more helpful to supply the customer with a model that they can "play with" and provide feedback on to the design group? In essence, the design team could use a model or "service design" as a requirements development tool. Is this possible? A better way? Your thoughts...

Sunday, May 24, 2009

On Selling Usability

If anybody is having issues with the notion of "selling usability," here are a few resources to pull from:
Part of selling usability, in my opinion, is also making sure to include additional stakeholders (other than the users) within the process--especially since users aren't always the customers and it might be harder to sell usability if there's not that direct connection to the almighty dollar. Here are just a couple of links discussing involving stakeholders in the user experience process:
The other thing to keep in mind is that on a lot of projects you may run into a project manager who views the project as their "baby" and becomes the chief impediment to fixing problems, because they designed/coded it originally and they know what their product should be (in cases like that, if you can move to a different team, I recommend it--let the manager kill their own product). The other thing you will probably run into a decent amount is a team where everybody uses the product, considers themselves a "user", and argues that they know what's best in terms of usability--that's where video clips of users flailing around in the product are particularly helpful in convincing team members that a problem exists.

Thursday, May 21, 2009

Balsamiq as mockup tool?

Keith suggested using Balsamiq, maybe. (http://www.balsamiq.com/products/mockups)

It looks pretty, though I wondered if the widgets could be interactive.

Stephen