Thursday, June 30, 2011

What is Ethnography?

Ethnography is a research methodology, not a specific technique for collecting data (unlike participant observation or interviews). In fact, it is a multi-technique approach: an ethnographer uses a mixture of techniques appropriate to her or his situation and adapts each technique to that situation. Ethnography tries to integrate the different methods into one holistic study.

Ethnographers frequently use participant observation to gather data. As a participant observer, an ethnographer participates in the society or culture being studied by living amongst those people. Yet, through reflection and analysis, the ethnographer retains an analytical or observational position so that s/he can describe and interpret the subject of the study. Through immersion in the field (the project and the context in which the project is working), the ethnographer accumulates local knowledge. Research takes the form of diverse relationships and ‘conversations.’ Even when it includes apparently impersonal methods like surveys, these are treated as part of an ongoing conversation or relationship with a place and with people. Every experience, conversation, and encounter can be treated as ‘data’ alongside more formal research activities such as interviews.

A research approach such as this does not require interviews and conversations to be completely structured. While the researcher is broadly aware of the issues to be addressed, the precise questions, and their sequence, emerge only as conversations and interviews progress. Thus, data is collected through ‘chains of conversations’. Similarly, the researcher begins by identifying key informants. The reliability and veracity of those chosen as key informants is crucial for the ethnographer; to ensure reliable information, ethnographic researchers triangulate what they learn from key informants with other sources. Talking to the key informants points the researcher to people who may provide further information. Thus, the collection of data progresses through chains of conversations and informants, and the emphasis in sampling is not on adequacy in a statistical or numerical sense but on identifying events and people that contribute to the narrative. Nevertheless, this narrative is scientific, i.e., its acceptance or rejection is subject to testing.

To reduce the influence of personal bias or ideology, ethnographers are trained to be constantly self-critical and reflexive, especially in the field.

Reference:
Balaji, Parthasarathy, Aswin Punathambekar, G. R. Kiran, Dileep Kumar Guntuku, Janaki Srinivasan, and Richa Kumar. (2005) "Information and Communications Technologies for Development: A Comparative Analysis of Impacts and Costs from India" Project Report, Department of Information Technology, Ministry of Communications and Information Technology, Government of India.

Service Design: Global Conference

While working on our group project I came across the Service Design Global Conference, "From Sketchbook to Spreadsheet," and would like to share all the details with you...

Call for Contributions
Open to everyone, including business people, entrepreneurs, academics, and service design practitioners. The deadline for initial submissions is June 30. For more details,
click here


Themes



  • Design and business collaborating: what working, learning and building together looks like, what works, what doesn't

  • Measuring success - what? where? when? how?

  • Service Design and how it works at different levels of organizations

  • Service Designers working on new, 'wicked' problems

  • Service Designers designing business, business designing services

  • Not everyone who creates a service calls themselves a service designer

  • How does the business community view service design?

  • How do organisations access, buy and value service design?

  • What makes a successful (service) design business?

  • Marketing and monetising service design

  • What might designers learn from business, and vice versa?

  • What will service design look like 5, 10, 15 years from now?


Venue
Palace Hotel
2 New Montgomery Street
San Francisco, CA 94105


For more details click here

Wednesday, June 29, 2011

Don’t Be So Quick To Criticize

I ran across an interesting article entitled “Six Things User Experience Designer Forget When They Criticize Websites” that discusses how UX designers are quick to criticize before understanding why certain decisions were made. This is something that I have caught myself doing.

In fact, recently, I was helping out a friend who is starting her own business. She had never created a web site, but had found a hosting company that catered to non-technical people. The site she came up with wasn’t what I would have done, and I was quick to think of a million things she did wrong. Fortunately, I didn’t blurt all of those out to her, because it wouldn’t have sounded constructive at all, and constructive feedback was what she had asked for.

Once we were able to sit down to talk, I asked her what the purpose of the site was and who the target audience was. These two questions are critical for me to provide the best site design possible. After finding out those answers, several of the things I was criticizing actually made sense for her site. I was amazed at the amount of research she had done to come up with what she did, and she was able to defend a lot of the decisions. Not all clients can do this, but when they can, it is very helpful.

One of the main points in the article talks about how business decisions can sometimes trump the user experience. In this example that was exactly what happened. Hopefully, when it does happen it’s not such a poor user experience that it hinders the bottom line.

A/B testing

Most of us are familiar with the concept of A/B testing. Also known as "split testing" or "bucket testing," A/B testing is a method in which two design samples are presented to real users in live circumstances. Each sample is tracked, allowing the results to be compared so that the business team can decide which option works better for users.

For instance, one might test two different headlines on a landing page. Whichever variant outperforms the other tells you which version of the page to keep, as sketched below.
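Here is a minimal sketch of how that "which one outperforms" question is usually settled, using a standard two-proportion z-test. The numbers, variable names, and the 1.96 cutoff (a two-tailed 95% confidence level) are all illustrative, not taken from any particular tool.

```typescript
// Hypothetical sketch: comparing two landing-page headlines by conversion rate.
// All numbers are made up; 1.96 is the usual two-tailed 95%-confidence cutoff.

interface Variant {
  visitors: number;     // users who saw this headline
  conversions: number;  // users who clicked through / signed up
}

// Two-proportion z-test: is the difference in conversion rates larger than
// what random noise alone would plausibly produce?
function zScore(a: Variant, b: Variant): number {
  const pA = a.conversions / a.visitors;
  const pB = b.conversions / b.visitors;
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const stderr = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  return (pA - pB) / stderr;
}

const headlineA: Variant = { visitors: 5000, conversions: 260 }; // 5.2% conversion
const headlineB: Variant = { visitors: 5000, conversions: 310 }; // 6.2% conversion

const z = zScore(headlineB, headlineA);
console.log(z > 1.96 ? "B outperforms A (95% confidence)" : "No clear winner yet");
```

Most A/B testing tools run this math (or something more sophisticated) for you; the point is simply that "outperforms" means the gap is bigger than chance alone would explain for the traffic you collected.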

Why do we need A/B testing?

  1. Budget-Friendly
  2. Measure Minute Differences
  3. Resolves Conflicts
  4. Measures Actual Behavior

Before getting into testing, we need to make sure we accomplish a few key points:

1) Establish testing goals and parameters

2) Determine a sufficient test interval

3) Create 1-3 designs

4) Redesign based on the first round of test results

5) Evaluate the redesigns in A/B split tests

In conclusion, A/B Testing is a valuable addition to other types of user research. It provides credible, real-world numbers and guidance to inform any design decision.

REACTable and New Directions In Mobile Design and Development

So, I finally re-discovered the name of the touch musical device I mentioned in class yesterday (the one on which you can place/move around different objects to create musical tracks): REACTABLE. Since I personally don't know much about "phicons," I can't say much about this machine's relevance to this concept; however, it is well within the range of "tangible user interfaces" and centers around the tracking/programming of actions (sounds) based on fiducial markers. Check out this basic demo to see its elementary functionality: http://www.youtube.com/watch?v=0h-RhyopUmc. If you want to take a look at actual production potential, there's a bunch of YouTube clips available, including: http://www.youtube.com/watch?v=Mgy1S8qymx0. Obviously, this is geared towards the electronic scene, but I thought it'd be something cool to share as tangible UIs relate well to some of the computing perspectives we looked at during our lecture yesterday.

Another interesting item I wanted to bring up in this post looks at the intersection of mobile computing and the psychological practice of "homing" (related of course to the sensation/feeling of "being at home"). What an article I read this morning highlights is that there are many psychological phenomena for which mobile computing could be adapted in order to enhance overall life experiences. For example, in this instance of "homing," there is a tendency for humans to "differentiate" based on location (some people have a house in an urban area and one in a more secluded, beach/suburban/rural area -- they leave different wardrobes in each to enforce "differentiation" of their various "homes"); what this rather directly entails in mobile computing is really context-dependent programming and functionality. How can devices and software be more tailored to the physical contexts in which we find and establish ourselves?
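To make that last point a bit more concrete, here is a hypothetical sketch (the homes, profiles, and the crude nearest-home logic are all invented) of what context-dependent behavior might look like in a browser or hybrid mobile app, using the standard geolocation API.

```typescript
// Hypothetical sketch of "homing"-aware behavior: pick a device profile based on
// which of the user's "homes" they are currently nearest to. The homes, profiles,
// and the crude lat/lon distance are all invented for illustration.

interface Home {
  name: string;
  lat: number;
  lon: number;
  profile: { wallpaper: string; ringtone: string; quietHours: boolean };
}

const homes: Home[] = [
  { name: "city apartment", lat: 41.88, lon: -87.63,
    profile: { wallpaper: "skyline", ringtone: "subtle", quietHours: true } },
  { name: "beach house", lat: 27.77, lon: -82.64,
    profile: { wallpaper: "waves", ringtone: "loud", quietHours: false } },
];

// Rough "which point is closer" metric; good enough for picking a nearest home.
const distance = (h: Home, lat: number, lon: number) =>
  Math.hypot(h.lat - lat, h.lon - lon);

navigator.geolocation.getCurrentPosition((pos) => {
  const { latitude, longitude } = pos.coords;
  const nearest = homes.reduce((a, b) =>
    distance(a, latitude, longitude) < distance(b, latitude, longitude) ? a : b);
  console.log(`Applying "${nearest.name}" profile:`, nearest.profile);
});
```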

Like the REACTable example, this idea of complementing psychological "tendencies" with systems like mobile computing is one that I think we will see slowly evolve, and it has clear implications for practitioners of HCI and UX.

Tuesday, June 28, 2011

Mike Oren on Miscellaneous Topics: Service Design, Google Music, and Google+

It has been awhile since I've written a post, myself, on this blog. First, I want to thank all of my students who have posted so far--there have been some truly exceptional posts and the posts have been fairly solid in general. It's a fun and interesting challenge teaching a combination survey (wide range of topics covered, primarily at the surface level), project-based (in-depth hands-on work), seminar-style (open-ended, discussion-based, with guest speakers) course where all of the students are remote.

I imagine this can be equally challenging for students, even given the lack of required readings, since (as many of you have discovered) some outside reading is often required to do well on aspects of the class (the survey component is to make sure you have the base knowledge to find the resources you need to do the work). To be honest, it wasn't until Robert sent me an e-mail this weekend that it really struck me that what I am doing with this course is at all odd or different--I have taken courses with all of these components, but they were all separate. I know I have always said the course can easily be split in three, which I would do if the resources were available for three courses on these topics; however, I had always thought about that in terms of just the sheer amount of information I try to pack into the class, but I can see how that can be the case for the delivery method as well.

The class, when first taught in 2009, was not originally set up to be as much of the mixture it has become--while I'm reusing a decent amount of the slide content, I am also adding a lot of new content, activities, material from the former HCI 521 (cognitive aspects of human-computer interaction), etc. to the mix. Part of that content mixture accounts for the current format of the course: 521 was designed as a survey course and the original 596 as a predominantly project-oriented course (with a heavy dose of theory, which I'm thankful I was allowed to maintain and even expand--my original draft for the 2011 course cut the theoretical discussion due to the practitioner focus, but when the lectures on situated action and phenomenology didn't bomb, I reintegrated them with some modifications).

One thing I have been pondering with the course though is why I see very little discussion during the lectures on Adobe Connect. In classes where I've monitored just the discussion feed or have taught a course in a chat room as an experiment, the side/back chatter is pretty dominant. Here, unless I specifically pause for comments, I rarely see much going on in the chat. Is this being done simply out of politeness/respect? Are there chats going on but private ones? In some ways it does make things nice because this way I don't have to parse the chat for things to share vs. random (but often related) talk but on the other hand, I wonder what conversations are being left out. In my own experience, much of the back chatter in these online discussions of live class sessions tend to be related--links being shared, personal experiences being exchanged, etc. Another possibility is that there's just simply an overload going on--am I simply presenting too many divergent topics too quickly? Transitions are something I'm acutely aware of--integrating the topics I am is often like throwing paint at a wall, some of it may form into a unified splotch while other things will appear far removed as little drops. My hope though is that everybody is getting something useful out of the course. I should also add, that some topics people ask to be covered in more detail, I purposely avoid as there are other courses that are setup to cover them (e.g. graphic design, research methods/full user research cycle, etc.)

Switching gears to my original intent for the post... I was chatting with a friend of mine who works for IDEO, which as I've mentioned in class does quite a bit of work on service design (they are one of just a few companies in the US that do this kind of work). She shared some of her favorite examples with me (from the Boston office where she works--although she was not on these projects), and I thought I'd pass them along as great case studies of the service design process and what can be accomplished. The first is one I think we can all have opinions on: TSA Checkpoints. The second, she wasn't sure if it was (strictly speaking) a service design project, but it does have some aspects of it: Bedsider. I should note that both of these examples take the more traditional service design approach rather than the information flow focus of Glushko's take, which I subscribe to more, as I feel the equal focus on the backend data processing and flow has a lot of potential to really improve service quality and reliability (although it won't be as readily noticeable by users/customers). I am keeping a summary of these cases out of this post, both due to the current length and to allow a student to write up a blog post about one or both of them (note: it should be more than a summary--add some analysis or synthesize it with user experience design in some way).

Jumping into another unrelated topic... I received an invitation to join Google Music last Thursday. I managed to finish getting all of my DRM-free music on to the cloud by the end of the weekend, but during that time I tested it out on a decent (but by no means great) Internet connection, only to find it useless due to the apparent lack of buffering--the song was choppy and just didn't really play. I also tested it on my iPhone, but it apparently doesn't currently work on the iPhone (I have yet to test it on the iPad, but I am expecting a similar problem). It's still in limited beta, so these issues aren't terribly surprising. The interface itself is pretty slick and seems to work well--and with an Android phone, this could definitely be the media killer "app" (a solution just as, or possibly even more elegant than iTunes integration with the iPhone/iPod for music). I'm really not fully sold on this yet, but so far I like it much better than Amazon's Cloud Player, which I've also tried. I'm curious what everybody thinks about possible use cases for Google Music and general usefulness of cloud music players in general.

Final topic: I know we talked about this in class, but I have to rave, again, about Google Plus. Even the integrated chat feature ties in with the social circles--so not everybody is allowed to chat with you; however, I would actually prefer if there were settings where I could choose to allow certain circles to chat with me at various times (e.g. family and friends view me as offline during business hours; business contacts, who are not also friends, view me as offline when I'm on vacation; different status messages for different groups--e.g. my chat status for friends might be links to a funny video, but to work colleagues an intranet page about a major policy change). I also discovered that Sparks was not the semi-random social area around common interests that I thought it was (I had not viewed any demo videos)--it's apparently just a means of finding articles/photos/videos to share, which is not nearly as interesting to me (not sure if I'll ever use it again beyond my quick test). Shared messages also appear to lack a character limit, or (at the very least) are not arbitrarily low (e.g. a thousand characters can be typed). There was also a critical lesson learned from Google Wave--notifications from Google+ appear across all Google properties via the navigation bar, where a small status icon shows a number (for notifications on Google+)--the bar also has a share button built into it. While it is hard to truly evaluate Google+ until I get more of my contacts on to it, so far what I've seen has been vastly superior to any of the other social networking sites out there (for me, circles makes a huge difference). There are also a decent number of research questions that Google+ inspires and a lot of possibilities for digital social networking's future that could allow it to more closely mirror real social networks.

Note: The post title is intended for when I share this on Google Reader and ifttt.com automatically shares it to Twitter. Today, I had shared a post entitled "I wrote a book" and that's how it gets shared on Twitter (I had not written the book, of course). The book is recommended for folks new to UX: Designed for Use: Create Usable Interfaces for Applications and the Web (by the blogger at Ignore the Code). As a fun post, if anybody wants to discuss the usability of blog titles/news headlines then this would make a great case to include.

Monday, June 27, 2011

“3D Interactions between Virtual Worlds and Real Life in an E-Learning Community”

Collaboration is one of the central focuses of the Internet. The ability to use the Internet to improve communication, collaboration, and the exchange of ideas has become more important day by day. Those who work in the discipline of human-computer interaction may also work with these virtual mediums to improve highly engaging environments that place few limits on what they can offer. Virtual worlds, like Second Life, offer a very dynamic and diverse virtual universe that has proven successful in various applications like education. Presently, there is a large presence of real-life universities, like Iowa State University, on the grid using this medium for engaging and collaborative purposes. Moreover, there are other entities, like IBM, that utilize virtual worlds in their everyday practices, depending on the medium to bring globally dispersed work groups together to help adjust to today’s globalization while saving money on travel expenses and the like. Altogether, virtual worlds offer users an environment that is more dynamic and flexible compared with SMS texting, email, and other less engaging technologies.


Virtual worlds may have positive impacts in other areas of HCI because a systematic combination of real-life and virtual interaction promises a huge benefit for electronic learning, in terms of (not only virtually) tangible e-learning interfaces that enrich the experiences of learners—and probably also those of teachers. Through a felt-as-somatic interaction with the learning environment, the cognitive capabilities of students can be tapped to a much larger extent than in traditional classroom settings, where learners typically act in a much more passive and less individual way (Lucke, U., Zender, R., 2011). Learning in virtual worlds can offer inexpensive project-based learning to learners of all ages, from K-12 and collegiate to corporate and non-academic settings. Virtual worlds also offer a safe environment to learn cause-and-effect relationships, which may help promote safety, education, collaboration, and various other needs that extend far outside the industrial realm and deep into the interpersonal human condition.




Lucke, U., & Zender, R. (2011). “3D interaction between virtual worlds and real life in an e-learning community.” Retrieved June 27, 2011, from http://www.hindawi.com/journals/ahci/2011/684202/

“Museums and Human Computer Interaction”

I remember as a kid enjoying going to the museum because of all the neat and unexplainable things I was exposed to. The big problem is that I had no real idea what I was viewing, and if I did, I may have known very little about the object or why it belonged in a museum in the first place. Recently, I got to spend almost a week in Washington D.C., and I spent a lot of time at the Smithsonian. This museum is broken up into various themed sub-museums that have audio devices that allow the user to stand in front of an exhibit and hear an audio presentation about what the artifact is and why it is important. The device gave me a better user experience because it shared information, provided education, and possibly inspired further study of certain artifacts of history that really interested me. The minor drawback of this technology is that, because of the private nature of the device (the use of headphones), the user’s attention may be drawn away from the environment around them, causing them to focus more narrowly on the current experience. Because the external environment’s noise was drowned out by the headphones and the auditory presentation I was listening to, I found myself separated from my group many times, which forced us savvy users to find each other using our smart phones. Nevertheless, this personal presentation technology makes the museum experience more enjoyable and substantive than the unfulfilling experiences I remember from when I was a kid.



Today, there are new technologies that can give a museum visitor an even greater interactive experience that remains personal but is also collaborative, not only with the museum artifacts but with other museum visitors as well. This new technology is a remote collaborative multi-touch experience that offers an additional channel for museum visitors to explore the exhibition and increases the sense of connectedness and awareness between the two spaces. The experience flow includes stages that offer opportunities for exploration, negotiation, and cooperation (Arroyo, E., Righi, V., Tarrago, R., Blat, J., 2011). For museums like the Smithsonian, this technology can make the visitor experience richer and more immersive while allowing group discussion and video communication with other visitors. Visitors can spend more time enjoying the museum with less worry about keeping track of others and their locations. This kind of device also has numerous uses outside of museums. Instruments like this can be used in construction, plant management, transportation, and many other environments where people may be distributed while needing maximum mobility and the ability to communicate.





Arroyo, E., Righi, V., Tarrago, R., & Blat, J. (2011). “A remote multi-touch experience to support collaboration between remote museum visitors.” Retrieved June 27, 2011, from gti.upf.edu/a-remote-multi-touch-experience-to-support-collaboration-between-remote-museum-visitors/

Sunday, June 26, 2011

Demographic Markup Language (DML)

Over the last few years I've become more and more annoyed with how targeted web advertising has become. Between Facebook data mining and Google ad targeting, it seems like every ad on the websites I visit believes I want to get an IT degree from the University of Phoenix and participate in every Tough Mudder event happening on the West Coast, simply because I've indicated I'm back in school and enjoy running events.

I laughed and was intrigued when Amber Case recently tweeted her blog post about creating a demographic markup language (DML). Essentially, such a markup language would allow one to define one's demographic information for all web services. For websites, the more data you have on your users, the better you can target ads, and these ads bring more advertisers and, most importantly, money. So now it seems everyone collects some small bit of information so ads can be targeted. Amber notes, however, how sad it is that one's user experience can change so significantly simply by changing basic information, since most websites currently have only very small data points, like age or sex, with which to target their advertising. She herself set her gender to male and her birthdate to 1973 so she could actually get advertising she likes. By defining a DML, the user would have a better say in the types of advertising they want to see. This is a win-win for user and advertiser.
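To make the idea a bit more concrete, here is a purely hypothetical sketch of what user-declared demographic data might look like and how a site could honor it. Nothing here comes from Amber Case's proposal, and no such standard actually exists; every field name is invented.

```typescript
// Purely hypothetical sketch of user-declared demographic data and how a site
// might honor it. This is not Amber Case's proposal and no such standard exists;
// every field name here is invented.

interface DemographicProfile {
  ageRange?: [number, number];   // disclosed as a range, not an exact birthdate
  gender?: string;
  interests?: string[];          // topics the user *wants* ads about
  excludedTopics?: string[];     // topics the user never wants to see
}

// The user, not the site, decides what to expose.
const myProfile: DemographicProfile = {
  ageRange: [30, 40],
  interests: ["running events", "graduate education", "user experience"],
  excludedTopics: ["for-profit degree programs"],
};

// A site that respected the declaration would filter its ad inventory against it.
function adAllowed(adTopic: string, profile: DemographicProfile): boolean {
  return !(profile.excludedTopics ?? []).includes(adTopic);
}

console.log(adAllowed("for-profit degree programs", myProfile)); // false
console.log(adAllowed("running events", myProfile));             // true
```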

This would seem to pull the carpet out from under social media giants like Facebook. I've always felt that Facebook benefited more than its users did, simply because of all the data they have on their users. By giving users more choice in the type of advertising they will see (since they're going to see ads anyway), all websites would have equal access to user data and could create a richer and more customized web user experience. This could be powerful for many different reasons and start to give identity to the faceless users of web services.

Corey's Axure RP Review

I recently created interactive prototypes using Axure RP for my individual project in the redesign of craigslist.org. I opted to use Axure RP since I've seen many job listings in the field of HCI inquiring about familiarity with the prototyping tool. In the process of using Axure RP in my project, I became intimately familiar with the program's strengths and weaknesses and thought I would share those. I'm not terribly familiar with competing software tools, such as iRise, so this review is based on my experience with Axure RP alone. I have used OmniGraffle, Balsamiq, and Microsoft Visio, primarily, fairly extensively for system and flow diagrams, but I would say that Axure RP is in a different league based on its interactive prototyping capabilities.

If you're not already familiar or aware, Axure RP is a wireframing, prototyping, and specification software tool for the design of user interfaces in applications and websites. It can be used to quickly create wireframes that evolve into fairly high-fidelity interactive prototypes, all before any code needs to be written. This allows for quick evaluation of company or client designs. Axure RP also boasts an automatic specification generator that uses annotations from the wireframes to generate a Microsoft Word specification document.

The Newbie Level videos on the Axure RP tutorial page are a good place to get started. I watched a few videos and was off designing in less than an hour.

Axure RP's user interface was familiar given my previous experience with similar software tools. Beginning your wireframe is pretty straightforward. Widgets (what Axure calls all the buttons, text, images, panels, etc.) are easily dragged onto your page from the Widgets panel. A grid, snapping, rulers, and guides provide an easy way to keep your wireframe lined up and organized.

Depending on the widget, various properties can be set on the Widget Properties panel. This panel is where you will spend a lot of time with its three tabs: Annotations, Interactions, and Formatting. The Axure RP tutorial doesn't make a big deal of it, and I wish it had, but the Annotations tab is where you can apply labels to your widgets--this provides an easier way to find specific widgets when you're adding interactivity to your prototype, and it is also where the tool will pull your comments from when auto-generating the prototype specification.

Different widgets have different event handlers (OnClick, OnMouseOver, etc.), and this is how you bring your prototype to life. You can add several Cases of interactivity with the Case Editor on the Interactions tab--a case being an interaction that occurs after a specific event (e.g. opening a link in the current window, bringing a specific panel to the front, or disabling or enabling features of a specific widget). It's also possible to add simple conditions (e.g. if a widget is in a particular state, do this; or if a variable has this text, disable this widget) if you want to start adding higher-fidelity interactivity. You can also add simple slide or fade in/out animations if you're looking to add that level of detail to your prototype.

Perhaps Axure RP's most powerful widgets are the dynamic panels. These contain stacks of states or diagrams. A dynamic panel can be hidden, shown and moved, or switched between its various states based on the interactivity you add to widget interactions. I ended up using dynamic panels quite a bit, as opposed to pages, since I wanted to maintain a constant menu bar across the top of the UI for my particular prototype. One thing that I liked about Axure RP was the widget libraries, which are essentially dynamic panels other users have previously created.

I did not spend a lot of time on formatting but Axure RP does have the ability to create styles and masters, which are essentially templates if you're doing a larger scale design and have many different iterations you want to try out.

I was pleased with Axure RP. I will say that at some point, once you start adding complex interactions to your prototype with Axure's Case Editor, you may be better off coding the prototype, since you are limited by Axure RP's event handlers and would not be limited by your own coding creativity. For myself, I see Axure RP as a powerful tool for quickly generating lower-fidelity interactive prototypes for clients and for generating early specifications for statements of work.

“Privacy and HCI”

Currently, I am also taking POL S 533X, a class where we research what makes quality e-government web sites and how they should be developed. The state, local, and federal levels of government are all facing serious financial constraints, which is pushing e-government initiatives even harder as government store-fronts shut down and resort to offering forms, services, and information via the Internet and the WWW. Beyond the digital divide being a competing variable in the success of e-government, one of the biggest concerns is privacy. As designers and developers begin to roll out these e-government sites, how can they provide measures to assure the public that the personal information they enter into these sites is safe? Privacy, like security, concerns risk, its perception, and its management. Privacy problems often lie in the potential future consequences of present behavior, which may be deemed risky or safe according to standards of judgment (not necessarily those of the participants involved). As such, privacy harkens back to HCI’s origins in ergonomics and the safe operation of complex machinery (Ackerman, M., Mainwaring, S., 2011).

History tells us that if there is a database that stores key personal information, like the databases some e-government sites may have, there is the opportunity for this information to be compromised. One such example recently hit close to home here in Texas. The State of Texas accidentally migrated the personal data of hundreds of thousands of state employees onto a public e-government site. This personal data resided on the public database for years; however, because a link to this information was never coded into the actual e-government site, the state “feels” none of this data was further compromised. The problem is that the information was still accessible if this e-government site was hacked, or if unauthorized internal personnel found this information and dishonestly managed or benefited from this mistake. I feel one of the keys to ensuring privacy in the HCI process is that designers need to carefully plan and map how data moves through the whole e-government process and provide safety checks and balances, with sufficient external and internal controls, that help protect users’ personal information.



Ackerman, M., & Mainwaring, S. (2011). “Privacy Issues and Human-Computer Interaction.” Retrieved June 26, 2011, from scott.mainzone.com/pubs/05-privacy-issues-and-hci.pdf

“Design and Implementation of a Human Computer Interface Tracking System based on Multiple Eye Features”

Accessibility and inclusion have become new buzzwords in HCI. As practitioners in HCI, we are ethically challenged to keep accessibility in mind, because a variety of assistive technologies are commonly available to help those with disabilities and to promote further inclusion. Human eyesight is one of our most important senses, and the eye is different from the other body parts that make up the human sensory array. A person’s eyes convey a great deal of information about the meaning behind certain facial expressions. Also, the direction in which an individual is looking shows where his or her attention is focused. By tracking the position of the irises, useful interfaces can be developed that allow the user to control and manipulate devices in a more natural manner (Azam, S., Khan, A., Khiyal, M., 2009). Pupil tracking also has limitless possibilities beyond becoming a valuable assistive technology for those who are disabled; it can serve those who are not disabled as well. Pupil tracking can be used in medical, educational, military, entertainment, and other applications.



What I like most about pupil tracking is that it may allow for greater cognitive control during situations of stress and heavy multi-tasking, where the speed of the human mind can work in an integrated fashion with speedy technologies, eliminating the slower response time of using a mouse or other slower input devices. This may reduce errors as well.




Azam, S., Khan, A., & Khiyal, M. (2009). “Design and implementation of a human computer interface tracking system based on multiple eye features.” Retrieved June 26, 2011, from www.jatit.org/volumes/research-papers/Vol9No2/8Vol9No2.pdf

Respecting Embodied Cognition in Usability Tests

Typically, usability testing involves a testing laboratory, video cameras, and software to record actions. The user is placed in a foreign environment. It has recently been suggested that this isolation and sterilization of the user-environment may not produce natural test results. In her interview with Dr. Lawrence Shapiro on the Brain Science Podcast website (www.brainsciencepodcast.com), Dr. Ginger Campbell explains that, "Embodied cognition is an approach to the study of cognition that regards cognition as something that involves, not just the brain, but also the body and its environment."

Standard cognition would explain the brain as more of a computer, processing stimuli as they come in from the senses. Shapiro says, "Embodied cognition, in contrast, imagines not that the brain can be isolated from the body and the environment, but thinks of the body as in some sense shaping, or constraining, or involved in the very processing of the kinds of information that an organism needs to interact successfully with the world. So, it’s no longer possible, according to people in embodied cognition, to think about cognition as being the middle stage in the sense-think-act cycle. Instead, thinking involves active exploration—use of the body with things in the environment."

I decided to try this theory by doing my next two usability tests not only on-site, but literally at the desks of the users who typically use the software. I was presenting a beta version of the software to them--a clickable prototype, actually. But I ran into a couple of problems with this scenario. The first problem was getting the software onto the user's machine. In other words, preparation is difficult unless you take over the user's machine, which is probably not appropriate. The next issue was the noise factor. The test is loud, and the heavy users of our products are not the execs that have offices. We bothered the coworkers, and they were good sports, but this was an inescapable distraction. Observation by the product owner was done over remote software, and this also had to be set up on the user's machine. After all this preparation and distraction, it's a wonder they didn't toss me out on my butt. They were cordial, though, and I learned quite a bit. The key might be to set up a test environment that is "like" a user environment, or at least to streamline the setup process.

But Shapiro says, "the last thing you want to do is place the subject in a kind of artificial environment where you’re removing all the kinds of what cognitive scientists think of as distractions, but what we embodied cognition people think of as necessary props in the environment that the subject uses to collect or simplify the sorts of information that the subject needs to be performing his or her cognitive tasks." So, what to do? I tend to believe the theory of embodied cognition but it puts us in a testing conundrum. I like the idea of testing in a user's own habitat so I'm going to continue working toward that. I think it can be done effectively with the right tools and process.

BRAIN SCIENCE PODCAST With Ginger Campbell, MD
Episode #73
Interview with Lawrence Shapiro, PhD, Professor of Philosophy and
Author of Embodied Cognition
http://www.brainsciencepodcast.com/storage/transcripts/year-5/73-bsp-Shapiro.pdf

Usability Testing Saves Design Flaw


A couple of weeks ago we performed our first external round of usability testing for a new website we are preparing to roll out. It should be said that initial testing was done with our experts, and the site seemed to pass with only a couple of modifications. The subjects we gathered were a diverse set of testers from various backgrounds and from all over the United States. We picked this time to do the testing because it coincided with our June Orientation and Enrollment. The test subjects were selected randomly and sent to our location. What we were about to learn saved us from a serious design flaw.
We covered all of the legalities of the test and instructed the participants to talk out loud about their thoughts and explain what they were doing. There were a total of 10 tasks they were asked to perform on the site. Of the 8 people we tested that day, all 8 had the exact same issue: they kept getting sucked into a menu bar that was part of the wrapper from our main page. The menu didn't contain any items relating to our questions. As we observed, it was apparent the menu had to be changed. Our marketing department had required that the header menu be added to all web pages. Since we were using a screen capture utility with user voice and video, we could show the results to our marketing department. They agreed to drop the requirement and are now reconsidering the menu requirements for other web pages.

Saturday, June 25, 2011

"Redesign Must Die", by Rosenfeld

http://www.slideshare.net/lrosenfeld/redesign-must-die

I've had Rosenfeld's polar-bear-branded O'Reilly book about information architecture for as long as I can remember. It's one of the books I recommend to folks getting started in web design. So I enjoyed this set of slides, which I only discovered this week.

While there is no accompanying audio, for those who've been in the business of UI design for any length of time, the slides themselves tell the story.

I was particularly struck by this presentation because I work at a University and Rosenfeld's examples of how his alma mater continued to redesign and rebrand their home website hit close to home. Our communications department does much the same thing. They have a total homepage redesign schedule of every 3 years. This isn't based on user feedback or any analytics or, apparently, common web sense. It's a decision made years ago by someone in marketing. And it makes me nuts.

I'm a fan of "refine" rather than "redesign", and that is one of the central points of the slides. Hope you guys find the information interesting.

Understanding Programming – The Pros and Cons

In a recent lecture, Dr. Oren mentioned that the ability to code is an extremely good skill to have, and I would agree. I discovered many years ago, after doing basic coding in ColdFusion and PHP, that I don't particularly like it because it's extremely frustrating for me. However, I do like to code in HTML and CSS. These two tend to be a more visual way to code, at least that's how I look at it.

Either way, understanding different languages of coding is a good skill even if you can't code things yourself. In many cases, I'm able to tweak code that someone else has written or at the very least understand how and why a programmer may ask the questions they do.

Another skill that is very valuable in my daily routine is an understanding of databases and data fields. This is especially important when designing data-driven sites, but it matters even for something as simple as a form: knowing how a simple name change of a field can cause issues within a database, and understanding the different data types and lengths and why they are important.
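As a small, made-up illustration of that point (the table, column names, and lengths are all invented), here is the kind of mismatch that an understanding of data fields helps you anticipate:

```typescript
// A made-up illustration: table name, column names, and lengths are invented.
// When the form field, the code, and the database column all share a name,
// renaming any one of them quietly breaks the others.

interface CustomerRow {
  customer_id: number;
  email: string;        // VARCHAR(255) in the database, so length must be checked
  phone: string | null; // nullable column, so the type allows null
}

// If the column is later renamed to "email_address" but this string is not,
// the database rejects the statement at runtime, not at compile time.
const insertSql = "INSERT INTO customers (email, phone) VALUES ($1, $2)";

function validate(row: Pick<CustomerRow, "email" | "phone">): string[] {
  const problems: string[] = [];
  if (row.email.length > 255) problems.push("email exceeds VARCHAR(255)");
  if (row.phone !== null && !/^[0-9+() -]{7,20}$/.test(row.phone)) {
    problems.push("phone does not look like a phone number");
  }
  return problems;
}

console.log(insertSql, validate({ email: "someone@example.com", phone: "(515) 555-0199" }));
```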

Both of these skills can also help in bridging the gap between the business/client and the designer/programmer. This has been a skill set that I didn't develop on purpose; it just worked out that way.

I found this article by Jared Spool who discusses three ways in which knowing how to code could make you a better designer.

http://www.uie.com/brainsparks/2011/06/06/3-reasons-why-learning-to-code-makes-you-a-better-designer/

Friday, June 24, 2011

Web Accessibility: Section 508 Compliance

Congress instituted section 508 in 1998 to make new online opportunities available to people with disabilities and to encourage the development of software and technologies to help make this happen. An amendment of the Rehabilitation Act of 1973, Section 508 requires federal agencies to make their electronic and information technology available to disabled citizens. Under the amendment, agencies must give disabled employees and citizens access to information accessible to others.

Why is Section 508 (Web accessibility) important?

Imagine surfing the web with your eyes closed. All of a sudden, those little things you take for granted — like quickly scanning a table for information, making judgments based on color-coded status updates, or filling out a protected form — are no longer available to you. Many people who are blind or visually impaired cannot access websites at all, even as the web increasingly becomes a basic need.

To make sure everyone has access to the web, Section 508 compliance was introduced. A compliant site has a number of features that make navigation easier for people using screen readers, and it uses a large, high-contrast display for people who have difficulty seeing traditionally formatted web pages. The standards of Section 508 are extensive and depend greatly upon the type of disability at hand. The standards establish a minimum level of accessibility, but many Web companies are going above and beyond for the disabled members of society.

Areas of compliance:

Website Navigation – Web designers and developers now have a lot at stake. Navigation is crucial for 508 compliance, so that users can move easily across the entire website.

Audio and Video – Videos and other multimedia are often used for ads and other programs on the website. But without the ability to hear, there can be a large disconnect for the hearing impaired. Add the necessary audio descriptions and captions.

Flash – Flash is extremely useful in catering to the hearing impaired. It allows Web designers to create complex and detailed pictures and charts that cannot be created with basic HTML. Flash is a valuable tool in Section 508 compliance.

Resizable Text – It gets difficult for people with disabilities to read some of the small text found on various Web sites. It is important to provide the resizable text option so they can adjust the size based on their visual needs.

Text Only Versions – Web sites must have a text-only version for hearing-impaired individuals, especially if the Web site has a lot of videos whose important details require hearing.
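As a rough illustration, here is a minimal, hypothetical spot-check you could paste into a browser console for two of the concerns above — text alternatives for images and resizable text. It is nowhere near a full Section 508 or WCAG review.

```typescript
// A minimal, hypothetical spot-check run in a browser console.
// It only looks at two issues (text alternatives and resizable text) and is
// no substitute for a full Section 508 / WCAG review.

// Images with no alt attribute are effectively invisible to screen readers.
document
  .querySelectorAll<HTMLImageElement>("img:not([alt])")
  .forEach((img) => console.warn("Missing alt text:", img.src));

// Inline pixel font sizes resist user resizing; relative units (em/rem/%)
// respect the reader's own settings.
document.querySelectorAll<HTMLElement>("[style*='font-size']").forEach((el) => {
  if (/font-size:\s*\d+px/.test(el.getAttribute("style") ?? "")) {
    console.warn("Pixel-sized text may not resize well:", el);
  }
});
```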

More information

http://section508.gov/index.cfm?fuseAction=stdsdoc#Web

http://www.access-board.gov/sec508/guide/1194.22.htm

Thursday, June 23, 2011

Context Dependent Memory

"One's ability to retrieve (or recognize) an item is heavily influenced by the relation between that item's storage and retrieval contexts – is indisputable." 1

When it comes to choosing ways to conduct user research, it's very easy to put users into a conference room to get what you need quickly. Business types love this approach. But typically, we try to go a step or two beyond the conference-room approach, based in part on the unreliable nature of memory. It's too easy to forget the minutiae.

Interestingly, from a UX perspective we all generally accept that this is true. And as an alternative (or complementary) approach, we agree that contextual inquiry yields richer, more accurate information to inform the design process.

But why?

I've heard many great reasons – including "You get a better understanding of the environment." To which I totally agree. And there are many more good reasons.

I want to add one more good reason to the list.

Because we know that context can affect memory recall – doesn't it make sense that users will be more accurate and remember more detail about their work when they are sitting at their desk? Memory just works better when you are trying to recall information in the same context in which it was encoded.

There may be some research out there on this – specifically in the context of HCI research – but I wanted to share the thought. It seems like another good way to justify the time (and possible expense) of thorough user research.

Here's a paper you can read if you're interested. I found it through Google Scholar and the ISU library.

1. Environmental Context and Human Memory
STEVEN M. SMITH and ARTHUR GLENBERG
University of Wisconsin, Madison, Wisconsin 5370

How Great Leaders Inspire Action

I wanted to post another quick resource that I found inspiring.

One of the things I have found exceptionally important in the business world, especially as of late, is the need to communicate effectively. It can literally make or break the content you have to share. I have found over and over that it doesn't matter what you have to say, how amazing it might be, if you cannot connect with your audience.

I am constantly looking for ways to simplify and improve the way I present information, findings, and concepts to business leaders and stakeholders within our organization.

And that's why I wanted to post this link. Simon Sinek is speaking at a TEDx event and shares his thoughts on why some people are good at inspiring people and rallying them around their beliefs, and why some others are not - even when they may all be great speakers.

Give it a watch if you can. It will be worth the time invested.

http://www.ted.com/talks/lang/eng/simon_sinek_how_great_leaders_inspire_action.html

Corporate Ethnography?

I received an interesting piece from Putting People First a few days ago referencing a book on corporate ethnography, edited by Melissa Cefkin of IBM's services research department: http://www.berghahnbooks.com/title.php?rowtag=CefkinEthnography. Although you have to pay to download specific chapters, some of them are particularly interesting, e.g. "Working in Corporate Jungles," and you can grab decently sized previews from Google Books (well, at least for that chapter).

Reading through the excerpts, I began drawing parallels between recent changes in my department and this act of corporate ethnography. Prior to my arrival, the standard practice with creative deliverables (say, for re-designs of the site or sections of it) was to convene rather large stakeholder meetings, complete with the VPs from every single department. I had the chance (chance in terms of the motivation it gave me to do things otherwise) to attend such a meeting and, as an initial outsider, to identify its pitfalls. Among others, the size of the meetings made it hard to progress through a sequential list of deliverables and/or topics -- personalities would clash, departmental special interests would be aroused in light of others, etc. Since then, we've come a long way. In the recent re-design I'm leading, we've separated wireframes and visual designs based on "groups" that represent certain sections of the website. In turn, these groups also represent specific departments. What this highly compartmentalized layout of deliverables has allowed us to do is completely strike out the idea of a general stakeholder meeting. Instead, we've made a plan that allows us to meet with each stakeholder group that a certain set of wireframes is relevant to. This brings meeting attendee numbers down from 15+ to 2-3, making such meetings simply more useful and manageable. This, in the end, helps us iterate more quickly.

Although my example doesn't come close to the breadth or specificity with which the book approaches the role of ethnography in corporations, I think it does show the value of either investing in actual corporate ethnographic studies or at least performing some internal, reflective operational reviews. As a newbie in the corporation, I was pretty much an unbiased observer "in the field" -- however, it was that outsider perspective that made the large stakeholder meetings seem a bit out of place to me, and that eventually led me to the idea of carrying out the re-design in this new way.

Much like HCI espouses user-based iteration in research, design, and implementation, internal user-based iteration (e.g., in this case, modifying the structure of creative-related debriefs to stakeholders based on "outsider" impressions of past practices) re: organizational practices can also help refine the productivity and efficiency of internal teams.

Wednesday, June 22, 2011

Communicating Mental Models

I wanted to post another illustration that I have been using a lot lately: the drawing below by Jacques Carelman.

It's one that's been around for quite some time, but it does a brilliant job of conveying the concept of a mental model. I think it works well simply because we all understand bicycles. It's kind of like using Apple computers to support a story about innovation.

I love posting this on screen during a meeting and asking people "Why won't this work?" The conversation is always excellent.

Hope this is useful for those who have not seen it.




How is UX Research Different from a Traditional BA-Led Process?

Recently, I started working on a document management project for one of the departments within our company. This project has been running for quite some time, prior to my (and my team's) involvement. They have done quite a bit of legwork in requirements gathering. In fact, at least a year of it.

As my team has begun to engage, we strongly recommended doing user research as a component of our work. Pretty obvious, right? Not so fast. In this case, it requires us to fly to another office (NYC) and will require some substantial expenses, not to mention extending our project timelines.

That said, I have confidence that this work will pay off with a 5x, 10x, or maybe even 100x ROI.

But then something unexpected happened. During a meeting one of our executives asked me this question: "So why are you going to NYC? Team X has already done all of the requirements gathering. Why can't you just start the design work?"

That question left me wondering - so how do you answer this question in a way that a business executive can easily understand?

Which brings me to the point of this post – to share a diagram that was developed during a whiteboard session and has become a tool we use when answering the question "Why do user research?" for the business crowd.

The BA team that did the requirements spent most of their time asking, "What do you want to be able to do?" But what they never asked was, "Why is this a problem for you?"

In the diagram, the point is this: when you start with tools and features, in the end you will have the same problem you started with, just in a different tool. But if you ask why it is a problem, you have a chance to solve the root issue and fix the problem.

Example: if a fileshare is a mess, introducing a new tool is not going to fix the problem, unless the new tool accounts for the root cause. In this case – why is the fileshare an unorganized mess in the first place?

I'll end with a short anecdote that has seemed to work for me in a few instances.

Imagine your neighbor comes out complaining about his shovel. "This darn thing never works. Can't stand it." So, as the generous person you are, you go to your shed and bring back a different shovel for your neighbor to try. A few minutes later, he comes back out and complains about the new shovel not working very well either. This prompts you to ask, "Well, it works fine for me. What are you trying to do with it?" Your neighbor replies: "I've been trying to trim my bushes!"

Detailed UX Article

I recently happened upon this article at Smashing Magazine, What Is User Experience Design? Overview, Tools And Resources. The article is a cornucopia of overviews for different aspects of UX. What I really found valuable is what the article suggests as "situations (that) would benefit from UX design": complex systems, start-ups, projects with "OK" budgets, and projects with longer timeframes are the four it lists. Personally, I think that all new projects should consider user experience. Budgets may not be able to afford a single UX person for the project, but as the article suggests about start-ups like 37Signals, everyone working in the field should be "well-rounded" - meaning being familiar with UX concepts and practices AND performing other project creation-related duties. This means that being a 'Jack of all Trades' is once again a necessity.

The other section I found useful was "Criticisms of UX as a Profession". With a variety of arguments that companies might have about hiring a UX professional, this gives us, as those UX professionals, a chance to define why we are useful. When you have your argument planned out, you will be more prepared when confronted with opposition.

The article rounds out with a very long list of UX resources – UX Magazine, UX Booth, etc. I glanced at some of what is listed - most seem to be good resources, but so many seem to be lacking in tasteful graphics (I'm looking at you www.uxmatters.com!).

Tuesday, June 21, 2011

“The Promise of Flexible Displays”

One of the new technologies that has already come out, and that has been very impressive to me, has almost infinite uses, especially in HCI. The flexible display is light and dynamic enough to be used for keyboards with auto-changing layouts, low-power shelf tags and point-of-sale ads, and display windows on credit-card-size smart cards. But designers are also rethinking product design around these new screens' possibilities. Carl Taussig, director of Advanced Display Research at Hewlett-Packard, notes that the Dutch company Polymer Vision has demonstrated a cell phone with a roll-out display, and suggests other possible form factors. “You might have a display that you keep folded up like a piece of paper,” Taussig says. “You might open it halfway and use it that way, or you might open it all the way. You might have it partitioned with a keyboard on one part and a screen on the other” (Communications of the ACM, 2011). These new possibilities can help promote limitless innovation in future product design and safety.




Communications of the ACM. (2011). “The Promise of Flexible Displays.” Vol. 54, No. 6 (June 2011), pp. 16-18.

Error Pages on a Web Site

There is nothing I hate more than receiving an error page that doesn’t tell me what happened or how to fix it. Unfortunately, no matter how careful you are with your site design, errors can happen, especially with sites that are more data-driven. I’ll admit that even though this is a personal pet peeve of mine, I still don’t give error pages a thought until they pop up.

Since these pages are likely to pop up from time to time, you need to think about their design and how to get the person back on track. The message should do the following:
  • Explain what happened and why
  • Explain how to fix it or who to call
It’s really that simple. As for the actual design: after the message itself, the most important thing is that the error page looks just like the rest of your site. Some people argue that a "cutesy" design for these messages is not good business practice, but I think it really depends on the site and audience.
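
To make this concrete, here is a minimal sketch of what a helpful error handler could look like on the server side. It assumes a Node/Express setup, and the routes, copy, and CSS class name are invented placeholders for illustration, not anything from a real site.

```typescript
// A minimal sketch of a "helpful" 404 page, assuming an Express app.
// The /support link, the copy, and the CSS class are placeholders.
import express from "express";

const app = express();

// ...regular routes would be registered here...

// Express runs this middleware only when no earlier route matched,
// so it acts as the site's 404 page.
app.use((req, res) => {
  res.status(404).send(`
    <main class="site-styled-error">
      <h1>We couldn't find that page</h1>
      <p>What happened: the address you entered doesn't match anything on our site.</p>
      <p>How to fix it: check the URL for typos, go back to the <a href="/">home page</a>,
         or <a href="/support">contact support</a> if you followed a link that should work.</p>
    </main>
  `);
});

app.listen(3000);
```

The point of the sketch is simply that the two bullets above (what happened, how to fix it) become the two paragraphs of the page, wrapped in the site's normal styling.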

Here are a few sites with some really good, and sometimes funny, 404 error page designs:

Usability testing through Heat Maps

To better understand users and their usage patterns online, we can do usability testing through "heat maps". Heat maps produce visual displays of where end users are actually clicking and which locations are the most popular on your website (or on a specific page).

The core idea of using a heat map is to analyze and test how users interact with a website. Which links are they clicking on? What kind of page design might work well? What functionality works better in which zones of the page? Heat maps can help us uncover such missing factors in our application.

For example, if a link gets few or no clicks even though it is a link, there is a serious issue of "affordance": the design of the element does not intuitively convey its functionality and use.

Tools such as heat maps, click maps, and hover maps help interaction designers and the business decide how to move things around on a website, make improvements to the information architecture, and optimize link and advert placements.
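
Under the hood, these tools work by recording where each click lands on the page and aggregating the coordinates into hot and cold zones. Here is a rough browser-side sketch of that collection step in TypeScript; the /collect-clicks endpoint is a made-up placeholder, since the commercial tools listed below ship their own scripts and backends.

```typescript
// Rough sketch of how a click heat map gathers its raw data in the browser.
// The /collect-clicks endpoint is hypothetical.
type ClickSample = { x: number; y: number; path: string; ts: number };

const samples: ClickSample[] = [];

document.addEventListener("click", (event: MouseEvent) => {
  // pageX/pageY record where on the document the user clicked,
  // which is what the heat map later aggregates into hot and cold zones.
  samples.push({
    x: event.pageX,
    y: event.pageY,
    path: window.location.pathname,
    ts: Date.now(),
  });
});

// Flush the batch when the user leaves the page.
window.addEventListener("pagehide", () => {
  if (samples.length > 0) {
    navigator.sendBeacon("/collect-clicks", JSON.stringify(samples));
  }
});
```

Once the coordinates are on the server, rendering the heat map is just a matter of binning them and coloring the dense bins red and the sparse ones blue.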

Resources for Heat Maps

www.clickdensity.com

www.clickheat.com

www.crazyegg.com

www.clicktale.com

Features of Heat Maps

The heat maps display red, hot zones where most users spend the longest periods, and blue, cold areas where your users spend the least amount of time.

A very nice feature is the Form Analytics tool, which displays aggregate form-field information: the time to complete each field, the number of entries and clicks, and which fields have the highest abandonment, take the longest to complete, or cause the most backtracking due to errors or confusion.

Sample of a heat map

Sunday, June 19, 2011

Responsive Layouts & Information Visualizations

As a web/UI designer, I have been asked to specifically design to fixed width and sometimes flexible/liquid layouts. When there is no specific direction, depending on the amount of information to be displayed on each page, I would either choose fixed width or flexible layout.

About six months ago, I received an email with the subject ‘Responsive Layout’ from my boss. The body of the email had a few links, videos, and podcasts about responsive layout. I was very curious to find out what it really was, so I clicked all the links, which opened in separate tabs in my default Chrome browser. I went back to finish reading the rest of the email; in short, my boss had asked me to start thinking about converting our products to a responsive layout. Here are the links my boss had sent me:

http://adactio.com/journal/1696/

http://www.alistapart.com/articles/responsive-web-design/

After that day, I started googling and came across: http://bit.ly/m51JoB and http://www.hicksdesign.co.uk/

http://stuffandnonsense.co.uk/blog/about/hardboiled_css3_media_queries/

You might find many other sites that are adapting to the responsive layout design…

So, basically, responsive layout design is a technique that uses flexible grids, flexible images, CSS, and media queries to adjust the layout. Initially I thought there was nothing new here, and in a sense there isn't: the pieces aren't new, but how you put them together is. A responsive layout adjusts to the size of the browser window, whereas the traditional way of supporting multiple devices is to detect whether the device is a desktop or a mobile one. For example, go to about.com in your desktop browser and resize the window: the layout adjusts to fit your viewing preference. Do the same with cnn.com. You might also want to try this in your mobile browser; about.com adjusts to fit, whereas cnn.com detects whether the platform is desktop or mobile.
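
Most of the actual work lives in CSS media queries, but the same breakpoint idea can be sketched in script with the standard matchMedia API. In the sketch below, the 600px breakpoint and the class names are arbitrary examples of my own, not anything taken from about.com or cnn.com.

```typescript
// Sketch of the breakpoint idea behind responsive layouts, using the browser's
// matchMedia API (the script-side mirror of a CSS media query).
// The 600px breakpoint and the class names are arbitrary examples.
const narrowViewport = window.matchMedia("(max-width: 600px)");

function applyLayout(query: MediaQueryList | MediaQueryListEvent): void {
  // Toggle layout classes instead of sniffing for "desktop vs. mobile":
  // the layout follows the window size, not the device.
  document.body.classList.toggle("layout-single-column", query.matches);
  document.body.classList.toggle("layout-multi-column", !query.matches);
}

applyLayout(narrowViewport);                              // set the initial layout
narrowViewport.addEventListener("change", applyLayout);   // react to resizes
```

The design choice this illustrates is exactly the one above: the layout responds to the window, so resizing a desktop browser and loading the page on a phone both end up on the same code path.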

If I were to design a brand new site or redesign an existing one, I would prefer a responsive layout over the platform-detection technique. But if an organization just wants its site to work on mobile devices and does not want to edit its existing source code, then I would choose platform detection. I believe both techniques can be very effective if applied properly. The responsive layout simply adds to the list of choices for web/UI designers, and probably adds some work to be done.

Saturday, June 18, 2011

UX and Systems Engineering Requirements Process



Lately I've had the opportunity (and stressful task) of learning how to incorporate user experience methods into a bigger systems engineering process. Most of what I learned early on in my HCI studies focused on web design. Then, as my work life demanded and my research capabilities grew, I learned more about application design. Now the needs before me are taking me in yet another new direction. Since I know that some of you are working in defense or other complex-system-oriented industries, I thought I would share what I'm learning so far. For now I'm going to focus on the requirements part of systems engineering, because that's where the cycle begins and where we as UX/HCI folks want to start working on a project. (Not as the people who have to pretty something up at the end!)

Systems engineering typically follows a "V" process. If you are familiar with a waterfall-type software development process (versus an iterative or Agile process), you will notice that the systems engineering process is designed to go hand in hand with it. If you are not familiar with a waterfall process, the takeaway is that these processes want you to define all of your requirements up front, before you work on other tasks. In small projects this strategy makes sense. If you have lots of time, this strategy can make sense. If you have a huge project with tight deadlines and large, disparate groups, this strategy is not the way to go.


We've established so far that traditional systems engineering and old-school software development processes don't work for large projects that need to be done in a reasonable amount of time. Now let's look at a better approach. If you'll remember, we touched on user scenarios/user stories in one of the earlier lectures in this class. In that example we looked at small stories that capture user behavior. One approach that has been suggested is to use these types of scenarios to drive the engineering requirements process. In this type of process you create natural-language user stories at the beginning and use them as a basis for your other types of analysis. The benefits of having this collection of user stories are many, but the two main things they help with are organization and analysis. On the organization front, it's really helpful for all of your teams to "get" what it is you are building and how it will actually be used. You can also use your stories to organize your traceability. On the analysis front, it's useful to test your features against the stories to see whether you have feature creep or are missing functionality that your operators will need.
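
To illustrate the traceability point, here is a tiny sketch of my own (not taken from the resources below) of how user stories and requirements might be linked so you can spot feature creep and missing functionality. All names and fields are invented.

```typescript
// Illustrative sketch: natural-language user stories as the anchor for
// requirements traceability. Types and names are invented for this example.
interface UserStory {
  id: string;
  narrative: string;  // e.g. "As an operator, I want to ... so that ..."
}

interface Requirement {
  id: string;
  text: string;
  storyIds: string[]; // which stories this requirement traces back to
}

// A requirement that traces to no story is a candidate for feature creep;
// a story no requirement covers hints at missing functionality.
function traceGaps(stories: UserStory[], reqs: Requirement[]) {
  const storyIds = new Set(stories.map((s) => s.id));
  const coveredStories = new Set(reqs.flatMap((r) => r.storyIds));

  return {
    untracedRequirements: reqs.filter((r) =>
      r.storyIds.every((id) => !storyIds.has(id))
    ),
    uncoveredStories: stories.filter((s) => !coveredStories.has(s.id)),
  };
}
```

The bookkeeping itself is trivial; the value is that every feature discussion can point back to a concrete story that everyone on the project has read.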

Here are a few resources that I've found helpful as I'm learning more about this topic:

1) Government/DoD System of Systems oriented: Scenario Driven Systems Engineering
2) Role oriented scenario approach: A Scenario-driven Role Engineering Process for Functional RBAC Roles
3) Operational threads (a high-level capability is organized into threads and then you create user scenarios for each thread): Operational Thread Development