Tag Archives: User Experience Research

The User Experience of Google Chrome

Pulse UX had a piece on Google Chrome (the browser I’m currently using to browse the web) in late 2008 that becomes more interesting every time I read it. After thinking about the piece for some time, I take it to reach two general conclusions: Google Chrome is not a well-designed browser, but then Google Chrome is not primarily a browser at all.

The point about the danger of starting-from-scratch is obvious enough:


What does Google Chrome mean for the future of user experience design?

In an article by Steven Levy from the October 2008 issue of WIRED magazine, titled “Inside Chrome: The Secret Project to Crush IE and Remake the Web,” the developers of Chrome described how they approached the UX design problem for their new “world-beating” browser. In part, they described the UX design methodology as follows.

“When deciding what buttons and features to include, the team began with the mental exercise of eliminating everything, then figuring out what to restore.”

Whoa!…that IS an interesting UX design methodology. The problem is that the Google UX process ignored almost entirely the past 25 years of cognitive science and related skill acquisition theory. The Google Chrome UX design methodology created, to a significant extent, the perplexing complexity of Chrome by ignoring several billion “person-hours” of prior experience that users accrued with established browser interaction models. Arbitrarily deciding what to leave out or include in terms of features and functions is…how shall we say…1950’s UX design.

… and dovetails nicely with my thoughts on the science and art of user experience research. However, the Pulse UX piece then convincingly argues that the primary purpose of Google Chrome is to be a rendering engine for Google Docs and other software in the cloud. Thus, Google Chrome is not a competitor to Microsoft Internet Explorer so much as a competitor to Microsoft Live Mesh.

The post is fascinating. The “art” of Chrome’s long-term strategy, and the science of measuring user experience, tie together nicely. Read the whole thing.

The Science and Art of User Experience Research

Slashdot links to an interesting article at Technologizer about Clippy, the automated assistant that was formerly activated by default in Microsoft Office. Clippy was perhaps the Bill Callahan of technologies: everyone who knows anything about him has a strong opinion, and those opinions tend to be negative, but genuine strong points keep shining through.

I found the graphics and mock-ups (going all the way back to the Windows 3.11 days) interesting, but what inspires this post is a throwaway line from the Slashdot summary:

“Most folks think that Microsoft Office’s Clippy, Microsoft Bob, and Windows XP’s Search Assistant dog were perverse jokes — but a dozen years’ worth of patent filings shows that Microsoft took the concept of animated software ‘helpers’ really, really seriously, even long after everyone else realized it was a bad idea. And the drawings those patents contain are weirdly fascinating.”

The Slashdot writer is guilty of the same dogmatism that he accuses Microsoft of.

Research into user experience (UX) is both a science and an art. It is a science to the extent that it uses quantitative methods to estimate the behavior of a population. So, for instance, when Microsoft applies multivariate regression to anonymized user-experience data to determine the relative learning curves of potential changes among different personas of users, it is engaged in scientific UX research. Likewise, when Microsoft conducts ethnographies, case studies, and interviews to understand the phenomena embedded in its software (such as affective UX), it is engaged in an art form.

Both science and art go far beyond what “everyone else” realizes. Indeed, the explanation of variation (the science of UX research) and the understanding of experience (the art of UX research) exist to help make software better than it would be if its designers were stuck with what “everyone else” knew.
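To make the quantitative half concrete, here is a minimal sketch of the kind of analysis the “science” of UX research involves. This is my own illustration, not Microsoft’s actual pipeline: it fits a power-law learning curve to synthetic task-completion times with ordinary least squares, the regression-on-telemetry exercise described above.

```python
# A toy example, assuming telemetry reduces to (trial number, seconds to complete a task).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for anonymized usage data.
trials = np.arange(1, 51)
times = 40.0 * trials ** -0.3 * rng.lognormal(sigma=0.05, size=trials.size)

# Power-law learning curve T(n) = a * n^(-b); taking logs makes it a straight line,
# so ordinary least squares on log-log data recovers the parameters.
slope, intercept = np.polyfit(np.log(trials), np.log(times), deg=1)
a, b = np.exp(intercept), -slope

print(f"estimated initial time a = {a:.1f}s, learning rate b = {b:.2f}")
```

The estimated learning rate b is the kind of single number that lets you compare candidate designs, or the same design across different personas, before anything ships.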

Like anyone in the computer science community, I have strong opinions about Microsoft. Windows Vista is awful. Windows 7 is pretty good. I have a feeling that the quality difference between these products owes more than a little to one decision to shortchange UX research and another to take it seriously.

Extra credit: Which aspect of UX research does this presentation ignore, and which does it focus on?

Learning Curves and Time-Series User Experience

There is an interesting post titled “How the SUV User Experience Trashed Detroit” that talks some about our friends in Detroit. What I find interesting is not so much its attack on SUVs as its division of “user experience” into First-, Early-, and Deep user experience.

How the SUV User Experience Trashed Detroit
When we speak of the “user experience” and how it impacts purchase and adoption of products and services we divide the framework into three basic pieces: 1) the FUE, or first user experience, 2) the EUE, or early user experience, and 3) the DUE, or deep user experience. We know from extensive research for leading high technology and media companies that there is no EUE or DUE without a very compelling FUE…in other words, what the customer first experiences is all-important and in fact may be uniquely critical to the success of products which in the end, like the SUV, are of marginal or even negative relative value in the larger context. If you get the FUE right you can sell almost anything and customers will thank you for it. When we employ more advanced psychometric testing methods to user experience design research problems this effect surfaces in web sites, cell phones, video games, automobiles and a wide range of other high tech products and services.

It appears that the First/Early/Deep division of user experience is a way of describing the learning curve (becoming more proficient with the tool over time) and the affective curve (becoming less “wowed” and more comfortable with the tool over time). I don’t know that we need to split user experience into three separate stages, except for ease of memory (think back to when some folks took the idea of 4GW as a dialectically distinct gradient of war, for instance), but the idea of user experience naturally changing over time is an important one.
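As a rough illustration (my own toy model, not anything from the Pulse UX post), here is how those two curves might look as a time series, with the First/Early/Deep stages read off as points along it.

```python
# A toy time-series model of a single user's experience with a tool.
import numpy as np

days = np.arange(0, 91)                       # first three months of use

proficiency = 1.0 - np.exp(-days / 20.0)      # learning curve: skill saturates over time
wow = 0.8 * np.exp(-days / 5.0)               # novelty fades quickly
comfort = 0.6 * (1.0 - np.exp(-days / 30.0))  # comfort builds slowly
affect = wow + comfort                        # net affective curve

# Loosely mapping the First/Early/Deep split onto points in the time series.
for label, day in [("FUE (day 0)", 0), ("EUE (day 7)", 7), ("DUE (day 60)", 60)]:
    print(f"{label}: proficiency={proficiency[day]:.2f}, affect={affect[day]:.2f}")
```

The shapes and time constants are made up; the point is only that the first experience trades heavily on the “wow” term, while the deep experience depends on proficiency and comfort having caught up by the time the novelty wears off.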

Tools that market themselves should have both a steep learning curve and an affective hook that gives people patience to learn how to use them.