
Artificial boundaries


This is the 4th and last part of what started as a single post (1, 2, 3).

For everything in our lives we construct boundaries, and some of them are completely artificial, with little grounding in how things actually are, presenting us with more and more false dilemmas.

This series of posts is of course about user interfaces, and this last post is about two common beliefs about applications and UIs that I think are false distinctions (and that I particularly care about).

Workspace? Application?

The boundary between the workspace and the applications is somewhat artificial, as is the boundary between one application and another. Is there a common function that most applications share?

If so, whether it appears in the application window, in the workspace, or as part of another application is just a matter of what the usability findings are for that particular action (see Share Like Connect in Plasma Active).

How much do two applications have to be considered separate if they share most of their components, or if they are just two tools of the same procedure, the knife and the spoon of the same "dish" application?

In the end, how much does the separation I see between windows actually correspond to anything? Is what I see as different tasks in the taskbar really a set of different tasks? We should think more in terms of semantically different tasks rather than different processes or windows, which are an implementation detail.

Everything is just part of the same tool, the same system.

But is system == the machine? No!

Desktop/phone/tablet/TV…

Even the distinction between devices and types of devices is artificial: my work spreads across multiple machines, some more capable than others.

I won't be able to do the most complex data creation operations on a phone, but I can view that work and create something, so on a mobile device I don't want a completely alien set of tools, and I want to be able to access the very same resources.

  • I want everything to be as synchronized as possible
  • I don’t want to set up the very same mail accounts on every device
  • If I have a movie on my desktop I want it to be available on my tablet
  • And if I press “play” on my tablet I want to see it on my TV
  • And I want that to take zero effort, without having to play around with settings and shared folders for hours.

I also want as much as possible of the very same visual language shared between all my devices. There are differences in input methods and screen sizes (in a few years the differences in pixel density will be nullified), but most of the aesthetics, part of the behavior and most of the applications should be shared between all the devices, even if with different faces.

I don't want to hear any more about the KDE mobile effort versus the desktop: they are the same system. They are a unity, a spectrum that scales smoothly without any real interruptions.

Devices used to be very different from one another, but in the next years the differences will shrink more and more. In one or two years probably 100% of the computers sold, desktop or not, will have a touch screen, and we will see big x86 tablets as well as very small ARM laptops. The input method and the "mobile-ness" won't be booleans anymore: they will be a continuum, and from the big desktop (which is definitely not going away) to the small phone there will be a million in-between thingies.

And this false dichotomy is dying right now: a distinction that existed for technical reasons (small mobile processors not having enough power) is fading away, and we had better adapt or die with it.

We are seeing different approaches in software to get ready for this paradigm shift in hardware; both Windows 8 and OSX are trying to "mix" desktop and mobile paradigms. The problem here (especially visible in Windows 8) is that the transition between different devices is too abrupt: you end up with a UI designed for a different kind of input method than the one you are actually using (think of the Metro start screen on a 24″ monitor).

So:

  • One size does not fit all.
  • There are no longer well-defined boundaries between different devices.
  • The UI should be like Lego: every little piece should be interchangeable to allow a million different combinations: small tablet, large tablet, laptop with a touchscreen, small laptop with a mouse, a tablet dockable to a mouse/keyboard station, whatever we haven't imagined yet… (see the sketch below)
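
To make the "Lego" idea a bit more concrete, here is a minimal, hypothetical QML sketch (the item names and sizes are invented for illustration, not taken from any Plasma code): the same two pieces, a navigation area and a content area, rearrange themselves based on the available width, instead of there being a separate "phone UI" and "desktop UI".

```qml
// Hypothetical sketch: the same "lego" pieces rearranged by available width,
// not two separate phone and desktop codebases.
import QtQuick 2.0

Rectangle {
    id: root
    width: 1024; height: 600

    // One boolean per device class is exactly what we want to avoid;
    // instead, read the actual geometry and adapt continuously.
    readonly property bool narrow: width < 600

    Row {
        anchors.fill: parent

        // The navigation piece: a sidebar when there is room,
        // collapsed to a thin strip on narrow screens.
        Rectangle {
            id: navigation
            width: root.narrow ? 48 : 200
            height: parent.height
            color: "#dddddd"
            Behavior on width { NumberAnimation { duration: 250; easing.type: Easing.InOutQuad } }
        }

        // The content piece simply takes whatever space is left.
        Rectangle {
            id: content
            width: parent.width - navigation.width
            height: parent.height
            color: "#ffffff"
        }
    }
}
```

The point of the sketch is that "mobile-ness" is read from the actual geometry at runtime, as a continuum, rather than being baked in as a boolean at build time.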

To summarize this last post of the series: a phone and a desktop are the same system.

There is Bob's system and John's system, not the tablet system and the laptop system; the only difference between two systems is that they belong to different people and are tailored around different lives.

To conclude, by the way, here is the complete version of the mockup that started this long brain dump:

Fictional Desktop?

Overdesign


This is part 3 of 4.

As the popular saying goes, there is always an easy solution to every human problem: neat, plausible, easy to understand and wrong.

It's now widely accepted that the graphical and behavioral presentation of an interface should have some conformity with what we evolved to expect (as I talked about in the previous post: transitions, rounded and possibly textured shapes, and direct manipulation of objects are just some aspects of it). However, this brought a very easy answer that introduces a whole new range of problems: skeuomorphism.

A skeuomorphic design tries to mimic as closely as possible the real objects that are the analog version of the tool implemented by the interface. A typical example is an address book application that looks like a leather address book, or an ebook catalog application that looks like an actual library.

Address book

This is the design direction Apple has taken for a while now, and from mobile it is starting to percolate into the desktop as well (and, being Apple, it is influencing a whole lot of developers).

This approach has several problems:

  • It kills consistency: the boundary between different applications becomes extremely evident, too evident, condemning us to remain forever in the application-based paradigm, while "application" is a technical detail and not really something that should be prominent in the user-facing semantics. Every application ends up with a completely different graphical language, and even though it was designed to ease the transfer of learning from the "real" world, it ultimately hinders the transfer of learning between parts of the system.
  • It imposes artificial limits: by copying a real object you also copy its limitations. To stay with the example of the book browser that looks like a library, the mapping to the real object suddenly breaks down (and thus starts to look unfamiliar, magical) when you perform functions like sorting or searching books. You certainly can't make all the books that aren't by a certain author magically disappear from a real library with a snap of the fingers. This makes it quite hard to introduce innovations, in this case a more powerful and intuitive way to browse books that leverages indexing and metadata extraction.
  • It's uncanny: the application will mimic the real object, but not perfectly. It will always feel fake; there will always be something out of place (even if only because of the features it offers that are impossible in the real world, such as a simple search), creating a cognitive dissonance: yes, I am looking at something that looks like a leather address book, but I can't use it exactly like one; I have to use a completely different set of skills.
  • It's expensive: last but not least, it's extremely expensive and labor-intensive to redo from scratch every single pixel of every application; not everybody who isn't Apple can afford to deliver such a complex product with quality good enough to be presentable. If the cost of this enormous amount of work were justified by a big benefit it could be worth it, but as we have seen it causes more problems than it solves.

Finding the balance

The debate between a hyper-realistic, application-based design and a more classical UI approach has been very polarizing lately, with good arguments on both sides.

The problem is that the detractors of skeuomorphic UIs, as is natural in every polarizing debate, advocate for stuff that looks like it came out of some 80s science fiction movie, with Windows 8 Metro or Android 4 Holo as examples (especially in the Android case, the similarity with Tron is, again, quite uncanny).

As I said, I think skeuomorphism is the easy and wrong answer to the need for interfaces that feel more natural (where natural doesn't mean there is nothing to learn) and are easily learnable, interfaces that make the machine, be it a desktop or a phone, an extension of your arm rather than a weird machine you have to converse with in a strange, arcane, dead language.

There should be a natural-looking language and natural-looking (or even reality-copying) materials to build the UI elements, with correct lighting, yet without trying to copy real objects. In the end, what you have to construct is a new machine that looks realistic but doesn't copy a library or an address book.

You want something that looks and behaves more "analog" than "digital" (even if that is quite hard to quantify), for the same reason the piano replaced the harpsichord so quickly.

The UI must "flow": just as square edges and spiky shapes should be avoided, the movement of everything should be equally smooth; nothing should just appear or disappear, and it should always be obvious where everything comes from.
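
As a small illustration of that "flow", here is a hedged QML sketch (the element names and sizes are made up for the example): a notification bar that slides and fades in from the top edge instead of popping into existence, so it is always obvious where it came from.

```qml
// Hypothetical sketch: nothing just "appears"; the notification slides and
// fades in from the edge it conceptually belongs to.
import QtQuick 2.0

Rectangle {
    width: 400; height: 300

    Rectangle {
        id: notification
        property bool shown: false
        width: parent.width; height: 40
        color: "#333333"
        // Start above the visible area and slide into place, fading in
        // at the same time.
        y: shown ? 0 : -height
        opacity: shown ? 1 : 0

        Behavior on y { NumberAnimation { duration: 200; easing.type: Easing.OutQuad } }
        Behavior on opacity { NumberAnimation { duration: 200 } }
    }

    // Clicking anywhere toggles the notification, purely for demonstration.
    MouseArea {
        anchors.fill: parent
        onClicked: notification.shown = !notification.shown
    }
}
```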

Remote controls should be avoided: every time you have to use one UI to configure another UI, rather than directly manipulating it, you are doing something wrong.

In one sentence: design a visual language that is new, consistent, rigorous and natural looking, and stick to it.
A button that looks like a button, with correct lighting and drop shadows to tell the brain which things are primary and which are secondary, is fine; replicating a fully functional rotary phone is not.
Also, small breaks from the visual grammar can sometimes enhance the value of a particular feature, but only if used extremely rarely.

Next and last: boundaries between applications and the workspace, and boundaries between devices (or why they don't exist).

Design and psychology of user interfaces


This is part 2 of 4.

Sometimes an application, or its user interface, seems to exist more for its own sake than for the purpose of being a tool to accomplish a task. It's very important that this purpose always remains the first reason for the existence of any functionality the software offers, be it an application, a particular screen of that application, or a piece of the shell.

  • The star of the user interface is the content: it always has to remain the center, and whatever distracts from the content is often unnecessary and can be avoided.
  • You interact with objects represented on the screen through a particular input system, be it a mouse, a trackpad or a touch screen. Interaction should be direct: in the real world you interact with an object, not with a remote control that operates the object. Objects represented on the screen are often quite abstract, ranging from objects to "pure information", so it's not always easy to avoid levels of indirection, but they should be limited (one of the reasons you will see handles in Plasma and not things like text boxes asking how many pixels high you want your panel; see the sketch after this list).
  • Our mind is trained to recognize patterns, and that's why consistency in UI is extremely important. This comes from our evolution: for our survival we have to memorize and then recognize anything new that we encounter, be it a menace or an opportunity. Once you learn what a snake is, when you see a very different type of snake you just run ;). The same goes for learning to use a tool and then transferring that knowledge to the use of different and perhaps more advanced tools, and this is what interests us here (see transfer of learning).
  • Organic user interfaces: for the same reason as the point above, our mind expects the things it has seen over the past few million years. Some things are hardwired in our brains: we expect that when something appears it comes moving in from somewhere. Something appearing out of the blue, without our having noticed it coming, is seen as a potential menace: if you suddenly find a big spider in front of you without having seen it arrive from somewhere, it is possibly even more terrifying.
    As important as UI elements coming in from somewhere are "natural" shapes: even just using rounded corners instead of sharp edges, when possible, can make quite a lot of difference. There are two probable reasons here: a shape with sharp edges focuses attention outside the shape, where the edges are pointing, while a shape with rounded corners focuses attention inside the shape. Moreover, in nature, things with sharp edges are once again a menace.
  • Finally, a good interface should be invisible. What? Again, the sole purpose of the interface is to be a tool designed to do a particular task. Everything that goes beyond the strict use case of the particular UI hinders its learnability and efficiency. Often most of an application's UI can be seen as "content" or "document", which is still a UI artifact but a fairly direct representation of what you are viewing or working on, with "chrome" being everything else: often necessary, maybe a necessary evil, but evil still.
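
To illustrate the direct-manipulation point from the list above, here is a minimal, hypothetical QML sketch (not actual Plasma code; names and sizes are invented): a panel whose height is changed by dragging a handle on the panel itself, rather than by typing a pixel value into a separate configuration dialog.

```qml
// Hypothetical sketch of direct manipulation: the panel height is changed by
// dragging a handle on the panel itself, with immediate visual feedback,
// instead of entering a number in a remote settings dialog.
import QtQuick 2.0

Rectangle {
    width: 800; height: 600

    Rectangle {
        id: panel
        width: parent.width
        height: 40
        anchors.bottom: parent.bottom
        color: "#444444"

        // The handle sits on the panel's top edge.
        Rectangle {
            id: handle
            width: 60; height: 8
            radius: 4
            color: "#888888"
            anchors.horizontalCenter: parent.horizontalCenter
            anchors.top: parent.top

            MouseArea {
                anchors.fill: parent
                property real pressY: 0
                onPressed: pressY = mouse.y
                // Dragging up grows the panel, dragging down shrinks it,
                // clamped to a sensible range.
                onPositionChanged: {
                    var newHeight = panel.height - (mouse.y - pressY)
                    panel.height = Math.max(24, Math.min(200, newHeight))
                }
            }
        }
    }
}
```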

Those are all concepts that have gained quite a lot of traction over the last few years, and UI quality all around, from Windows to KDE to web apps, has improved a lot; most applications and environments we see around have followed a very clear design process, but… there is a but 😉

There is also an ugly side to it… in the next post.

A UX manifesto, universe and everything


The next 4 posts were intended to be just one; then it grew to be quite huge, so it got split into 4 posts that will be published over the next few days.

It all started when I was working on a mockup for a theme. I am still unsure what to do with it: make a Plasma theme (probable), a QStyle, or just a QML prototype of what would be my ideal(tm) desktop UI.

This is a small glimpse of it, with the mandatory Back to the Future quote: "You'll have to forgive the crudeness of this model, I didn't have time to paint it or build it to scale."

possible Plasma theme preview

Then this thingie made me think about the current state of aesthetics on all current platforms, be it KDE or GNOME, OSX, iOS, Android, Windows 8, Unity… where they are going, what they are trying to achieve, and whether they are achieving it.

Quite a lot for a quick Inkscape mockup of a theme that is admittedly not that original, but maybe that's actually the point.

It has some characteristics that will be quite important for the next posts.

  • It looks quite realistic; I paid attention to the lighting effects of the buttons.
  • It tries to have the least possible visual noise, borders between things are as few as possible, and big empty areas are used to make the various items "breathe".
  • The directions of the drop shadows, while looking as correct as possible, always try to represent a visual hierarchy: the thing with a higher z-order is always more important. This applies not only between windows but also within the same window (see the sketch after this list).
  • A theme like this is intended as a unified visual language for everything: consistency is more important than how realistic a particular application looks (hint: an address book application does not have to look like a real address book).
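
As a rough illustration of the drop-shadow point above, here is a small QML sketch, assuming the QtGraphicalEffects module is available (sizes and colors are invented for the example): two surfaces whose shadow size and offset grow with their importance, so the shadow itself tells the eye which one sits higher in the z-order.

```qml
// Hypothetical sketch: shadow size and offset grow with importance, so the
// drop shadow itself communicates the visual hierarchy.
import QtQuick 2.0
import QtGraphicalEffects 1.0

Rectangle {
    width: 500; height: 300
    color: "#eeeeee"

    // A secondary surface: close to the background, small soft shadow.
    Rectangle {
        id: secondaryCard
        x: 40; y: 60; width: 180; height: 180
        color: "#ffffff"
        layer.enabled: true
        layer.effect: DropShadow {
            radius: 4; samples: 9
            verticalOffset: 1
            color: "#40000000"
        }
    }

    // The primary surface: higher z-order, bigger shadow and offset,
    // so it reads as floating above and being more important.
    Rectangle {
        id: primaryCard
        x: 200; y: 40; width: 220; height: 200
        color: "#ffffff"
        layer.enabled: true
        layer.effect: DropShadow {
            radius: 16; samples: 33
            verticalOffset: 6
            color: "#60000000"
        }
    }
}
```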

For some years now in KDE there have been projects that share a common goal, such as the Plasma Workspace and related projects like KWin, Oxygen and Plasma Active; as I'll discuss, the distinctions between them, and what their goals and boundaries should be, are mostly artificial limits that don't actually exist.

This end goal is to make our software more useful, easier, and more pleasant to use. Computers (of any kind, from a desktop to your watch) should be helpful tools that help the user achieve a particular goal, one the computer has facilities designed to accomplish.

In the next post I'll talk about some of the central points of the UI design, both behavior and cosmetics, of the Plasma desktop over the past years and of Plasma Active over the last one.