How is UX for IoT different?

Editor’s note: this is an excerpt from our forthcoming book Designing Connected Products; it is part of a free curated collection of chapters from the O’Reilly Design library — download the entire Experience Design collection here.

Designing for IoT comes with a bunch of challenges that will be new to designers accustomed to pure digital services. How tricky these challenges prove will depend on:

  • The maturity of the technology you’re working with
  • The context of use or expectations your users have of the system
  • The complexity of your service (e.g. how many devices the user has to interact with).

Below is a summary of the key differences between UX for IoT and UX for digital services. Some of these are a direct result of the technology of embedded devices and networking. But even if you are already familiar with embedded device and networking technology, you might not have considered the way it shapes the UX.

Functionality can be distributed across multiple devices with different capabilities

IoT devices come in a wide variety of form factors, with varying input and output capabilities. Some may have screens, such as heating controllers or washing machines. Some may have other ways of communicating with us (such as flashing LEDs or sounds).

Some may have no input or output capabilities at all and are unable to tell us directly what they are doing. Interactions might be handled by web or smartphone apps. Despite the differences in form factors, users need to feel as if they are using a coherent service rather than a bunch of disjointed UIs. It’s important to consider not just the usability of individual UIs but interusability: distributed user experience across multiple devices.

The locus of the user experience may be in the service

Although there’s a tendency to focus on the novel devices in IoT, much of the information processing and data storage often happens in the Internet service. This means that the service around a connected device is often just as critical to the user experience as the device itself, if not more so. For example, the London Oyster travel card is often thought of as the focus of the payment service. But the Oyster service can be used without a card at all, via an NFC-enabled smartphone or bank card. The card is just an ‘avatar’ for the service (to borrow a phrase from the UX expert Mike Kuniavsky).

We don’t expect internet-like failures from the real world

It’s frustrating when a web page is slow to download or a Skype call fails. But we accept that these irritations are just part of using the Internet. By contrast, real-world objects respond to us immediately and reliably.

When we interact with a physical device over the Internet, that interaction is subject to the same latency and reliability issues as any other Internet communication. So, there’s the potential for delays in response and for our requests and commands to go missing altogether. This could make the real world start to feel very broken. Imagine if you turned your lights on and they took two minutes to respond, or failed to come on at all.

In theory, there could be other unexpected consequences of things adopting Internet-like behaviors. In the Warren Ellis story The Lich House, a woman is unable to shoot an intruder in her home: her gun cannot contact the Internet for the authentication that would allow her to fire it. This might seem far-fetched, but we already have objects that require authentication, such as Zipcars.

IoT is largely asynchronous

When we design for desktops, mobiles, and tablets, we tend to assume that they will have constant connectivity. Well-designed mobile apps handle network outages gracefully, but tend to treat them as exceptions to normal functioning. We assume that the flow of interactions will be reasonably smooth, even across devices. If we make a change on one device (such as deleting an email), it will quickly propagate across any other devices we use with the same service.

Many IoT devices run on batteries and need to conserve electricity. Maintaining network connections uses a lot of power, so they connect only intermittently. This means that parts of the system can be out of sync with each other, creating discontinuities in the user experience. For example, imagine your heating is set to 19°C. You use the heating app on your phone to turn it up to 21°C, but it takes a couple of minutes for your battery-powered heating controller to go online to check for new instructions. During this time, the phone says 21°C and the controller says 19°C.
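The heating example can be sketched in a few lines of code. This is a minimal model, not any real product’s API: all names (`Thermostat`, `poll_cloud`, the setpoints) are hypothetical, chosen only to show how the phone and the controller can disagree between polls.

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    """Battery-powered controller that only syncs when it wakes up."""
    setpoint_c: float = 19.0  # what the controller's display shows

    def poll_cloud(self, cloud_setpoint_c: float) -> None:
        """Called only when the controller wakes and goes online."""
        self.setpoint_c = cloud_setpoint_c

cloud_setpoint = 19.0
controller = Thermostat()

# User turns the heating up from the phone app: the cloud and the
# phone's display update instantly...
cloud_setpoint = 21.0
phone_display = cloud_setpoint

# ...but until the controller next polls, the two UIs disagree.
assert phone_display == 21.0 and controller.setpoint_c == 19.0

# A couple of minutes later the controller wakes up and reconciles.
controller.poll_cloud(cloud_setpoint)
assert controller.setpoint_c == 21.0
```

The design question for the UX is what each screen should show during that window: the old confirmed state, the new requested state, or an explicit “pending” indication.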

Code can run in many more places

The configuration of devices and code that makes a system work is called the system model. In an ideal world, users should not have to care about this. We don’t need to understand how conventional Internet services, like Amazon, work in order to use them successfully. But as a consumer of an IoT service right now, you can’t always get away from some of this technical detail.

A typical IoT service is composed of:

  • one or more embedded devices
  • a cloud service
  • perhaps a gateway device
  • one or more control apps running on a different device, such as a mobile, tablet, or computer.

Compared to a conventional web service, there are more places where code can run. There are more parts of the system that can, at any point, be offline. Depending on what code is running on which device, some functionality may at any point be unavailable.

For example, imagine you have a connected lighting system in your home. It has controllable bulbs or fittings, perhaps a gateway that these connect to, an Internet service, and a smartphone app to control them all. You have an automated rule set up to turn on some of your lights at dusk if there’s no one home.

If your home Internet connection goes down, does that rule still work? If the rule runs in the Internet service or on your smartphone, it won’t. If it runs in the gateway, it will. As a user, you want to know whether your security lights will work. To understand that, you have to grasp a little of the system model: which devices are responsible for which functionality, and how the system may fail.
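A toy model makes the failure mode concrete. Everything here is illustrative (the `runs_on` labels and the connectivity flags are assumptions, not a real platform’s behavior): the same dusk-lighting rule succeeds or fails depending only on where it executes when the home’s Internet link is down.

```python
def rule_fires(runs_on: str, home_internet_up: bool,
               is_dusk: bool, home_occupied: bool) -> bool:
    """Does the 'lights on at dusk when nobody is home' rule fire?"""
    should_fire = is_dusk and not home_occupied
    if runs_on == "gateway":
        # The local gateway reaches the bulbs over the home network,
        # so the rule works even during an Internet outage.
        return should_fire
    if runs_on in ("cloud", "smartphone"):
        # Cloud and phone can only reach the bulbs via the Internet link.
        return should_fire and home_internet_up
    raise ValueError(f"unknown location: {runs_on}")

# Internet outage at dusk, house empty:
assert rule_fires("gateway", home_internet_up=False,
                  is_dusk=True, home_occupied=False)
assert not rule_fires("cloud", home_internet_up=False,
                      is_dusk=True, home_occupied=False)
```

The asymmetry in those two assertions is exactly the detail users are currently forced to learn about their own systems.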

It would be nice if we could guarantee no devices would ever lose connectivity, but that’s not realistic. And IoT is not yet a mature set of technologies in the way that ecommerce is, so failures are likely to be more frequent. System designers have to ensure that important functions (such as home security alarms) continue to work as well as possible when parts go offline and make these choices explicable to users.

Devices are distributed in the real world

The shift from desktop to mobile computing means that we now use computers in a wide variety of situations. Hence, mobile design requires a far greater emphasis on understanding the user’s needs in a particular context of use. IoT pushes this even further: computing power and networking are embedded in more and more of the objects and environments around us. For example, a connected security system can track not just whether the home is occupied, but who is in it, and potentially video record them. Hence, the social and physical contexts in which connected devices and services are used are even more complex and varied.

Remote control and automation are programming-like activities

In 1982, the HCI researcher Ben Shneiderman defined the concept of direct manipulation: user interfaces based on direct manipulation “depend on visual representation of the objects and actions of interest, physical actions or pointing instead of complex syntax, and rapid incremental reversible operations whose effect on the object of interest is immediately visible. This strategy can lead to user interfaces that are comprehensible, predictable and controllable.” Ever since, this has been the prevailing trend in consumer UX design. Direct manipulation is successful because interface actions align with the user’s understanding of the task: users receive immediate feedback on the consequences of their actions, and can undo them.

IoT creates the potential for interactions that are displaced in time and space: configuring things to happen in the future, or remotely. For example, you might set up a home automation rule to turn on a video camera and raise the alarm when the house is unoccupied and a motion sensor is disturbed. Or you might unlock your porch door from your work computer to allow a courier to drop off a parcel.

Both of these break the principles of direct manipulation. To control things that happen in future, you must anticipate your future needs and abstract the desired behavior into a set of logical conditions and actions. As the HCI researcher Alan Blackwell points out, this is basically programming. It is a much harder cognitive task than a simple, direct interaction. That’s not necessarily a bad thing, but it may not be appropriate for all users or all situations. It impacts usability and accessibility.

Unlocking the door remotely is an easier action to comprehend, but we are distanced from the consequences of our actions, and this poses other challenges. Can we be sure the door was locked again once the parcel had been left? A good system should send a confirmation, but if our smartphone (or the lock) lost connectivity, we might not receive this.

Complex services can have many users, multiple UIs, many devices, and many rules and applications

A simple IoT service might serve only one or two devices: e.g. a couple of connected lights. You could control these with a very simple app. But as you add more devices, there are more ways for them to coordinate with one another. If you add a security system with motion sensors and a camera, you may wish to turn on one of your lights when the alarm goes off. So, the light effectively belongs to two functions or services: security and lighting. Then add in a connected heating system that uses information from the security system to know when the house is empty, and assume there are several people in the house with slightly different access privileges to each system. For example, some can change the heating schedule, some can only adjust the current temperature, some have admin rights to the security system, and some can only set and unset the alarm. What started out as a straightforward system has become a complex web of interrelationships.
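The per-user, per-service privileges described above amount to an access-control matrix, even in a small household. A sketch (user names, service names, and actions are all illustrative):

```python
# Who may do what, per service. Sets of permitted actions.
PERMISSIONS = {
    "alice": {"heating":  {"edit_schedule", "adjust_temp"},
              "security": {"admin", "arm_disarm"}},
    "bob":   {"heating":  {"adjust_temp"},
              "security": {"arm_disarm"}},
}

def can(user: str, service: str, action: str) -> bool:
    """Check one cell of the household's access-control matrix."""
    return action in PERMISSIONS.get(user, {}).get(service, set())

assert can("alice", "heating", "edit_schedule")
assert not can("bob", "heating", "edit_schedule")   # can only adjust temp
assert not can("carol", "security", "arm_disarm")   # unknown user: nothing
```

Somebody has to set this matrix up, keep it current as housemates come and go, and present it comprehensibly in each of the system’s UIs, which is where the management burden on users comes from.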

For a user, understanding how this system works will become more challenging as more devices and services are added. It will also become more time consuming to manage.

Many differing technical standards make interoperability hard

The Internet is an amazing feat of open, interoperable standards but, before embedded devices were connected, there was no need for appliance manufacturers to share common standards. As we begin to connect these devices together, this lack of common technology standards is causing headaches. Just getting devices talking to one another is a big enough challenge, as there are many different network standards. Getting them to coordinate in sensible ways is an order of magnitude more complicated.

The consumer experience right now is of a selection of mostly closed, manufacturer-specific ecosystems. Devices within the same manufacturer’s ecosystem, such as Withings, will work together, but this is the only given. In the case of Withings, devices share data with a common Internet service, which the user accesses via a smartphone app. Apple’s AirPlay is an example of a proprietary ecosystem in which devices talk directly to each other.

We’re starting to see manufacturers collaborating with other manufacturers, too. So, your Nest Protect smoke detector can tell your LIFX lightbulbs to flash red when smoke is detected. (This is done by connecting the two manufacturers’ Internet services rather than connecting the devices.)

There are also some emerging platforms that seek to aggregate devices from a number of manufacturers and enable them to interoperate. The connected home platform SmartThings supports a range of network types and devices from manufacturers such as Schlage and Kwikset (door locks), GE and Honeywell (lighting and power sockets), Sonos (home audio), Philips Hue, Belkin, and Withings. But the platform has been specifically configured to work with each of these. You cannot yet buy just any device and expect it to work well with a platform such as SmartThings.

For the near future, the onus will be largely on the consumer to research which devices work with their existing devices before purchasing them. Options may be limited. In addition, aggregating different types of devices across different types of networks tends to result in a lowest common denominator set of basic features. The service that promises to unify all your connected devices may not support some of their more advanced or unique functions: you might be able to turn all the lights on and off but only dim some of them, for example. It will be a while before consumers can trust that things will work together with minimal hassle.

IoT is all about data

Networked, embedded devices allow us to capture data from the world that we didn’t have before, and use it to deliver better services to users. For example, drivers looking for parking spaces cause an estimated 30% of traffic congestion in US cities. Smart parking applications such as Streetline’s Parker use sensors in parking spaces to track where spaces are open for drivers to find via a mobile app. Likewise, Opower uses data captured from smart energy meters to suggest ways in which customers could save energy and money.

Networked devices with onboard computation are also able to use data, and in some cases act on it autonomously. For example, a smart energy meter can easily detect when electricity is being consumed above the baseload. This is a good indicator that someone is in the house and up and about. This data could be used by a heating system to adjust the temperature or its schedule.
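The baseload heuristic above is simple enough to sketch. The figures and threshold here are illustrative assumptions, not calibrated values: readings well above the home’s baseload (fridge, standby devices) suggest someone is up and about.

```python
def estimate_baseload(readings_w: list[float]) -> float:
    """Take the minimum sustained draw (in watts) as the baseload."""
    return min(readings_w)

def likely_occupied(current_w: float, baseload_w: float,
                    margin_w: float = 150.0) -> bool:
    """Anything well above baseload suggests active occupants."""
    return current_w > baseload_w + margin_w

overnight = [210.0, 200.0, 205.0, 198.0]  # watts, house asleep
base = estimate_baseload(overnight)

assert not likely_occupied(205.0, base)   # still just the fridge
assert likely_occupied(1800.0, base)      # kettle on: someone's home
```

A heating system consuming this signal would still want hysteresis and a time window before acting, so a single kettle boil by a burglar-deterrent timer doesn’t heat an empty house.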

To quote another phrase from Mike Kuniavsky: “information is now a design material.”

Editor’s note: this is part of our ongoing exploration looking at experience design and the Internet of Things.



Please RSVP Now: Is 3D Printing a Relevant Technology for Development?


Just as the Internet changed the communications world in the 1990s, 3D printing is set to change the physical world. We are already way beyond trinkets and keychains. Cheap 3D printers can create prosthetic arms, and even living ears and livers, not to mention aviation-grade metal parts.

This new technology can revolutionize the way that we make products, by bringing the factory into the community and allowing computers and the Internet to become the new conduit for skills, innovation and creativity in manufacturing.

Or such is the promise of 3D printing in development. However, what is the reality? And how might it be applicable to the development context, where the poor are often the last ones to benefit from new technologies? Amidst the hype, there are serious questions to ponder:

  • What are the 3D printing opportunities in developing economies?
  • Where could 3D Printing be catalytic or transformational in development?
  • Who is using it now? What lessons have already been learned?
  • What funding and support is needed to develop a successful 3D printing program?
  • How do we ensure that 3D printing value chains are inclusive, and communities can own their own 3D destiny?

Please RSVP now to join the next IREX Tech Deep Dive to explore the potential and pitfalls of 3D printing in development. To help us navigate where we are headed, we’ll have three thought leaders sharing their knowledge and opinions:


Please RSVP now to join this active, practical event. We’ll have an overview of the state of 3D printing and its usage across the development spectrum, a lively brainstorming on what the future of 3D printing might look like, and small teams creating frameworks for how to get us from the present to the future.

We’ll go from talk to action in just one morning!

3D Printing for Development
IREX Tech Deep Dive
8:30 am – 12:30 pm
Wednesday, November 5th
Washington, DC, 20005

We will have hot coffee and a catered breakfast for the morning rush, but seating is limited. RSVP now, before it’s too late. Note that this event is in-person only, and an RSVP is required to attend.


About IREX Tech Deep Dives

IREX Tech Deep Dives are an interactive discussion series on technology for development hosted by the Center for Collaborative Technologies at IREX in partnership with Kurante.

We convene small groups of established experts to have critical and substantive discussions on the application and impact of new and emerging technology solutions and their relevance to international development.

Participants will gain new insights into current technology trends and practical advice they can apply immediately and over the long term. RSVP now to join us!


Reprinted from ICTWorks


What’s the implication of 3D printers for the World Bank’s mission?

What is the implication of 3D printers for the World Bank’s mission of reducing poverty and boosting shared prosperity? While figuring out the specifics is likely impossible, we do have a few hints at the possibilities.

3D Printer + Internet = Inclusive Education
The Internet search engines we use almost every day have changed our lives in terms of access to information, knowledge, and much more. But for the visually impaired, this invention has had little impact so far. However, an innovative application of 3D printers has made a “search experience” possible for the visually impaired: a voice-activated Internet search engine connected to a 3D printer.


3D printing: Pimp my ride

As three-dimensional (3D) printers, which make objects layer by layer, have fallen in price, their use has expanded beyond industry. A number of artists now also employ the technology. One of them, Ioan Florea—Romanian-born but now based in America—used a 3D printer to customise his classic 1971 Ford Torino for a recent exhibition. Mr Florea prints parts in plastic, coats them with other materials or uses the printed parts as moulds. For his car, he developed a process that produces what he calls a “liquid-metal” finish. Ford, which uses 3D printers to make prototype parts, has shown interest in his work, but Mr Florea is keeping his methods secret.