Solid 2015: submit your proposal

Last May, we engaged in something of an experiment when Joi Ito and I presented Solid, our conference about the intersection between software and the physical world. We drew the program as widely as possible and invited demos from a broad group of large and small companies, academic researchers, and artists. The crowd that came — more than 1,400 people — was similarly broad: a new interdisciplinary community that’s equally comfortable in the real and virtual worlds started to, well, solidify.

I’m delighted to announce that Solid is returning. The next Solid will take place on June 23-25, 2015, at Fort Mason in San Francisco. It’ll be bigger, with more space and a program spread across three days instead of two, but we’re taking care to maintain and nourish the spirit of the original event. That begins with our call for proposals, which opens today. Some of our best presentations in May came from community members we hadn’t yet met who blew away our program committee with intriguing proposals. We’re committed to discovering new luminaries and giving them a chance to speak to the community. If you’re working on interesting things, I hope you’ll submit a proposal.

We’re expecting a full house at this year’s event, so we’ve opened up ticket reservations today as well — you can reserve your ticket here, and we’ll hold your spot for seven days once registration opens early next year.

It’d be an understatement to say that the hardware movement and the Internet of Things (IoT) are hot right now. According to Google, search interest in the IoT has more than doubled in the last 12 months. The race by software companies to reach into the physical world, and the parallel race by manufacturers to develop their software and intelligence offerings, are bringing about all sorts of exciting collisions.

A screen shot of the Google Trends results looking at the interest in “Internet of Things” and “IoT” over time.

I’d like to hear from you about what’s going on in hardware right now: how to design great products, how to build them in socially responsible ways, how to program them so that they’re efficient and delightful. Solid will be rich with these kinds of stories, told by engineers, artists, scholars, and executives from giant enterprises and nascent start-ups.

That said, my greatest pleasure in programming the 2014 edition of Solid was in featuring presentations that framed our conversation in terms of art, craft, societal impact, theoretical depth, and long-term context. Thoughtful, fresh takes on the hardware movement and the Internet of Things are welcome.

If you’d like to speak at Solid 2015, please visit our call for proposals. If you’d like to attend Solid, you can reserve your ticket here. If you’re interested in sponsoring Solid, please contact Sharon Cordesse. We look forward to hearing from you!



Firms’ Resource Deployment and Project Leadership in Open Source Software Development

International Journal of Innovation and Technology Management, Ahead of Print.

When using the open source software (OSS) development model, firms face the challenge of balancing the tension between integrating knowledge from external individuals and the desire for control. In our investigation, we draw upon a data set of 109 projects involving 912 individual programmers and 110 firms, and show how these projects are governed in terms of project leadership. Our four hypotheses show that despite the wish for external knowledge from voluntary programmers, firms rely on their own resources, or those of other firms, to control a project; that projects with low firm participation are mainly led by voluntary committers; and that projects with high firm participation are mainly led by paid leaders. This research extends the existing literature by providing empirical evidence in this area and deepens our understanding of firm participation in OSS projects as a form of open innovation activity.

Great user experience + clear value proposition = value innovation

Editor’s note: this is an excerpt from our forthcoming book UX Strategy; it is part of a free curated collection of chapters from the O’Reilly Design library — download a free copy of the Experience Design ebook here.

Value! Value! Value!

The word seems to be used everywhere. It’s found in almost all traditional and contemporary business books since the 1970s. In Management: Tasks, Responsibilities, Practices, Peter Drucker talks about how customer values shift over time. He gives an example of how a teenage girl will buy a shoe for its fashion, but when she becomes a working mother, she will probably buy a shoe for its comfort and price. In 1984, Michael Lanning first coined the term “value proposition” to explain how a firm proposes to deliver a valuable customer experience. That same year, Michael Porter defined the term “value chain” as the chain of activities that a firm in a specific industry performs in order to deliver a valuable product.

All these perspectives on value are important, but let’s fast-forward to 2004 when Robert S. Kaplan discussed how intangible assets like computer software were the ultimate source of “value creation.” He said, “Strategy is based on a differentiated customer value proposition. Satisfying customers is the source of sustainable value creation.”

There are a lot of things in that quote that align with what we just learned [earlier in the chapter] about business strategy — differentiation and satisfied customers. But there’s one thing that we didn’t discuss — the fact that we are designing digital products: software, apps, and other things that users find on the Internet and use every day. Often, the users of these digital products don’t have to pay for the privilege of using them. If a business model is supposed to help a company achieve sustainability, how can you do that when the online marketplace is overrun with free products? Well, we learned how many companies, like Waze, found a sustainable business model: sharing their crowdsourced data made them lucrative to other companies like Google. But in order to get the data, they had to provide value to their customer base for mass adoption, and that value was based entirely on innovation.

“Innovative” means doing something that is new, original, and important enough to shake up a market. As W. Chan Kim and Renée Mauborgne describe in Blue Ocean Strategy, value innovation is “the simultaneous pursuit of differentiation and low cost, creating a leap in value for both buyers and the company.” This is accomplished by looking for ways that a company can raise, reduce, create, and eliminate the factors that determine the cost and quality of a product.

When we transpose this theory to the world of digital products, the value proposition manifests itself as a unique feature set. Features are product characteristics that deliver benefits to the user. In most cases, fewer features equals more value. Value can be created by consolidating features from relevant existing solutions (e.g., Meetup and Evite) and solving a problem for users in a more intuitive way (e.g., Eventbrite). Value can be created by transcending the value propositions of existing platforms (e.g., Google Maps + crowdsourcing = Waze). And sometimes value comes from consolidating formerly disparate user experiences (a one-stop shop for a user task, such as recording a video on your phone and then sharing it on YouTube) into one elegant, simple solution (e.g., Vine and Instagram). We will deconstruct these techniques in Chapter 7: Storyboarding Value Innovation for Digital Products.

But for now, let’s discuss the most important reason that we want to be unique and disruptive with both our products and our business models: there are bigger opportunities in unknown market spaces. We like to call these unknown market spaces “blue oceans.” The term comes from the book Blue Ocean Strategy that I mentioned earlier. The authors discuss their studies of 150 strategic moves spanning more than 100 years and 30 industries. They explain how the companies behind the Ford Model T, Cirque du Soleil, and the iPod chose unconventional strategies rather than fighting head-to-head with direct competitors in an existing industry. The sea of other competitors with similar products is known as a “red ocean.” Red oceans are full of sharks that compete for the same customer by offering lower prices, eventually turning a product into a commodity.

In the corporate world, the impulse to compete by destroying your rivals is rooted in military strategy. In war, the fight typically plays out over a specific terrain. The battle gets bloody when one side wants what the other side has — whether it be oil, land, shelf space, or eyeballs. In a blue ocean, the opportunity is not constrained by traditional boundaries. It’s about breaking a few rules that aren’t quite rules yet or even inventing your own game that creates an uncontested new marketplace and space for users to roam.

A perfect example of a company with a digital product that did this is Airbnb. Airbnb is a “community marketplace” for people to list, discover, and book sublets of practically anything from a tree house in Los Angeles to a castle in France. What’s amazing about this is that their value proposition has completely disrupted the travel industry. It’s affecting the profit margins of hotels so much that Airbnb was banned in New York City. Its value proposition is so compelling that once customers try it, it’s hard to go back to the old way of booking a place to stay or subletting a property.

For instance, I just came back from a weekend in San Francisco with my family. Instead of booking a hotel that would have cost us upwards of $1,200 (two rooms for two nights at a 3.5-star hotel), we used Airbnb and spent half of that. But for us, it wasn’t just about saving money; it was about being in a gorgeous and spacious two-bedroom home closer to the locals and their foodie restaurants. The 3% commission fee we paid to Airbnb was negligible. Interestingly, the corporate lawyer who owned this SF home was off in Paris with her family. She was also staying at an Airbnb, which could have been paid for using some of the revenue ($550+) from her transaction with us. Everybody won! Except, of course, the hotels that lost our business.

Airbnb achieves this value innovation by coupling a killer user experience design with a tantalizing value proposition. A value proposition is the reason why customers accept one solution over another. Sometimes the solution solves a problem we didn’t even know we had. Sometimes it creates an undeniable desire. A value proposition consists of a bundle of products and/or services (“features”) that cater to the requirements of a specific customer segment. Airbnb offers a value proposition to both sides of its two-sided market: the people who list their homes and those who book places to stay.

Airbnb chose not to focus on beating the existing competition (other subletting sites and hotels) at their own game. Instead, they made the competition irrelevant by creating a leap in value for all of their potential users. They did this by creating a marketplace that improves upon the weaknesses of all of their competitors. Airbnb is more trustworthy than Craigslist. It has much more inventory than HomeAway and VRBO because listings are free. And it provides value all along the way, from the online experience (booking/subletting) to the real-world experience (showing up on vacation/getting paid for your sublet).

To create a blue ocean product, you need to change the way that people think about doing something. Value innovation is about changing the rules of the game.

Airbnb did this by enabling a free-market sub-economy in which quality and trust were given a high value that spanned the entire customer journey from the online experience to the real-world experience. And they catered to both of their customer groups (subletters and renters) with distinct feature sets that turned what was once a potentially creepy endeavor (short-term subletting) into something with incredible potential for everybody involved.

There are many other products causing widespread disruption to the status quo. Uber, which matches drivers with people who need rides, is threatening the taxi and limousine industries. Kickstarter is changing the way businesses are financed. Twitter is disrupting how we get news. And we can never forget how Craigslist broke the business models of local newspapers by providing a superior system for personal listings.

Jaime Levy, author of UX Strategy, can be reached through her Twitter handle, @JaimeRLevy.

Editor’s note: this is part of our ongoing exploration looking at experience design and the Internet of Things.



We need open models, not just open data

Image by Sonny Abesamis on Flickr.

Writing my post about AI and summoning the demon led me to re-read a number of articles on Cathy O’Neil’s excellent mathbabe blog. I highlighted a point Cathy has made consistently: if you’re not careful, modelling has a nasty way of enshrining prejudice with a veneer of “science” and “math.”

Cathy has consistently made another point that’s a corollary of her argument about enshrining prejudice. At O’Reilly, we talk a lot about open data. But it’s not just the data that has to be open: it’s also the models. (There are too many must-read articles on Cathy’s blog to link to; you’ll have to find the rest on your own.)


Artificial intelligence: summoning the demon

Image by edward musiak on Flickr.

A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.

There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.


The problem of managing schemas


When a team first starts to consider using Hadoop for data storage and processing, one of the first questions that comes up is: which file format should we use?

This is a reasonable question. HDFS, Hadoop’s data storage, is different from relational databases in that it does not impose any data format or schema. You can write any type of file to HDFS, and it’s up to you to process it later.

The usual first choice of file format is either comma-delimited text files, since these are easy to dump from many databases, or JSON, which is often used for event data and data arriving from a REST API.

There are many benefits to this approach — text files are readable by humans and therefore easy to debug and troubleshoot. In addition, it is very easy to generate them from existing data sources and all applications in the Hadoop ecosystem will be able to process them.

But there are also significant drawbacks to this approach, and often these drawbacks only become apparent over time, when it can be challenging to modify the file formats across the entire system.

Part of the problem is performance — text formats have to be parsed every time they are processed. Data is typically written once but processed many times; text formats add a significant overhead to every data query or analysis.

But the worst problem by far is that CSV and JSON data does have a schema; the schema just isn’t stored with the data. For example, CSV files have columns, and those columns have meaning. They represent IDs, names, phone numbers, etc. Each of these columns also has a data type: they can represent integers, strings, or dates. There are also some constraints involved — you can dictate that some of those columns contain unique values or that others will never contain nulls. All this information exists in the heads of the people managing the data, but it doesn’t exist in the data itself.

The people who work with the data don’t just know about the schema; they need to use this knowledge when processing and analyzing the data. So the schema we never admitted to having is now coded in Python and Pig, Java and R, and every other application or script written to access the data.
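
To make this concrete, here is a minimal sketch of what that hidden schema looks like once it’s buried in a script (the column names and formats here are hypothetical):

```python
import csv
from datetime import datetime

# The "schema" lives only in this code: column order, types, nullability,
# and date formats are all unwritten assumptions about how the CSV was dumped.
def parse_customer_row(row):
    return {
        "id": int(row[0]),                                # assumes an integer ID
        "name": row[1],                                   # assumes never null
        "phone": row[2] or None,                          # assumes "" means null
        "signup": datetime.strptime(row[3], "%Y-%m-%d"),  # assumes this date format
    }

with open("customers.csv") as f:
    customers = [parse_customer_row(row) for row in csv.reader(f)]
```

Every script that touches the file repeats some version of these assumptions, and every schema change breaks them silently.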

And eventually, the schema changes. Someone refactors the code generating the JSON and moves fields around, perhaps renaming a few of them. A DBA adds new columns to a MySQL table, and the change shows up in the CSVs dumped from it. Now all those applications and scripts must be modified to handle both file formats. And since schema changes happen frequently, and often without warning, the result is ugly, unmaintainable code and grumpy developers who are tired of having to modify their scripts again and again.

There is a better way of doing things.

Apache Avro is a data serialization project that provides schemas with rich data structures, compressible file formats, and simple integration with many programming languages. The integration even supports code generation — using the schema to automatically generate classes that can read and write Avro data.
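
As a minimal sketch of what this looks like in practice (this uses the fastavro Python library; the record and field names are illustrative):

```python
from fastavro import parse_schema, reader, writer

# An Avro schema is explicit, machine-readable data: names, types,
# nullability, and defaults are all declared up front.
schema = parse_schema({
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "phone", "type": ["null", "string"], "default": None},
    ],
})

records = [{"id": 1, "name": "Ada", "phone": None}]

# The schema is embedded in the file alongside the data...
with open("customers.avro", "wb") as out:
    writer(out, schema, records)

# ...so any reader can recover both schema and records with no prior knowledge.
with open("customers.avro", "rb") as f:
    for record in reader(f):
        print(record)  # {'id': 1, 'name': 'Ada', 'phone': None}
```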

Since the schema is stored in the file, programs don’t need to know about the schema in order to process the data. Humans who encounter the file can also easily extract the schema and better understand the data they have.

When the schema inevitably changes, Avro uses schema evolution rules to make it easy to interact with files written using both older and newer versions of the schema — default values get substituted for missing fields, unexpected fields are ignored until they are needed, and data processing can proceed uninterrupted through upgrades. When starting a data analysis project, most developers don’t think about how they’ll be able to handle gradual application upgrades through a large organization. The ability to independently upgrade the applications that are writing the data and the applications reading the data makes development and deployment significantly easier.
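
Continuing the sketch above, a hedged example of how schema evolution plays out: a reader that declares a newer schema, with a default for the added field, can still read files written with the old one.

```python
from fastavro import parse_schema, reader

# Version 2 of the schema adds an "email" field with a default value.
new_schema = parse_schema({
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "phone", "type": ["null", "string"], "default": None},
        {"name": "email", "type": "string", "default": ""},
    ],
})

# Files written with the old schema remain readable: Avro's schema
# resolution substitutes the default for the missing field.
with open("customers.avro", "rb") as f:
    for record in reader(f, new_schema):
        print(record["email"])  # "" (supplied by the default)
```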

The problem of managing schemas across diverse teams in a large organization was mostly solved when a single relational database contained all the data and enforced the schema on all users. These days, data is not nearly as unified — it moves between many different data stores, structured, unstructured or semi-structured. Avro is a very versatile and convenient way of bringing order to chaos. Avro formatted data can be stored in files, in unstructured stores like HBase or Cassandra, and can be sent through messaging systems like Kafka. All the while, applications can use the same schemas to read the data, process it, and analyze it — regardless of where and how it is stored.

Decisions made early in a project can come back to bite you later. Hadoop offers a rich ecosystem of tools and solutions to choose from, making the decision process more challenging than it was back when data was always stored and processed in relational databases. File formats are no exception — there are probably 10 different file types supported across the Hadoop ecosystem. Some of the formats are easy for beginners to use; some offer special performance optimizations for specific use cases. But for general-purpose data storage and processing, I always tell my customers: just use Avro.

Gwen Shapira will talk more about architectural considerations for Hadoop applications at Strata + Hadoop World Barcelona. For more information and to register, visit the Strata + Hadoop World website.

Cropped image on article and category pages by foam on Flickr, used under a Creative Commons license.

This post is part of our on-going investigation into the evolving, maturing marketplace of big data components.


How is UX for IoT different?

Editor’s note: this is an excerpt from our forthcoming book Designing Connected Products; it is part of a free curated collection of chapters from the O’Reilly Design library — download the entire Experience Design collection here.

Designing for IoT comes with a bunch of challenges that will be new to designers accustomed to pure digital services. How tricky these challenges prove will depend on:

  • The maturity of the technology you’re working with
  • The context of use or expectations your users have of the system
  • The complexity of your service (e.g. how many devices the user has to interact with).

Below is a summary of the key differences between UX for IoT and UX for digital services. Some of these are a direct result of the technology of embedded devices and networking. But even if you are already familiar with embedded device and networking technology, you might not have considered the way it shapes the UX.

Functionality can be distributed across multiple devices with different capabilities

IoT devices come in a wide variety of form factors, with varying input and output capabilities. Some may have screens, such as heating controllers or washing machines. Some may have other ways of communicating with us (such as flashing LEDs or sounds).

Some may have no input or output capabilities at all and are unable to tell us directly what they are doing. Interactions might be handled by web or smartphone apps. Despite the differences in form factors, users need to feel as if they are using a coherent service rather than a bunch of disjointed UIs. It’s important to consider not just the usability of individual UIs but interusability: distributed user experience across multiple devices.

The locus of the user experience may be in the service

Although there’s a tendency to focus on the novel devices in IoT, much of the information processing and data storage often happens in the Internet service. This means that the service around a connected device is often just as critical as the device itself, if not more so. For example, the London Oyster travel card is often thought of as the focus of the payment service. But the Oyster service can be used without a card at all, via an NFC-enabled smartphone or bank card. The card is just an ‘avatar’ for the service (to borrow a phrase from the UX expert Mike Kuniavsky).

We don’t expect Internet-like failures from the real world

It’s frustrating when a web page is slow to download or a Skype call fails. But we accept that these irritations are just part of using the Internet. By contrast, real-world objects respond to us immediately and reliably.

When we interact with a physical device over the Internet, that interaction is subject to the same latency and reliability issues as any other Internet communication. So, there’s the potential for delays in response and for our requests and commands to go missing altogether. This could make the real world start to feel very broken. Imagine if you turned your lights on and they took two minutes to respond, or failed to come on at all.

In theory, there could be other unexpected consequences of things adopting Internet-like behaviors. In the Warren Ellis story The Lich House, a woman is unable to shoot an intruder in her home: her gun cannot contact the Internet for the authentication that would allow her to fire it. This might seem far-fetched, but we already have objects that require authentication, such as Zipcars.

IoT is largely asynchronous

When we design for desktops, mobiles, and tablets, we tend to assume that they will have constant connectivity. Well-designed mobile apps handle network outages gracefully, but tend to treat them as exceptions to normal functioning. We assume that the flow of interactions will be reasonably smooth, even across devices. If we make a change on one device (such as deleting an email), it will quickly propagate across any other devices we use with the same service.

Many IoT devices run on batteries and need to conserve electricity. Maintaining network connections uses a lot of power, so they only connect intermittently. This means that parts of the system can be out of sync with each other, creating discontinuities in the user experience. For example, imagine your heating is set to 19 degrees Celsius. You use the heating app on your phone to turn it up to 21C, but it takes a couple of minutes for your battery-powered heating controller to go online to check for new instructions. During this time, the phone says 21C and the controller says 19C.
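
A sketch of why this happens (device names and timings are illustrative): each part of the system holds its own last-known state, and the battery-powered piece only reconciles when it wakes up.

```python
class HeatingController:
    """Battery-powered controller that only syncs when it wakes up."""

    def __init__(self, setpoint):
        self.setpoint = setpoint  # what the controller's display shows

    def wake_and_sync(self, service_setpoint):
        # Runs every couple of minutes to save battery; until then the
        # controller and the phone app can legitimately disagree.
        self.setpoint = service_setpoint

service_setpoint = 19                        # state held by the cloud service
controller = HeatingController(service_setpoint)

service_setpoint = 21                        # user turns it up from the phone app
assert controller.setpoint == 19             # discontinuity: phone says 21C

controller.wake_and_sync(service_setpoint)   # controller's next check-in
assert controller.setpoint == 21             # views converge again
```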

Code can run in many more places

The configuration of devices and code that makes a system work is called the system model. In an ideal world, users should not have to care about this. We don’t need to understand how conventional Internet services, like Amazon, work in order to use them successfully. But as a consumer of an IoT service right now, you can’t always get away from some of this technical detail.

A typical IoT service is composed of:

  • one or more embedded devices
  • a cloud service
  • perhaps a gateway device
  • one or more control apps running on a different device, such as a mobile, tablet, or computer.

Compared to a conventional web service, there are more places where code can run. There are more parts of the system that can, at any point, be offline. Depending on what code is running on which device, some functionality may at any point be unavailable.

For example, imagine you have a connected lighting system in your home. It has controllable bulbs or fittings, perhaps a gateway that these connect to, an Internet service, and a smartphone app to control them all. You have an automated rule set up to turn on some of your lights at dusk if there’s no one home.

If your home Internet connection goes down, does that rule still work? If the rule runs in the Internet service or your smartphone, it won’t. If it runs in the gateway, it will. As a user, you want to know whether your security lights are running or not. You have to understand a little about the system model to understand which devices are responsible for which functionality, and how the system may fail.
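
One way to reason about this is to note, for each rule, where its code runs. A small sketch, under assumed names:

```python
# Whether a rule survives an Internet outage depends on where it runs.
dusk_rule = {
    "trigger": "dusk",
    "condition": "nobody_home",
    "action": "turn_on_security_lights",
    "runs_on": "gateway",  # alternatives: "cloud", "phone"
}

def rule_still_works(rule, home_internet_up):
    """Local hardware keeps working offline; cloud- or phone-hosted rules don't."""
    return rule["runs_on"] in ("gateway", "device") or home_internet_up

print(rule_still_works(dusk_rule, home_internet_up=False))  # True: runs locally
```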

It would be nice if we could guarantee no devices would ever lose connectivity, but that’s not realistic. And IoT is not yet a mature set of technologies in the way that ecommerce is, so failures are likely to be more frequent. System designers have to ensure that important functions (such as home security alarms) continue to work as well as possible when parts go offline and make these choices explicable to users.

Devices are distributed in the real world

The shift from desktop to mobile computing means that we now use computers in a wide variety of situations. Hence, mobile design requires a far greater emphasis on understanding the user’s needs in a particular context of use. IoT pushes this even further: computing power and networking are embedded in more and more of the objects and environments around us. For example, a connected security system can track not just whether the home is occupied, but who is in it, and can potentially video record them. Hence, the social and physical contexts in which connected devices and services are used are even more complex and varied.

Remote control and automation are programming-like activities

In 1982, the HCI researcher Ben Shneiderman defined the concept of direct manipulation: user interfaces based on direct manipulation “depend on visual representation of the objects and actions of interest, physical actions or pointing instead of complex syntax, and rapid incremental reversible operations whose effect on the object of interest is immediately visible. This strategy can lead to user interfaces that are comprehensible, predictable and controllable.” Ever since, this has been the prevailing trend in consumer UX design. Direct manipulation is successful because interface actions are aligned with the user’s understanding of the task. They receive immediate feedback on the consequences of their actions, which can be undone.

IoT creates the potential for interactions that are displaced in time and space: configuring things to happen in the future, or remotely. For example, you might set up a home automation rule to turn on a video camera and raise the alarm when the house is unoccupied and a motion sensor is disturbed. Or you might unlock your porch door from your work computer to allow a courier to drop off a parcel.

Both of these break the principles of direct manipulation. To control things that happen in future, you must anticipate your future needs and abstract the desired behavior into a set of logical conditions and actions. As the HCI researcher Alan Blackwell points out, this is basically programming. It is a much harder cognitive task than a simple, direct interaction. That’s not necessarily a bad thing, but it may not be appropriate for all users or all situations. It impacts usability and accessibility.
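
To see why this counts as programming, here is the security rule from the example above, abstracted into a hypothetical trigger/condition/action format:

```python
# The user must anticipate a future situation and encode it as logic,
# rather than acting on the world directly.
security_rule = {
    "trigger": "motion_sensor_disturbed",
    "conditions": ["house_is_unoccupied"],
    "actions": ["turn_on_video_camera", "raise_alarm"],
}

def evaluate(rule, event, state):
    """Return the rule's actions when its trigger fires and its conditions hold."""
    if event == rule["trigger"] and all(state[c] for c in rule["conditions"]):
        return rule["actions"]
    return []

state = {"house_is_unoccupied": True}
print(evaluate(security_rule, "motion_sensor_disturbed", state))
# ['turn_on_video_camera', 'raise_alarm']
```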

Unlocking the door remotely is an easier action to comprehend, but we are distanced from the consequences of our actions, and this poses other challenges. Can we be sure the door was locked again once the parcel had been left? A good system should send a confirmation, but if our smartphone (or the lock) lost connectivity, we might not receive this.

Complex services can have many users, multiple UIs, many devices, and many rules and applications

A simple IoT service might serve only one or two devices: e.g., a couple of connected lights. You could control these with a very simple app. But as you add more devices, there are more ways for them to coordinate with one another. If you add a security system with motion sensors and a camera, you may wish to turn on one of your lights when the alarm goes off. So, the light effectively belongs to two functions or services: security and lighting. Then add in a connected heating system that uses information from the security system to know when the house is empty, and assume there are several people in the house with slightly different access privileges to each system. For example, some can change the heating schedule, some can only adjust the current temperature, some have admin rights to the security system, and some can only set and unset the alarm. What started out as a straightforward system has become a complex web of interrelationships.
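
A sketch of the access-control web that results (the names, services, and privileges are illustrative):

```python
# Per-person, per-service privileges: the interrelationships multiply
# with every device, service, and household member added.
PERMISSIONS = {
    "alex": {"heating": {"change_schedule", "adjust_temperature"},
             "security": {"admin"}},
    "sam":  {"heating": {"adjust_temperature"},
             "security": {"set_alarm", "unset_alarm"}},
}

def can(user, service, action):
    grants = PERMISSIONS.get(user, {}).get(service, set())
    return "admin" in grants or action in grants

print(can("sam", "heating", "change_schedule"))  # False: may only adjust
print(can("alex", "security", "unset_alarm"))    # True: admin on security
```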

For a user, understanding how this system works will become more challenging as more devices and services are added. It will also become more time consuming to manage.

Many differing technical standards make interoperability hard

The Internet is an amazing feat of open operating standards, but, before embedded devices were connected, there was no need for appliance manufacturers to share common standards. As we begin to connect these devices together, this lack of common technology standards is causing headaches. Just getting devices talking to one another is a big enough challenge, as there are many different network standards. Being able to get them to coordinate in sensible ways is an order of magnitude more complicated.

The consumer experience right now is of a selection of mostly closed, manufacturer-specific ecosystems. Devices within the same manufacturer’s ecosystem, such as Withings, will work together. But this is the only given. In the case of Withings, this means that devices share data with a common Internet service, which the user accesses via a smartphone app. Apple’s AirPlay is an example of a proprietary ecosystem in which devices talk directly to each other.

We’re starting to see manufacturers collaborating with other manufacturers, too. So, your Nest Protect smoke detector can tell your LIFX lightbulbs to flash red when smoke is detected. (This is done by connecting the two manufacturers’ Internet services rather than connecting the devices directly.)

There are also some emerging platforms that seek to aggregate devices from a number of manufacturers and enable them to interoperate. The connected home platform SmartThings supports a range of network types and devices from manufacturers such as Schlage and Kwikset (door locks), GE and Honeywell (lighting and power sockets), Sonos (home audio), Philips Hue, Belkin, and Withings. But the platform has been specifically configured to work with each of these. You cannot yet buy any device and expect it to work well with a platform such as SmartThings.

For the near future, the onus will be largely on the consumer to research which devices work with their existing devices before purchasing them. Options may be limited. In addition, aggregating different types of devices across different types of networks tends to result in a lowest common denominator set of basic features. The service that promises to unify all your connected devices may not support some of their more advanced or unique functions: you might be able to turn all the lights on and off but only dim some of them, for example. It will be a while before consumers can trust that things will work together with minimal hassle.

IoT is all about data

Networked, embedded devices allow us to capture data from the world that we didn’t have before, and use it to deliver better services to users. For example, drivers looking for parking spaces cause an estimated 30% of traffic congestion in US cities. Smart parking applications such as Streetline’s Parker use sensors in parking spaces to track where spaces are open for drivers to find via a mobile app. Likewise, Opower uses data captured from smart energy meters to suggest ways in which customers could save energy and money.

Networked devices with onboard computation are also able to use data, and in some cases act on it autonomously. For example, a smart energy meter can easily detect when electricity consumption rises above the baseload. This is a good indicator that someone is in the house and up and about. A heating system could use this data to adjust the temperature or its schedule.
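
A hedged sketch of that inference (the baseload figure and threshold are made up):

```python
# Occupancy inference from a smart meter: sustained consumption well
# above the always-on baseload suggests someone is home and active.
BASELOAD_WATTS = 150  # fridge, router, standby devices (assumed)
MARGIN = 1.5          # how far above baseload counts as activity (assumed)

def probably_occupied(readings_watts):
    return all(w > BASELOAD_WATTS * MARGIN for w in readings_watts)

print(probably_occupied([900, 420, 380]))  # True: heating could adjust itself
print(probably_occupied([140, 150, 145]))  # False: the house looks empty
```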

To quote another phrase from Mike Kuniavsky: “information is now a design material.”

Editor’s note: this is part of our ongoing exploration looking at experience design and the Internet of Things.

