This is the second in a series of three posts about the use of mobile technology during the GO open studio weekend. The first looked at basic trends and SMS texting; this post will delve into the iPhone app, and the last will consider general lessons learned.
Few people know that we almost pulled the iPhone app from the store right before the open studio weekend. Late Friday afternoon, the app became completely unusable, crashing at every tap, and we wondered whether pulling it in favor of our other, less crash-prone check-in methods would make for a better user experience.
As it turns out, for all the testing we did, the one thing we couldn't easily replicate was the load on our servers going into the open studio weekend, and the app wasn't handling spikes in server traffic well. After a very late night we managed to stabilize the situation before the weekend began, but not before we had frustrated a lot of participants.
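For readers wondering what "handling spikes" better would even look like: the usual pattern is to retry failed requests with exponential backoff (plus a little jitter) instead of crashing. Here's an illustrative sketch in Python; our actual app was native iOS code, and the `fetch_with_backoff` and `flaky_fetch` names are made up for this example, not what shipped:

```python
import random
import time

def fetch_with_backoff(fetch, max_attempts=4, base_delay=0.1):
    """Retry a flaky network call with exponential backoff instead of crashing.

    `fetch` is any zero-argument callable that may raise on server errors.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up and let the caller show a friendly error
            # Back off exponentially, with jitter so clients don't stampede
            # the server all at once when it recovers.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

# Simulated server that fails twice under load, then recovers.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("server overloaded")
    return "checked in"

print(fetch_with_backoff(flaky_fetch))  # → checked in
```

The key point is that the user-facing failure happens only after several quiet retries, which is usually enough to ride out a brief traffic spike.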
I wish that were the extent of our problems, but there was more. On first launch, the app would check that its list of studios was up to date. In the week leading up to the open studio weekend, about a hundred artists canceled their participation, so the app had a lot of updating to do on that first sync. You can imagine what came next…yup…crash. Once the app crashed on that initial startup it wouldn't crash again, but this came on the heels of the Friday night issue, and we had no easy way to tell frustrated users to hang in there.
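For the technically inclined, the safer pattern for a startup sync like that is to apply updates defensively and keep the old cached list if anything goes wrong: a stale studio list is a better experience than no app at all. This is a hypothetical Python sketch, not our actual code; the `sync_studios` helper and the update format are inventions for illustration:

```python
def sync_studios(local, fetch_updates):
    """Apply a delta update to the cached studio list.

    If fetching the updates fails, keep the old cache rather than crashing.
    Each update is assumed (hypothetically) to be a dict with an 'id', an
    'action' of 'remove' or 'upsert', and the studio data for upserts.
    """
    try:
        updates = fetch_updates()
    except Exception:
        return local  # network failed: fall back to the cached list
    merged = dict(local)
    for u in updates:
        if u["action"] == "remove":      # e.g. the ~100 cancellations
            merged.pop(u["id"], None)
        else:
            merged[u["id"]] = u["studio"]
    return merged

cache = {1: "Studio A", 2: "Studio B", 3: "Studio C"}
updates = [{"id": 2, "action": "remove"},
           {"id": 4, "action": "upsert", "studio": "Studio D"}]
print(sync_studios(cache, lambda: updates))
# → {1: 'Studio A', 3: 'Studio C', 4: 'Studio D'}
```

Sending only the changes (a delta) also keeps that first-launch download small, even after a wave of cancellations.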
We went into the weekend knowing there were problems, but believing most had been, or soon would be, resolved. We had no idea whether participants would stick with us or ditch the app in favor of the other platforms we offered, so we held our breath and waited to fight more fires. But I remember seeing this tweet at some point over the weekend and feeling some relief that someone out there was having a good experience.
There were some additional positive experiences with the app in the "share your story" feedback, but these felt like the exception to the rule. So I was floored to see the big, blue sea of check-in data in the chart below: despite all the trouble, the app was used quite a bit. Given that popularity, we still don't know whether the app mostly worked as intended or whether people kept using it despite continued trouble. Were the people having trouble simply louder about it, or was their experience typical?
Unfortunately, one more glitch cropped up during the weekend. After a while, the app stopped displaying users' most recent check-ins: the check-in would register on our side, but the app wouldn't tell the user. Some folks told us they went home, found all their check-ins intact, and realized this was only a display bug, but we probably lost others to what I'd call a user experience problem of the highest order.
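The standard way to avoid exactly this silence is an "optimistic" update: show the check-in on screen immediately, then reconcile with the server in the background. Here's a toy Python sketch of the idea; the `CheckinFeed` class is hypothetical and not our app's code:

```python
class CheckinFeed:
    """Show a check-in immediately, then confirm with the server.

    The server remains the source of truth, but the UI never goes silent:
    even if the confirmation response is lost, the user still sees their
    check-in locally and the app can retry the send later.
    """
    def __init__(self):
        self.visible = []       # what the user sees right away
        self.confirmed = set()  # what the server has acknowledged

    def check_in(self, studio, send):
        self.visible.append(studio)    # update the screen first
        try:
            send(studio)               # then tell the server
            self.confirmed.add(studio)
        except ConnectionError:
            pass                       # queue for retry; the UI already shows it

feed = CheckinFeed()
feed.check_in("Studio A", lambda s: None)  # server reachable: confirmed

def failing_send(s):
    raise ConnectionError("response lost")

feed.check_in("Studio B", failing_send)    # server silent: still visible
print(feed.visible)  # → ['Studio A', 'Studio B']
```

Had the app rendered from a local record like `visible` instead of waiting on the server's reply, users would have seen every check-in even while the display bug's underlying request was misbehaving.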
In terms of functionality, the feedback on our features was mostly, but not entirely, good. Participants reported that the mapping feature was solid and that being able to follow an itinerary was a real win, but the data around the app's nomination functionality showed it to be of little use; most people turned to the web to cast nominations (web 84%, mobile web 10%, app 5.1%). The nomination feature was a late addition, because I was absolutely (and incorrectly!) certain that users who started on one platform would expect every step of the process to be available there. As it turned out, most users reached for mobile only when it was the most efficient option (during the open studio weekend itself) and returned to the web once the need to be mobile had passed. Note to self: avoid feature creep. It's better to launch with a core set of features and see what people ask for than to try to develop for every situation.
Tomorrow we’ll talk through all the lessons learned, but in the meantime we are curious about your own experiences with the app. Did your own experience match up with the trends we were seeing with the data and the feedback?