Seeking a Home on the Range

A very typical day in the lobby. Visitor liaison tries to help stem the tide of questions, but once one person is there asking...more follow.

As summer draws to a close, so does our testing for the location of our ASK team. You may remember the results from our earlier testing in our pavilion and just off the lobby. For the remainder of the summer we've continued testing in locations throughout the building to learn how various spaces work.

Testing in the lobby proved to be an incredibly tough spot. In this location, the team was highly visible, but this visibility was confusing because visitors saw them as general information points. The kind of information visitors were looking for included everything from "Isn't there a zoo around here?" (referring to the Prospect Park Zoo) to "I need to sign up for the Bernie Sanders campaign." There was so much of this questioning going on, in fact, that it became difficult for the team to actually work; in some cases, there were delays answering questions coming in via the app because the in-person interactions were proving too distracting. It should also be said that this working environment included plenty of noise.

Simply put, this location proved to be too early in a visitor's trajectory for visitors to be aware that there is an app and who the ASK team is in relation to it. Visitors need to hear about the app during the ticketing transaction and see the team as a second (or even third) point of contact for everything to really gel.

The sheer amount of traffic and pre-visit questions coming to the team necessitated the use of "staff workspace" signage. Normally, these signs are used only when desks are not occupied, but here the use has been adapted off the cuff.

These findings do not necessarily mean the ASK team won't eventually end up in the lobby, but they do help us figure out what that presence would need to look like in order to be more successful. A full marketing plan at the entry could help the awareness factor, so the team becomes a second point of contact even at this early stage. A "glass box" with planned interaction time a la Southbank Centre could also work in this location, allowing the team to get their work done. The planned interaction time would become key, though, in keeping with the project's engagement goals (something Southbank did well through meetups and other scheduled interventions).

One big thing the lobby testing has taught us? Even with traffic patterns that now have much better clarity, a human presence is still something people really crave. We need to do some thinking here about the greeting process, especially in light of how to work with our new information desk, which is part of the Situ Studio-designed furniture set; our Visitor Services team is on this one.

We also tested team location in the galleries and some of the findings here have proven interesting. How close should the team be to works of art? How best to handle directional questions? When in the visit is the public most responsive to the team's presence?—all of these questions are things we've been evaluating in this series of moves.

Testing in Connecting Cultures where the team was more embedded in and among the works of art.

The team was placed in our Connecting Cultures exhibition on our first floor; this location is post-ticketing, but fairly early in a visit because the exhibition serves as an introduction to the Museum's collection where some visitors begin. Testing here was a little complicated due to construction in the area, which created a considerable amount of noise (the team requested ear plugs at one point). Construction also closed off exits, so many visitors would get into the space and then ask directional questions along the lines of, "Now how do I get out of here?" Interestingly, we don't get many of these directional queries when people are using the app itself and that's great, but we ideally want the team in a location that can foster in-person conversations about art. This space proved interesting because once the team was embedded in the exhibition, conversations about art were on the rise. In the data collected, the construction seemed to cause an imbalance of directional questions, but that tide would likely be stemmed once the space was restored to its normal state.

Testing in our fourth floor elevator lobby, where the team's presence is more cohesive as a unit and there's proximity to works of art, but the space is also transitional.

Our next round of testing (going on now) involves our elevator lobbies on the fourth and fifth floors. These are small spaces, so the team has a concentrated visible presence. These spaces are used for small exhibitions and/or have works installed, but they are also transitional in that most people passing through them are on their way somewhere. Both spaces are in a direct traffic line to special exhibitions. The fifth floor is unique in that most people start their visit there and work their way down the building, so the team in the fifth floor elevator lobby is earlier in a visit. The fourth floor elevator lobby is still in the traffic line, but more of a midway point in someone's visit.

Fourth floor testing showed us that being in the middle of a visit pattern may be very beneficial. In this location, people seem more ready to talk about art and the team's presence is more recognized because in-building marketing prior to this point helps with the connection. In one recent interaction, I watched as someone stepped off the elevator, quickly making her way through the space. She spotted the team and you could see the lightbulb go off—"Oh, you're the one answering questions in the app? The answers are so great. Thank you so much." This is exactly the kind of thing we hope to see with the team being so accessible.

We're still testing these areas more fully, but there are some things we know already that will help us in our quest to find an appropriate home for this team:

  • Proximity to art helps drive art-related conversations.
  • Discovery of the team mid-visit helps recognition.
  • Transition spaces might be a good fit if the team is not overwhelmed with directional questions.
  • Directional questions are an inevitable part of being on the floor, so being in a space where it's easy to give instructions—Bathroom? ...Take the elevator down one flight. Basquiat exhibition? ...Right down this hall.—helps put us in a position where we can at least quickly answer with minimal distraction.

During all of this testing, one thing has remained a constant. While the visibility of the ASK Team is important for the engagement goals of the program, their very presence does not seem to change our app's usage numbers, so seeing the team at work does not necessarily help advertise the program.

As summer closes we've got a lot more to work with and we'll begin some internal discussions about where this team might eventually land. This will, of course, involve many more factors because we have to take the learnings and align them with the most important thing of all—institutional goals.

Measuring Success

We all struggle with how to measure success. We're thinking a lot about this right now as we begin to put the pieces together from what we've learned over the last ten weeks since ASK went on the floor. Three components help us determine the health of ASK: engagement goals, use rates, and (eventually) institutional knowledge gained from the incoming data. When we look at engagement goals, Sara and I are really going for a gold standard. If someone gets a question asked and answered, is satisfied, and the conversation ends—that's great, but we've already seen much deeper engagement with users and that's what we're shooting for. Our metrics set can show us if those deeper exchanges are happening. Our engagement goals include:

  • Does the conversation encourage people to look more closely at works of art?
  • Is the engagement personal and conversational?
  • Does the conversation offer visitors a deeper understanding of the works on view?
  • How thoroughly is the app used during someone’s visit?

We're doing pretty well when it comes to engagement. We regularly audit chats to ensure that the conversation is leading people to look at art and that it has a conversational tone and feels personal. The ASK team is also constantly learning more about the collection and thinking about, experimenting with, and discovering what kinds of information and conversation via the app open the door for deeper engagement and understanding of the works. In September, we'll begin the process of curatorial review of the content, too, which will add another series of checks and balances ensuring we hit this mark of quality control.

Right now the metrics show us conversations are fairly deep: 13 messages on average through this soft launch period (starting June 10 to the date of this post). The team is getting a feel for how much the app is used throughout a person's visit; they've been having conversations spanning multiple exhibitions over the course of hours (likely an entire visit). Soon we'll be adding a metric that gives us a use rate showing the average number of exhibitions, so we'll be able to quantify this more fully. Of course, there are plenty of conversations that don't go nearly as deep and don't meet the goals above (we'll be reporting more about this as we go), but we are pretty confident in saying the engagement level is on the higher end of the "success" matrix. The key to this success has been the ASK team, who've worked long and hard to study our collection and refine interaction with the public through the app.

Use rate is on the lower end of the matrix and this is where our focus is right now. We define our use rate as the percentage of our visitors who actually use the app to ask questions. From our mobile use survey results, we know that 89% of visitors have a smartphone, and we know from web analytics that 83% of our mobile traffic comes from iOS devices. So, we've roughly determined that, overall, 74% of the visitors coming through the doors have iOS devices and are therefore potential users. To get our use rate, we take 74% of attendance (eligible iOS device-wielding visitors) and divide the number of conversations we see in the app by that figure, giving us a percentage of overall use.
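To make that arithmetic concrete, here's a minimal sketch of the calculation in Python; the attendance and conversation counts below are hypothetical, just to show how a use rate in the ~1% range falls out of the formula.

```python
# Sketch of the use rate calculation. The smartphone and iOS percentages come
# from our survey and web analytics; the attendance and conversation counts
# below are made-up numbers for illustration only.

SMARTPHONE_RATE = 0.89   # visitors carrying a smartphone (mobile use survey)
IOS_SHARE = 0.83         # share of mobile traffic on iOS (web analytics)

def use_rate(attendance, conversations):
    """Percent of eligible (iOS-carrying) visitors who had an ASK conversation."""
    eligible = attendance * SMARTPHONE_RATE * IOS_SHARE  # roughly 74% of attendance
    return conversations / eligible * 100

# Hypothetical week: 8,000 visitors and 75 conversations in the app.
print(round(use_rate(8000, 75), 2))  # -> 1.27 (%), i.e. in the lower 1% range
```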

Use rate during soft launch has been bouncing around a bit, from 0.90% to 1.96%, mostly averaging in the lower 1% area. All kinds of things affect this number: the placement of the team, how consistently the front desk staff pitches the app as a first point of contact, the total number of visitors in the building, and the effectiveness of messaging. As we continue to test and refine, the numbers shift accordingly and we won't really know our use rate until we "launch" in fall with messaging throughout the building, a home for our ASK team, and a fully tested process for the front desk pitch and greeting.

Our actual download rate doesn't mean much, especially given that the app only works for having a conversation in the building. Instead, the "use rate" is the key metric. The one thing the download stats do show us is that the pattern of downloads runs in direct parallel with our open hours. Mondays and Tuesdays are the valleys in this chart, and that's also when we are closed to the public.

Still, even with these things in flux, our use rate is concerning because one trend we are seeing is a very low conversion on special exhibition traffic. As it stands, ASK is being used mostly by people who are in our permanent collection galleries. Don't get me wrong—this is EXCELLENT—we've worked for years on various projects (comment kiosks, mobile tagging, QR codes, etc.) that would activate our permanent collections; none have seen this kind of use rate and/or depth of interaction. However, the clear trend is that ASK is not being taken advantage of in our special exhibitions, and this is where our traffic resides. We are starting by getting effective messaging up more prominently in these areas. Once we get the visibility up, we'll start testing assumptions about audience behavior. It may be that special exhibition visitors are here to see exactly what they came for with little desire for distraction; if ASK isn't on the agenda, it may be an uphill battle to convert this group of users. Working on this bit is tricky and it will likely be a few exhibition cycles before we can see trends, test, and (hopefully) better convert this traffic to ASK.

There's a balance to be found between ensuring visibility is up so people know it's available (something we don't yet have) and respecting the audience's decision about whether to use it. Another thing we are keeping in mind is that the ASK team is in the galleries and answering questions in person—this may or may not convert into app use, but having this staff accessible is important and it's an experience we can offer because of this project. Simply put, converting traffic directly may not be an end goal if the project is working in other ways.

The last bit of determining success—institutional knowledge gained from the incoming data—is something that we can't quantify just yet. We do know that during the soft launch period the larger conversations have been broken down into 1,241 snippets of reusable content (in the form of questions and answers), all tagged with object identification. Snippets are integrated back into the dashboard so the ASK team has previous question/answer pairings at their fingertips when looking at objects. Snippets also tell us which objects are getting asked about and what people are asking, and will likely be used for content integration in later years of the project. The big step for us will come in September when we send snippet reports to curatorial so this content can be reviewed. We hope these reports and meetings help us continue to train the ASK team, work on quality control as a dynamic process, and learn from the incoming engagement we are seeing.
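For a sense of what one of these snippets looks like as data, here is a hypothetical sketch; the field names and values are ours for illustration and not the actual dashboard schema.

```python
# Hypothetical shape of a single snippet record; field names and values are
# illustrative only, not the real dashboard schema.
snippet = {
    "object_id": "EX.2015.001",      # made-up object identifier
    "question": "Why does the figure have green skin?",
    "answer": "Short, conversational answer written by the ASK team...",
    "tags": ["color symbolism", "materials"],
    "asked_on": "2015-07-14",
}

# In the dashboard, the team can pull up prior Q&A pairings for the object
# they're being asked about:
def snippets_for_object(snippets, object_id):
    return [s for s in snippets if s["object_id"] == object_id]
```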

Is ASK successful?  We’re leaving you with the picture that we have right now. We’re pretty happy with the overall depth of engagement, but we believe we need to increase use. It will be a while before we can quantify the institutional knowledge bit, so measuring the overall success of ASK is going to be an ongoing dialog. One thing we do know is the success of the project has nothing to do with the download rate.

A Personal Invitation to ASK

Knowing what we know about our visitors, we figured pretty early on that we would need to offer face time with staff as part of our ASK onboarding, that people might need a little help downloading and getting started. Turns out we were only sort of correct.

We thought people would have trouble with downloading and enabling the sheer number of settings our app requires, but turns out this part was easy.

People have needed that face time, though not so much for help with the download process per se, but rather to have the app explained and to be encouraged to download it in the first place. This was quite surprising to us, considering we require users to turn on multiple services for the app to function properly (wifi, location awareness, bluetooth, notifications, and privacy settings for the camera).

As I mentioned in my previous post, we've had some challenges figuring out messaging around ASK. After much initial testing, we think we've landed on some ways in which to move forward. This process was heavily informed by the work of our Visitor Liaison team. These three individuals, each of whom has worked with us in the past, were brought on board (in a part-time, temporary capacity) specifically to help us determine how to talk about the app—the "pitch" in both long and short form—and where visitors are most receptive to hearing it.

Visitor Liaisons are identified by cycling caps, which so far has worked pretty well. We may find that as the lobby gets busier, they need to wear t-shirts or something even more visible in addition. From left to right: Emily, Kadeem, and Steve.

Steve Burges is a PhD student in Art History at Boston University and a former curatorial intern in our Egyptian, Classical, and Ancient Near Eastern Arts department. Kadeem Lundy is a former floor staff member at the Intrepid Sea, Air & Space Museum and was a teen apprentice here for three years. Emily Brillon was one of the gallery hosts for our first pilot test and has recently completed her Bachelor's in Art History, Museum, and Curatorial Studies at Empire State College.

This team has been really key in helping us hone the messaging and in encouraging visitors to participate in ASK. From their efforts, we've learned which characteristics of the app experience visitors respond to most: that it's a customized, personalized experience; that it's about real people, or the idea of an expert on demand; and the immediacy—that it's right away, or on the spot.

Most people are receptive when they are in line.

We are also beginning to see patterns in where visitors are most receptive. We've been using the lines during busy weekends to our advantage, both for ticketing and the elevators—captive audiences help. But what has been most interesting to discover is that the Liaisons can most effectively get people downloading and using the app if they are the second point of contact.

At the ticketing desk visitors are asked if they are iPhone users. If so, they get a special tag (right) which helps us differentiate them.

As Shelley introduced in her previous post, so far the most important point in our messaging is our ticketing process. A few weeks ago, our admissions staff began telling people about the app at the point of sale. The goal here is to identify iPhone users early (our potential audience) and to inform them about the app. iPhone users are given a branded tag so that Liaisons know who to approach. When this process is in play, the Liaisons' job is that much easier because visitors know we have an app. Then the Liaison can focus on the hard part—explaining how it works.

Building is easy, but launching is hard.

If you think about it, building a project is fairly straightforward. It's a one-way street of sorts: a controlled process with steps involved, tests we can run, and timelines that make sense. Launching something like ASK feels harder because there are a lot of moving parts. Things don't always fall into the order you think they should, data doesn't always make sense at first glance, and you've got an unpredictable audience of visitors working with you. It's a little more like jumping off a cliff and needing a lot of help to figure out how to parachute. ASK is a little bit more complicated than an average app launch; we reconfigured the whole entry experience because the ASK team—who work in full view of the public—need to be a part of it. Early testing has shown us that visitors are interested in seeing the ASK team work, and the knowledge that there are real people answering questions has been very compelling. The ASK team working in full view of visitors seems to be vital to the project's success, and this means we're launching two related but separate components: an app and a visible human presence within the experience.

There are some days when Sara and I think movable furniture was both the best and worst idea we ever had. Even our director has been seen helping configure furniture setup at times.

In our case, the moving parts are quite literal. The lobby now consists of movable furniture which lets us configure and reconfigure how the ASK team becomes part of this space. As we continue through the summer, you'll find us testing the ASK team setup in various locations. When thinking about how to place the ASK team, we are considering both the visibility and the working process of the team—are they visible, is the space too distracting to get work done, are visitors able to approach the team while also being mindful of the work they are doing? The physical space is a consideration—are there enough power outlets for their desks, does traffic flow work in our favor? There are practical considerations, too, like how much we have to move furniture to make a location feasible.

Our original setup with the ASK team working in the glass pavilion didn't work so well.

We spent two weeks testing in our glass pavilion. The team was located just before the brick piers and they were on view as people entered the building; power benches were also located in the same area. Visitors were more interested in getting inside the main lobby to get to ticketing, and they would breeze by the team quickly, often not reading signs or even noticing their presence.

Visitors loved the benches in the pavilion setup, but they felt disjointed from the ASK team.

In this location, visitors would sit on the benches to charge their devices, but often they were at the end of their visit past the point at which the app would be useful. Limited availability of power outlets in the pavilion also meant that the benches and the hubs—the ASK team desks—could not work together as visual components and the setup felt disjointed. The pavilion also proved to be further complicated by the sheer number of events in the space that required moving the furniture almost daily. In the end, this location proved to have little traction and the result was our lowest use rates since launch.

Our second location test began when we moved the team to the area just off the lobby beyond the "art" doors. As part of the ticketing process, admissions staff would ask visitors if they had an iPhone and, if so, they were given a special admission tag. Our visitor liaisons would see the special tags as people entered the "art" doors and could begin the greeting process, introducing the team, and helping visitors get started with their download.

Our second round of testing moved the ASK team to an area just beyond the "art" doors. This worked well when traffic patterns were in our favor.

This setup worked exceedingly well for a couple of weeks because the natural traffic flow of the building worked in our favor. After most visitors received their ticket, they proceeded through the "art" doors to begin their visit, where they would see the team and be greeted by our liaisons. However, once our special exhibitions—Sneakers and Faile—opened on July 10, many visitors began using the elevator on the opposite side of the lobby, bypassing the team totally.

The setup just beyond the "art" doors allowed the ASK team to function as a cohesive unit.

Unlike the pavilion, the area off the lobby provided ample room for signage and integrated seating. Our visitor liaisons had a natural place to greet visitors as they came through the "art" doors. Even with our temporary setup, the area felt unified—it was clear this was a space dedicated to ASK where our staff were working and visitors were lounging. However, once the traffic flow changed, it was clear the benefits of the space couldn't outweigh the lack of visibility.

Our next steps will be to move the team out into the lobby and into various galleries where their presence is feasible. On a logistical level, these locations are tough—that careful balance of having the team in the space, but not in the way...combined with endless moving of furniture controlled by the location of the nearest power outlet—makes for a complicated puzzle. The trick is knowing that just because the furniture is movable doesn't mean it should always move. During testing, we'll need to be disciplined enough to set things up and try a location fully before moving the parts around, so we can get an accurate read on what's working (or not).

The Pedagogy of a Text Message: First Response

In my last post, I discussed our "opening prompt" and slight tweaks to make that a better experience. Our "first response" (the first message the user receives from the Audience Engagement Team after the user answers the opening prompt) is equally important because it frames how the user will experience the app and functions as the hook, encouraging them to continue their app experience. For our testing of the first response, we wanted to learn whether an information-based or inquiry-based first response was more effective at engaging users with art and the app. From the standpoint of the user, what type of response would help the user understand the nature of the app experience, and look more closely at art or engage more deeply with the artwork? From the standpoint of the Team, what type of response was best at providing an immediate reply, and what type would be most compelling to the user?

In our first round of tests we experimented with having the first response from the Team be strictly informational. The user received the opening prompt, "What work of art are you looking at?" and the Team responded with information only. Information-based responses varied: some provided information about a specific detail that the user would be able to observe on an object; other answers included broader geographic, historical, or other types of contextual information about the object. For example, here is an exchange that draws the user to look at specific details and provides contextual information:

Screenshot of an example information-based exchange.

In testing information-based responses, we learned users appreciated information—it provided context and a broader view of the object they were looking at, and had the potential to support closer looking. We also learned that the right type of information helps to establish the authority of the Team. Users trusted the app experience more when they believed that the individuals responding to them were knowledgeable, and could offer information that they would not have access to just by looking at the work of art or reading the label.

Our second round of tests experimented with an inquiry-based response. The question was simple, and we used the same wording consistently: "What drew you to that object?" A majority of users told us that they liked being posed a question: "I liked being asked the question, it made me look at it [the object] again," and "I liked that I had to think." Additionally, using this simple and prescribed first response had the advantage of providing an immediate reply to the user, and also gave the Team member additional time to gather information about the object. While the user was crafting a response, the Team was able to collect information about the object using the Dashboard and our collection of wikis.

In addition to providing time, using inquiry had the advantage of requiring action on the part of the user, and functioned as a tool to immediately engage the user with the work. It addressed one of our big-picture goals for the ASK app experience—provide visitors with an experience that has them engaged with works of art. By asking, "What drew you to this object?" the Team was able to quickly gather some information about the user's interest, which helped us engage the user on a more personal level and generally led to a deeper discussion about the work of art. By answering our question, users found themselves looking back at works of art more closely, and thinking more critically about the objects in the Museum.

With this in mind we will use inquiry as our first response moving forward, and also integrate it into the next version of the opening prompt.  In my next blog post I will discuss our process of crafting a new prompt, and what we learned from our second round of testing using inquiry.

The Pedagogy of a Text Message: Opening Prompt

What is the pedagogy of a text message conversation?  Can you actually have a pedagogy of texting? If so, what does it look like? How do you define it? How does one begin to find the answers to these questions? The ASK app functions like a text message conversation between users and the Audience Engagement Team.  Users can send a text message or a photo.

In our first few testing sessions we learned, very quickly, some basic rules which have remained constant in our two months of testing—in retrospect these basics are obvious—users wanted the experience to be similar to how they use text messages in their daily life, and they wanted the experience to feel personal:

  • Users wanted to receive an immediate response after they sent their first message.
  • Users preferred short messages in response, rather than a large blurb of text. We could send the same information, we just needed to do it in bite-size bits.
  • Users enjoyed when the conversation had an informal tone to it as it helped establish that there was indeed a real person responding.
  • Users appreciated receiving new information that they didn’t know, and they also appreciated when we revealed that we didn’t have an immediate answer to their questions—it actually helped to create more trust from the users—as per above, it helped to establish a sense of familiarity and a personal conversation.
Our original prompt.

Using this basic information as a starting point we set out to deconstruct our text message conversations, focusing specifically on the first message within the text message exchange.  We wanted to learn how users would respond to our “opening prompt” (the first message that the user receives when opening the ASK app).

The opening prompt that the app presents to the user has a huge responsibility. We learned from early testing that users did not want to read lengthy directions or go through a multistep “onboarding” process.  With this in mind we knew that the prompt needed to be short, and needed to get the user actually using the app immediately.  We created a prompt that was short, directed, and began with art: “What work of art are you looking at right now?”

Through our testing sessions, we wanted to know if the opening prompt was effective in quickly generating a conversation between the visitor and ASK team. From the user’s standpoint, what will get people interested and using the app quickly? From the Team’s standpoint, what will provide the best starting point for conversation?

Prompt was changed to elicit more deliberate action on the part of the user—a prompt that would require the user to not just immediately engage with the app, but also immediately engage with the art in the Museum in a thoughtful manner.

Data from post-testing feedback sessions (group conversations with testers) and information gathered from surveys brought us to the conclusion that the opening prompt was successful in getting users to use the app: it was easy to respond to and testers began using the app immediately. However, while the prompt was easy to respond to, testers were confused about what would happen next. Additionally, we'd see users arbitrarily choose an artwork to send, which was frequently the first work of art they saw and not necessarily an object they were interested in.

Based on this information we knew that we needed a prompt that, like this one, motivated testers to begin using the app immediately. The prompt needed to be equally directed, but somehow give the user an idea of what the app experience would be and leave the user motivated to continue the conversation. We decided that the prompt needed to elicit more deliberate action on the part of the user—a prompt that would require the user to not just immediately engage with the app, but also immediately engage with the art in the Museum in a thoughtful manner. This led us to our new prompt: "Find a work of art that intrigues you. Send us a photo."

It immediately proved to be a positive change. As with the previous prompt, users engaged with the app immediately, and in addition, they remarked on how the prompt got them to start looking at the art more closely, to really consider what work of art piqued their curiosity and interested them. Some users continued to note some confusion as to what the full app experience was "supposed" to be. However, we received roughly half as many of these comments as we did when users tested the first prompt.

We will continue to use this new prompt, and experiment with ways in which the Team follows up on users' first messages. I will discuss the process of finding the best type of first response in my next blog post.

Performance Optimization, Not Premature Optimization

At the Brooklyn Museum, we like to take inspiration from many things. After recently watching "Mad Max: Fury Road," we realized to make our servers go faster, we should have a dedicated staff member spit gasoline into a combustion engine connected to one of our servers...vroom vroom!

All jokes aside, for most consumer/public facing apps, performance is a very serious consideration. Even if your app has the best design, bad performance can make your Ferrari feel like a Pinto. While performance can mean many things in a technical context, in this post I'll just be talking about the performance of our back-end.

As I mentioned in an earlier post, we use an internal API to power both our mobile app and the dashboard our Visitor Engagement team uses to chat with visitors. This API has to be able to not just handle requests, but do it in a very performant way. This is especially true given the nature of ASK which revolves around a real-time conversation between visitors and our Visitor Engagement team.

When taking performance into consideration, it's easy to fall into one of the deadly programming sins: premature optimization. Premature optimization is what happens when you try to optimize your code or architecture before you even know if, when, and where you have bottlenecks. To hedge against this, the solution is simple: benchmark your application. Since I'm just talking about the back-end in this post, application in this context means our API.

When we benchmark our API, we're not just benchmarking the webserver the API is served from; we're benchmarking every component the API is made up of. This includes (but is not limited to) our webserver, API code, database, and networking stack. While our back-end is relatively simple by industry standards, you can see from this list that there are still many components in play that can each have an impact on our performance. With so many factors to account for, how do we narrow down where the bottlenecks are?

Similarly to power plants, back-end servers also need to be able to meet peak demand.

Well, first we have to ask ourselves, "What is an acceptable level of performance?" This is a question you can't answer fully unless you also add the variable of time to the equation. Similarly to the way power utility companies determine how much electricity they need to generate, we look at the same thing: peak load (see also: brownouts). Peak load is simply how much load you anticipate having during the busiest times. If we know our system can handle peak load, then nothing more needs to be done in terms of optimization.

In practice, our real bottlenecks are most likely to be the human element of the equation: our Visitor Engagement team. Since we only have a few team members working at any given point in time, and quality answers can sometimes take a little while to come up with, having too many people asking questions and not enough people answering can be our worst bottleneck at times. That being said, when we're optimizing for a certain load average on our back-end, we don't want to just aim for that number; we want to aim a bit higher to give ourselves some cushion.
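For a rough sense of what that capacity math looks like, here is a back-of-the-envelope sketch; every number in it is a hypothetical placeholder rather than our actual traffic.

```python
# Back-of-the-envelope peak-load estimate with headroom.
# All numbers here are hypothetical placeholders, not our real figures.

peak_concurrent_users = 200       # visitors chatting during the busiest hour (assumed)
messages_per_user_per_min = 1.0   # messages each active user sends per minute (assumed)
api_calls_per_message = 3         # e.g. send, receive/poll, object lookup (assumed)
headroom = 2.0                    # aim for twice the expected peak as a cushion

expected_rps = peak_concurrent_users * messages_per_user_per_min * api_calls_per_message / 60
target_rps = expected_rps * headroom

print(f"expected peak: {expected_rps:.0f} req/s, benchmark target: {target_rps:.0f} req/s")
# expected peak: 10 req/s, benchmark target: 20 req/s
```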

So how do we actually figure out where our bottlenecks are? In essence, this is a basic troubleshooting problem. If something is broken, where is it broken? Oftentimes the simplest way to figure this out is by isolating each component from the others and benchmarking each by itself. Once you have a baseline for each, you can then figure out where the bottleneck lies. Depending on what the actual bottleneck is, the solution can vary wildly and can become a massive engineering effort depending on the scale at which your application operates. I recommend reading engineering blog posts from Facebook, Netflix, and other companies dealing with extremely large scale to get a better sense of what goes into solving these types of technical problems.
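As a minimal illustration of that isolate-and-benchmark approach, the sketch below times the full API path and the database by itself so a slow layer stands out; the endpoint URL, database file, and query are hypothetical stand-ins, not our actual stack.

```python
# Sketch: benchmark each layer in isolation to find the bottleneck.
# The endpoint, database file, and query are hypothetical stand-ins.
import sqlite3
import statistics
import time

import requests  # third-party HTTP client (pip install requests)

def median_ms(fn, runs=50):
    """Run fn repeatedly and return the median wall-clock time in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# 1. Full stack: webserver + API code + database + network.
api_ms = median_ms(lambda: requests.get("http://localhost:8000/api/messages?limit=20"))

# 2. Database alone, bypassing the webserver and API code.
conn = sqlite3.connect("ask.db")
db_ms = median_ms(lambda: conn.execute("SELECT * FROM messages LIMIT 20").fetchall())

print(f"API median: {api_ms:.1f} ms | DB median: {db_ms:.1f} ms")
# If the two numbers are close, the database dominates; if the API is much
# slower, look at the webserver or application code (or the network).
```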

At the end of the day our number one priority is providing a great experience for our visitors. Our back-end is just one piece of the overall effort that goes into making sure that happens, and when it's running well, nobody should notice it at all. Kind of like a well-oiled machine running quietly in the background...so cool, so chrome....

Managing Expectations

We've talked a lot about how user expectations helped shape our implementation. There are times when it's incredibly valuable to listen to your users, but there are also times when you need to manage those expectations. As we got closer to launching the project more fully onto the floor, it became clear to us that if we were considering this an iterative project, we needed to start doing a better job of communicating that to our users.

We're being careful to say "test" our app instead of "download" our app.

One thing that we've looked at is our language; how are we positioning the app to our audience? In early drafts, we were using a lot of wording that everyone uses. Things like "download our app" and "use our app to ask us questions" were commonplace. We started a significant shift to always say "try our app" and "test our app." Also, when speaking about the app on the floor, we want to introduce the idea that we are in early stages and "your feedback will help us shape our app." This was surprisingly easy to do once we started to view our messaging through a testing lens.

Testing with iPhones first, hinting at more to come.

On large signage, you'll also find that we are including language that acknowledges we're deploying on iPhones first. That was a clear path after seeing that 83% of our smartphone-carrying visitors were on the iPhone; in an iterative process, we wanted to start simply by releasing and perfecting on one device before we move on to others. However, we've still got many visitors on other platforms and we need to manage the expectation that while we may be releasing on the iPhone first, we're thinking about all of our users. (In the meantime, anyone can ask a question using our iPad kiosks located in select galleries throughout the institution.)

The app store icon that we couldn't use.

It's interesting that the one idea we had, which would have been the most clear to users, couldn't be done. We wanted to put a "beta" ribbon over our app icon, so at the point of use it was reinforced that we were in testing. Know what? Apple has a rule that you can't have anything "beta" in the store, so this was a no-go.

Starting tomorrow our project will hit the floor for our soft launch. At every turn, we ask ourselves how we can impart to users that this incredibly iterative program is in its very earliest of days.

Graphics Tie It All Together

When we first began thinking about the lobby reconfiguration, the need for flexible and movable furniture was paramount and all of our discussions with the design teams stressed this. Situ Studio took this directive to heart and designed a furniture solution that addressed this need, and MTWTF did the same with graphics that helped communicate the different functions of the furniture. At first we were all pretty excited about MTWTF's initial design concept. We liked the look and the creative solution of offering a menu of options. We walked away from the first concept presentation pretty stoked.

We adored this in concept; taking inspiration from old desk calendars, the signage could be "flipped" to switch to various signs.

Then we reconvened to see scale prototypes. All I have to say is thank goodness for prototyping, because once we saw this solution in the space, we quickly realized it wouldn't work. It was simply too small in scale. We were so focused on the furniture and the need for mobility that we neglected to see the bigger role graphics needed to play in the reconfigured space.

Prototyping revealed that as much as we loved the concept, the reality was too small to see from across a busy lobby.

This reality check caused us all to pause and take a look at the entry experience with fresh eyes. How could graphics help make choices clearer? How could they help us be more welcoming? How could they help fill the space? MTWTF went back to the drawing board with this bigger picture in mind and came back with a more layered approach.

Large messaging at our entrance identifies where you are. We are using the existing canopy for the additional signage, seen here in a prototype.

The new approach begins with large messaging at our entrance that says Brooklyn Museum (we didn't say this anywhere on the exterior previously) and a large "welcome" banner hanging high in the brick arcade. This pairing nicely echoes the welcome sign we have at the north entrance and does a number of things: it confirms that you're in the right place, adds a nice splash of color to the exterior, and, for the arcade banners in particular, sends a nice message that doubles as a visual cue to help pull visitors to the center of the arcade and through into the lobby.

A welcome sign running throughout the brick piers helps guide you toward the ticketing area in the main lobby.

In the lobby, we needed a way to make the tall ceilings feel comfortable as opposed to cavernous, so MTWTF proposed hanging large banners from the second floor mezzanine advertising our special exhibitions. These banners, working in conjunction with a painted wall treatment, will draw attention to the south wall, where ticketing will be located, and visually fill the space. A dedicated banner (and ticketing bar) for Membership gives it a larger presence as part of the entry experience.

A ticketing area is defined with large exhibition banners and a blue wall to help draw you across the main lobby.

At the more human scale, an updated approach to wayfinding and signs stays true to our need for flexibility and mobility, but provides clearer choices for our visitors. The graphic treatment for shop/art/eat was moved from the wall to the doors in order to help ameliorate confusion around entry points into the rest of the building.

Signage for "art," "eat," and "shop" has been moved to the doors for better wayfinding. We also used two color vinyl so the signs will be visible in both daylight and night situations.
Signage for "art," "eat," and "shop" has been moved to the doors for better wayfinding. We also used two color vinyl so the signs will be visible in both daylight and night situations.

Signs for various ticketing scenarios stacked on stands provide flexibility to change out messaging based on need. The point size for the typeface was selected only after seeing scale printouts in the lobby, so the layers of messaging are clear at the right moments. For example, "Admissions" will be visible to most visitors from the arcade. As they walk closer, the admissions price becomes legible at about the center of the lobby, right before the spot where they have to commit to standing in line.

Prototyping included text sizes for admissions signs, seen here.

Finally, we're working on a new museum directory so that visitors can see all their options at once. The directory itself will be rather large and prominently placed so that its purpose is clear from afar; however, the listing of what's on view will require visitors to approach more closely.

A new directory will show visitors all of their options just before they enter through the "art" doors.

We have left ourselves some flexibility (no surprise there) in that we have the option for larger messaging on the directory as well in order to advertise programs like Target First Saturdays, Thursday Late Nights, and, of course, Bloomberg Connects.

Clearer Choices for Better Flow

Shelley and I like to cast a wide net when looking for inspiration and ideas, often looking outside the museum sector, from the customer experience at Apple and Fairway to transparent web design at Southbank Centre. When it came to re-thinking our entry experience, we felt pretty strongly that we needed an outside voice. Inspired by the fascinating data and ideas coming from Janette Sadik-Khan's work for New York City, we worked with Situ Studio to hire Arup as a traffic consultant. The team at Arup would help us evaluate our current traffic patterns, look at the proposed changes and flexible furniture coming in, and help think about pedestrian traffic as part of the project. The goal was to intelligently place the various components and their associated functions—ticketing, security, ASK team hubs—to help visitors navigate the space and understand their options. As we've talked about before, exactly how these components work together for the best possible visitor experience is something we need to determine through testing. Arup's recommended placements are the first set we will test.

In an early pilot, we tracked visitor traffic patterns in our lobby using pencils and photocopies.

We had a feel for the general traffic pattern in the lobby because one of our early pilot projects involved tracking and timing visitors in the lobby to see what a typical entry sequence was like—where visitors stopped to speak with staff, what path most people take through the space, popular gathering points, etc. Happily, our conclusions matched up pretty well to Arup’s assessment of our existing patterns (score BKM).

Arup's version after their own analysis was very similar to what we had found during the pilot.

As I mentioned in my last post, while we were originally aiming to test placement of each component, we ended up having to “fix” the location of ticketing in order to proceed with design. Naturally, this decision also affected traffic planning by limiting the number of variables. Arup was able to focus on how best to move people through the space to ticketing using the info desk, security desk, hubs, and benches as guides.

Info and security desk placement will help guide visitors to the center of the brick arcade, so they are lined up with ticketing when they enter the lobby.

A key component of a good visitor experience is clear choices; it's important for people to see and understand their options. Think of all the questions you ask yourself when entering a museum for the first time—where are the restrooms, where do I get tickets and how much are they, what can I see, etc. Ideally architecture, furniture, and wayfinding all work together to help visitors understand their options at key decision points. Our architecture is such that there is little (or no) line of sight from the main entrance into the lobby. This means visitors have two moments of orientation: one when they enter the revolving doors and one once they pass through the brick arcade. To help with this, Arup focused on clear pathways, using security and info desks centered on the brick arcade to draw visitors over. Info is placed ahead of ticketing in case visitors have any questions before they commit to purchasing a ticket. The info and security desks are centered with the goal of drawing people to the center of the arcade so that they enter the lobby through the middle of the space. Once inside the lobby, ticketing is straight ahead.

Movable furniture in the form of power benches and hubs can help further direct traffic. Circles show areas of gathering spaces.

After ticket purchase the next decision point is what to see, so we are placing a new museum directory at a natural gathering spot beyond the point of sale. In another effort to make choices clearer, we moved our previous SHOP/ART/EAT graphic to the doors, as we noticed some confusion around which door led where, particularly since there are two entrances to the galleries. This is one way we're using graphics to help communicate options; I'll discuss this a bit more in a future post.

New graphics and adjustments to existing graphics help guide visitors. In the case shown here, our old "art," "shop," and "eat" are being moved from walls to the actual doorways. Additionally, they are reoriented higher so they are visible over people's heads.

A big challenge for us is the fluctuating nature of the space—not only does traffic vary based on exhibition season, time, and day of the week, but we are constantly holding programs, performances, special events, and even film shoots in the pavilion and lobby. We have to be able to adjust set-up based on our needs of the day, but still help visitors navigate the space and this is really where Arup’s insight has been most useful. They have offered us several placement scenarios: a typical day, a busy lobby day (long queuing needs), a busy pavilion day (event), and Target First Saturdays.

Traffic patterns differ when we need more room for ticketing lines, like on busy days or at Target First Saturday. Moving the benches toward the brick piers is the key move here, so we can create more room in the lobby proper.

Interestingly, the maximum number of hubs Arup ever recommends placing in the lobby is four, not the full six. Instead, they recommend deploying two hubs elsewhere in the building to take advantage of traffic patterns to special exhibitions and/or other gathering areas in the building for the purpose of reminding visitors about this “thing” they saw in the lobby. This got Shelley’s and my attention since we’ve always wondered a bit if the lobby is too early in the entry experience for really engaging visitors around ASK.

We’ll definitely try Arup’s route—two or four at a time—and see how it goes, but as I said, this is only a first set of placements to try out. We’ll need to adjust as we go and take time to land on the set that works for our varied needs.

Solving Three Clicks to the Art

As you've been reading, ASK Brooklyn Museum isn't just about an app—it's an initiative that seeks to re-envision our visitor experience from top to bottom. That "top" starts at our plaza and continues to our lobby and throughout the building. Over the next few weeks we'll be talking about various ideas—learnings from the Apple store, how to create an entry experience where the focus is on people, how we greet you, and why the ASK team should be a part of it—but today I'll talk a little about our hopes and dreams for the lobby as a flexible space that works to better incorporate the most important thing for a museum—more art.

Our heavily used plaza serves as a front porch for the community.

If you looked at our entry experience today, I think you'd find it lacks focus. We have an amazingly beautiful building and an enormously successful front plaza that draws people, acting as a front porch for our community. We see people, especially now in the incredibly nice weather, using it to lounge, gather, and play. We offer free wifi in case you want to sit and work, but have you noticed something missing? There's no art. Arguably, our fountain was designed by WET, so we could consider that an artful experience, but while many museums have art installed before you enter, we don't.

Connecting Cultures provides the introduction to our collection, but its location after ticketing makes it the "third click" in your visit.

As you move into the lobby proper, we've got a similar issue going on. There are some works on view in our lobby—notably our collection of Rodin sculptures, our American owls and lion, and The Rebel Angels. But, overall, when you come into our main lobby your experience is overwhelmingly one about "transaction." Our current visitor desk is the biggest and most powerful symbol in the entry experience of the museum, and we started to question what kind of message that was sending. Our incredibly successful introductory exhibition, Connecting Cultures, only begins after you pass the threshold of ticketing, and then pretty far into the building itself. I've often used this analogy from my own industry: our entry experience is a little like "three clicks to the art," and if the museum's primary function is the display of art, that's a big issue. Simply put, we think you should encounter art much earlier in your visit because that is the primary reason you come here.

The front desk is the overwhelming experience of the lobby; its circular form confuses traffic patterns and the fixed nature of the desk is limiting.

When we looked around, it was clear that we needed to rethink that central desk. Installed during our 2004 renovation, the round desk was meant to service visitors from our south and north entrances equally, but in practice it became confusing to visitors who didn't know how to orient themselves. Also, it became clear that because we couldn't move it or change the configuration, we couldn't easily accommodate more art—the desk became the elephant in the room.

As part of this project, we are replacing the current desk with ticket bars designed by Situ Studio. At our June start, we'll be anchoring the ticket bars to our back wall so we can pair them with large wayfinding signage, but the key is that we can pretty easily change that configuration if it's not working, something the fixed desk never afforded us. This means we can also now incorporate more art and do so in a way that makes it more central to the visitor experience.

There are so many factors when it comes to putting art in our lobby.  For this, I'm turning to Kevin Stayton, our Chief Curator:

Bringing art and people together is why we are here. Art can astonish and amuse; it can be stimulating and it can be moving. We think you should encounter art as soon as you enter the building. However, we have to balance the presentation of art in the lobby with a number of other factors: Will the artwork get in the way of traffic flow or, perhaps, be in danger because of it? Will it infringe on our ability to offer programs in the space, or to use the space for events like movie shoots? And, perhaps most important, does the space provide the right environment for the art? Will the work of art look good in such a large space and will it be safe from damage with the amount of light and the temperature fluctuations of a lobby environment? These are complicated questions that we are committed to tackling in order to make the experience of art an immediate one when you arrive. We think ALONG THE WAY, a monumental sculpture by the artist KAWS, will be a perfect introduction to the Brooklyn Museum, and when you see it we hope you will too.

KAWS (Brian Donnelly, American, b. 1974). ALONG THE WAY, 2013. Wood, 216 x 176 x 120 in. (548.6 x 447 x 304.8 cm) overall. Brooklyn Museum; Gift in honor of Arnold Lehman, TL2015.27a‒b. (Photo: Adam Reich, courtesy of Mary Boone Gallery, New York)

When you come into our lobby in June, you'll be greeted by an exhibition by KAWS, which includes two paintings in addition to this enormous sculpture. We're now much closer to "one click to the art" and can't wait to see how art in all its forms changes the visitor experience from the get-go.

Scaling Back

In every project there's always a moment when the timeline starts to shrink. You look at your launch date and the to-do list (ours is in the form of Trello cards), and there's that moment of reckoning where you decide to scale back. These are not easy decisions, but you make them to avoid over-extending the project and to do whatever ships in the launch product really well. This is that story for us.

You may remember we were aiming to tie pre-visit information into the app and present it to users based on their location. This would give our app dual functionality: outside the building, you get what you need about getting here; inside, the app focuses on the activity of seeing art. While this is a needed feature, it became clear that we could launch with a single-purpose ASK app and add the pre-visit functionality in a future release. Why did we make this call? Well, that timeline problem hit us again...and that user expectation problem hit us, too. It's an interesting confluence of a lot of the things I've been blogging about lately.
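For the technically minded, here's roughly the kind of location-based switch we had in mind, sketched in Swift. This is a sketch only, assuming a single circular geofence around the building; the coordinates, radius, and screen-swapping methods are placeholders rather than our production code.

import CoreLocation

// A minimal sketch of location-based mode switching using a single geofence.
// Coordinates, radius, and method names below are illustrative.
final class VisitModeManager: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let museumRegion = CLCircularRegion(
        center: CLLocationCoordinate2D(latitude: 40.6712, longitude: -73.9636), // approximate
        radius: 150, // meters; illustrative only
        identifier: "brooklyn-museum")

    func start() {
        locationManager.delegate = self
        locationManager.requestAlwaysAuthorization() // region monitoring needs "always" access
        museumRegion.notifyOnEntry = true
        museumRegion.notifyOnExit = true
        locationManager.startMonitoring(for: museumRegion)
    }

    // Inside the building: the app focuses on ASK and the activity of seeing art.
    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        showAskExperience()
    }

    // Outside the building: the app shows pre-visit information (hours, directions, tickets).
    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        showPreVisitInformation()
    }

    private func showAskExperience() { /* swap in the messaging interface */ }
    private func showPreVisitInformation() { /* swap in the pre-visit screens */ }
}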

Let's look at the timeline problem. Alongside ASK development we've been in a massive clean-up of our back-end systems. We've been migrating our website to a new CMS (ExpressionEngine) because our own home-baked CMS (yeah, that's right, years and years ago we made the excellent fail of trying to roll our own) was a total disaster. Knowing a website redesign was in our near future, and that the goal was to integrate ASK content into those pages at some point, we needed to get our website off our current systems and onto something less like a house of cards; this meant moving content into a new CMS, designing slightly more modern layouts, and thinking about responsiveness.

This website migration was on a parallel timeline with ASK, and that created problems for getting the pre-visit information into the app. The best way to do a project like this is to launch the website migration first with an API behind it, then launch the app, which uses that API to pull the pre-visit info. In our case, the website migration couldn't quite happen fast enough, so the mobile application timeline started to be affected.

For now, we're going to release a single purpose ASK app and tie the pre-visit information into a later release after our website migration.

Our first thought, as a stopgap, was to reskin our existing mobile website. Currently, if you hit our website from a mobile device, we give you an old-school (this predates responsive design) slimmed-down version of what you need—exhibitions, events, directions, hours. The thought was: put the geofencing in the app and give the user a web view of the mobile site. This made sense in theory, but in practice we hit the user expectation problem.
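In practice, the stopgap would have looked something like this: a native view controller wrapping the existing mobile site in a web view. This is a minimal sketch, not shipped code; the URL and class name are placeholders.

import UIKit
import WebKit

// A bare-bones sketch of the web view stopgap: load the existing mobile site inside the native app.
final class PreVisitWebViewController: UIViewController {
    private let webView = WKWebView()

    override func loadView() {
        // The web view becomes the whole screen for the pre-visit section.
        view = webView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Placeholder address; the real target was our mobile-optimized site.
        if let url = URL(string: "https://www.brooklynmuseum.org/") {
            webView.load(URLRequest(url: url))
        }
    }
}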

A web view in a native app feels slow and buggy. Users accept that glitchy feeling on the mobile web—we've all come to expect it—but inside an app the story is totally different. That slower response becomes immediately apparent and glaring. We tried to speed things up and change the way elements loaded, but the base problem remained; our shortcut just didn't make sense.

So, we had to adjust. The mobile app will launch in June as a single-purpose, in-person experience. Then we'll finish our website migration with an API behind it and soon after bake the pre-visit material back into the app in a future release. We don't expect this process to be a long one; the functionality will likely be added by the end of the summer, if not sooner. But no matter how much we wanted this info there from the start, scaling back made the most sense.

For now, we're aiming to do a good job of ASK. Then, we'll aim to do a good job of pre-visit information.

Location, Location, Location

Last month we had the pleasure of introducing the six members of our audience engagement team, the specialists who will be engaging with visitors via the app. Since then you've heard a bit about our training process and how we're gathering and sharing information in order for the team to feel comfortable and confident about our encyclopedic collection. What we haven't talked about is where all this is taking place. When the team was first brought on board, we created an impromptu workspace for them on our second floor mezzanine—a space that is adjacent to the construction area of the second floor galleries and currently off-limits to visitors. If you saw our LaToya Ruby Frazier or GO exhibitions, you've been in this space. A little sterile at first, the space soon became their own, with posters, working note boards, and the like; they jokingly refer to it as the "command center." Generally, the space worked well, giving the team a place to gather and giving us a place to hold discussions after app testing sessions.

The ASK team fielding questions using our second floor mezzanine as a temporary office space.

As we approach the soft launch of the app and the arrival of the new furniture, the team has relocated to a public area just inside the Great Hall on the first floor. This area is a main thoroughfare for most foot traffic (hence its internal nickname, "42nd Street"), which admittedly makes it a challenging work environment, but that's kind of the point. The team will eventually be in the lobby, which can be quite chaotic, so we wanted to give them a transition period in a busier space to start getting used to such distractions. Mainly, though, we wanted to make the working process more visible and transparent in order to drum up excitement and anticipation on the part of our visitors. And we're not the first ones to try this. Southbank Centre did this for their website redesign, though in a more formal fashion. In true Brooklyn Museum style, ours is a little scrappier.

The ASK team has relocated to "42nd Street" to help acclimate them to working in a busy space before their lobby move in June.

Taking a cue from our colleagues across the pond, we are also advertising our testing sessions and visibly sharing feedback, though for us it's in the form of sticky notes on the wall, where we invite testers to write down the one thing we should know from their testing experience. Now, I have a love/hate relationship with sticky notes, as I've shared before, but their appeal is undeniable. Testers jump at the opportunity to leave us their thoughts this way, and the notes have been useful for the team to read, since most are quite positive and a real morale booster.

What's the one takeaway we should know from your experience using our ASK app?

It’s interesting to see how quickly this is working. Most visitors walking by automatically slow down a bit to figure out what’s going on and read our sign, even more so during really active periods when the team is answering incoming queries during testing sessions or using the conference table for feedback discussions. I hope this continues to drum up visitor interest and helps acclimate our team to working in a hectic environment.

Agile by Design

A series of furniture designed by Situ that we could use in a modular and reconfigurable fashion. The design of the components helps differentiate function.

As I introduced in a previous post, SITU Studio was brought on board to design a mobile, flexible, and temporary set of furniture components that would allow us to test different configurations in the lobby.

There were several parameters we knew going into the design process:

  • We need to be able to clear the lobby and pavilion of furniture for programming or special events on a fairly regular basis, but have no good place to store the furniture elsewhere in the building.
  • Security is a required part of the entry experience, but we wanted to somehow make it more inviting, more integrated.
  • We need a separate information desk, particularly during busy times when admissions staff are focused on ticketing and don't have as much time to devote to answering general queries.
  • The furniture components themselves needed to help communicate that there are different services going on, i.e. the ticketing desks need to look different from the Audience Engagement team “hubs” to help underscore the different functions.

Seeing prototypes in the space has been incredibly helpful.

Throughout the design process, there was much back-and-forth as we hammered out the particulars. As we discussed traffic flow (more in a future post about this) and began to really delve into our needs for the space, we were able to narrow down the components and their functions.

However, finalization of the design only happened after we made the decision to place the ticketing bars in a row against the south wall. This placement was based on previous configurations of the lobby (pre-circular desk) and in consultation with the traffic folks. Turns out that modularity and flexibility only get you so far in planning. You have to put a stake in the ground for that flexible solution to anchor to or there’s no consistency. The ticketing bars themselves are still moveable, but as you’ll see in a future post, we’re centering messaging and traffic flow decisions around this location so while we could move them, we hope they work there.

The ASK team is working using a temporary setup, but we've found that being together as a team has been important. Often, they need to lean over and ask each other questions.

Unfortunately, what we are finding now is that, in at least one instance, the furniture is suffering from the same agile fail that Shelley just wrote about. By necessity, furniture design had to progress ahead of staff hiring, which means the hubs as designed may not meet the needs we are now seeing. We envisioned the hubs as individual desks for ultimate flexibility in placement, which meant Audience Engagement team members would work individually at their desks. But now that we have the team in place, what we're seeing is that they work as just that—a team. They are currently at a table together, and so far during our app testing sessions they speak with each other and in some cases crowd-source the answer among themselves. This will be difficult to do with the current hubs. What's more, this team process is fascinating to watch. And since one of the main goals of placing the team in the lobby in the first place is drumming up interest in ASK, we can't ignore the draw of the team's working process.

The hub design, the place where the ASK team will be working in the lobby, had to be finalized prior to staff hiring. The design reflects one person working alone, when the reality is that the team works as a group.

All is not lost, however, and we are working with the traffic consultants to see if and how we can group the hubs together in a way that works. At the very least we can pair the hubs so team members are always partnered. And although the team works a certain way now, as we continue to test the app and see how visitors use it and where they are interested in engaging with us around it (it might not be in the lobby), the individual hubs may end up being exactly what we need.

We're Only Human

When you've got any tool that is designed to answer questions, the danger is that people think it's an automated system; with ASK we need to get across very quickly that you're being connected to real people. This means personalizing the app to a certain degree, while not going so far as to create a community. We want the experience through the app to be human, and we want that personality to shine through from start to finish.

Comparing the first version of the ASK logo with our latest.

The first thing a person sees is our branding, and we wanted a look and feel for ASK that reflected a personal touch. Our first attempt, which you've seen on this blog quite a bit, used handwriting to convey that personal interaction. In our most recent incarnation—the one we'll launch with—we've taken this one step further. The ASK that you see now has moved toward something a little less restrained and a bit more active; we hope our new ASK is more spontaneous, just like the messaging interactions in the app itself.

A new onboarding process presents a series of messages that introduce the team.

The next thing we adjusted was the onboarding process. Previous user testing sessions had shown us that our start question—What work of art are you looking at right now?—worked well. Users immediately made a beeline for the nearest object and actively started using the app to answer that question; we didn't need any instructions or those pesky tutorial screens because the question itself functioned as the tutorial. This question alone, however, couldn't convey that there was a team of people ready to connect with you about art. While this would eventually be accomplished through the team's responses to a visitor's question—all varied and with a personal style—that was coming too late in the game. So, we adjusted the onboarding to include a couple of messages that would introduce the team to the user before the question that gets them started.
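To give a sense of what that adjustment amounts to, here's a small sketch in Swift. The intro wording is illustrative (only the start question is the one we actually use); the point is that the introduction arrives as ordinary chat messages rather than a tutorial screen.

// A simplified sketch of the adjusted onboarding sequence.
// The first two messages are illustrative wording; the last is our actual start question.
struct OnboardingMessage {
    let text: String
}

let onboardingSequence = [
    OnboardingMessage(text: "Hi! We're the ASK team: real people here at the Museum."),
    OnboardingMessage(text: "Send us a question about any work of art and we'll answer."),
    OnboardingMessage(text: "What work of art are you looking at right now?"),
]

// The messages are appended to the chat one after another, so the introduction
// reads like the conversation itself rather than a set of instruction screens.
func presentOnboarding(_ messages: [OnboardingMessage], append: (OnboardingMessage) -> Void) {
    messages.forEach(append)
}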

Another adjustment involved the protocol of the ASK team. They had already been working with less formal language, but user testing sessions demonstrated how critical that initial contact was. With the first response, especially, the team has been experimenting with a friendlier approach, often encouraging the user with "that's a great question" or offering an indication of their own personality: "I love that object." One critical lesson the ASK team learned was to admit when they didn't know an answer. In some cases, saying "actually, we're not sure, but we do know this about the object" is better than trying to redirect the conversation without the admission; it turns out that admitting when we don't know is about the most human thing we can do. Subsequent user testing has shown these shifts to be working quite well, with testers specifically citing the team's tone and responses as feeling personal and nowhere near automated. The team continues to do a lot of experimentation around language, directed and open questions, and tests of our start message. Monica will be blogging more about those lessons learned in the coming weeks.

We've ditched the scale that indicated wait time in favor of inline messaging.

We have an unavoidably tricky bit with ASK because you are connected to a real person on the other end; no matter what, there's going to be some wait time. The reality is we answer questions as they come in and we can only type so fast. In the early days we tried to tell users where they were in the queue—a total fail, because users didn't parse the small notification. In a second incarnation we implemented a wait scale, but users found this equally confusing, and the scale itself felt automated. It was quickly pointed out to us that even the word "queue" made very little sense.

We ditched the scale.

In the end, we switched to inline system messages. If your wait is getting long, we fire off a nicely worded system message in the chat. Now you have an idea there's a wait, but you don't have to learn an architecture (a scale, a position number)—the message arrives in a format you already know and use. A friendly message is just more human.
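Here's a rough sketch, in Swift, of what this looks like under the hood. The sixty-second threshold and the wording of the message are illustrative, not our actual values.

import Foundation

// One entry in the conversation: visitor text, team text, or an inline system message.
enum ChatEntry {
    case visitor(String)
    case team(String)
    case system(String)
}

// If a reply hasn't started within the threshold, drop a friendly system message into the chat.
final class WaitNotifier {
    private var timer: Timer?

    // Called when a visitor's question goes out.
    func questionSent(appendToChat: @escaping (ChatEntry) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: 60, repeats: false) { _ in
            appendToChat(.system("Thanks for your question! We're helping a few other visitors, but we'll be with you shortly."))
        }
    }

    // Called as soon as a team member picks up the question; cancels the pending wait message.
    func replyStarted() {
        timer?.invalidate()
        timer = nil
    }
}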

In our case, we're going with Google's "so and so is typing" instead of the three dots to reinforce there are real people behind the answers.

In addition to finding a friendly and not-so-automated way to tell you about wait time, we also needed to give users an indication that there is activity going on—that someone is working on a response—and the three dots are what users expect. In our implementation, we're going the Google "so and so is typing" route, and we'll be using the first name of the ASK team member you are connected with. This is a great example of pairing the need to be more human with our technical implementation. By using the name instead of just three dots, we're meeting user expectation and doing so in a way that we hope makes the experience a more personal one.
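The client-side piece is tiny; the interesting part is simply that the server tells the app who picked up the conversation. A small sketch, with the name as a placeholder:

// Build the indicator text from the first name supplied by the server.
// Returning nil keeps the indicator hidden when no one has picked up the conversation yet.
func typingIndicatorText(for teamMemberFirstName: String?) -> String? {
    guard let name = teamMemberFirstName, !name.isEmpty else { return nil }
    return "\(name) is typing..."
}

// typingIndicatorText(for: "Monica")  // "Monica is typing..."
// typingIndicatorText(for: nil)       // nil, so nothing is shown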

Some of these changes (and others) came on the heels of a visit from Genevieve Bell, whom I was fortunate enough to meet at Webstock, where we were both speaking. Genevieve was intrigued by ASK and was generous enough to come by to talk with our team and test the app. Her visit came at a pivotal moment for us.

Genevieve Bell speaking at Webstock 2015.

We had been testing ASK with small groups and were just moving into the phase of testing on the floor more widely. During those early testing phases we had been given plenty of feedback from users, but we were also holding off on a lot of implementation in the name of agile. When Genevieve came in, she zeroed in on the major issues we had been grappling with, and she could articulate those things so clearly that it put much of the earlier feedback from user testing into perspective for us. We were incredibly fortunate in both the timing and the person—thank you, Genevieve—because we were inspired to keep solving difficult problems.

I wouldn't go so far as to say we've got all these issues solved, but it feels like we've got a decent start.

Fighting the Three Dots of User Expectation

In my previous post, I talked a lot about agile development and where we failed it. Agile has also thrown us some serious curves in the realm of user expectation. In an agile process, you want to produce a "minimum viable product," which means sending a product to the floor that is often unfinished and in a changeable state. This means making serious choices about what to prioritize in an attempt not to overbuild before your users tell you what they need. That sounds great in theory, but it rubs up against some serious user expectation problems, because users don't know anything about MVP—they just see what we give them and assess it against the technologies they use every day. Two features in ASK illustrate this collision between user expectation and agile.

One feature was "the three dots." You know the three dots, right? If you use iMessage or WhatsApp, those little three dots appear when someone on the other end is typing a response for you. They are an incredibly important indicator that something is happening, and they've become the de facto standard in messaging clients. Google handles this a bit differently by saying "so and so is typing," but it's the same feature. The three dots, however, are incredibly difficult to implement, and in the early days of our development we decided to table them. Given the complication of implementation and the time it would take, we figured we should let our users tell us if they needed it. They did, and they were loud and clear about it.

Most messaging clients use the three dots to indicate activity on the other side.

The other feature was "push to resend." In this case, if your message does not go through, the app usually gives you an indication and then you push, swipe, or pull down to initiate a resend. In the early days of our app, we thought we'd be more helpful. We'd show you an exclamation point indicating a problem with the message and then, in the background, do some jumping jacks to try and resend it ourselves. Users didn't know what to make of this because every other implementation they were familiar with did the opposite: the user is required to try the resend, and those apps make it a simple push, swipe, or pull down to do so.
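Here's what the expected pattern boils down to, sketched in Swift with placeholder names for the UI and networking pieces: mark the failed message and wait for the tap, rather than retrying quietly in the background.

import Foundation

// A failed message keeps its identity so a later tap on the badge can retry exactly that message.
struct OutgoingMessage {
    let id: UUID
    let text: String
}

final class MessageSender {
    // Stand-ins for the real networking layer and the UI call that shows the exclamation badge.
    var deliver: (OutgoingMessage, (Bool) -> Void) -> Void = { _, done in done(false) }
    var showRetryBadge: (UUID) -> Void = { _ in }

    func send(_ message: OutgoingMessage) {
        deliver(message) { [weak self] succeeded in
            // On failure, surface the badge and then wait: no silent background retries.
            if !succeeded { self?.showRetryBadge(message.id) }
        }
    }

    // Wired to a tap on the exclamation badge.
    func retryTapped(_ message: OutgoingMessage) {
        send(message)
    }
}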

As seen on Instagram, "push to retry" is functionality users have grown to expect.

There's a careful balance here that we've learned from. You want an agile process, but you also need to think about what users are expecting, and in some cases you should go ahead and just bake it in. Why? Because when there's a strong expectation of how something should work, set by popular example, you can't fight it. You can wait (yay, agile), but the result can be fighting massive fires when users get frustrated and then having to push complicated features out quickly.

The other massive learning: don't try to reinvent the wheel. If something is being used out in the world in overwhelming ways, copy it with the best of them. Anything else is fighting an uphill battle of user training that you just don't need; we learned this very early on when, after a testing session with incredibly frustrated users, one member of my team came back into the office asking, "What's the budget code for therapy?"

User expectation has hit us in other ways not related to agile and this problem is still ongoing as we move the project to the floor. Simply put, visitors don't seem to expect much from a museum app. I hate to say that, but it's true. In test after test, we would show them the app home screen and ask what they thought our app did. Without really looking at the screen to assess what was in front of them, they would tell us what they expected a museum app would do. Guess what? They gave us answers about finding exhibition information, events, and playing audio/video. The thought that a museum app might actually do something different was not in the vocabulary.

This says much more about the state of technology in our industry than anything else and it presents us with an uphill battle moving forward. On some days this has left me feeling like we should have gone the route of developing something that had no existing vocabulary, but most days I think this is a darn good fight to have.

Learning from Agile Fails

As we march toward our June launch for ASK, it's a good moment to look back at some of the issues we've faced along the way. This post is about our agile implementation: not where it failed, but where we failed it. There's been a lot of talk in the museum world about agile, so this may be a worthwhile read if you are moving toward using it. For the most part, we are extremely proud of implementing agile across the project. We've used this learn-as-you-go planning methodology not just for software development, but also for concept development and project workflow. Agile has given us critical discipline; everyone here thinks in terms of honing and reduction in an attempt to create a minimum viable product that is fully user tested. At every turn, we've asked questions, A/B tested solutions, and responded to product use. As a team (which includes staff throughout the institution), I can definitively say we've come an extraordinarily long way; the project creation cycle demonstrated by ASK is very different from that of past projects I've been involved with. As valuable as the agile process has been, we've learned so much from where we've failed at agile principles that those failures are worth exploring specifically.

Most often, we failed at agile when timelines started to collide. On a project as large as changing the visitor experience from entry to exit, there are many parts of the project, all related, and all running on parallel timelines. Once something happens to one timeline, everything else has to shift accordingly. That's easier said than done and snafus in timelines are often unavoidable.

The technical timeline started back in April 2014; we had come off a series of pilots that determined what we were going to build, so we could get started on mobile immediately; no problems there. Issues started to crop up, however, when we began the dashboard build. The dashboard is what the audience engagement team uses to field incoming questions from the mobile app. On the technical side we needed to build the dashboard and the mobile app on parallel tracks because the two products inform each other. The breakdown began when the audience engagement team hiring process got delayed; it took us longer than we thought to get the leads in place and then the team hired. Getting the right people for these positions was critical, and the delay, while unavoidable, was worthwhile. Not having the ASK team in place sooner, however, meant that we didn't have our user base at the critical build stage. If we were going to make our technical timeline, we had to take our best guess at what the dashboard should do and how it would be used.

We took our best guess in architecting the dashboard, but that wasn't always the right one.

Our best guess was a good one, but looking back I can list the things we should have waited on. We should have just worked to get messaging running seamlessly and delayed other aspects that added to dashboard complexity. Some of those aspects included ways for the ASK team to break down larger conversations into "snippets" of usable content. Snippets could be forwarded to curators when the team couldn't answer a question. Snippets could be used to train the ASK team by giving them a way to access that content later; snippets are tagged with object IDs for easy reference when future users ask about the same object. Snippets can also be used throughout the building and on our website in the form of FAQs—a critical integration that's part of our eventual project scope.
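To make "snippet" concrete, here's a simplified sketch of the kind of record we're talking about. The field names are illustrative, not our dashboard's actual schema.

import Foundation

// One excerpted piece of a conversation, kept for reuse, training, and forwarding.
struct Snippet {
    let id: UUID
    let conversationID: UUID
    let text: String                // the excerpted portion of the conversation
    let objectIDs: [String]         // collection object IDs the exchange was about
    let tags: [String]              // free-form tags used for training and later reuse
    let createdBy: String           // ASK team member who created the snippet
    let forwardedToCurator: Bool    // whether it was sent on as an unanswered question
}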

All of these things are vital parts of the project and would have to be done eventually, but looking back only one of them—snippets for staff training—was critical right away. After all, we could use email to forward unanswered questions to curatorial. While that's not efficient, it would have let us see how this functionality needed to work before building it into the dashboard infrastructure. Ditto for the eventual integration, which isn't needed until later years of the project and will require much discussion with cross-departmental staff. It's just better to wait on that until we know how we want to use the content.

So, we implemented all of this functionality because we had the time months ago to do so, but when the audience engagement team came in and started using it, we had a problem. How the staff needed to use the dashboard differed quite a bit from how we had designed it. As a result, we've had a period of making a lot of adjustments, and changing things quickly, as we all know, gets more difficult the more complex the product is. Our dashboard now had log-in, activity (message queuing), snippet creation and categorization, forwarding, archiving, beacon results, etc. Every adjustment affected every component...you get the idea.

So, we started to scale back and reduce functionality to streamline the process of getting things working smoothly for the team who needs to use it. Now we'll have to go back in and re-add that functionality later. The good news is that the code is done; it's just a matter of shifting the implementation. The bad news is we probably could have waited all along, and that's where we failed agile.

Timeline issues have come into play in other, non-technical ways, too. One of our more critical hires was the Curatorial Liaison—the staffer who would coordinate communication with the curatorial staff and work with them to help train the ASK team. Without this person on board early, the technical timeline ran ahead of the communication timeline in the building. As I'm sure you can imagine, this did not go over well; we had a lot of people wanting to help shape the project with no outlet to make that happen.

In the end, this problem was resolved as soon as we hired Marina Klinger, but it has meant she (and all the curators) have had to work on a faster timeline, given our June launch was right around the corner from her February hire. I will take the time right now to thank the curatorial staff, who have done everything to help make this project happen on an unavoidably tight timeline. This is a key example of an agile fail—the methodology makes it possible to start the technical side before the content side, but those two things should really run in parallel.

So, if you hear me talking up agile at conferences, you'll find I'm a believer, but I will also tell you that agile can't stop you from making bad calls. That you have to do yourself, and for us that's been a continual learning process; we know we're not perfect. Luckily, agile also makes it possible to reshuffle the deck of tasks fairly quickly, so when you don't make the right call you can self-correct much more easily.

Connecting with Curators

Our ASK team has a number of exciting challenges ahead of them. How do you communicate information about art in an informed and engaging way over text message? How do you prepare yourself to answer questions about any and every object in the museum? How do you make sure your answers and language convey your personality (so visitors know it's a human being on the other end) as well as curatorial intent and institutional philosophy? This last challenge is one that I've been thinking about a lot lately and one that I hope to help the team meet head on. Connecting with curators is a priority that Sara, Monica, and I are tackling on a number of fronts. The first and most direct has been listening to curators speak about their collections—what they contain, how they've changed over time, and how they are installed.

The ASK team getting a tour of Judith Scott—Bound and Unbound from Catherine Morris, Sackler Family Curator for the Elizabeth A. Sackler Center for Feminist Art.

In their first month of training, the ASK team attended sessions with curators from every area of the Brooklyn Museum's collection that is currently on view—Asian art, which will be reinstalled on the museum's second floor in 2017, is currently on the back burner. These talks have been indispensable in helping the team become familiar with each curator's unique voice and perspective. For instance, Barry Harwood, Curator of Decorative Arts, captivated us with anecdotes about the previous inhabitants of the period rooms while also emphasizing traditional art historical styles and the museum's great strength in progressive machine-made and patented design for the middle classes.

On my end, I have relied on these initial curatorial sessions, as well as follow-up conversations and museum publications, to write wikis about the history, curatorial philosophy, and critical issues of each collection area. The team will be able to reference these when faced with particularly tricky questions for which curatorial departments have specific scholarly or philosophical viewpoints.

Egyptian, Classical, and Ancient Near Eastern Art wiki.

For instance, the Egyptian, Classical, and Ancient Near Eastern Art wiki includes the department's stance on critical issues like the ethics of collecting antiquities, the race of the ancient Egyptians, and iconoclasm in the Middle East, both past and present. It also provides the ASK team with language formulated by Ed Bleiberg for responding to visitors' surprisingly frequent questions about supernatural and extraterrestrial theories for the origins of ancient Egyptian civilization. Critical issues for other curatorial departments include topics like what it means to curate with a feminist methodology at the Elizabeth A. Sackler Center for Feminist Art, the importance of historical change and adaptation in African art, the expanded definition of "American" in the American Identities galleries, and issues of repatriation in the Arts of the Americas collection.

Curators have not only helped to identify these topics but will also contribute to the form they take in the ASK wiki. Like Monica, the ASK team, and me, they are being set up with accounts on Confluence, our wiki platform, and invited to review its contents. That is, they can comment, critique, add to, or rewrite both my collection area wikis and the object-based wikis being researched and written by the ASK team. To protect curators' time, however, articles will only be flagged for curatorial attention once they have been reviewed by the ASK team member "majoring" in that particular collection, as well as by me.

Joan Cummins, Lisa and Bernard Selz Curator of Asian Art, works with the ASK team to answer questions during a testing session.

Curators are also participating in ASK app testing. Over the next month and a half, as our team continues to learn the collection in preparation for launch on June 10, curators will be on hand to help answer questions during testing sessions taking place in their galleries. This not only gives the team a sense of how particular curators handle incoming queries in their collection areas, but will also allow us to populate our initial knowledge base with curator-driven language. During the post-processing of these initial testing sessions, particularly useful segments of these early conversations (called "snippets") will be tagged (via accession number) to specific objects so they appear alongside those objects in the Dashboard. These can then be referenced or reused by the team in later sessions. In the future, questions that stump our team will also be forwarded to curatorial departments and the answers tagged back into the database for reuse.
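As a rough sketch of how that tagging pays off, here's the basic idea in Swift: index snippets by accession number, then pull up everything already written about an object when it comes up again. The field names and the accession number below are placeholders.

// A tagged snippet pairs curator-reviewed language with the object it describes.
struct TaggedSnippet {
    let accessionNumber: String
    let text: String
}

// Group snippets by accession number so the dashboard can show them alongside an object record.
func indexByObject(_ snippets: [TaggedSnippet]) -> [String: [TaggedSnippet]] {
    Dictionary(grouping: snippets) { $0.accessionNumber }
}

// let index = indexByObject(allSnippets)
// let priorLanguage = index["00.000"] ?? []  // placeholder accession number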

Connecting with curators has been an essential part of the ASK team’s training so far and their continued involvement through the ASK wiki and other means will be crucial to the team’s success.

Amassing Encyclopedic Knowledge

ASK is a tool that gives any visitor using the Museum's app direct and immediate contact with Museum staff (the ASK team) who are knowledgeable about the Museum and its collection. More specifically, the app connects visitors with people who have specialized information and understanding about individual works on display—not only as individual works, but in the context of history and culture, of the Museum's collections, and of their current installation. Furthermore, the app connects our visitors with people who have specialized knowledge about museum visitors and the multiple ways in which they experience works of art. I delineate the type of information the ASK team will have because it is this information that makes the app more than just a "human Google." Anyone can Google a question and look up information; what ASK allows our visitors to do is connect with a person who has a nuanced understanding of the works of art AND an understanding of the different ways in which people interact with art.

As part of training, our Audience Engagement team is walking through the galleries with each collection curator. Here they are getting a tour through American Art with Terry Carbone.

With all of this in mind, how do these six individual humans engage museum visitors with 5,000 years of art? How can the team prepare to be at-the-ready to answer questions and engage in dialogue thoughtfully about any object in the collection at any given moment? It is a daunting task indeed!

To best address this challenge, we have decided that each team member will have a "major" and a "minor" collection area of focus, and, of course, each will have an understanding of the many different ways in which museumgoers engage with art.

Nancy Rosoff, Andrew W. Mellon Curator of the Arts of the Americas, works with the team to take a closer look at our Life-Death Figure.

To begin our work together, we've started learning about the full collection in tandem with experimenting with the app. Although everyone will have two collection areas on which they are focusing, it is important that everyone have a broad understanding of and familiarity with the full collection so that we can make connections across collection areas (and, if we're overloaded with a high volume of inquiries, we'll be prepared to respond to queries outside our focus areas). Over the course of training and our soft launch, the full team will meet with all of the curators, write one comprehensive wiki for each collection area, write 7-9 object wikis in their respective "major" collection areas of focus, and practice manning ASK's dashboard as much as possible.