Learning from Agile Fails

As we march toward our June launch for ASK, it's a good moment to look back at some of the issues we've faced along the way. This post is about our agile implementation: not where it failed, but where we failed it. There's been a lot of talk in the museum world about agile, so this may be a worthwhile read if you are moving toward using it.

For the most part, we are extremely proud of implementing agile across the project. We've used this learn-as-you-go planning methodology not just for software development, but also for concept development and project workflow. Agile has given us critical discipline; everyone here thinks in terms of homing in and reduction in an attempt to create a minimum viable product that is fully user tested. At every turn, we've asked questions, A/B tested solutions, and responded to product use. As a team (which includes staff throughout the institution), I can definitively say we've come an extraordinarily long way; the project creation cycle demonstrated by ASK is very different from that of past projects I've been involved with. As valuable as the agile process has been, though, we've learned so much from where we've failed at agile principles that those failures are worth exploring on their own.

Most often, we failed at agile when timelines started to collide. On a project as large as changing the visitor experience from entry to exit, there are many parts, all related, and all running on parallel timelines. When one timeline slips, everything else has to shift accordingly. That's easier said than done, and snafus in timelines are often unavoidable.

The technical timeline started back in April 2014; we had come off a series of pilots that determined what we were going to build, so we could immediately get started on mobile; no problems there. Issues started to crop up, however, when we began the dashboard build. The dashboard is what the audience engagement team uses to field incoming questions from the mobile app. On the technical side, we needed to build the dashboard and the mobile app on parallel tracks because the two products inform each other. The breakdown began when the audience engagement team hiring process got delayed; it took us longer than expected to get the leads in place and then the team hired. Getting the right people for these positions was critical, and the delay was unavoidable and, ultimately, worthwhile. Not having the ASK team in place sooner, however, meant that we didn't have our user base at the critical build stage. If we were going to make our technical timeline, we had to take our best guess at what the dashboard should do and how it would be used.

We took our best guess in architecting the dashboard, but that wasn't always the right one.

Our best guess was a good one, but looking back, I can list a number of things we should have waited on. We should have focused on getting messaging working seamlessly and delayed the other features that added to dashboard complexity. Those features included ways for the ASK team to break larger conversations down into "snippets" of usable content. Snippets could be forwarded to curators when the team couldn't answer a question. Snippets could be used to train the ASK team by giving them a way to access that content later; each snippet is tagged with object IDs for easy reference when future users ask about the same object. Snippets can also be used throughout the building and on our website in the form of FAQs, a critical integration that's part of our eventual project scope.

All of these things are vital parts of the project and would have to be built eventually, but in looking back, only one of them—snippets for staff training—was critical right away. After all, we could use email to forward unanswered questions to curatorial. While that's not efficient, it would have let us see how the functionality needed to work before building it into the dashboard infrastructure. Ditto for the eventual website integration, which isn't needed until later years of the project and will require much discussion with cross-departmental staff. It's better to wait on that until we know how we want to use the content.
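To make the snippet idea concrete, here's a minimal sketch of how a tagged snippet archive might work. This is illustrative only: the `Snippet` fields, the object ID, and the `snippets_for_object` helper are hypothetical stand-ins, not our actual dashboard code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Snippet:
    """A slice of a visitor conversation (fields are illustrative)."""
    snippet_id: int
    text: str
    object_ids: List[int] = field(default_factory=list)  # collection object IDs
    forwarded_to: Optional[str] = None  # e.g. a curator, if the team couldn't answer

def snippets_for_object(snippets: List[Snippet], object_id: int) -> List[Snippet]:
    """Pull past snippets about a given object, e.g. when training the ASK team."""
    return [s for s in snippets if object_id in s.object_ids]

# Hypothetical archive: one snippet tagged to an object, one untagged.
archive = [
    Snippet(1, "Why is the sarcophagus lid cracked?", object_ids=[4628]),
    Snippet(2, "Where is the nearest elevator?"),
]
matches = snippets_for_object(archive, 4628)
print([s.snippet_id for s in matches])  # → [1]
```

The point of the object-ID tagging is exactly this kind of lookup: when a future visitor asks about the same object, the team can surface what's already been answered.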

So, we implemented all of this functionality because, months ago, we had the time to do so, but when the audience engagement team came in and started using it, we had a problem. How the team needed to use the dashboard differed quite a bit from how we had designed it. As a result, we've had a period of making a lot of adjustments, and changing things quickly, as we all know, gets more difficult the more complex the product is. By then the dashboard had login, activity (message queuing), snippet creation and categorization, forwarding, archiving, beacon results, and more. Every adjustment affected every other component... you get the idea.

So, we started to scale back, reducing functionality to streamline getting things working smoothly for the team that needs to use it. Now we'll have to go back in and re-add that functionality later. The good news is the code is done; it's just a matter of shifting the implementation. The bad news is we probably could have waited all along, and that's where we failed agile.

Timeline issues have come into play in other, non-technical ways, too. One of our more critical hires was the Curatorial Liaison—the staffer who would coordinate communication with the curatorial staff and work with them to help train the ASK team. Without this person on board sooner, the technical timeline in the building was running faster than the communication timeline. As I'm sure you can imagine, this did not go over well; we had a lot of people wanting to help shape the project with no outlet to make that happen.

In the end, this problem was resolved as soon as we hired Marina Klinger, but it has meant she (and all the curators) have had to work on a faster timeline, given that our June launch was right around the corner from her February hire. I'll take a moment right now to thank the curatorial staff, who have done everything to help make this project happen on an unavoidably tight timeline. This is a key example of an agile fail: the methodology makes it possible to start the technical side before content integration, but these two things should really run in parallel.

So, if you hear me talking up agile at conferences, you'll find I'm a believer, but I will also tell you that agile can't stop you from making bad calls. That, you have to do yourself, and for us that's been a continual learning process; we know we're not perfect. Luckily, agile also makes it possible to reshuffle the deck of tasks fairly quickly, so when you don't make the right call, you can self-correct much more easily.