In the Gallery vs. Online: How a Split Second Can Differ

Kiosks in Split Second

One of the questions people always ask me is how the web experience differs from what happens in the building, and that's a difficult thing to get metrics on.  With Split Second, we are in a unique position to answer that question because we've been running the same online activity on kiosks in the gallery.  In this final Split Second blog post, I'm going to compare these two sets of data.

You may remember from an earlier post that even though part of the project took place online, we were surprised to see a mostly local audience taking part.  Overall, that local audience spent an average of 15 minutes completing the online activity (as opposed to the general average of 7 minutes).  In the gallery, our visitors spent an average of 4 minutes 18 seconds completing the activity at the kiosks.  Even though they spent less time doing the activity, the average ratings per person were quite similar: online - 39.1 vs. gallery - 36.7. Also, the in-gallery and online completion rates were very similar, which suggests a highly focused visitor consuming content at the kiosks very quickly.  Here are a few charts to show off some of the online vs. in-gallery differences.

[portfolio_slideshow include="5444,5451,5446,5445,5450,5449,5448,5447"]

When it came to some of the data that Beau's been delving into, he ran a comparison of in-gallery versus online data and found his original findings still held:

  • No correlation between experience and time spent.
  • Slight negative correlation between rating and birth year, i.e., older people give slightly higher ratings.
  • Women rate works slightly higher than men.
  • Slight positive correlation between rating and experience, but women consistently rate themselves as more experienced, so it's hard to tell whether that correlation is driven by experience, by gender, or by something else.
  • Older people tend to self-identify as slightly more experienced.
  • The complexity and information findings still hold.
  • Engagement and rating variance: the finding still holds, though there is an interesting change. In the gallery, rating variance tended to be much higher than online. For the control task, online variance was 520.6, while in-gallery variance was 668.5. For the free task, online variance was 459.1, while in-gallery variance was 510.1. So we're still seeing massive reductions in variance, but the variance in the gallery was higher to begin with.
  • Adding information: the finding still holds, though in the gallery the increase in ratings was not quite as big. (The muting of this effect might be related to the age/mean rating issue discussed above.)
Intoxicated Lady at a Window

Beau also took a look at the rankings data and found that, for the most part, the same works win and lose.  As he notes, "There are some minor upsets, and a few things which might be worth a story. In particular, Intoxicated Lady at a Window seemed to always do quite a bit worse in the gallery than online."  While we are not totally sure why this painting didn't do so well in the gallery, it's interesting to note that this was the image that the New York Times used when the project was first announced.  It's very possible that an information cascade happened online, with participants rating this work higher because they were more familiar with it. This is one case where the in-gallery metrics might actually be more accurate, and it shows just how delicate subconscious effects can be.

As Joan mentioned in one of her posts, Split Second closes at the end of the year.  If you have not managed to see it in the gallery, we hope you can come take a visit because the show will be gone in the blink of an eye.

Split Second: A Curator’s Reaction to the Results

I’ve had a lot of time to mull over the results of Split Second, so here are a few of my thoughts—roughly one week before the Split Second exhibition closes. Please bear in mind that I don’t bring any expertise in sociology, psychology, or statistics to the picture.  What I do bring is many years of experience working with Indian art and with people who are looking at Indian art for the first time. The original intent of the Split Second experiment was to measure people’s reactions to works of art as they encountered 1) objects that varied in degrees of complexity and 2) viewing situations that varied by time of exposure or degree of engagement. In theory the experiment could have used almost any type of art, and participants would have behaved in the same ways whether they were looking at Japanese prints, or Goya etchings, or Plains Indian ledger drawings.  After looking at the outcome I have to say that I’m not totally sure that we would have gotten the same patterns of response for different genres or traditions of art.  I think the fact that we used Indian paintings affected the outcome, and here’s why:

First let me say that I wasn’t one bit surprised that people liked the objects better after they were given some information about them. My first experience of Indian art could best be described as “love at first sight” but I know that was unusual. The vast majority of people can’t get comfortable with an image of a guy with an elephant head and extra arms—no matter how gorgeous—until they know why he has that head and what those extra arms mean.  When we ask visitors what they liked or disliked about installations of Indian art they almost always assess the quality and quantity of the information we offered first and then talk about the beauty or selection of the art as a very secondary concern. I am pretty sure that this is not the case when the same people are asked their opinion of displays of Western art, particularly paintings.

I have to wonder whether there would have been a marked difference in reaction to the same object in informed versus uninformed viewing experiences if we had used American still life paintings or French landscapes.  I think that the unfamiliarity of Indian painting—which I cited as a good quality for the project in my last blog—led to more dramatic results in the informed/uninformed section of the experiment.

The other place where I think our use of Indian paintings affected the data was in the complexity issue. I was initially really surprised that complex images rated as highly as they did in the split-second viewing.  Advertisers know that you can grab people’s attention in an instant using big, bold graphics and a simple message.  I would have thought that the more brightly colored images with less going on would have rated higher because people could take them in quickly.  But the opposite was true.  Straightforward, easily legible images like this one didn’t do very well at all (in fact it was among the least popular)…

Nayika Awaits Her Lover

…while very complex images with more than one focal point fared very well despite the fact that there was no way people could take in all the info in 4 seconds. Here’s an example of one that did really well:

Krishna and Radha Under a Tree in a Storm

I think the preference for complexity comes from the fact that participants knew they were rating art, and people have different criteria for judging art than they do for other means of communication.  Even in an age when conceptual art and minimalism are part of the canon, I think a lot of people retain an old-fashioned preference for art that looks like it took some effort to create.  And I would argue that this is particularly true among those who know even a little bit about Indian art: people expect Indian art to display virtuoso craftsmanship and lots of elaborate detailing. So participants—consciously or not—gravitated toward objects that looked the way they thought Indian art should look.  Again, I have to wonder if complexity would have been as popular if participants were judging British portraiture or Greek sculpture.

People have asked me if the results of the Split Second experiment will change anything about the way I present works of art in the galleries and I have to say that the answer is probably no.  Mostly that’s because I’m not trying to sell anything in the galleries.  I’m not in the business of giving people what they like.  I’m in the business of informing people and of introducing them to things that they haven’t seen before.  Obviously we want the art to look as beautiful as possible, and if visitors leave the galleries feeling that they like the art, that’s great, but that’s not the only response I’m hoping for.

One of the most universally rejected paintings in the Split Second experiment is also one of the most significant from a historical and even political vantage point:

A Maid’s Words to Radha

This painting comes from a manuscript that is important to art historians because it can be dated to a precise year (most early Indian paintings cannot) so it serves as a landmark for dating all other paintings of its type. It’s also in a style that one very influential Indian art historian promoted and popularized as “quintessentially Indian,” a designation that was particularly important in the first half of the twentieth century as India was struggling to gain independence and to re-establish its own traditional culture after centuries of change brought by foreign conquerors.  I’m hoping that these facts enhance your interest in the painting, but I’m guessing that they don’t make you like the painting any more than you did before.  Because the truth is that it’s kind of crudely painted and you either appreciate its rough simplicity or you don’t.  But the fact that you didn’t like it doesn’t mean that I’m going to stop showing it in the gallery.

The one place where we want to give people art that they can instantly like (or at least find engaging) is in choosing the images we use for our advertising.  Maybe the results of Split Second can give us some insight into the kinds of Indian paintings we choose for promotional materials in the future. Those images can get people into the galleries and then I’ll take it from there.

Split Second: Why Indian Paintings?

I am listed as a contributor to the Split Second project, but I really wasn’t the brains behind it; I’m just the person who okayed the use of Indian paintings and then wrote the accompanying labels.  Think of me as the grocer who provided the ingredients for the meal that Shelley and Beau cooked up. I’ve been silent so far because the analysis of the results is really a matter for someone with a more statistical bent.  But since the project assessed the perception of works of art there might as well be a little discussion of the art we used.  I’m going to give you a little background info here and then later I’ll talk about my responses to the data we gathered.

First of all, a plug: the exhibition closes December 31, so I encourage you to get to the Museum before that.  The paintings are really wonderful and won’t be on view again for a while because they’re light sensitive. We’ve got some serious masterpieces on view.  This one in particular is a show-stopper, made by a team of the best artists in India for an emperor who spared no expense:

Led by Songhur Balkhi and Lulu the Spy, the Ayyars Slit the Throats of Prison Guards and Free Sa'id Farrukh-Nizhad

If you come to see the paintings in person I think you’ll be surprised.  They’re definitely not as flat as they seem on a computer screen and they’re all different sizes—something you just don’t comprehend when you look at reproductions, even if the dimensions are listed.  This painting, for instance, is the size of a subway poster (for a train, not a station) while most of the others are more the size of a page in a coffee-table book or even smaller. In many cases, you can see the exquisitely painted details far better in person.  So hurry over!

Let me tell you a little about why we chose Indian paintings in the first place.  First of all there are the nuts-and-bolts reasons: we have a lot of high-quality Indian paintings in the Brooklyn Museum collection and all of them had been photographed in color thanks to a big digital capture project we did a couple of years ago.  It also seemed like a nice complement to, and subtle promo for, the big Vishnu exhibition, which was going to be on view for much of the same period as the Split Second installation (Vishnu closed in October).

Then there are the more intellectual reasons: Indian paintings are basically flat, and they are unfamiliar territory for much of our audience.  Flat is good because photographic reproductions of flat objects are more straightforward and uniform than photographs of three-dimensional objects.  We were worried that variable factors like background color and dramatic lighting would influence participant reactions to photos of teapots or scarabs.  There are variables in the photography of Indian painting—whether one uses raking light to pick up the glint of metallic paint, whether one includes all or some or none of the border that appears around most Indian paintings—but they’re not as significant as those for photography of 3D objects.

Unfamiliar is good because we wanted people to come to the material with fresh eyes and few preconceptions.  We didn’t want them to recognize masterpieces or famous artists and rate them more highly because they felt like they should.  We had people describe their level of expertise or familiarity with Indian art before doing the experiment and most were complete newcomers.

There is one way in which Indian paintings were inappropriate material for a split-second experiment: these paintings definitely weren’t designed to be glimpsed quickly.  “In your face” impact isn’t a quality many of them were supposed to have. With the exception of the oversized painting illustrated here, they were all gathered or bound into manuscripts; their aristocratic owners held them in their hands or on a table. In intimate groups or solo, the viewers went slowly through the pages, looking at the paintings as a form of entertainment. Book illustrations require a different style and approach to image-making than wall-hung paintings that might be seen from across the room. The many tiny details that you can find in Indian manuscript paintings are a result of their relatively small size, but they are due even more to the practice of looking at manuscripts closely and at length: the artist wanted the viewer to have plenty to look at, to make new discoveries every time he or she opened the book.  So these illustrations were rarely judged on their ability to make a split-second impression—until now!

Split Second Stats #7: Contentiousness

A big part of experiencing art is talking about it. Sometimes (or, uh, frequently) artworks are successful because they provoke disagreement, and along with that disagreement, some good conversation. Because the participants in the Split Second online experiment weren't communicating with one another, we didn't get an opportunity to measure conversations about the artworks directly. However, we did want to get a sense of which works might be contentious, and to make an effort to figure out why. To measure contentiousness, we looked at the variance of the ratings for each work. If most participants gave a work roughly the same rating, then it's safe to say that work is not contentious. However, if participants disagree, if there's a large amount of variance in the ratings, then that work might be contentious. (I say "might" for a good reason: while high variance of ratings may indicate disagreement, it could also simply indicate confusion. I'll come back to this later.)
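To make this concrete, here is a minimal sketch (not the project's actual code) of the variance measure, assuming a simple table with one row per rating; the column names are hypothetical:

```python
# Hypothetical sketch: per-work rating variance as a contentiousness proxy.
import pandas as pd

# Illustrative data; in the real experiment there is one row per rating.
ratings = pd.DataFrame({
    "work_id": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "rating":  [48, 50, 52, 51, 5, 95, 20, 88],
})

# High variance (work B) flags possible contentiousness; low variance
# (work A) indicates agreement.
contentiousness = ratings.groupby("work_id")["rating"].var()
print(contentiousness.sort_values(ascending=False))
```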

In Split Second Stats #4: Engagement we found that certain tasks in the experiment had a strong effect on the variance of ratings. This is important because it indicates that the context of presentation and the way participants engage with a work can change the variance. Here, however, we'll take a look at how variance and contentiousness were related to specific properties of the works themselves. All of the analyses below apply to the unlimited time experimental tasks only.

As in many of the analyses described in previous blog posts, complexity played a big role here. We found that as paintings got more complex, they became less contentious. That is, we found a negative correlation between complexity and variance (cor = -.35, p = 0.03). This is not too surprising: we found previously that when time was unlimited, people tend to rate complex paintings very well, a finding which already implies inter-participant agreement. A more puzzling finding concerned color: the higher the overall saturation of the colors in a work, the higher the variance (cor = .42, p < 0.01). One possible, but entirely speculative, explanation for this effect is that one large group of our participants reacted very positively to highly saturated color palettes, while another large group reacted very negatively. Similarly, we found that the larger the frame of the painting, the more variance in the ratings. This again might suggest (speculatively!) a division of the participant population into two groups: those who found large frames interesting, and those who felt they got in the way of the work.

Some of the strongest effects concerning variance were not clearly related to quantifiable properties of the works themselves. One very strong, reliable finding was that as the average amount of time participants spent looking at a work increased, the variance of the ratings of that work decreased (cor = -.47, p = 0.002). That is, the more time was spent looking at a work, the more our participants tended to agree about how to rate it. Though this finding seems to push against the gist of the thin-slicing theory, it also seems like an encouraging experimental result: in order to get people to agree about art, you just need to get them to hold still and look at it for a long time. However, it's a little bit more complicated than that. People decide for themselves whether or not they want to spend a long time looking at an artwork. This finding lets us know that when our participants spent that time, they tended to agree, but it doesn't tell us why they decided to spend their time in the first place. There is also a cause-and-effect problem: it could be that the decreasing variance and the increasing time are themselves caused by a third factor we didn't measure. (Though complexity looks like it may account for some of this effect, it certainly doesn't account for all of it.)

Indian. Utka Nayika, late 18th century. Opaque watercolor on paper, sheet: 9 13/16 x 7 9/16 in. (24.9 x 19.2 cm). Brooklyn Museum, Gift of Dr. Ananda K. Coomaraswamy, 36.241

Finally, we found that some of the works in the experiment were simply contentious on their own terms. The most contentious object, Utka Nayika (pictured above), is unfinished. Though we have no quantifiable measure that points toward it being an unfinished work, it seems like a safe bet that this peculiarity accounts for the high variance in participants' ratings. As I mentioned before, it's important to differentiate between contentiousness and confusion. We can identify this work as being truly contentious, and not simply confusing, by looking at a histogram showing how it was rated.

[Histogram of ratings for Utka Nayika]

In the case of a work which was simply confusing, we would expect a uniform distribution of ratings, where any one rating was as likely to occur as any other. Instead, what we see here are distinct peaks and valleys. There are small peaks around 25 and 100, and larger peaks around 50 and 75. This indicates that participants' opinions about the work split into at least three groups: those who did not like it (the peak at 25), those who were decidedly indifferent (the peak at 50), and those who liked it a lot (the peaks at 75 and 100). A similar situation can be seen in the ratings histogram for the second most contentious object, The Bismillah, a work which is distinguished by its calligraphic, non-representational nature:

Indian. The Bismillah, 1875-1900. Opaque watercolor and gold on paper, sheet: 19 5/8 x 11 13/16 in. (49.8 x 30.0 cm). Brooklyn Museum, Gift of Philip P. Weisberg, 59.206.8

[Histogram of ratings for The Bismillah]
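To make the distinction concrete, here is a minimal, entirely illustrative sketch of the check described above: bin a work's ratings and test the counts against a flat distribution. The data and bin edges are invented.

```python
# Hypothetical sketch: is a rating distribution flat (confusion) or
# peaked (contentiousness)?
import numpy as np
from scipy import stats

ratings = np.array([22, 26, 24, 49, 51, 52, 48, 74, 76, 77, 73, 98])

# Bin the 0-100 scale into quarters and count ratings per bin.
counts, _ = np.histogram(ratings, bins=[0, 25, 50, 75, 100])

# Chi-square goodness-of-fit against a uniform distribution: a small
# p-value means the peaks and valleys are unlikely to be noise.
chi2, p = stats.chisquare(counts)
print(counts, round(p, 4))
```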

In both of these cases, symbolic factors not accounted for by our experimental model had an extremely strong effect on the results, strongly suggesting a direction for further research. As interesting as it is to see the symbolic world bursting out of our tightly constrained experimental framework, it's not surprising: we are, after all, looking at art.

Split Second Stats #6: Subconscious Effects

In the previous post I closed by noting that depending on what participants were asked to do, visual complexity could affect their ratings. Indeed, we found that the effect of complexity changed depending on the task completed before providing a rating. Complexity affected almost every section of the experiment in some way or another, but some of those effects were more interesting than others. In particular, we found a very interesting set of interactions between the complexity of the frame of a work, the task participants were asked to complete, and the rating. In the time-limited Split Second task, we found that various attributes of the frame of a painting had strong effects on how that painting was rated. The strongest effect was caused by frame size, where bigger frames resulted in lower ratings. However, we also found that the surface complexity of the frame had a positive effect on ratings (cor = 0.19, p = 0.014). This effect was smaller, but definitely significant.

Indian. Krishna and Balarama on Their way to Mathura, Folio from a Dispersed Bhagavata Purana Series, ca. 1725. Opaque watercolor and gold on paper, sheet: 9 1/2 x 12 in. (24.1 x 30.5 cm). Brooklyn Museum, Gift of Mr. and Mrs. Paul E. Manheim, 69.125.4

A major goal of this experiment was coming up with some preliminary answers to the question of what, exactly, is factored into a split-second judgment. When we make judgments in time-limited contexts, we're not able to make a thorough survey of the thing we're judging. Instead we produce a judgment based on a number of subconscious processes which may be affected by more than the thing itself. In this particular case, we were interested in knowing whether the complexity of the frame was affecting conscious, systematic judgments, or was operating on a subconscious level.

To answer this question, we looked at how the complexity of the frame affected ratings in all of the other tasks. In the time-unlimited control task, where participants were given as much time as they liked to rate a work without being asked to do anything else, the frame complexity effect disappeared completely. That is, when people were allowed to take a thorough look at a work, the complexity of its frame did not affect their judgment. This was also true for all of the engagement tasks, which makes sense because those tasks require participants to take a systematic approach to evaluating each work's surface.

In the time-unlimited Think tasks, where participants read information about the work, the frame complexity effect returned. That is, when participants paid attention to information about the painting, their judgment was again affected by the complexity of the frame. This suggests that attention paid to curatorial labels was also attention shifted from the work itself, and that this shift allowed certain aspects of the work to have a subconscious effect which would not occur in other circumstances. This effect was strongest when the full curatorial label was added (cor = 0.4, p = 0.01).
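A sketch of the kind of per-task comparison behind these findings might look like the following. The file name, column names, and data layout are all assumptions for illustration, not the project's actual pipeline:

```python
# Hypothetical sketch: does the frame-complexity/rating correlation
# appear in some tasks and vanish in others?
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("split_second_ratings.csv")  # hypothetical export

for task, group in df.groupby("task"):
    r, p = pearsonr(group["frame_complexity"], group["rating"])
    print(f"{task}: cor = {r:+.2f}, p = {p:.3f}")
```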

This finding is important from an exhibition design perspective. Curatorial interventions in the gallery space are always engaged in a kind of struggle with the art itself for spectator attention. Depending on how the attention of the spectator is focused, certain properties of artworks may be activated or suppressed. Some of these properties, such as the complexity of the frame, may only be activated when viewer attention is diverted or split in some way. A key aspect of the role of the curator is awareness of and sensitivity to the complex interdependencies between in-gallery interventions and various properties of the works. This experiment suggests an analysis of these interdependencies in terms of attention management: for any given curatorial intervention, how is attention diverted or split, and how does that activate or suppress properties of the work itself?

Split Second Stats #5: Complexity

Complexity is an important factor in the evaluation of art. In all of the previous Split Second blog posts I've talked about how the complexity of artworks dramatically affected participants' reactions. But I never explained what, exactly, was meant by "complexity." In this post I'm going to describe the kind of complexity we focused on in our analysis of the Split Second results, and also talk a bit about the kinds of complexity we didn't study, and the limits that imposes on the applicability of our results.

There are lots of ways a work of art could be complex. Complexity could be a function of the visual surface of a work, as in the arrangement of contrasting elements within it, or of things outside of it, as in the network of references pointed to by its content. Complexity could also come from a connection between the work and the viewer, as in the use of multiple perspectives or other perceptual effects, or the viewer's specific, personal relationship to a given historical context. Further complicating the situation, when people talk about a work being "complex" they usually don't refer to only one of these possibilities—a complex work of art is rarely complex in just one way.

Indian. King Solomon and His Court, 1875-1900. Opaque watercolor and gold on paper, sheet: 19 11/16 x 11 7/8 in. (50.0 x 30.2 cm). Brooklyn Museum, Gift of James S. Hays, 59.205.16

When approaching the study of complexity (or any subjective idea) from a scientific perspective, it's necessary to pick one of two approaches. The first approach is to pick a specific way of quantifying the idea. This is always painful, because it means the implicit rejection of other approaches which may be really important to the feel of the idea. The second approach is to ask lots of people what they think—rather than trying to quantify the idea itself, you quantify people's judgments about the idea. The problem with this approach is that it often requires an extra experiment in order to quantify those judgments before you can even get started working on the question you're really interested in.

We chose the first approach. Rather than studying the complex, subjective idea of complexity, we decided to focus on one specific, measurable type of complexity, which could be called "surface complexity." We were interested in how much was going on in a work without considering its content. For example, a work with a busier visual surface (with more dots, lines, curves, scratches, brush strokes, marks, etc.) would have greater surface complexity than a work with just a few lines, a couple of repeating patterns, and lots of open space.

Indian. Portrait of Rao Chattar Sal of Bundi, ca. 1675. Opaque watercolor and gold on paper, sheet: 7 5/16 x 4 11/16 in. (18.6 x 11.9 cm). Brooklyn Museum, Gift of Amy and Robert L. Poster, 82.227.1

This can be confusing, because it doesn't necessarily match up with a normal idea of what complexity means. An image which is just a big field of scratches might have a much higher surface complexity than an intricate portrait—it just depends on how they were both painted, i.e. what's happening on the surface of the work, ignoring its content.

Indian. Nanda Requests a Horoscope for Krishna, Page from a Bhagavata Purana series, ca. 1725. Opaque watercolor and gold on paper, sheet: 9 1/8 x 10 1/2 in. (23.2 x 26.7 cm). Brooklyn Museum, Gift of Mr. and Mrs. Robert L. Poster, 78.260.5

We quantified the surface complexity of an image in terms of the amount of data a computer needs to store in order to recreate it. That is, if one could describe the form of an image in one sentence, it would be less complex than an image requiring ten sentences. Because we were working with digital files, this approach was quite convenient: the surface complexity of an image corresponds exactly to that image's file size after compression. The bigger the compressed file size, the more complex the image. Specifically, we used the ratio of the file size of the image to the number of pixels, allowing us to compare images with different dimensions.
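Here is a minimal sketch of that measure. The choice of PNG as the lossless codec is an assumption; the post doesn't specify which compression scheme was used:

```python
# Hypothetical sketch: surface complexity as compressed bytes per pixel.
import io
from PIL import Image

def surface_complexity(path: str) -> float:
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="PNG")           # lossless compression
    return buf.tell() / (img.width * img.height)

# A busy painting should score higher than one with large open areas.
print(surface_complexity("king_solomon.jpg"))   # hypothetical filename
```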

One of the most interesting things we found out about surface complexity was that it affected participants' reactions to artworks differently depending on the task they were completing. This means that what people were doing and paying attention to affected their reaction to complexity. Based on our results, we speculate that during certain tasks, complexity has a profound subconscious effect on participants' reactions. I'll discuss this in more detail in the next post!

Split Second Stats #4: Engagement

In previous Split Second blog posts, we looked at the effects of thin-slicing, textual information, and gender. Put another way, we were studying the effects of how long you look at the art, what sort of accompanying text there is, and who you are when you look at it. However, these don't cover the full breadth of the museum-going experience. Viewers are increasingly asked to engage in some way with the art on display; for example, in our current exhibition Vishnu: Hinduism's Blue-Skinned Savior, we ask viewers to identify avatars of Vishnu in different works throughout the gallery. We wanted to see what effects tasks like this had on ratings in our Split Second experiment. To do this, we had participants do what we call the "engagement" task. In this task, participants were split up into groups. Each group was asked to perform a specific task which required them to engage with the content of the work they were looking at. The tasks were as follows:

  • Counting: Type in the number of figures in the work.
  • Description: Describe the work in your own words.
  • Color: Name the dominant color in the work.
  • Free association: Type the first thing that comes to mind when looking at the work.
  • Tagging: Type a single word which describes the subject or mood of the work.
  • No task: our control group, as described in our first stats blog post.

I expected that after completing any of these tasks participants would have a stronger emotional connection to the work, so the average rating would go up. Surprisingly, this was not the case. None of the engagement tasks had a statistically significant effect on average rating. Our curator Joan Cummins was not surprised by this, saying that curatorial interventions such as engagement tasks were not intended to make people enjoy the work more, but to get them to learn about it.

However, though the engagement tasks did not affect the average rating, they did affect the way ratings were distributed, i.e. how all of the participants' ratings were spread out around the scale. We found that when participants completed an engagement task, their ratings clustered much more tightly together. In statistical terms, engagement tasks reduced the variance of the ratings. This means that, though engagement tasks don't make people like things more, they make people's ratings more consistent, or increase agreement about a work across the whole population of participants.
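To make "reduced the variance" concrete, here is an illustrative sketch comparing the spread of ratings in a control group against an engagement group; the numbers are invented:

```python
# Hypothetical sketch: did an engagement task tighten the ratings?
import numpy as np
from scipy import stats

control  = np.array([5, 95, 20, 80, 60, 35, 90, 15])   # invented ratings
counting = np.array([45, 55, 50, 60, 48, 52, 58, 42])

print(control.var(ddof=1), counting.var(ddof=1))

# Levene's test checks whether the two variances differ significantly.
stat, p = stats.levene(control, counting)
print(round(p, 4))
```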

Indian. Episode Surrounding the Birth of Krishna, Page from a Dispersed Bhagavata Purana Series, late 17th-early 18th century. Opaque watercolor on paper, sheet: 10 1/8 x 15 15/16 in. (25.7 x 40.5 cm). Brooklyn Museum, Gift of Emily Manheim Goldman, 1991.180.10

Also surprising to me was which task reduced the variance the most. I expected that the description or tagging tasks would create the most agreement across participants, because they require people to evaluate what's being portrayed in the work in linguistic terms. However, the counting task reduced variance the most, followed by the color and free-association tasks (a tie for second place), then tagging, with the description task coming in dead last. We've speculated that this may be because of how the various tasks manipulated conscious attention—the description task focuses conscious attention on the content of the painting, whereas the counting task focuses your conscious attention on a more-or-less objective formal property (the number of figures).

Chart showing reduction in variance after counting task

Why, exactly, this would reduce variance is unclear. It may be because focusing on form instead of content means people don't pay attention to things that might otherwise affect their rating of the painting, e.g. controversial subject matter. It may also create a situation where evaluation of the quality of the painting (as opposed to evaluation of its form) is passed along to the subconscious, and (to extend Gladwell's thin-slicing hypothesis) subconscious judgments may naturally tend to have less variance. This suggests yet another interesting direction for further research.

Split Second Stats #3: Gender and Information

In the last blog post about Split Second, I talked about how adding extra information about a work changed what people thought about it. In general, adding information about a work causes ratings to increase. However, this isn't the whole picture. We found that men and women reacted differently to the addition of information. While ratings increased for both men and women when information was added, women's ratings increased more. Men's ratings go up by an average of 7.58 points, while women's ratings go up by an average of 10.4 points. This indicates women react more enthusiastically to the addition of information than men. Now, this finding is an average calculated across all 40 objects included in the information section of the experiment. When we look at individual objects, the story gets more complicated. For certain objects, one gender increased their ratings substantially more than the other, and sometimes men and women would change their ratings in opposite directions.
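A sketch of how such per-gender changes could be computed from raw ratings follows. The table layout and column names are assumptions, not the project's actual schema:

```python
# Hypothetical sketch: per-gender change in mean rating when info is added.
import pandas as pd

df = pd.read_csv("info_task_ratings.csv")  # hypothetical export;
# columns: object_id, gender, condition ("control" or "info"), rating

means = df.pivot_table(index=["object_id", "gender"],
                       columns="condition", values="rating")
means["change"] = means["info"] - means["control"]

# Average change by gender, across all objects.
print(means.groupby("gender")["change"].mean())
```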

Indian. The Bismillah, 1875-1900. Opaque watercolor and gold on paper, sheet: 19 5/8 x 11 13/16 in. (49.8 x 30.0 cm). Brooklyn Museum, Gift of Philip P. Weisberg, 59.206.8

The largest difference between men and women when information was added was for The Bismillah, pictured above. When information was added to this painting, men's ratings increased by 2.36 points, while women's ratings increased by 20.37 points—an order of magnitude larger. Because these rating changes are a function of the form of both the painting and the additional information, it's very hard to say why the changes are the way they are. The Bismillah is a non-representational painting, but we didn't have enough similar paintings in our sample to determine whether that was the determining factor. Similarly, without conducting more controlled research, we can't determine whether there was a specific element or topic of the additional information which caused the change, or whether there was some interaction between the non-representational nature of the painting and the description of its religious function.

Karam and Mahata Chandji. Double-sided Leaf from a Chandana Malayaqiri Varta series, 1745. Opaque watercolor and gold on paper, sheet: 11 3/8 x 7 7/8 in. (28.9 x 20.0 cm). Brooklyn Museum, Gift of Mr. and Mrs. Paul E. Manheim, 69.125.5

The gender-switched counterpart to The Bismillah is the Double-sided Leaf from a Chandana Malayaqiri Varta series. While women increased their ratings dramatically more than men for The Bismillah, men increased their ratings dramatically more than women for the Double-sided Leaf. Women's ratings increased by an average of 2.25, while men's increased by an average of 12.86. Again, this result is perplexing. The content of the painting doesn't include any of the familiar tropes of gender-difference discussions like sexuality or violence. Although livestock are pictured, and there are some prominently featured mustaches, it's hard to say whether these factors were decisive.

Mughal (style of). Lady with a Yo-yo, ca. 1770. Opaque watercolor and gold on paper, sheet: 9 1/4 x 6 3/16 in. (23.5 x 15.7 cm). Brooklyn Museum, Gift of Alan Kirschbaum, 80.268.1

For a few paintings, men's and women's ratings moved in opposite directions. The most dramatic example is Lady with a Yo-yo, pictured above. Women's ratings increased by an average of 8.4, while men's ratings dropped by an average of 5.6.

We did find a relationship between these rating changes and the complexity of an image. For women, as paintings got more complex, their rating increases got smaller (cor = -.34, p = 0.028). That is, women were less affected by additional information as the complexity of the image increased. (Viewed a different way, women were more affected by additional information when paintings were simpler.) Men showed the same pattern, but it was much weaker—so weak, in fact, that we can't be completely sure it was a statistically significant effect (cor = -.27, p > 0.08). Unfortunately, this finding does not explain the dramatic differences described above. To settle that question decisively, we'd need to design an experiment which would allow us to analyze these rating changes in terms of the type of content included in a work.

Stay tuned for the next post!

Many Hours for a Split Second

Split Second paintings in the Conservation Lab

At the start of the Split Second project, Joan Cummins, Curator of Asian Art, selected a very large number (185) of works from the Museum’s Indian painting collection to post on our website for the Split Second survey. Conservator and curator together assessed this checklist to preemptively eliminate any works with condition problems requiring extensive treatment.

Our time frame for conservation of the paintings was relatively short: images of the ~180 works were posted online in February and March. The data was assessed in April and 11 paintings were selected. Thus we had about 8 weeks prior to the exhibition to complete our examination of each painting and undertake any needed treatment and framing. We brought the works to the lab in late April for review.

A very common condition problem with Indian paintings is paint instability. There are several reasons for this: these paintings are made with opaque watercolors applied in many layers, with burnishing between the layers, and thick dots of paint (impasto) are often applied over the surface as decorative elements. These multiple layers and peaks of paint are subject to cracking, lifting, and detaching.

Photomicrograph

Seven of the eleven works in Split Second had loose and flaking paint when examined inch by inch under the microscope. In this photomicrograph you can see small previous losses in the pink pigment as well as a lifting flake of white paint at the center. Though it looks obvious at this magnification, paint instability is often only discovered with the aid of a microscope. If not secured, flaking paint can detach completely, leaving a void. Usually the paint surrounding a void then becomes loose. Thus consolidation of loose and lifting paint using a variety of adhesives is critical.

In Indian miniature paintings, previous losses are usually accepted as part of the age of the work, and the responsibility of Conservation is to prevent additional losses from occurring, rather than to cover up old losses. Sometimes, however, a decision is made by the curator and conservator to fill a previous paint loss; this was true in the case of Dhanashri Ragini (80.277.9).

Lastly, we consider housing each of the paintings in archival rag mats that accommodate the paint and support. Note that Chandhu La’l (59.206.2) and the folio from the Qissa-I Amir Hamza (24.46) both have strong undulations in the sheet, i.e. they do not lie flat as most of the other paintings do. This is because both are double-sided and have multiple layers of paper, fabric, etc., which naturally cause distortions.

Decisions can be made within a split second but conservation and preservation take much longer. Enjoy the exhibition.

Split Second Stats #2: Adding Information

Last week I talked about our Split Second: Indian Paintings exhibition and Malcolm Gladwell's book Blink: The Power of Thinking Without Thinking. In the previous post I described the first section of the online experiment we created for Split Second, and described one of our findings: thin-slicing reduces the positive effect complexity can have on judgment of a work. Today I'm going to discuss another section of the experiment, along with another finding about how people make decisions about art.

In the Split Second section of the online experiment, participants were asked to pick one of two paintings in less than 4 seconds. We compared the Split Second results with results from a control task, where participants rated images one by one on a linear scale and had unlimited time to make their decisions. Now I'm going to talk about a third task: the "info" task. In the info task, participants were asked to read some information about the painting they were looking at, and then to rate the painting on a linear scale. In the info task there was unlimited time to make decisions. Participants in the info task were split into three groups: one group read the painting's caption, another read a list of "tags" (single word descriptions), and the final group read a full curatorial label.

Whether labels and other additional information help or hurt our experience of art is a point of contention. The cases for and against labels both seem intuitively strong: labels enhance our understanding of the art and its history, but at the same time they can cloud our intuitive reaction to the work, interfering with the purely visual aspects of the "thin-slicing" process as described in Blink. The relevant question, from the perspective of a museum (as opposed to an art gallery) is whether education is at odds with enjoyment. Do educational labels just get in the way of the work itself?

Our data suggest the answer is a decisive "no": for every painting, ratings improved with the addition of information. Adding captions improved ratings by an average of 5 points, tags by an average of 6 points, and full labels by a remarkable 10 points. Not only do the labels not get in the way of the art, they seem to make people enjoy it even more.

Graph of control vs full label scores

Now, we have a few more findings which complicate this picture:

Captions and tags seem to cause about the same boost in score. At first this was confusing—captions seem to be more information-rich, so why weren't they causing bigger boosts than tags? We think this slightly puzzling result has to do with our choice of Indian art. Unlike much contemporary art, where the title often provides crucial information for interpreting a work, the titles of our Indian paintings tend to be very tag-like, simply describing the work's content. So, in this case, our assumption that captions were more information-rich than tags may have been incorrect.

We found that some works improved much more with the addition of information than others. Images which got low scores in the Split Second and control tasks made the biggest improvements. That is, Split Second scores were negatively correlated with score improvement (cor = -.39, p = .013), as were scores in the control task (cor = -.62, p < .0001). Additionally, as with last week's story, visual complexity is a significant factor. We found that simpler images reliably got bigger score boosts than complex images (cor = -.42, p = .006). Further, once information was added, the visual complexity of paintings was no longer correlated with their score. That is, the addition of information doesn't just mute the positive effect of visual complexity, but completely removes it from the picture. This finding suggests that the addition of information about a work changes the process of judgment in a deep, fundamental way, activating an entirely different set of evaluative criteria.

Indian. An Old Man, ca. 1730. Opaque watercolor on paper, sheet: 9 3/8 x 5 7/16 in. (23.8 x 13.8 cm). Brooklyn Museum, Gift of the executors of the Estate of Colonel Michael Friedsam, 32.1322

Finally, at the Split Second event at the museum last Thursday, an audience member asked if we found any relationship between the length of the label and the score. We did find such a relationship, but it wasn't strong enough that we're sure it's meaningful. We found that the scores given by participants who read the full label were correlated with the length of the label (cor = .28), but we weren't absolutely sure that this finding wasn't due to random chance (CI: -.04 to .54, p = 0.08). The concern is that it may not be the content of the label which people are responding to when they give higher scores, but simply its length. We can't yet answer this question, but our finding is strong enough (and the question tantalizing enough!) that it certainly suggests a direction for further research.
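As a sketch of how such a correlation and its confidence interval might be computed (with invented numbers; recent versions of SciPy expose the interval directly):

```python
# Hypothetical sketch: label length vs. mean score, with a 95% CI.
import numpy as np
from scipy.stats import pearsonr

label_words = np.array([40, 55, 62, 71, 80, 95, 110, 130])  # invented
mean_scores = np.array([48, 52, 50, 57, 61, 59, 66, 63])    # invented

res = pearsonr(label_words, mean_scores)
print(res.statistic, res.pvalue)
print(res.confidence_interval(confidence_level=0.95))  # SciPy >= 1.9
```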

Split Second Stats #1: Thin-slicing vs. unlimited time

A big inspiration for Split Second: Indian Paintings was the book Blink: The Power of Thinking Without Thinking by Malcolm Gladwell. Blink introduced the general public to the idea of "thin-slicing," the notion that "decisions made very quickly can be every bit as good as decisions made cautiously and deliberately." This idea has been widely studied and applied, in tasks as banal as deciding who to "friend" on Facebook, or as serious as recognizing potential terrorists at airports. In this, the first in a series of posts about the Split Second experiment and our findings, I'm going to describe the first part of the experiment, and then say something about what some of the results might tell us about thin-slicing.

[Screengrab of the timed Split Second task]

In the first part of the experiment, participants were presented with a series of pairs of Indian paintings, making snap decisions about which of each pair they liked better. We called this the "Split Second" task. Decisions made during the Split Second task were "thin" in two ways: First, each decision had a time limit of 4 seconds. Second, participants had no extra information about the paintings, and had to "go from their gut." The results from the Split Second task told us which paintings did better in thin-sliced conditions.

But looking at thin-slicing alone wasn't quite enough for us. In order to really learn about how thin-slicing works, we needed to compare thin-sliced decisions to other kinds of decisions. To do this, we split off a number of participants into a "control group." Rather than completing the second section of the experiment like the rest of the participants, the control group completed a neutral, unlimited time task with which we could compare all of the other tasks (like the Split Second task). The control group was presented with a series of individual paintings, with no additional information, and given unlimited time to rate each painting on a linear scale from "Meh..." to "Amazing!"
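As an illustration of how each task's results boil down to a ranking, here is a rough sketch; the file names and column names are hypothetical, not the project's actual pipeline:

```python
# Hypothetical sketch: a thin-slice ranking from pairwise picks, and a
# control ranking from mean ratings.
import pandas as pd

picks = pd.read_csv("splitsecond_picks.csv")   # columns: winner_id, loser_id
shown = pd.concat([picks["winner_id"], picks["loser_id"]])

# Fraction of appearances each work won in head-to-head matchups.
thin_slice_rank = (picks["winner_id"].value_counts() /
                   shown.value_counts()).sort_values(ascending=False)

control = pd.read_csv("control_ratings.csv")   # columns: work_id, rating
control_rank = (control.groupby("work_id")["rating"]
                       .mean().sort_values(ascending=False))
print(thin_slice_rank.head(), control_rank.head())
```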

The result of each of these tasks came in the form of a ranking. The first ranking was based on thin-sliced decisions, and the second was based on decisions made with unlimited time. When we analyzed the two rankings, this is what we found:

  • When the time restriction was lifted, there were some huge rankings upsets! You can look at a comparison of the rankings on the Split Second exhibition page. Some objects saw big drops in ranking; e.g. Utka Nayika. More dramatically, we see King Solomon and His Court jump from somewhere near the middle of the pack all the way up to number one! This is an extremely dense, complex painting, which leads us to the next finding...
  • Visually complex paintings did pretty well in the Split Second task, but did extremely well in the control task. That is, paintings that had a lot of different stuff in them (patterns, many different colors, lots of changes throughout the painting, intricate detail) had higher ratings than other paintings in both tasks, but when time was unlimited, those paintings were ranked a lot higher. This suggests the 4-second time limit muted some of the positive effect complexity can have on judgment of a work. In science-speak, we found that complexity was correlated with ranking in both tasks, but that the correlation was much stronger in the control task (Split Second task: cor = .25, p = .001; control task: cor = .49, p = .001).
  • Paintings whose central four colors were very different from each other did worse in the unlimited time task (cor = -.38, p = .013).
  • Finally, we found that paintings with big frames did much more poorly in the Split Second task than in the control task. This might not sound like an impressive finding at first, but it's quite strong: the negative hit paintings with large frames took in the Split Second task (cor = -.48, p < .000001) was just as big as the boost complex paintings got in the unlimited time task (cor = .49, p = .001). This suggests thin-sliced judgments were strongly biased against paintings with big frames.

These results paint a complicated picture of thin-slicing. I think one of the big issues with Gladwell's Blink is that it doesn't really give a good idea of when thin-slicing makes sense and when it doesn't. Thin-slicing is clearly a powerful, effective tool, but it privileges certain qualities over others. Hopefully, by studying these qualities, we can help figure out in what circumstances thin-slicing works best, and when thicker slices might do a better job.

The results summarized above suggest thin-slicing privileges images which are vibrant and clear. On a computer monitor, a large frame means a smaller central image, and high complexity makes paintings harder to understand quickly—but it seems wrong to suggest that the complexity or frame style of King Solomon and His Court are flaws which should cause its ranking to drop. Indeed, when participants were able to take their time, it rose to the top of the list. This suggests that, despite its overall effectiveness, thin-slicing doesn't reliably engage with complexity, and this can cause us to overlook some gems.

Come visit your data in Split Second

Split Second Installation

Watching Split Second: Indian Paintings get installed in the gallery this week has been a real thrill for me. I believe it is vital that digital projects inhabit the museum in real space, not just sit online, and I'm privileged to work at an institution that sees it the same way.

Whenever we start a project that is digitally born, we wonder if it will find a local audience.  In Split Second's case, participants came from 59 countries, with most traffic coming from the United States; an overwhelming majority of participants came from NYC, and they were the most dedicated of the bunch.  On average, a participant spent 7 minutes and 33 seconds in the online activity, but if you were from NYC the average was 15 minutes.  I'll never forget seeing this map in Google Analytics, which so clearly visualizes what happened online—a local audience found us and they took this project very, very seriously.

Google Analytics - Split Second - New York Participants

Even knowing that we reached a local audience, the challenge with this show would be to create an installation that would make sense to every visitor coming in the door, whether or not they had participated with us online.  Only time will tell if we effectively accomplished this, but I feel lucky to work with a talented team of editors, designers, and interpretive materials staff who could take this material and help shape it while keeping every visitor's perspective in mind.

At the end of the 10-week online activity, 4,617 participants created 176,394 ratings, and this resulted in an incredibly rich data set to explore. In the gallery, we followed Joan Cummins' mantra..."But, what's the so what?"...and took the most conclusive findings—the things we thought would have the most impact—and installed the works that illustrated those points.  On the web, you can review the results by exploring a visualization comparing works, data stories, and a profile of each painting using the data gathered. Saying there was a lot of nuanced data to explore is putting it lightly, and in the coming weeks Beau Sievers will be blogging about the incredibly complex findings.

In Blink: The Power of Thinking Without Thinking, Gladwell isn't endorsing or disputing the quick decisions we make on the spur of the moment—he just explores the notion of them through a lot of different avenues. Our data very much supports his theories; rapid cognition does have an effect on what we see and how we process the art before us, but it shifts and changes under various conditions.

If you took part with us online, I can't thank you enough for your contribution.  We hope you can come see the installation, explore the website and stay tuned for Beau's analysis of the data.

Split Second Thank You

The online evaluation phase of Split Second: Indian Paintings came to a close yesterday evening, and now it's time to say thanks to everyone who gave us some of their time to help us build the show that will open this summer.  In the end, 4,617 participants created 176,394 ratings and spent an average of 7 minutes and 32 seconds in their session with us. Those of you who took part helped contribute a massive amount of data to the project and we can't thank you enough for your time. We'll release the data that surrounds the paintings when the show opens on July 13th, but in the meantime I'll publish some pretty graphs and charts that Paul has been working on; these will give you a quick view of the demographic breakdown.

Split Second Stats

This breakdown shows how far participants got during the online evaluation and where they stopped in the process.  We're pretty happy that the majority of participants made it all the way through, and some even gave us more of their time by completing extra rounds of the speed trial. If you need a refresher on how the tool was designed, check out the previous Split Second blog posts.

Split Second Completion

Speaking of time, you may notice that we've got a fairly large window between now and when the show opens, and we're going to need it!  One thing we learned from Click! is that we could have used more time between the close of the online evaluation and the opening of the exhibition.  In Split Second's case, this is especially true.  The data set is much more complex, conservation needs time to review the paintings in the final checklist, objects need to be framed and matted, there's a massive amount of writing to be done, and the gallery design is going to be challenging.  It promises to be a rigorous few months, but we're looking forward to it. Stay tuned...we'll be sharing a lot more come mid-July.

Next up, what you see is what you get.

This post continues the discussion of the tool we developed for Split Second.  Once you get past the stressing and (possibly) scrolling of the timed trial, the tool asks you to slow down and consider a work in various ways prior to rating it.  What you may not know is that different people are randomly assigned to groups and asked different things during these stages, so your own experience often differs from other participants'. Section two of the tool is designed to get you thinking in various capacities about a work prior to rating it. Participants are split into six groups, and each group is given a question or activity for ten works: either you are asked one of five questions (shown below) or you are given the rating scale alone.
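For the curious, here's a minimal sketch of what that kind of randomization might look like in the browser. The group names and the localStorage persistence are my own illustrative assumptions, not our actual code:

```typescript
// Sketch: assign each participant to one of the six section-two groups.
// Group names and persistence are assumptions for illustration only.
type SectionTwoGroup =
  | "question-1" | "question-2" | "question-3"
  | "question-4" | "question-5" | "scale-only";

const GROUPS: SectionTwoGroup[] = [
  "question-1", "question-2", "question-3",
  "question-4", "question-5", "scale-only",
];

// Assign once and remember it, so a participant stays in one
// condition for all ten works.
function assignSectionTwoGroup(): SectionTwoGroup {
  const saved = localStorage.getItem("sectionTwoGroup");
  if (saved) return saved as SectionTwoGroup;
  const group = GROUPS[Math.floor(Math.random() * GROUPS.length)];
  localStorage.setItem("sectionTwoGroup", group);
  return group;
}
```

Remembering the assignment matters: if a participant were re-rolled on every page load, their ten works would be spread across conditions and the comparison would wash out.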

In terms of the data coming back to us, we'll be looking at a lot of different aspects.  Do any of the activities have an effect on the eventual rating?  How widely do answers to these questions vary?  Do participants bail from the tool and, if so, which question or activity triggers this? How long do participants spend with works prior to answering and rating?

The third section of the tool is a bit of an information showdown.  Unlike the first section, where we are looking for gut reactions, or the second, which gauges whether thinking and participation have an impact on rating, this final section looks at how the information we provide may change things.  This time, we are specifically looking at the information that the institution produces to see how effective it is (or isn't).

Participants are randomly split into one of three groups and presented with ten objects. Some people see only the object's caption, others are given tags to consider, and the final group gets what we think of as museum "gold": the interpretive label.  I've spoken to more than a few participants who were really disappointed when they were randomly selected to review only tags or captions; don't you just love it?  Folks disappointed that they can't dig into label copy is a bit of a trip.

In this activity we are measuring a few things.  Most obviously: which type of information changes ratings? Less obviously, we're going to be looking at the length and tone of the label copy to see if ratings are affected simply by how we compose these materials.  We're also looking at how long people linger with these materials prior to rating (see the sketch below). We've got a chance to look at tagging in a new light, too: we know tagging has a positive effect on our collection's searchability, but do tags sitting on a page as information help or hurt a participant's rating of an object?
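To make that concrete, here's the rough shape of the per-rating record you'd need in order to answer those questions. The field names are assumptions for the sake of the sketch, not our production schema:

```typescript
// Sketch: one record per rating cast in the information showdown.
// Field names are illustrative assumptions, not the real schema.
interface RatingEvent {
  objectId: string;                         // which painting was rated
  condition: "caption" | "tags" | "label";  // which information was shown
  msBeforeRating: number;                   // how long the participant lingered
  rating: number;                           // slider position, e.g. 0-100
  labelLength?: number;                     // characters of label copy, if any
}

// Dwell time falls out of two timestamps: when the object appeared
// and when the rating was cast.
function recordRating(
  objectId: string,
  condition: RatingEvent["condition"],
  shownAt: number,
  rating: number
): RatingEvent {
  return { objectId, condition, msBeforeRating: Date.now() - shownAt, rating };
}
```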

The rating scale used in both of these sections is also worth noting, because it's a notch above what we used for Click!. In both Click! and Split Second, we recognized that participants were rating art, with all its complexities, so we wanted to stay away from simple 1-10 or five-star scales.  In both cases, we implemented a slider with some general direction, but otherwise wanted to give folks as much granularity as possible. Split Second's slider differs from Click!'s in that the slider mark has no fixed starting position.  With Click!, the slider was fixed in the center and then moved by the participant.

Click! Rating Tool

In Split Second, the slider mark doesn't appear until a participant hovers over the scale, which encourages participants to move away from center and use the full breadth of the scale.

Split Second Rating Tool
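In code terms, the idea is tiny; something along these lines would do it. The element id and CSS class name are assumptions for illustration:

```typescript
// Sketch: keep the slider mark hidden until the participant hovers,
// so no default position anchors their rating. The id and class
// are illustrative assumptions; a ".thumb-hidden" CSS rule such as
// ::-webkit-slider-thumb { opacity: 0; } does the actual hiding.
const slider = document.querySelector<HTMLInputElement>("#rating-slider");

if (slider) {
  slider.classList.add("thumb-hidden"); // no visible mark at first
  slider.addEventListener("mouseenter", () => {
    slider.classList.remove("thumb-hidden"); // reveal the mark on hover
  });
}
```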

This is a subtle change that will likely have a big impact, and many thanks go to Beau for the idea and implementation. As simple as the tool is, there's a lot of complexity behind the scenes, and Beau and Paul have done incredible work as the team behind it.

The preliminary data is incredibly rich and the questions and ideas that I've talked about here barely scratch the surface of what we are seeing, so stay tuned for more.  If you've not taken part in Split Second yet, you've got until April 14—go for it!

Stressing and Scrolling in a Blink

One of the things we wanted to do with Split Second is talk about the tool we developed for the online activity.  Much like the evaluation tool we developed for Click! A Crowd Curated Exhibition, a lot of thought went into designing something that could capture results from many participants while using certain theories as a framework.  What I love about working on a project like this is how much response we get from participants about the tools they use, so let's take a look.  You'll get some anecdotes from me, but I'd also love to hear your own responses in the comments.

Split Second Ratings Tool - Part One: Speed Trial

The thing I keep hearing over and over again is how much everyone is stressing out about the timed choice between two works in part one. We've had a range of responses, but many indicate that looking at art this way feels wrong.  I see that point, but it got me thinking about what happens when I walk through a museum.  Upon entering any room, I have dozens of opportunities to look at works and limited time, so I make quick choices, often without even realizing it.  What are the works I'd cross a room for?  What object am I drawn to enough to spend time with?  What catches my eye? The tool artificially presents this situation to the viewer without the benefit of other devices (gallery design, wall labels, installation choices).  Interestingly, I was discussing this with a friend of mine, and the thought had her wondering if this was why she often didn't like going to art museums and found them stressful.  Several other friends said the tool made them recognize how much they were drawn to certain types of works when faced with quick decisions.  I'd love to hear about your own experience in the comments.

The second most talked-about issue is scrolling: participants using smaller screens or laptops have to scroll to fully see certain images. The scrolling, combined with the quick-decision requirement, is driving some people slightly nuts.  So, how are people dealing with this?

Image Display Stats

According to our stats, roughly 51% of participants can see both images clearly.  The other 49% are handling this in one of two ways: either scrolling and casting ratings within the time limit (30%) or casting ratings while the images are partially obscured (19%).  Participants average only 1.25 timeouts per session, so most have figured out how to deal with the situation one way or another.  By far, this has been my favorite comment about the situation:

Comment about scrolling

So, why not design the tool so everything can be seen without the need to scroll?  The short answer is that we tried, but the objects have to be seen, and many of these paintings have a lot of detail, so we were conscious that they couldn't be displayed too small. To determine image size, we created an internal tool and asked staff members to run through our collection of Indian paintings at the size we were considering for the final design.

Image Testing Tool

Some paintings were eliminated right off: staff using the tool thought certain paintings couldn't be seen well enough at that size and, as a result, those were not included in the final pool of Split Second works.  Given the four-second timeout, we couldn't provide a zoom function, so we had to split the difference and go with image sizes that could mostly be seen, knowing that some users would have to scroll.  This is not ideal, but we'll be able to separate the data to see how it may be changing things.
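As a rough illustration of how that separation could work, a client-side check like the one below could flag each session by whether both images fit on screen without scrolling. The pixel values and field names are assumptions, not what we actually shipped:

```typescript
// Sketch: flag each session by whether the paired images fit the viewport
// without scrolling, so ratings can later be segmented on this flag.
// The numbers here are illustrative assumptions.
function imagesFitViewport(imageHeight: number, chromeHeight = 120): boolean {
  // chromeHeight approximates the timer, buttons, and other UI
  // stacked above and below the images
  return window.innerHeight - chromeHeight >= imageHeight;
}

// Recorded once per session alongside the ratings:
const sessionFlags = {
  fitsWithoutScrolling: imagesFitViewport(600), // 600px: assumed display size
  viewportHeight: window.innerHeight,
};
```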

I'll discuss what's happening in sections two and three in my next post, but in the meantime I'd love to hear from you about your own experiences.  Did we stress you out? Did scrolling cause you to fumble? What did you notice when making split-second decisions in part one?

If you haven't already, take Split Second for a spin—you've got until April 14.

What do you see in a split-second?

Today, we are launching Split Second: Indian Paintings, and it's something I've been excited about for quite a while. Split Second is an opportunity to facilitate a collaboration between our curators and our online community, using technology and the web to learn more about the visitor experience.  Our online experiment and the resulting installation will explore how someone's initial reaction to a work of art is affected by what they know, are asked, or are told about the object in question.

Blink: The Power of Thinking Without Thinking

This project's main source of inspiration is Malcolm Gladwell's book Blink: The Power of Thinking Without Thinking.  The book explores the power and pitfalls of initial reactions. After reading it, I started to wonder how the same theories might apply to a visitor's reaction to a work of art. How does a person's split-second reaction to a work of art change with the addition of typical museum interpretive text? As visitors walk through our galleries, what kind of work are they drawn to? And if they stop, look, read, or respond, how does their opinion of that work change?

Over the course of the next several months, I'll be blogging more about the concepts and choices we've made in developing an online activity to collect data on this, so stay tuned. In the meantime, we'd love it if you would take part. By participating, you're helping determine the content of a small installation of Indian paintings—along with an analysis of the data we get on the questions above—opening in July 2011. The more data we have, the better the installation will be, so we'd be grateful if you'd help us spread the word by encouraging others to contribute a few minutes of their time.

The online activity is ready and waiting for you. It will be up until April 14, 2011, at midnight (Eastern Standard Time).