GalleryPal

Overview

As a staunch admirer of art, I know that art is more than just a visual experience. It is also a cultural experience, a moment for self-reflection, a sociopolitical statement, and much more. It's for this reason that I chose to design for the hypothetical company GalleryPal in a week-long, one-person design sprint modified from the Google Ventures Design Sprint. GalleryPal seeks to improve the in-person viewing experience for guests at art museums and galleries by letting them instantly access relevant information about a piece through image recognition technology.

For this project, I was provided a briefing on the general background and context, notes from user research, a persona, and an interview recording of a tour guide at the Museum of Natural History.  

Day 1: Understand the Problem

ANALYZING THE RESEARCH

From the user research highlights provided to me, I gained a few key insights:

  1. Too much information about an artist or the art can be overwhelming.

  2. Museum-goers like having enough information about a piece to gain a better appreciation for it and form their own opinions.

  3. Museum-goers like to go at their own pace (i.e. group tours are a no-no). 

Essentially, I was designing for viewers of art who feel unfulfilled by a lack of understanding of the pieces they view, but who don't enjoy sorting through endless Google search results and 2,000-word articles during a visit.

HOW MIGHT WE

From this understanding, I formed two questions:

  • How might we help users quickly obtain relevant information about an art piece while viewing it?

  • How might we present information in a way that does not (1) overwhelm the viewer or (2) distract from the viewing experience?

USER MAPS

I then drafted a few user maps depicting potential end-to-end experiences for the solution.

Here is a user map I initially drafted:

And here's the user map I drafted after realizing that the phone camera and the wonders of image recognition technology could streamline the process of obtaining information about a piece:

Day 2: Sketching

LIGHTNING DEMOS

I performed lightning demos on a few apps to get inspiration: Google Lens, Smartify, Spotify, Notes, and Snapchat.

I demoed Google Lens, Notes, and Snapchat for their scan features. The fixed scan icon in Google Lens effectively communicates to the user how to position the camera, while Notes' document highlighting gives users visual confirmation that the right document is being scanned. All three apps use a familiar, easily identifiable shutter button, which eases the user's burden of learning its functionality; this was something I knew I wanted to adopt in my design.

I was most interested in Smartify's art scanning feature and audio tours. The lack of a shutter button on the app's scanner was an object of deliberation: having the app automatically scan a piece could make the process more seamless, but it could also frustrate users by taking away control.

Most notable on Spotify was its expandable lyric screen, which I contemplated implementing for the audio transcripts. Although I wanted audio to be the primary vehicle for communication, a text-format transcript was necessary for accessibility.

CRAZY 8’S

I used the Crazy 8’s method to sketch out a few ideas for the screen I felt was most critical: the post-scan results page, since this is where information about a piece would be located.

After carefully evaluating each screen, I opted for the last design (bottom right). A list view of the audio recordings lets the user easily scan through tracks and select the ones they’re interested in. An expandable transcript keeps all the necessary components on one page, centralizing information for quicker access without overwhelming the user.

The minimalistic nature of the user flow can be seen in this quick 3-panel sketch.

Day 3: Storyboarding

When deciding on my screens, I homed in on the idea of the minimum viable product, or MVP. Given the time constraints of the sprint, I prioritized simple navigation and quick access to well-categorized information over functionality that might have fleshed out the app but wasn’t imperative to the objective. This meant throwing out a few ideas I had played with, such as a save-art/view-history feature and suggested pieces.

Ultimately, my focus was on improving the interaction a user has with a singular art piece and not with navigating an entire museum or exhibit.

This left me with the following screens: 

Day 4: Prototype

I prototyped the sketches I made to depict the key navigational features, although implementing certain core functionalities, such as the audio recordings, was not possible due to the narrow scope of the project.

Day 5: Validate

USABILITY TESTING

I interviewed five users who had visited a museum in the past three months in order to test ease of navigation and points of friction. I wanted to see whether users could 1) quickly access information about a specific art piece and 2) easily navigate through the resulting information.

Users were able to complete all tasks, but because an authentic in-museum experience hinges on speed, task completion time was the more relevant measure. All participants spent no more than 20 seconds scanning and playing audio, indicating that quickly accessing and navigating to the relevant information wasn’t an issue.

During debriefing, however, users expressed confusion about which audio track would play due to the lack of a track title on the fixed player (figure 1). This suggested the need for a track label on the audio player in addition to the visual feedback on the page. I also found that the chevron icon for the expandable transcript made the audio tracks read more like a drop-down menu than playable tracks.

Figure 1

Figure 2

CONCLUSION

Ideally, I would iterate on my design by adopting a more recognizable audio track format for the track list (similar to figure 2) and placing the transcript on a separate screen, since users navigate a recognizable interface more quickly than one that must be learned from scratch. Although I didn’t have time for that iteration, my final design still reduces the feeling of unfulfillment caused by a lack of understanding about an art piece, presenting relevant information in a way that neither distracts nor overwhelms. For now, I would consider that a mission accomplished.

Next Steps

  1. Flesh out the app to cover the entire museum/exhibit-wide experience (e.g. search function, save art, view history, queue, suggested art).

  2. Implement different language options for accessibility. 
