Understanding the user experience is key to designing great products. Here are some examples of user tests from school projects.

Virtual Event Ticket Website!

Our team organized a card sorting activity with users to help us design a user-centered ticketing website for a virtual event venue. Below is a presentation of our process and findings!

Hopin.com

With a different group, I explored the problem space for Hopin.com. Hopin is a virtual event platform. We created a design brief, performed a competitive analysis, interviewed and observed users, created a user persona, and presented our findings. You can view our reports by following this link (https://drive.google.com/drive/folders/1wDc2iIXFq_5fyQxvHRTfuaWBqY9retzg?usp=sharing).

A/B TEST FOR 3240 ENTERTAINMENT

A/B Test variable 1

Try it yourself!

For my capstone project, I created two prototypes for an interactive narrative and changed one variable between them to perform an A/B test. Read my capstone project description, or play my oral presentation at the bottom of this page, for an overview of my project and results.

A/B Test variable 2

Try it yourself!

If you’d like to check out my prototypes, hit the buttons to the left!

Capstone Project.

I am working for 3240 Entertainment Group LLC, a company that I own and run. I created prototypes of narrative experiences that differ by one variable in order to run an A/B test with users. This A/B test observed users’ choices and behaviors as they played through the narrative experience. This capstone project is significant because it provides data on audience behavior in an interactive narrative. I wanted to find out whether there is a measurable difference between visual narrative experiences and text-only narratives, and whether users would show patterns in their choices based on that variable. Data like this can reveal how audiences behave in response to UI and can be helpful to any narrative artist who works on computers. Improv teams who perform online would be interested in the behavioral story choices of their audiences, as would other online theatrical performances and metaverse experiences that require audience interaction.

I am studying user behavioral story choices in narrative design because I want to discover distinguishing factors in audience story path decisions based on UI in order to assist narrative designers, theme park designers, and virtual and digital storytellers in understanding the experiences of their audiences, so that live narrative "performances" on devices can expand as a new paradigm of entertainment.

Learning objectives.

In my capstone proposal, I stated that I wanted to use a design project experiment that does not vary in narrative but has varying UI features that I would test with A/B testing. My learning objectives were:

1.)  Evaluate if UI affects story path choices in interactive narrative design.

2.)  Determine what UI users find most comfortable when playing an interactive narrative.

3.)  Research what UI leads to audiences experiencing the story as it was intended by creators.

My method involved using a control prototype and a variable prototype. The design project’s control was a website with plain text that read as a screenplay, though the UI had limited functionality for formatting it exactly as a screenplay should be. The variable was the same design project with images accompanying the text. I thought that other variable design prototypes might emerge after background interviews were performed, and users did include comments about font, formatting, information architecture, and other UI changes they desired. At the start of my project, stakeholders weighed in on what variables should be tested as well; however, to keep the scope of my project manageable over two terms, I created only one variable prototype to compare against the control. The outcome of this A/B test revealed whether there is a pattern of behavior from audiences when selecting story paths on a plain-text website compared to the same story on a website with pictures.

What I have achieved.

I evaluated if UI affects story path choices in interactive narrative design by running my A/B test on 20 participants. Ten participants were assigned to each prototype. I observed a clear pattern in story paths. All participants followed a similar behavior pattern, but I could see a clear divergence between variable A and variable B at the user’s third card decision. There is a large difference in play pattern between text and pictures when it comes to the Fool and the Empress: 60% of participants picked the Empress as their third choice in variable A (text only), while only 10% of participants picked the Empress at this same point in the story in variable B (text and images). The Fool is definitively the most-picked third-choice card with text and images, showing that users’ patterns break between the variable prototypes at this third-choice mark. After the third-choice mark, users’ patterns continue to diverge according to variable prototype.
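The comparison behind those percentages is simple tabulation. Below is a minimal sketch of how such a tally could be computed, assuming the observations were exported as rows of (participant, variant, choice index, card); the rows and names shown are illustrative placeholders, not my real data.

```python
# Minimal sketch of the third-choice comparison, assuming the observations
# were exported as (participant_id, variant, choice_index, card) rows.
# The sample rows below are illustrative placeholders, not my real data.
from collections import Counter

observations = [
    ("p01", "A", 3, "Empress"),
    ("p02", "A", 3, "Fool"),
    ("p03", "B", 3, "Fool"),
    ("p04", "B", 3, "Empress"),
    # ... remaining participants
]

def third_choice_share(rows, variant):
    """Return each card's share of third choices for one variant."""
    picks = Counter(card for _, v, idx, card in rows if v == variant and idx == 3)
    total = sum(picks.values())
    if total == 0:
        return {}
    return {card: count / total for card, count in picks.items()}

print("Variant A (text only):      ", third_choice_share(observations, "A"))
print("Variant B (text and images):", third_choice_share(observations, "B"))
```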

The chart below shows how first choices compare to final choices. In both variables A (text) and B (images) participants were split on choosing the Empress and the Magician first, and the remainder picked the Fool. However, by the end of their user test, final card choices show a pattern of divergence between the Empress and the Fool according to variable prototype, with 50% picking the Empress as their final card in the text only version, and 60% picking the Fool as their final card in the images and text version.

My second learning objective was to determine what UI users find most comfortable when playing an interactive narrative. By conducting post-test surveys to collect subjective data about participants’ experiences, I was able to understand what users wanted in their UI experience. I analyzed the subjective data and coded responses by similarity. For the text-and-images variant, users wanted UI changes such as a more defined script format, bold or highlighted text in some places, and more contrast between the text and the background color. One user also criticized the art, stating that it did not move the story forward. In addition, many users requested more instructions for how to read the scene headings and what to do in general. Similarly, those who received the text-only variant also requested more instructions. They also wanted UI improvements in text formatting and information architecture. Notably, three users commented that the experience would be improved if pictures were added. Overall, from this feedback I conclude that information architecture and formatting were most important across the board.

My third objective was to research what UI leads to audiences experiencing the story as it was intended by creators. From user feedback I observed that those who took the text-only version could recall the story in more detail than those who took the version with images and text. However, those who received the images-and-text version still understood the story and took a more general approach to summarizing what they got out of it. I found it notable that three participants who got the text-only version mentioned the narrative detail that images were illegal in the world of the story, while none of the participants who got the images-and-text version mentioned that detail in any of their answers. This suggests to me that there is not only divergence in story path decisions between variables, but also a difference in which details of the narrative are remembered or thought of as important.

How what I learned will help in my future career.

I used Useberry to run my user tests, and knowing how to use UX research software like Useberry will be very helpful going forward in my career. Going through this user test allowed me to learn how to collect the data I needed. Observing that there were patterns between variable A and variable B was fascinating to me, and it made me want to join a larger project that could study a similar subject at a larger scale.

My second learning objective was to determine what UI users find most comfortable when playing an interactive narrative. I learned that information architecture was most important to users. If this project were scaled up, I would run another A/B test after a better platform was built for the game. This better platform would address the formatting and information architecture issues and try to ensure that users feel confident about what they are doing as they play the game. This follow-up A/B test could be built in Unity, with one project that includes images compared against one that does not.

My third learning objective was to research what UI leads to audiences experiencing the story as it was intended by creators. I learned that adding images somehow changes which details users see as important. Without images, users pick up on more detail; with images, users understand the general idea. This is an interesting finding that I want to explore further. For now, if I want a specific detail to be remembered in an experience with pictures, that detail needs to be put into a picture for users to remember it as important. If I’m only using text, however, details will be remembered over a general theme. Images, moreover, can be distracting or misleading if they do not reflect the details of the text.

What else I learned.

I learned that images might contribute to cognitive overload. I thought too much text would lead to cognitive overload, but 50% of participants who received the test with images quit before the end of the narrative story, compared to only 30% who quit the text-only experience. This surprised me. I expected more engagement with the image-and-text variant, but it seemed to turn out the other way around. Maybe the type of images played a role. One user commented that the AI art did not move the story forward and wanted images more directly related to the text. Maybe human-made artwork would have been more engaging.

Project milestones.

My milestone estimate was revised last term because I needed to fix bugs that were present in my prototypes before user testing. Therefore my milestones differ from those in my proposal. This term I wanted to:

Milestone #1: Finalize Useberry implementation of complete variable prototypes by the end of Week 2

Milestone #2: Organize testing cohorts and testing plan by the end of Week 3

Milestone #3: Recruit cohort 1 by the end of Week 4

Milestone #4: Recruit cohort 2 by end of Week 5

Milestone #5: Collect all user testing results by end of Week 8

Milestone #6: Analyze data by the end of Week 9

What I have achieved.

Milestone #1: Finalize Useberry implementation of complete variable prototypes by the end of Week 2

I needed to troubleshoot a little here because my prototypes were hosted on itch.io and Useberry does not collect click information on embedded content. I tested my implementation of my prototypes on Useberry and discovered that itch.io embeds the prototype on its page. To troubleshoot, I reached out to Useberry staff and explained my problem. The staff emailed me back with direct links to my prototypes that they were able to pull from code on itch.io’s site. I used these direct links to implement my prototypes in Useberry, and my problem was solved.

After thoroughly testing out Useberry, I was ready to recruit by week 3.

Milestone #2: Organize testing cohorts and testing plan by the end of Week 3

I created a Google Form that explained the study and asked interested participants to submit their email addresses.

Milestone #3: Recruit cohort 1 by the end of Week 4

When I received interested participants, I organized a spreadsheet to keep track of our communications so I knew which participant received which variant test. I created tabs for Cohort 1 and Cohort 2 to keep track of when participants took the test. I did this to account for time periods in case results were affected by when users took the test.

Milestone #4: Recruit cohort 2 by end of Week 5

I sent an even number of A and B tests out at first. Once I saw that I had 10 participants, I assigned five A tests and five B tests to Cohort 1. I did the same for Cohort 2. However, I received data from more B-variant testers than A-variant testers, so I started recruiting more participants and only sending out A-variant tests to catch up to the B-variant data I had received. As a result, Cohort 2 has more participants than Cohort 1, and most of Cohort 2 received A tests. In total I sent user tests to thirty-one interested people who signed up to participate, but only twenty completed the user test. I sent out nineteen variant A tests and twelve variant B tests in order to get an even number of results back, so that I had ten variant A results and ten variant B results.
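The balancing logic I followed can be summarized as: keep sending whichever variant currently has fewer completed results until both reach ten. Here is a hedged sketch of that bookkeeping; the function and names are my own for illustration and not taken from my actual spreadsheet.

```python
# Sketch of the balancing logic described above: send whichever variant
# currently has fewer *completed* results until both reach the target of ten.
# Names and values are illustrative, not taken from my actual spreadsheet.
from typing import Optional

TARGET_PER_VARIANT = 10

def next_variant_to_send(completed_a: int, completed_b: int) -> Optional[str]:
    """Pick the variant that is further from its quota, or None when both are met."""
    if completed_a >= TARGET_PER_VARIANT and completed_b >= TARGET_PER_VARIANT:
        return None  # both quotas met, stop recruiting
    return "A" if completed_a <= completed_b else "B"

# Example: 7 completed A results and 10 completed B results -> send more A tests.
print(next_variant_to_send(7, 10))  # prints "A"
```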

Milestone #5: Collect all user testing results by end of Week 8

I collected data in a spreadsheet by playing the recordings of users’ screens and observing what choices they made as they played the interactive story.

This organized the data so that I could analyze it by comparing the story paths that users took. I made one tab for A-variant tests and another tab for B-variant tests.

Milestone #6: Analyze data by the end of Week 9

Once all data was collected, I was able to analyze it in another tab that documented findings for each variant.

Some data was qualitative and some quantitative. The quantitative data can be seen above. Comparing numbers between variants was a simple process that showed there was a divergence between variants when following story paths. Qualitative data came from essay feedback in a post-test survey. To analyze this qualitative data, I needed to code responses by category. I did this by color-coding like responses. Then I compared these responses between variant tests.
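As a rough sketch of that coding step, once each response has been hand-labeled with a category, the tallies per variant can be compared directly; the categories and rows below are placeholders, not my actual survey data.

```python
# Rough sketch of tallying hand-coded survey responses per variant.
# Categories and example rows are placeholders, not my actual survey data.
from collections import defaultdict

coded_responses = [
    # (variant, category) -- the category is assigned by hand while reading
    ("A", "formatting"),
    ("A", "more_instructions"),
    ("A", "wants_images"),
    ("B", "formatting"),
    ("B", "art_not_advancing_story"),
    # ... remaining responses
]

def tally_by_variant(rows):
    """Count how often each category appears within each variant."""
    counts = defaultdict(lambda: defaultdict(int))
    for variant, category in rows:
        counts[variant][category] += 1
    return counts

for variant, categories in sorted(tally_by_variant(coded_responses).items()):
    print(variant, dict(categories))
```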

What else I accomplished with my capstone work.

I taught myself how to create an interactive narrative prototype with a scripting language called ink, from Inkle. The revised list of milestones takes into account the importance of making sure the tech works. I was able to complete the prototypes by the end of Week 4 last term, and they seemed to be working great until I went through each possible story path. Not all story paths resulted in success. Some combinations of story path choices ended in errors that caused the game to crash, or caused the story not to give the user a choice where they should have had one. If I played through by only testing linearly, the prototypes worked great! But once I played as though I were a user, I found several bugs that would have affected the test results.
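The kind of bug I describe above only shows up when every branch is exercised. Below is a generic sketch of that idea: walk every combination of choices in a small branching-story structure and flag dead ends. It does not use the actual ink runtime; the story graph and function are hypothetical stand-ins for the real prototype.

```python
# Generic sketch of exhaustive path testing: walk every combination of choices
# in a small branching-story graph and flag nodes that have no outgoing choices
# but are not marked as endings. This is not the ink runtime -- just the idea
# of testing every path instead of one linear playthrough.
story = {
    # node: (is_ending, {choice_label: next_node})
    "start":     (False, {"Fool": "fool_1", "Empress": "empress_1"}),
    "fool_1":    (False, {"continue": "act1_end"}),
    "empress_1": (False, {}),   # bug: no choices offered and not an ending
    "act1_end":  (True, {}),
}

def check_all_paths(node="start", path=()):
    """Depth-first walk that prints every path and marks unexpected dead ends."""
    is_ending, choices = story[node]
    if not choices:
        status = "ok" if is_ending else "BUG: dead end"
        print(" -> ".join(path + (node,)), "|", status)
        return
    for label, next_node in choices.items():
        check_all_paths(next_node, path + (node,))

check_all_paths()
```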

With this in mind, I added more time to set up user tracking to make sure it would work. Testing before the A/B test was what made my milestones change, and I learned that I needed to spend more time developing the user tests or my results would be skewed. So, I devoted the entire first term to perfecting the prototypes and finding the right method of tracking user behavior.

Other milestones I would have liked to achieve, if I had more time and more resources.

Milestone #1: Come up with follow up tests based on results of analyses, end of Week 11

Milestone #2: Choose a test to build as a follow up, end of Week 13

Milestone #3: Find the right staff to complete the project, end of Week 15

Milestone #4: Schedule staff to recruit participants, end of Week 17

Milestone #5: Lead staff in performing user tests, end of Week 20

Milestone #6: Lead staff in analysis of collected data, end of Week 25

Success metrics.

I started testing this term, so I can finally judge my progress against my success metrics. These success metrics are listed below.

A.)  Success metric for learning objective #1: A/B test an interactive story with a user group. If story paths show a pattern based on the two variables tested, this will indicate that UI does play a role in story path choices. I will use quantitative data to determine the number of clicks and where users clicked.

B.)  Success metric for learning objective #2: Survey user groups on their experiences to collect qualitative information about how they felt when playing the interactive narrative. Ask users what was most comfortable in either a survey or an interview.

C.)  Success metric for learning objective #3: Compare model situation mapping from the developers with collected data from observations of user groups playing the experience. Results will show how users diverged from the expected path.

The extent to which I have achieved each one.

I have successfully completed each metric. With Useberry, I collected quantitative data on user clicks and compared variant A data to variant B data. I was able to see clear patterns and concluded that there is a difference in story path decisions between an interactive narrative with pictures and text versus one with text only. If I had more time and resources, I would take this a step further and research why there is a difference in story path decisions.

My second success metric was based on collecting qualitative data. I was able to collect qualitative data through Useberry as well, through a post-test survey that included open-ended essay questions, Likert scales, and multiple-choice questions. To analyze this data, I color-coded answers to organize similar responses together, then compared variants A and B. Some of the big takeaways were that variant A (text only) users remembered details of the story and described it with specific details, like images being illegal in this cyberpunk world. However, variant B (images and text) users spoke in general summaries, and no one mentioned the detail of images being illegal in the cyberpunk world. From the qualitative data, it seems that adding images signals to users which parts of the story are most important. Without images, users pick out the important details on their own without distraction. The divergence in story paths might have been due to what users understood to be important, and adding pictures influenced their perception. With this insight, I would perform more tests to confirm or disprove this new assertion if I were to continue this study.

My third success metric was to compare user story paths to the ideal path that stakeholders would want users to take. There was a little variation in how this success metric was achieved because stakeholders did not have an expected story path; however, stakeholders wanted participants to play until they reached the end of act 1, and they wanted users to understand the story. What is interesting is that 50% of variant B (text and images) users did not play to the end of act 1 as stakeholders wanted, while 30% of variant A (text only) users did not play to the end of act 1. By this quantitative measure, text only aligns best with the stakeholders’ goals. Qualitative data also shows that text-only users have more detailed ideas about what will happen next in the story, but about the same number of users had some idea of what was going to happen next in both variant tests. Quantitatively, neither group was confident about understanding the plot. Comparing quantitative survey questions, we can see how users responded to both variants. Users who received the text-only test were more likely to say that they were looking forward to seeing what happens in act 2, whereas the majority of people who received the text-and-images test did not care about what would happen in act 2. This tracks with the observation that 50% of those who received images and text quit the narrative experience early. On the other hand, users who received images thought the narrative was easier to navigate than those who received text only, and there were many comments from text-only users that they would have improved the experience by adding pictures, better information architecture, and better formatting. I would say that further testing needs to be conducted to understand why users felt this way, and why text only was better received narratively even though those users wished there were images.

Test A (text only)

Test B (images and text)

How my own ideas of success meet the success metrics for my project.

My success metrics were well researched and well thought out, and I still believe they determine how successful my learning objectives have been. Setting these metrics allowed me to validate that I achieved the goals I set out to accomplish.

What I learned about identifying realistic success metrics.

I learned that when success metrics are set in advance, then after the user test is complete and I see room for further testing, I can identify how I was successful by comparing what I completed against my metrics. Without these metrics, I might feel the need to continue the study until I got a definitive answer, and I might feel as though I had failed if I stopped short of finding that answer. Setting these metrics allows me to see that I reached the goals I set, and that any further testing would require another project on a larger scale. I would set learning objectives and success metrics for the next test in further experimentation as well.

Impact assessment.

I think it’s a big discovery that adding pictures could change the experience of an audience. In a way, this is already known, because people who read the book a movie is based on report a different experience of the story. However, I’m not sure it has been studied in this way. My study shows where audiences’ experiences diverge: with the same text, just the addition of pictures led users to choose different story pathways and to end the experience remembering different details. The pattern of which details are remembered and which paths are taken is clearly drawn based on which variant test the user received.

Impact on my organization.

My organization structures stories for interactive narratives, literature, theater, film, and video games. Further studies building on the findings from my capstone project could explain why there is a difference in which details are understood between variants, and what this means for interactive narratives and for more traditional narrative experiences like movies or novels. Normally you cannot track a reader’s experience as they read a book, because the narrative is set without any story path choices, so you cannot see how a reader is digesting the story. But with interactive narrative studies, we can see patterns of experience. This information might come in handy when structuring stories.

Community that the organization serves.

Interactive narratives are growing in popularity and will become more prevalent in metaverses and in other places where AI is used to interact with users. It is valuable to know that users are having different experiences that affect the story, especially when creating experiences for blind or deaf audiences. We know that these experiences will differ, but I do not think we yet understand that there might be a pattern in how the story is understood and which details are remembered that affects the experience.

Concluding thoughts.

Over this term I was able to learn how to use a UX research tool, and collect and analyze quantitative and qualitative data. I observed recordings of user tests and achieved my learning objectives to satisfy my success metrics. This term has been a success.  

Project overview.

Overall, I learned that the internal testing process is imperative to creating a user test such as this one. In term one I needed to fix bugs that would have affected data in user testing. This term I was able to start recruiting early in the term. I learned that recruitment can be challenging when you don’t have a budget for incentivizing participation. I also observed that more users responded if they received test B, so I had to recruit a greater number of people for test A so that I could have an even turnout.

Learning objectives.

My learning objectives slightly changed throughout the two terms because I needed to scale down my project in term 1. I learned that reevaluating your objectives is fine to do as you learn more about what is required for your project.  

Project milestones.

My project milestones also changed in the first term. I learned that you need to be flexible as you plan your project because unexpected issues arise.

Success metrics.

I understand why success metrics are important because I was able to judge the success of my project based on the metrics I set, even though I did not find a definitive answer during this experiment. There is a pattern suggesting different experiences between variants, and that more details are remembered with text only. I was also able to see that those who received images were less likely to want to know what would happen in act 2, and this tracked with the observed behavior that half of those who received images quit before act 1 was over. I would need to do further testing to find out why these findings occurred, but my success metrics were met.

Challenges and What I Hope to Learn Going Forward.

My biggest challenges have been fixing bugs and recruitment. I think I will be reaching out to developers to see if an experienced game dev can fix the bugs going forward. I learned that reaching out for technical help is imperative. Going forward, I hope to apply what I have learned about testing interactive narratives to my current job. We are considering user testing prototypes for our interactive narratives at work, and I would like to be a part of it now that I have done this as a capstone.
