Brandables
A B2B SaaS platform specializing in brand strategy consulting for professional service providers and brand management solutions for marketing executives.
Tools - Figma, Optimal Workshop
Role - UX Researcher
Timeline - 5 weeks
project overview
At the outset of this project, Brandables consisted of a collection of high-fidelity screens and an evolving MVP with an abundance of features. Our client had completed one expert interview and developed user stories for two user personas: a professional service provider (“employee”) and the employee’s marketing manager (“marketer”). Our client asked us to develop and prioritize features for the MVP, establish an intuitive information architecture, and craft comprehensive wireflows for both platform experiences.
the big questions
What features will set Brandables apart from competitors?
Does the platform’s information architecture make sense to users?
Is there a desire and need for this product?
my role
I conducted research to
guide feature prioritization
inform the development of the platform’s information architecture
further establish product-market fit.
feature prioritization
question: what features will set Brandables apart from competitors?
method: competitor SWOT analysis
I chose competitor analysis as a method to easily identify industry best practices, areas for differentiation, and market gaps within Brandables’ market space. As Brandables is a personal brand management platform, I decided the most appropriate spaces to investigate were social media management and employee advocacy platforms. I analyzed four market competitors.
findings
The market is saturated with social media management platforms.
Existing platforms intend to make it easier for employees to share company-approved content to their social networks.
takeaways
Brandables will set itself apart by focusing on employee-generated content and branding activities, not just company-created or third-party content.
Brandables is different in that it is instructive in its suggested content and branding activities.
impact: the competitor analysis takeaways helped to shape ongoing feature prioritization conversations with our client.
validating information architecture (employee experience)
question: does the platform’s existing information architecture make sense to users?
method: tree test
I chose a tree test to retroactively validate the employee experience information architecture. As the employee experience had already been wireframed, I completed a tree test with the existing menu and category labels to evaluate the effectiveness of the existing structure.
Seven participants were recruited using Optimal Workshop participant recruiting services. Participants completed six tasks. (see note 1)
Due to the limited timeframe, only seven participants were included in the analysis, all of them professional service providers. Twenty-three participants were screened out, one abandoned the study, and three were excluded as junk data. Given the small sample size, all results and insights should be treated as preliminary. Even so, iterative testing and openness to qualitative insights can lead to valuable improvements in the design.
Participants completed most tasks with high success and directness rates. One task, however, combined a very low success rate (14%) with a high directness rate (100%), meaning users confidently navigated to the wrong place. This low-success/high-directness pattern indicated that we needed to examine whether the menu item in question effectively communicated the purpose and content of its section.
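To illustrate how these two metrics are derived (using hypothetical attempt data, not the actual study results), a minimal sketch: success is the share of participants who ended at the correct node, and directness is the share who navigated without backtracking.

```python
# Hypothetical tree-test attempts for one task. Each attempt records
# whether the participant ended at the correct node ("correct") and
# whether they navigated there without backtracking ("direct").
# Illustrative data only -- not the study's actual results.

def task_metrics(attempts):
    """Return (success_rate, directness_rate) for a list of attempts."""
    n = len(attempts)
    success = sum(a["correct"] for a in attempts) / n
    direct = sum(a["direct"] for a in attempts) / n
    return success, direct

# Seven participants; six land confidently (directly) on the wrong node.
attempts = [
    {"correct": False, "direct": True},
    {"correct": False, "direct": True},
    {"correct": False, "direct": True},
    {"correct": True,  "direct": True},
    {"correct": False, "direct": True},
    {"correct": False, "direct": True},
    {"correct": False, "direct": True},
]

success, direct = task_metrics(attempts)
print(f"success: {success:.0%}, directness: {direct:.0%}")
# 1/7 success with 100% directness is the telltale pattern of a
# confidently misleading label rather than a confusing hierarchy.
```

The combination matters more than either number alone: low success with low directness suggests general confusion, while low success with high directness points at a specific mislabeled item.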
impact: tree testing sparked a discussion about the importance of the platform’s sections, labels, and copy, leading to a re-naming of two menu items to better reflect their purpose and function.
exploring information architecture options (marketer experience)
question: how do users intuitively organize the platform’s information architecture?
method: card sort
I chose a card sort to explore possible options for the marketer experience information architecture. The marketer experience had not yet been wireframed, and there was still time to complete exploratory research and learn how target users would intuitively structure the platform.
Seven participants sorted 18 feature and content cards into groups that would ultimately represent the left-hand navigation categories. (See Note 2)
The ideal sample size for a card sort is around 30 participants, but due to the limited timeframe and challenging recruitment, only seven participants were included in the analysis. Participants were recruited using Optimal Workshop participant recruiting services; of the 17 people who entered the study, four were screened out, four abandoned the study, and two were excluded as junk data. Given the small sample size, all results and insights should be treated as preliminary.
On average, users sorted features/content into four groups (representing four left-hand navigation categories), visualized here in 3D cluster views of the data. This deviated from previous iterations of the IA. (See Note 3)
3D cluster
The tool’s suggested number of groups was also four, matching the average number of groups participants created. Notably, participants split a set of features/content that previous IA iterations had lumped into a single category into two distinct groups: the red cluster and the blue cluster. This split appeared at a medium-high agreement level (71%) in the dendrogram.
dendrogram
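The agreement percentages behind a card-sort dendrogram can be sketched as a pairwise co-occurrence measure: for each pair of cards, the share of participants who placed them in the same group. A minimal illustration with hypothetical card names and sorts (not the study’s actual 18 cards or data):

```python
from itertools import combinations

# Hypothetical sorts from seven participants: each dict maps a card
# to the group that participant placed it in. Card and group names
# are illustrative only.
sorts = [
    {"Content calendar": "plan", "Post templates": "plan",   "Analytics": "measure"},
    {"Content calendar": "plan", "Post templates": "plan",   "Analytics": "measure"},
    {"Content calendar": "plan", "Post templates": "plan",   "Analytics": "plan"},
    {"Content calendar": "plan", "Post templates": "create", "Analytics": "measure"},
    {"Content calendar": "plan", "Post templates": "plan",   "Analytics": "measure"},
    {"Content calendar": "plan", "Post templates": "create", "Analytics": "measure"},
    {"Content calendar": "plan", "Post templates": "plan",   "Analytics": "measure"},
]

def agreement(card_a, card_b, sorts):
    """Fraction of participants who placed the two cards in the same group."""
    return sum(s[card_a] == s[card_b] for s in sorts) / len(sorts)

# Print agreement for every card pair; 5 of 7 participants grouped
# "Content calendar" with "Post templates", i.e. ~71% agreement.
for a, b in combinations(sorts[0], 2):
    print(f"{a} / {b}: {agreement(a, b, sorts):.0%}")
```

A dendrogram is then built by iteratively merging the cards (and clusters) with the highest pairwise agreement, which is why a split that survives down to a 71% agreement level reflects a fairly consistent mental model across participants.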
impact: card sort results led to splitting one left-hand navigation category into two separate categories to better conform to users’ expectations.
establishing product-market fit
question: Is there a desire and need for this product?
method: concept testing
Before finalizing the MVP and pitching the product to investors, it was important to continue gauging user interest and identifying pain points. I ran concept testing sessions to establish product-market fit, which allowed us to gather valuable feedback early in the product development process, make informed decisions, and increase the likelihood of success in the market.
Five target users participated in concept testing sessions with a rudimentary clickable MVP prototype. (See Note 4)
In the early stages of the project, the target users included three persona groups (employee, marketer, and VP/entrepreneur). However, as the project progressed, the VP/entrepreneur persona was excluded from the MVP.
Thematic analysis of testing sessions yielded several valuable insights. For example, the functionality of one of the product’s key features was frequently misidentified as a chatbot due to its visual representation. In-session discussions suggested that changing the visual representation of the feature would help to clarify its intended functionality and better meet user expectations.
initial representation
second iteration
impact: concept testing results led to the decision to change the visual representation of the platform’s key feature.
next steps
question: what next?
method: research summary and research roadmap
As my time with the client came to an end, I put together a research insight summary of all the research I had already completed and created a research roadmap of my suggestions for research next steps.
Recommendations for research to be included prior to product launch included:
Further Concept Testing
The marketer experience would benefit from concept testing with target users to determine if the design concept resonates with their needs and expectations.
Usability Testing
Once a high-fidelity prototype is created, the employee and marketer experiences would significantly benefit from usability testing to identify and address pain points prior to product launch.
Recommendations for research to be included after product launch included:
Diary Studies
Both the employee and marketer experiences would benefit from diary studies to gather insights into users’ experiences while using the product (identifying pain points in their daily workflow) and to provide longitudinal data on users’ behaviors (are they still using the product? why or why not?). These insights will inform future design iterations.
what I learned
How to consult with and create a plan for the client to ensure user needs and business goals are met long term.
How to identify what makes this product unique, determine market gaps, and prioritize features to create a realistic MVP.
How to mitigate risks associated with assumptions by validating or challenging existing design decisions and conducting research to inform future ones.