Designers recognize the importance of user research in delivering excellent user experiences. Yet even when they prioritize research, cognitive biases can skew outcomes and put digital products at risk. Cognitive biases influence how individuals interpret information and make judgments. While all humans are susceptible to them, many are unaware of their consequences. Research indicates that individuals have a bias blind spot: they believe they are less biased than their colleagues, even when that is not true.
With seven years of experience conducting surveys and gathering user feedback, I’ve seen how cognitive bias can skew results and design decisions. By being aware of their own cognitive biases and applying proven techniques to remove bias from their work, designers can conduct research that accurately reflects user needs. That research informs solutions that genuinely improve a product’s design and better serve customers. This article covers five types of cognitive bias in user research and how designers can mitigate them to build more effective products.
Confirmation Bias: Choosing Facts That Support a Preexisting Belief
Confirmation bias is the tendency to seek information that supports an existing belief or assumption while ignoring facts that contradict it. In user research, confirmation bias can lead designers to emphasize feedback that confirms their own beliefs over constructive feedback that challenges them, which inevitably produces design solutions that do not effectively address users’ needs.
I saw this bias in action when a design team I was working with solicited user feedback on a software development company’s website. Several participants said they wanted a shorter onboarding process. That startled me because I had assumed the existing flow was intuitive. Instead of addressing that input, I prioritized suggestions that did not focus on onboarding, such as button placement or distracting colors.
When our team analyzed the feedback with an affinity map, it revealed a significant number of complaints about onboarding. That helped me recognize my own bias.
To address the onboarding issue, we reduced the number of questions asked on each screen and shifted some to a later stage. User tests indicated that the revised process felt shorter and smoother. Affinity mapping helped us visualize all the data points and avoid fixating on one component of customer input.
The Six Thinking Hats method also helps counter confirmation bias. Developed by the de Bono Group, it assigns team members to one of six personas: rational, positive, cautious, emotional, creative, or managerial. Each role is symbolized by a unique color of hat. Assigning the green “creative” hat to a team member during a brainstorming session encourages them to share innovative solutions and viewpoints, while the member wearing the blue “managerial” hat is responsible for following and enforcing the methodology’s principles. The Six Thinking Hats method creates a system of checks and balances, allowing teams to spot one another’s blind spots and effectively combat cognitive biases.
Anchoring Effect: Provided Options Can Skew Feedback
The anchoring effect refers to how the first piece of information about a situation influences decision-making. Anchoring has a significant impact on many daily decisions. For example, seeing that an item you want to buy has been reduced can make the lower price appear like a good deal—even if it is more than you intended to spend in the first place.
Anchoring can impact user feedback, either intentionally or accidentally. Consider a multiple-choice question that asks the user to estimate how long it will take to accomplish a task; the options supplied can constrain the user’s thinking and lead them to choose a lower or greater estimate than they would otherwise provide. When questionnaires inquire about amounts, measurements, or other numerical values, the anchoring effect can be especially powerful.
Anchoring’s negative effects can be mitigated through word choice and how options are presented. If you’re asking people about a certain measurement, for example, let them make their own estimates instead of giving them options. If you must present options, consider using numerical ranges.
Because anchoring can also affect qualitative feedback, avoid leading questions that set the tone for subsequent responses. Instead of asking, “How easy is this feature to use?” ask the user to describe their experience with the feature.
Order Effect: The Sequence of Options Can Influence Choices
The order of options in a survey can influence responses; this is known as the order effect. People tend to select the first or last choice on a list because it is the first thing they notice or the last thing they remember, while options in the middle may be overlooked. In a survey, the order effect can skew which answers or options participants choose.
The sequence of the questions can also influence the outcome. The order of questions may provide signals about the research purpose, influencing the user’s choices, or participants may become fatigued and lose focus as they progress through the survey. These issues can result in feedback that does not accurately reflect the user experience.
Suppose your team is evaluating the usability of a mobile app. When creating the questionnaire, your team arranges the questions according to how you expect the user to explore the app. It asks about the homepage first, then about the subpages in the navigation menu, working from top to bottom. However, answering questions in this order may not yield valuable feedback because it directs the user and does not reflect how they would browse the app on their own.
To offset the order effect, randomize the order of survey questions, reducing the probability that earlier items would influence responses to later ones. To avoid skewing findings, randomly organize the response options in multiple-choice questions.
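The randomization described above is straightforward to implement. Here is a minimal Python sketch; the survey questions and answer options are hypothetical examples, not from the article:

```python
import random

# Hypothetical survey: each question mapped to its multiple-choice options.
survey = {
    "How often do you use the navigation menu?": ["Daily", "Weekly", "Monthly", "Rarely"],
    "How clear were the onboarding instructions?": ["Very clear", "Somewhat clear", "Unclear"],
    "Which feature do you use most?": ["Search", "Favorites", "Notifications"],
}

def randomized_survey(survey, seed=None):
    """Return questions in random order, each with its options shuffled."""
    rng = random.Random(seed)
    questions = list(survey.items())
    rng.shuffle(questions)  # randomize question order to counter the order effect
    return [
        # rng.sample returns a new shuffled copy, leaving the original intact
        (question, rng.sample(options, k=len(options)))
        for question, options in questions
    ]

for question, options in randomized_survey(survey):
    print(question, options)
```

Shuffling per respondent (e.g., seeding with a participant ID) ensures that any residual position bias averages out across the sample rather than systematically favoring particular options.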
Peak-End Rule: Remembering Specific Moments of an Experience More Than Others
Users evaluate an experience based largely on how they felt at its most intense point (the peak) and at its end, rather than on the experience as a whole. The peak-end rule may color research participants’ feedback on a product or service. For example, if a user has a frustrating encounter near the end of their journey, they may rate the entire experience negatively, even if the rest of the process went smoothly.
Consider the following scenario: you’re updating a mobile banking app that requires customers to provide data to onboard. The initial feedback on the new design is bad, and you’re concerned that you’ll have to start from scratch. During user interviews, participants reported a problem with a screen that automatically refreshes after a minute of inactivity. Users typically require more time to acquire the necessary information for onboarding, and they are understandably disappointed when they are unable to proceed, resulting in an unfavorable perception of the app. By asking the proper questions, you may discover that the rest of their interactions with the app are seamless—allowing you to focus on addressing that single source of friction.
To gather thorough feedback in questionnaires or surveys, ask about each step of the user journey so that the user gives equal attention to all parts. This approach also helps identify which step is most difficult for users. You can also organize survey material into sections: one section might focus on tutorial-related questions, while the next asks about an onboarding page. Grouping lets the user consider each feature in turn. To reduce the potential for the order effect, randomize the questions within each section.
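The sectioning-plus-randomization approach above can be sketched in a few lines of Python. The section names and questions below are hypothetical placeholders:

```python
import random

# Hypothetical survey sections: each groups questions about one part of the journey.
sections = {
    "Tutorial": [
        "Was the tutorial easy to follow?",
        "Did the tutorial cover what you needed?",
    ],
    "Onboarding": [
        "How long did sign-up take?",
        "Were any onboarding steps confusing?",
    ],
}

def sectioned_survey(sections, seed=None):
    """Keep the section order fixed, but shuffle questions within each section."""
    rng = random.Random(seed)
    return {
        name: rng.sample(questions, k=len(questions))  # shuffled copy per section
        for name, questions in sections.items()
    }
```

Keeping section order fixed preserves the journey's logical grouping for the respondent, while within-section shuffling counters the order effect among closely related questions.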
Observer-Expectancy Effect: The Researcher’s Behavior Influences User Responses
The observer-expectancy effect refers to how a researcher’s actions influence a user’s responses. This bias produces inaccurate results that align more with the researcher’s preconceived expectations than with the user’s actual thoughts or feelings.
Mariia Borysova, a Toptal designer, identified and corrected this bias while mentoring junior designers at a healthtech business. The less experienced designers would ask users, “Does our product provide better health benefits compared to other products you have tried?” and “How seamlessly does our product integrate into your existing healthcare routines?” These questions subtly steered participants toward answers consistent with the researchers’ expectations about the product. Borysova helped the researchers reframe the questions to be more neutral and open-ended. For example, they changed the questions to read, “What are the health outcomes associated with our product compared to other programs you have tried?” and “Can you share your experiences integrating our product into your existing healthcare routines?” Compared with these neutral alternatives, the researchers’ initial questions primed participants to see the product in a specific way, which can yield inaccurate or unreliable data.
To avoid steering users’ responses, frame your questions carefully. Use neutral language and check questions for embedded assumptions; if you find any, reframe them to be more objective and open-ended. The observer-expectancy effect can also arise from the instructions participants receive at the start of a survey, interview, or user test, so write those instructions with the same attention to detail.
Protect User Research from Your Biases
Cognitive biases affect everyone. Because they are an inherent part of how our brains process information, they are difficult to prevent entirely, but designers can take steps to reduce bias in their research. It’s worth mentioning that cognitive shortcuts aren’t inherently harmful; still, by being aware of them and guarding against them, researchers can gather more trustworthy data during user research. The tactics presented here can help designers collect accurate, actionable user feedback, which ultimately leads to better products and loyal, returning customers.
Learn more: https://www.toptal.com/designers/user-research/cognitive-bias-in-user-research