Search Experience Optimization: User & Competitor Insights
Making search smarter, simpler, and seamless
OVERVIEW
At LigoLab, a provider of advanced laboratory and diagnostic software solutions, I worked on improving our software’s search functionality. To uncover pain points and define improvement opportunities, I conducted user and competitor research.
Through interviews, surveys, and a competitive audit of industry-leading solutions, I identified key usability gaps and developed a shortlist of user-driven recommendations, which I presented to the team for potential implementation.
Methodologies
Contextual interviews
Task‑focused surveys
Competitive analysis
Empathy mapping & affinity clustering
Tools
Figma
Miro
Microsoft Office
Timeline
2 months


BACKGROUND
How It All Started
Lab staff were spending valuable minutes poking around screens for the right record instead of moving cases forward.
That pain point kicked off an SEO‑style research sprint focused on our own search: mapping the exact terms clinicians use, studying how other lab platforms surface results, and prototyping smarter filters and ranking logic.
Problem
LigoLab users face inefficiencies and confusion when searching for information due to a lack of intelligent matching, intuitive filtering, and search assistance features.
Goal
Redesign the search experience in LigoLab to support smarter, faster, and more user-friendly search workflows across user roles.
Steps Conducted
Outlined Research Objectives
Audited the Existing Search Experience
Ran a Competitive Analysis
Collected User Insights
Synthesized Findings
Shared Actionable Recommendations
To capture both depth and breadth, I blended qualitative and quantitative research. This mixed‑methods approach let me ground design decisions in real user behavior, validate them with numbers, and benchmark them against industry standards, all while keeping the feedback loop fast and focused. Now that we’ve covered the context, let’s get into the work. Shall we?
1. FOUNDATIONAL RESEARCH
1.1 Outlining the Objectives
I kicked off the research by working with stakeholders to define clear goals, mainly around how users were using (or struggling with) the search feature.
I also set up participant criteria to make sure we heard from a mix of users with different roles and experience levels. This helped keep the research focused, relevant, and grounded in real-world use.
Participant Screening Criteria
Active LigoLab users for ≥ 3 months
Mix of roles: clinicians, lab managers, QA
Rely on the search function for daily tasks (like looking up patient cases, specimen IDs, reports)
Mix of beginner and advanced users
Users with varying typing styles, terminology familiarity, and comfort with filtering options
Research Goals
Understand how users currently interact with LigoLab’s search bar and what content they search for most often.
Identify user pain points around keyword matching, filtering, speed, and results accuracy.
Define the expectations users have for a high-performing search feature, including language, structure, and personalization.


1.2 Conducting Competitive Analysis
To uncover opportunities for improving LigoLab’s search functionality, I conducted user and competitor research focused on usability, categorization, and information display.
I analyzed how leading platforms structure their search experiences, examining filters, result clarity, and relevance.




1.3 Running User Interviews
I interviewed 12 internal LigoLab users, including lab staff, managers, and support team members, to learn how they currently use search and where they run into issues.
I also ran several surveys to gather broader feedback on search habits and expectations. The goal was to understand what users actually need from the search experience and where the current system falls short.


2. SYNTHESIZING INSIGHTS
2.1 Competitive Audit - Organizing Findings
I organized the research findings into four distinct solution categories, detailing the pros and cons of each. I evaluated how each category aligned with our platform model, highlighting its strengths and potential weaknesses, and presented these insights to my team for discussion.
2.2 Synthesizing User Insights
After collecting feedback through surveys and interviews, I went through the data and looked for patterns using affinity mapping. I used FigJam to organize the themes, which helped me spot common frustrations and needs.
This step really helped me understand what was most important to users, so I could prioritize the changes that would make the biggest impact. It was key to turning all that feedback into something actionable and making sure the changes we proposed actually addressed the real pain points.
2.3 Empathy Mapping
I synthesized the interview and survey findings into four core categories: Says, Thinks, Does, and Feels. This empathy map helped me uncover key patterns in how users interact with the search function, what frustrates them, and what they expect from an improved system. These insights provided a solid foundation for identifying pain points and shaping the direction for a more intuitive and efficient search experience.
Says
“I have to type the exact name or I get no results.”
“Filters are hard to find and not very helpful.”
“I wish it could guess what I meant.”
“It doesn’t learn what I usually search for.”
“Sometimes I just give up and ask someone else.”
Thinks
“There should be a smarter way to find this.”
“I don’t know the exact format they want.”
“I’m wasting time on something that should be fast.”
“Why doesn’t it show recent or saved searches?”
Does
Repeats searches with different keywords.
Avoids using filters due to confusion.
Relies on colleagues when stuck.
Types long, overly specific search queries just to get a match.
Copies and pastes IDs to avoid typos.
Feels
Frustrated by lack of flexibility and responsiveness.
Anxious when searches return no results.
Relieved when searches work as expected.
Annoyed by repetitive filtering for frequent tasks.
Hopeful for improvements that align with modern search behavior.


3. DELIVERABLES
The project deliverables include a report summarizing key insights from user interviews and surveys, an empathy map to visualize user behaviors, and personas that represent our target audience.
3.1 User Pain Points
Through interviews and surveys, I identified common frustrations with the search functionality across different roles. Key pain points included issues with search accuracy, relevancy, and ease of use.
Exact-Match Dependence
Users often fail to find results unless they type precise terms, creating inefficiencies and repeated search attempts.
Unintuitive Filtering Options
Filters are hidden or hard to use, resulting in missed opportunities to narrow down search results.
Lack of Predictive Assistance
A lack of autocomplete, spelling correction, or "Did you mean?" suggestions frustrates users who make minor typos (see the sketch after this list).
No Recent or Frequent Search Recall
Users must start fresh each time, even for repeated searches.
Unclear Search Syntax or Expectations
Users are unsure how to format queries, such as dashes in IDs, partial names, or lab codes.
Slow Result Surfacing
Search response time can feel sluggish, especially for larger databases.
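To make the predictive-assistance gap concrete, here is a minimal sketch of the typo tolerance and "Did you mean?" fallback users kept describing. It uses Python's standard difflib for fuzzy matching; the record list and ID formats are invented for illustration, and this is a concept sketch, not LigoLab's implementation.

```python
# Concept sketch: typo-tolerant lookup with a "Did you mean?" fallback.
# RECORDS and the ID formats are hypothetical; only the behavior mirrors
# the pain points above (exact-match dependence, no spelling help).
import difflib

RECORDS = ["SPEC-2024-0113", "SPEC-2024-0131", "CASE-9918", "CASE-9981"]

def search(query: str, records: list[str] = RECORDS) -> dict:
    """Return substring hits, or close matches as suggestions when none exist."""
    q = query.strip().upper()
    hits = [r for r in records if q in r]  # substring match, not exact-only
    if hits:
        return {"results": hits, "suggestions": []}
    # No hits: offer near misses instead of a dead-end empty page.
    close = difflib.get_close_matches(q, records, n=3, cutoff=0.6)
    return {"results": [], "suggestions": close}

print(search("SPEC-2024-0113"))  # exact ID -> direct hit
print(search("SPEC-2024-113"))   # dropped digit -> "Did you mean?" suggestions
```

Even this tiny fallback touches three of the pain points above: it loosens exact-match dependence, tolerates minor typos, and turns a "no results" dead end into a recoverable step.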
3.2 User Personas
Based on the research, I created user personas to represent the diverse needs and pain points of the target users. These personas helped guide design decisions, build empathy, and stress-test solutions to ensure they addressed real user challenges.
Priyanka
Age: 39
Role: Lab Operations Manager
Experience: 5 years
Frustrations
Must reapply filters every time
Frustrated when search doesn't understand near-miss terms
Goals
Quickly access staff assignments and pending cases
Review reports and track flagged issues
"I don’t have time to guess the right keyword. The system should meet me halfway."
Priyanka manages staff and cases across multiple lab sites. She’s constantly jumping between reports and reviews, so reapplying filters or guessing the right search term slows her down. A smarter, more adaptive search would save her time and stress.
Dylan
Age: 46
Role: QA Analyst
Experience: 8 years
Frustrations
Can’t search by task status or flags
No shortcut to frequently used case types
Goals
Spot inconsistencies in data fast
Track incomplete or delayed workflows
"Why can’t I just search ‘incomplete’ and get all pending cases?"
Dylan’s day revolves around spotting errors and delays before they escalate. Without the ability to search by task status or quickly access case types, he wastes valuable time. A more flexible, status-aware search would boost his efficiency (a minimal sketch of this idea follows the personas).
Igor
Age: 26
Role: Entry-Level Technician
Experience: 2 months
Frustrations
Often mistypes search terms
Unsure how to format queries
Goals
Get comfortable using LigoLab
Find patient records and specimen IDs with minimal errors
"I feel like I’m doing it wrong. Even a hint or suggestion would help a lot."
Igor is still getting the hang of the system and often second-guesses his search inputs. When search doesn’t guide him or tolerate small mistakes, it leaves him feeling stuck. A more forgiving, hint-driven search would help him gain confidence.
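Dylan's ask, searching by workflow state instead of exact IDs, comes down to a small amount of query parsing. The sketch below shows one way a single search box could accept status words alongside free text; the Case fields and status vocabulary here are hypothetical, not LigoLab's schema.

```python
# Concept sketch: one search box that understands status words ("incomplete",
# "flagged") alongside free-text terms. Fields and vocabulary are hypothetical.
from dataclasses import dataclass

STATUS_WORDS = {"incomplete", "pending", "complete", "flagged"}

@dataclass
class Case:
    case_id: str
    status: str    # e.g. "incomplete"
    flagged: bool

def parse_query(query: str) -> tuple[set[str], list[str]]:
    """Split a raw query into recognized status words and leftover text terms."""
    statuses, terms = set(), []
    for token in query.lower().split():
        if token in STATUS_WORDS:
            statuses.add(token)
        else:
            terms.append(token)
    return statuses, terms

def search(query: str, cases: list[Case]) -> list[Case]:
    statuses, terms = parse_query(query)

    def matches(c: Case) -> bool:
        status_ok = (not statuses or c.status in statuses
                     or ("flagged" in statuses and c.flagged))
        text_ok = all(t in c.case_id.lower() for t in terms)
        return status_ok and text_ok

    return [c for c in cases if matches(c)]

cases = [Case("CASE-9918", "incomplete", False), Case("CASE-9981", "complete", True)]
print([c.case_id for c in search("incomplete", cases)])  # -> ['CASE-9918']
print([c.case_id for c in search("flagged 99", cases)])  # -> ['CASE-9981']
```

The same parsing hook would also be a natural place to plug in the recent-search recall and saved filters that Priyanka's workflow calls for.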
We are currently in the high-fidelity prototyping stage, using early designs to explore and refine key interactions. These prototypes are helping us validate search flows, gather feedback, and align the experience with user needs and internal goals.
As we finalize the product, we’re focused on ensuring the solution feels intuitive, responsive, and built for real lab workflows!
Contribution to the Team
As lead UX researcher, I handled the discovery end‑to‑end. I interviewed clinicians and lab staff, then followed up with several targeted surveys to quantify search habits, error rates, and feature wishes across user groups.
I also benchmarked six competing lab systems. Turning this data into empathy maps and a ranked pain-point list, I defined clear design requirements that now guide a smarter, faster search flow headed into development.
Conclusions
My research surfaced clear opportunities to make LigoLab’s search experience more intuitive and efficient. Through interviews, surveys, and a competitive audit, I identified common user struggles, like unclear filter placement and a lack of search guidance, and compared them with best practices in the industry.
These insights are shaping a more streamlined, flexible design that supports diverse workflows, reduces friction, and helps users get to the right information faster.
Learnings
Through this project, I learned how powerful it is to combine both user feedback and data to truly understand the issues at hand. Conducting surveys and interviews helped me dig into real user pain points, and affinity mapping and FigJam let me organize everything in a way that made sense.
I also realized how important it is to keep stakeholders in the loop, so everyone stays focused on solving the right problems. Overall, I found that the key to good design is continuous testing and refining based on what users need.
What went well
Mixing methods paid off. Blending interviews, surveys, and a competitive audit gave me both the “why” and the “how‑often,” so findings felt trustworthy to the team.
Using FigJam for affinity mapping helped me spot patterns quickly and made it easy to share insights with designers and PMs.
Early stakeholder touch‑points kept momentum. Regular check‑ins meant decisions were rooted in research rather than assumptions, and design trade‑offs happened faster.
What I’d improve next time
Broader sampling. I’d recruit a wider range of labs (sizes, specialties) to ensure the insights scale beyond our immediate user base.
Deeper quantitative follow-up. A larger-scale survey or log-file analysis could confirm how often specific search errors occur and measure time saved after changes (a rough sketch follows this list).
Structured prototype testing. Scheduling formal usability sessions sooner would give richer feedback before development ramps up.
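To give a flavor of that log-file analysis, here is a rough sketch that ranks the queries most often returning zero results. The tab-separated log with query and result_count columns is an assumed format, not an existing LigoLab log.

```python
# Rough sketch of the log-file analysis idea: rank the queries that most often
# return zero results. The tab-separated log format is hypothetical.
import csv
from collections import Counter

def zero_result_queries(log_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count queries whose result_count is 0, most frequent first."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):  # columns: query, result_count
            if int(row["result_count"]) == 0:
                counts[row["query"].strip().lower()] += 1
    return counts.most_common(top_n)

# Usage, against a hypothetical "search.log":
# for query, n in zero_result_queries("search.log"):
#     print(f"{n:5d}  {query}")
```

Paired with timestamps, the same log could also show whether changes actually reduce repeated searches over time.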
Want to connect?
Let’s discuss how I can contribute to bringing your user experience to a new level.
Contact me