Improving Navigation in the Knowledge Center
During my UX research internship at Centene, I evaluated the navigation of an existing hub of knowledge articles across the web (mobile and desktop) and app versions, uncovering usability issues and delivering actionable recommendations to the design team.

For confidentiality reasons, I cannot share images of Centene's intellectual property.
Role
UX Research Intern
Team
2 UX designers, 1 content designer, product stakeholders
Timeline
8 weeks (Summer 2025)
Methods
Competitive analysis, heuristic evaluation, tree testing, unmoderated usability testing (mobile web and app)
[IMPACT SUMMARY]
Evaluated the Knowledge Center's navigation using competitive analysis, heuristic review, tree testing, and two rounds of unmoderated usability testing to identify critical friction points in how members find information.
Provided clear, prioritized recommendations that helped UX and content designers refine information architecture, labeling, and mobile navigation patterns as the project progressed.
Adapted the research plan mid-project from a desktop-first to a mobile-first focus, ensuring findings aligned with current product and stakeholder priorities.
[CONTEXT AND GOALS]
The Knowledge Center is a self-service hub where members learn about plans, benefits, and how to use their coverage. As part of a health insurance member portal, it serves thousands of users seeking critical information, often when they're stressed or confused about their coverage. The Knowledge Center is currently a page with a simple grid of categories, each containing a list of hyperlinks to articles. My team was assigned to bring the page to life and bring it in line with the rest of the Ambetter website's visual and accessibility standards.
My internship focused on understanding how well members could find and understand information in this space. The goal was straightforward: improve the usability of the Knowledge Center's navigation by uncovering where users struggle, why they get lost, and how the structure and labeling could better support their core tasks.
Key research questions:
Can members efficiently find the information they need without excessive clicks or back-and-forth?
Do the current labels and categories match how members naturally think about health insurance topics?
Where do mobile web and app navigation patterns diverge, and which is more intuitive?
[COMPETITIVE ANALYSIS & HEURISTIC EVALUATION]
Purpose: Understand how the market approaches health information architecture and identify obvious usability issues against established best practices.
What I did: I reviewed how competitors like www.unitedhealthgroup.com, other insurance portals, and health information sites structured their navigation. Then each of my team members evaluated Ambetter's Knowledge Center against Nielsen's 10 usability heuristics (visibility of system status, user control and freedom, error prevention, consistency, etc.).
Output: A report highlighting gaps—e.g., navigation labels used internal terminology rather than member language, and the hierarchy buried common tasks (like "how to pay") under unintuitive categories.
[TREE TESTING]
Purpose: Measure how easily users could locate key pieces of information based solely on the site structure and category labels, without visual design distracting them.
What I did: I designed a tree test in UserZoom focusing on 8–10 common member tasks (e.g., "Find out how to pay your premium," "Learn what preventive care is covered," "Understand your out-of-pocket costs"). Participants saw only text labels and hierarchies—no interface design—so the results isolated information architecture issues.
Sample size: 40+ participants
Output: Success/failure rates for each task, plus heatmaps showing where users hesitated or chose incorrect paths. For example, a task with 35% success rate revealed that the label "Plan Details" wasn't clear enough; users expected "Coverage & Benefits" instead.
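To illustrate how per-task metrics like these are derived, here is a minimal sketch that tabulates success rates from raw tree-test records and flags problem tasks. The task names, data, and 70% threshold are hypothetical, not Centene's actual results or UserZoom's export format:

```python
from collections import defaultdict

# Hypothetical tree-test records: (task, participant reached the correct node)
results = [
    ("Pay your premium", True), ("Pay your premium", False),
    ("Pay your premium", True),
    ("Preventive care coverage", False), ("Preventive care coverage", True),
    ("Preventive care coverage", False),
]

def success_rates(records):
    """Return {task: fraction of participants who chose the correct path}."""
    totals, wins = defaultdict(int), defaultdict(int)
    for task, ok in records:
        totals[task] += 1
        wins[task] += ok  # True counts as 1, False as 0
    return {task: wins[task] / totals[task] for task in totals}

rates = success_rates(results)
# Flag tasks below a 70% success threshold as likely IA problem areas
problem_tasks = [task for task, rate in rates.items() if rate < 0.70]
```

In practice a testing platform reports these rates directly; the value of recomputing them from raw records is being able to slice by participant segment or first-click path.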
[UNMODERATED USABILITY TESTING - MOBILE WEB (Round 1)]
Purpose: Observe how users navigated the Knowledge Center on mobile web in a realistic think-aloud setting, seeing where they clicked, paused, and backtracked.
What I did: I recruited members who had recently interacted with the Knowledge Center and asked them to complete 3–4 realistic tasks using a think-aloud protocol. For example: "You just got a bill in the mail and aren't sure why it's this amount. Use the Knowledge Center to understand your costs."
Sample size: 12 participants
Output: Video recordings and click-heat data showing where navigation failed (e.g., members clicking the wrong category, then backtracking 2–3 times before finding the right page).
[UNMODERATED USABILITY TESTING - MOBILE APP (Round 2)]
Purpose: After learning that the company was shifting to a mobile-first strategy, I ran a second test focused on the app interface. This let me compare how the app's navigation design performed vs. the mobile web, and whether new patterns were emerging.
What I did: Similar task-based testing, but with 8 participants who primarily used the Ambetter app. I focused on navigation-heavy flows: "Find information about your copays" and "Search for a specific provider benefit."
Sample size: 8 participants
Output: Comparative data showing that the app's bottom-navigation pattern worked better than the mobile web's hamburger menu, but secondary navigation within the app was still confusing.
[KEY FINDINGS & DESIGN IMPLICATIONS]
Insight 1: Labels used internal terminology, not member language.
Evidence: In tree testing, 12 out of 40 participants struggled to find "benefits covered under your plan" when it was labeled "Plan Benefits." They were looking for "What's Covered?" or "My Coverage."
Implication: Recommended a content audit to rewrite labels in plain language aligned with how members naturally spoke about their insurance. Worked with the content designer to prioritize label changes on the most-visited pages.
Insight 2: Navigation depth required too many clicks on mobile.
Evidence: Usability testing revealed members taking 4–5 taps to reach commonly needed information. One participant described it as "feeling like a maze." On mobile, this was especially problematic—small touch targets and pagination made navigation feel slow and error-prone.
Implication: Recommended flattening the information architecture and surfacing top tasks at the first level, reducing depth by 1–2 levels. This required designing a more compact navigation structure, which the design team implemented in the Spring release.
Insight 3: Mobile app bottom-nav outperformed mobile web's hamburger menu.
Evidence: Task completion times were 30% faster in the app (with bottom tabs) compared to mobile web (with hamburger menu). Participants said the app felt "more stable" because navigation was always visible.
Implication: Recommended applying a similar persistent, scannable navigation pattern to mobile web—either a sticky tab bar or reorganized top navigation—rather than hiding navigation in a menu. The team adopted a persistent tab-bar approach in the next iteration.
Insight 4: Context-specific help was missing.
Evidence: 8 participants asked "Is this the right page?" or spent time confirming they were in the right section. They wanted validation that their chosen path matched their question.
Implication: Recommended adding inline help text and clarifying subheads on key pages so users could confirm they were on-task without guessing.
[HOW RESEARCH INFORMED DESIGN]
Rather than just delivering a report, I worked closely with the UX and content designers to translate these findings into actionable changes.
Immediate changes (implemented in later sprints):
Content rewrites — Top 20 navigation labels were rewritten in plain language (e.g., "My Costs" vs. "Cost Sharing Information")
IA flattening — Removed 1–2 levels of navigation depth on mobile for the highest-value tasks
Navigation redesign — Designers prototyped a persistent tab-bar for mobile web, tested it internally, and scheduled it for development
Strategic recommendations for next phase:
Conduct moderated discovery sessions to validate primary member use cases (when do people visit the Knowledge Center, what are they trying to do?)
Run A/B tests on the new label set and navigation structure to measure impact on task completion rates
Consider building contextual help or chatbot support for members who still feel lost despite clearer architecture
I presented findings in multiple formats—a formal PowerPoint report for stakeholders, a simplified one-pager for designers, and annotated screenshots in Figma that highlighted friction points directly on the interface.
[CHALLENGES & STRATEGIC LEARNINGS]
The Challenge: Shifting Scope Mid-Project
The research plan began with a desktop-first approach focused on the full Knowledge Center ecosystem. Halfway through, stakeholders clarified that mobile was the priority and narrowed the scope to mobile web and the app only. This meant restructuring the research strategy, which cost time in the early weeks.
What I Learned:
Navigating ambiguity is a skill. Instead of viewing the scope shift as a setback, I reframed it as an opportunity to focus on higher-impact areas. The mobile-first pivot actually made the findings more actionable and relevant to the team's immediate priorities.
Stakeholder alignment matters. I learned to ask clarifying questions early ("Who's the primary user? What platform matters most?") and to validate assumptions with the team rather than working in isolation. This prevented larger rework later.
Synthesis and presentation are as important as data collection. Collecting usability testing videos is one thing; translating them into a clear recommendation ("Change the label from X to Y because Z participants struggled with it") is where research becomes impactful. I practiced creating bite-sized insights that designers could act on without wading through raw data.
[NEXT STEPS AND FUTURE DIRECTIONS]
If I had more time or were to advise a continuation of this work, I'd recommend:
1. Clarify core use cases through moderated discovery.
Conduct 8–10 in-depth interviews with members asking "When do you visit the Knowledge Center? What are you trying to do? What's your emotional state?" Understanding the why and when of visits would help anchor the IA even more tightly to real needs.
2. Pair qualitative insights with quantitative validation.
Run larger-scale surveys or analyze Google Analytics data on drop-off patterns to confirm that the most-struggled navigation paths are also the most-used ones. This would help the team prioritize changes by impact.
3. Test the new label set and navigation structure before full launch.
Use A/B testing or lightweight usability testing to validate that the proposed changes actually improve task completion rates and member satisfaction, rather than assuming they will.
4. Build for progressive disclosure on mobile.
As content grows, consider ways to surface the most relevant information first and allow members to dig deeper if needed, rather than flattening the IA further (which could create other problems).
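The A/B validation suggested in step 3 is commonly analyzed as a two-proportion z-test on task completion rates. A minimal sketch with hypothetical completion numbers (not data from this project):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: A = old navigation, B = new navigation."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis that the rates are equal
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: 56/100 completed the task with old labels, 72/100 with new ones
z = two_proportion_z(56, 100, 72, 100)
# |z| > 1.96 suggests the improvement is significant at the 5% level
significant = abs(z) > 1.96
```

The pooled standard error is the appropriate choice under the null hypothesis of equal rates; with small per-variant samples, Fisher's exact test would be the more conservative option.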
[CONCLUSION: What This Internship Taught Me]
Research is a bridge, not a silo. My role wasn't to design solutions; it was to uncover truths about member behavior and hand them to designers with confidence and clarity. The impact came not from the volume of data I collected, but from my ability to distill patterns and translate them into decisions the team could act on.
Constraints force clarity. The shift to mobile-first initially felt disruptive, but it forced me to prioritize. I couldn't test everything, so I tested what mattered most. That discipline made my findings more useful than a generic "test all the things" approach would have.
Real-world research is messier than textbooks suggest. Time constraints, shifting priorities, and incomplete data are the norm, not exceptions. The skill isn't avoiding ambiguity—it's navigating it thoughtfully, communicating what you know and don't know, and always tying findings back to actionable next steps.