Try FoodHub

A navigation-first shopping assistant for cognitively accessible supermarket experiences.

This is a fully interactive prototype. Add items to your list, start a shop, and navigate through the store. Built after the university project as a proof of concept, using Claude Code and Cursor AI.


Project Context and Timeline

Supermarkets are high-stimulus, information-dense environments. For individuals with cognitive and learning disabilities such as dyslexia, ADHD, and ASD, they are often actively hostile spaces rather than neutral ones. Key challenges include difficulty navigating complex store layouts with unclear or inconsistent signage, limited access to timely assistance, and sensory overload caused by lighting, noise, crowds, and visual clutter. These barriers don't just slow shopping down; they increase stress and reduce independence. Despite this, accessibility in supermarkets is still largely framed around physical mobility, leaving cognitive accessibility under-addressed.

Project Timeline


Project Context


This project was completed as part of a semester-long group project in DECO3200: Interactive Product Design Studio, spanning 13 weeks. The course emphasised identifying, justifying, and responding to real-world problem spaces through research-driven product design, rather than jumping straight to solutions.

Scope


Our group chose to pursue the UX stream of the studio. This meant the primary focus was on user experience and product thinking: defining the problem space, grounding decisions in research, and justifying why a particular intervention was worth building. There was less emphasis on polished UI or engineering feasibility. While wireframes and flows were developed in Figma, there was no expectation to deliver a fully built or production-ready high-fidelity prototype. The value of the work sat in the clarity of the problem framing, the research depth, and the logic of the proposed system.

My Role


This was a group project with four members, and responsibilities were intentionally distributed. I took on a stronger leadership role within the team, helping guide the overall direction of the project and keeping the work aligned with our core problem statement.

I played a major role in shaping the research approach, contributing heavily to both secondary research and the design of primary research and user testing methodologies. I was also solely responsible for building the wireframes and interaction flows in Figma, translating user testing insights into a coherent product structure and user journey.

After the university project wrapped up, I independently built and coded the functional app mockup you see on this page using Claude Code and Cursor AI, as a proof of concept and an experiment in AI-assisted development.

Research Landscape

Before talking to anyone, we needed to understand the existing landscape. What does accessibility actually mean in a supermarket context? What's already been tried? And where are the gaps that no one's addressed?

Primary Research

Three out of four of us live with a cognitive disability. We weren't designing for an abstract user group; we were designing for ourselves. Our research spanned interviews, surveys, in-store observations, and online ethnography. Across all of it, the same story kept emerging: supermarkets are built around store logic, not human cognition.

15 pages of secondary research into cognitive disabilities, accessibility frameworks, and design patterns
9 interviews: in-depth sessions (20+ minutes each) with people living with cognitive and learning disabilities
32 responses to a targeted survey of individuals with cognitive and learning disabilities
100+ responses to a general supermarket user survey

We ran 9 semi-structured interviews with a mix of participants: people with ADHD, dyslexia, ASD, and memory difficulties, plus one supermarket staff member who worked across online picking and nightfill. Transcripts were thematically grouped using affinity diagrams, which clustered findings into four areas: Navigating Stores, Signage, Reducing Cognitive Load, and the broader Shopping Experience.

Interview findings


People navigate supermarkets through memory, not signage. (Navigation)

Across interviews, participants described relying heavily on positional memory and mental maps built over repeated visits. Once that internal map exists, shopping feels manageable. When layouts change, or when entering an unfamiliar store, that map collapses and frustration spikes. Aisle signage was widely described as too broad, inconsistent, or unable to meaningfully represent the complexity of what's actually stocked in an aisle.

"The signs on the aisles… they're pretty useless most of the time."
"You're trying to encapsulate a whole aisle in a couple of words. That just doesn't work."

This reliance on memory explains why layout changes feel disproportionately disruptive, even when signage technically exists.

Category logic often conflicts with how people think. (Navigation)

Participants repeatedly described confusion caused by inconsistent categorisation: products grouped by brand, origin, promotion, or 'inspiration' rather than by how people mentally associate them. Items that logically belong together (toothpaste with bathroom items, sauces with similar cuisines) are often separated across different aisles.

"Sometimes things are sorted by brand, other times by country, other times just scattered."
"You think you know where it should be… and it turns out it technically fits two categories."

This mismatch forces people to scan entire aisles, double back, or abandon items altogether, especially for niche or infrequently purchased products.

Cognitive load accumulates through small frictions. (Distraction)

Very few interviewees described a single catastrophic problem. Instead, stress emerged through stacking micro-frictions: unclear signage, dense shelves, bright lighting, noise, crowds, promotional clutter, and time pressure. For people with ADHD or ASD, this often translated into distraction, decision paralysis, or a strong desire to get in and get out as fast as possible.

"There's just a lot going on… it's more interactive than it needs to be."
"I could spend an hour comparing things if I don't have a plan."

Help exists, but people avoid using it. (Communication)

Asking staff for help was consistently framed as a last resort, not a support mechanism. Reasons included social anxiety, difficulty finding staff, low confidence that staff would know the answer, or previous negative experiences.

"I don't feel confident asking, and half the time they just point to an aisle anyway."
"It feels like I'm interrupting them."

Avoidance is already a coping strategy. (Distraction)

Several participants described actively avoiding large supermarkets, busy hours, certain aisles, or specific stores altogether. Others shopped at off-peak times, rushed their trips, or abandoned items when friction became too high.

"I just want to leave as fast as possible."
"If it's too busy, I'll come back another time."

This reinforces that accessibility issues don't just slow people down. They change behaviour.

The interviews revealed that supermarket accessibility issues aren't caused by a lack of effort from users, but by environments that demand constant cognitive work. Navigation, categorisation, and assistance all rely on users adapting to the system, rather than the system supporting diverse ways of thinking.

Affinity Diagramming

By the end of our research phase, we had three weeks' worth of data from four team members running interviews, surveys, observations, and ethnographic analysis in parallel. That's a lot of raw material. We needed a way to make sense of it all and turn it into something we could actually design from.

Our tool of choice was affinity diagramming. We started at a whiteboard and turned each piece of primary research into an individual data point: a finding, a quote, a behaviour, an observation. Then we grouped those points by similarity, iterating until roughly eight themes emerged. These themes centred around pain points, disability-specific experiences, and behavioural patterns our participants had described. The three insights that follow were distilled directly from these themes.

Insights

We synthesised our user research findings into three key insights.

01
Insight

🧭 Navigation

Difficulty finding items
Navigation
Weird locations, weird categories, confusing layouts
Layout confusion
“Products grouped by brand in one store, by category in another”
“Moved items with no indication of where they went”
“Had to walk every aisle just to find pasta sauce”
Mental mapping
“I only know where things are because I’ve been coming here for years”
“New stores are basically impossible without help”
“If they rearrange, my whole system breaks”
Signage doesn’t help
Navigation
Hard to read, too broad, sometimes missing entirely
Signage readability
“Font is too small to read from the end of the aisle”
“Signs say ‘International Foods’ but that could be anything”
“Some aisles just don’t have signs at all”
Category mismatch
“Toothpaste is in ‘Health and Beauty’ not ‘Bathroom’”
“You’re trying to encapsulate a whole aisle in a couple of words”
“I guessed wrong three times before I found it”
No product availability info
Navigation
Empty shelves, unclear substitutes, wasted trips
Stock uncertainty
“Went specifically for one item and the shelf was empty”
“No way to know before going whether they have it”
“Staff said ‘try tomorrow’ which doesn’t help me right now”
Substitution difficulty
“I don’t know what’s a good alternative if my item isn’t there”
“Decision paralysis kicks in when my plan falls apart”
02
Insight

💬 Communication

People avoid asking for help
Communication
Social anxiety, bad experiences, guilt
Emotional barriers
“I feel like I’m interrupting them”
“I don’t feel confident asking”
“It feels stupid to ask where the bread is”
Past negative experiences
“Half the time they just point to an aisle anyway”
“I asked once and the staff member didn’t know either”
“They seemed annoyed, so I stopped asking”
Staff can’t always help anyway
Communication
Hard to find, unsure of answers, inconsistent knowledge
Availability
“You can never find someone when you actually need them”
“They’re always stocking shelves or busy”
Knowledge gaps
“Staff are often new and don’t know the layout”
“They pointed me to the wrong aisle twice”
“Online picking staff know better than floor staff”
No alternative ways to get assistance
Communication
When staff aren’t the answer, there’s nothing else
Digital gap
“The app tells me what’s on sale but not where anything is”
“No search function for in-store location”
Workarounds
“I just Google it and hope for the best”
“Sometimes I call ahead to check if they stock something”
03
Insight

🧠 Distraction

Overstimulating environments
Distraction
Crowds, lighting, noise, dense shelves
Sensory overload
“The fluorescent lights give me headaches after 20 minutes”
“Music plus announcements plus people talking is a lot”
“There’s just a lot going on… it’s more interactive than it needs to be”
Physical environment
“Aisles are too narrow when other people are in them”
“Shelves packed floor to ceiling feel claustrophobic”
Decision paralysis
Distraction
Too much choice, hard to compare, fear of wrong pick
Choice overload
“There are 40 types of pasta sauce and I can’t tell the difference”
“I could spend an hour comparing things if I don’t have a plan”
“The worry that we might choose the wrong option gives us unwanted stress”
Label difficulty
“Nutritional info is tiny and dense”
“Price per unit vs price per item confuses me every time”
Impulse buying
Distraction
Sales, promotions, hunger, unstructured roaming
Triggers
“Sale stickers pull my attention even when I don’t need the item”
“Limited edition stuff creates fear of missing out”
“Shopping hungry is a disaster every time”
Loss of structure
“If I don’t have a list I just wander and grab things”
“I end up buying stuff I already have at home”
“Getting off track is where the overspending happens”
Individuals with cognitive and learning disabilities, such as dyslexia, ADHD, and ASD, face significant challenges navigating supermarket environments due to a lack of universally accessible design.

Ideation

We needed to go wide before we could go narrow. Using brainstorming, brain writing, Crazy 8s, and reverse thinking, we generated as many ideas as possible across every pain point we'd identified, without filtering for quality yet.

How we combined ideas


Our strongest concept didn't come from a single brainstorm. It emerged from combining separate ideas that each targeted different parts of the problem: threads like structured shopping lists (Distraction), map-led wayfinding (Navigation), and refreshed aisle signage (Navigation).

The process was informal but deliberate. Each team member pitched their ideas to the group, including the impractical ones. We'd then build on top of them: adding features, identifying advantages, or connecting them to specific pain points from our research. At this stage nothing came off the table. We were expanding, not narrowing. The narrowing came later through a decision matrix.

Other ideas we explored

Decision Matrix

With a wall of ideas in front of us, we needed a structured way to narrow them down. We built a decision matrix that scored each concept against our three core insights (Navigation, Communication, and Distraction), general accessibility criteria, and specific support for cognitive disabilities like dyslexia, ADHD, ASD, and memory impairment.

The winner: a shopping list and map app. It scored highest across the board because it could realistically address navigation, assistance, and cognitive load within a single interface, while also meeting the accessibility standards we'd identified through our secondary research.
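The scoring mechanics can be sketched as a small weighted-sum model. This is a minimal illustration of how a decision matrix like ours narrows concepts; the criterion names, weights, and ratings below are illustrative stand-ins, not our actual rubric values.

```python
# Illustrative decision-matrix scoring: each concept is rated 0-5 against
# weighted criteria, and the highest weighted sum wins. Weights and ratings
# here are hypothetical examples, not the project's real numbers.

CRITERIA = {
    "navigation": 3,             # the three core insights, weighted highest
    "communication": 3,
    "distraction": 3,
    "general_accessibility": 2,  # physical, visual, auditory, speech
    "cognitive_support": 2,      # dyslexia, ADHD, ASD, memory impairment
}

def score(concept: dict[str, int]) -> int:
    """Weighted sum of a concept's 0-5 ratings across all criteria."""
    return sum(weight * concept.get(name, 0) for name, weight in CRITERIA.items())

concepts = {
    "list + map app": {"navigation": 5, "communication": 4, "distraction": 5,
                       "general_accessibility": 4, "cognitive_support": 5},
    "refreshed signage": {"navigation": 4, "communication": 1, "distraction": 2,
                          "general_accessibility": 3, "cognitive_support": 3},
}

winner = max(concepts, key=lambda name: score(concepts[name]))
print(winner, score(concepts[winner]))  # -> list + map app 60
```

The value of the matrix wasn't the arithmetic itself but that it forced every idea to be judged against the same research-derived criteria.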

Why this concept won


The app concept gave us the widest coverage across our decision criteria: it directly addressed all three insights.

Navigation, through in-store mapping and guided routing.

Communication, through built-in assistance flows that reduce the need to find and ask staff.

Distraction, through structured shopping lists and planned routes that keep users focused.

It supported cognitive disability-specific needs we'd researched: simplified language and audio alternatives for dyslexia, minimal layouts and checklists for ADHD, predictable interfaces and sensory customisation for ASD, and step-by-step guidance with progress tracking for memory impairments.

It also met general accessibility benchmarks across physical, visual, auditory, and speech accessibility.

One constraint worth noting: this was a university project that required our solution to span digital, physical, and spatial design. This influenced some early decisions, like incorporating a physical sectioned basket system into the concept. That feature was later dropped after testing showed it wasn't effective, but at this stage it was part of our thinking.

Prototyping

We developed prototypes across three spaces: digital, spatial, and physical. Each one tested different aspects of the concept, from screen-level interaction design through to how someone would physically move through a store with the app guiding them.

User Testing

We ran three rounds of user testing, each building on the last. Every round was measured against our three core insights so we could track whether the design was actually solving the problems we'd identified, not just whether it was usable.

Round 1 wasn't about testing the idea. It was about testing the prototype.

We had one shot at a large-scale user testing fair later in the semester, with close to 200 participants available. We couldn't afford to spend that day debugging UI issues or explaining confusing icons. So we ran a preliminary round of think-aloud sessions, having users work through 9 core features while verbalising their thought process: adding items, removing items, navigating the store, using the basket, and moving the spatial prototype figurine.

The goal was simple: find everything that's broken or confusing now, fix it, and present a functional prototype at the fair.

What we found.


The basket feature failed completely. Not a single user understood how to use it from the app's explanation alone. Every participant had to ask us for clarification. Even after understanding it, most ignored it and just placed items in the basket without sectioning. We responded by adding an onboarding animation showing the basket being divided, along with clearer instructions and visuals.

We also discovered the app lacked positive feedback loops. Users would add an item and nothing would happen visually. They'd start navigation and have no confirmation it was working. We added confirmation states throughout: 'item added,' 'starting shop,' progress indicators.

Beyond those, we caught dozens of smaller issues: unclear icons, confusing page transitions, button placements, colour contrast problems. All the things that would have eaten into our testing time at the fair if we hadn't caught them here.

Evaluation

We set out to test whether FoodHub actually addressed the problems we'd identified in research. Not just whether it was usable, but whether it moved the needle on navigation, communication, and distraction. Here's how it performed.

Across four SUS respondents, FoodHub scored an average of 69.4, just above the industry benchmark of 68. For a mid-fidelity Figma prototype, that's a solid baseline. Learnability scored particularly high: all four respondents agreed most people would pick the app up quickly. The slightly lower scores on confidence and consistency suggest the prototype's fidelity level affected how reliable the experience felt, a polish issue, not a conceptual one. All four think-aloud participants said they could see themselves using the app.
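The SUS average above follows the questionnaire's standard scoring: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch, with illustrative response sheets rather than our participants' raw data:

```python
# Standard SUS scoring. Each respondent rates 10 items from 1 to 5;
# odd-numbered items are positively worded, even-numbered items negatively.
# The example response sheets are illustrative, not the study's raw data.

def sus_score(responses: list[int]) -> float:
    """Score a single respondent's 10 SUS items (each 1-5) onto 0-100."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (positive)
        for i, r in enumerate(responses)
    )
    return total * 2.5

def average_sus(sheets: list[list[int]]) -> float:
    """Average SUS across respondents, as done for the four participants."""
    return sum(map(sus_score, sheets)) / len(sheets)

# A best-possible sheet scores 100; all-neutral answers score 50:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
print(sus_score([3] * 10))                         # -> 50.0
```

With only four respondents the average is indicative rather than statistically robust, which is why it's read here as a baseline against the 68 benchmark rather than a definitive result.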

Evaluation goals by insight: result and summary
Insight | Goal | Result | Summary
Navigation | Product finding | Met | Users found items directly instead of wandering
Navigation | Store navigation clarity | Partially met | Route worked; map orientation needed iteration
Navigation | Time saving | Met | All users reported faster shopping with the app
Communication | Reduced staff reliance | Met | Location self-service reduced help-seeking
Communication | Effective communication | Partially met | Call feature worked but needed more modalities
Distraction | Minimise distraction | Met | Impulse buying dropped from 4/5 to 1/5
Distraction | Refocus speed | Inconclusive | Prototype unfamiliarity skewed dual-task results
Distraction | Basket organiser | Not met | Dropped after testing showed it added cognitive load and social stigma
Navigation


Did the app help users find items?

Yes. In baseline tests without the app, users searched every shelf and wandered aisles without direction. One participant described their usual approach:

"as there's no labels I guess I'll just walk through every aisle."
With the app, every participant navigated directly to their items. The difference was immediately visible in both behaviour and confidence.

Was the navigation clear?

Partially. The guided route worked. Users understood where they were going and could follow the path. But three out of four participants raised the same issue: when the map reoriented between items, they lost their sense of position.

"the map sort of changes orientation and I'm trying to figure out where I am."
"what confused me with the first map was the orientation."

The concept was right. The specific map implementation needed iteration, which led directly to our round 3 A/B test.

The A/B test confirmed that map navigation was preferred over compass navigation overall. But the transcripts revealed something more nuanced: the compass didn't suffer from the orientation problem because it gave direction without requiring spatial understanding of the whole store. Several users suggested having both available, using the map for overall navigation and switching to compass for close-range finding.

Did it save time?

Yes. Users described the app as "definitely more efficient" and noted they could go directly to items rather than scanning. One participant said it "probably would have saved me time rather than looking at every single aisle and then going back and forth."

What was missing?

Users consistently asked for more spatial specificity. Three participants wanted left/right indicators for which side of the aisle an item was on. One asked for vertical shelf position (top or bottom). The app solved the macro navigation problem (which aisle) but left a gap in the last metre of finding.

Communication


Did the app reduce the need to ask for help?

Yes. Users required less staff assistance when using the app because the navigation itself answered the most common question ("where is this item?"). This aligns with our research finding that 70 to 80 percent of in-store help requests are about location.

Was the call feature useful?

Users found the call button intuitive and appreciated having it available. One participant said they "preferred calling for staff than talking in person," validating our research that face-to-face requests carry emotional cost. However, one user completed the call task easily but then asked what it was actually for, suggesting the value proposition needed clearer framing in the UI.

What was missing?

Multiple users asked for alternatives beyond voice calling. One said "I personally don't like phone calls so I would prefer another way to ask for help." Users suggested text chat, video call options, and a chatbot. The call feature was the right instinct, reducing the friction of seeking help, but it was too narrow. Not everyone's communication barrier is the same.

Distraction


Did the app reduce distractions?

Yes, and this was the strongest result. We placed a promotional distraction in the testing environment. Without the app, 4 out of 5 users noticed the promotion and decided to purchase. With the app, only 1 out of 5 did. The structured route kept users focused on their list and moving through the store with purpose, leaving less opportunity for unplanned browsing.

Users described this directly. One participant said:

"helped avoid searching every shelf and finding the promotion, thus avoiding impulse buying."

The navigation feature wasn't just solving a navigation problem. It was solving a distraction problem too, because aimless wandering is what creates exposure to impulse triggers.

Did the app reduce cognitive load?

This was our most complicated finding. We ran a dual-task test where users memorised a sequence of 5 numbers before shopping, then recalled them afterwards. In the baseline test without the app, 5 out of 5 users recalled the sequence. With the app, only 1 out of 5 did.

On the surface, that looks like the app increased cognitive load rather than reducing it. But the context matters. Users were interacting with a brand-new, unfamiliar prototype for the first time. They were processing a novel interface, learning new interaction patterns, and navigating a physical diorama simultaneously. One user acknowledged this tension directly:

"hard to tell because they have to learn a whole new app, inherently overloading."

The learning curve of an unfamiliar tool consumed the cognitive bandwidth that the tool was designed to free up.

We'd expect this to invert with repeated use. Once the interface becomes familiar, the structured route and guided workflow should reduce cognitive effort compared to unassisted shopping. But we can't claim that from this test. It's an honest limitation of testing a new prototype on first-time users.

What about the basket?

The basket feature was designed to make purchasing categories physically visible and reduce impulse buying through awareness. It failed. The A/B test results were decisive, but the qualitative feedback was even more telling:

"It's a whole extra step I have to do, thinking which section to put the item, moving the slider, making space… I'd rather just ignore it."
"It's kind of belittling… it constantly reminding me to do something like I'm a 3 year old. It's frustrating, like I know how to shop."
"I get the overall idea of the basket, but I don't think it'll be used by the people who it's designed for. It feels almost like people would give funny looks if you're using it."

That last quote is particularly significant. We were designing for people who already feel self-conscious in supermarkets. A feature that risks social stigma works directly against our goals. The basket was intended to reduce cognitive load, but it added a new decision at every item and carried an emotional cost we hadn't anticipated.

We dropped it. The app's digital shopping list and post-shop report already addressed impulse buying more effectively and without the physical friction or social risk.

Unexpected findings


Some of the most valuable feedback wasn't about what we'd built but about what users wanted next. Without knowing our insight framework, participants independently requested features that mapped directly to our three insights:

A progress bar showing how far through the shop they were (Distraction: maintaining focus and motivation).

Shelf-level directions indicating left/right and top/bottom positioning (Navigation: last-metre finding).

A post-shop summary with comparisons to previous trips, similar to Strava (Distraction: behaviour reflection and awareness).

Text and chat alternatives to calling staff (Communication: non-verbal help options).

A zoom function to switch between store overview and item-level detail (Navigation: spatial orientation).

The fact that users independently identified the same problem areas we'd researched, and proposed solutions aligned with our insights, is strong validation that the research framework was accurately targeting real needs.

Feasibility & Roadmap

The concept has been validated. Here's where it is now and where it goes next.

The current prototype represents the supermarket as a linear sequence of aisles. Users move along a single forward path, with each aisle appearing in a fixed order. Items are assigned to a category, a position in the store sequence, and a location within their aisle.

This was an intentional scoping decision. It let us test whether structured step-by-step guidance actually reduces cognitive load and improves navigation, without solving the much larger problem of full indoor spatial mapping. For a conceptual prototype, it was the right trade-off. We validated the interaction model and its accessibility value while keeping the system lightweight enough to build, test, and iterate on quickly.
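The linear model described above can be sketched in a few lines. This is a minimal illustration under assumptions of my own (class and field names are invented for the sketch, not taken from the prototype's code): each item carries a category, its aisle's index in the store's fixed forward sequence, and a position within that aisle, and routing a shopping list is just a sort on those two keys.

```python
# Sketch of the prototype's linear store model. Names (Item, route, field
# names) are hypothetical; the point is the data shape, not the real code.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    category: str
    aisle: int      # index in the store's fixed forward aisle sequence
    position: int   # location within the aisle, front to back

def route(shopping_list: list[Item]) -> list[Item]:
    """Order a list into a single forward walk: by aisle, then by position."""
    return sorted(shopping_list, key=lambda item: (item.aisle, item.position))

trip = [
    Item("pasta sauce", "pantry", aisle=4, position=2),
    Item("toothpaste", "health", aisle=7, position=1),
    Item("bread", "bakery", aisle=1, position=3),
]
print([item.name for item in route(trip)])  # -> ['bread', 'pasta sauce', 'toothpaste']
```

Because the whole store reduces to a sorted sequence, there's no pathfinding, no floor-plan geometry, and nothing to break when testing: exactly the lightweight trade-off described above. Replacing this with true indoor mapping is the main feasibility step for a production version.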

This project started with a simple observation: supermarkets are built for the store, not for the shopper. Everything we researched, designed, and tested came back to that. Accessibility isn't a feature you add at the end. It's a lens that changes how you design from the start. That's what I'll carry into everything I work on next.