Through Their Eyes

A Unity-based dual-perspective behavioural simulation

Overview

Through Their Eyes is a scenario-based behavioural simulation built in Unity in response to a design brief: develop software that helps family members gain empathy for loved ones living with dementia. The simulation places players alternately in the perspectives of Richard — a grandfather experiencing memory loss, environmental disorientation, and emotional confusion — and Alex, his grandson, reluctantly spending time with him at his father's insistence.

Rather than telling users about dementia, the simulation makes its cognitive and perceptual effects experiential. What should feel familiar — a living room, an ordinary afternoon — becomes disorienting for the player, mirroring the reality of Richard's condition. The emotional arc moves from tension and confusion through realisation to empathy, with the player ultimately confronted with a choice: stay and care, or turn away.

The project was developed independently as part of an interview process for an Experience Designer and Developer role at MAGES Studio.

 

The Problem Statement

Many family and community members today lack essential caregiving skills. It can be difficult to fully understand what a loved one with dementia is going through — and without that understanding, even well-meaning family members feel unsure, unprepared, or emotionally distant. Empathy is the first step toward meaningful care, and existing educational approaches often fail to bridge the gap between knowing about dementia and genuinely understanding it from the inside.

 

Simulation Design

The core design principle is dual-perspective scenario simulation: players inhabit both the caregiver and the individual in need across the same narrative scenario. This forced perspective shift is the primary mechanism for building empathy — not through instruction, but through direct experiential modelling of a cognitive state that is otherwise inaccessible to observation.

Key simulation mechanics were designed to model dementia's cognitive fragmentation:

Memory fragmentation as a game mechanic — what appear to be errors or glitches are intentional representations of the way memory loss disrupts environmental coherence. Familiar objects and spaces behave unpredictably, not as a technical failure but as a designed simulation of cognitive disorientation.

Random disruptions — sudden scene changes, unexpected sounds, and fragmented dialogue simulate the involuntary, non-linear nature of memory loss. These are scripted stochastic events rather than random noise, calibrated to create disorientation without overwhelming the player.

First-person RPG perspective — players inhabit each character's viewpoint directly, rather than observing from outside. This removes the emotional distance that third-person framing creates.

Player agency at the closing moment — the simulation ends with a meaningful binary choice: accept the responsibility of care, or walk away. This design decision ensures the experience demands an active response rather than passive observation, and generates data on player decision patterns for evaluation purposes.
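The "scripted stochastic events" described above could be sketched in Unity roughly as follows. This is a minimal illustration, not the project's actual script: the class name, serialized fields, and interval values are all assumptions.

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of a scripted-stochastic disruption scheduler.
public class DisruptionScheduler : MonoBehaviour
{
    [SerializeField] private AudioSource[] disruptionSounds; // authored "unexpected sound" events
    [SerializeField] private float minInterval = 20f;        // calibration floor (seconds)
    [SerializeField] private float maxInterval = 45f;        // calibration ceiling (seconds)

    private void Start() => StartCoroutine(RunDisruptions());

    // Only the *timing* is randomised, within a calibrated window; each event
    // itself is authored. Disorientation stays controlled rather than noisy.
    private IEnumerator RunDisruptions()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(minInterval, maxInterval));
            int pick = Random.Range(0, disruptionSounds.Length);
            disruptionSounds[pick].Play();
        }
    }
}
```

Tuning `minInterval` and `maxInterval` in the Inspector is what the text calls calibration: wide enough to feel involuntary, never so frequent that it overwhelms.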

Gameplay screenshots 1–4

 

Game Pipeline

The simulation was architected around a scene state machine managing six sequential scenes with branching logic at the decision point. Core systems included a Session Manager (DontDestroyOnLoad) for persistent session tracking across scenes, a Cursor Look system responding to OnClick and OnStart events, a Shuffle On Hover mechanic for UI interaction, Next Scene Trigger logic via InputKey RETURN, and value tracking for Option0Click and Option1Click to capture player decisions.
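A persistent session manager of the kind described above might look like the sketch below. The identifiers that appear in the text (DontDestroyOnLoad, Option0Click, Option1Click, the RETURN-key scene trigger) are taken from it; everything else is an illustrative assumption rather than the project's actual code.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative sketch of a scene-spanning session manager.
public class SessionManager : MonoBehaviour
{
    public static SessionManager Instance { get; private set; }

    public string SessionId { get; private set; }
    public int Option0Click { get; private set; }
    public int Option1Click { get; private set; }

    private void Awake()
    {
        // Survive scene loads so a single instance tracks the whole session.
        if (Instance != null) { Destroy(gameObject); return; }
        Instance = this;
        DontDestroyOnLoad(gameObject);
        SessionId = System.Guid.NewGuid().ToString();
    }

    // Called by the closing-choice UI to capture the player's decision.
    public void RecordDecision(int optionIndex)
    {
        if (optionIndex == 0) Option0Click++;
        else Option1Click++;
    }

    private void Update()
    {
        // Next Scene Trigger via the RETURN key, as described above.
        if (Input.GetKeyDown(KeyCode.Return))
            SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex + 1);
    }
}
```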

The pipeline was designed with future scalability in mind — the same session management and scene transition architecture can accommodate additional scenario modules built on the same core principle without requiring architectural changes.

Game Pipeline Diagram

 

Data Pipeline — Unity to AWS to React

Beyond the simulation itself, a full data collection and analysis pipeline was implemented to support evaluation and iterative improvement. Session data generated within Unity is serialised as JSON and uploaded to an AWS Lambda function URL via the SessionManager script. The Lambda function routes data through two pathways: SaveGameFunction writes session records to Amazon DynamoDB, while GetGameFunction retrieves aggregated data for analysis.

Session records capture a unique session ID, number of attempts, timestamp, and player decision values (optionClick0 and optionClick1) — enabling quantitative analysis of how players respond to the simulation's closing choice across a population of users. This data feeds a React-based dashboard web application for real-time monitoring and review.
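The Unity-to-Lambda leg of this pipeline could be sketched as below. The record fields follow the text; the Lambda function URL is a placeholder and the class and method names are assumptions.

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Illustrative session record matching the fields named in the text.
[System.Serializable]
public class SessionRecord
{
    public string sessionId;
    public int attempts;
    public string timestamp;
    public int optionClick0;
    public int optionClick1;
}

public class SessionUploader : MonoBehaviour
{
    [SerializeField] private string lambdaUrl = "https://example.lambda-url.aws"; // placeholder URL

    // Serialise the record as JSON and POST it to the Lambda function URL.
    public IEnumerator Upload(SessionRecord record)
    {
        string json = JsonUtility.ToJson(record);
        using (var req = new UnityWebRequest(lambdaUrl, UnityWebRequest.kHttpVerbPOST))
        {
            req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success)
                Debug.LogWarning($"Session upload failed: {req.error}");
        }
    }
}
```

On the AWS side, SaveGameFunction would receive this JSON body and write it as a DynamoDB item keyed on `sessionId`.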

This end-to-end data architecture — from in-simulation event capture through cloud storage to a web dashboard — demonstrates the kind of instrumented simulation design relevant to concept experimentation and AI verification use cases, where capturing and analysing behavioural data from simulation runs is as important as the simulation itself.

Data Flow Diagram

 

User Testing Plan

A structured six-step user testing plan was proposed to evaluate both the simulation's usability and its effectiveness as an empathy-building tool. The plan calls for 5 to 8 participants, beginning with an enquiry into each participant's prior experience and relationship to caregiving before gameplay. During play, challenges and pain points would be documented. Post-session questions would probe what the experience was trying to communicate, how participants felt during and after, and collect open feedback.

This testing framework mirrors the kind of validation process used in training simulation development — where measuring not just technical performance but actual behavioural and attitudinal outcomes is critical to assessing whether the simulation achieves its intended effect. Testing has not yet been conducted.


 

Future Directions

The simulation was scoped as a proof of concept within the interview brief and was built using Unity's High Definition Render Pipeline (HDRP) to achieve the visual fidelity needed to make the experience feel grounded and believable. Running HDRP on a 2021 MacBook Pro introduced performance constraints — frame rate limitations and latency that affected the smoothness of the experience under live conditions.

The primary area for improvement in a production version is hardware and rendering optimisation. This includes running the build on a dedicated Windows machine with a discrete GPU capable of sustaining HDRP at full fidelity, optimising lighting bakes and shadow settings to reduce real-time render cost, and refactoring the Shuffle on Hover mechanic, which currently uses random index-based positioning, a pattern that does not perform well at scale. DOTween lerps would also replace the current animation transitions, giving smoother, more controllable UI feedback without additional render overhead.
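The proposed DOTween replacement for the hover shuffle might look like this sketch: one tween per hover event instead of repeated random repositioning. Class name, distances, and durations are illustrative assumptions.

```csharp
using DG.Tweening;
using UnityEngine;

// Illustrative DOTween-based hover feedback for a UI card.
public class HoverShuffleCard : MonoBehaviour
{
    [SerializeField] private RectTransform card;
    [SerializeField] private float shuffleDistance = 24f; // pixels of lift on hover
    [SerializeField] private float duration = 0.2f;       // tween length in seconds

    private Vector2 restPos;

    private void Awake() => restPos = card.anchoredPosition;

    public void OnHoverEnter()
    {
        // DOAnchorPos tweens toward an absolute target, so repeated or
        // interrupted hovers cannot drift the card away from its rest position.
        card.DOAnchorPos(restPos + Vector2.up * shuffleDistance, duration)
            .SetEase(Ease.OutQuad);
    }

    public void OnHoverExit()
    {
        card.DOAnchorPos(restPos, duration).SetEase(Ease.OutQuad);
    }
}
```

Tweening to fixed targets, rather than recomputing random indices, is what makes this cheaper and more predictable at scale.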

Beyond performance, expanding the experience with additional scenario modules built on the same dual-perspective core principle would extend the simulation's range without requiring architectural changes. The longer-term direction is a VR adaptation — moving from a screen-based first-person experience to full immersion, which would significantly increase effectiveness as both an empathy tool and a training platform, and bring the simulation into the same domain as training systems used in medical, military, and emergency response contexts.

 
