Methodology
The pyETA tool is built to handle eye-tracking data with a real-time focus: it captures raw gaze data, processes it with a velocity-threshold approach (I-VT), and produces fixation start timestamps and durations to assist the researcher.
The toolbox leans on tobii_research for communicating with the eye-tracker device, PyQt6 for the GUI, click for the CLI, and pylsl / mne_lsl for LSL streaming.
Core Components
Data Collection: Hooks into a Tobii eye tracker or mocks it if you’re testing.
Processing: Uses velocity thresholds to tag fixations, with gaze smoothed by a OneEuroFilter.
Validation: Cross-checks gaze against known targets on a 3x3 grid to compute accuracy and precision.
Visualization: Real-time gaze plot & fixation tracking plot.
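As a sketch of the processing component above: a minimal I-VT pass marks any sample whose point-to-point velocity falls below a threshold as a fixation sample and merges consecutive ones into (start, duration) pairs. The function name, units, and threshold below are illustrative assumptions, not pyETA's actual implementation (which additionally smooths the gaze signal with a OneEuroFilter first):

```python
import numpy as np

def ivt_fixations(t, x, y, velocity_threshold=0.5):
    """Minimal I-VT sketch.

    t: timestamps in seconds; x, y: gaze positions (normalized screen
    coordinates here for simplicity). Returns (start_time, duration) pairs.
    """
    t, x, y = map(np.asarray, (t, x, y))
    dt = np.diff(t)
    vel = np.hypot(np.diff(x), np.diff(y)) / dt   # point-to-point velocity
    is_fix = vel < velocity_threshold             # below threshold => fixation sample

    fixations = []
    start = None
    for i, fix in enumerate(is_fix):
        if fix and start is None:
            start = t[i]                          # fixation begins
        elif not fix and start is not None:
            fixations.append((start, t[i] - start))
            start = None
    if start is not None:                         # fixation running at end of data
        fixations.append((start, t[-1] - start))
    return fixations
```

For example, ten samples at 100 Hz with one large jump in the middle yield two fixations of about 40 ms each.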
Workflow
application.py orchestrates the two main GUI use cases, using reader.TrackerThread and reader.StreamThread for the tracking and validation processes.
In the CLI, track.py does the tracking and window.py does the validation process.
StreamThread runs in the background and reads data from an LSL stream on demand; the LSL stream is sent continuously by TrackerThread and stored in buffers with a maximum length of 10000 for gaze samples and 10 for fixations.
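The buffering described above can be mirrored with bounded deques, which silently drop the oldest samples once full. The sketch below shows how a background consumer like StreamThread might pull LSL samples via pylsl into such buffers; the stream name, function, and variable names are assumptions for illustration, not pyETA's actual code:

```python
from collections import deque

# Ring buffers with the maximum lengths mentioned above; once full,
# appending evicts the oldest entry automatically.
gaze_buffer = deque(maxlen=10000)
fixation_buffer = deque(maxlen=10)

def consume_gaze_stream(stream_name="tobii_gaze", timeout=5.0):
    """Pull gaze samples from an LSL stream into the gaze ring buffer."""
    # pylsl is the LSL binding named in the docs; imported lazily here so
    # the buffers above can be used without it installed.
    from pylsl import StreamInlet, resolve_byprop

    streams = resolve_byprop("name", stream_name, timeout=timeout)
    if not streams:
        raise RuntimeError(f"no LSL stream named {stream_name!r} found")
    inlet = StreamInlet(streams[0])
    while True:
        sample, timestamp = inlet.pull_sample(timeout=1.0)
        if sample is not None:
            gaze_buffer.append((timestamp, sample))
```

With `maxlen=10000`, appending a 10001st gaze sample drops the oldest one, so the buffer always holds the most recent window of data.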
This process starts a new window: validate_eye_tracker() pops up a dialog to pick a screen, then launches a validation window showing a 3x3 grid on which a blue dot moves and pauses at targets in random order.
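A randomized traversal of the 3x3 target grid can be sketched as follows; the function name, edge margin, and normalized coordinates are illustrative assumptions, not pyETA's actual layout code:

```python
import random

def make_validation_targets(rows=3, cols=3, seed=None):
    """Return the grid points in a random traversal order.

    Points are in normalized screen coordinates (0..1); a GUI would scale
    them to the chosen screen's pixel dimensions.
    """
    margin = 0.1  # keep targets away from the screen edge (assumption)
    xs = [margin + i * (1 - 2 * margin) / (cols - 1) for i in range(cols)]
    ys = [margin + j * (1 - 2 * margin) / (rows - 1) for j in range(rows)]
    targets = [(x, y) for y in ys for x in xs]
    rng = random.Random(seed)  # seed only to make the order reproducible
    rng.shuffle(targets)
    return targets
```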
Here, TrackerThread runs alongside the validation window, collecting gaze data while the validation grid runs, and stops once all targets have been traversed. The thread is also set to save the data to a file, which is later used for calculating the validation metrics.
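From the saved gaze samples, per-target accuracy and precision can be computed roughly as below. This is a sketch under common definitions (accuracy as mean offset, precision as the spread of offsets); pyETA's exact formulas and units (e.g. pixels vs. degrees of visual angle) may differ:

```python
import math

def accuracy_precision(gaze_points, target):
    """Per-target accuracy and precision from collected gaze samples.

    accuracy  = mean Euclidean offset from the target (lower is better)
    precision = standard deviation of the offsets (spread of the samples)
    Units follow whatever units the gaze coordinates are in.
    """
    tx, ty = target
    offsets = [math.hypot(x - tx, y - ty) for x, y in gaze_points]
    n = len(offsets)
    accuracy = sum(offsets) / n
    precision = math.sqrt(sum((o - accuracy) ** 2 for o in offsets) / n)
    return accuracy, precision
```

For instance, four samples one unit from the target in different directions give an accuracy of 1.0 with a precision of 0.0, since every offset is identical.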