# Stream Generation

The `pyETA` GUI initiates a 22-channel [Lab Streaming Layer](https://labstreaminglayer.org/) (LSL) stream named `'tobii_gaze_fixation'` for real-time eye-tracking analysis. The stream carries gaze-tracking and fixation-detection data, sourced from either a Tobii eye tracker or a [mock service](https://github.com/moses-palmer/pynput).

***

### Stream Generation

* **Script:** `track.py`
* **Class:** `Tracker`
  * In `application.py`, the `start_stream` method creates a `StreamThread` with user-defined parameters and starts it:

    ```python
    self.stream_thread = StreamThread()
    self.stream_thread.set_variables(tracker_params=tracker_params)
    self.stream_thread.start()
    ```
  * `StreamThread` spawns a `TrackerThread`, which instantiates and runs a `Tracker` object from `track.py`.
* **Stream Creation:** `Tracker` creates an LSL stream when `push_stream=True` (set via a GUI checkbox) or when the `--push_stream` flag is passed on the CLI: `pyeta track --push_stream`
* **Channel modification:** Combines raw, filtered, and metadata into a 22-channel array.
  * Applies `OneEuroFilter` to raw gaze coordinates.
  * Computes velocity and fixation status using `velocity_threshold`.
  * Produces `left_filtered_gaze_x`, `left_filtered_gaze_y`, `right_filtered_gaze_x`, and `right_filtered_gaze_y`, which are used for fixation detection.
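As a rough illustration of the velocity-based fixation check described above, the following sketch is hypothetical: the function name, units, and threshold default are assumptions, not pyETA's actual code.

```python
import math

def detect_fixation(prev_xy, curr_xy, dt, velocity_threshold=30.0):
    """Hypothetical sketch: classify one gaze sample as fixated when its
    velocity (pixel distance over elapsed time) stays below a threshold.

    Returns (velocity_in_px_per_s, is_fixated)."""
    if dt <= 0:
        return 0.0, True
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    velocity = math.hypot(dx, dy) / dt  # px/s
    return velocity, velocity < velocity_threshold

# A fast 2-px jump between two samples at 120 Hz reads as a saccade, not a fixation
v, fixated = detect_fixation((960, 540), (962, 541), dt=1 / 120)
```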

<details>

<summary>OneEuroFilter Algorithm</summary>

This algorithm reduces noise and jitter in raw gaze data while preserving responsiveness; the filtered output is later used for fixation detection.

* **Steps:**
  1. **Derivative:** Calculates rate of change:

     ```python
     current_derivative = (current_value - self.previous_value) / time_elapsed
     ```
  2. **Derivative Smoothing:** Applies fixed cutoff (1.0 Hz):

     ```python
     alpha_derivative = self.smoothing_factor(time_elapsed, self.derivative_cutoff)
     filtered_derivative = self.exp_smoothing(alpha_derivative, current_derivative, self.previous_derivative)
     ```
  3. **Adaptive Cutoff:** Adjusts based on velocity:

     ```python
     adaptive_cutoff = self.min_cutoff + self.beta * abs(filtered_derivative)
     ```
  4. **Value Smoothing:** Applies exponential smoothing:

     ```python
     alpha = self.smoothing_factor(time_elapsed, adaptive_cutoff)
     filtered_value = self.exp_smoothing(alpha, current_value, self.previous_value)
     ```

</details>
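The four steps above can be combined into a compact, self-contained filter. This sketch mirrors the attribute names from the snippets but is illustrative, not pyETA's actual implementation; the default parameter values are assumptions.

```python
import math

class OneEuroFilter:
    """Illustrative one-euro filter; attribute names follow the snippets above."""

    def __init__(self, min_cutoff=1.0, beta=0.007, derivative_cutoff=1.0):
        self.min_cutoff = min_cutoff
        self.beta = beta
        self.derivative_cutoff = derivative_cutoff
        self.previous_value = None
        self.previous_derivative = 0.0

    def smoothing_factor(self, time_elapsed, cutoff):
        # alpha in (0, 1]: a higher cutoff or a longer gap weights the new sample more
        r = 2 * math.pi * cutoff * time_elapsed
        return r / (r + 1)

    @staticmethod
    def exp_smoothing(alpha, current, previous):
        return alpha * current + (1 - alpha) * previous

    def __call__(self, current_value, time_elapsed):
        if self.previous_value is None:  # first sample passes through unchanged
            self.previous_value = current_value
            return current_value
        # 1. derivative of the signal
        current_derivative = (current_value - self.previous_value) / time_elapsed
        # 2. smooth the derivative with the fixed cutoff
        alpha_d = self.smoothing_factor(time_elapsed, self.derivative_cutoff)
        filtered_derivative = self.exp_smoothing(
            alpha_d, current_derivative, self.previous_derivative)
        # 3. adapt the cutoff to the current speed
        adaptive_cutoff = self.min_cutoff + self.beta * abs(filtered_derivative)
        # 4. smooth the value itself with the adaptive cutoff
        alpha = self.smoothing_factor(time_elapsed, adaptive_cutoff)
        filtered_value = self.exp_smoothing(alpha, current_value, self.previous_value)
        self.previous_value = filtered_value
        self.previous_derivative = filtered_derivative
        return filtered_value

# Smooth a short run of normalized gaze x-coordinates sampled at 120 Hz
f = OneEuroFilter()
smoothed = [f(x, 1 / 120) for x in (0.50, 0.51, 0.49, 0.50)]
```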

***

### Stream Properties

* **LSL stream Name:** `'tobii_gaze_fixation'`
* **Channels:** 22

Channel Structure of the stream is described below:

<table><thead><tr><th width="90"></th><th width="225">Channel Name</th><th width="125">Type</th><th width="111">Unit</th><th>Description</th></tr></thead><tbody><tr><td><strong>Left Eye</strong></td><td></td><td></td><td></td><td></td></tr><tr><td>1</td><td>left_gaze_x</td><td>gaze</td><td>normalized</td><td>Raw X gaze position (0-1)</td></tr><tr><td>2</td><td>left_gaze_y</td><td>gaze</td><td>normalized</td><td>Raw Y gaze position (0-1)</td></tr><tr><td>3</td><td>left_pupil_diameter</td><td>pupil</td><td>mm</td><td>Pupil diameter</td></tr><tr><td>4</td><td>left_fixated</td><td>fixation</td><td>boolean</td><td>Fixation status (True/False)</td></tr><tr><td>5</td><td>left_velocity</td><td>velocity</td><td>px</td><td>Gaze velocity</td></tr><tr><td>6</td><td>left_fixation_timestamp</td><td>timestamp</td><td>s</td><td>Time of fixation start</td></tr><tr><td>7</td><td>left_fixation_elapsed</td><td>duration</td><td>s</td><td>Fixation duration</td></tr><tr><td>8</td><td>left_filtered_gaze_x</td><td>filtered_gaze</td><td>normalized</td><td>Smoothed X gaze position</td></tr><tr><td>9</td><td>left_filtered_gaze_y</td><td>filtered_gaze</td><td>normalized</td><td>Smoothed Y gaze position</td></tr><tr><td><strong>Right Eye</strong></td><td></td><td></td><td></td><td></td></tr><tr><td>10</td><td>right_gaze_x</td><td>gaze</td><td>normalized</td><td>Raw X gaze position (0-1)</td></tr><tr><td>11</td><td>right_gaze_y</td><td>gaze</td><td>normalized</td><td>Raw Y gaze position (0-1)</td></tr><tr><td>12</td><td>right_pupil_diameter</td><td>pupil</td><td>mm</td><td>Pupil diameter</td></tr><tr><td>13</td><td>right_fixated</td><td>fixation</td><td>boolean</td><td>Fixation status (True/False)</td></tr><tr><td>14</td><td>right_velocity</td><td>velocity</td><td>px</td><td>Gaze velocity</td></tr><tr><td>15</td><td>right_fixation_timestamp</td><td>timestamp</td><td>s</td><td>Time of fixation start</td></tr><tr><td>16</td><td>right_fixation_elapsed</td><td>duration</td><td>s</td><td>Fixation duration</td></tr><tr><td>17</td><td>right_filtered_gaze_x</td><td>filtered_gaze</td><td>normalized</td><td>Smoothed X gaze position</td></tr><tr><td>18</td><td>right_filtered_gaze_y</td><td>filtered_gaze</td><td>normalized</td><td>Smoothed Y gaze position</td></tr><tr><td><strong>Screen Data</strong></td><td></td><td></td><td></td><td></td></tr><tr><td>19</td><td>screen_width</td><td>screen</td><td>px</td><td>Screen width</td></tr><tr><td>20</td><td>screen_height</td><td>screen</td><td>px</td><td>Screen height</td></tr><tr><td>21</td><td>timestamp</td><td>timestamp</td><td>s</td><td>Data timestamp</td></tr><tr><td>22</td><td>local_clock</td><td>timestamp</td><td>s</td><td>Local system clock</td></tr></tbody></table>
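To make the channel ordering concrete, here is a hypothetical helper that assembles one 22-value sample in the order of the table. The function, its arguments, and the use of `time.monotonic()` as a stand-in for the local clock are illustrative, not part of pyETA.

```python
import time

# Channel order of the 22-channel sample, matching the table above
CHANNELS = [
    "left_gaze_x", "left_gaze_y", "left_pupil_diameter", "left_fixated",
    "left_velocity", "left_fixation_timestamp", "left_fixation_elapsed",
    "left_filtered_gaze_x", "left_filtered_gaze_y",
    "right_gaze_x", "right_gaze_y", "right_pupil_diameter", "right_fixated",
    "right_velocity", "right_fixation_timestamp", "right_fixation_elapsed",
    "right_filtered_gaze_x", "right_filtered_gaze_y",
    "screen_width", "screen_height", "timestamp", "local_clock",
]

def build_sample(left_eye, right_eye, screen_size, timestamp):
    """Hypothetical: flatten nine per-eye values (in table order) plus
    screen info into one 22-value sample ready for an LSL outlet."""
    sample = list(left_eye) + list(right_eye) + [
        screen_size[0], screen_size[1], timestamp, time.monotonic()]
    assert len(sample) == len(CHANNELS) == 22
    # in pyETA this list would be pushed to the LSL outlet each tick
    return sample
```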

***

### Stream Reading for Plotting

* **Script:** `reader.py`
* **Class:** `StreamThread`
* Resolves and connects to the `'tobii_gaze_fixation'` stream
* Continuously pulls samples
* Parses gaze data on request:

```python
# For gaze data
StreamThread.get_data()

# For fixation data
StreamThread.get_data(fixation=True)
```
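As an illustration of how a pulled sample maps back to the table's channels, the following hypothetical parser slices one 22-value sample; the indices follow the channel table above, but the function itself is not part of pyETA.

```python
def parse_sample(sample, fixation=False):
    """Hypothetical: split one pulled 22-value sample into gaze or
    fixation fields, mirroring get_data() / get_data(fixation=True)."""
    left, right = sample[0:9], sample[9:18]  # nine channels per eye
    if fixation:
        # fixation status, velocity, start timestamp, elapsed (channels 4-7 per eye)
        return {"left": left[3:7], "right": right[3:7]}
    # raw gaze x, y (channels 1-2 per eye)
    return {"left": left[0:2], "right": right[0:2]}
```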

