Your First Analysis
You have a recording. Maybe it’s one you captured this morning from the Neurofield Q20 sitting on your desk. Maybe it’s a sample file you downloaded to try the software before committing your clinical data to it. Either way, the next twenty minutes will take you from a raw EEG file to a structured clinical reading—spectral content, temporal dynamics, connectivity, source localization, and perturbation-response patterns, all organized into a step-by-step dashboard you can read like a clinical narrative.
Here’s how that works.
Starting a New Session
Open your workspace and either select an existing subject or create a new one. Then create a new session—give it a date, and point it to the folder on your computer that contains the recording files for this visit.
The application scans the folder and discovers what’s there. It classifies each file by condition—resting eyes-open, resting eyes-closed, ERP task data—based on the file names and directory structure. You’ll see a summary of what it found before anything is processed. If a file is misclassified, or if something you expected to see is missing, you can correct it here.
Most recordings from the Neurofield Q20/Q21 are discovered and classified cleanly without any manual intervention. If you’re importing from a different system, the File Formats page covers what to expect and what to watch for.
Choosing Your Settings
Before you run the analysis, the session setup screen shows you a collapsible set of Pipeline Options. These control which analysis modules the pipeline will run on your recording.
Vigilance Analysis assesses arousal state and drowsiness regulation across the recording. How the nervous system transitions between vigilance stages—and whether those transitions are smooth or abrupt—tells you something about state regulation that power spectra alone can’t reveal.
Connectivity Analysis computes coherence and phase relationships between electrode sites using debiased weighted phase lag index. This is the analysis that shows you which regions are communicating and which are isolated—the functional architecture beneath the spectra.
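If you want to see what the underlying measure is, here is a minimal sketch of debiased wPLI computed with the open-source MNE-Connectivity package. It assumes a cleaned `epochs` object and uses an illustrative alpha-band range; it is not the workstation’s internal code or its default parameters.

```python
# Illustrative sketch using MNE-Connectivity, not the workstation's internals.
# Assumes `epochs` is an mne.Epochs object of cleaned, re-referenced EEG.
from mne_connectivity import spectral_connectivity_epochs

con = spectral_connectivity_epochs(
    epochs,
    method="wpli2_debiased",  # debiased weighted phase lag index
    mode="multitaper",
    fmin=8.0, fmax=12.0,      # e.g. the alpha band
    faverage=True,            # one value per channel pair for the band
)
alpha_dwpli = con.get_data(output="dense")[:, :, 0]  # channels x channels (lower triangle)
```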
Source Connectivity extends connectivity estimation into source space. It’s more computationally expensive and adds several minutes to processing time, but it gives you connectivity between estimated cortical sources rather than scalp electrodes. Worth enabling when you want source-level network analysis; fine to skip for a quick first look.
HRV Analysis derives heart rate variability metrics from the PPG channel in the Neurofield headset. This gives you autonomic nervous system data alongside the EEG—a window into regulatory capacity that the brain data alone doesn’t provide.
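Once pulse peaks have been detected on the PPG channel, the basic time-domain HRV metrics are simple to compute. A rough sketch, assuming `peak_times_s` is an array of pulse-peak times in seconds from whatever peak detector you use:

```python
import numpy as np

def hrv_time_domain(peak_times_s):
    """Basic time-domain HRV metrics from PPG pulse-peak times (in seconds)."""
    ibi_ms = np.diff(peak_times_s) * 1000.0         # inter-beat intervals, ms
    sdnn = np.std(ibi_ms, ddof=1)                   # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))  # beat-to-beat variability
    return {
        "mean_hr_bpm": 60000.0 / ibi_ms.mean(),
        "sdnn_ms": sdnn,
        "rmssd_ms": rmssd,
    }
```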
The defaults are sensible. Leave them on unless you have a specific reason to skip a module. For your first recording, run everything—you want to see the full picture before deciding what matters most to your clinical workflow.
Workspace-Level Defaults
If you find yourself adjusting the same settings for every session, you can set workspace-level defaults through Settings → Pipeline. This is where you tune filter bands (the default 0.5–45 Hz bandpass covers the clinically relevant range), choose your notch frequency (60 Hz for North America, 50 Hz for Europe), set ICA classification thresholds, and select your reference method. These defaults apply to all new sessions automatically—you can still override them per-session when you need to.
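The application’s settings format is internal to the product, but conceptually the workspace defaults reduce to a handful of values like the ones below. The key names are hypothetical, shown only to make the options concrete:

```python
# Hypothetical illustration of typical pipeline defaults -- not the
# application's actual configuration format or key names.
pipeline_defaults = {
    "bandpass_hz": (0.5, 45.0),  # clinically relevant EEG range
    "notch_hz": 60.0,            # 50.0 for recordings made in Europe
    "reference": "average",
    "ica_review": True,          # pause for manual component review
}
```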
ICA Component Review
One setting worth understanding early: ICA component review. When this option is enabled, the pipeline pauses after ICA decomposition and classification so you can inspect every independent component, see how it was classified, and approve or override the automated decisions before the analysis proceeds.
We recommend enabling this for your first several recordings. Not because the automated classification is unreliable—it’s good—but because reviewing components teaches you what clean brain sources look like in this decomposition, what artifact components look like, and where the boundary cases are. That intuition makes you a better reader of every analysis the workstation produces, even when you eventually turn review off and let the automation run.
The Processing Pipeline Runs
Click Analyze, and a progress screen shows each processing stage in real time. Here’s what’s happening behind the scenes, in the order it happens:
Filtering removes electrical noise and drift from the raw signal. A bandpass filter keeps the frequencies you care about (by default, 0.5–45 Hz) and a notch filter removes powerline interference at 60 Hz and its harmonics. This is the same first step you’d take in any EEG processing workflow—the Coherence Workstation just handles it automatically with parameters you can verify and adjust.
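If you wanted to reproduce this step yourself, the equivalent operation in MNE-Python looks roughly like the sketch below (it assumes `raw` is an `mne.io.Raw` object; the workstation applies the same kind of filters internally, not this exact code):

```python
# Illustrative bandpass + notch filtering in MNE-Python.
import numpy as np

raw_filt = raw.copy()
raw_filt.filter(l_freq=0.5, h_freq=45.0)             # keep 0.5-45 Hz
raw_filt.notch_filter(freqs=np.arange(60, 181, 60))  # 60 Hz and harmonics
```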
Bad channel detection identifies channels that aren’t giving you usable data: a flat channel (the electrode lost contact), an excessively noisy channel (poor impedance or a bad connection), or a channel that doesn’t correlate with its neighbors (something is wrong with the electrode itself, not with the brain beneath it). Detected bad channels are interpolated from their neighbors—reconstructed mathematically rather than dropped, so your montage stays complete.
Re-referencing establishes a common reference so that voltage comparisons between channels are meaningful. The workstation detects the recording reference your amplifier used and applies an average reference, which is standard practice for the connectivity and source localization analyses that follow.
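A simplified sketch of these two steps in MNE-Python, using a flat-channel check as a stand-in for the workstation’s fuller set of detection criteria (interpolation needs a montage with electrode positions):

```python
import mne
import numpy as np

picks = mne.pick_types(raw_filt.info, eeg=True)
data = raw_filt.get_data(picks=picks)

# Flag flat channels (near-zero variance) as one simple example criterion.
flat_idx = np.where(np.std(data, axis=1) < 1e-7)[0]
raw_filt.info["bads"] = [raw_filt.ch_names[picks[i]] for i in flat_idx]

raw_filt.interpolate_bads()            # reconstruct from neighbors, keep the montage complete
raw_filt.set_eeg_reference("average")  # common average reference
```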
ASR (Artifact Subspace Reconstruction) handles the grossest transient artifacts automatically. Large muscle bursts from a jaw clench, head movement, electrode pops—ASR identifies signal segments that deviate dramatically from clean EEG and reconstructs them statistically rather than simply deleting them. Think of it as a first-pass cleanup that preserves as much brain signal as possible while removing the contamination that would otherwise distort every downstream analysis.
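For reference, the open-source `asrpy` package implements the same family of algorithm. A minimal sketch, assuming asrpy is installed; the workstation’s implementation and cutoff may differ:

```python
# Illustrative ASR pass with asrpy (assumed available), not the workstation's code.
import asrpy

asr = asrpy.ASR(sfreq=raw_filt.info["sfreq"], cutoff=20)  # cutoff in SDs from calibration stats
asr.fit(raw_filt)                  # learn what clean signal looks like
raw_asr = asr.transform(raw_filt)  # reconstruct high-variance segments
```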
ICA decomposition separates your multi-channel recording into statistically independent sources. Some of these sources correspond to genuine brain activity—a posterior alpha generator, a frontal midline theta source, a sensorimotor mu rhythm. Others are artifacts that have been mathematically isolated from the brain signal: the characteristic spatial pattern of an eye blink, the broadband high-frequency signature of a muscle contraction, the rhythmic pulse of a heartbeat.
ICLabel classification runs a trained neural network classifier on each independent component and assigns probabilities: brain, eye, muscle, heart, line noise, channel noise, or other. Components with high brain probability are kept; obvious artifacts are flagged for rejection. This classification is the same system used in EEGLAB’s ICLabel plugin—it’s well-validated and widely used in the research community.
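The open-source equivalents of these two stages are MNE-Python’s ICA and the `mne-icalabel` package. A sketch with illustrative parameters (extended infomax is the decomposition ICLabel was trained against):

```python
# Illustrative ICA decomposition + ICLabel classification; parameters are examples.
from mne.preprocessing import ICA
from mne_icalabel import label_components

ica = ICA(n_components=0.99, method="infomax",
          fit_params=dict(extended=True), random_state=42)
ica.fit(raw_asr)

labels = label_components(raw_asr, ica, method="iclabel")
print(labels["labels"])        # e.g. ['brain', 'eye blink', 'muscle artifact', ...]
print(labels["y_pred_proba"])  # classifier confidence for each assigned label
```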
The software is doing what you’d do by hand if you had unlimited patience—but systematically, reproducibly, and with a complete paper trail of every decision. Every parameter is logged. Every classification is recorded. Nothing happens in a black box.
Cleaning the Signal: Artifacts and Manual Review
Automated artifact handling gets you most of the way there. ASR catches the transient contamination. ICLabel classifies the sustained sources. But automated methods have limits, and the Coherence Workstation gives you a manual review pass where your clinical eye has the final say.
After the automated processing completes, the artifact review screen shows you the cleaned signal with threshold-based detection already applied. Three filters work in parallel: a voltage threshold catches remaining high-amplitude spikes, a slow-wave filter identifies residual drift in the 0.5–4 Hz range, and a fast-wave filter flags high-frequency contamination between 20 and 35 Hz. Segments that exceed any threshold are highlighted on the signal trace.
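Conceptually, the voltage-threshold filter is doing something like the sketch below; the one-second window and 100 µV limit are arbitrary examples, not the workstation’s defaults:

```python
import numpy as np

sfreq = int(raw_asr.info["sfreq"])
data = raw_asr.get_data(picks="eeg") * 1e6  # volts -> microvolts
win = sfreq                                 # 1-second windows

flagged = []
for start in range(0, data.shape[1] - win, win):
    seg = data[:, start:start + win]
    if np.max(np.abs(seg)) > 100:           # e.g. a 100 uV voltage threshold
        flagged.append((start / sfreq, (start + win) / sfreq))
```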
You can adjust the thresholds—making them more or less aggressive depending on the quality of your recording. You can draw additional exclusion regions directly on the signal if you see something the automated filters missed. And you can remove automated selections you disagree with. This is your clinical judgment layer on top of the automated cleaning. The workstation makes suggestions; you make decisions.
The summary panel shows you exactly how much data was excluded, broken down by filter type and manual selection. If more than a small percentage of your recording is being rejected, that’s worth noticing—it may mean the recording quality needs attention, or the thresholds need adjustment.
Reviewing ICA Components
If you enabled ICA component review, this is where the pipeline pauses and hands control to you. If you didn’t, the automated classifications are applied directly and processing continues—skip ahead to the next section.
The ICA review screen presents every independent component the decomposition extracted. A strip of thumbnails runs along the side, each showing a miniature topographic map with a color-coded badge: green for components classified as brain, red for eye or muscle artifacts, and intermediate colors for less certain classifications.
Click any component to see its full profile: the topographic map showing its spatial distribution across the scalp, its power spectrum, its estimated dipole source location in the brain, and the ICLabel classification probabilities. This is where you develop your intuition for what brain looks like versus what artifact looks like in an ICA decomposition.
What brain components look like: A clean brain component typically has a focused topography that corresponds to a recognizable cortical region—posterior alpha shows a strong occipital/parietal distribution, frontal midline theta concentrates at Fz/FCz, sensorimotor mu shows a clear lateralized central pattern. The power spectrum has identifiable peaks in physiologically meaningful frequency bands. The dipole fits well inside the brain volume. ICLabel assigns it a high brain probability.
What artifact components look like: Eye blink components have the unmistakable frontal topography you’ve seen a thousand times—strong at Fp1/Fp2, falling off sharply with distance. Lateral eye movement shows a left-right dipolar pattern. Muscle components are typically peripheral, concentrated over temporal and frontal electrodes, with broadband high-frequency power and no clear spectral peaks. Heart components show a rhythmic signature that doesn’t match any brain oscillation.
When to override the classifier: ICLabel is good, but it’s not infallible. If a component is classified as “brain” but has the frontal bilateral topography of a blink artifact, reject it. If a component is flagged as “muscle” but shows clear posterior alpha with a sensible occipital dipole, keep it. Your eyes—trained on years of EEG—are the final arbiter. The classifier is a starting point, not a verdict.
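In open-source terms, your review decisions become an exclusion list layered over the classifier’s output. A sketch continuing from the ICLabel example above; the component indices and probability cutoff are made up for illustration:

```python
# Start from the automated labels, then apply manual overrides.
auto_exclude = [
    idx for idx, (label, prob) in enumerate(zip(labels["labels"], labels["y_pred_proba"]))
    if label not in ("brain", "other") and prob > 0.8
]

keep_anyway = [7]    # e.g. a "muscle" component you judge to be posterior alpha
reject_anyway = [2]  # e.g. a "brain" component with a clear blink topography

ica.exclude = sorted((set(auto_exclude) - set(keep_anyway)) | set(reject_anyway))
raw_clean = ica.apply(raw_asr.copy())  # remove the rejected components from the data
```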
Each condition (eyes-open, eyes-closed) has independent component selections. A component that’s artifact in one condition might be brain in another—or more commonly, the same component is artifact in both, but you want the freedom to decide independently.
The Analysis Runs
After you approve your component selections—or after the automated classifications are applied, if you skipped review—the second phase begins. The software applies your component rejections to the cleaned data, then runs every analysis stage you enabled: spectral analysis, connectivity estimation, source localization, vigilance staging, HRV computation, ERP processing, time-frequency decomposition, and more.
Another progress screen tracks each stage. This phase takes a few minutes depending on your recording length and how many modules you enabled. Source connectivity and surrogate significance testing are the slowest stages—if you’re impatient, those are the first candidates to skip on a quick first pass.
Your Dashboard Is Ready
When processing finishes, the dashboard opens to Case Overview—your starting point for the structured clinical reading. The left sidebar lists every analysis step, organized in the sequence that mirrors how a structured reading unfolds: from raw signal quality through spectral content, temporal dynamics, connectivity, source architecture, and perturbation-response patterns.
Each step builds on what came before. The order isn’t arbitrary—it reflects the logic of moving from simpler descriptions (what does the spectrum look like?) to more complex ones (how are regions communicating? how does the system respond to challenge?). You can jump to any step directly, but working through them in order—at least your first time—gives you the full structural picture.
The Dashboard Overview page covers the interface in detail: how to navigate between steps, what the interactive visualizations show, how the condition tabs work, and how to engage the AI Research Assistant.
Where to Go from Here
If you’re new to structured EEG reading, start with the Dashboard Overview, then work through the Reading the Analysis section in order. Each page teaches you to read one analysis stage—what it measures, why it matters clinically, and how to connect its findings to the stages before and after it.
If you’re experienced with QEEG and want to explore at your own pace, jump to the analysis stages you care about most. The AI Research Assistant can help you think through what you’re seeing in the context of your specific recording.
If you want to understand the interpretive framework that organizes the whole reading—the Three-Layer Clinical Model, the five organizational dynamics, the coherence basin model—start with Why Interpretation, Not Classification. The framework isn’t required to use the software, but it transforms the experience from “looking at metrics” to “reading a nervous system.”