.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/multicsc/sample_evoked_response_dicodile.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_multicsc_sample_evoked_response_dicodile.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_multicsc_sample_evoked_response_dicodile.py:

=========================================================================
Extracting artifact and evoked response atoms from the MNE sample dataset
=========================================================================

This example illustrates how to learn rank-1 [1]_ atoms on the multivariate
sample dataset from :code:`mne`. We display a selection of atoms, featuring
heartbeat and eyeblink artifacts, two atoms of evoked responses, and a
non-sinusoidal oscillation.

.. [1] Dupré La Tour, T., Moreau, T., Jas, M., & Gramfort, A. (2018).
   `Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals
   <https://arxiv.org/abs/1805.09654v2>`_. Advances in Neural Information
   Processing Systems (NIPS).

.. GENERATED FROM PYTHON SOURCE LINES 17-27

.. code-block:: Python

    # Authors: Thomas Moreau <thomas.moreau@inria.fr>
    #          Mainak Jas <mainak.jas@telecom-paristech.fr>
    #          Tom Dupre La Tour <tom.duprelatour@telecom-paristech.fr>
    #          Alexandre Gramfort <alexandre.gramfort@inria.fr>
    #          Romain Primet <romain.primet@inria.fr>
    #
    # License: BSD (3-clause)

.. GENERATED FROM PYTHON SOURCE LINES 28-29

Let us first define the parameters of our model.

.. GENERATED FROM PYTHON SOURCE LINES 29-43

.. code-block:: Python

    # sampling frequency. The signal will be resampled to match this.
    sfreq = 150.
    # Define the shape of the dictionary
    n_atoms = 40
    n_times_atom = int(round(sfreq * 1.0))  # 1 s

    # Regularization parameter which controls sparsity
    reg = 0.1

    # number of processors for parallel computing
    n_jobs = 5

.. GENERATED FROM PYTHON SOURCE LINES 44-48

Next, we define the parameters for the multivariate CSC model. We use
:code:`BatchCDL` and not :code:`GreedyCDL` below, as :code:`dicodile` does
not yet support greedy learning (i.e. adding one atom at a time).

.. GENERATED FROM PYTHON SOURCE LINES 48-87

.. code-block:: Python

    from alphacsc import BatchCDL  # noqa: E402

    cdl = BatchCDL(
        # Shape of the dictionary
        n_atoms=n_atoms,
        n_times_atom=n_times_atom,
        # Request a rank1 dictionary with unit norm temporal and spatial maps
        rank1=True, uv_constraint='auto',
        # apply a temporal window reparametrization
        window=True,
        # at the end, refit the activations with fixed support and no
        # regularization to remove the amplitude bias
        unbiased_z_hat=True,
        # Initialize the dictionary with random chunks from the data
        D_init='chunk',
        # rescale the regularization parameter to be a percentage of lambda_max
        lmbd_max="scaled", reg=reg,
        # Number of iterations for the alternate minimization and convergence
        # threshold
        n_iter=100, eps=1e-4,
        # solver for the z-step
        solver_z="dicodile",
        solver_z_kwargs={'tol': 1e-3, 'max_iter': 100},
        # solver for the d-step
        solver_d='alternate_adaptive',
        solver_d_kwargs={'max_iter': 300},
        # sort atoms by explained variances
        sort_atoms=True,
        # Technical parameters
        verbose=1, random_state=0, n_jobs=n_jobs)

.. GENERATED FROM PYTHON SOURCE LINES 88-92

Load the sample data from :code:`mne` and select the gradiometer channels.
The MNE sample data contains MEG recordings of a subject with visual and
auditory stimuli. We load the data using utilities from :code:`mne` as a Raw
object and select the gradiometers from the signal.

.. GENERATED FROM PYTHON SOURCE LINES 92-107
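The ``rank1=True`` constraint requested above means that each spatio-temporal
atom factorizes as the outer product of a spatial map ``u_k`` and a temporal
waveform ``v_k``, both with unit norm. A minimal self-contained sketch of this
factorization with synthetic shapes (hypothetical values, not the learned
dictionary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times_atom = 203, 150

# Draw a random spatial map and temporal waveform, then normalize them
u_k = rng.standard_normal(n_channels)
v_k = rng.standard_normal(n_times_atom)
u_k /= np.linalg.norm(u_k)
v_k /= np.linalg.norm(v_k)

# The full atom is the rank-1 outer product: one waveform shared by all
# channels, scaled per channel by the spatial map
D_k = np.outer(u_k, v_k)  # shape (n_channels, n_times_atom)
print(D_k.shape, np.linalg.matrix_rank(D_k))  # (203, 150) 1
```

This factorization is what allows an atom to be displayed as one topomap plus
one temporal pattern, as done later in this example.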
.. code-block:: Python

    import os  # noqa: E402

    import mne  # noqa: E402
    import numpy as np  # noqa: E402

    print("Loading the data...", end='', flush=True)
    data_path = mne.datasets.sample.data_path()
    subjects_dir = os.path.join(data_path, "subjects")
    data_dir = os.path.join(data_path, 'MEG', 'sample')
    file_name = os.path.join(data_dir, 'sample_audvis_raw.fif')

    raw = mne.io.read_raw_fif(file_name, preload=True, verbose=False)
    raw.pick_types(meg='grad', eeg=False, eog=False, stim=True)
    print('done')

.. GENERATED FROM PYTHON SOURCE LINES 108-111

Then, we remove the powerline artifacts and high-pass filter to remove the
slow drift, which can impact the CSC results. The signal is also resampled
to 150 Hz to reduce the computational burden.

.. GENERATED FROM PYTHON SOURCE LINES 111-119

.. code-block:: Python

    print("Preprocessing the data...", end='', flush=True)
    raw.notch_filter(np.arange(60, 181, 60), n_jobs=n_jobs, verbose=False)
    raw.filter(2, None, n_jobs=n_jobs, verbose=False)  # high-pass above 2 Hz
    raw = raw.resample(sfreq, npad='auto', n_jobs=n_jobs, verbose=False)
    print('done')

.. GENERATED FROM PYTHON SOURCE LINES 120-121

Load the data as an array and reshape it to be 3D.

.. GENERATED FROM PYTHON SOURCE LINES 121-129

.. code-block:: Python

    X = raw.get_data(picks=['meg'])
    info = raw.copy().pick_types(meg=True).info  # info of the loaded channels
    print(info)

    X_split = X[np.newaxis, :, :]
    print(X_split.shape)

.. GENERATED FROM PYTHON SOURCE LINES 130-131

Fit the model and learn rank1 atoms.

.. GENERATED FROM PYTHON SOURCE LINES 131-134

.. code-block:: Python

    cdl.fit(X_split)

.. GENERATED FROM PYTHON SOURCE LINES 135-137

Then we call the `transform` method, which returns the sparse codes
associated with X, without changing the dictionary learned during the `fit`.

.. GENERATED FROM PYTHON SOURCE LINES 137-140

.. code-block:: Python

    z_hat = cdl.transform(X[np.newaxis, :, :])
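The sparse codes returned by `transform` are, by construction, mostly zero:
each atom is only active at a few time points. As a self-contained
illustration of how such sparsity can be quantified, here a synthetic
activation array stands in for the actual ``z_hat``:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activations: shape (n_trials, n_atoms, n_times_valid)
z = np.zeros((1, 40, 10000))
idx = rng.choice(z.size, size=200, replace=False)  # a few activation events
z.flat[idx] = rng.uniform(0.5, 2.0, size=200)

# Fraction of nonzero coefficients -- the lower, the sparser the code
density = np.count_nonzero(z) / z.size
print(f"density: {density:.2%}")  # density: 0.05%
```

On real data, the density of ``z_hat`` depends on the regularization ``reg``:
a larger value yields sparser activations.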
.. GENERATED FROM PYTHON SOURCE LINES 141-146

Display a selection of atoms
----------------------------

We recognize a heartbeat artifact, an eyeblink artifact, two atoms of evoked
responses, and a non-sinusoidal oscillation.

.. GENERATED FROM PYTHON SOURCE LINES 146-196

.. code-block:: Python

    import matplotlib.pyplot as plt  # noqa: E402

    # preselected atoms of interest
    plotted_atoms = [0, 1, 2, 6, 4]

    n_plots = 3  # number of plots by atom
    n_columns = min(6, len(plotted_atoms))
    split = int(np.ceil(len(plotted_atoms) / n_columns))
    figsize = (4 * n_columns, 3 * n_plots * split)
    fig, axes = plt.subplots(n_plots * split, n_columns, figsize=figsize)

    for ii, kk in enumerate(plotted_atoms):

        # Select the axes to display the current atom
        print("\rDisplaying {}-th atom".format(kk), end='', flush=True)
        i_row, i_col = ii // n_columns, ii % n_columns
        it_axes = iter(axes[i_row * n_plots:(i_row + 1) * n_plots, i_col])

        # Select the current atom
        u_k = cdl.u_hat_[kk]
        v_k = cdl.v_hat_[kk]

        # Plot the spatial map of the atom using mne topomap
        ax = next(it_axes)
        mne.viz.plot_topomap(u_k, info, axes=ax, show=False)
        ax.set(title="Spatial pattern %d" % (kk, ))

        # Plot the temporal pattern of the atom
        ax = next(it_axes)
        t = np.arange(n_times_atom) / sfreq
        ax.plot(t, v_k)
        ax.set_xlim(0, n_times_atom / sfreq)
        ax.set(xlabel='Time (sec)', title="Temporal pattern %d" % kk)

        # Plot the power spectral density (PSD)
        ax = next(it_axes)
        psd = np.abs(np.fft.rfft(v_k, n=256)) ** 2
        frequencies = np.linspace(0, sfreq / 2.0, len(psd))
        ax.semilogy(frequencies, psd, label='PSD', color='k')
        ax.set(xlabel='Frequencies (Hz)', title="Power spectral density %d" % kk)
        ax.grid(True)
        ax.set_xlim(0, 30)
        ax.set_ylim(1e-4, 1e2)
        ax.legend()

    print("\rDisplayed {} atoms".format(len(plotted_atoms)).rjust(40))

    fig.tight_layout()
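In the convolutional model, the signal is approximated by summing, over atoms,
the sparse activations convolved with the atoms; for a rank-1 atom, channel
``p`` of the reconstruction is ``u_k[p]`` times the convolution of the
activations with the temporal waveform. A minimal self-contained numpy sketch
with synthetic sizes (hypothetical values, not the fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times_atom, n_times_valid = 4, 30, 200

# One rank-1 atom (spatial map u_k, temporal waveform v_k) and its sparse
# activation signal z_k with three well-separated events
u_k = rng.standard_normal(n_channels)
v_k = np.hanning(n_times_atom)
z_k = np.zeros(n_times_valid)
z_k[[20, 90, 150]] = 1.0

# Reconstruction: convolve the activations with the temporal waveform,
# then scale each channel by the spatial map
x_temporal = np.convolve(z_k, v_k)  # length n_times_valid + n_times_atom - 1
X_hat = u_k[:, None] * x_temporal[None, :]
print(X_hat.shape)  # (4, 229)
```

Because the events are separated by more than the atom length, the
reconstruction around each event is exactly a scaled copy of the atom.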
.. GENERATED FROM PYTHON SOURCE LINES 197-205

Display the evoked reconstructed envelope
-----------------------------------------

The MNE sample data contains data for auditory (event_id=1 and 2) and visual
stimuli (event_id=3 and 4). We extract the events now so that we can later
identify the atoms related to the different events. Note that the
convolutional sparse coding method does not need to know the events to learn
the atoms.

.. GENERATED FROM PYTHON SOURCE LINES 205-212

.. code-block:: Python

    event_id = [1, 2, 3, 4]
    events = mne.find_events(raw, stim_channel='STI 014')
    events = mne.pick_events(events, include=event_id)
    events[:, 0] -= raw.first_samp

.. GENERATED FROM PYTHON SOURCE LINES 213-218

For each atom (columns), and for each event (rows), we compute the envelope
of the reconstructed signal, align it with respect to the event onsets, and
take the average. For some atoms, the activations are correlated with the
events, leading to a large evoked envelope. The gray area corresponds to
values that are not statistically significant, computed with a surrogate
resampling procedure.

.. GENERATED FROM PYTHON SOURCE LINES 218-263

.. code-block:: Python

    from alphacsc.utils.signal import fast_hilbert  # noqa: E402
    from alphacsc.viz.epoch import plot_evoked_surrogates  # noqa: E402
    from alphacsc.utils.convolution import construct_X_multi  # noqa: E402

    # time window around the events.
    # Note that for the sample dataset, the inter-event time is around 0.5s.
    t_lim = (-0.1, 0.5)

    n_plots = len(event_id)
    n_columns = min(6, len(plotted_atoms))
    split = int(np.ceil(len(plotted_atoms) / n_columns))
    figsize = (4 * n_columns, 3 * n_plots * split)
    fig, axes = plt.subplots(n_plots * split, n_columns, figsize=figsize)

    for ii, kk in enumerate(plotted_atoms):

        # Select the axes to display the current atom
        print("\rDisplaying {}-th atom envelope".format(kk), end='', flush=True)
        i_row, i_col = ii // n_columns, ii % n_columns
        it_axes = iter(axes[i_row * n_plots:(i_row + 1) * n_plots, i_col])

        # Select the current atom
        v_k = cdl.v_hat_[kk]
        v_k_1 = np.r_[[1], v_k][None]
        z_k = z_hat[:, kk:kk + 1]
        X_k = construct_X_multi(z_k, v_k_1, n_channels=1)[0, 0]

        # compute the 'envelope' of the reconstructed signal X_k
        correlation = np.abs(fast_hilbert(X_k))

        # loop over all event IDs
        for this_event_id in event_id:
            this_events = events[events[:, 2] == this_event_id]
            # plotting function
            ax = next(it_axes)
            this_info = info.copy()
            event_info = dict(event_id=this_event_id, events=events)
            this_info['temp'] = event_info
            plot_evoked_surrogates(correlation, info=this_info, t_lim=t_lim,
                                   ax=ax, n_jobs=n_jobs,
                                   label='event %d' % this_event_id)
            ax.set(xlabel='Time (sec)', title="Evoked envelope %d" % kk)

    print("\rDisplayed {} atoms".format(len(plotted_atoms)).rjust(40))
    fig.tight_layout()

.. GENERATED FROM PYTHON SOURCE LINES 264-280

Display the equivalent dipole for a learned topomap
---------------------------------------------------

Finally, let us fit a dipole to one of the atoms. To fit a dipole, we need
the following:

* BEM solution: obtained by running the cortical reconstruction pipeline of
  Freesurfer, it specifies the conductivity of the different tissues in the
  head.

* Trans: an affine transformation matrix needed to bring the data from
  sensor space to head space. This is usually done by coregistration of the
  fiducials with the MRI.
* Noise covariance matrix: used to whiten the data so that the assumption of
  a Gaussian noise model with identity covariance matrix is satisfied.

We recommend that users consult the MNE documentation for further
information.

.. GENERATED FROM PYTHON SOURCE LINES 280-289

.. code-block:: Python

    subjects_dir = os.path.join(data_path, 'subjects')
    fname_bem = os.path.join(subjects_dir, 'sample', 'bem',
                             'sample-5120-bem-sol.fif')
    fname_trans = os.path.join(data_path, 'MEG', 'sample',
                               'sample_audvis_raw-trans.fif')
    fname_cov = os.path.join(data_path, 'MEG', 'sample',
                             'sample_audvis-cov.fif')

.. GENERATED FROM PYTHON SOURCE LINES 290-292

Let us construct an evoked object for MNE with the spatial patterns of the
atoms.

.. GENERATED FROM PYTHON SOURCE LINES 292-295

.. code-block:: Python

    evoked = mne.EvokedArray(cdl.u_hat_.T, info)

.. GENERATED FROM PYTHON SOURCE LINES 296-297

Fit a dipole to each of the atoms.

.. GENERATED FROM PYTHON SOURCE LINES 297-301

.. code-block:: Python

    dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans,
                         n_jobs=n_jobs, verbose=False)[0]

.. GENERATED FROM PYTHON SOURCE LINES 302-304

Plot the dipole fit for the 5th atom, which is linked to the mu-wave, and
display the goodness of fit.

.. GENERATED FROM PYTHON SOURCE LINES 304-332
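The goodness of fit (in percent) indicates how well a single dipole explains
each atom's spatial pattern, which can be used to rank the atoms. A
self-contained sketch with a hypothetical ``gof`` array (the real values come
from ``dip.gof`` after the fit above):

```python
import numpy as np

# Hypothetical goodness-of-fit values (percent), one per atom; the real
# values would come from dip.gof after mne.fit_dipole
gof = np.array([55.2, 91.7, 30.4, 78.9, 88.1])

# Atom indices sorted from best to worst dipolar fit
order = np.argsort(gof)[::-1]
print(order)  # [1 4 3 0 2]
```

Atoms with a high goodness of fit are well explained by a single focal source,
whereas artifacts such as eyeblinks typically fit a dipole poorly.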
.. code-block:: Python

    atom_dipole_idx = 4

    from mpl_toolkits.mplot3d import Axes3D  # noqa: E402, F401
    fig = plt.figure(figsize=(10, 4))

    # Display the dipole fit
    ax = fig.add_subplot(1, 3, 1, projection='3d')
    dip.plot_locations(fname_trans, 'sample', subjects_dir,
                       idx=atom_dipole_idx, ax=ax)
    ax.set_title('Atom #{} (GOF {:.2f}%)'.format(atom_dipole_idx,
                                                 dip.gof[atom_dipole_idx]))

    # Plot the spatial map
    ax = fig.add_subplot(1, 3, 2)
    mne.viz.plot_topomap(cdl.u_hat_[atom_dipole_idx], info, axes=ax)

    # Plot the temporal atom
    ax = fig.add_subplot(1, 3, 3)
    t = np.arange(n_times_atom) / sfreq
    ax.plot(t, cdl.v_hat_[atom_dipole_idx])
    ax.set_xlim(0, n_times_atom / sfreq)
    ax.set(xlabel='Time (sec)',
           title="Temporal pattern {}".format(atom_dipole_idx))

    fig.suptitle('')
    fig.tight_layout()


.. _sphx_glr_download_auto_examples_multicsc_sample_evoked_response_dicodile.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: sample_evoked_response_dicodile.ipynb <sample_evoked_response_dicodile.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: sample_evoked_response_dicodile.py <sample_evoked_response_dicodile.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: sample_evoked_response_dicodile.zip <sample_evoked_response_dicodile.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_