alphacsc.learn_d_z_multi

alphacsc.learn_d_z_multi(X, n_atoms, n_times_atom, n_iter=60, n_jobs=1, lmbd_max='fixed', reg=0.1, loss='l2', loss_params={'gamma': 0.1, 'ordar': 10, 'sakoe_chiba_band': 10}, rank1=True, uv_constraint='separate', eps=1e-10, algorithm='batch', algorithm_params={}, detrending=None, detrending_params={}, solver_z='l-bfgs', solver_z_kwargs={}, solver_d='alternate_adaptive', solver_d_kwargs={}, D_init=None, D_init_params={}, unbiased_z_hat=False, use_sparse_z=False, stopping_pobj=None, raise_on_increase=True, verbose=10, callback=None, random_state=None, name='DL', window=False, sort_atoms=False)

Multivariate Convolutional Sparse Coding with optional rank-1 constraint

Parameters
X : array, shape (n_trials, n_channels, n_times)

The data on which to perform CSC.

n_atoms : int

The number of atoms to learn.

n_times_atom : int

The temporal support (length in time samples) of the atoms.

reg : float

The regularization parameter.

lmbd_max : ‘fixed’ | ‘scaled’ | ‘per_atom’ | ‘shared’

If not ‘fixed’, adapt the regularization as a ratio of lambda_max:

  • ‘scaled’: the regularization parameter is fixed as a ratio of its maximal value at init, i.e. reg_used = reg * lmbd_max(uv_init)

  • ‘shared’: the regularization parameter is set at each iteration as a ratio of its maximal value for the current dictionary estimate, i.e. reg_used = reg * lmbd_max(uv_hat)

  • ‘per_atom’: the regularization parameter is set per atom and at each iteration as a ratio of its maximal value for this atom, i.e. reg_used[k] = reg * lmbd_max(uv_hat[k])
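As a rough numpy sketch of how the modes differ (the lmbd_max values below are made up; in alphacsc they are computed from the data and the dictionary estimate), the modes only change which lambda_max the ratio reg multiplies:

```python
import numpy as np

reg = 0.1  # the user-supplied ratio

# Hypothetical per-atom lambda_max values; in the library these are
# computed from the data and the dictionary estimate.
lmbd_max_atoms = np.array([4.0, 2.5, 1.0])

# 'scaled' (computed once from the initial dictionary) and 'shared'
# (recomputed each iteration from the current dictionary) both use a
# single global value:
reg_shared = reg * lmbd_max_atoms.max()

# 'per_atom' keeps one regularization value per atom:
reg_per_atom = reg * lmbd_max_atoms
```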

n_iter : int

The number of coordinate-descent iterations.

n_jobs : int

The number of parallel jobs.

loss : ‘l2’ | ‘dtw’

Loss for the data-fit term: either the l2 norm or the soft-DTW loss.

loss_params : dict

Parameters of the loss.

rank1 : boolean

If set to True, learn rank-1 dictionary atoms.
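To illustrate what rank-1 means here (a minimal numpy sketch, not library code): each atom k is parametrized by a spatial pattern u and a temporal pattern v, and the full multivariate atom is their outer product:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times_atom = 5, 32

u = rng.standard_normal(n_channels)     # spatial pattern (one value per channel)
v = rng.standard_normal(n_times_atom)   # temporal pattern (the waveform)

# The full multivariate atom, shape (n_channels, n_times_atom), has rank 1.
D_k = np.outer(u, v)

# This is also why rank-1 atoms can be stored flattened, as in the
# returned uv_hat of shape (n_atoms, n_channels + n_times_atom).
uv_k = np.concatenate([u, v])
```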

uv_constraint : str in {‘joint’ | ‘separate’}

The kind of norm constraint on the atoms. If ‘joint’, the constraint is norm_2([u, v]) <= 1. If ‘separate’, the constraint is norm_2(u) <= 1 and norm_2(v) <= 1.

eps : float

Stopping criterion. If the decrease of the cost after a uv update and a z update is smaller than eps, the algorithm returns.

algorithm : ‘batch’ | ‘greedy’ | ‘online’

Dictionary learning algorithm.

algorithm_params : dict

Parameters for the global algorithm used to learn the dictionary:

alpha : float

Forgetting factor for online learning. If set to 0, the learning is stochastic and each D-step is independent of the previous steps. When set to 1, the previous values of z_hat - computed with a different dictionary - have the same weight as the current one. This factor should be large enough to ensure convergence, but too large a factor can lead to sub-optimal minima.

batch_selection : ‘random’ | ‘cyclic’

The batch selection strategy for online learning. The batches are selected either randomly among all samples (without replacement) or in a cyclic way.

batch_size : int in [1, n_trials]

Size of the batch used in online learning. Increasing it regularizes the dictionary learning, as there is less variance between successive estimates, but it also increases the computational cost, since more coding signals z_hat must be estimated at each iteration.
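One way to sketch the two batch selection strategies in numpy (an illustration of the idea, not the library's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, batch_size = 10, 3
indices = np.arange(n_trials)

# 'random': draw the trials in shuffled order, without replacement
shuffled = rng.permutation(n_trials)
random_batches = [shuffled[i:i + batch_size]
                  for i in range(0, n_trials, batch_size)]

# 'cyclic': walk through the trials in their original order,
# wrapping around at each pass over the data
cyclic_batches = [indices[i:i + batch_size]
                  for i in range(0, n_trials, batch_size)]
```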

solver_z : str

The solver to use for the z update. Options are ‘l-bfgs’ (default) | ‘lgcd’.

solver_z_kwargs : dict

Additional keyword arguments to pass to update_z_multi.

solver_d : str

The solver to use for the d update. Options are ‘alternate’ | ‘alternate_adaptive’ (default) | ‘joint’.

solver_d_kwargs : dict

Additional keyword arguments to provide to update_d.

D_init : str or array, shape (n_atoms, n_channels + n_times_atom) or (n_atoms, n_channels, n_times_atom)

The initial atoms, or an initialization scheme in {‘kmeans’ | ‘ssa’ | ‘chunks’ | ‘random’}.

D_init_params : dict

Dictionary of parameters for the kmeans init method.

unbiased_z_hat : boolean

If set to True, the values of the non-zero coefficients in the returned z_hat are recomputed with reg=0 on the frozen support.
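The idea can be sketched on a 1-D, single-atom toy problem (an illustration of the debiasing principle, not the library's code): keep the support of the estimate fixed and refit the non-zero values by least squares, without the l1 penalty that shrank them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_times_atom, n_times = 8, 64
n_times_valid = n_times - n_times_atom + 1

d = rng.standard_normal(n_times_atom)

# Noiseless toy signal generated by a few activations of a single atom.
z_true = np.zeros(n_times_valid)
z_true[[5, 20, 40]] = [1.0, -2.0, 0.5]
x = np.convolve(z_true, d)

# A biased estimate, as an l1 solver might return: correct support,
# but values shrunk toward zero by the penalty.
z_biased = 0.7 * z_true

# Debias: least squares restricted to the frozen non-zero support.
# Each column of A is the atom shifted to one support position.
support = np.flatnonzero(z_biased)
A = np.stack([np.convolve(np.eye(n_times_valid)[t], d) for t in support],
             axis=1)
z_unbiased = np.zeros_like(z_biased)
z_unbiased[support] = np.linalg.lstsq(A, x, rcond=None)[0]
```

On this noiseless toy example the refit recovers the true activation values exactly.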

use_sparse_z : boolean

Use sparse lil_matrices to store the activations.

verbose : int

The verbosity level.

callback : func

A callback function called at the end of each loop of the coordinate descent.

random_state : int | None

The random state.

raise_on_increase : boolean

Raise an error if the objective function increases.

window : boolean

If True, re-parametrizes the atoms with a temporal Tukey window.
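For intuition (a sketch using scipy's Tukey window; the library's exact parametrization may differ), the window tapers the temporal pattern so the atom starts and ends near zero:

```python
import numpy as np
from scipy.signal.windows import tukey

n_times_atom = 32
v = np.ones(n_times_atom)   # temporal pattern of an atom

# Tukey (tapered-cosine) window: cosine ramps at both ends,
# flat in the middle.
win = tukey(n_times_atom)
v_windowed = v * win
```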

sort_atoms : boolean

If True, the atoms are sorted by explained variance.
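A toy sketch of the idea (here explained variance is proxied by the energy of each atom's activations, which is a simplification of the actual criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_atoms, n_times_valid = 6, 4, 50

# Toy activations with very different scales per atom.
scales = np.array([0.1, 2.0, 0.5, 1.0])
z_hat = rng.standard_normal((n_trials, n_atoms, n_times_valid)) \
    * scales[None, :, None]

# Rank atoms by the energy of their activations, most important first.
energy = (z_hat ** 2).sum(axis=(0, 2))
order = np.argsort(energy)[::-1]
z_sorted = z_hat[:, order, :]
```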

Returns
pobj : list

The objective function value at each step of the coordinate descent.

times : list

The cumulative time for each iteration of the coordinate descent.

uv_hat : array, shape (n_atoms, n_channels + n_times_atom)

The atoms learned from the data.

z_hat : array, shape (n_trials, n_atoms, n_times_valid)

The sparse activation matrix.

reg : float

Regularization parameter used.