alphacsc.OnlineCDL#

class alphacsc.OnlineCDL(n_atoms, n_times_atom, reg=0.1, n_iter=60, n_jobs=1, solver_z='lgcd', solver_z_kwargs={}, unbiased_z_hat=False, solver_d='auto', solver_d_kwargs={}, rank1=True, window=False, uv_constraint='auto', lmbd_max='scaled', eps=1e-10, D_init=None, alpha=0.8, batch_size=1, batch_selection='random', verbose=10, random_state=None)#

Base class for convolutional dictionary learning algorithms.

Online algorithm for convolutional dictionary learning

Parameters
Problem Specs
n_atoms : int

The number of atoms to learn.

n_times_atom : int

The support of the atom.

rank1 : boolean

If set to True, learn rank 1 dictionary atoms.

window : boolean

If set to True, re-parametrizes the atoms with a temporal Tukey window.

uv_constraint : {'joint' | 'separate' | 'auto'}

The kind of norm constraint on the atoms if rank1=True. If rank1=False, it must be 'auto'; otherwise it can be:

  • 'joint': the constraint is ||[u, v]||_2 <= 1

  • 'separate': the constraint is ||u||_2 <= 1 and ||v||_2 <= 1. This is the default when rank1=True and uv_constraint='auto'.

sort_atoms : boolean

If True, the atoms are sorted by explained variance.
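The rank1 and window parametrizations can be sketched with plain NumPy (an illustration of the parametrization only, not alphacsc's internal code; all array names here are made up):

```python
import numpy as np
from scipy.signal import windows

rng = np.random.default_rng(0)
n_atoms, n_channels, n_times_atom = 3, 5, 32

# Rank-1 parametrization: each atom is the outer product of a spatial
# pattern u[k] (n_channels,) and a temporal pattern v[k] (n_times_atom,).
u = rng.normal(size=(n_atoms, n_channels))
v = rng.normal(size=(n_atoms, n_times_atom))

# 'separate' constraint: ||u_k||_2 <= 1 and ||v_k||_2 <= 1 for each atom.
u /= np.maximum(np.linalg.norm(u, axis=1, keepdims=True), 1)
v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1)

# window=True re-parametrizes the temporal pattern with a Tukey window,
# which tapers the borders of each atom toward zero.
v_windowed = v * windows.tukey(n_times_atom)[None, :]

# Equivalent full-rank dictionary: D[k] = outer(u[k], v[k]).
D = u[:, :, None] * v_windowed[:, None, :]
print(D.shape)  # (3, 5, 32)
```

Each slice D[k] has matrix rank 1, which is what distinguishes rank1=True from a full-rank dictionary of shape (n_atoms, n_channels, n_times_atom).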

Global algorithm

Online algorithm

alpha : float

Forgetting factor for online learning. If set to 0, the learning is stochastic: each D-step is independent from the previous steps. If set to 1, the previous values of z_hat, computed with earlier dictionaries, weigh as much as the current one. This factor should be large enough to ensure convergence, but too large a factor can lead to sub-optimal minima.

batch_selection : 'random' | 'cyclic'

The batch selection strategy for online learning. The batches are either selected randomly among all samples (without replacement) or in a cyclic way.

batch_size : int in [1, n_trials]

Size of the batch used in online learning. Increasing it regularizes the dictionary learning, as there is less variance between successive estimates, but it also increases the computational cost, as more coding signals z_hat must be estimated at each iteration.

n_iter : int

The number of alternate steps to perform.

eps : float

Stopping criterion. If the cost decrease after a uv update and a z update is smaller than eps, return.

reg : float

The regularization parameter.

lmbd_max : 'fixed' | 'scaled' | 'per_atom' | 'shared'

If not 'fixed', adapt the regularization rate as a ratio of lambda_max:

  • 'scaled': the regularization parameter is fixed as a ratio of its maximal value at init i.e. lambda = reg * lmbd_max(uv_init).

  • 'shared': the regularization parameter is set at each iteration as a ratio of its maximal value for the current dictionary estimate i.e. lambda = reg * lmbd_max(uv_hat).

  • 'per_atom': the regularization parameter is set per atom and at each iteration as a ratio of its maximal value for this atom i.e. lambda[k] = reg * lmbd_max(uv_hat[k]).
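The role of the forgetting factor alpha in the online updates can be sketched as a geometric down-weighting of past batch statistics (a conceptual illustration; update_statistics is a made-up name, not an alphacsc function):

```python
import numpy as np

def update_statistics(stats, new_stat, alpha):
    """Accumulate batch statistics with forgetting factor alpha.

    alpha = 0 keeps only the current batch (stochastic updates);
    alpha = 1 weights every past batch as much as the current one.
    """
    return alpha * stats + new_stat

rng = np.random.default_rng(0)
stats = np.zeros(4)
for _ in range(10):
    batch_stat = rng.normal(size=4)
    stats = update_statistics(stats, batch_stat, alpha=0.8)

# With alpha = 0, the accumulated history is discarded entirely.
stochastic = update_statistics(stats, batch_stat, alpha=0.0)
print(np.allclose(stochastic, batch_stat))  # True
```

After many batches, the contribution of a batch seen t iterations ago is scaled by alpha**t, which is why a larger alpha gives smoother but more history-biased D-steps.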

Z-step parameters
solver_z : str

The solver to use for the z update. Options are {'lgcd' (default) | 'l_bfgs'}.

solver_z_kwargs : dict

Additional keyword arguments to pass to update_z_multi.

unbiased_z_hat : boolean

If set to True, the values of the non-zero coefficients in the returned z_hat are recomputed with reg=0 on the frozen support.

D-step parameters
solver_d : str (default: 'auto')

The solver to use for the d update. Options are {'alternate' | 'alternate_adaptive' | 'joint' | 'fista' | 'auto'}. 'auto' amounts to 'fista' when rank1=False and to 'alternate_adaptive' when rank1=True.

solver_d_kwargs : dict

Additional keyword arguments to provide to update_d.

D_init : str or array

The initial atoms, with shape (n_atoms, n_channels + n_times_atom) or (n_atoms, n_channels, n_times_atom), or an initialization scheme in {'chunk' | 'random' | 'greedy'}.

Technical parameters
n_jobs : int

The number of parallel jobs.

verbose : int

The verbosity level.

callback : func

A callback function called at the end of each loop of the coordinate descent.

random_state : int | None

State to seed the random number generator.

raise_on_increase : boolean

Raise an error if the objective function increases.

__init__(n_atoms, n_times_atom, reg=0.1, n_iter=60, n_jobs=1, solver_z='lgcd', solver_z_kwargs={}, unbiased_z_hat=False, solver_d='auto', solver_d_kwargs={}, rank1=True, window=False, uv_constraint='auto', lmbd_max='scaled', eps=1e-10, D_init=None, alpha=0.8, batch_size=1, batch_selection='random', verbose=10, random_state=None)#

Methods

__init__(n_atoms, n_times_atom[, reg, ...])

fit(X[, y])

Learn a convolutional dictionary from the set of signals X.

fit_transform(X[, y])

Learn a convolutional dictionary and returns sparse codes.

partial_fit(X[, y])

transform(X)

Returns sparse codes associated to the signals X for the dictionary.

transform_inverse(z_hat)

Reconstruct the signals from the given sparse codes.
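The convolutional model underlying transform_inverse can be sketched in plain NumPy: each signal is reconstructed as a sum, over atoms, of 1-D convolutions between an activation signal and an atom. This is an illustration with random arrays, not alphacsc's implementation; the shape conventions follow the dictionary shapes given above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_atoms, n_channels = 2, 3, 4
n_times_atom, n_times_valid = 16, 85
n_times = n_times_valid + n_times_atom - 1  # full convolution length

# Sparse codes: one activation signal per (trial, atom).
z_hat = rng.random(size=(n_trials, n_atoms, n_times_valid))
# Dictionary in full-rank form: (n_atoms, n_channels, n_times_atom).
D_hat = rng.normal(size=(n_atoms, n_channels, n_times_atom))

# Reconstruction: X_hat[n, p] = sum_k conv(z_hat[n, k], D_hat[k, p]).
X_hat = np.zeros((n_trials, n_channels, n_times))
for n in range(n_trials):
    for k in range(n_atoms):
        for p in range(n_channels):
            X_hat[n, p] += np.convolve(z_hat[n, k], D_hat[k, p])

print(X_hat.shape)  # (2, 4, 100)
```

Note the length bookkeeping: the activations have n_times_valid = n_times - n_times_atom + 1 samples, so the full convolution recovers signals of length n_times.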

Attributes

D_hat_

array: dictionary in full rank mode.

pobj_

list: Objective function value at each step of the alternate minimization.

times_

list: Cumulative time for each iteration of the coordinate descent.

u_hat_

array: spatial map of the dictionary.

uv_hat_

array: dictionary in rank 1 mode.

v_hat_

array: temporal patterns of the dictionary.

z_hat_

array: Sparse code associated to the signals used to fit the model.