Rafal Zdunek, Andrzej Cichocki.
Abstract
Recently, a considerable growth of interest in projected gradient (PG) methods has been observed, owing to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices match this class of problems well, we investigate and test several recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed, partially dependent nonnegative signals.
Year: 2008 PMID: 18628948 PMCID: PMC2443642 DOI: 10.1155/2008/939567
Source DB: PubMed Journal: Comput Intell Neurosci
Algorithm 1 (OPL).
Algorithm 2 (GPSR-BB).
Algorithm 3 (NMF-PSESOP).
Algorithm 4.
Algorithm 5.
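The PG-based NMF updates the paper compares all share one skeleton: a gradient step on each factor followed by projection onto the nonnegative orthant. Below is a minimal sketch of that skeleton using a plain projected-Landweber step with a safe step size; the function name `pg_nmf` and all parameter choices are illustrative assumptions, not the paper's OPL/GPSR-BB/PSESOP/IPN/SCWA implementations, which differ precisely in how this step is chosen.

```python
import numpy as np

def pg_nmf(Y, J, n_iter=200, seed=0):
    """Minimal projected-gradient NMF sketch: Y ~= A @ X with A, X >= 0.

    A plain projected Landweber step (gradient descent followed by
    clipping at zero, i.e., projection onto the nonnegative orthant)
    is alternated between the two factors.  Hypothetical illustration,
    not the paper's code.
    """
    rng = np.random.default_rng(seed)
    I, T = Y.shape
    A = rng.random((I, J))
    X = rng.random((J, T))
    for _ in range(n_iter):
        # Gradient of 0.5 * ||Y - A X||_F^2 w.r.t. X; step 1/||A^T A||_2
        # (reciprocal Lipschitz constant) guarantees descent.
        grad_X = A.T @ (A @ X - Y)
        eta_X = 1.0 / (np.linalg.norm(A.T @ A, 2) + 1e-12)
        X = np.maximum(X - eta_X * grad_X, 0.0)   # projection: clip at zero
        # Symmetric update for the mixing matrix A.
        grad_A = (A @ X - Y) @ X.T
        eta_A = 1.0 / (np.linalg.norm(X @ X.T, 2) + 1e-12)
        A = np.maximum(A - eta_A * grad_A, 0.0)
    return A, X
```

The methods in the paper replace the fixed Landweber step with, e.g., Barzilai-Borwein step lengths (GPSR-BB) or subspace searches (PSESOP), but the project-after-step structure is the same.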
Figure 1. Dataset: (a) original 4 source signals, (b) observed 8 mixed signals.
Mean-SIRs [dB] obtained over 100 Monte Carlo runs for the estimation of the sources and of the columns of the mixing matrix from noise-free mixtures of the signals in Figure 1. The sources X are estimated with the projected pseudoinverse. The number of inner iterations for updating A is denoted by k, and the number of layers (in the multilayer technique) by L. The label best, mean, or worst in parentheses after the algorithm name indicates that the mean-SIR value is calculated for the best, average, or worst sample of the Monte Carlo analysis, respectively. In the last column, the elapsed time [in seconds] is given for each algorithm with k = 1 and L = 1.
| Algorithm | Mean-SIR [dB] | | | | Mean-SIR [dB] | | | | Time [s] |
|---|---|---|---|---|---|---|---|---|---|
| M-NMF (best) | 21 | 22.1 | 42.6 | 37.3 | 26.6 | 27.3 | 44.7 | 40.7 | 1.9 |
| M-NMF (mean) | 13.1 | 13.8 | 26.7 | 23.1 | 14.7 | 15.2 | 28.9 | 27.6 | |
| M-NMF (worst) | 5.5 | 5.7 | 5.3 | 6.3 | 5.8 | 6.5 | 5 | 5.5 | |
| OPL (best) | 22.9 | 25.3 | 46.5 | 42 | 23.9 | 23.5 | 55.8 | 51 | 1.9 |
| OPL (mean) | 14.7 | 14 | 25.5 | 27.2 | 15.3 | 14.8 | 23.9 | 25.4 | |
| OPL (worst) | 4.8 | 4.8 | 4.8 | 5.0 | 4.6 | 4.6 | 4.6 | 4.8 | |
| Lin-PG (best) | 36.3 | 23.6 | 78.6 | 103.7 | 34.2 | 33.3 | 78.5 | 92.8 | 8.8 |
| Lin-PG (mean) | 19.7 | 18.3 | 40.9 | 61.2 | 18.5 | 18.2 | 38.4 | 55.4 | |
| Lin-PG (worst) | 14.4 | 13.1 | 17.5 | 40.1 | 13.9 | 13.8 | 18.1 | 34.4 | |
| GPSR-BB (best) | 18.2 | 22.7 | 7.3 | 113.8 | 22.8 | 54.3 | 9.4 | 108.1 | 2.4 |
| GPSR-BB (mean) | 11.2 | 20.2 | 7 | 53.1 | 11 | 20.5 | 5.1 | 53.1 | |
| GPSR-BB (worst) | 7.4 | 17.3 | 6.8 | 24.9 | 4.6 | 14.7 | 2 | 23 | |
| PSESOP (best) | 21.2 | 22.6 | 71.1 | 132.2 | 23.4 | 55.5 | 56.5 | 137.2 | 5.4 |
| PSESOP (mean) | 15.2 | 20 | 29.4 | 57.3 | 15.9 | 34.5 | 27.4 | 65.3 | |
| PSESOP (worst) | 8.3 | 15.8 | 6.9 | 28.7 | 8.2 | 16.6 | 7.2 | 30.9 | |
| IPG (best) | 20.6 | 22.2 | 52.1 | 84.3 | 35.7 | 28.6 | 54.2 | 81.4 | 2.7 |
| IPG (mean) | 20.1 | 18.2 | 35.3 | 44.1 | 19.7 | 19.1 | 33.8 | 36.7 | |
| IPG (worst) | 10.5 | 13.4 | 9.4 | 21.2 | 10.2 | 13.5 | 8.9 | 15.5 | |
| IPN (best) | 20.8 | 22.6 | 59.9 | 65.8 | 53.5 | 52.4 | 68.6 | 67.2 | 14.2 |
| IPN (mean) | 19.4 | 17.3 | 38.2 | 22.5 | 22.8 | 19.1 | 36.6 | 21 | |
| IPN (worst) | 11.7 | 15.2 | 7.5 | 7.1 | 5.7 | 2 | 1.5 | 2 | |
| RMRNSD (best) | 24.7 | 21.6 | 22.2 | 57.9 | 30.2 | 43.5 | 25.5 | 62.4 | 3.8 |
| RMRNSD (mean) | 14.3 | 19.2 | 8.3 | 33.8 | 17 | 21.5 | 8.4 | 33.4 | |
| RMRNSD (worst) | 5.5 | 15.9 | 3.6 | 8.4 | 4.7 | 13.8 | 1 | 3.9 | |
| SCWA (best) | 12.1 | 20.4 | 10.6 | 24.5 | 6.3 | 25.6 | 11.9 | 34.4 | 2.5 |
| SCWA (mean) | 11.2 | 16.3 | 9.3 | 20.9 | 5.3 | 18.6 | 9.4 | 21.7 | |
| SCWA (worst) | 7.3 | 11.4 | 6.9 | 12.8 | 3.8 | 10 | 3.3 | 10.8 | |
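The SIR figures in the table can be reproduced, up to the paper's exact matching conventions, with a standard per-source computation: rescale the estimate by least squares (NMF recovers factors only up to scale and permutation) and compare signal energy to residual energy in dB. The function `sir_db` below is an assumed, minimal version of that metric, not the authors' evaluation code; permutation matching across sources would be handled separately.

```python
import numpy as np

def sir_db(s_true, s_est):
    """Signal-to-interference ratio [dB] of one estimated source.

    The estimate is first rescaled by the least-squares factor c that
    minimizes ||s_true - c * s_est||; the remaining residual is treated
    as interference.  Hypothetical sketch of the standard SIR metric.
    """
    s_true = np.asarray(s_true, dtype=float)
    s_est = np.asarray(s_est, dtype=float)
    # Optimal scale factor removing NMF's scale ambiguity.
    c = (s_true @ s_est) / (s_est @ s_est)
    interference = s_true - c * s_est
    return 10.0 * np.log10((s_true @ s_true) / (interference @ interference))
```

A cleaner estimate yields a higher SIR; the "best/mean/worst" rows above correspond to the Monte Carlo sample with the highest, average, and lowest such values.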