Most of the content on this page is obsolete now.
An updated page is the dictionary learning page, where newer sparse approximation (matching pursuit) implementations are available.
This page contains Matlab functions for frame design and sparse signal representation. During the work on my thesis a lot of Matlab files were generated. Many of these functions will be helpful for people (students) who want to continue the work on sparse signal representation and frame design, and for people who work in closely related areas; some of the files may even be of more general interest. To make it easier to decide whether some of these functions may be helpful for your work, there is a brief description just below the list of files.
Many of the functions are rather complicated, as they are closely connected to the theoretical work in my thesis. To understand these functions at least some understanding of the work in my thesis is necessary; the thesis is also included in the file list below.
An overview of the matching pursuit algorithms for vector selection is given in an article presented at the NORSIG-2003 conference in Bergen, Norway. The vector selection part of this page contains the algorithms both as Matlab functions and as an exe-file; using the exe-file, execution is approximately ten times faster.
In the spring and summer of 2007 I made some Java classes for Matching Pursuit; they are collected in the mp package. The main class is MatchingPursuit, which contains the different variants of the Matching Pursuit algorithm. The dictionary, or frame, should be given as a matrix represented by an instance of the MPDictionary abstract class. Five different implementations of this abstract class are given, but I think that MPFlexMatrix will be appropriate for most dictionaries, i.e. matrices.
The classes are:
MatchingPursuit.class, with documentation
MPDictionary.class, with documentation
MPSimpleMatrix.class, with documentation
MPMatrix.class, with documentation
MPSparseMatrix.class, with documentation
MPBandMatrix.class, with documentation
MPFlexMatrix.class, with documentation
I have also made two Java classes that implement several matching pursuit algorithms. This work was finished in 2006, but most of the work was done earlier.
The classes are:
BlockFrame.class, 15.9 kB, with documentation
OverlappingFrame.class, 13.6 kB, with documentation
I also made a Matlab m-file, test02.m (10.4 kB), to test these functions in Matlab; this file is in Norwegian. It is an example of how to use Java classes in Matlab m-files. To make Java classes available to Matlab, see the Matlab documentation on Calling Java from MATLAB; typically you need to edit the Matlab file classpath.txt.
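If the class files are stored in a folder on disk, a quick alternative during a Matlab session is the javaaddpath function. This is only a minimal sketch: the folder name is just an example, and the fully qualified name mp.MatchingPursuit is my assumption based on the package name above, so check the class documentation for the actual constructors and methods.

```
javaaddpath('d:\Java');          % example path: the folder that contains the mp package folder
methods('mp.MatchingPursuit')    % list the methods Matlab sees for this class
```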
Downloads of the Matlab-files are available at the top of this page.
I recommend that you read the overview of the matching pursuit algorithms for vector selection that is given in the article presented at the NORSIG-2003 conference in Bergen, Norway. The abstract is:
In this paper a new algorithm for vector selection in signal representation problems is proposed; we call it Partial Search (PS). The vector selection problem is described, and one group of algorithms for solving this problem, the Matching Pursuit (MP) algorithms, is reviewed. The proposed algorithm is based on the Order Recursive Matching Pursuit (ORMP) algorithm; it extends ORMP by searching a larger part of the solution space in an effective way. The PS algorithm tests up to a given number of possible solutions and returns the best, while ORMP is a greedy algorithm testing and returning only one possible solution. A detailed description of PS is given. In the end some examples of its performance are given, showing that PS performs very well: the representation error is considerably reduced and the probability of finding the optimal solution is increased compared to ORMP, even when only a small part of the solution space is searched.

The vector selection functions presented here can be grouped into two categories.
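To make the greedy selection that the abstract contrasts PS against more concrete, here is a minimal Matlab sketch of plain Orthogonal Matching Pursuit (OMP). It is not the VSblock/VSblockC code, and it leaves out both the ORMP normalization and the partial search extension; save it as omp_sketch.m if you want to use it from the examples further down.

```
function [w, idx] = omp_sketch(F, x, S)
% OMP_SKETCH  Greedy vector selection by Orthogonal Matching Pursuit (OMP).
%   F : N x K frame (dictionary), columns assumed to have unit norm
%   x : signal vector of length N
%   S : number of frame vectors to select
r = x;                        % current residual
idx = zeros(1, S);            % indices of the selected frame vectors
for s = 1:S
    c = F' * r;               % inner products with all frame vectors
    c(idx(1:s-1)) = 0;        % never select the same frame vector twice
    [cmax, k] = max(abs(c));  % greedy choice: best matching frame vector
    idx(s) = k;
    Fs = F(:, idx(1:s));
    ws = Fs \ x;              % least squares coefficients for the selected vectors
    r = x - Fs * ws;          % new residual, orthogonal to the selected vectors
end
w = zeros(size(F, 2), 1);     % full (sparse) coefficient vector
w(idx) = ws;
```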
I have also made two m-files to test and demonstrate the functions above.
Here we present some results from the TestVSblockC function. We have done the test on two systems: a new fast PC, 2.4 GHz Pentium 4 CPU, running Matlab version 6.5, and an old PC, 120 MHz Pentium CPU, running Matlab version 5.0. We note from the times of VSblockC that the fast PC is approximately 37 times faster than the slow PC. The tables are divided vertically into two halves: on the left side are the results using VSblockC (the fast C program) and on the right side are the results using the Matlab function VSblock. The tables are also grouped horizontally into groups for each of the main vector selection algorithms; the number of variants within each group differs between VSblockC and VSblock.
The test vectors that are generated for this test are simply random values generated by the Matlab randn function, and so are the frame vectors. On the fast PC we use L=10000 test vectors, each of length N=16, selecting S=9 frame vectors from a frame of K=40 vectors. On the slow PC we use only one tenth of the test vectors, L=1000.
Results on the fast PC (2.4 GHz Pentium 4, Matlab 6.5, L = 10000 test vectors):

| VSblockC | error (1) | time (s) | time (s) | error (1) | VSblock |
|---|---|---|---|---|---|
| BMP, A1 | 0.0743 | 0.84 | | | |
| BMP, A2 | 0.0448 | 1.19 | 12.33 | 0.0469 | VSmp2 |
| BMP, A3 | 0.0295 | 1.83 | | | |
| OMP, A4 | 0.0240 | 0.81 | 11.61 | 0.0240 | VSomp |
| | | | 11.7 | 0.0188 | VSormp |
| ORMP, A5 | 0.0189 | 0.80 | 60.2 | 0.0188 | VSormp2 |
| | | | 17.5 | 0.0189 | VSfomp2 |
| PS, A6 M=5 | 0.0097 | 3.91 | 959 | 0.0069 | VSps (2) |
| PS, A6 M=20 | 0.0063 | 13.6 | 235 | 0.0065 | VSps2 (3) |
| PS, A6 M=50 | 0.0051 | 34.0 | 544 | 0.0051 | VSps2 (4) |
| PS, A6 M=200 | 0.0039 | 126 | | | |
| PS, A6 M=500 | 0.0034 | 311 | | | |
Results on the slow PC (120 MHz Pentium, Matlab 5.0, L = 1000 test vectors):

| VSblockC | error (1) | time (s) | time (s) | error (1) | VSblock |
|---|---|---|---|---|---|
| BMP, A1 | 0.0744 | 2.97 | | | |
| BMP, A2 | 0.0444 | 3.62 | 54 | 0.0465 | VSmp2 |
| BMP, A3 | 0.0294 | 5.77 | | | |
| OMP, A4 | 0.0244 | 2.69 | 807 | 0.0244 | VSomp |
| | | | 818 | 0.0198 | VSormp |
| ORMP, A5 | 0.0198 | 2.69 | 249 | 0.0198 | VSormp2 |
| | | | 59 | 0.0198 | VSfomp2 |
| PS, A6 M=5 | 0.0105 | 14.0 | 3919 | 0.0072 | VSps (2) |
| PS, A6 M=20 | 0.0067 | 49.4 | 13488 | 0.0068 | VSps2 (3) |
| PS, A6 M=50 | 0.0053 | 126 | 31368 | 0.0054 | VSps2 (4) |
| PS, A6 M=200 | 0.0041 | 458 | | | |
| PS, A6 M=500 | 0.0035 | 1160 | | | |
(1) The error measure is the average (over the NL entries in the error signal) of the error squared (energy). Since the signal samples are random Gaussian distributed values with standard deviation one, the (expected) average value of the energy of the signal samples is 1.
(2) VSps is the recursive variant of partial search; here we have used the P-argument [5,4], giving 20 different combinations to search. The function is not called from VSblock.
(3) VSps2 here is from using the VSblock function. Here M is always 20.
(4) VSps2 with M=50 is found without using VSblock.
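For reference, the test setup described above and the error measure of footnote (1) can be sketched roughly as follows, with the omp_sketch function from further up standing in for the actual VSblock/VSblockC functions. The unit norm normalization of the frame vectors is my assumption and is not stated above.

```
L = 10000; N = 16; K = 40; S = 9;         % fast PC parameters from the text
F = randn(N, K);                          % random frame vectors
F = F ./ (ones(N,1) * sqrt(sum(F.^2)));   % normalize frame vectors to unit norm (assumption)
X = randn(N, L);                          % random test vectors
E = zeros(N, L);                          % error (residual) signal
for i = 1:L
    w = omp_sketch(F, X(:,i), S);         % stand-in for the vector selection function
    E(:,i) = X(:,i) - F * w;
end
err = sum(E(:).^2) / (N*L);               % average squared error over the N*L entries, cf. footnote (1)
```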
Signal expansions using frames may be considered as generalizations of signal representations based on transforms and filter banks. Frames for sparse signal representations may be designed using an iterative method with two main steps: (1) frame vector selection and expansion coefficient determination for the signals in a training set (selected to be representative of the signals for which compact representations are desired), using the frame designed in the previous iteration; (2) update of the frame vectors with the objective of improving the representation of step (1).
In this thesis we solve step (2) of the general frame design problem using the compact notation of linear algebra. This makes the solution both conceptually and computationally easy, especially for the non-block-oriented frames (for short, overlapping frames), which may be viewed as generalizations of critically sampled filter banks. Also, the solution is more general than those presented earlier, facilitating the imposition of constraints, such as symmetry, on the designed frame vectors. We also take a closer look at step (1) in the design method. Some of the available vector selection algorithms are reviewed, and adaptations to some of these are given. These adaptations make the algorithms better suited for both the frame design method and the sparse signal representation problem, both for block-oriented and overlapping frames.
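As an illustration only, one iteration of the two-step design method for the simplest case, a block-oriented frame, might look like this in Matlab. This assumes that the step (2) update is the unconstrained least squares solution; the overlapping frames and the constraints (such as symmetry) treated in the thesis are not covered, and omp_sketch from above again stands in for the vector selection of step (1).

```
% X : N x L training set, F : N x K frame from the previous iteration, S : sparseness
[N, L] = size(X); K = size(F, 2);
W = zeros(K, L);
for i = 1:L
    W(:,i) = omp_sketch(F, X(:,i), S);    % step (1): select frame vectors and find coefficients
end
F = (X * W') / (W * W');                  % step (2): least squares frame update, F*W close to X
F = F ./ (ones(N,1) * sqrt(sum(F.^2)));   % renormalize the frame vectors (assumption)
```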
The performance of the improved frame design method is demonstrated in extensive experiments. The sparse representation capabilities are illustrated both for one-dimensional and two-dimensional signals, and in both cases the new possibilities in frame design give better results.
Also, a new method for texture classification, denoted Frame Texture Classification Method (FTCM), is presented. The main idea is that a frame trained for making sparse representations of a certain class of signals is a model for this signal class. The FTCM is applied to nine test images, yielding excellent overall performance; for many test images the number of wrongly classified pixels is more than halved compared to state-of-the-art texture classification methods.
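The main idea, as described above, could be sketched as follows: a test vector is assigned to the texture class whose frame gives the smallest sparse representation error. The variable names (classFrames, numClasses, x, S) are illustrative only, and how the method is applied to whole test images is not shown.

```
% classFrames{c} holds a frame trained on texture class c (illustrative names)
errs = zeros(1, numClasses);
for c = 1:numClasses
    w = omp_sketch(classFrames{c}, x, S);         % sparse approximation using the class-c frame
    errs(c) = sum((x - classFrames{c} * w).^2);   % representation error for class c
end
[minErr, classLabel] = min(errs);                 % assign x to the class with the smallest error
```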
Finally, frames are analyzed from a practical viewpoint, rather than from a mathematical, theoretical perspective. As a result of this, some new frame properties are suggested. So far, the new insight this has given has been moderate, but we think that this approach may be useful in frame analysis in the future.
Visit my home page.
Last update: April 7, 2011. Karl Skretting