hv2model

Hierarchical V2 model with choice of sparse coding and ICA.

Code associated with the paper "Inference Via Sparse Coding in a Hierarchical Vision Model."

How to reproduce the results

There are 3 main experiments:

1) Classification experiment

2) Modulation index experiment

3) Inference experiment

The root directory contains files associated with all experiments. The classification directory contains the files for the classification and modulation index experiments. The inference directory contains the code for the inference experiment.

All experiments follow roughly the same procedure. First, the V1 simple, complex, and PCA-transformed responses must be computed. Then sparse coding and ICA can be run on top.
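For orientation, the sketch below shows one plausible version of this front end: quadrature-pair Gabor filters as V1 simple cells, an energy model for the complex cells, and a PCA projection of the complex responses. The filter parameters, component count, and filename are illustrative assumptions, not the paper's values; v1complex.py is the authoritative implementation.

```python
# A plausible V1 front end: quadrature-pair Gabor simple cells, an energy-model
# complex-cell stage, and a PCA projection. All parameters and filenames below
# are illustrative assumptions; v1complex.py is the authoritative implementation.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.decomposition import PCA

def gabor(size, wavelength, theta, phase, sigma):
    """Return a size x size Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

def v1_complex_responses(patches, n_orientations=8, size=9,
                         wavelength=4.0, sigma=2.0):
    """Energy-model complex-cell responses for patches of shape (N, 32, 32)."""
    responses = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        even = gabor(size, wavelength, theta, 0.0, sigma)
        odd = gabor(size, wavelength, theta, np.pi / 2, sigma)
        r_even = fftconvolve(patches, even[None], mode="valid")  # simple cells
        r_odd = fftconvolve(patches, odd[None], mode="valid")
        responses.append(np.sqrt(r_even ** 2 + r_odd ** 2))      # complex-cell energy
    return np.stack(responses, axis=1).reshape(len(patches), -1)

patches = np.load("patchessmall/patches.npy")[..., 0]  # hypothetical filename, (N, 32, 32)
energy = v1_complex_responses(patches)
pca = PCA(n_components=256, whiten=True)                # component count is a placeholder
v1_pca = pca.fit_transform(energy)                      # the input to sparse coding / ICA
```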

Sparse coding was run with several choices of the regularization coefficient, denoted by the constant SPARSITY_PARAMETER at the top of several files. To reproduce all the results of the paper, each configuration must be used.

A small input training set of 10,000 ImageNet patches is included for learning how the model works, but to train the model as in the paper, you will need to obtain 400,000 ImageNet patches yourself and train on them. The input dataset is stored in a numpy memmap array with shape (PATCH_COUNT, 32, 32, 1), where the last dimension is for convenience in the code.
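A minimal sketch of opening such a memmap; the filename and dtype here are assumptions, so check the constants at the top of v1complex.py for the values the code actually uses.

```python
import numpy as np

PATCH_COUNT = 10000                             # 400,000 for the full training set
patches = np.memmap("patchessmall/patches.dat"  # hypothetical filename
                    , dtype=np.float32, mode="r",
                    shape=(PATCH_COUNT, 32, 32, 1))
patch = patches[0, :, :, 0]                     # a single 32x32 grayscale patch
```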

Classification Experiment

Open v1complex.py and choose the desired setup. The default is a small collection of 10,000 patches from ImageNet. The sparse coding and ICA models must be run on PCA-transformed V1 complex responses to ImageNet patches. Run v1complex.py to write the ImageNet responses to the default folder imnet.

Next, go to the classification directory and run sparse coding and ICA by running sc.py and ica.py. Each file has several configurations, including the sparsity parameter used for sparse coding. To reproduce the results of the paper, each sparse coding configuration must be learned by running sc.py multiple times. The constants are at the top of each file.
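The snippet below is a rough scikit-learn equivalent of what sc.py and ica.py learn, with SPARSITY_PARAMETER playing the role of the lasso regularizer. The component count, filenames, and persistence format are assumptions; the repository's scripts remain the reference implementation.

```python
# Hypothetical scikit-learn sketch of learning the sparse coding and ICA bases.
import numpy as np
import joblib
from sklearn.decomposition import MiniBatchDictionaryLearning, FastICA

SPARSITY_PARAMETER = 0.5                      # placeholder; rerun for each value used
v1_pca = np.load("../imnet/v1_pca.npy")       # hypothetical filename for the PCA responses

sc = MiniBatchDictionaryLearning(n_components=256, alpha=SPARSITY_PARAMETER,
                                 transform_algorithm="lasso_lars",
                                 transform_alpha=SPARSITY_PARAMETER)
sc.fit(v1_pca)
joblib.dump(sc, "scbasis/sc_model.joblib")    # learned basis is sc.components_

ica = FastICA(n_components=256, max_iter=1000)
ica.fit(v1_pca)
joblib.dump(ica, "icabasis/ica_model.joblib")  # unmixing matrix is ica.components_
```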

The sparse coding and ICA bases will be saved in the folders scbasis and icabasis. Now the responses for each dataset must be computed. Each dataset is located in the root directory.

Open v1complex_stimuli.py in the root directory and choose the desired dataset. Both the input and output files must be chosen: modify the constant OUT_FOLDER according to the comment, and uncomment the appropriate input dataset.

Now go to the classification directory again and run scresp.py and icaresp.py to get the model responses, which will be located in the folders sccodes and icacodes.
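Continuing the hypothetical scikit-learn sketch above, computing responses for a stimulus set would look roughly like this; the filenames are placeholders for whichever dataset OUT_FOLDER pointed at.

```python
# Hypothetical sketch of what scresp.py / icaresp.py compute for one dataset.
import numpy as np
import joblib

stimuli = np.load("../textures32_v1pca.npy")   # hypothetical PCA-space stimulus responses
sc = joblib.load("scbasis/sc_model.joblib")
ica = joblib.load("icabasis/ica_model.joblib")

sc_codes = sc.transform(stimuli)               # sparse codes via lasso inference
ica_codes = ica.transform(stimuli)             # linear ICA responses
np.save("sccodes/textures32.npy", sc_codes)
np.save("icacodes/textures32.npy", ica_codes)
```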

Now use the function classify from classify.py to get the model's classification accuracy under an SVM (linear kernel by default, as used in the paper). The function takes as input the filenames of the sparse coding/ICA responses and of the corresponding labels for the chosen dataset (the label files are in the root directory). The output is a tuple with the average accuracy over 5 folds, the accuracy of each fold, and the number of wrong classifications per label.
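The sketch below illustrates the described behavior with scikit-learn: a linear-kernel SVM evaluated with 5-fold cross-validation. The exact preprocessing, SVM settings, and file paths in classify.py may differ.

```python
# Hypothetical sketch of classify(): linear SVM with 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def classify(responses_file, labels_file, n_folds=5):
    X = np.load(responses_file)
    y = np.load(labels_file)                        # assumed integer labels 0..K-1
    fold_acc = []
    wrong_per_label = np.zeros(len(np.unique(y)), dtype=int)
    for train, test in StratifiedKFold(n_splits=n_folds, shuffle=True).split(X, y):
        clf = SVC(kernel="linear").fit(X[train], y[train])
        pred = clf.predict(X[test])
        fold_acc.append(float(np.mean(pred == y[test])))
        for label in np.unique(y):
            wrong_per_label[label] += np.sum((y[test] == label) & (pred != label))
    return np.mean(fold_acc), fold_acc, wrong_per_label

acc, per_fold, wrong = classify("sccodes/fg32.npy", "../fg32_labels.npy")  # hypothetical paths
```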

Modulation Index Experiment

The procedure is the same as for the classification experiment, except that no new basis is computed and the last step is running the function modindex from modindex.py instead of classify from classify.py.

First, compute the PCA-transformed V1 complex responses for the naturalistic texture dataset and the noise texture dataset by uncommenting the appropriate lines in v1complex_stimuli.py and changing OUT_FOLDER according to the comment above it.

Next, compute the sparse coding and ICA responses by uncommenting the appropriate lines in scresp.py and icaresp.py. Again, the responses will be located in the folders sccodes and icacodes, respectively.

Finally, use the function modindex to compute the average modulation index over the valid responses (the naturalistic and noise responses cannot both be zero). The function takes the filenames of the naturalistic texture responses and the noise texture responses as input.
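For reference, the sketch below computes the standard texture modulation index, (nat - noise) / (nat + noise) per unit, averaged over units whose responses are not both zero. The rectification and averaging details are assumptions; modindex.py is the actual implementation.

```python
# Hypothetical sketch of the modulation index computation.
import numpy as np

def modindex(nat_file, noise_file):
    nat = np.abs(np.load(nat_file)).mean(axis=0)     # mean rectified response per unit (assumption)
    noise = np.abs(np.load(noise_file)).mean(axis=0)
    valid = (nat + noise) != 0                       # keep units not silent for both stimulus sets
    mi = (nat[valid] - noise[valid]) / (nat[valid] + noise[valid])
    return mi.mean()

print(modindex("sccodes/textures32.npy", "sccodes/noisetextures32.npy"))  # hypothetical filenames
```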

Inference Experiment

First, generate the modified ImageNet patches with the deleted regions by running modpatch.py.
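As an illustration of the kind of modification involved, the sketch below zeroes out a square region of each 32x32 patch; the region size, location, and filenames are placeholders, and modpatch.py defines the actual scheme.

```python
# Hypothetical illustration of deleting a region from each patch.
import numpy as np

patches = np.load("imnet_patches.npy")      # hypothetical filename, shape (N, 32, 32, 1)
modified = patches.copy()
modified[:, 12:20, 12:20, :] = 0.0          # delete (zero out) a central 8x8 region
np.save("modified_patches.npy", modified)   # hypothetical output filename
```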

Next, compute the PCA transformed V1 complex responses for this experiment by running v1complex_inference.py.

Next, go to the inference directory and run sc.py and ica.py to learn the sparse coding and ICA bases (this may take a while).

Next, compute the sparse coding and ICA responses by running scresp.py and icaresp.py.

Finally, run patchpredict.py to get the reconstruction error of the V1 patch representation for both models. Optionally, you can view reconstructions by uncommenting the block comment at the end and importing matplotlib.pyplot as plt.
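As a rough picture of what the comparison looks like, the sketch below computes the mean squared error between the intact-patch representation and each model's reconstruction, and plots one example. All filenames are placeholders; patchpredict.py is the actual implementation.

```python
# Hypothetical sketch of comparing the two models' reconstructions.
import numpy as np
import matplotlib.pyplot as plt

original = np.load("v1pca_intact.npy")        # hypothetical intact-patch representation
sc_reconst = np.load("sc_reconst.npy")        # hypothetical sparse coding reconstruction
ica_reconst = np.load("ica_reconst.npy")      # hypothetical ICA reconstruction

print("SC MSE: ", np.mean((original - sc_reconst) ** 2))
print("ICA MSE:", np.mean((original - ica_reconst) ** 2))

# Optional: view one reconstructed representation against the original.
plt.plot(original[0], label="original")
plt.plot(sc_reconst[0], label="sparse coding")
plt.legend()
plt.show()
```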