
[Figure: the weights of the spatial pooler layer]
[Figure: the weights of the last (linear) layer]
[Figure: the duty cycles of the minicolumns]
[Figure: the activations of the minicolumns]

 

[Figure: outputs generated by the network]

The whole program is written in C++ with OpenGL (GLSL compute shaders).
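For reference, a single compute pass in this kind of setup is dispatched from C++ roughly as follows. This is a minimal sketch assuming GLEW for function loading and two SSBO binding points; the shader source and buffer layout are placeholders, not the actual DeepHTM kernels.

```cpp
// Minimal sketch of one GLSL compute pass driven from C++.
// Assumes GLEW and OpenGL 4.3+; the shader source and SSBO binding
// points are placeholders, not DeepHTM's actual kernels.
#include <GL/glew.h>

GLuint compileComputeProgram(const char* src) {
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);
    return program;
}

void runPass(GLuint program, GLuint inputSSBO, GLuint outputSSBO, GLuint groups) {
    glUseProgram(program);
    // Bind buffers to the binding points declared in the shader.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, inputSSBO);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, outputSSBO);
    glDispatchCompute(groups, 1, 1);
    // Make the writes visible to the next pass.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```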

The internal process is pretty much the same as the one described in Numenta's paper.[1]

I've set up the network to be an autoencoder.
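Conceptually, that means the SP layer produces a sparse code by keeping only the top-k minicolumns active, and the linear layer tries to reconstruct the input from that code. A plain-C++ CPU sketch of the forward pass (my own illustration; it ignores boosting, duty cycles, and the GPU implementation):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sparse encoding: overlaps = W * x, then keep the top-k minicolumns.
std::vector<float> encode(const std::vector<std::vector<float>>& W,
                          const std::vector<float>& x, size_t k) {
    size_t n = W.size();
    std::vector<float> overlaps(n, 0.0f);
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < x.size(); ++j)
            overlaps[i] += W[i][j] * x[j];

    // Find the k-th largest overlap and binarize: winners output 1.
    std::vector<float> sorted = overlaps;
    std::nth_element(sorted.begin(), sorted.begin() + k - 1, sorted.end(),
                     std::greater<float>());
    float threshold = sorted[k - 1];

    std::vector<float> y(n, 0.0f);
    size_t active = 0;
    for (size_t i = 0; i < n && active < k; ++i)
        if (overlaps[i] >= threshold) { y[i] = 1.0f; ++active; }
    return y;
}

// Linear decoder: only the winner minicolumns contribute to x_hat.
std::vector<float> decode(const std::vector<std::vector<float>>& D,
                          const std::vector<float>& y, size_t inputSize) {
    std::vector<float> xHat(inputSize, 0.0f);
    for (size_t i = 0; i < y.size(); ++i)
        if (y[i] > 0.0f)
            for (size_t j = 0; j < inputSize; ++j)
                xHat[j] += D[i][j];
    return xHat;
}
```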

The weird thing is that the learned weight matrix of the SP comes out pretty dense, whereas the permanence matrix of the vanilla SP is almost as sparse as the input, which is what I expected to happen here as well.
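To make "pretty dense" concrete, one way to quantify it is the fraction of weights whose magnitude exceeds a small cutoff. This helper is my own illustration (the threshold is arbitrary), not code from the repository:

```cpp
#include <cmath>
#include <vector>

// Fraction of weights with magnitude above a small cutoff.
// The threshold for "effectively nonzero" is an assumption.
float density(const std::vector<float>& weights, float threshold = 1e-3f) {
    size_t nonzero = 0;
    for (float w : weights)
        if (std::fabs(w) > threshold) ++nonzero;
    return weights.empty() ? 0.0f : float(nonzero) / float(weights.size());
}
```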

 

The network spec (a minimal config sketch follows the list):

- input: the MNIST database

- number of the minicolumns: 1024

- number of winner minicolumns: 20

- minibatch size: 32
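
Put together, the spec maps onto a configuration like the following. The struct and field names are mine, not DeepHTM's actual API:

```cpp
// Hypothetical configuration mirroring the spec above;
// the names are illustrative, not DeepHTM's actual interface.
struct NetworkConfig {
    size_t inputWidth = 28;       // MNIST images are 28x28
    size_t inputHeight = 28;
    size_t numMinicolumns = 1024;
    size_t numWinners = 20;       // top-k active minicolumns
    size_t minibatchSize = 32;
};
```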

 

The network ran with a moving-average sparsity of about 1.9531% and a variance of about 6e-05.
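That figure matches the spec, since 20 winners out of 1024 minicolumns gives 20/1024 ≈ 1.9531%. One way such a statistic can be tracked is an exponential moving average over per-sample sparsity; this is my own sketch with an assumed decay factor, not the shader code:

```cpp
#include <vector>

// Exponential moving average of activation sparsity and its variance.
// The decay factor is an assumption, not taken from the repository.
struct SparsityTracker {
    float mean = 0.0f;
    float var = 0.0f;
    float decay = 0.99f;

    void update(const std::vector<float>& activations) {
        size_t active = 0;
        for (float a : activations)
            if (a > 0.0f) ++active;
        float s = float(active) / float(activations.size());

        // Standard incremental EWMA update for mean and variance.
        float delta = s - mean;
        mean += (1.0f - decay) * delta;
        var = decay * (var + (1.0f - decay) * delta * delta);
    }
};
```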

 

GitHub link (warning: still a work in progress): https://github.com/cokwa/DeepHTM

 
