Deep HTM for GPU - Differentiable Scalar Spatial Pooler
Cokwa 2019. 10. 9. 22:07
The whole program is written in C++ with OpenGL (GLSL compute shaders).
The internal process is essentially the same as the one described in Numenta's paper.
I've set up the network to be an autoencoder.
The weird thing is that the weight matrix of this SP is fairly dense, whereas the permanence matrix of the vanilla SP is almost as sparse as the input, and I had expected the same sparsity to emerge here as well.
The network spec:
- input: the MNIST database
- number of minicolumns: 1024
- number of winner minicolumns: 20
- minibatch size: 32
The network ran with a moving-average sparsity mean of about 1.9531% and a variance of 6e-05.
GitHub link (warning: still a work in progress): https://github.com/cokwa/DeepHTM