Deep HTM for GPU - Differentiable Scalar Spatial Pooler
Cokwa 2019. 10. 9. 22:07
The whole program is written in C++ with OpenGL (GLSL compute shaders).
The internal process is essentially the same as the one described in Numenta's paper. [1]
I've set up the network to be an autoencoder.
The weird thing is that the weight matrix of this SP comes out fairly dense, whereas the permanence matrix of the vanilla SP is almost as sparse as the input; I had expected the same sparsity to emerge here as well.
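The core idea above can be sketched as a dense linear map followed by a k-winners-take-all (k-WTA) selection: the top-k minicolumns keep their scalar overlap (so gradients can flow through them), everything else is zeroed. This is a minimal CPU sketch of that mechanism, not the actual GLSL implementation; the function name and layout are my own assumptions:

```cpp
// Hypothetical sketch of the differentiable scalar SP forward pass:
// dense weights -> overlaps -> k-winners-take-all. Winners keep their
// scalar overlap value, so the backward pass reduces to a sparse mask
// over an ordinary dense linear layer (which is why the learned
// weight matrix can stay dense).
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

std::vector<float> kWTAForward(const std::vector<float>& input,
                               const std::vector<float>& weights, // numColumns x inputSize, row-major
                               std::size_t numColumns, std::size_t k) {
    const std::size_t inputSize = input.size();

    // overlap[c] = dot(weights row c, input)
    std::vector<float> overlap(numColumns, 0.0f);
    for (std::size_t c = 0; c < numColumns; ++c)
        for (std::size_t i = 0; i < inputSize; ++i)
            overlap[c] += weights[c * inputSize + i] * input[i];

    // Find the k-th largest overlap to use as the activation threshold.
    std::vector<float> sorted(overlap);
    std::nth_element(sorted.begin(), sorted.begin() + (k - 1), sorted.end(),
                     std::greater<float>());
    const float threshold = sorted[k - 1];

    // Winners keep their scalar overlap (differentiable); losers go to 0.
    std::vector<float> active(numColumns, 0.0f);
    std::size_t winners = 0;
    for (std::size_t c = 0; c < numColumns && winners < k; ++c)
        if (overlap[c] >= threshold) {
            active[c] = overlap[c];
            ++winners;
        }
    return active;
}
```

In the autoencoder setup, the sparse `active` vector would then be decoded back toward the input and trained with a reconstruction loss; only the winning columns receive a gradient on each step.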
The network spec:
- input: the MNIST database
- number of minicolumns: 1024
- number of winner minicolumns: 20
- minibatch size: 32
The network's moving-average sparsity was about 1.9531%, with a variance of 6e-05.
GitHub link (warning: still a work in progress): https://github.com/cokwa/DeepHTM