Volume 69, Issue 5, 2004, Pages 6-

Entropy and information in neural spike trains: Progress on the sampling problem

Author keywords

[No Author keywords available]

EID: 85036164237     PISSN: 1063651X     EISSN: None     Source Type: Journal    
DOI: 10.1103/PhysRevE.69.056111     Document Type: Article
Times cited: 53

References (37)
  • 12
    • H. B. Barlow, in Sensory Communication, edited by W. Rosenblith (MIT Press, Cambridge, MA, 1961), pp. 217–234.
  • 18
    • J. D. Victor, Phys. Rev. E 66, 051903 (2002).
  • 20
    • S. Ma, J. Stat. Phys. 26, 221 (1981).
  • 22
    • L. Paninski, Neural Comput. 15, 1191 (2003).
  • 26
    • P. Grassberger, Phys. Lett. A 128, 369 (1988).
  • 28
    • D. Wolpert and D. Wolf, Phys. Rev. E 52, 6841 (1995).
  • 32
    • I. Nemenman, Ph.D. thesis, Princeton University, 2000.
  • 34
    • R. de Ruyter van Steveninck and W. Bialek, in Methods in Neural Networks IV, edited by J. van Hemmen, J. D. Cowan, and E. Domany (Springer-Verlag, Heidelberg, New York, 2001), pp. 313–371 (see Fig. 17).
  • 35
    • In reference to Bayesian estimators, consistency usually means that, as the number of samples grows, the posterior probability concentrates around the unknown parameters of the true model that generated the data. For finite-parameter models, such as the one considered here, only technical assumptions like positivity of the prior for all parameter values, soundness (different parameters always correspond to different distributions) [31], and a few others are needed for consistency. For nonparametric models, the situation is more complicated: there one also needs ultraviolet convergence of the functional integrals defined by the prior [32,33].
  • 36
    • It may happen that the information is a small difference between two large entropies. Then, due to statistical errors, methods that estimate the information directly will have an advantage over NSB, which estimates entropies first. In our case this is not a problem, since the information is roughly half of the total available entropy [4].
  • 37
    • For our and many other neural systems, the spike timing can be more accurate than the refractory period of roughly 2 ms [6,10,34]. For the current amount of data, a discretization of (Formula presented) ms and a large enough (Formula presented) will push the limits of all estimation methods, including ours, that do not make explicit assumptions about properties of the spike trains. Thus, to have enough statistics to convincingly show the validity of the NSB approach, in this paper we choose (Formula presented) ms, which is still much shorter than other methods can handle. We leave open the possibility that more information is contained in timing precision at finer scales.
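The footnotes above concern estimating entropies of discretized spike "words" from limited data. As a toy illustration only (this is the naive plug-in estimator, not the NSB estimator the article develops), the following sketch bins a spike train at a resolution tau, slices the binary sequence into words, and computes the maximum-likelihood entropy, whose downward bias at small sample sizes is the sampling problem in the title. The function names, the 2 ms / 8-bin choices, and the toy Poisson spike train are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def words_from_spikes(spike_times, tau, word_len, t_max):
    # Discretize spike times (ms) into tau-ms bins (0/1 per bin), then
    # slice the binary sequence into non-overlapping words of word_len bins.
    n_bins = int(t_max / tau)
    binary = np.zeros(n_bins, dtype=int)
    bins = (np.asarray(spike_times) / tau).astype(int)
    binary[bins[bins < n_bins]] = 1
    n_words = n_bins // word_len
    words = binary[: n_words * word_len].reshape(n_words, word_len)
    # Encode each binary word as an integer label.
    return words @ (1 << np.arange(word_len))

def plugin_entropy(labels):
    # Naive maximum-likelihood ("plug-in") entropy estimate in bits.
    # It is biased downward when the sample count is small relative to
    # the number of occupied states -- the sampling problem.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Toy Poisson spike train: ~40 spikes/s, times in ms.
t_max = 20000.0  # 20 s of data
spike_times = np.cumsum(rng.exponential(25.0, size=2000))
spike_times = spike_times[spike_times < t_max]

w = words_from_spikes(spike_times, tau=2.0, word_len=8, t_max=t_max)
print(f"{len(w)} words, plug-in entropy = {plugin_entropy(w):.2f} bits")
```

Shrinking tau or lengthening the words grows the state space exponentially while the word count stays fixed, so the plug-in estimate degrades quickly; that is the regime where bias-corrected or Bayesian estimators such as NSB become necessary.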


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.