The Monnum spiking dataset for spike neural networks

Authors

  • Adiyabat Enkhjargal, National University of Mongolia
  • Byambajav Dorj
  • Bayarpurev Mongol
  • Telmuun Tumnee
  • Sumiyakhand Dagdanpurev
  • Sandagsuren Dashzeveg

Abstract

Spiking neural networks (SNNs) are artificial neural networks that mimic the way biological neural circuits in the brain process information, and they underlie the brain's efficient information processing. Although it is not yet fully understood how these networks compute, recent optimization techniques make it possible to build increasingly complex functional SNNs in simulation. These methods promise more efficient computing hardware and new ways to explore brain circuit function. Objective methods for comparing SNN performance are essential to speed the development of such techniques, yet there is currently no widely accepted way to compare the computational performance of spiking neural networks. To address this issue, we introduce a new spike-based classification dataset that can be widely used to evaluate software and neuromorphic hardware implementations of SNNs. Drawing inspiration from neurophysiology, we devised a general procedure for converting audio signals into spiking network activity and used it to create the Monnum digit dataset: the digits 1 to 10 spoken in the Mongolian language. The resulting training and test data are stored in HDF5 format and were used to train an SNN classifier.
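The abstract does not specify the encoding procedure, but the idea of a neurophysiology-inspired audio-to-spike conversion can be sketched. Below is a minimal, illustrative intensity-to-latency encoder in Python; the function name, channel count, and threshold are placeholders and not the authors' actual pipeline. The resulting (time, unit) spike pairs per utterance map naturally onto HDF5 datasets such as `spikes/times` and `spikes/units`, the layout used by the Heidelberg Spiking Datasets cited in the references; whether Monnum uses the same layout is an assumption.

```python
import numpy as np

def latency_encode(channel_energies, t_max=1.0, threshold=0.05):
    """Map per-channel energies of one audio frame to spike latencies.

    Neurophysiology-inspired rule: stronger input drives a neuron to
    threshold sooner, so louder channels fire earlier. Channels below
    `threshold` (after normalisation) emit no spike at all.
    (Illustrative sketch only -- not the authors' published procedure.)
    """
    e = np.asarray(channel_energies, dtype=float)
    e = (e - e.min()) / (np.ptp(e) + 1e-12)   # normalise energies to [0, 1]
    fired = e > threshold                     # quiet channels stay silent
    times = (1.0 - e[fired]) * t_max          # high energy -> early spike
    units = np.flatnonzero(fired)             # index of each firing channel
    return times, units

# One hypothetical 3-channel frame: channel 2 is loudest, so it fires first.
times, units = latency_encode([0.0, 0.5, 1.0])
```

Writing the per-utterance `times`/`units` arrays into an HDF5 file (for example with `h5py`, alongside a `labels` dataset) would then produce the kind of training/test files the abstract describes.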

Downloads

References

K. Boahen, “A neuromorph’s prospectus,” Comput. Sci. Eng., vol. 19, no. 2, pp. 14–28, 2017.

F. Zenke and S. Ganguli, “SuperSpike: Supervised learning in multilayer spiking neural networks,” Neural Comput., vol. 30, no. 6, pp. 1514–1541, Jun. 2018. DOI: 10.1162/neco_a_01076.

M. Pfeiffer and T. Pfeil, “Deep learning with spiking neurons: Opportunities and challenges,” Frontiers Neurosci., vol. 12, p. 774, Oct. 2018.

A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” Neural Netw., vol. 111, pp. 47–63, Mar. 2019.

G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long short-term memory and learning-to-learn in networks of spiking neurons,” in Proc. Adv. Neural Inf. Process. Syst., 2018, pp. 787–797.

S. B. Shrestha and G. Orchard, “SLAYER: Spike layer error reassignment in time,” in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Red Hook, NY, USA: Curran Associates, 2018, pp. 1419–1428.

S. Woźniak, A. Pantazi, T. Bohnstingl, and E. Eleftheriou, “Deep learning incorporating biologically inspired neural dynamics and in-memory computing,” Nature Mach. Intell., vol. 2, no. 6, pp. 325–336, Jun. 2020.

E. O. Neftci, H. Mostafa, and F. Zenke, “Surrogate gradient learning in spiking neural networks,” arXiv:1901.09948. [Online]. Available: http://arxiv.org/abs/1901.09948

J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, “A wafer-scale neuromorphic hardware system for large-scale neural modeling,” in Proc. IEEE Int. Symp. Circuits Syst., May 2010, pp. 1947–1950.

S. Friedmann, J. Schemmel, A. Grübl, A. Hartel, M. Hock, and K. Meier, “Demonstrating hybrid learning in a flexible neuromorphic hardware system,” IEEE Trans. Biomed. Circuits Syst., vol. 11, no. 1, pp. 128–142, Feb. 2017.

S. B. Furber et al., “Overview of the SpiNNaker system architecture,” IEEE Trans. Comput., vol. 62, no. 12, pp. 2454–2467, Dec. 2013.

M. Davies et al., “Loihi: A neuromorphic manycore processor with on-chip learning,” IEEE Micro, vol. 38, no. 1, pp. 82–99, Jan. 2018.

S. Moradi, N. Qiao, F. Stefanini, and G. Indiveri, “A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs),” IEEE Trans. Biomed. Circuits Syst., vol. 12, no. 1, pp. 106–122, Feb. 2018.

K. Roy, A. Jaiswal, and P. Panda, “Towards spike-based machine intelligence with neuromorphic computing,” Nature, vol. 575, no. 7784, pp. 607–617, Nov. 2019. [Online]. Available: https://www.nature.com/articles/s41586-019-1677-2

M. Davies, “Benchmarks for progress in neuromorphic computing,” Nature Mach. Intell., vol. 1, no. 9, pp. 386–388, Sep. 2019.

X. Huang, A. Acero, H.-W. Hon, and R. Reddy, Spoken Language Processing: A Guide to Theory, Algorithm and System Development. Upper Saddle River, NJ, USA: Prentice-Hall, 2001.

B. Cramer, Y. Stradmann, J. Schemmel, and F. Zenke, “The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks,” IEEE Trans. Neural Netw. Learn. Syst., vol. 33, no. 7, Jul. 2022.

W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge Univ. Press, 2002.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 1026–1034.

A. Paszke et al., “Automatic differentiation in PyTorch,” in Proc. NIPS, 2017, p. 5.

F. Zenke and T. P. Vogels, “The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks,” bioRxiv, 2020.06.29.176925, Jun. 2020. [Online]. Available: https://www.biorxiv.org/content/10.1101/2020.06.29.176925v1

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2014, arXiv:1412.6980. [Online]. Available: http://arxiv.org/abs/1412.6980

“Audacity: Free audio editor and recorder,” audacityteam.org. Archived from the original on March 14, 2011. Retrieved January 5, 2012.

R. Gütig and H. Sompolinsky, “The tempotron: A neuron that learns spike timing-based decisions,” Nature Neurosci., vol. 9, no. 3, pp. 420–428, Mar. 2006. DOI: 10.1038/nn1643.

Published

2025-01-27

How to Cite

[1]
A. Enkhjargal, B. Dorj, B. Mongol, T. Tumnee, S. Dagdanpurev, and S. Dashzeveg, “The Monnum spiking dataset for spike neural networks”, MJEngApplS, vol. 6, no. 1, Jan. 2025.