
Neural Network Models, Algorithms, and Different Learning Rules In Neuromorphic Computing: A Review

  • Ananya Das; Krystal Nkoronye; Nathaniel Straight
  • Aug 28, 2025
  • 17 min read

Updated: Oct 12, 2025

Abstract

Artificial intelligence (AI) is increasingly positioned as a staple of future societies and has already begun to influence many fields, including the semiconductor industry; AI semiconductors have emerged as a result. These chips underpin AI tasks such as image recognition, natural language processing, and autonomous decision-making, and are optimized for speed, energy efficiency, and real-time data processing. This article describes a specialized class of advanced AI semiconductor known as the neuromorphic chip. Neuromorphic chips offer novel capabilities not previously possible in the artificial intelligence field. This article surveys the fundamental learning rules, algorithms, and models used in these chips, focusing on the Hopfield neural network model, Voltage-Dependent Synaptic Plasticity, and the Izhikevich model.


Introduction

Neuromorphic computing is a computing paradigm that mimics the brain's neural networks. Neuromorphic chips allow machines to process information in a way that resembles how humans process data. These chips excel at pattern, image, and speech recognition because they learn and adapt, answering problems based on previously seen solutions. They offer the benefits of compact size, low power consumption, and robust operation, and they can assist the healthcare field in recognizing and diagnosing diseases through advanced data processing and reasoning. These chips are designed using different neural network models, which provide important properties such as parallelism, asynchronous operation, and on-device learning. Widely used models include the Hopfield neural network model, the Izhikevich model, the integrate-and-fire model, and the Spike Response Model. The functionality and development of neuromorphic chips also depend on several crucial algorithms and learning rules, such as the Hebb learning rule, Spike-Timing-Dependent Plasticity, and Voltage-Dependent Synaptic Plasticity. The integration of Spiking Neural Networks (SNNs) onto CMOS-compatible platforms is an integral part of ongoing research into neuromorphic architecture, and platforms such as Intel's Loihi and IBM's TrueNorth already employ neuromorphic computing for various applications.

One important factor within neuromorphic computing is the use of neural networks to process data. Neural networks offer high processing speeds, are adept at learning to solve problems from previous data, and closely imitate biological neural networks, in which neurons communicate at synapses. An individual neuron is slow at processing information, but collectively, processing information across many synapses, neurons can exceed the power of supercomputers while providing a large degree of fault tolerance: the system can still operate even as it loses neurons, as neurons die daily.


Although they are computationally powerful, neural networks still have disadvantages when used in neuromorphic architectures. For one, it can be tedious to provide the example data needed to train the system to recognize and solve future problems. The impact of these disadvantages ranges from large to small depending on the intended use of the network, and techniques exist to lessen their negative effects. Neural networks are a good option when there is adequate data for training, high processing speeds are needed, and the processing method must be robust enough to withstand a large amount of input.


Both biological and artificial neural networks share the property of adaptability: they learn, changing their responses in reaction to external signals. The first model of an artificial neural network (ANN) was introduced by McCulloch and Pitts in a paper published in 1943; it was a non-linear function that translated inputs to outputs (i.e., x_i to z). In this model, each input x_i is multiplied by a weight, or parameter ("synaptic strength"), and the weighted input signals are then summed to give the unit's total input.


Neural networks consist of layers, most notably an input layer, hidden layers, and an output layer. The activations of the neurons in the input layer stimulate activations in the hidden layers, and so forth. This is analogous to how neurons work in biological neural networks, where some neurons firing at synapses cause other neurons to fire as well. Each connection between a neuron and the following layer is assigned a weight; the incoming activations are multiplied by their weights and summed, a bias is added, and the result is passed through an activation function such as the sigmoid, which compresses the value to lie between 0 and 1. The process of learning is adjusting the weights and biases to find the best balance. In compact form, a neuron computes a(x) = σ(wx + b), where wx + b is the weighted sum of the inputs plus the bias.
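As a concrete illustration, the short Python sketch below computes the activation of one such neuron from its inputs. It is a minimal example of the weighted-sum-plus-bias computation described above, with hypothetical weight and bias values, not code from any particular neuromorphic platform.

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron_activation(x, w, b):
    # Weighted sum of inputs plus bias, passed through the sigmoid: a = sigma(w.x + b)
    return sigmoid(np.dot(w, x) + b)

# Hypothetical example values: three inputs feeding one neuron.
x = np.array([0.5, 0.1, 0.9])   # activations from the previous layer
w = np.array([0.4, -0.7, 0.2])  # connection weights (learned parameters)
b = 0.1                         # bias

print(neuron_activation(x, w, b))  # a value between 0 and 1
```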


The paper is organized as follows. The first section discusses the Hopfield network model and its relation to the Hebb learning rule. The second section discusses Voltage-Dependent Spike Plasticity. The third section discusses the Izhikevich model. The final section summarizes the overall review of these neural network models, algorithms, and learning rules, and briefly discusses the future possibilities of neuromorphic computing.


  1. Hopfield Neural Network Model


Hopfield's neural network model is one of the most significant algorithms in the development of neural network systems for neuromorphic chips. Professor J. J. Hopfield, a physicist at the California Institute of Technology, proposed this single-layer feedback neural network in 1982. A Hopfield neural network can accurately identify objects and digital signals even when they are contaminated by noise [1].


This can be used as an associative memory. It is a recurrent neural network that has feedback connections from output to input. All neurons have the same structure, and they are interconnected. Each neuron gets feedback information through connection weights, and the signal is transferred both in positive and negative directions. This kind of design allows all the outputs of the neurons to be controlled by all the neurons [2].


Hopfield’s neural network model can be classified into a discrete Hopfield neural network (DHNN) and a continuous Hopfield neural network (CHNN).


The discrete Hopfield neural network is primarily used for associative memory, while the continuous Hopfield neural network is primarily used for optimization calculations [2]. The difference between them is whether the activation function is a discrete step function or a continuous function [1].

The neuron activation function of the discrete Hopfield network is the sign function: the node states of the network take the binary values +1 and -1, and the network can be described as a set of nonlinear difference equations [2].


In the discrete Hopfield neural network, both input and output are binarized [2]. The synaptic weight between neuron i and neuron j is W_ij, so for a Hopfield neural network with N neurons, the weight matrix has size N × N. Neurons in the network are symmetrically connected, i.e., W_ij = W_ji.

The fully connected network is shown in Figure 1, where the circles represent neurons. The output of each neuron serves as an input to the other neurons, meaning the input of each neuron comes from other neurons [1]. Every neuron of this network takes only discrete binary values (0 and 1, or equivalently -1 and +1, depending on the convention).



Figure 1. Discrete Hopfield Neural Network Diagram [2]

In Figure 1, y denotes the neuron output at the current moment [2]. Each neuron i has a current state u_i and an output v_i [3].

The relationship between state and output is given by the discrete Hopfield neural network evolution equations below.
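A standard form of these evolution equations, consistent with the variables defined here (see [2], [3] for the exact formulation), is:

u_i(t+1) = Σ_j W_ij · v_j(t) + I_i
v_i(t+1) = f( u_i(t+1) )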

Here, I_i is the continuous external input of neuron i, and f(·) is the activation function of the network. When used for associative memory applications, the weights remain stable after network training is completed.


The overall network thus has two variable parameters: the state and the output of its neurons, both of which are updated over time. The model is discrete and stochastic, because the neurons are updated in random order. As the network is updated, if the weight matrix is symmetric with a non-negative diagonal, the energy function decreases to a minimum as the overall system converges to a stable state [2]. When the DHNN's connection weights are designed so that a stable state of the system corresponds to a stored pattern, the required weight matrix W can be obtained through the learned memory of the network. The DHNN can learn using Hebb's rule [2].
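For reference, the energy (Lyapunov) function commonly associated with the discrete Hopfield network, consistent with the convergence argument above, can be written as:

E = -(1/2) Σ_i Σ_j W_ij · v_i · v_j - Σ_i I_i · v_i

Each asynchronous neuron update either decreases E or leaves it unchanged, which is why the network settles into a stable, minimum-energy state.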


Hebb's rule is a fundamental working principle of artificial neural networks. It was the first rule used to develop a learning algorithm for unsupervised neural networks [4]. It is believed that through repetition and continuous learning, artificial neural networks can strengthen certain connections and increase their adaptability [5]. Hebb's rule states that learning occurs through changes at the synaptic gaps [1]: the weight connecting two neurons increases when the two neighbouring neurons are activated (or deactivated) at the same time [5]. This rule has similarities with the biological theory of the conditioned reflex [2].


The Hebbian learning rule in the Hopfield neural network can be expressed mathematically by the following equation [2]:
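A standard form of this update, consistent with the variables defined below, is:

W_ij(n+1) = W_ij(n) + η · Y_i · X_j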

Here, W_ij(n) represents the connection weight from node j to node i, and W_ij(n+1) is the connection weight from node j to node i after the (n+1)-th adjustment. η is the learning-rate parameter, and Y_i and X_j are the outputs of node i and node j, respectively [2].

In a Hopfield neural network, the Hebb rule is applied to a set of q different input samples P_{n×q} = [p^1, p^2, p^3, …, p^q]. The rule adjusts the weight matrix W so that when any one of the input samples p^k (k = 1, 2, 3, …, q) is given as the initial value of the neural network, the overall system converges to the corresponding stored sample vector [2].
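To make the storage-and-recall idea concrete, the following short Python sketch (an illustrative example, not code from the cited works) builds a weight matrix for a few ±1 patterns with a Hebbian outer-product rule and then recalls a stored pattern from a noisy initial state using asynchronous updates:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    # Hebbian (outer-product) storage: W_ij accumulates x_i * x_j for each
    # stored pattern; the diagonal is zeroed (no self-connections).
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[0]

def recall(W, state, steps=200):
    # Asynchronous updates: pick a random neuron and set it to the sign of its input.
    state = state.copy()
    n = len(state)
    for _ in range(steps):
        i = rng.integers(n)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Two hypothetical 8-bit patterns with +1/-1 states.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
W = train_hebbian(patterns)

noisy = patterns[0].copy()
noisy[:2] *= -1             # flip two bits to simulate noise
print(recall(W, noisy))     # typically converges back to patterns[0]
```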



  2. Voltage Dependent Spike Plasticity (VDSP)


Plasticity, or adaptability to environmental changes, is essential to understanding how neurons learn and adapt to learned solutions. William James was the first to use the term plasticity in the field of neurology, in his book The Principles of Psychology; he gathered inspiration from many other sources, such as William Benjamin Carpenter, Léon Dumont, and other philosophers and theorists [6]. In subsequent years, scholars made increasing observations regarding plasticity and its connection to the neuronal world. In particular, Eugenio Tanzi drew a connection between neurons and plasticity, Ernesto Lugaro linked neural plasticity with synaptic plasticity, and Cajal completed Tanzi's theory with his hypothesis that plasticity arises through new connections formed between neurons. These are just some of the influential figures who contributed to the ideas behind neuronal plasticity as it is thought of today. Further information on the historical aspect can be found in [6].


Biological synaptic plasticity is the brain's ability to adapt and learn from experience, affecting ensuing behaviours and thoughts. Short-term synaptic plasticity (STSP) is triggered when there are rapid, temporary changes in the strength of a synapse due to prior synaptic activity. These changes in synaptic efficacy can last from milliseconds to minutes and fall into two categories: short-term facilitation (STF) and short-term depression (STD). STF occurs when repeated stimulation strengthens a synapse; STD, on the contrary, occurs when repeated stimulation weakens synaptic strength instead of increasing it. Analogous to short-term plasticity is long-term plasticity, which consists of longer-lasting modifications of behaviour due to changes in synaptic efficacy; it includes phenomena paralleling those of STSP, such as long-term facilitation (LTF) and long-term depression (LTD) [7].

 

Although there are a variety of neural networks (e.g., ANNs, RNNs), Spiking Neural Networks (SNNs) most closely mimic the biological spikes used by our brains. In this section of the paper, we discuss spike plasticity, specifically Voltage Dependent Spike Plasticity (VDSP).

 

Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning strategy that uses the correlation between presynaptic and postsynaptic spikes to modify synaptic efficacy. STDP uses this synaptic learning to make modifications and adapt based on previous data. One of the major issues with STDP is the level of precision required for the spike times that must be stored in memory [8].


With STDP there is a cost to this gain: a memory requirement for storing spike times for each neuron, along with a large energy requirement. There is also increased circuit complexity, which in turn decreases the benefit of using this approach with low-power memory devices. The STDP model describes a change in weight, Δw ∈ ℝ (an element of the set of real numbers), where w is the synaptic weight. It can be modelled through the following equation:
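A standard pair-based form of this rule, consistent with the description that follows (the exact constants and notation in [8] may differ), is:

Δw = A₊ · exp(−Δt / τ₊)   if Δt > 0 (presynaptic spike before postsynaptic spike: potentiation)
Δw = −A₋ · exp(Δt / τ₋)   if Δt < 0 (postsynaptic spike before presynaptic spike: depression)

where Δt = t_post − t_pre, and τ₊ and τ₋ are the potentiation and depression time constants.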


This equation states that the change in weight is proportional to two exponential functions: the first is governed by the time constant for synaptic potentiation (+), while the second is governed by the time constant for synaptic depression (−).


Voltage Dependent Spike Plasticity (VDSP) is an unsupervised learning rule similar to Spike-Timing-Dependent Plasticity (STDP), but it addresses some of the issues present in STDP. Unlike STDP, VDSP can be more easily integrated into memory on computer hardware and does not strictly require the timing of synaptic spikes to influence the adaptation of synaptic weights in the way STDP does.


VDSP is used with leaky integrate-and-fire (LIF) models: models that implement a leaky element, spiking, and various extensions (e.g., ELIF and AELIF) to closely mimic the brain's way of processing input. The LIF model describes how, once a neuron's membrane potential reaches a threshold, the neuron spikes and the potential returns to its resting value; the neuron then enters a refractory period before the process starts again.


The LIF neuron model was used for the presynaptic neuron layers. The related equation is[8] :
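A standard leaky integrate-and-fire formulation consistent with the terms defined below (the precise equation used in [8] may differ in detail) is:

τ_m · dv/dt = −v + I + b

with a spike emitted when v crosses the threshold v_th, after which v is reset to v_reset and held there for the refractory period t_ref.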

Here τ_m is the membrane leak time constant, v is the membrane potential, I is the injected current, and b is a bias [8]. When the membrane potential crosses a threshold potential (v_th), the neuron emits a spike; it then becomes insensitive to any input for the refractory period (t_ref), and the neuron potential is reset to the reset voltage (v_reset) [8].
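The following short Python sketch (an illustrative example with arbitrary parameter values, not code from [8]) simulates these LIF dynamics with simple Euler integration:

```python
import numpy as np

# Hypothetical parameters (arbitrary units) for illustration only.
tau_m   = 20.0    # membrane leak time constant
v_th    = 1.0     # spiking threshold
v_reset = 0.0     # reset potential after a spike
t_ref   = 5       # refractory period, in time steps
b       = 0.0     # bias term
dt      = 1.0     # integration time step

def simulate_lif(I, steps=200):
    """Euler integration of tau_m * dv/dt = -v + I + b with threshold and reset."""
    v, refractory = 0.0, 0
    spikes, trace = [], []
    for t in range(steps):
        if refractory > 0:
            refractory -= 1          # neuron ignores input while refractory
        else:
            v += dt / tau_m * (-v + I[t] + b)
            if v >= v_th:            # threshold crossing: emit a spike
                spikes.append(t)
                v = v_reset
                refractory = t_ref
        trace.append(v)
    return spikes, trace

I = np.full(200, 1.5)                # constant injected current
spikes, _ = simulate_lif(I)
print("spike times:", spikes)
```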


VDSP relies on weight dependence (also referred to as the multiplicative Hebbian learning rule), in which both the present weight value and the weight-update magnitude are taken into account [8]. During synaptic potentiation the weight update is proportional to (W_max − W), the maximum weight minus the current weight, while during synaptic depression the weight update is proportional to W, the current weight [8]. This weight dependence is especially important in the VDSP model because it means VDSP does not require hard bounds on the weights. The weight update can be described by the following equation:
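A sketch of such a soft-bounded update, consistent with the proportionalities stated above (the membrane-potential gating and the exact constants follow [8], which should be consulted for the precise form), is:

ΔW = +η · (W_max − W)   for potentiation
ΔW = −η · W             for depression

where η is a learning rate and, at the moment of a postsynaptic spike, the presynaptic neuron's membrane potential determines whether the synapse is potentiated or depressed.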


  3. Izhikevich Model


First presented in 2003, the Izhikevich model is a simplification of the Hodgkin-Huxley neuron model that maintains a high degree of accuracy while cutting computing costs significantly [9]. Whereas the Hodgkin-Huxley model is all but exclusively used for biological modelling, the Izhikevich model is efficient enough to be usable in neuromorphic systems. Its main strengths in this field are that it avoids the complexity of the Hodgkin-Huxley model's many coupled differential equations and that a wide variety of behaviours can be achieved quite easily. The equation is as follows:

Eq. 1. Adapted from [10]. Here v, u, and I represent the neuron's membrane potential, the membrane recovery variable, and the direct-current (DC) input, respectively. The parameters a, b, c, and d change the behaviour of the model and can be manipulated to create wildly differing results. They represent, respectively, the time scale of u, the sensitivity of u to v, the reset value of v after a spike, and the amount added to u after a spike. The model uses 30 mV as the peak of the spike rather than as a threshold, as might be assumed [11].
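For reference, the standard form of the model, as given in [9] and [11], is:

dv/dt = 0.04·v² + 5·v + 140 − u + I
du/dt = a·(b·v − u)
if v ≥ 30 mV, then v ← c and u ← u + d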

The Izhikevich model can exhibit a variety of behaviours, as mentioned previously; to the authors' knowledge, 23 distinct firing patterns can be created by manipulating just a, b, c, and d. These are organized into Table 1, condensed from [11] and [12]. Of course, further behaviours can be created by modifying the model, and a few such modifications can be found in [13] and [14]; for the sake of brevity, those are not described in this paper. A visual of each behaviour can be seen in Fig. 2.


Tab. 1. Behavior types and descriptions with the Izhikevich model.

Behavior Name - Behavior Action

Tonic Spiking - Regular, consistent action potentials are fired in response to a constant input current.
Phasic Spiking - When the stimulus begins, a burst of spikes occurs before ceasing altogether.
Tonic Bursting - A series of spike bursts followed by a period of quiescence.
Phasic Bursting - When the stimulus begins, a burst of spikes occurs before a prolonged period of quiescence.
Spike Frequency Adaptation - Initial spike firing at a high frequency in response to a stimulus, but the frequency decreases over time.
Rebound Spiking - After a period of hyperpolarization, a burst of spikes is fired upon returning to a depolarized state.
Rebound Bursting - Following a hyperpolarization, a burst of spikes is fired upon depolarization, with each burst followed by a period of quiescence.
Threshold Variability - Irregular firing patterns due to the neuron's firing threshold varying over time.
Bistability - Stable existence in two different states: a quiescent state and an active firing state.
Depolarizing After-Potential - After a spike, a prolonged depolarization occurs before returning to the resting membrane potential.
Accommodation - Response to a constant stimulus decreases over time, leading to a reduction in firing rate.
Inhibition-Induced Spiking - The neuron fires spikes in response to inhibitory inputs, an irregular phenomenon.
Inhibition-Induced Bursting - The neuron fires bursts of spikes in response to inhibitory inputs.
Class 1 Excitable - A gradual increase in firing rate as the stimulus intensity increases, without a threshold.
Class 2 Excitable - A sharp increase in firing rate once a threshold is crossed.
Spike Latency - A delay between the onset of a stimulus and the initiation of spikes occurs.
Subthreshold Oscillations - Oscillatory behavior below the threshold for spike generation.
Resonator - Oscillatory behavior in response to periodic stimuli.
Integrator - Integrates incoming stimuli over time, leading to a gradual buildup to threshold and subsequent firing.
Mixed Mode - A combination of different firing patterns, such as tonic spiking and bursting, depending on the stimulus.
All or Nothing - Fires a full spike or nothing at all; small sub-threshold inputs do not cause partial spikes, and only full spikes occur once the threshold is crossed.
Refractory Period - After firing, a period of either complete unresponsiveness or reduced excitability to further stimulation occurs.
Excitation Block - After exposure to very high sustained input, firing stops even though the neuron is excited; it becomes locked in a depolarized state.

Note: Data synthesized from [11], [12]


Figure 2. The 23 distinct behaviours possible with the unmodified Izhikevich model, to the authors' knowledge; these can be condensed into more general descriptions, as Izhikevich did in his later textbook [15]. Original Izhikevich model figure from [11], adapted and extended by [12]. Licensed under http://creativecommons.org/licenses/by/4.0/.


One notable mechanism that can be achieved with the Izhikevich model is chaotic resonance. With the parameters a = 0.2, b = 2, c = −56, and d = −16, chaotic resonance can be achieved [14, 16, 17, 18, 19, 20]. Such regimes can be identified by calculating the Lyapunov exponents, a process shown in [21], [22], and [23]. The phenomenon amplifies weak signals without external noise, and optimal signal detection occurs when the system is poised between order and chaos, providing a balance of flexibility and stability [16]. While the model was originally a mainly biological one, it has transitioned quite naturally into neuromorphic computing, and its implementation on Loihi 2 is currently being studied [24]. Due to its efficiency, the model is favoured for simulating huge brain-scale networks, especially when combined with synaptic plasticity. A simple model of the neuronal structure can be seen in Fig. 3. The final main frontier of research into the Izhikevich model is its memristive and electromagnetic-induction extensions, which transform the previously two-dimensional system into a multi-dimensional one and add interpretability and phenomenological realism, especially when modelling disruption, field effects, or energy-mediated coupling [25, 26].
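As an illustration, the following Python sketch (not code from the cited works) integrates the standard Izhikevich equations with the chaotic-resonance parameter set mentioned above; the constant input current and integration settings are assumed values chosen for demonstration:

```python
import numpy as np

def izhikevich(a, b, c, d, I, dt=0.25, steps=4000):
    """Euler integration of the standard Izhikevich model:
       dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u),
       with reset v <- c, u <- u + d whenever v reaches the 30 mV peak."""
    v, u = -65.0, b * -65.0
    spikes, trace = [], []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike peak reached
            trace.append(30.0)
            v, u = c, u + d           # after-spike reset
            spikes.append(t * dt)
        else:
            trace.append(v)
    return np.array(trace), spikes

# Parameter set discussed above for chaotic dynamics [14, 16]; I is an assumed constant input.
trace, spikes = izhikevich(a=0.2, b=2.0, c=-56.0, d=-16.0, I=-99.0)
print(f"{len(spikes)} spikes in {len(trace) * 0.25:.0f} ms")
```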

Figure 3. The logic structure of the Izhikevich model with limited neurons [27]
  4. Conclusion

Artificial intelligence is developing rapidly, and its implementation in semiconductors is already shaping the future of the semiconductor industry; neuromorphic chips are a part of this. With visionary uses ranging from advanced image and video recognition to edge AI, neuromorphic computing is expected to reshape many AI technologies, and it also has potential uses in robotics and neuroscience research. In this paper, a few of the neural network models, algorithms, and learning rules used in neuromorphic chips were reviewed; they form the basis of neuromorphic computing. Ongoing advancements in learning rules, algorithms, and neural networks, and in their applications, are actively shaping and driving the progress of neuromorphic computing.


References


[1]

Z. Yu, A. M. Abdulghani, A. Zahid, H. Heidari, M. A. Imran, and Q. H. Abbasi, “An Overview of Neuromorphic Computing for Artificial Intelligence Enabled Hardware-Based Hopfield Neural Network,” IEEE Access, vol. 8, pp. 67085–67099, 2020, doi: https://doi.org/10.1109/access.2020.2985839.


[2]

Z. Yu, M. Imran, S. Ansari, H. T. Abbas, A. M. Abdulghani, and H. Heidari, “Hardware-Based Hopfield Neuromorphic Computing for Fall Detection,” Sensors, vol. 20, no. 24, pp. 7226–7226, Dec. 2020, doi: https://doi.org/10.3390/s20247226.



[3]


[4]

D. Team, “Introduction to Learning Rules in Neural Network,” DataFlair, Jul. 25, 2017. https://data-flair.training/blogs/learning-rules-in-neural-network/

[5]

The Decision Lab, “Hebbian Learning,” The Decision Lab. https://thedecisionlab.com/reference-guide/neuroscience/hebbian-learning

[6]

C. A. Blanco, “The principal sources of William James’ idea of habit,” Frontiers in Human Neuroscience, vol. 8, May 2014, doi: https://doi.org/10.3389/fnhum.2014.00274.


[7]

A. Citri and R. C. Malenka, “Synaptic Plasticity: Multiple Forms, Functions, and Mechanisms,” Neuropsychopharmacology, vol. 33, no. 1, pp. 18–41, Aug. 2007, doi: https://doi.org/10.1038/sj.npp.1301559.


[8]

N. Garg et al., “Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential,” Frontiers in neuroscience, vol. 16, Oct. 2022, doi: https://doi.org/10.3389/fnins.2022.983950.


[9]

E. M. Izhikevich, “Which Model to Use for Cortical Spiking Neurons?,” IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1063–1070, Sep. 2004, doi: https://doi.org/10.1109/tnn.2004.832719.

[10]

Z. Karaca, N. Korkmaz, Y. Altuncu, and R. Kılıç, “An extensive FPGA-based realization study about the Izhikevich neurons and their bio-inspired applications,” Nonlinear Dynamics, vol. 105, no. 4, pp. 3529–3549, Aug. 2021, doi: https://doi.org/10.1007/s11071-021-06647-1.


[11]

E. M. Izhikevich, “Which Model to Use for Cortical Spiking Neurons?,” IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1063–1070, Sep. 2004, doi: https://doi.org/10.1109/tnn.2004.832719.

[12]

W. Yi, K. K. Tsang, S. K. Lam, X. Bai, J. A. Crowell, and E. A. Flores, “Biological plausibility and stochasticity in scalable VO2 active memristor neurons,” Nature Communications, vol. 9, no. 1, Nov. 2018, doi: https://doi.org/10.1038/s41467-018-07052-w.


[13]

F. Jia, P. He, and L. Yang, “A Novel Coupled Memristive Izhikevich Neuron Model and Its Complex Dynamics,” Mathematics, vol. 12, no. 14, pp. 2244–2244, Jul. 2024, doi: https://doi.org/10.3390/math12142244.


[14]

S. Nobukawa, H. Nishimura, T. Yamanishi, and J.-Q. Liu, “Analysis of Chaotic Resonance in Izhikevich Neuron Model,” PLoS ONE, vol. 10, no. 9, pp. e0138919–e0138919, Sep. 2015, doi: https://doi.org/10.1371/journal.pone.0138919.


[15]

“Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting,” Izhikevich.org, 2025. https://www.izhikevich.org/publications/dsn/


[16]

S. Nobukawa, H. Nishimura, and T. Yamanishi, “Chaotic Resonance in Typical Routes to Chaos in the Izhikevich Neuron Model,” Scientific Reports, vol. 7, no. 1, May 2017, doi: https://doi.org/10.1038/s41598-017-01511-y.


[17]

S. Nobukawa, H. Nishimura, and T. Yamanishi, “Routes to Chaos Induced by a Discontinuous Resetting Process in a Hybrid Spiking Neuron Model,” Scientific Reports, vol. 8, no. 1, Jan. 2018, doi: https://doi.org/10.1038/s41598-017-18783-z.


[18]

 S. Nobukawa, H. Nishimura, T. Yamanishi, and J.-Q. Liu, “Evaluation of resonance phenomena in chaotic states through typical routes in Izhikevich neuron model,” in Proc. 2015 Int. Symp. Nonlinear Theory Its Appl. (NOLTA2015), Dec. 1–4, 2015, pp. 435–438. [Online]. Available: https://www.ieice.org/nolta/symposium/archive/2015/articles/B2L-B2-6195.pdf

[19]

S. Nobukawa, H. Nishimura, T. Yamanishi, and J.-Q. Liu, “Analysis of routes to chaos in Izhikevich neuron model with resetting process,” 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS), pp. 813–818, Dec. 2014, doi: https://doi.org/10.1109/scis-isis.2014.7044746.

[20]

S. Nobukawa, H. Nishimura, T. Yamanishi, and J.-Q. Liu, “Chaotic States Induced By Resetting Process In Izhikevich Neuron Model,” Journal of Artificial Intelligence and Soft Computing Research, vol. 5, no. 2, pp. 109–119, Apr. 2015, doi: https://doi.org/10.1515/jaiscr-2015-0023.


[21]

Y. Kim, “Identification of Dynamical States in Stimulated Izhikevich Neuron Models by Using a 0-1 Test,” Journal of the Korean Physical Society, vol. 57, no. 6, pp. 1363–1368, Dec. 2010, doi: https://doi.org/10.3938/jkps.57.1363.

[22]

F. Bizzarri, A. Brambilla, and G. Storti Gajani, “Lyapunov exponents computation for hybrid neurons,” Oct. 2013, doi: https://doi.org/10.1007/s10827-013-0448-6.

[23]

S. Lynch, Dynamical Systems with Applications using MATLAB®. Cham: Springer International Publishing, 2014. doi: https://doi.org/10.1007/978-3-319-06820-6.


[24]

R. B. Uludağ, S. Çağdaş, Y. S. İşler, N. S. Şengör, and İ. Aktürk, “Bio-realistic neural network implementation on Loihi 2 with Izhikevich neurons,” Neuromorphic Computing and Engineering, vol. 4, no. 2, p. 024013, Jun. 2024, doi: https://doi.org/10.1088/2634-4386/ad5584.

[25]

P. Stoliar, O. Schneegans, and M. J. Rozenberg, “Biologically Relevant Dynamical Behaviors Realized in an Ultra-Compact Neuron Model,” Frontiers in Neuroscience, vol. 14, May 2020, doi: https://doi.org/10.3389/fnins.2020.00421.


[26]

Y. Yang, J. Ma, Y. Xu, and Y. Jia, “Energy dependence on discharge mode of Izhikevich neuron driven by external stimulus under electromagnetic induction,” Cognitive Neurodynamics, vol. 15, no. 2, pp. 265–277, May 2020, doi: https://doi.org/10.1007/s11571-020-09596-4.


[27]

H. Wang and H. Wang, “Improvement of Izhikevich’s Neuronal and Neural Network Model,” 2009 International Conference on Information Engineering and Computer Science, pp. 1–4, Dec. 2009, doi: https://doi.org/10.1109/iciecs.2009.5363122.


Figure 1.

Discrete Hopfield Neural Network Diagram

Reprinted from Z. Yu, M. Imran, S. Ansari, H. T. Abbas, A. M. Abdulghani, and H. Heidari, “Hardware-Based Hopfield Neuromorphic Computing for Fall Detection,” Sensors, vol. 20, no. 24, pp. 7226–7226, Dec. 2020, doi: https://doi.org/10.3390/s20247226.

Figure 3.

The logic structure of the Izhikevich model with limited neurons.

Adapted from [27]










