
The Journey of Artificial Neural Networks (ANN)

Neural networks, inspired by the human brain's architecture, have become foundational in modern artificial intelligence (AI). Deep learning, a subset of machine learning, leverages these networks to model complex patterns in data.

Artificial Neural Networks (ANN)

Artificial Neural Networks (ANNs) have been a prominent area of research in artificial intelligence since the 1980s. Inspired by the structure and function of the human brain, ANNs abstract the neural processing mechanism into a simplified computational model. These networks are constructed by connecting numerous processing elements (neurons) based on different configurations to mimic how the brain processes and stores information.

In both academic and engineering contexts, these models are commonly referred to simply as neural networks. A neural network consists of a large number of interconnected nodes, each functioning as a neuron with a specific activation function that determines its output. 

The connections between these nodes are associated with weights, representing the strength or influence of the signal passing through them, analogous to memory in biological systems. The behavior and output of the network are determined by its structure, the values of the weights, and the type of activation functions used. Neural networks can be seen as approximations of algorithms or logical functions, capable of representing complex strategies.
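
To make this concrete, here is a minimal Python sketch of a single artificial neuron (the weights, bias, and choice of activation are illustrative assumptions, not taken from any particular model): the incoming signals are scaled by the connection weights, summed with a bias, and passed through a non-linear activation function that determines the output.

```python
import numpy as np

def neuron_output(inputs, weights, bias, activation=np.tanh):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a non-linear activation function."""
    z = np.dot(weights, inputs) + bias   # connection weights scale the incoming signals
    return activation(z)

# Example: three inputs with arbitrary illustrative weights
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.4])
print(neuron_output(x, w, bias=0.1))     # a single scalar output
```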

Each neuron in a neural network can represent a feature, concept, symbol, or abstract pattern. These neurons are typically categorized into input, output, and hidden units.

The input units receive data from external sources, the output units deliver the final results, and the hidden units, located between the input and output layers, perform intermediate transformations that are not directly observable from outside the network.
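
Extending that sketch, the toy forward pass below (layer sizes and random weights are arbitrary, for illustration only) shows the three kinds of units working together: the input vector enters the network, one hidden layer performs the intermediate transformation, and the output layer delivers the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 input units -> 4 hidden units -> 2 output units (sizes chosen arbitrarily)
W_hidden = rng.normal(size=(4, 3))   # weights from input layer to hidden layer
W_output = rng.normal(size=(2, 4))   # weights from hidden layer to output layer

def forward(x):
    hidden = np.tanh(W_hidden @ x)   # intermediate transformation, not directly observable
    return W_output @ hidden         # final result delivered by the output units

print(forward(np.array([1.0, 0.5, -0.2])))
```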

The weights connecting neurons reflect the strength of relationships between them, and the way these connections are arranged underpins how information is represented and processed within the network. 

Fundamentally, ANNs are adaptive, brain-inspired systems designed for parallel and distributed information processing. Rather than being explicitly programmed, they learn and evolve dynamically, emulating aspects of how the human brain processes information through its neural structures.

ANNs differ fundamentally from traditional AI methods that rely on predefined logic. They excel in handling intuitive and unstructured data, offering advantages such as adaptability, self-organization, and real-time learning. 

Due to these strengths, neural networks are widely used in fields ranging from neuroscience and cognitive science to artificial intelligence and computer science, serving as a cornerstone of interdisciplinary research and application.

The Journey of Artificial Neural Networks: From 1943 Till Now

The development of artificial neural networks can be divided into four historical phases.

First Phase: Rise

The rise stage took place between the 1940s and 1950s, when foundational theories and models were first introduced:

  • – 1943: psychologist Warren McCulloch and mathematician Walter Pitts introduced the M-P model, treating the neuron as a logic gate. This laid the groundwork for theoretical neural network research.
  • – 1949: psychologist Donald Hebb proposed the Hebbian learning rule in his book The Organization of Behavior, suggesting that learning occurs through the strengthening or weakening of synaptic connections based on neuron activity. This concept became central to learning mechanisms in neural networks.
  • – 1957: Frank Rosenblatt introduced the Perceptron model, an advancement of the M-P model. It featured adjustable weights and could classify input vectors through training, making it the first practical neural network model (a minimal sketch of the perceptron update appears after this list). Rosenblatt also highlighted the importance of hidden layers, pointing toward deeper architectures.
  • – 1959: engineers Widrow and Hoff developed the ADALINE (Adaptive Linear Neuron) model and the Widrow-Hoff learning rule (also known as the LMS or delta rule). This model used continuous values and was the first neural network successfully applied to real-world problems, marking a major step forward in practical neural network applications.
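
As a rough, modern-notation sketch of Rosenblatt's idea (our own simplification, not the original formulation), a perceptron can be trained by nudging each weight in proportion to the error between the target label and the current prediction. The same error-driven adjustment, applied to continuous outputs, is essentially the Widrow-Hoff delta rule.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Minimal perceptron: step activation, weights nudged toward correct labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += lr * (target - pred) * xi   # strengthen/weaken connections (cf. Hebb's rule)
            b += lr * (target - pred)
    return w, b

# Linearly separable toy data: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # [0, 1, 1, 1]
```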

Second Phase: Decline (Ebb)

The second stage in the history of neural networks is marked by a significant decline in interest and progress, largely due to critical theoretical limitations identified during this period:

  • – 1969: AI pioneers Marvin Minsky and Seymour Papert published a landmark book titled Perceptrons, in which they mathematically analyzed the limitations of single-layer neural networks. They demonstrated that perceptrons are incapable of solving non-linearly separable problems, such as the XOR function (a brief illustration follows this list). This revelation cast serious doubt on the potential of neural networks and led to a sharp decline in research enthusiasm, resulting in what is often referred to as the “AI winter” for neural networks, lasting for about a decade.
  • – 1972: Finnish professor Teuvo Kohonen introduced the Self-Organizing Feature Map (SOM), a type of unsupervised learning network that uses a competitive learning algorithm often summarized as “winner takes all.” SOM was notably different from earlier perceptron models and was applied to tasks like pattern and speech recognition where labeled data was unavailable.
  • – 1976: Stephen Grossberg introduced Adaptive Resonance Theory (ART), another influential neural network model. ART systems are characterized by their self-organizing nature and stability, allowing them to learn continuously while maintaining previously acquired knowledge.
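
To see the XOR limitation concretely, the hand-built two-layer threshold network below (weights chosen by hand purely for illustration) computes XOR, something a single-layer perceptron cannot represent because the two classes are not linearly separable.

```python
import numpy as np

def step(z):
    return (z > 0).astype(int)

def xor_two_layer(x):
    """Hand-crafted two-layer threshold network computing XOR.
    Hidden unit 1 detects OR, hidden unit 2 detects AND;
    the output fires when OR holds but AND does not."""
    W1 = np.array([[1, 1],     # OR detector
                   [1, 1]])    # AND detector (stricter threshold below)
    b1 = np.array([-0.5, -1.5])
    h = step(W1 @ x + b1)
    w2 = np.array([1, -1])     # OR minus AND
    b2 = -0.5
    return int(step(np.array([w2 @ h + b2]))[0])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_two_layer(np.array(x)))   # 0, 1, 1, 0
```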

Third Phase: Revival

The third phase in the evolution of neural networks marked a major revival, driven by a series of groundbreaking theoretical advancements and practical applications that reignited academic and industry interest:

  • – 1982: physicist John Hopfield introduced the discrete Hopfield network, using a concept known as the Lyapunov function (or energy function) to mathematically prove network stability.
  • – 1983: Kirkpatrick and colleagues introduced the simulated annealing algorithm, an approach rooted in thermodynamic principles. Geoffrey Hinton and Terry Sejnowski developed the Boltzmann machine, a probabilistic neural network model that introduced the key concept of hidden units, critical to building multi-layer networks.
  • – 1984: Hopfield extended his model by introducing a continuous version, changing neuron activation functions from binary to continuous.
  • – 1985: Hopfield’s models, particularly when applied to problems like the Traveling Salesman Problem, demonstrated how neural networks could solve complex optimization tasks using dynamic, non-linear equations.
  • – 1986: Rumelhart, Hinton, and Williams introduced the Backpropagation (BP) algorithm, enabling effective training of multi-layer neural networks by solving the problem of weight adjustment (a minimal sketch follows this list). This breakthrough proved that deep networks had powerful learning capabilities and could tackle a wide range of real-world problems.
  • – 1988: Chua and Yang proposed the Cellular Neural Network (CNN) model, inspired by cellular automata and suitable for large-scale nonlinear computation. Kosko introduced the Bidirectional Associative Memory (BAM), capable of unsupervised learning.
  • – 1991: Haken brought the theory of synergetics into neural network research, viewing cognition as a spontaneous process of pattern formation.
  • – 1994: Liao Xiaoxin developed mathematical foundations for CNNs and proposed extensions such as Delayed CNNs, Hopfield Networks, and generalized BAMs, significantly enriching neural network theory.
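
As a hedged, minimal illustration of the idea behind backpropagation (our own simplification, not the 1986 formulation), the sketch below trains a one-hidden-layer network on the XOR problem that defeated the single-layer perceptron, pushing the output error backwards through the chain rule to adjust both weight matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)       # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)       # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back to the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight adjustment
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]; depends on the random start
```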

Fourth Phase: Flourishing

The flourishing phase of neural network development began with the rise of Deep Learning (DL), a concept introduced by Hinton and colleagues in 2006. 

Deep learning represents a significant advancement in the field of machine learning. It involves building neural network models with multiple hidden layers, allowing for the extraction of highly representative features through training on large-scale datasets.

Unlike traditional neural networks, which were limited in depth, deep learning overcomes these constraints by enabling designers to freely choose the number of layers based on the complexity of the task, thus providing greater flexibility and learning capacity.
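
As a loose sketch of that flexibility (depth, layer widths, and activations below are arbitrary choices, not a recipe), a deep feedforward network can be assembled simply by stacking as many hidden layers as the task seems to warrant.

```python
import numpy as np

def build_network(layer_sizes, seed=0):
    """Create weight matrices for a feedforward network of arbitrary depth."""
    rng = np.random.default_rng(seed)
    return [rng.normal(scale=0.5, size=(n_out, n_in))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    for W in weights[:-1]:
        x = np.maximum(0.0, W @ x)   # ReLU hidden layers
    return weights[-1] @ x           # linear output layer

# The designer chooses the depth: here, four hidden layers of 16 units each
net = build_network([8, 16, 16, 16, 16, 3])
print(forward(net, np.ones(8)))
```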

Artificial Neural Networks: Capabilities

Self Learning

Neural networks can learn from examples rather than from explicitly programmed rules. For instance, in image recognition tasks, the network is trained using various image templates and their corresponding labels.

Over time, the network learns to identify and classify similar images on its own. This self-learning feature is especially valuable in predictive tasks, such as economic forecasting, market analysis, and profit prediction, making neural networks highly promising for future applications.

Associative Memory

In feedback-based neural network architectures, the system can form connections between inputs and outputs, enabling it to retrieve related information, mimicking human associative thinking.
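
A classic concrete instance of this behaviour is the Hopfield-style associative memory. The minimal sketch below (our own simplification, with arbitrary toy patterns) stores two ±1 patterns in a weight matrix using the Hebbian outer-product rule, then recovers one of them from a corrupted cue.

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])        # two stored +/-1 patterns

# Hebbian outer-product storage rule (self-connections removed)
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Repeatedly update all units until the state settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

noisy = np.array([1, -1, 1, -1, -1, -1])   # first pattern with one unit flipped
print(recall(noisy))                       # settles back on the first stored pattern
```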

Finding Optimal Solutions Quickly

Traditionally, solving complex problems involves extensive computation. However, by utilizing feedback-type neural networks designed specifically for optimization, combined with modern computing power, solutions can be reached quickly and efficiently.

Artificial Neural Networks: Characteristics

Non-linearity

Non-linear relationships are fundamental to natural systems, including human intelligence, which is inherently non-linear. Artificial neurons mimic this by switching between activation and inhibition states, reflecting non-linear behavior mathematically. 

Neural networks that include thresholds in their neurons tend to perform better, offering improved fault tolerance and data storage capacity.

Non-locality (Interconnectedness)

A neural network is made up of many interconnected neurons. Its overall function depends not only on individual neuron behavior but also on the interactions and interconnections among them. 

This mirrors the brain’s complexity, where a vast number of inter-neuronal connections produce sophisticated behavior. Associative memory serves as an example of such a system.

Non-constancy (Adaptive and Dynamic)

Artificial neural networks possess qualities such as self-adaptation, self-organization, and self-learning. They can process diverse types of data and adjust their internal structure as they learn. These networks operate as nonlinear dynamic systems that evolve over time, often described using iterative processes.

Non-convexity

The behavior of such systems can be influenced by a state function, like an energy function, where certain states correspond to stable system configurations. Non-convexity means the function has multiple minima, allowing the system to reach different stable states. This results in diverse evolutionary pathways for the network, increasing its flexibility and robustness in learning and problem-solving.
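
For concreteness, a standard example of such a state function is the Hopfield network's energy, written here in common textbook notation (the article itself does not give a formula):

```latex
E = -\tfrac{1}{2} \sum_{i} \sum_{j \neq i} w_{ij}\, s_i s_j + \sum_{i} \theta_i\, s_i
```

Here w_ij is the weight between units i and j, s_i is the state of unit i, and θ_i its threshold. Under asynchronous updates with symmetric weights, each step can only lower E or leave it unchanged, so the network settles into one of the many local minima of this non-convex function; which minimum it reaches depends on where it starts, giving the diverse stable states described above.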

Endnote

This article reviewed the foundational concepts and historical evolution of artificial neural networks, with a particular focus on deep learning. 

Drawing from academic contributions, we explored the progression from early models like the perceptron and Hopfield networks to modern deep learning architectures powered by multi-layered neural networks. 

Key properties such as non-linearity, adaptability, associative memory, and optimization capabilities were also discussed, emphasizing the computational strength and flexibility of neural networks. 

As deep learning continues to evolve, fueled by advances in algorithms and hardware, neural networks remain at the forefront of research and application in artificial intelligence, offering powerful tools for pattern recognition, forecasting, and autonomous learning across disciplines.
