
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges".
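
As a concrete illustration, here is a minimal sketch of a single artificial neuron in Python: it sums the real-valued signals arriving on its incoming edges (scaled by the per-edge weights described in the next paragraph) and passes the total through a non-linear function, assumed here to be the logistic sigmoid. The function name and the numbers are illustrative, not taken from any particular library.

```python
import math

def artificial_neuron(signals, weights, bias=0.0):
    """Output of one neuron: a non-linear function of the weighted sum of its input signals."""
    # Each incoming signal is a real number, scaled by the weight of the edge it arrives on.
    total = sum(s * w for s, w in zip(signals, weights)) + bias
    # The non-linearity assumed here is the logistic sigmoid.
    return 1.0 / (1.0 + math.exp(-total))

# A neuron with three incoming edges (made-up example values).
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4]))  # prints a value between 0 and 1
```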

Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
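
The sketch below, under the same assumptions as above, shows neurons aggregated into layers and a signal traveling from the input layer through a hidden layer to an output neuron with a hard threshold; the layer sizes, weights, and threshold are made-up example values.

```python
import math

def layer(signals, weights, biases):
    """One layer of neurons: each row of weights holds one neuron's incoming edge weights."""
    return [
        1.0 / (1.0 + math.exp(-(sum(s * w for s, w in zip(signals, row)) + b)))
        for row, b in zip(weights, biases)
    ]

def threshold_neuron(signals, weights, threshold):
    """A neuron that only sends a signal (outputs 1) if the aggregate signal crosses the threshold."""
    return 1 if sum(s * w for s, w in zip(signals, weights)) > threshold else 0

# Illustrative (not learned) weights and biases.
hidden_w = [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]]  # hidden layer: 2 neurons, 3 incoming edges each
hidden_b = [0.0, 0.1]
output_w = [1.5, -1.0]                            # output neuron: 2 incoming edges

signal = [0.5, -1.2, 3.0]                   # input layer
signal = layer(signal, hidden_w, hidden_b)  # hidden layer transformation
print(threshold_neuron(signal, output_w, threshold=0.3))  # output layer: fires (1) or not (0)
```

In practice the weights would not be fixed by hand but adjusted as learning proceeds, for example by gradient descent on a training objective.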

The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network.
