Neural Networks
Individual Neuron
Individual neurons are computational units that read input features, represented as a one-dimensional vector x1 ... xn in the diagram below, and compute the hypothesis function as output. Note that x0 is not part of the feature vector; it represents a bias value for the unit.
A common choice is to use the logistic function as the hypothesis, in which case the unit is referred to as a logistic unit with a sigmoid (logistic) activation function.
The θ vector represents the model's parameters (the model's weights). For a multi-layer neural network, the model parameters are collected in matrices named Θ, which will be described below.
The x0 input node is called the bias unit, and it is optional. When provided, it is equal to 1.
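As a rough illustration, the sketch below computes the output of a single logistic unit, hθ(x) = sigmoid(θᵀx), with the bias unit x0 = 1 prepended to the feature vector. The function names (sigmoid, hypothesis) and the numeric values are hypothetical and chosen only for this example.

import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """Output of a single logistic unit: h(x) = sigmoid(theta' * x).

    theta : parameter vector of length n + 1 (includes the weight for the bias unit)
    x     : feature vector of length n (x1 ... xn, without the bias)
    """
    x_with_bias = np.concatenate(([1.0], x))  # prepend the bias unit x0 = 1
    return sigmoid(theta @ x_with_bias)

# Example: a unit with n = 3 input features
theta = np.array([-1.0, 0.5, 0.5, 0.5])  # theta[0] weights the bias unit
x = np.array([1.0, 2.0, 3.0])
print(hypothesis(theta, x))  # a value in (0, 1)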
Multi-Layer Neural Network
The Input Layer
The input nodes are known as the input layer, which is also conventionally named "layer 1". The input layer is fed the training set X.
m represents the number of samples in the training set.
n represents the number of features per sample.
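As a minimal sketch, the training set X can be viewed as an m × n matrix, with one row per sample and one column per feature; the array values below are hypothetical and serve only to show the shapes involved, including the optional bias column.

import numpy as np

# A hypothetical training set: m = 4 samples, n = 3 features per sample.
X = np.array([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
    [1.0, 1.1, 1.2],
])

m, n = X.shape   # m = 4 samples, n = 3 features
print(m, n)

# Layer 1 simply exposes the n features of each sample; with the optional
# bias unit, each sample becomes a vector of length n + 1.
X_with_bias = np.hstack([np.ones((m, 1)), X])
print(X_with_bias.shape)  # (4, 4)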