Saturday, May 10, 2008

Artificial Neural Networks/Activation Functions

Activation Functions

There are a number of common activation functions in use with neural networks. This is not an exhaustive list.

Step Function

A step function is the kind of activation function used by the original Perceptron. The output is a certain value, A1, if the input sum is above a threshold, and A0 if it is below. The values used by the Perceptron were A1 = 1 and A0 = 0.

These kinds of step activation functions are useful for binary classification schemes. In other words, when we want to classify an input pattern into one of two groups, we can use a binary classifier with a step activation function. Another use for this would be to create a set of small feature identifiers. Each identifier would be a small network that would output a 1 if a particular input feature is present, and a 0 otherwise. Combining multiple feature detectors into a single network would allow a very complicated clustering or classification problem to be solved.

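As a concrete illustration, here is a minimal sketch of a step activation in Python with NumPy (the function and parameter names are illustrative, not taken from any particular library):

import numpy as np

def step(x, threshold=0.0, a1=1.0, a0=0.0):
    # Return A1 when the input sum exceeds the threshold, A0 otherwise.
    return np.where(x > threshold, a1, a0)

# A single-neuron binary classifier: the weighted input sum is passed through the step.
weights = np.array([0.5, -0.3, 0.8])   # illustrative weights
pattern = np.array([1.0, 0.0, 1.0])    # one input pattern
print(step(weights @ pattern))         # 1.0, because 0.5 + 0.8 = 1.3 > 0
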
Linear Combination

In a linear combination, the weighted input sum of the neuron plus a linearly dependent bias becomes the system output. Specifically:

y = ζ + b

Here ζ is the weighted input sum and b is the bias. In these cases the sign of the output is taken as equivalent to the 1 or 0 of the step function systems, which makes the two methods equivalent if the step threshold θ satisfies

θ = − b

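A short sketch of this equivalence, assuming ζ is the weighted input sum (the function names are illustrative):

def linear_output(zeta, b):
    # Linear combination: the weighted input sum plus the bias is the output.
    return zeta + b

def step_output(zeta, theta):
    # Step activation with threshold theta.
    return 1.0 if zeta > theta else 0.0

# With theta = -b, the sign of the linear output agrees with the step output.
b = 0.4
theta = -b
for zeta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert (linear_output(zeta, b) > 0) == (step_output(zeta, theta) == 1.0)
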
Continuous Log-Sigmoid Function

A log-sigmoid function, also known as a logistic function, is given by the relationship:

\Pi(t) = \frac{1}{1 + e^{-\beta t}}

where β is a slope parameter. This is called the log-sigmoid because a sigmoid can also be constructed using the hyperbolic tangent function instead of this relation; in that case it would be called a tan-sigmoid. Here, we will refer to the log-sigmoid simply as the “sigmoid”. The sigmoid has the property of being similar to the step function, but with the addition of a region of uncertainty. Sigmoid functions in this respect are very similar to the input-output relationships of biological neurons, although not exactly the same.

Sigmoid functions are also prized because their derivatives are easy to calculate, which is helpful for calculating the weight updates in certain training algorithms. The derivative is given by:

\frac{d\Pi(t)}{dt} = \Pi(t)[1 - \Pi(t)]

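A minimal NumPy sketch of the log-sigmoid and its derivative (the β factor in the derivative comes from the chain rule and disappears when β = 1):

import numpy as np

def sigmoid(t, beta=1.0):
    # Log-sigmoid (logistic) activation with slope parameter beta.
    return 1.0 / (1.0 + np.exp(-beta * t))

def sigmoid_derivative(t, beta=1.0):
    # Derivative written in terms of the sigmoid itself: beta * s * (1 - s).
    s = sigmoid(t, beta)
    return beta * s * (1.0 - s)

# Finite-difference check at a few points.
t = np.array([-2.0, 0.0, 2.0])
h = 1e-6
numeric = (sigmoid(t + h) - sigmoid(t - h)) / (2 * h)
print(np.allclose(numeric, sigmoid_derivative(t)))   # True
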
Continuous Tan-Sigmoid Function

A sigmoid can also be constructed from the hyperbolic tangent function, in which case it is called a tan-sigmoid:

y(t) = \tanh(t)

Its derivative can likewise be written in terms of the function itself:

\frac{dy(t)}{dt} = 1 - \tanh^2(t)

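And the corresponding sketch for the tan-sigmoid:

import numpy as np

def tan_sigmoid(t):
    # Tan-sigmoid activation: the hyperbolic tangent.
    return np.tanh(t)

def tan_sigmoid_derivative(t):
    # Derivative of tanh: 1 - tanh(t)**2.
    return 1.0 - np.tanh(t) ** 2

print(tan_sigmoid(0.0), tan_sigmoid_derivative(0.0))   # 0.0 1.0
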
Softmax Function

The softmax activation function is used predominantly in the output layer of a classification or clustering network. It converts the raw output values into posterior probabilities, which provide a measure of certainty. The softmax activation function is given as:

y_i = \frac{e^{\zeta_i}}{\sum_{j\in L} e^{\zeta_j} }

where ζ_i is the raw output of neuron i and L is the set of neurons in the output layer.

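As a sketch, a numerically stable softmax in NumPy (subtracting the maximum raw value before exponentiating is a standard trick to avoid overflow; it does not change the result):

import numpy as np

def softmax(zeta):
    # Convert the raw output values zeta into posterior probabilities.
    shifted = zeta - np.max(zeta)   # guard against overflow in exp
    exps = np.exp(shifted)
    return exps / np.sum(exps)

raw = np.array([2.0, 1.0, 0.1])    # illustrative raw outputs of the output layer
print(softmax(raw))                # probabilities that sum to 1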