Backpropagation is a method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The derivatives of the error with respect to the final output layer are computed first and then used to calculate the derivatives for the penultimate layer, which in turn are used for the preceding layer, and so on back to the first hidden layer.
It works by setting the network weights randomly and repeatedly computing corrections to them; the algorithm ends when the error value is acceptably small.
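The following is a minimal sketch of this procedure for a network with one hidden layer, written in Python with NumPy. The architecture (2-4-1 units), the sigmoid activation, the XOR training data, the learning rate, and the error tolerance are all illustrative assumptions, not part of the original text.

```python
# Minimal backpropagation sketch (assumed setup: 2-4-1 sigmoid network on XOR).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(y):
    # Derivative expressed in terms of the sigmoid output y = sigmoid(x).
    return y * (1.0 - y)

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets (assumed for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Set the network weights randomly.
W1 = rng.normal(size=(2, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

lr = 0.5          # gradient-descent step size (assumed)
tolerance = 1e-3  # stop once the mean squared error is acceptably small (assumed)

for epoch in range(100000):
    # Forward pass.
    H = sigmoid(X @ W1)        # hidden-layer activations
    Y = sigmoid(H @ W2)        # network output

    error = np.mean((T - Y) ** 2)
    if error < tolerance:      # end when the error value is acceptably small
        break

    # Backward pass: derivatives at the output layer first ...
    delta_out = (Y - T) * sigmoid_deriv(Y)
    # ... then propagated back to the preceding (hidden) layer.
    delta_hidden = (delta_out @ W2.T) * sigmoid_deriv(H)

    # Apply the weight corrections (gradient-descent update).
    W2 -= lr * H.T @ delta_out
    W1 -= lr * X.T @ delta_hidden

print(f"stopped after {epoch} epochs, MSE = {error:.5f}")
```

The backward pass illustrates the chain of derivatives described above: the output-layer term `delta_out` is computed first, and `delta_hidden` for the preceding layer is obtained from it by propagating through the weights `W2`.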