A research blog about the optimization of neural networks

by Thomas George

Derivatives through a batch norm layer

February 15, 2019

In this note, we write the derivations for the backpropagated gradient through the batch norm operation, and also the gradient w.r.t the weight matrix. This derivation is quite cumbersome, and after having tried many different approaches (column by column, line by line, element by element, etc.), we present here the one we think is simplest.

1 Notations

We focus on a single batch normalized layer in a fully connected network. The linear part is denoted $s = W x$ and parametrized by the weight matrix $W$. Then the batch normalization operation is computed by:

\begin{equation}
\mathrm{BN}(s) = \frac{s - \mathbb{E}[s]}{\sqrt{\mathrm{Var}[s]}}
\label{eq:bn}
\end{equation}

Note that we can rewrite it in terms of $W$ and $x$ directly, instead of computing the intermediate step $s = W x$:

\begin{equation}
\mathrm{BN}(W x) = \frac{W \left( x - \mathbb{E}[x] \right)}{\sqrt{\mathrm{diag}\left( W \Sigma_x W^\top \right)}}
\label{eq:bn_wx}
\end{equation}

Some remarks:

  1. These notations are not very precise since we mix up elementwise operations with linear algebra operations. Specifically, by an abuse of notation we divide a vector (the numerator) by another vector (the denominator), which is to be understood elementwise.
  2. Even if we only require the elementwise variance of the components of $s$, in \ref{eq:bn_wx} we see that it hides the full covariance matrix of the vectors $x$ in the minibatch, here denoted by $\Sigma_x$. It is a dense covariance matrix, with size $d \times d$.
  3. We did not write the scaling and bias parameters $\gamma$ and $\beta$ since obtaining their derivatives is easier and less interesting.
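As a quick numerical check of remark 2 (a sketch assuming numpy; the variable names and sizes are mine), the columnwise variance of the pre-activations indeed equals the diagonal of $W \Sigma_x W^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 100, 5, 3          # minibatch size, input size, feature size
X = rng.normal(size=(n, d))  # one example per row
W = rng.normal(size=(p, d))
S = X @ W.T                  # linear part, one pre-activation vector per row

# elementwise variance of the components of s
var_elementwise = S.var(axis=0, ddof=1)

# the same variances, hidden inside the dense d x d covariance matrix of x
Sigma = np.cov(X, rowvar=False)
var_from_cov = np.diag(W @ Sigma @ W.T)

print(np.allclose(var_elementwise, var_from_cov))  # True
```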

2 Minibatch vector notation

To clarify things, we consider that the $n$ examples of a minibatch are stacked in design matrices: $X$ of size $n \times d$ for the inputs, and $S$ of size $n \times p$ for the pre-activations:

\begin{equation}
X = \begin{pmatrix} {x^{(1)}}^\top \\ \vdots \\ {x^{(n)}}^\top \end{pmatrix}, \quad \text{and} \quad S = X W^\top
\end{equation}

Using this notation, we can write the result of BN for a column of the matrix $S$ (so the $j$-th component of $s$ for all examples in the minibatch). We denote this column by $s_j$, as opposed to the lines of $S$ that we denote by $s^{(i)}$. Note that $s_j$ does not correspond to an example in the minibatch.

We will go step by step:

The mean of a column $s_j$ is obtained by multiplying it with a vector full of $1$s (denoted by a bold $\mathbf{1}$), and dividing by $n$:

\begin{equation}
\mu_j = \frac{1}{n} \mathbf{1}^\top s_j
\end{equation}

Using this we can write the (unbiased) variance of the column vector $s_j$:

\begin{equation}
\sigma_j^2 = \frac{1}{n-1} \left( s_j - \mu_j \mathbf{1} \right)^\top \left( s_j - \mu_j \mathbf{1} \right)
\end{equation}

We multiplied the mean $\mu_j$ by the vector $\mathbf{1}$ in order to repeat it along all components of $s_j$. We can simplify the expression using the centering matrix $C = I - \frac{1}{n} \mathbf{1} \mathbf{1}^\top$, which is symmetric and idempotent:

\begin{equation}
\sigma_j^2 = \frac{1}{n-1} s_j^\top C s_j
\end{equation}

And so we obtain one column of batch norm:

\begin{equation}
\mathrm{bn}(s_j) = \frac{C s_j}{\sigma_j} = \frac{C s_j}{\sqrt{\frac{1}{n-1} s_j^\top C s_j}}
\label{eq:bn1}
\end{equation}

With this notation, everything is expressed using linear algebra and scalar operations. There are no more elementwise operations, sums, or variances, so it is easier to write derivatives using only elementary calculus rules.
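To convince ourselves that the two ways of writing a batch normalized column agree, here is a small numerical sketch (assuming numpy; the names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
s = rng.normal(size=n)             # one column s_j of the design matrix S

# elementwise formula: subtract the mean, divide by the unbiased std
bn_elementwise = (s - s.mean()) / s.std(ddof=1)

# pure linear algebra formula: C = I - 11^T / n is the centering matrix,
# sigma^2 = s^T C s / (n - 1), and bn(s_j) = C s_j / sigma
C = np.eye(n) - np.ones((n, n)) / n
sigma = np.sqrt(s @ C @ s / (n - 1))
bn_linear_algebra = C @ s / sigma

print(np.allclose(bn_elementwise, bn_linear_algebra))  # True
```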

3 Jacobians and gradients

Writing derivatives of vector functions with respect to vector parameters can be cumbersome, and sometimes ill-defined.

In this note we follow the convention that the gradient of a scalar function with respect to any object has the same shape as this object, so for instance $\nabla_W \ell$ has the same shape as $W$.

We also make heavy use of jacobians, which are matrices of partial derivatives. For a function $f : \mathbb{R}^n \to \mathbb{R}^m$, its jacobian $J_f$ is an $m \times n$ matrix, defined by:

\begin{equation}
\left( J_f \right)_{ij} = \frac{\partial f_i}{\partial x_j}
\end{equation}

Using this notation, the chain rule for a composition of functions $g \circ f$ can be written:

\begin{equation}
J_{g \circ f}(x) = J_g\left( f(x) \right) J_f(x)
\end{equation}

Using this notation it is also easy to write the first order Taylor series expansion of a vector function. The first order term is just the jacobian matrix, multiplied to the right by an increment $\delta$:

\begin{equation}
f(x + \delta) = f(x) + J_f(x) \, \delta + o\left( \| \delta \| \right)
\end{equation}

Since $J_f(x)$ is an $m \times n$ matrix and $\delta$ an $n$-dimensional column vector, $J_f(x) \, \delta$ is an $m$-dimensional column vector, so it lives in the same space as $f(x)$. Everything works out fine!
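A small numerical sketch of this first order expansion (assuming numpy; the helper names are mine), with the jacobian estimated by central finite differences:

```python
import numpy as np

def bn(s):
    # one batch normalized column: subtract the mean, divide by unbiased std
    return (s - s.mean()) / s.std(ddof=1)

def num_jacobian(f, x, eps=1e-6):
    # entry (i, j) is d f_i / d x_j, by central finite differences
    J = np.zeros((f(x).size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

rng = np.random.default_rng(2)
s = rng.normal(size=6)
delta = 1e-5 * rng.normal(size=6)      # small increment

# f(s + delta) ~ f(s) + J_f(s) delta, up to a remainder in o(||delta||)
taylor = bn(s) + num_jacobian(bn, s) @ delta
err = np.max(np.abs(bn(s + delta) - taylor))
print(err < 1e-6)  # True: the remainder is of order ||delta||^2
```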

4 Derivative w.r.t $s_j$

We start by computing the derivative through the BN operation. One of the weaknesses of BN is that each batch normalized feature is a function of all other examples in the minibatch, because of the mean and variance. This is why we focus on a single column of the design matrix $\mathrm{BN}(S)$: all elements of this column only depend on the elements of the corresponding column $s_j$ of $S$.

We write the derivative using the expression in \ref{eq:bn1}. By the quotient rule:

\begin{equation}
\frac{\partial \, \mathrm{bn}(s_j)}{\partial s_j} = \frac{C}{\sigma_j} - \frac{C s_j}{\sigma_j^2} \frac{\partial \sigma_j}{\partial s_j}
\end{equation}

with:

\begin{equation}
\frac{\partial \sigma_j}{\partial s_j} = \frac{s_j^\top C}{(n-1) \, \sigma_j}
\end{equation}

and so:

\begin{equation}
\frac{\partial \, \mathrm{bn}(s_j)}{\partial s_j} = \frac{1}{\sigma_j} \left( C - \frac{C s_j s_j^\top C}{(n-1) \, \sigma_j^2} \right)
\end{equation}

Note that $\frac{\partial \sigma_j}{\partial s_j}$ and $s_j^\top C$ are row vectors.
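We can sanity check this jacobian numerically (a sketch assuming numpy; the names are mine). The closed form used in the code, $\frac{1}{\sigma_j} \left( C - \frac{C s_j s_j^\top C}{(n-1) \sigma_j^2} \right)$, follows from applying the quotient rule to \ref{eq:bn1}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 7
s = rng.normal(size=n)                 # one column s_j

C = np.eye(n) - np.ones((n, n)) / n    # centering matrix
sigma = np.sqrt(s @ C @ s / (n - 1))
Cs = C @ s

# closed form jacobian of bn(s_j) = C s_j / sigma_j
J_closed = (C - np.outer(Cs, Cs) / ((n - 1) * sigma**2)) / sigma

# central finite differences check
def bn(s):
    return (s - s.mean()) / s.std(ddof=1)

eps = 1e-6
J_num = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n)
    e[j] = eps
    J_num[:, j] = (bn(s + e) - bn(s - e)) / (2 * eps)

print(np.allclose(J_closed, J_num))  # True
```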

5 Derivative w.r.t $S$

For an efficient implementation, it is often preferable to work directly with design matrices of size $n \times p$, where $n$ is the size of the minibatch and $p$ is the feature size. Denote by $G$ the matrix whose columns $g_j$ are the backpropagated gradients $\nabla_{\mathrm{bn}(s_j)} \ell$. With some algebraic manipulation we write the gradient for all elements of the design matrix:

\begin{equation}
\nabla_S \ell = \left( C G - C S \, D_{\mathrm{cov}} D_{1/\sigma}^2 \right) D_{1/\sigma}
\end{equation}

where we denoted by $D_{1/\sigma}$ the diagonal matrix of the inverse standard deviations as usually used in BN, and $D_{\mathrm{cov}}$ is a diagonal matrix whose coefficients are the (scalar) covariances $\mathrm{cov}(s_j, g_j) = \frac{1}{n-1} s_j^\top C g_j$ of the elements of $s_j$ and $g_j$.
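Here is a numerical sketch (assuming numpy; the names are mine) checking that the whole matrix expression agrees with column by column backpropagation through the jacobian of a single column:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10, 4
S = rng.normal(size=(n, p))        # pre-activations, one example per row
G = rng.normal(size=(n, p))        # backpropagated gradient d loss / d bn(S)

C = np.eye(n) - np.ones((n, n)) / n
sigma = S.std(axis=0, ddof=1)      # per-column standard deviations

# column by column, using the jacobian of a single column (symmetric matrix)
grad_cols = np.zeros_like(S)
for j in range(p):
    Cs = C @ S[:, j]
    J = (C - np.outer(Cs, Cs) / ((n - 1) * sigma[j]**2)) / sigma[j]
    grad_cols[:, j] = J @ G[:, j]

# whole matrix expression with diagonal matrices
cov = np.einsum('ij,ij->j', C @ S, G) / (n - 1)   # cov(s_j, g_j)
D_inv_sigma = np.diag(1.0 / sigma)
D_cov = np.diag(cov)
grad_matrix = (C @ G - C @ S @ D_cov @ D_inv_sigma @ D_inv_sigma) @ D_inv_sigma

print(np.allclose(grad_cols, grad_matrix))  # True
```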

6 Derivative w.r.t one line of the weight matrix

Using the fact that $S = X W^\top$, we write $s_j = X w_j$, where $w_j$ is a line of the weight matrix $W$ (that we transpose to obtain a column vector). We can now write the derivative using the chain rule:

\begin{equation}
\frac{\partial \, \mathrm{bn}(s_j)}{\partial w_j} = \frac{\partial \, \mathrm{bn}(s_j)}{\partial s_j} \frac{\partial s_j}{\partial w_j} = \frac{1}{\sigma_j} \left( C - \frac{C s_j s_j^\top C}{(n-1) \, \sigma_j^2} \right) X
\end{equation}

7 Derivative w.r.t the whole matrix $W$

Now we can stack all lines of the matrix $W$ in order to get the derivative for the whole weight matrix:

\begin{equation}
\nabla_W \ell = \left( \nabla_S \ell \right)^\top X
\end{equation}
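We can check $\nabla_W \ell = \left( \nabla_S \ell \right)^\top X$ numerically against finite differences of a surrogate scalar loss (a sketch assuming numpy; the surrogate loss and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, p = 12, 5, 3
X = rng.normal(size=(n, d))
W = rng.normal(size=(p, d))
G = rng.normal(size=(n, p))        # stand-in for d loss / d bn(S)

def loss(W):
    # surrogate scalar loss whose gradient w.r.t bn(S) is exactly G
    S = X @ W.T
    BN = (S - S.mean(axis=0)) / S.std(axis=0, ddof=1)
    return np.sum(G * BN)

# backprop: first d loss / d S via the whole matrix formula,
# then d loss / d W = (d loss / d S)^T X
S = X @ W.T
C = np.eye(n) - np.ones((n, n)) / n
sigma = S.std(axis=0, ddof=1)
cov = np.einsum('ij,ij->j', C @ S, G) / (n - 1)
grad_S = (C @ G - (C @ S) * (cov / sigma**2)) / sigma   # broadcasts per column
grad_W = grad_S.T @ X

# central finite differences on every entry of W
eps = 1e-6
num = np.zeros_like(W)
for i in range(p):
    for j in range(d):
        E = np.zeros_like(W)
        E[i, j] = eps
        num[i, j] = (loss(W + E) - loss(W - E)) / (2 * eps)

print(np.allclose(grad_W, num, atol=1e-6))  # True
```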

8 Derivative w.r.t the input of the batch normalized layer

Using \ref{eq:bn1} and $S = X W^\top$, we can write, using the chain rule:

\begin{equation}
\nabla_X \ell = \left( \nabla_S \ell \right) W
\end{equation}
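The same finite differences sanity check works for the input (a numpy sketch with a surrogate loss; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, p = 12, 5, 3
X = rng.normal(size=(n, d))
W = rng.normal(size=(p, d))
G = rng.normal(size=(n, p))        # stand-in for d loss / d bn(S)

def loss(X):
    # surrogate scalar loss whose gradient w.r.t bn(S) is exactly G
    S = X @ W.T
    BN = (S - S.mean(axis=0)) / S.std(axis=0, ddof=1)
    return np.sum(G * BN)

# backprop: d loss / d S, then d loss / d X = (d loss / d S) W
S = X @ W.T
C = np.eye(n) - np.ones((n, n)) / n
sigma = S.std(axis=0, ddof=1)
cov = np.einsum('ij,ij->j', C @ S, G) / (n - 1)
grad_S = (C @ G - (C @ S) * (cov / sigma**2)) / sigma
grad_X = grad_S @ W

# central finite differences on every entry of X
eps = 1e-6
num = np.zeros_like(X)
for i in range(n):
    for j in range(d):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (loss(X + E) - loss(X - E)) / (2 * eps)

print(np.allclose(grad_X, num, atol=1e-6))  # True
```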

9 Wrap-up and acknowledgements

Now you have everything you need!

Special thanks to César Laurent for the help and proofreading.
