Transcription of: Initializing and Accessing Bias with Keras
Hey, what's going on, everyone! In this video, we're going to see how we can initialize and access biases in a neural network in code with Keras. So let's get to it.

Alright, I'm here in our Jupyter notebook, and I just have this arbitrary small neural network with one hidden dense layer containing 4 nodes and an output layer with 2 nodes. Everything here is pretty standard, and you should be familiar with almost all of it based on previous videos in this playlist. The only new items are in this hidden layer, where we have two parameters we haven't yet seen before: use_bias and bias_initializer. Now, we discussed in an earlier video what exactly bias is in a neural network, and now we're going to see how biases work within our Keras model.

In Keras, we specify whether or not we want a given layer to include biases for all of its neurons with the use_bias parameter. If we do want to include bias, we set the parameter equal to True; otherwise, we set it to False. The default value is True, so if we don't specify this parameter at all, the layer will include biases by default.

Next, we have the bias_initializer parameter. This determines how the biases are initialized. This initialization process is really similar to the weight initialization process that we talked about in an earlier video; it just determines how the biases are first set before we start training the model. Here, we're setting this parameter equal to the string 'zeros'. This means that all four biases in this layer will be set to a value of zero before the model starts training. This is actually the default value for bias_initializer. If we instead wanted to change this so that the biases were set to some other type of values, like all ones or random numbers, then we can. Keras has a list of initializers that it supports, and it's actually the same list of initializers we talked about when we discussed weight initialization, so we could even initialize the biases with Xavier initialization if we wanted.

After we initialize these biases, we can check them out and look at their values by calling model.get_weights(). This is going to give us all the weights and all the biases for each layer in the model. So we see here that we have these randomly initialized weights in the weight matrix for the first hidden layer, and we also have the bias vector containing four zeros, corresponding to the bias term for each node in the layer, for which we specified zero initialization. Similarly, we have the weight matrix corresponding to the output layer, which is again followed by a bias vector that contains two zeros, corresponding to the bias term for each node in this layer. Remember, we didn't set any bias parameters for the output layer, but because Keras uses bias and initializes the bias terms with zeros by default, we get this for free.

Now, after being initialized, these biases and weights will be updated during training as the model learns the optimized values for them. If we were to train this model and then call get_weights() again, the values for these weights and biases would likely be very different.

So, not bad! Our models have been using bias this whole time without any effort on our side, since by default Keras is using bias and initializing the bias terms with zeros.
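Here's a minimal sketch of what the notebook code might look like, assuming TensorFlow's Keras API. The input shape and activation functions are illustrative assumptions, not something specified above:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small network like the one described: one hidden dense layer
# with 4 nodes and an output layer with 2 nodes.
model = Sequential([
    Dense(
        4,
        input_shape=(1,),          # assumed input shape, for illustration
        activation='relu',         # assumed activation
        use_bias=True,             # include a bias term per node (the default)
        bias_initializer='zeros',  # start all biases at 0 (also the default)
    ),
    Dense(2, activation='softmax'),  # no bias parameters set: defaults apply
])

# get_weights() returns, for each layer, the weight matrix
# followed by its bias vector.
for arr in model.get_weights():
    print(arr.shape)
    print(arr)
```

If we wanted the biases to start at other values, we could instead pass, for example, bias_initializer='ones' or bias_initializer='glorot_uniform' for Xavier initialization.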
Now, we showed this for dense layers, but the same is true for other layer types as well, like convolutional layers, for example. After first learning about bias on a fundamental level and now seeing it applied in code, what are your thoughts? I hope to hear from you in the comments. Thanks for watching, and see you next time!