Building the Model Itself

Transcript

Hello, everyone, and welcome to one of the most interesting and important videos in the course. In this one, we are going to build our image search model. The architecture we are going to build is represented by the image right here. This image comes from a GitHub repository where you can learn a lot about CIFAR-10 classification; if you want to learn more about it, the link is in the resources of this lesson. Let's remember how image search is done.

We create an architecture for classic image classification, train it, and then use its pre-trained layers as a vector feature representation of each image. Based on a vector similarity function (cosine distance or Hamming distance, for example, or any other), we can determine how close, or how similar, two images are. Okay, just to mention: this style of implementation is not the only way to build models using the TensorFlow library. If you have an easier or more practical way, by all means use it. To start the model class, write tf.reset_default_graph(). This function will reset the TensorFlow graph each time the model is defined.
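As a rough sketch, the start of the model class could look like this; the class and argument names are assumptions based on how the transcript uses them later, and the body gets filled in step by step below:

```python
import tensorflow as tf  # TF 1.x-style API, as used throughout this course


class ImageSearchModel:
    # Argument names are assumptions, inferred from the test call at the
    # end of this lesson: ImageSearchModel(0.001, 32, 32, 10).
    def __init__(self, learning_rate, image_width, image_height,
                 number_of_classes=10):
        # Reset the default graph so that re-defining the model (for
        # example, re-running a notebook cell) does not nest graphs.
        tf.reset_default_graph()
        # ... inputs, conv blocks, dense blocks, logits, loss: see below.
```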

By doing this, we escape the possibility of having nested model graphs. Now that we have a clean TensorFlow graph, let's define our model inputs. Write self.inputs, self.targets, self.dropout_rate = model_inputs(...). This function takes only one argument, and that is the image size. Until now, we haven't defined any function for image normalization, so every pixel value is between 0 and 255.
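The model_inputs helper comes from the course code; below is a minimal sketch of what it might return. The placeholder names, dtypes, and shapes are my assumptions, with image_size taken as a (height, width) pair:

```python
def model_inputs(image_size):
    # Batch of RGB images with raw 0-255 pixel values (normalized later).
    inputs = tf.placeholder(tf.float32,
                            shape=[None, image_size[0], image_size[1], 3],
                            name='inputs')
    # One-hot encoded class labels.
    targets = tf.placeholder(tf.float32, shape=[None, None], name='targets')
    # Dropout rate, fed in at training time.
    dropout_rate = tf.placeholder(tf.float32, name='dropout_rate')
    return inputs, targets, dropout_rate
```

Inside the constructor this would be unpacked as self.inputs, self.targets, self.dropout_rate = model_inputs((image_height, image_width)).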

To overcome this, we will introduce a batch normalization layer before the first convolutional layer. Now we will define conv block one: write conv_block_1, self.conv_1_features = conv_block(...). As its first argument, conv_block takes the inputs, which here are the normalized images. The next argument is the number of filters, and as you can see in the image, the number of filters for this layer is 64; the kernel size is 3x3 and the stride is 1x1. Set the padding to 'SAME' and the activation to relu. We use max pooling in this block, so set it to True, and leave batch normalization set to True. Because most parameters are the same, just copy this block, paste it, and change the names and arguments: change inputs to conv_block_1 and filters from 64 to 128; everything else stays the same. Paste the conv block again, change the names to match conv block three, and for the arguments set inputs to conv_block_2 and filters to 256; this time we also need to change the kernel size to 5x5.
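The conv_block helper is likewise defined by the course; here is a sketch of a version consistent with the arguments listed above, assuming it returns both the block output and the raw convolutional features (which the model stores on self for similarity search later):

```python
def conv_block(inputs, number_of_filters, kernel_size, strides=(1, 1),
               padding='SAME', activation=tf.nn.relu, max_pool=True,
               batch_norm=True):
    # Convolutional layer; its activations serve as this block's features.
    conv_features = tf.layers.conv2d(inputs, filters=number_of_filters,
                                     kernel_size=kernel_size,
                                     strides=strides, padding=padding,
                                     activation=activation)
    layer = conv_features
    if max_pool:
        layer = tf.layers.max_pooling2d(layer, pool_size=(2, 2),
                                        strides=(2, 2), padding='SAME')
    if batch_norm:
        layer = tf.layers.batch_normalization(layer)
    return layer, conv_features
```

Inside the constructor, the normalization layer and the first conv block would then read roughly:

```python
# Inside __init__ (sketch): normalize raw pixels, then stack the blocks.
normalized_images = tf.layers.batch_normalization(self.inputs)
conv_block_1, self.conv_1_features = conv_block(
    inputs=normalized_images, number_of_filters=64, kernel_size=(3, 3),
    strides=(1, 1), padding='SAME', activation=tf.nn.relu,
    max_pool=True, batch_norm=True)
```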

Paste everything one more time to create conv block four. As we did with the previous blocks, change the names to match conv block four, change inputs to conv_block_3 (the previous layer), set filters to 512, and keep the kernel size at 5x5. Before we apply the dense blocks, we need to flatten the convolutional features: write flat_layer = tf.layers.flatten(...) and provide the last convolution block, in our case conv_block_4. This will reshape the convolutional features into one single vector. Now define dense block one in the same way we did conv block one: write dense_block_1, dense_1_features = dense_block(...). For the arguments, set inputs to flat_layer and units to 128, as you can see in the model picture; the activation is relu. For dropout, use self.dropout_rate, and set batch normalization to True.
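A matching sketch of the dense_block helper, again assuming it returns both the block output and the raw dense features (since the transcript later exposes those features on self):

```python
def dense_block(inputs, units, activation=tf.nn.relu, dropout_rate=None,
                batch_norm=True):
    # Fully connected layer; its activations serve as this block's features.
    dense_features = tf.layers.dense(inputs, units=units,
                                     activation=activation)
    layer = dense_features
    if dropout_rate is not None:
        layer = tf.layers.dropout(layer, rate=dropout_rate)
    if batch_norm:
        layer = tf.layers.batch_normalization(layer)
    return layer, dense_features
```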

Copy the dense block one part of the code and paste it; change the names to match dense block two, and set inputs to dense_block_1 and units to 256; the rest stays the same. Okay, paste it again and change everything to match dense block three; for the arguments, set inputs to dense_block_2 and units to 512. Paste everything one more time for the last dense block: rename everything to match dense block four, and set inputs to dense_block_3 and units to 1024. In front of dense_2_features, dense_3_features, and dense_4_features, add self. so we can access them from outside of the class. And last, we define the logits layer: for inputs, set dense_block_4, and set units equal to the number of classes. Because we want the logits to be the raw output of this layer, set activation to None.
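Put together, the flattening step, the four dense blocks, and the logits layer inside the constructor might look like this sketch:

```python
# Inside __init__ (sketch), after the fourth conv block:
flat_layer = tf.layers.flatten(conv_block_4)

dense_block_1, dense_1_features = dense_block(
    inputs=flat_layer, units=128, activation=tf.nn.relu,
    dropout_rate=self.dropout_rate, batch_norm=True)
dense_block_2, self.dense_2_features = dense_block(
    inputs=dense_block_1, units=256, activation=tf.nn.relu,
    dropout_rate=self.dropout_rate, batch_norm=True)
dense_block_3, self.dense_3_features = dense_block(
    inputs=dense_block_2, units=512, activation=tf.nn.relu,
    dropout_rate=self.dropout_rate, batch_norm=True)
dense_block_4, self.dense_4_features = dense_block(
    inputs=dense_block_3, units=1024, activation=tf.nn.relu,
    dropout_rate=self.dropout_rate, batch_norm=True)

# Logits layer: raw, unactivated class scores.
logits = tf.layers.dense(dense_block_4, units=number_of_classes,
                         activation=None)
```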

Next, define a variable called predictions, equal to tf.nn.softmax of the logits. Lastly, define the loss function and optimizer of the model: set self.loss, self.optimizer = opt_loss(...). First we provide the logits as the first argument; then targets equals self.targets, which is our placeholder; and learning_rate equals learning_rate, which is an argument of the whole class. With this, we are done with the whole model architecture. Before we go to the next lesson, let's test it with some arbitrary parameters to see if everything works correctly.
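A sketch of the opt_loss helper (the choice of Adam here is my assumption; the course's own opt_loss may use a different optimizer):

```python
def opt_loss(logits, targets, learning_rate):
    # Mean softmax cross-entropy between logits and one-hot targets.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=targets,
                                                   logits=logits))
    # Optimizer choice is an assumption; Adam is a common default.
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    return loss, optimizer
```

And then, at the end of the constructor:

```python
# Inside __init__ (sketch): predictions, then loss and optimizer.
predictions = tf.nn.softmax(logits, name='predictions')
self.loss, self.optimizer = opt_loss(logits=logits, targets=self.targets,
                                     learning_rate=learning_rate)
```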

So write model = ImageSearchModel and set the arguments to 0.001, 32, 32, and 10; a sketch of this call is shown below. Execute the cell. If everything runs correctly, you're all set to proceed to the next one.
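The smoke test could then be written as follows, with the argument names assumed as in the class sketch above:

```python
# Arbitrary test parameters: learning rate 0.001, 32x32 images, 10 classes.
model = ImageSearchModel(learning_rate=0.001, image_width=32,
                         image_height=32, number_of_classes=10)
```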

And that's it. If you have any questions or comments, please post them in the comment section. Otherwise, see you in the next tutorial.