FACE RECOGNITION WITH TRANSFER LEARNING
Facial recognition is a way of identifying a human face through technology. A facial recognition system compares the captured face with a database of known faces to find a match. It has all kinds of commercial applications and can be used for everything from surveillance to marketing.
Machine Learning works behind these face recognition systems. Convolutional Neural Networks (CNNs) are the standard technique for image classification and image recognition; they are designed to process data through multiple layers of arrays.
In Machine Learning we have to collect data and then feed it to the model so that the model learns from the provided data.
We have two challenges:
1. CNNs require huge amounts of data, and we don't get so many images.
2. Training our model on huge data consumes a lot of time, CPU, RAM, etc.
This is where Transfer Learning helps: once a model has already been trained on that huge data, we do not need to repeat the complete training, which takes a lot of time and resources.
In Transfer Learning we retain the pre-trained model, but not completely: we freeze the pre-trained layers, add some new layers on top, and then train only these new layers for the new face.
In this Machine Learning program, I used the VGG16 architecture of CNN with pre-trained weights from the ImageNet dataset.
You can find the datasets in my GitHub repository.
In our code we first import all the required libraries and modules. Keras is one of the important Python libraries that we use to train and test the model; it uses TensorFlow behind the scenes. We also import the VGG16 function, since we are using the VGG architecture.
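As a rough sketch (the image size and variable names here are my own assumptions, not necessarily the exact code in the repository), the imports and the frozen VGG16 base could look like this:

from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

IMAGE_SIZE = [224, 224]   # VGG16 expects 224x224 RGB input

# Load VGG16 with ImageNet weights and without its top classifier layers
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)

# Freeze the pre-trained layers so only the newly added layers get trained
for layer in vgg.layers:
    layer.trainable = False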
1. We use the Dense() function to add new hidden layers of neurons.
2. We use the Flatten() function to flatten the image features into 1D.
3. We use the Model() function to group our layers into a single object.
4. We use the compile() function to compile our model by specifying the optimizer and the loss function.
5. We use the summary() function to see a summary of our model.
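Continuing the sketch above, using these functions to add the new layers and compile the model might look roughly like this (the number of output classes, the optimizer and the loss are illustrative assumptions):

# Add new layers on top of the frozen VGG16 base
x = Flatten()(vgg.output)                       # flatten the conv features to 1D
prediction = Dense(2, activation='softmax')(x)  # one output neuron per person (assumed 2 faces)

# Group the frozen base and the new layers into a single model object
model = Model(inputs=vgg.input, outputs=prediction)

# Compile the model with an optimizer and a loss for multi-class classification
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.summary()   # print a summary of all layers and parameters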
Loading Data and Training Model:
Here, before loading the data, we perform augmentation on our images to increase the amount of training data. We use the ImageDataGenerator() function to generate these extra images.
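A possible augmentation setup is sketched below; the folder paths 'faces/train' and 'faces/test' and the parameter values are hypothetical and should match your own dataset layout:

from keras.preprocessing.image import ImageDataGenerator

# Augment the training images (rescale, shear, zoom, flip) to get more data
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

# Load the images from disk, one sub-folder per person
training_set = train_datagen.flow_from_directory('faces/train',
                                                 target_size=(224, 224),
                                                 batch_size=32,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory('faces/test',
                                            target_size=(224, 224),
                                            batch_size=32,
                                            class_mode='categorical')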
Now we train our model using the fit_generator() function from the Keras library to fit the data to the model; this is also where we give the number of epochs. Once training is done, we save our model with model.save("filename").
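Continuing the sketch, the training and saving step could look roughly like this (the epoch count and file name are illustrative assumptions):

# Train only the new layers; the frozen VGG16 layers stay unchanged
history = model.fit_generator(training_set,
                              validation_data=test_set,
                              epochs=5,
                              steps_per_epoch=len(training_set),
                              validation_steps=len(test_set))

# Save the trained model so it can be reloaded later without retraining
model.save('face_recognition_model.h5')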
Finally, we test our model with images that are in neither the training set nor the test set.
After importing the required libraries, we test with the predict() function and then decode the output, because the raw output of predict() is an array of class probabilities that is not directly human-readable.
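A minimal sketch of this testing step, with a hypothetical image path and class names, is:

import numpy as np
from keras.models import load_model
from keras.preprocessing import image

model = load_model('face_recognition_model.h5')

# Load a fresh image that was in neither the train set nor the test set
img = image.load_img('new_face.jpg', target_size=(224, 224))
x = image.img_to_array(img) / 255.0   # same rescaling as during training
x = np.expand_dims(x, axis=0)         # add the batch dimension

pred = model.predict(x)                    # raw class probabilities
class_names = ['person_1', 'person_2']     # assumed label order from the generator
print(class_names[np.argmax(pred)])        # decode the prediction to a readable name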