Pre-Training (Initializing the Network)
The 'P' in GPT stands for 'Pre-trained': GPT is short for Generative Pre-trained Transformer.
GPT models are pre-trained on a corpus of unlabeled text using a language-modeling objective. Put simply, this means we train the model by (i) sampling some text from the dataset and (ii) training the model to predict the next token (roughly, the next word).
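To make the objective concrete, here is a minimal sketch in PyTorch of next-token prediction with a cross-entropy loss. The tiny embedding-plus-linear "model", the vocabulary size, and the random token IDs are all illustrative stand-ins, not GPT's actual architecture.

```python
import torch
import torch.nn.functional as F

vocab_size = 50257                       # illustrative; matches the GPT-2 BPE vocabulary
model = torch.nn.Sequential(             # toy stand-in for a real Transformer
    torch.nn.Embedding(vocab_size, 128),
    torch.nn.Linear(128, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 16))   # (batch, sequence) of token IDs
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

logits = model(inputs)                           # (batch, seq-1, vocab_size)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                  # gradients later used to update the weights
```

The key idea is in the `inputs, targets` line: the targets are simply the input sequence shifted by one position, so every position in the text supplies a training example for free.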
Q: When we 'train' on images, is that what we are doing? Are we sort of seeing the image through each filter and recording the result?
When training a Convolutional Neural Network (CNN) on images, you could say that the network learns to "see" each image through various filters and to record the results. During training, the network adjusts the filter values to better detect meaningful features or patterns in the input images that are relevant to the task at hand, such as image classification or object detection.
When an image is passed through a convolutional layer, the filters slide over the image and apply the convolution operation. This process produces feature maps, which are essentially new representations of the image that highlight the presence of specific features detected by each filter.
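To see what a feature map looks like in code, here is a minimal PyTorch sketch; the input size, channel count, and kernel size are illustrative assumptions, not fixed choices.

```python
import torch

image = torch.randn(1, 1, 28, 28)        # (batch, channels, height, width): one grayscale image
conv = torch.nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

feature_maps = conv(image)               # (1, 8, 28, 28): one feature map per filter
print(feature_maps.shape)
# Each of the 8 maps is a new representation of the image that
# responds strongly wherever its filter's pattern appears.
```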
As the network is trained, it updates the filter values to minimize the loss function, which measures the difference between the network's predictions and the true labels or targets. By doing so, the CNN learns to recognize and extract relevant features from the images more effectively.
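Here is a sketch of one such training step in PyTorch, using a toy CNN classifier and randomly generated stand-in images and labels; every layer size and hyperparameter is an assumption chosen for illustration.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, kernel_size=3, padding=1),  # the filters being learned
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(8 * 28 * 28, 10),                 # 10 output classes, illustrative
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 1, 28, 28)       # batch of 4 stand-in images
labels = torch.randint(0, 10, (4,))      # stand-in class labels

logits = model(images)
loss = F.cross_entropy(logits, labels)   # difference between predictions and true labels

optimizer.zero_grad()
loss.backward()                          # gradients with respect to the filter values
optimizer.step()                         # nudge the filters to reduce the loss
```

Repeating this step over many batches is what gradually turns the initially random filters into detectors for edges, textures, and eventually task-relevant shapes.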
In summary, training a CNN adjusts the filter values so that the resulting feature maps highlight the features that matter for the task, which lets the network recognize and extract meaningful patterns from its input images more effectively.