Skip-Gram Model

 

Introduction

Natural Language Processing (NLP) is a popular field of Artificial Intelligence. It processes human language, as text or speech, so that computers can work with it much as humans do. Humans produce a huge amount of text in loosely structured formats, which makes it hard for a machine to extract meaning from raw text.

To let a machine learn from raw text, we need to transform the data into a vector format, which computers can then process easily. This transformation of raw text into vectors is known as word representation.
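As an illustrative sketch (the sentence and vocabulary here are made-up examples, not from the article), raw words can be turned into one-hot vectors like this:

```python
# Minimal sketch of one-hot word representation over a tiny vocabulary.
sentence = "the cat climbed a tree"              # illustrative corpus
vocab = sorted(set(sentence.split()))            # build the vocabulary
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return a one-hot list for `word` over the vocabulary."""
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

print(vocab)
print(one_hot("cat"))
```

Each word becomes a vector with a single 1 at its vocabulary index, which is the input format the skip-gram network described below consumes.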

Because the vocabulary of any language is large and cannot be labeled by hand, we need unsupervised learning methods that learn the context of a word on their own. Skip-gram is one such unsupervised method: it is used to discover the words most closely related to a given word. In this article, we will look at the Skip-gram model in detail.
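Skip-gram learns relatedness from words that co-occur near each other. A minimal sketch of how (target, context) training pairs are extracted with a sliding context window (the sentence and window size are illustrative assumptions):

```python
def skipgram_pairs(tokens, window=2):
    """Collect (target, context) pairs within `window` words of each target."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:                      # skip the target itself
                pairs.append((target, tokens[j]))
    return pairs

tokens = "the cat climbed a tree".split()
print(skipgram_pairs(tokens, window=1))
```

Every pair asks the model to predict a nearby context word from the target word, which is exactly the task the next section describes.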

Description

The Skip-gram model tries to predict the surrounding context words given a target word, reversing the usual roles of target and context words. In this setting:

  • The target word is provided at the input.
  • The hidden layer remains the same.
  • The output layer of the neural network is replicated multiple times to accommodate the chosen number of context words.
  • Consider the example with “cat” and “tree” as context words and “climbed” as the target word.
  • The input vector in the skip-gram model will be [0 0 0 1 0 0 0 0]^T.
  • The two output layers will have [0 1 0 0 0 0 0 0]^T and [0 0 0 0 0 0 0 1]^T as target vectors, respectively.
  • Two such target vectors are used in the current example, each compared against one vector of probabilities produced by the output layer.
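The steps above can be sketched as a toy forward pass. The vocabulary below is an assumption chosen so that “cat”, “climbed”, and “tree” land at the indices the article's vectors imply, and the weights are random stand-ins for learned parameters:

```python
import numpy as np

# Assumed vocabulary ordering consistent with the article's one-hot vectors:
# "cat" at index 1, "climbed" at index 3, "tree" at index 7.
vocab = ["a", "cat", "chased", "climbed", "dog", "saw", "the", "tree"]
V, N = len(vocab), 3                         # vocabulary size, hidden size

rng = np.random.default_rng(0)
W_in = rng.normal(size=(V, N))               # input -> hidden weights
W_out = rng.normal(size=(N, V))              # hidden -> output weights

x = np.zeros(V)
x[vocab.index("climbed")] = 1                # one-hot target word [0 0 0 1 0 0 0 0]^T

h = x @ W_in                                 # hidden layer: selects one row of W_in
scores = h @ W_out                           # one raw score per vocabulary word
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary

# During training, this probability vector is compared against each context
# word's one-hot target ("cat" and "tree" here) to compute the loss.
print(probs.round(3))
```

Note that multiplying a one-hot vector by W_in just selects one row, which is why the hidden layer is the same regardless of how many context words are predicted.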
