How Does Generative AI Work?
Input Text
The model receives a text prompt (e.g., “Write a summary of this article”).
Tokenization
The text is broken down into smaller units called tokens. These could be words, parts of words, or even individual characters; on average, a token is about three-quarters of a word.
Contextual Understanding
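A minimal sketch of how a tokenizer might split a prompt into subword tokens. The vocabulary and the greedy longest-match rule here are illustrative assumptions; real tokenizers (e.g., byte-pair encoding) learn their vocabulary from large corpora.

```python
# Hypothetical subword vocabulary (real models learn theirs from data).
VOCAB = {"write", "a", "summ", "ary", "of", "this", "article"}

def tokenize(text):
    """Greedy longest-match subword tokenization (illustrative only)."""
    tokens = []
    for word in text.lower().split():
        start = 0
        while start < len(word):
            # Take the longest vocabulary entry matching from `start`.
            for end in range(len(word), start, -1):
                if word[start:end] in VOCAB:
                    tokens.append(word[start:end])
                    start = end
                    break
            else:
                tokens.append(word[start])  # fall back to single characters
                start += 1
    return tokens

print(tokenize("Write a summary of this article"))
# ['write', 'a', 'summ', 'ary', 'of', 'this', 'article']
```

Note how "summary" is split into two subword tokens, while common words map to a single token each.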
The model analyzes how the tokens relate to one another — in transformer-based LLMs, through a mechanism called self-attention. This helps the model interpret the meaning of the input and anticipate what should come next.
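The relationship-weighing step can be sketched as scaled dot-product attention, the core of self-attention in transformers. The 2-d token vectors below are hypothetical stand-ins for learned embeddings.

```python
import math

def softmax(xs):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each token's output is a
    probability-weighted mix of all tokens' value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Score this token against every token in the sequence.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to those weights.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three tokens with hypothetical 2-d vectors (learned in a real model).
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(vecs, vecs, vecs)
```

Each row of `ctx` is a context-aware representation of one token, mixing in information from the tokens it attends to most.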
Prediction
Based on the input and context, the model predicts the next token. It does this by assigning a probability to every token in its vocabulary and then selecting one — either the single most likely token, or a random draw weighted by those probabilities.
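The probability-weighted selection can be sketched with a softmax over the model's raw scores (logits). The candidate tokens and their scores below are illustrative assumptions, not real model outputs.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens and the scores a model might assign.
candidates = ["summary", "story", "poem", "list"]
logits = [3.1, 1.2, 0.4, 0.1]

probs = softmax(logits)
# Greedy decoding: always pick the most probable token.
greedy = candidates[probs.index(max(probs))]
# Sampling: draw a token in proportion to its probability.
sampled = random.choices(candidates, weights=probs, k=1)[0]

print(greedy)   # summary
```

Greedy decoding is deterministic; sampling introduces the variation that makes repeated generations differ.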
Output Generation
The LLM then generates a coherent response by repeating the prediction step: each newly chosen token is appended to the context, and the loop continues until the response is complete.
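The full pipeline can be sketched as an autoregressive loop. The tiny lookup-table "model" below is a hypothetical stand-in for the real network, which conditions on the entire token sequence rather than just the previous token.

```python
# Hypothetical next-token table; a real LLM computes this prediction
# from the whole context with a neural network.
NEXT = {
    "<start>": "the",
    "the": "model",
    "model": "writes",
    "writes": "a",
    "a": "summary",
    "summary": "<end>",
}

def generate(max_tokens=10):
    """Autoregressive loop: predict a token, append it, repeat."""
    tokens = ["<start>"]
    while len(tokens) < max_tokens:
        nxt = NEXT.get(tokens[-1], "<end>")
        if nxt == "<end>":  # stop when the model emits an end marker
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())
# the model writes a summary
```

The stopping condition mirrors real systems, which halt on a special end-of-sequence token or a length limit.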