GistNoesis
today at 10:23 AM
Typically the input of an LLM is a sequence of tokens, i.e. a list of integers between 0 and the vocabulary size.
The sequence is of variable length. This was one of the early problems in sequence modelling: how to deal with inputs of varying length in neural networks. There is a lot of literature about it.
This is the source of plenty of silent problems of various kinds:
- out-of-distribution data (short and long sequences may not get the same performance)
- quadratic behavior due to data copy
- normalization issues
- memory fragmentation
- bad alignment
One way of dealing with it is to treat a variable-length sequence as a fixed-size sequence, padding the empty elements with zeros and using a "mask" to specify which elements should be ignored during the operations.
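The padding-plus-mask idea above can be sketched in a few lines of NumPy (the function name and pad value are illustrative, not from any particular library):

```python
import numpy as np

def pad_and_mask(sequences, max_len, pad_id=0):
    """Pad variable-length token lists to a fixed size and build a mask.

    Returns (tokens, mask) where mask is 1 for real tokens, 0 for padding.
    """
    batch = np.full((len(sequences), max_len), pad_id, dtype=np.int64)
    mask = np.zeros((len(sequences), max_len), dtype=np.int64)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq
        mask[i, :len(seq)] = 1
    return batch, mask

tokens, mask = pad_and_mask([[5, 8, 2], [7]], max_len=4)
# tokens -> [[5, 8, 2, 0], [7, 0, 0, 0]]
# mask   -> [[1, 1, 1, 0], [1, 0, 0, 0]]
```

Downstream operations (attention, pooling, loss) then multiply by or condition on the mask so the zero padding never contributes.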
----
Concerning an embedding having multiple semantic meanings: it is best-effort, and all combinations of behavior can occur.
The embedding layer is typically the first layer, and it converts the integer of the token into a vector of floating-point numbers of the embedding dimension. It tries its best to separate the meanings to make the task of the subsequent layers of the neural network easier. It's shovelling the shit it can't handle down the road for the next layers to deal with.
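Mechanically, the embedding layer is just a lookup table indexed by token id — a minimal sketch (table contents are random here, standing in for trained weights):

```python
import numpy as np

vocab_size, embed_dim = 1000, 8
rng = np.random.default_rng(0)

# The embedding layer is a lookup table: one row of floats per token id.
embedding_table = rng.normal(size=(vocab_size, embed_dim))

token_ids = np.array([5, 8, 2])        # a short token sequence
vectors = embedding_table[token_ids]   # shape (3, embed_dim)
```

Every occurrence of token 5 gets the exact same row, regardless of context — disambiguation is left to later layers.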
For experiments you can try to merge two tokens into one, or into the <unknown> token, in order to free a token for special use without having to increase the size of the vocabulary.
Embeddings can sometimes be the average of the disambiguated embeddings. Sometimes they can be their own thing.
In addition to embeddings, you can often look at the inner representation at a specific depth of the neural network. There, after a few layers, the representations have usually been disambiguated based on the context.
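Inspecting a representation at a given depth just means keeping the activations after each layer instead of only the final one. A toy sketch (random affine + tanh layers standing in for transformer blocks — purely illustrative; real libraries expose the same idea, e.g. Hugging Face transformers via `output_hidden_states=True`):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, depth = 8, 4

# Toy stand-ins for transformer blocks: random affine maps + tanh.
weights = [rng.normal(size=(dim, dim)) / np.sqrt(dim) for _ in range(depth)]

def forward_with_hidden_states(x):
    """Run the layer stack, keeping the representation after every layer."""
    hidden_states = [x]
    for W in weights:
        x = np.tanh(x @ W)
        hidden_states.append(x)
    return hidden_states

h = forward_with_hidden_states(rng.normal(size=(3, dim)))  # 3 tokens
mid = h[2]  # representation at depth 2
```

In a real model these intermediate representations mix context across tokens, which is why they are more disambiguated than the raw embeddings.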
The last layer is also especially interesting because it is the one used to project back to the original token space. Sometimes we force its weights to be shared with the embedding layer. This projection layer usually can't use context, so it must contain within itself all the information necessary to map very simply back to token space. This last representation is often used as a full-sequence representation vector for subsequent, more specialized training tasks.
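The weight sharing mentioned above (often called weight tying) can be sketched like this: one matrix serves both as the input lookup table and, transposed, as the output projection (random values stand in for trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 100, 8

# One shared matrix: rows are input embeddings, and the same matrix
# projects a hidden state back to logits over the vocabulary.
E = rng.normal(size=(vocab_size, embed_dim))

token_id = 42
h = E[token_id]        # input embedding of the token
logits = E @ h         # output projection with the tied weights

# With tied weights, a token's own logit for a hidden state equal to
# its embedding is the squared norm of that embedding.
own_logit = logits[token_id]
```

Tying halves the number of parameters in these two layers and couples the input and output token geometry.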
Embedding weights are fixed after training, but in-context learning occurs during inference. The early tokens of the prompt help disambiguate the later tokens more easily. For example, <paragraph about money> bank vs <paragraph about landscape> bank vs bank will have the same input embedding for the bank token, but one or two layers down the line, the associated representations will be very different and close to the appropriate meaning.