From 5b3c8d7ad108d4a65b4a946d719d242fe9eae308 Mon Sep 17 00:00:00 2001
From: SAM <60264918+SAM-DEV007@users.noreply.github.com>
Date: Fri, 31 May 2024 15:14:09 +0530
Subject: [PATCH] Update Transformers.md

Added information about Model Architecture
---
 contrib/machine-learning/Transformers.md | 59 +++++++++++++++++++++++-
 1 file changed, 57 insertions(+), 2 deletions(-)

diff --git a/contrib/machine-learning/Transformers.md b/contrib/machine-learning/Transformers.md
index 7bcc102..49a1b97 100644
--- a/contrib/machine-learning/Transformers.md
+++ b/contrib/machine-learning/Transformers.md
@@ -4,9 +4,53 @@

A transformer is a deep learning architecture developed by Google and based on the multi-head attention mechanism. Before transformers, predecessors of the attention mechanism were added to gated recurrent neural networks, such as LSTMs and gated recurrent units (GRUs), which processed datasets sequentially. Dependency on previous token computations prevented them from parallelizing the attention mechanism.

-## Key Concepts
+## Model Architecture

*Figure: Transformer model architecture*

-## Architecture

### Encoder
The encoder is composed of a stack of identical layers. Each layer has two sub-layers: a multi-head self-attention mechanism and a simple, position-wise fully connected feed-forward network. The self-attention mechanism accepts input encodings from the previous encoder and weighs their relevance to each other to generate output encodings. The feed-forward network then processes each output encoding individually. These output encodings are passed to the next encoder as its input, as well as to the decoders.

### Decoder
The decoder is also composed of a stack of identical layers. In addition to the two sub-layers found in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. The decoder therefore functions much like the encoder, but with an additional attention mechanism that draws relevant information from the encodings generated by the encoders. This mechanism is also called encoder-decoder attention.

### Attention
#### Scaled Dot-Product Attention
The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

> Attention(Q, K, V) = softmax(QK^T / √d_k) * V

#### Multi-Head Attention
Instead of performing a single attention function with d_model-dimensional keys, values and queries, it is beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

> MultiHead(Q, K, V) = Concat(head_1, ..., head_h) * W^O

where

> head_i = Attention(Q * W_i^Q, K * W_i^K, V * W_i^V)

and the projections W_i^Q, W_i^K, W_i^V and W^O are learned parameter matrices.

#### Masked Attention
It may be necessary to cut out attention links between some word pairs. For example, the decoder for token position t should not have access to token position t+1. This is expressed with a mask matrix M that is 0 at positions where attention is allowed and −∞ at positions where it is blocked.

> MaskedAttention(Q, K, V) = softmax(M + (QK^T / √d_k)) * V

### Feed-Forward Network
Each layer in the encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. It consists of two linear transformations with a ReLU activation in between.

> FFN(x) = max(0, x * W_1 + b_1) * W_2 + b_2

### Positional Encoding
A positional encoding is a fixed-size vector representation that encapsulates the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. It uses sine and cosine functions of different frequencies:

> PE(pos, 2i) = sin(pos / 10000^(2i / d_model))

> PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))

Minimal NumPy sketches of these building blocks are given below.
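The following is a minimal NumPy sketch of scaled dot-product attention, including the optional additive mask described under Masked Attention. The toy sequence length, dimensions, and random inputs are illustrative assumptions, not values from the text.

```python
import numpy as np

def softmax(x, axis=-1):
    # Shift by the row-wise max for numerical stability.
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # QK^T / sqrt(d_k), shape (seq_q, seq_k)
    if mask is not None:
        scores = scores + mask               # additive mask: 0 = keep, -inf = block
    weights = softmax(scores, axis=-1)       # attention weights over the keys
    return weights @ V                       # weighted sum of the values

# Example: causal (masked) self-attention over a toy sequence of 4 tokens.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                             # 4 tokens, d_model = 8
causal_mask = np.triu(np.full((4, 4), -np.inf), k=1)    # block attention to future positions
out = scaled_dot_product_attention(x, x, x, mask=causal_mask)
print(out.shape)                                        # (4, 8)
```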
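Multi-head attention projects the queries, keys, and values once per head and concatenates the results, as in the MultiHead formula above. In this sketch the per-head projections W_i^Q, W_i^K, W_i^V and the output projection W^O are random placeholders standing in for learned parameters; a compact unmasked attention helper is redefined so the block runs on its own.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention without masking.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1) @ V

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    # Wq, Wk, Wv: one projection matrix per head; Wo: output projection.
    heads = [attention(Q @ wq, K @ wk, V @ wv) for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo   # Concat(head_1, ..., head_h) * W^O

# Example: h = 2 heads, d_model = 8, d_k = d_v = 4 (so h * d_v = d_model).
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))                      # 4 tokens, d_model = 8
Wq = [rng.normal(size=(8, 4)) for _ in range(2)]
Wk = [rng.normal(size=(8, 4)) for _ in range(2)]
Wv = [rng.normal(size=(8, 4)) for _ in range(2)]
Wo = rng.normal(size=(8, 8))
print(multi_head_attention(x, x, x, Wq, Wk, Wv, Wo).shape)   # (4, 8)
```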
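A sketch of the position-wise feed-forward network FFN(x). The weights are random placeholders for learned parameters, and d_model = 8 with an inner dimension of 32 are arbitrary illustrative choices.

```python
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x * W1 + b1) * W2 + b2, applied to each position independently.
    hidden = np.maximum(0.0, x @ W1 + b1)   # first linear transformation + ReLU
    return hidden @ W2 + b2                 # second linear transformation

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 8))                          # 4 positions, d_model = 8
W1, b1 = rng.normal(size=(8, 32)), np.zeros(32)      # inner dimension 32 (illustrative)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)
print(feed_forward(x, W1, b1, W2, b2).shape)         # (4, 8)
```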
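A sketch of the sinusoidal positional encoding formulas above; the sequence length and d_model (assumed even here) are illustrative values.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Even embedding indices get sine, odd indices get cosine, with the
    # frequency controlled by 10000^(2i / d_model).
    pos = np.arange(seq_len)[:, None]                 # positions 0 .. seq_len-1
    two_i = np.arange(0, d_model, 2)[None, :]         # even indices 2i
    angles = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                      # PE(pos, 2i)
    pe[:, 1::2] = np.cos(angles)                      # PE(pos, 2i+1)
    return pe

pe = positional_encoding(seq_len=10, d_model=8)
print(pe.shape)                                       # (10, 8)
```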
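Tying the pieces together, a single encoder layer applies the self-attention sub-layer followed by the feed-forward sub-layer, as described under Encoder. This sketch reuses the multi_head_attention and feed_forward helpers defined above and deliberately omits the residual connections and layer normalization used in practice.

```python
# Reuses multi_head_attention and feed_forward from the sketches above.
def encoder_layer(x, Wq, Wk, Wv, Wo, W1, b1, W2, b2):
    # Sub-layer 1: multi-head self-attention over the input encodings.
    attended = multi_head_attention(x, x, x, Wq, Wk, Wv, Wo)
    # Sub-layer 2: position-wise feed-forward network.
    return feed_forward(attended, W1, b1, W2, b2)

# Example, using the toy inputs and weights from the previous sketches:
# out = encoder_layer(x, Wq, Wk, Wv, Wo, W1, b1, W2, b2)   # shape (4, 8)
```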
## Implementation
### Theory
Text is converted to numerical representations called tokens, and each token is converted into a vector by looking it up in a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.

### Tensorflow and Keras

### PyTorch

## Application
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Claude, BERT, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related tasks and have the potential to find real-world applications.

These may include:
- Machine translation
- Document summarization
- Text generation
- Biological sequence analysis
- Computer code generation
- Video analysis