- Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- SimCSE: Simple Contrastive Learning of Sentence Embeddings
- Attention: Neural Machine Translation by Jointly Learning to Align and Translate
- Transformer: Attention is All You Need
- DPR: Dense Passage Retrieval for Open-Domain Question Answering
- BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity
- GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation
- Improving Text Embeddings with Large Language Models