lecture14-distributed-representations.pptx

Slide Content

CS276: Information Retrieval and Web Search Christopher Manning and Pandu Nayak Lecture 14: Distributed Word Representations for Information Retrieval

How can we more robustly match a user’s search intent? We want to understand a query, not just do String equals() If user searches for [Dell notebook battery size], we would like to match documents discussing “Dell laptop battery capacity” If user searches for [Seattle motel], we would like to match documents containing “Seattle hotel” A pure keyword-matching IR system does nothing to help…. Simple facilities that we have already discussed do a bit to help Spelling correction Stemming / case folding But we’d like to better understand when query/document match Sec. 9.2.2

How can we more robustly match a user’s search intent? Query expansion: Relevance feedback could allow us to capture this if we get near enough to matching documents with these words We can also use information on word similarities : A manual thesaurus of synonyms for query expansion A measure of word similarity Calculated from a big document collection Calculated by query log mining (common on the web) Document expansion: Use of anchor text may solve this by providing human authored synonyms, but not for new or less popular web pages, or non-hyperlinked collections Sec. 9.2.2

Example of manual thesaurus Sec. 9.2.2

Search log query expansion Context-free query expansion ends up problematic [wet ground] ≈ [wet earth] So expand [ground] ⇒ [ground earth] But [ground coffee] ≠ [earth coffee] You can learn query context-specific rewritings from search logs by attempting to identify the same user making a second attempt at the same user need [Hinton word vector] [Hinton word embedding] In this context, [vector] ≈ [embedding] But not when talking about a disease vector or C++!

Automatic Thesaurus Generation Attempt to generate a thesaurus automatically by analyzing a collection of documents Fundamental notion: similarity between two words Definition 1: Two words are similar if they co-occur with similar words. Definition 2: Two words are similar if they occur in a given grammatical relation with the same words. You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar. Co-occurrence based is more robust, grammatical relations are more accurate. Why? Sec. 9.2.3

Simple Co-occurrence Thesaurus Simplest way to compute one is based on term-term similarities in C = AA^T, where A is the M × N term-document matrix and w_i,j = the (normalized) weight for (t_i, d_j). For each t_i, pick the terms with the highest values in row i of C. What does C contain if A is a term-document incidence (0/1) matrix? Sec. 9.2.3
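
As a concrete sketch of this computation (with a tiny made-up term-document weight matrix A; the terms and weights are illustrative only, not from the lecture):

import numpy as np

# Toy term-document matrix A (M terms x N docs); weights are made up.
terms = ["hotel", "motel", "coffee", "ground", "earth"]
A = np.array([
    [2.0, 0.0, 1.0, 0.0],   # hotel
    [1.0, 0.0, 2.0, 0.0],   # motel
    [0.0, 3.0, 0.0, 1.0],   # coffee
    [0.0, 2.0, 0.0, 2.0],   # ground
    [0.0, 0.0, 0.0, 3.0],   # earth
])

C = A @ A.T   # term-term similarity matrix C = A A^T
# If A were a 0/1 incidence matrix, C[i, j] would count documents containing both t_i and t_j.

# For each term t_i, pick the other terms with the highest values in row i of C.
for i, t in enumerate(terms):
    sims = C[i].copy()
    sims[i] = -np.inf   # ignore self-similarity
    neighbors = [terms[j] for j in np.argsort(sims)[::-1][:2]]
    print(t, "->", neighbors)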

Automatic thesaurus generation example … sort of works
Word: nearest neighbors
absolutely: absurd, whatsoever, totally, exactly, nothing
bottomed: dip, copper, drops, topped, slide, trimmed
captivating: shimmer, stunningly, superbly, plucky, witty
doghouse: dog, porch, crawling, beside, downstairs
makeup: repellent, lotion, glossy, sunscreen, skin, gel
mediating: reconciliation, negotiate, cease, conciliation
keeping: hoping, bring, wiping, could, some, would
lithographs: drawings, Picasso, Dali, sculptures, Gauguin
pathogens: toxins, bacteria, organisms, bacterial, parasites
senses: grasp, psyche, truly, clumsy, naïve, innate
Too little data (10s of millions of words) treated by too sparse a method: 100,000 words means 10^10 entries in C.

How can we represent term relations? With the standard symbolic encoding of terms, each term is a dimension, and different terms have no inherent similarity: motel = [0 0 0 0 0 0 0 0 0 0 1 0 0 0 0] and hotel = [0 0 0 0 0 0 0 1 0 0 0 0 0 0 0], so motel^T hotel = 0. If the query is hotel and the document has motel, then our query and document vectors are orthogonal. Sec. 9.2.2
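
A tiny sketch of why that dot product is zero, with one-hot vectors whose non-zero positions simply mirror the slide's example:

import numpy as np

# Symbolic (one-hot) encodings in a 15-dimensional vocabulary.
motel = np.zeros(15); motel[10] = 1.0
hotel = np.zeros(15); hotel[7] = 1.0

print(motel @ hotel)   # 0.0: the non-zero entries sit in different dimensions, so the vectors are orthogonal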

Can you directly learn term relations? Basic IR is scoring on q^T d: no treatment of synonyms, no machine learning. Can we learn parameters W to rank via q^T W d? Cf. query translation models: Berger and Lafferty (1999). Problem is again sparsity – W is huge, with > 10^10 entries.

Is there a better way? Idea: Can we learn a dense low-dimensional representation of a word in ℝ^d such that dot products u^T v express word similarity? We could still include a “translation” matrix between vocabularies if we want (e.g., cross-language): u^T W v. But now W is small! Supervised Semantic Indexing (Bai et al., Journal of Information Retrieval 2009) shows successful use of learning W for information retrieval. But we’ll develop direct similarity in this class.

Distributional similarity based representations You can get a lot of value by representing a word by means of its neighbors: “You shall know a word by the company it keeps” (J. R. Firth 1957: 11). This is one of the most successful ideas of modern statistical NLP. For example: “… government debt problems turning into banking crises as happened in 2009 …”, “… saying that Europe needs unified banking regulation to replace the hodgepodge …”, “… India has just given its banking system a shot in the arm …” – these context words will represent banking.

Solution: Low dimensional vectors The number of topics that people talk about is small (in some sense) Clothes, movies, politics, … Idea: store “most” of the important information in a fixed, small number of dimensions: a dense vector Usually 25 – 1000 dimensions How to reduce the dimensionality? Go from big, sparse co-occurrence count vector to low dimensional “word embedding” 13

Traditional Way: Latent Semantic Indexing/Analysis Use Singular Value Decomposition (SVD) – kind of like Principal Components Analysis (PCA) for an arbitrary rectangular matrix – or just random projection to find a low-dimensional basis or orthogonal vectors Theory is that similarity is preserved as much as possible You can actually gain in IR (slightly) by doing LSA, as “noise” of term variation gets replaced by semantic “concepts” Somewhat popular in the 1990s [ Deerwester et al. 1990, etc.] But results were always somewhat iffy ( … it worked sometimes) Hard to implement efficiently in an IR system (dense vectors!) Discussed in IIR chapter 18, but not discussed further here Not on the exam (!!!) Sec. 18.2
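
A minimal sketch of LSA-style dimensionality reduction using a plain truncated SVD in NumPy; the matrix size, k, and the query term positions are made up for illustration and do not come from the lecture:

import numpy as np

# Toy term-document count matrix (1000 terms x 200 docs), values illustrative only.
rng = np.random.default_rng(0)
A = rng.poisson(0.5, size=(1000, 200)).astype(float)

# Truncated SVD: A ~ U_k S_k V_k^T, keeping k latent "concept" dimensions.
k = 50
U, S, Vt = np.linalg.svd(A, full_matrices=False)
term_vecs = U[:, :k] * S[:k]     # dense k-dim representation of each term
doc_vecs = Vt[:k, :].T * S[:k]   # dense k-dim representation of each document

# Fold a (sparse) query into the same concept space: q_k = S_k^{-1} U_k^T q
q = np.zeros(1000); q[[3, 17]] = 1.0   # pretend the query contains terms 3 and 17
q_k = (q @ U[:, :k]) / S[:k]

scores = doc_vecs @ q_k          # rank docs in concept space (cosine is more usual; dot product kept for brevity)
print(scores.argsort()[::-1][:5])   # indices of the top-5 documents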

“Neural Embeddings”

Word meaning is defined in terms of vectors We will build a dense vector for each word type, chosen so that it is good at predicting other words appearing in its context … those other words also being represented by vectors … it all gets a bit recursive. For example: banking = [0.286, 0.792, −0.177, −0.107, 0.109, −0.542, 0.349, 0.271]

Neural word embeddings - visualization 17

Basic idea of learning neural network word embeddings We define a model that aims to predict between a center word w_t and context words in terms of word vectors, p(context | w_t) = …, which has a loss function, e.g., J = 1 − p(w_{−t} | w_t), where w_{−t} denotes the context words around position t. We look at many positions t in a big language corpus and keep adjusting the vector representations of words to minimize this loss.

Idea: Directly learn low-dimensional word vectors based on ability to predict Old idea: Learning representations by back-propagating errors (Rumelhart et al., 1986). A neural probabilistic language model (Bengio et al., 2003) and NLP (almost) from Scratch (Collobert & Weston, 2008): non-linear and slow. A recent, even simpler and faster model: word2vec (Mikolov et al. 2013) – intro now. The GloVe model from Stanford (Pennington, Socher, and Manning 2014) connects back to matrix factorization; these are fast bilinear models. Per-token representations – deep contextual word representations such as ELMo, ULMfit, BERT – are the current state of the art.

Word2vec is a family of algorithms [Mikolov et al. 2013] Predict between every word and its context words! Two algorithms: Skip-grams (SG) – predict context words given the target (position independent) – and Continuous Bag of Words (CBOW) – predict the target word from the bag-of-words context. Two (moderately efficient) training methods, hierarchical softmax and negative sampling, plus the simpler but slower naïve softmax.

Word2Vec Skip-gram Overview Example window and process for computing P(w_{t+j} | w_t): in “… problems turning into banking crises as …”, banking is the center word at position t, and the outside context words in a window of size 2 on either side (turning, into; crises, as) are predicted from it.

Word2vec: objective function For each position t = 1, …, T, predict context words within a window of fixed size m, given center word w_t. The likelihood is L(θ) = ∏_{t=1..T} ∏_{−m ≤ j ≤ m, j ≠ 0} P(w_{t+j} | w_t; θ), where θ is all variables to be optimized. The objective function (sometimes called cost or loss function) is the (average) negative log likelihood: J(θ) = −(1/T) Σ_{t=1..T} Σ_{−m ≤ j ≤ m, j ≠ 0} log P(w_{t+j} | w_t; θ). Minimizing the objective function ⟺ maximizing predictive accuracy.

Word2vec: objective function We want to minimize the objective function J(θ). Question: How to calculate P(w_{t+j} | w_t; θ)? Answer: We will use two vectors per word w: v_w when w is a center word and u_w when w is a context word. Then for a center word c and a context word o: P(o | c) = exp(u_o · v_c) / Σ_{w ∈ V} exp(u_w · v_c).

Word2vec: prediction function P(o | c) = exp(u_o · v_c) / Σ_{w ∈ V} exp(u_w · v_c) is an example of the softmax function, which maps arbitrary values x_i to a probability distribution p_i = exp(x_i) / Σ_j exp(x_j). It is “max” because it amplifies the probability of the largest x_i, and “soft” because it still assigns some probability to smaller x_i; it is frequently used in neural networks / deep learning. The dot product compares the similarity of o and c: a larger dot product means a larger probability. Exponentiation makes anything positive, and normalizing over the entire vocabulary gives a probability distribution.
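
A toy sketch of this prediction function, assuming a small random vocabulary with separate center vectors (V) and context vectors (U); the sizes and values are illustrative only:

import numpy as np

def softmax(x):
    """Map arbitrary scores to a probability distribution (numerically stable form)."""
    e = np.exp(x - np.max(x))   # exponentiation makes everything positive
    return e / e.sum()          # normalizing gives a probability distribution

vocab_size, d = 10, 8
rng = np.random.default_rng(0)
V = rng.normal(size=(vocab_size, d))   # v_w: vectors for words used as center words
U = rng.normal(size=(vocab_size, d))   # u_w: vectors for words used as context words

def p_context_given_center(o, c):
    """P(o | c) = exp(u_o . v_c) / sum_w exp(u_w . v_c)  (naive softmax)."""
    scores = U @ V[c]   # dot products compare each possible context word with center word c
    return softmax(scores)[o]

print(p_context_given_center(o=3, c=5))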

Word2vec: 2 matrices of parameters Center word embeddings as rows Context word embeddings as columns (Transposed!)

To learn good word vectors: Compute all vector gradients! We often define the set of all parameters in a model in terms of one long vector θ. In our case, with d-dimensional vectors and V-many words, θ ∈ ℝ^{2dV}. We then optimize these parameters. Note: Every word has two vectors! Makes it simpler!

Intuition of how to minimize loss for a simple function over two parameters We start at a random point and walk in the steepest direction, which is given by the derivative of the function Contour lines show points of equal value of objective function

Descending by using derivatives We will minimize a cost function by gradient descent. Trivial example (from Wikipedia): find a local minimum of the function f(x) = x^4 − 3x^3 + 2, with derivative f′(x) = 4x^3 − 9x^2. Subtracting a fraction of the gradient moves you towards the minimum!
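
A minimal sketch of gradient descent on exactly this function; the starting point and step size are chosen purely for illustration:

def f_prime(x):
    return 4 * x**3 - 9 * x**2   # derivative of f(x) = x^4 - 3x^3 + 2

x = 6.0        # arbitrary starting point
step = 0.01    # step size (learning rate)
for _ in range(10000):
    x = x - step * f_prime(x)    # subtract a fraction of the gradient

print(x)       # approaches 2.25, the local minimum of f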

Vanilla Gradient Descent Code
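
The slide's code itself is not in the extracted text; below is a sketch of what a vanilla full-batch update loop looks like, where evaluate_gradient is a hypothetical placeholder for computing the gradient of J over the whole corpus:

def gradient_descent(theta, corpus, evaluate_gradient, alpha=0.05, n_steps=1000):
    for _ in range(n_steps):
        grad = evaluate_gradient(theta, corpus)   # gradient of J(theta) over all windows in the corpus
        theta = theta - alpha * grad              # step against the gradient
    return theta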

Stochastic Gradient Descent But the corpus may have 40B tokens and windows, so you would wait a very long time before making a single update! That is a very bad idea for pretty much all neural nets. Instead, we update the parameters after each window t: stochastic gradient descent (SGD).
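
A matching sketch of the stochastic variant: update after each sampled window rather than after a full pass over the corpus (window_gradient is again a hypothetical placeholder):

import random

def sgd(theta, windows, window_gradient, alpha=0.05, n_steps=100000):
    for _ in range(n_steps):
        w = random.choice(windows)   # one (center word, context) window at a time
        theta = theta - alpha * window_gradient(theta, w)
    return theta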

Working out how to optimize a neural network is really all the chain rule! Chain rule: if y = f(u) and u = g(x), i.e. y = f(g(x)), then dy/dx = (dy/du)(du/dx). Simple example:
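
The slide's own worked example is not in the extracted text; as a stand-in, here is a small SymPy check that differentiating a composite function directly agrees with applying the chain rule by hand (the particular functions are arbitrary):

import sympy as sp

x = sp.symbols('x')
u = 3 * x**2 + 1   # inner function u = g(x)
y = u**4           # outer function y = f(u), so y = f(g(x))

by_chain_rule = 4 * u**3 * sp.diff(u, x)   # dy/du * du/dx
directly = sp.diff(y, x)                   # SymPy differentiates y with respect to x

print(sp.simplify(by_chain_rule - directly))   # 0: the two derivatives agree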


Linear Relationships in word2vec These representations are very good at encoding similarity and dimensions of similarity! Analogies testing dimensions of similarity can be solved quite well just by doing vector subtraction in the embedding space. Syntactically: x_apple − x_apples ≈ x_car − x_cars ≈ x_family − x_families, and similarly for verb and adjective morphological forms. Semantically (SemEval 2012 task 2): x_shirt − x_clothing ≈ x_chair − x_furniture and x_king − x_man ≈ x_queen − x_woman.

Word Analogies Test for linear relationships, examined by Mikolov et al.: a:b :: c:?, e.g., man:woman :: king:? With the toy 2-D vectors on the slide – man = [0.20, 0.20], woman = [0.60, 0.30], king = [0.30, 0.70] – computing king − man + woman gives [0.70, 0.80], which is the vector for queen.
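
A small sketch that reproduces the slide's toy numbers with plain vector arithmetic and a cosine nearest-neighbor lookup:

import numpy as np

vecs = {   # the 2-D toy vectors from the slide
    "man":   np.array([0.20, 0.20]),
    "woman": np.array([0.60, 0.30]),
    "king":  np.array([0.30, 0.70]),
    "queen": np.array([0.70, 0.80]),
}

def analogy(a, b, c):
    """Solve a:b :: c:? by computing x_b - x_a + x_c and taking the nearest remaining word."""
    target = vecs[b] - vecs[a] + vecs[c]
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(vecs[w], target))

print(analogy("man", "woman", "king"))   # queen: woman - man + king = [0.70, 0.80]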

GloVe Visualizations 39 http://nlp.stanford.edu/projects/glove/

Glove Visualizations: Company - CEO 40

Glove Visualizations: Superlatives 41

Application to Information Retrieval Application is just beginning – we’re “at the end of the early years”. Google’s RankBrain – little is publicly known. Bloomberg article by Jack Clark (Oct 26, 2015): http://www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines A result reranking system; the “3rd most valuable ranking signal”. But note: more of the potential value is in the tail? New SIGIR Neu-IR workshop series (2016 on)

An application to information retrieval Nalisnick, Mitra, Craswell & Caruana. 2016. Improving Document Ranking with Dual Word Embeddings. WWW 2016 Companion. http://research.microsoft.com/pubs/260867/pp1291-Nalisnick.pdf Mitra, Nalisnick, Craswell & Caruana. 2016. A Dual Embedding Space Model for Document Ranking. arXiv:1602.01137 [cs.IR] Builds on the BM25 model’s idea of “aboutness”: not just term repetition indicating aboutness, but the relationship between query terms and all terms in the document indicating aboutness (BM25 uses only query terms). Makes a clever argument for different use of word and context vectors in word2vec’s CBOW/SGNS or GloVe.

Modeling document aboutness: results from a search for Albuquerque (two example documents, d1 and d2, are shown on the slide).

Using 2 word embeddings A word2vec model with 1 word of context learns two embedding matrices: W_IN holds embeddings for focus words, and W_OUT holds embeddings for context words. We can gain by using these two embeddings differently.

Using 2 word embeddings

Dual Embedding Space Model (DESM) Simple model A document is represented by the centroid of its word vectors Query-document similarity is average over query words of cosine similarity

Dual Embedding Space Model (DESM) What works best is to use the OUT vectors for the document and the IN vectors for the query This way similarity measures aboutness – words that appear with this word – which is more useful in this context than (distributional) semantic similarity
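
A minimal sketch of the scoring rule described above, assuming in_vecs and out_vecs are simple dictionaries mapping words to their IN and OUT vectors (these names and the toy usage are placeholders, not the paper's code):

import numpy as np

def desm_score(query_terms, doc_terms, in_vecs, out_vecs):
    """Average, over query terms, of cosine(query term IN vector, centroid of the
    document's normalized OUT vectors)."""
    doc_centroid = np.mean(
        [out_vecs[t] / np.linalg.norm(out_vecs[t]) for t in doc_terms], axis=0)
    sims = []
    for q in query_terms:
        qv = in_vecs[q]
        sims.append(qv @ doc_centroid / (np.linalg.norm(qv) * np.linalg.norm(doc_centroid)))
    return float(np.mean(sims))

# Toy usage with made-up 2-D vectors:
iv = {"albuquerque": np.array([1.0, 0.2])}
ov = {"albuquerque": np.array([0.9, 0.4]), "metropolitan": np.array([0.8, 0.5]), "population": np.array([0.2, 0.9])}
print(desm_score(["albuquerque"], ["albuquerque", "metropolitan", "population"], iv, ov))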

Experiments Train word2vec from either 600 million Bing queries or 342 million web document sentences. Test on 7,741 randomly sampled Bing queries with 5-level eval (Perfect, Excellent, Good, Fair, Bad). Two approaches: use the DESM model to rerank the top results from BM25, or use DESM alone or in a mixture model with BM25.

Results – reranking the k-best list Pretty decent gains – e.g., 2% for NDCG@3. Gains are bigger for the model trained on queries than for the one trained on docs.

Results – whole ranking system

A possible explanation IN-OUT has some ability to prefer relevant documents over close-by (judged) non-relevant ones, but its scores induce too much noise vs. BM25 to be usable alone.

DESM conclusions DESM is a weak ranker but effective at finding subtler similarities/aboutness It is effective at, but only at, reranking at least somewhat relevant documents For example, DESM can confuse Oxford and Cambridge Bing rarely makes an Oxford/Cambridge mistake!

What else can neural nets do in IR? Use a neural network as a supervised reranker Assume a query and document embedding network (as we have discussed) Assume you have ( q,d,rel ) relevance data Learn a neural network (with supervised learning) to predict relevance of ( q,d ) pair An example of “machine-learned relevance”, which we’ll talk about more next lecture

What else can neural nets do in IR? BERT: Devlin, Chang, Lee, Toutanova (2018) A deep transformer-based neural network Builds per-token (in context) representations Produces a query/document representation as well Or jointly embed query and document and ask for a retrieval score Incredibly effective! https://arxiv.org/abs/1810.04805

Summary: Embed all the things! Word embeddings are the hot new technology (again!) Lots of applications wherever knowing word context or similarity helps prediction: Synonym handling in search Document aboutness Ad serving Language models: from spelling correction to email response Machine translation Sentiment analysis …

Global vs. local embedding [Diaz 2016]

Global vs. local embedding [Diaz 2016] Train w2v on documents from first round of retrieval Fine-grained word sense disambiguation

Ad-hoc retrieval using local and distributed representations [Mitra et al. 2017] Argues that both “lexical” and “semantic” matching are important for document ranking. The Duet model is a linear combination of two DNNs using local and distributed representations of query/document as inputs, jointly trained on labelled data.