Text-based Ranking


(1)

Document ranking

Text-based Ranking

(1st generation)

(2)

Binary vectors X, Y ∈ {0,1}^D (D = number of distinct terms)

Score: overlap measure |X ∩ Y| = number of shared terms

Term-document incidence matrix (1 = the play contains the term):

             Antony&Cleopatra  JuliusCaesar  TheTempest  Hamlet  Othello  Macbeth
  Antony            1               1             0         0       0        1
  Brutus            1               1             0         1       0        0
  Caesar            1               1             0         1       1        1
  Calpurnia         0               1             0         0       0        0
  Cleopatra         1               0             0         0       0        0
  mercy             1               0             1         1       1        1
  worser            1               0             1         1       1        0

(In the figure, X and Y labeled two of these document column vectors.)
What's wrong?
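As a concrete illustration (a sketch, not part of the slides), the overlap score of two document columns of the matrix above can be computed like this:

```python
# Minimal sketch of the overlap measure on binary term vectors.
# The vectors are two document columns from the incidence matrix above:
# term order: Antony, Brutus, Caesar, Calpurnia, Cleopatra, mercy, worser.

antony_cleopatra = [1, 1, 1, 0, 1, 1, 1]
julius_caesar    = [1, 1, 1, 1, 0, 0, 0]

def overlap(x, y):
    """Overlap score: number of terms the two documents share."""
    return sum(xi & yi for xi, yi in zip(x, y))

print(overlap(antony_cleopatra, julius_caesar))  # 3 (Antony, Brutus, Caesar)
```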

(3)

Normalization

Dice coefficient (wrt avg #terms):

  D(X,Y) = 2 |X ∩ Y| / (|X| + |Y|)   →  NO, not triangular

Jaccard coefficient (wrt possible terms):

  J(X,Y) = |X ∩ Y| / |X ∪ Y|   →  OK, triangular (1 − J is a metric)
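A minimal sketch of both coefficients on term sets; the two toy documents are hypothetical:

```python
# Dice and Jaccard normalization over the term sets of two docs.

def dice(x, y):
    return 2 * len(x & y) / (len(x) + len(y))

def jaccard(x, y):
    return len(x & y) / len(x | y)

doc1 = {"antony", "brutus", "caesar", "cleopatra", "mercy", "worser"}
doc2 = {"antony", "brutus", "caesar", "calpurnia"}

print(dice(doc1, doc2))     # 2*3 / (6+4) = 0.6
print(jaccard(doc1, doc2))  # 3/7 ≈ 0.43
```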

(4)

Overlap matching doesn't consider:

Term frequency in a document
  (a doc talks more of t? Then t should be weighted more)

Term scarcity in collection
  ("of" is commoner than "baby bed")

Length of documents
  (scores should be normalized)

(5)

A famous "weight": tf-idf

  w_{t,d} = tf_{t,d} · log(n / n_t)

where:
  tf_{t,d} = number of occurrences of term t in doc d
  n_t = #docs containing term t
  n = #docs in the indexed collection

so the second factor is the inverse document frequency:

  idf_t = log(n / n_t)

Vector Space model

(6)

Why distance is a bad idea

The Euclidean distance between q and d can be large even when their term distributions are very similar: e.g., appending a document to itself doubles its distance from q, while the angle between the two vectors stays the same.

(7)

An example

  cos(v,w) = (v · w) / (||v|| · ||w||)

[figure: vectors v and w in the 3-dimensional term space t1, t2, t3]

  document   v   w
  term 1     2   4
  term 2     0   0
  term 3     3   1

  cos(v,w) = (2·4 + 0·0 + 3·1) / (√(2² + 3²) · √(4² + 1²)) ≈ 0.75  →  angle ≈ 40°

Note: easy to spam (stuffing a document with a chosen term inflates its tf).

Computational Problem

  #pages in .it ≈ a few billion
  #terms ≈ a few million
  #ops ≈ 10^15
  at 1 op/ns, that is 10^15 ns ≈ 1 week (!)

(8)

cosine(query, document)

  cos(q,d) = (q · d) / (||q|| ||d||)
           = Σ_{i=1..D} q_i d_i  /  ( √(Σ_{i=1..D} q_i²) · √(Σ_{i=1..D} d_i²) )

where q · d is the dot product of q and d.

q_i is the tf-idf weight of term i in the query
d_i is the tf-idf weight of term i in the document
cos(q,d) is the cosine similarity of q and d … or, equivalently, the cosine of the angle between q and d.

(9)

Cos for length-normalized vectors

For length-normalized vectors, cosine similarity is simply the dot product (or scalar product):

  cos(q,d) = q · d = Σ_{i=1..D} q_i d_i ,   for q, d length-normalized.
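A tiny illustrative sketch of why this matters: normalize once (say, at index time), and ranking then needs only dot products.

```python
import math

# After length-normalizing, cosine similarity is just the dot product.
def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q = normalize([2, 0, 3])
d = normalize([4, 0, 1])
print(sum(qi * di for qi, di in zip(q, d)))  # same ≈ 0.74 as before
```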

(10)

Storage

For every term t, we have in memory the length n_t of its posting list, so the idf_t is implicitly available.

For every docID d in the posting list of term t, we store its frequency tf_{t,d}, which is typically small and thus stored with unary/gamma codes.

  w_{t,d} = tf_{t,d} · log(n / n_t)
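The slide mentions unary/gamma codes for these small tf values; as a sketch, here is Elias gamma coding over bit strings (assuming tf ≥ 1, as in a posting entry):

```python
# Elias gamma: unary prefix giving the binary length, then the binary digits.
def gamma_encode(x):
    b = bin(x)[2:]                    # binary representation of x, x >= 1
    return "0" * (len(b) - 1) + b     # unary length prefix + binary

def gamma_decode(bits):
    zeros = bits.index("1")               # length of the unary prefix
    return int(bits[zeros:2 * zeros + 1], 2)

for tf in [1, 2, 5, 9]:
    code = gamma_encode(tf)
    assert gamma_decode(code) == tf
    print(tf, "->", code)   # 1 -> 1, 2 -> 010, 5 -> 00101, 9 -> 0001001
```

Small values get short codes, which is exactly the distribution of tf in practice.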

(11)

Computing cosine score

We could restrict the computation to docs in the intersection of the query terms' posting lists, as in the sketch below.
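A sketch of this strategy, assuming a hypothetical postings layout term → {docID: weight} with weights already length-normalized, so the dot product is the cosine score:

```python
# Compute cosine scores only for docs containing *all* query terms.
postings = {
    "brutus": {2: 1.2, 8: 0.5, 16: 0.9},
    "caesar": {1: 0.3, 8: 1.1, 16: 0.4, 34: 0.7},
}

def cosine_scores_on_intersection(query_terms, postings):
    # candidate docs = intersection of the terms' posting lists
    candidates = set.intersection(*(set(postings[t]) for t in query_terms))
    scores = {d: sum(postings[t][d] for t in query_terms) for d in candidates}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(cosine_scores_on_intersection(["brutus", "caesar"], postings))
# [(8, 1.6), (16, 1.3)]
```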

(12)

Vector space OK for bag-of-words queries

Clean metaphor for similar-document queries

Not a good combination with operators:

Boolean, wild-card, positional, proximity

First generation of search engines

Invented before "spamming" web search

(13)

Top-K documents

Approximate retrieval

(14)

Speed-up top-k retrieval

The costly part is the computation of cos()

Find a set A of contenders, with k < |A| << N (N = #docs in the collection)

Set A does not necessarily contain all top-k, but has many docs from among the top-k

Return the top-k docs in A, according to the score

The same approach is also used for other (non-cosine) scoring functions

Will look at several schemes following this approach

(15)

How to select A’s docs

Consider docs containing at least one query term (obvious… as done before!).

Take this further:

1. Only consider docs containing many query terms

2. Only consider high-idf query terms
3. Champion lists: top scores
4. Fancy hits: for complex ranking functions
5. Clustering

Sec. 7.1.2

(16)

Approach #1: Docs containing many query terms

For multi-term queries, compute scores for docs containing several of the query terms

Say, at least q-1 out of q terms of the query

This imposes a "soft conjunction" on queries, as seen on early web search engines (e.g., early Google)

Easy to implement in postings traversal

(17)

Many query terms

  Antony     3  4  8 16 32 64 128
  Brutus     2  4  8 16 32 64 128
  Caesar     1  2  3  5  8 13 21 34
  Calpurnia 13 16 32

Scores are only computed for docs 8, 16 and 32: each appears in at least 3 of the 4 posting lists. A sketch follows.
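A sketch of the soft conjunction on exactly the posting lists above: keep docs appearing in at least q−1 of the q lists, then score only those.

```python
from collections import Counter

lists = {
    "antony":    [3, 4, 8, 16, 32, 64, 128],
    "brutus":    [2, 4, 8, 16, 32, 64, 128],
    "caesar":    [1, 2, 3, 5, 8, 13, 21, 34],
    "calpurnia": [13, 16, 32],
}

q = len(lists)
# count in how many posting lists each doc appears
counts = Counter(d for plist in lists.values() for d in plist)
candidates = sorted(d for d, c in counts.items() if c >= q - 1)
print(candidates)  # [8, 16, 32]
```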

(18)

Approach #2: High-idf query terms only

High idf means a short posting list (rare term).

Intuition: frequent terms such as "in" and "the" contribute little to the scores, and so don't alter the rank-ordering much.

Only accumulate scores for the documents in those (short) posting lists.

(19)

Approach #3: Champion Lists

Preprocess: assign to each term its m best documents

Search:

If |Q| = q terms, merge their preferred lists (≤ mq answers).

Compute cos between Q and these docs, and choose the top k.

Need to pick m > k for this to work well empirically.

tf-idf weight matrix:

             Antony&Cleopatra  JuliusCaesar  TheTempest  Hamlet  Othello  Macbeth
  Antony          13.1             11.4          0.0       0.0     0.0      0.0
  Brutus           3.0              8.3          0.0       1.0     0.0      0.0
  Caesar           2.3              2.3          0.0       0.5     0.3      0.3
  Calpurnia        0.0             11.2          0.0       0.0     0.0      0.0
  Cleopatra       17.7              0.0          0.0       0.0     0.0      0.0
  mercy            0.5              0.0          0.7       0.9     0.9      0.3
  worser           1.2              0.0          0.6       0.6     0.6      0.0
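A sketch of both phases, with hypothetical weights for two terms; build_champion_lists and query are illustrative names:

```python
import heapq

def build_champion_lists(weights, m):
    # weights: term -> {doc: tf-idf}; keep the m docs with highest weight
    return {t: set(heapq.nlargest(m, w, key=w.get)) for t, w in weights.items()}

def query(terms, champions, weights, k):
    candidates = set().union(*(champions[t] for t in terms))   # <= m*q docs
    scores = {d: sum(weights[t].get(d, 0.0) for t in terms) for d in candidates}
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

weights = {
    "brutus": {"JuliusCaesar": 8.3, "AntonyAndCleopatra": 3.0, "Hamlet": 1.0},
    "mercy":  {"Hamlet": 0.9, "Othello": 0.9, "TheTempest": 0.7, "AntonyAndCleopatra": 0.5},
}
champions = build_champion_lists(weights, m=2)
print(query(["brutus", "mercy"], champions, weights, k=2))
# [('JuliusCaesar', 8.3), ('AntonyAndCleopatra', 3.5)]
```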

(20)

Approach #4: Fancy hits

Preprocess:

Assign docIDs by decreasing PR weight, so that sorting by docID = sorting by decreasing PR weight

Define FH(t) = the m docs for t with highest tf-idf weight

Define IL(t) = the rest, kept in docID order

Idea: a document that scores high should be in FH or in the front of IL

Search for a t-term query:

First, FH: take the docs common to the FHs of the query terms, compute their scores, and keep the top-k docs.

Then, IL: scan the ILs and check their common docs; compute the scores and possibly insert those docs into the top-k.

Stop when M docs have been checked or the PR score becomes smaller than some threshold.
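A compact sketch under the stated assumptions (docIDs in decreasing-PR order, FH(t) and IL(t) as sets of docIDs, score any scoring function passed by the caller); the PR-threshold stopping condition is omitted for brevity:

```python
import heapq

def fancy_hits_query(terms, FH, IL, score, k, M):
    top = []  # min-heap of (score, doc); holds the current top-k
    def consider(d):
        s = score(d, terms)
        if len(top) < k:
            heapq.heappush(top, (s, d))
        elif s > top[0][0]:
            heapq.heapreplace(top, (s, d))

    # Phase 1: docs common to all fancy-hit lists
    for d in set.intersection(*(FH[t] for t in terms)):
        consider(d)

    # Phase 2: common docs of the ILs, in increasing docID order
    # (= decreasing PR); stop after checking M docs
    common_il = sorted(set.intersection(*(IL[t] for t in terms)))
    for checked, d in enumerate(common_il):
        if checked >= M:
            break
        consider(d)
    return sorted(top, reverse=True)
```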

(21)

Modeling authority

Assign to each document d a query-independent quality score in [0,1]

Denote this by g(d)

Thus, a quantity like the number of citations (?) is scaled into [0,1]

Sec. 7.1.4

(22)

Champion lists in g(d)-ordering

Can combine champion lists with g(d)-ordering

Or, maintain for each term a champion list of the r > k docs with highest g(d) + tf-idf_{t,d}

(g(d) may be the PageRank)

Seek top-k results from only the docs in these champion lists
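A one-function sketch of such a champion list; g and the weights below are made-up numbers:

```python
import heapq

def champion_list(term_weights, g, r):
    # term_weights: doc -> tf-idf of this term; g: doc -> quality in [0,1]
    return heapq.nlargest(r, term_weights, key=lambda d: g[d] + term_weights[d])

g = {"d1": 0.9, "d2": 0.2, "d3": 0.5}   # e.g., scaled PageRank
w = {"d1": 0.1, "d2": 1.5, "d3": 0.4}   # tf-idf for one term
print(champion_list(w, g, r=2))  # ['d2', 'd1'] — combined scores 1.7 and 1.0
```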

(23)

Approach #5: Clustering

[figure: a query compared against cluster leaders; each leader has attached followers]

Sec. 7.1.6

(24)

Cluster pruning: preprocessing

Pick N docs at random: call these leaders

For every other doc, pre- compute nearest leader

Docs attached to a leader: its followers;

Likely: each leader has ~ N

followers.

(25)

Cluster pruning: query processing

Process a query as follows:

Given query Q, find its nearest leader L.

Seek the k nearest docs from among L’s followers.

Sec. 7.1.6
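A sketch of the whole scheme, with cosine as the proximity measure; documents are assumed to be non-zero tf-idf vectors, and all names are illustrative:

```python
import heapq
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def preprocess(docs):
    # sqrt(N) random leaders; every doc attached to its nearest leader
    leaders = random.sample(range(len(docs)), int(math.sqrt(len(docs))))
    followers = {l: [] for l in leaders}
    for i, d in enumerate(docs):
        nearest = max(leaders, key=lambda l: cosine(d, docs[l]))
        followers[nearest].append(i)
    return leaders, followers

def query(q, docs, leaders, followers, k):
    # compare q against the leaders only, then against one leader's followers
    L = max(leaders, key=lambda l: cosine(q, docs[l]))
    return heapq.nlargest(k, followers[L], key=lambda i: cosine(q, docs[i]))
```

The variants of the next slides generalize this by attaching each follower to b1 leaders and probing b2 leaders at query time.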

(26)

Why use random sampling

Fast

Leaders reflect data distribution

(27)

General variants

Have each follower attached to b1 = 3 (say) nearest leaders.

From the query, find b2 = 4 (say) nearest leaders and their followers.

Can recur on leader/follower construction.

Sec. 7.1.6
