Scaling Personalized Web Search

Glen Jeh
Stanford University
glenj@db.stanford.edu


Jennifer Widom
Stanford University
widom@db.stanford.edu

Copyright is held by the author/owner(s).
WWW2003, May 20-24, 2003, Budapest, Hungary.
ACM 1-58113-680-3/03/0005.

Abstract

Recent web search techniques augment traditional text matching with a global notion of ``importance'' based on the linkage structure of the web, such as in Google's PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance--for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques.

Categories and Subject Descriptors

G.2.2 [Discrete Mathematics]: Graph Theory

General Terms

Algorithms

Keywords

web search, PageRank


1. Introduction and Motivation

General web search is performed predominantly through text queries to search engines. Because of the enormous size of the web, text alone is usually not selective enough to limit the number of query results to a manageable size. The PageRank algorithm [11], among others [9], has been proposed (and implemented in Google [1]) to exploit the linkage structure of the web to compute global ``importance'' scores that can be used to influence the ranking of search results. To encompass different notions of importance for different users and queries, the basic PageRank algorithm can be modified to create ``personalized views'' of the web, redefining importance according to user preference. For example, a user may wish to specify his bookmarks as a set of preferred pages, so that any query results that are important with respect to his bookmarked pages would be ranked higher. While experimentation with the use of personalized PageRank has shown its utility and promise [5,11], the size of the web makes its practical realization extremely difficult. To see why, let us review the intuition behind the PageRank algorithm and its extension for personalization.

The fundamental motivation underlying PageRank is the recursive notion that important pages are those linked-to by many important pages. A page with only two in-links, for example, may seem unlikely to be an important page, but it may be important if the two referencing pages are Yahoo! and Netscape, which themselves are important pages because they have numerous in-links. One way to formalize this recursive notion is to use the ``random surfer'' model introduced in [11]. Imagine that trillions of random surfers are browsing the web: if at a certain time step a surfer is looking at page $p$, at the next time step he looks at a random out-neighbor of $p$. As time goes on, the expected percentage of surfers at each page $p$ converges (under certain conditions) to a limit $r(p)$ that is independent of the distribution of starting points. Intuitively, this limit is the PageRank of $p$, and is taken to be an importance score for $p$, since it reflects the number of people expected to be looking at $p$ at any one time.

The PageRank score $r(p)$ reflects a ``democratic'' importance that has no preference for any particular pages. In reality, a user may have a set $P$ of preferred pages (such as his bookmarks) which he considers more interesting. We can account for preferred pages in the random surfer model by introducing a ``teleportation'' probability $c$: at each step, a surfer jumps back to a random page in $P$ with probability $c$, and with probability $1-c$ continues forth along a hyperlink. The limit distribution of surfers in this model would favor pages in $P$, pages linked-to by $P$, pages linked-to in turn, etc. We represent this distribution as a personalized PageRank vector (PPV) personalized on the set $P$. Informally, a PPV is a personalized view of the importance of pages on the web. Rankings of a user's text-based query results can be biased according to a PPV instead of the global importance distribution.

Each PPV is of length $n$, where $n$ is the number of pages on the web. Computing a PPV naively using a fixed-point iteration requires multiple scans of the web graph [11], which makes it impossible to carry out online in response to a user query. On the other hand, the set of PPV's for all preference sets, of which there are $2^n$, is far too large to compute and store offline. We present a method for encoding PPV's as partially-computed, shared vectors that are practical to compute and store offline, and from which PPV's can be computed quickly at query time.

In our approach we restrict preference sets $P$ to subsets of a set of hub pages $H$, selected as those of greater interest for personalization. In practice, we expect $H$ to be a set of pages with high PageRank (``important pages''), pages in a human-constructed directory such as Yahoo! or Open Directory [2], or pages important to a particular enterprise or application. The size of $H$ can be thought of as the available degree of personalization. We present algorithms that, unlike previous work [5,11], scale well with the size of $H$. Moreover, the same techniques we introduce can yield approximations on the much broader set of all PPV's, allowing at least some level of personalization on arbitrary preference sets.

The main contributions of this paper are a scalable encoding of PPV's as shared partial vectors and a hubs skeleton, efficient dynamic programming algorithms for computing these partial quantities, an algorithm for constructing PPV's from them at query time, and experiments on real web data demonstrating the effectiveness and scalability of our techniques.

In Section 2 we introduce the notation used in this paper and formalize personalized PageRank mathematically. Section 3 presents basis vectors, the first step towards encoding PPV's as shared components. The full encoding is presented in Section 4. Section 5 discusses the computation of partial quantities. Experimental results are presented in Section 6. Related work is discussed in Section 7. Section 8 summarizes the contributions of this paper.

Due to space constraints, this paper omits proofs of the theorems and algorithms presented. These proofs are included as appendices in the full version of this paper [7].


2. Preliminaries

Let $G = (V,E)$ denote the web graph, where $V$ is the set of all web pages and $E$ contains a directed edge $\langle p, q \rangle$ iff page $p$ links to page $q$. For a page $p$, we denote by $I(p)$ and $O(p)$ the set of in-neighbors and out-neighbors of $p$, respectively. Individual in-neighbors are denoted as $I_i(p)$ ( $1 \leq i \leq \vert I(p)\vert$), and individual out-neighbors are denoted analogously. For convenience, pages are numbered from $1$ to $n$, and we refer to a page $p$ and its associated number $i$ interchangeably. For a vector $\bm{v}$, $v(p)$ denotes entry $p$, the $p$-th component of $\bm{v}$. We always typeset vectors in boldface and scalars (e.g., $v(p)$) in normal font. All vectors in this paper are $n$-dimensional and have nonnegative entries. They should be thought of as distributions rather than arrows. The magnitude of a vector $\bm{v}$ is defined to be $\sum_{i=1}^n{v(i)}$ and is written $\left\vert\bm{v}\right\vert$. In this paper, vector magnitudes are always in $[0,1]$. In an implementation, a vector may be represented as a list of its nonzero entries, so another useful measure is the size of $\bm{v}$, the number of nonzero entries in $\bm{v}$.

We generalize the preference set $P$ discussed in Section 1 to a preference vector $\bm{u}$, where $\vert\bm{u}\vert = 1$ and $u(p)$ denotes the amount of preference for page $p$. For example, a user who wants to personalize on his bookmarked pages $P$ uniformly would have a $\bm{u}$ where $u(p) = \frac{1}{\vert P\vert}$ if $p \in P$, and $u(p) = 0$ if $p \notin P$. We formalize personalized PageRank scoring using matrix-vector equations. Let $\bm{A}$ be the matrix corresponding to the web graph $G$, where $A_{ij} = \frac{1}{\vert O(j)\vert}$ if page $j$ links to page $i$, and $A_{ij} = 0$ otherwise. For simplicity of presentation, we assume that every page has at least one out-neighbor, as can be enforced by adding self-links to pages without out-links. The resulting scores can be adjusted to account for the (minor) effects of this modification, as specified in the appendices of the full version of this paper [7].

For a given $\bm{u}$, the personalized PageRank equation can be written as

\begin{displaymath}
\bm{v} = (1-c)\bm{Av} + c\bm{u}
\end{displaymath} (1)

where $c \in (0,1)$ is the ``teleportation'' constant discussed in Section 1. Typically $c \approx 0.15$, and experiments have shown that small changes in $c$ have little effect in practice [11]. A solution $\bm{v}$ to equation (1) is a steady-state distribution of random surfers under the model discussed in Section 1, where at each step a surfer teleports to page $p$ with probability $c \cdot u(p)$, or moves to a random out-neighbor otherwise [11]. By a theorem of Markov Theory, a solution $\bm{v}$ with $\vert\bm{v}\vert = 1$ always exists and is unique [10].[footnote 1] The solution $\bm{v}$ is the personalized PageRank vector (PPV) for preference vector $\bm{u}$. If $\bm{u}$ is the uniform distribution vector $\bm{u} = [1/n, \dots, 1/n]$, then the corresponding solution $\bm{v}$ is the global PageRank vector [11], which gives no preference to any pages.
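For concreteness, equation (1) can be solved by the fixed-point (power) iteration discussed above. The following is a minimal Python sketch on a toy graph, not the paper's implementation; the graph, function name, and convergence tolerance are illustrative assumptions. Sparse vectors are represented as dictionaries of their nonzero entries, matching the representation discussed above.

\begin{verbatim}
# Minimal power-iteration sketch for equation (1): v = (1-c) A v + c u.
# Sparse vectors are dicts mapping page -> score; 'out' lists out-neighbors.

def personalized_pagerank(out, u, c=0.15, tol=1e-10, max_iter=1000):
    v = dict(u)                                   # start from the preference vector
    for _ in range(max_iter):
        nxt = {p: c * w for p, w in u.items()}    # teleportation term c*u
        for p, score in v.items():
            share = (1 - c) * score / len(out[p])
            for q in out[p]:                      # the (1-c) A v term
                nxt[q] = nxt.get(q, 0.0) + share
        if sum(abs(nxt.get(p, 0.0) - v.get(p, 0.0))
               for p in set(v) | set(nxt)) < tol:
            return nxt
        v = nxt
    return v

# Toy graph; every page has an out-neighbor, as assumed above.
out = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
print(personalized_pagerank(out, {'a': 1.0}))     # the PPV personalized on page 'a'
\end{verbatim}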

For the reader's convenience, Table 1 lists terminology that will be used extensively in the coming sections.

Table 1: Summary of terms.
Term | Description | Section
Hub Set $H$ | A subset of web pages. | 1
Preference Set $P$ | Set of pages on which to personalize (restricted in this paper to subsets of $H$). | 1
Preference Vector $\bm{u}$ | Preference set with weights. | 2
Personalized PageRank Vector (PPV) | Importance distribution induced by a preference vector. | 2
Basis Vector $\bm{r_p}$ (or $\bm{r_i}$) | PPV for a preference vector with a single nonzero entry at $p$ (or $i$). | 3
Hub Vector $\bm{r_p}$ | Basis vector for a hub page $p \in H$. | 3
Partial Vector $(\bm{r_p-r_p^H})$ | Used with the hubs skeleton to construct a hub vector. | 4.2
Hubs Skeleton $S$ | Used with partial vectors to construct a hub vector. | 4.3
Web Skeleton | Extension of the hubs skeleton to include pages not in $H$. | 4.4.3
Partial Quantities | Partial vectors and the hubs and web skeletons. |
Intermediate Results | Maintained during iterative computations. | 5.2



3. Basis Vectors

We present the first step towards encoding PPV's as shared components. The motivation behind the encoding is a simple observation about the linearity[footnote 2] of PPV's, formalized by the following theorem.

Theorem 1 (Linearity)   For any preference vectors $\bm{u_1}$ and $\bm{u_2}$, if $\bm{v_1}$ and $\bm{v_2}$ are the two corresponding PPV's, then for any constants $\alpha_1, \alpha_2 \geq 0$ such that $\alpha_1 + \alpha_2 = 1$,
\begin{displaymath}
\alpha_1\bm{v_1} + \alpha_2\bm{v_2} =
(1-c)\bm{A}(\alpha_1\bm{v_1} + \alpha_2\bm{v_2}) +
c(\alpha_1\bm{u_1} + \alpha_2\bm{u_2})
\end{displaymath} (2)

Informally, the Linearity Theorem says that the solution to a linear combination of preference vectors $\bm{u_1}$ and $\bm{u_2}$ is the same linear combination of the corresponding PPV's $\bm{v_1}$ and $\bm{v_2}$. The proof is in the full version [7].

Let $\bm{x_1}, \dots, \bm{x_n}$ be the unit vectors in each dimension, so that for each $i$, $\bm{x_i}$ has value $1$ at entry $i$ and $0$ everywhere else. Let $\bm{r_i}$ be the PPV corresponding to $\bm{x_i}$. Each basis vector $\bm{r_i}$ gives the distribution of random surfers under the model that at each step, surfers teleport back to page $i$ with probability $c$. It can be thought of as representing page $i$'s view of the web, where entry $j$ of $\bm{r_i}$ is $j$'s importance in $i$'s view. Note that the global PageRank vector is $\frac{1}{n}(\bm{r_1} + \dots + \bm{r_n})$, the average of every page's view.

An arbitrary personalization vector $\bm{u}$ can be written as a weighted sum of the unit vectors $\bm{x_i}$:

\begin{displaymath}
\bm{u} = \sum_{i=1}^{n}{\alpha_i\bm{x_i}}
\end{displaymath} (3)

for some constants $\alpha_1, \dots, \alpha_n$. By the Linearity Theorem,
\begin{displaymath}
\bm{v} = \sum_{i=1}^{n}{\alpha_i\bm{r_i}}
\end{displaymath} (4)

is the corresponding PPV, expressed as a linear combination of the basis vectors $\bm{r_i}$.

Recall from Section 1 that preference sets (now preference vectors) are restricted to subsets of a set of hub pages $H$. If a basis vector for each hub page $p \in H$ (hereafter called a hub vector) were computed and stored, then any PPV corresponding to a preference set $P$ of size $k$ (a preference vector with $k$ nonzero entries) could be computed by adding up the $k$ corresponding hub vectors $\bm{r_p}$ with the appropriate weights $\alpha_p$.
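Given precomputed hub vectors, this combination step is just a weighted sum of sparse vectors. A brief illustrative sketch follows (the function and variable names are hypothetical, and the hub vectors are assumed to have been computed by some other means, e.g., the naive iteration of Section 2):

\begin{verbatim}
# Sketch: build the PPV for a preference vector with weight alpha_p on each
# hub page p, as the weighted sum of precomputed hub vectors r_p.
# 'hub_vectors' maps each hub page to a sparse vector (dict page -> score).

def ppv_from_hub_vectors(hub_vectors, weights):
    v = {}
    for p, alpha_p in weights.items():            # weights should sum to 1
        for q, score in hub_vectors[p].items():
            v[q] = v.get(q, 0.0) + alpha_p * score
    return v

# Example: personalize 70%/30% on two bookmarked hub pages 'a' and 'b':
#   ppv = ppv_from_hub_vectors(hub_vectors, {'a': 0.7, 'b': 0.3})
\end{verbatim}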

Each hub vector can be computed naively using the fixed-point computation in [11]. However, each fixed-point computation is expensive, requiring multiple scans of the web graph, and the computation time (as well as storage cost) grows linearly with the number of hub vectors $\vert H\vert$. In the next section, we enable a more scalable computation by constructing hub vectors from shared components.


4. Decomposition of Basis Vectors

In Section 3 we represented PPV's as a linear combination of $\vert H\vert$ hub vectors $\bm{r_p}$, one for each $p \in H$. Any PPV based on hub pages can be constructed quickly from the set of precomputed hub vectors, but computing and storing all hub vectors is impractical. To compute a large number of hub vectors efficiently, we further decompose them into partial vectors and the hubs skeleton, components from which hub vectors can be constructed quickly at query time. The representation of hub vectors as partial vectors and the hubs skeleton saves both computation time and storage due to sharing of components among hub vectors. Note, however, that depending on available resources and application requirements, hub vectors can be constructed offline as well. Thus ``query time'' can be thought of more generally as ``construction time''.

We compute one partial vector for each hub page $p$, which essentially encodes the part of the hub vector $\bm{r_p}$ unique to $p$, so that components shared among hub vectors are not computed and stored redundantly. The complement to the partial vectors is the hubs skeleton, which succinctly captures the interrelationships among hub vectors. It is the ``blueprint'' by which partial vectors are assembled to form a hub vector, as we will see in Section 4.3.

The mathematical tools used in the formalization of this decomposition are presented next.[footnote 3]


4.1 Inverse P-distance

To formalize the relationship among hub vectors, we relate the personalized PageRank scores represented by PPV's to inverse P-distances in the web graph, a concept based on expected-$f$ distances as introduced in [8].

Let $p, q \in V$. We define the inverse P-distance $r_p'(q)$ from $p$ to $q$ as

\begin{displaymath}
r_p'(q) = \sum_{t:p \rightsquigarrow q}{P[t]c(1-c)^{l(t)}}
\end{displaymath} (5)

where the summation is taken over all tours $t$ (paths that may contain cycles) starting at $p$ and ending at $q$, possibly touching $p$ or $q$ multiple times. For a tour $t = \langle w_1, \dots, w_k \rangle$, the length $l(t)$ is $k-1$, the number of edges in $t$. The term $P[t]$, which should be interpreted as ``the probability of traveling $t$'', is defined as $\prod_{i = 1}^{k-1}{\frac{1}{\vert O(w_i)\vert}}$, or $1$ if $l(t) = 0$. If there is no tour from $p$ to $q$, the summation is taken to be $0$.[footnote 4] Note that $r_p'(q)$ measures distances inversely: it is higher for nodes $q$ ``closer'' to $p$. As suggested by the notation and proven in the full version [7], $r_p'(q) = r_p(q)$ for all $p, q \in V$, so we will use $r_p(q)$ to denote both the inverse P-distance and the personalized PageRank score. Thus PageRank scores can be viewed as an inverse measure of distance.
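On a small graph, equation (5) can be evaluated by grouping tours by their length $\ell$: the total probability of the length-$\ell$ tours from $p$ ending at $q$ is given by $\ell$ steps of the plain random walk, and each length contributes a factor $c(1-c)^{\ell}$. The sketch below is illustrative only (names and the truncation length are assumptions); truncating at a maximum length $\ell_{\max}$ leaves an error of at most $(1-c)^{\ell_{\max}+1}$.

\begin{verbatim}
# Sketch: inverse P-distance r_p(q) = sum over tours t from p to q of
# P[t] c (1-c)^l(t), computed by grouping tours by length and truncating.

def inverse_p_distance(out, p, c=0.15, max_len=200):
    walk = {p: 1.0}              # Pr[a length-l walk from p ends at each page]
    r = {p: c}                   # the single length-0 tour contributes c at p
    for l in range(1, max_len + 1):
        nxt = {}
        for w, prob in walk.items():
            for q in out[w]:
                nxt[q] = nxt.get(q, 0.0) + prob / len(out[w])
        for q, prob in nxt.items():
            r[q] = r.get(q, 0.0) + c * (1 - c) ** l * prob
        walk = nxt
    return r                     # agrees with the PPV r_p up to truncation error

out = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
print(inverse_p_distance(out, 'a'))
\end{verbatim}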

Let $H \subseteq V$ be some nonempty set of pages. For $p, q \in V$, we define $r^H_p(q)$ as a restriction of $r_p(q)$ that considers only tours which pass through some page $h \in H$ in equation (5). That is, a page $h \in H$ must occur on $t$ somewhere other than the endpoints. Precisely, $r_p^H(q)$ is written as

\begin{displaymath}
r^H_p(q) = \sum_{t:p \rightsquigarrow H \rightsquigarrow q}{P[t]c(1-c)^{l(t)}}
\end{displaymath} (6)

where the notation $t:p \rightsquigarrow H \rightsquigarrow q$ reminds us that $t$ passes through some page in $H$. Note that $t$ must be of length at least $2$. In this paper, $H$ is always the set of hub pages, and $p$ is usually a hub page (until we discuss the web skeleton in Section 4.4.3).


4.2 Partial Vectors

Intuitively, $r_p^H(q)$, defined in equation (6), is the influence of $p$ on $q$ through $H$. In particular, if all paths from $p$ to $q$ pass through a page in $H$, then $H$ separates $p$ and $q$, and $r^H_p(q) = r_p(q)$. For well-chosen sets $H$ (discussed in Section 4.4.2), it will be true that $r_p(q) - r_p^H(q) = 0$ for many pages $p, q$. Our strategy is to take advantage of this property by breaking $\bm{r_p}$ into two components: $(\bm{r_p-r_p^H})$ and $\bm{r_p^H}$, using the equation
\begin{displaymath}
\bm{r_p} = (\bm{r_p - r_p^H}) + \bm{r_p^H}
\end{displaymath} (7)

We first precompute and store the partial vector $(\bm{r_p-r_p^H})$ instead of the full hub vector $\bm{r_p}$. Partial vectors are cheaper to compute and store than full hub vectors, assuming they are represented as a list of their nonzero entries. Moreover, the size of each partial vector decreases as $\vert H\vert$ increases, making this approach particularly scalable. We then add $\bm{r_p^H}$ back at query time to compute the full hub vector. However, computing and storing $\bm{r_p^H}$ explicitly could be as expensive as $\bm{r_p}$ itself. In the next section we show how to encode $\bm{r_p^H}$ so it can be computed and stored efficiently.


4.3 Hubs Skeleton

Let us briefly review where we are: In Section 3 we represented PPV's as linear combinations of hub vectors $\bm{r_p}$, one for each $p \in H$, so that we can construct PPV's quickly at query time if we have precomputed the hub vectors, a relatively small subset of PPV's. To encode hub vectors efficiently, in Section 4.2 we said that instead of full hub vectors $\bm{r_p}$, we first compute and store only partial vectors $(\bm{r_p}-\bm{r_p^H})$, which intuitively account only for paths that do not pass through a page of $H$ (i.e., the distribution is ``blocked'' by $H$). Computing and storing the difference vector $\bm{r_p^H}$ efficiently is the topic of this section.

It turns out that the vector $\bm{r_p^H}$ can be expressed in terms of the partial vectors $(\bm{r_h} -\bm{r_h^H})$, for $h \in H$, as shown by the following theorem. Recall from Section 3 that $\bm{x_h}$ has value $1$ at $h$ and $0$ everywhere else.

Theorem 2 (Hubs)   For any $p \in V$, $H \subseteq V$,
\begin{displaymath}
\bm{r^H_p} = \frac{1}{c}\sum_{h \in H} {(r_p(h)-c \cdot x_p(h))
\left(\bm{r_h}-\bm{r_h^H}-c\bm{x_h} \right)}
\end{displaymath} (8)

In terms of inverse P-distances (Section 4.1), the Hubs Theorem says roughly that the distance from page $p$ to any page $q \in V$ through $H$ is the distance $r_p(h)$ from $p$ to each $h \in H$ times the distance $r_h(q)$ from $h$ to $q$, correcting for the paths among hubs by $r_h^H(q)$. The terms $c \cdot x_p(h)$ and $c\bm{x_h}$ deal with the special cases when $p$ or $q$ is itself in $H$. The proof, which is quite involved, is in the full version [7].

The quantities $\left(\bm{r_h}-\bm{r_h^H}\right)$ appearing on the right-hand side of (8) are exactly the partial vectors discussed in Section 4.2. Suppose we have computed $r_p(H) = \{ (h, r_p(h)) \,\vert\, h \in H\}$ for a hub page $p$. Substituting the Hubs Theorem into equation (7), we have the following Hubs Equation for constructing the hub vector $\bm{r_p}$ from partial vectors:

\begin{displaymath}
\bm{r_p} = (\bm{r_p - r_p^H}) \, +
\frac{1}{c}\sum_{h \in H}{(r_p(h)-c \cdot x_p(h))\left[\left(\bm{r_h}-\bm{r_h^H}\right)-c\bm{x_h}\right]}
\end{displaymath} (9)

This equation is central to the construction of hub vectors from partial vectors.

The set $r_p(H)$ has size at most $\vert H\vert$, much smaller than the full hub vector $\bm{r_p}$, which can have up to $n$ nonzero entries. Furthermore, the contribution of each entry $r_p(h)$ to the sum is no greater than $r_p(h)$ (and usually much smaller), so that small values of $r_p(h)$ can be omitted with minimal loss of precision (Section 6). The set $S = \{r_p(H) \,\vert\, p \in H\}$ forms the hubs skeleton, giving the interrelationships among partial vectors.
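The Hubs Equation translates directly into a sparse assembly routine. The sketch below assumes (with hypothetical names) that the partial vectors and the hubs skeleton are available as dictionaries; per the observation above, entries with negligible weight $r_p(h)$ can simply be skipped.

\begin{verbatim}
# Sketch of the Hubs Equation (9): assemble the hub vector r_p from
#   partial[h]  -- the partial vector (r_h - r_h^H), a dict page -> score
#   skeleton[p] -- the set r_p(H), a dict hub page h -> r_p(h)

def assemble_hub_vector(p, partial, skeleton, c=0.15):
    r_p = dict(partial[p])                          # start with (r_p - r_p^H)
    for h, r_p_h in skeleton[p].items():
        weight = r_p_h - (c if h == p else 0.0)     # r_p(h) - c * x_p(h)
        if weight <= 0.0:
            continue                                # small entries may be dropped
        for q, score in partial[h].items():         # (weight/c) * (r_h - r_h^H)
            r_p[q] = r_p.get(q, 0.0) + (weight / c) * score
        r_p[h] = r_p.get(h, 0.0) - weight           # the -(weight/c) * c*x_h term
    return r_p
\end{verbatim}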

An intuitive view of the encoding and construction suggested by the Hubs Equation (9) is shown in Figure 1.

Figure 1: Intuitive view of the construction of hub vectors from partial vectors and the hubs skeleton.
\scalebox{0.5}{\includegraphics{fig2}}
At the top, each partial vector $(\bm{r_h-r_h^H})$, including $(\bm{r_p-r_p^H})$, is depicted as a notched triangle labeled $h$ at the tip. The triangle can be thought of as representing paths starting at $h$, although, more accurately, it represents the distribution of importance scores computed based on the paths, as discussed in Section 4.1. A notch in the triangle shows where the computation of a partial vector ``stopped'' at another hub page. At the center, a part $r_p(H)$ of the hubs skeleton is depicted as a tree so the ``assembly'' of the hub vector can be visualized. The hub vector is constructed by logically assembling the partial vectors using the corresponding weights in the hubs skeleton, as shown at the bottom.


4.4 Discussion


4.4.1 Summary

In summary, hub vectors are building blocks for PPV's corresponding to preference vectors based on hub pages. Partial vectors, together with the hubs skeleton, are building blocks for hub vectors. Transitively, partial vectors and the hubs skeleton are building blocks for PPV's: they can be used to construct PPV's without first materializing hub vectors as an intermediate step (Section 5.4). Note that for preference vectors based on multiple hub pages, constructing the corresponding PPV from partial vectors directly can result in significant savings versus constructing from hub vectors, since partial vectors are shared across multiple hub vectors.


4.4.2 Choice of $H$

So far we have made no assumptions about the set of hub pages $H$. Not surprisingly, the choice of hub pages can have a significant impact on performance, depending on the location of hub pages within the overall graph structure. In particular, the size of partial vectors is smaller when pages in $H$ have higher PageRank, since high-PageRank pages are on average close to other pages in terms of inverse P-distance (Section 4.1), and the size of the partial vectors is related to the inverse P-distance between hub pages and other pages according to the Hubs Theorem. Our intuition is that high-PageRank pages are generally more interesting for personalization anyway, but in cases where the intended hub pages do not have high PageRank, it may be beneficial to include some high-PageRank pages in $H$ to improve performance. We ran experiments confirming that the size of partial vectors is much smaller using high-PageRank pages as hubs than using random pages.


4.4.3 Web Skeleton

The techniques used in the construction of hub vectors can be extended to enable at least approximate personalization on arbitrary preference vectors that are not necessarily based on $H$. Suppose we want to personalize on a page $p \notin H$. The Hubs Equation can be used to construct $\bm{r_p^H}$ from partial vectors, given that we have computed $r_p(H)$. As discussed in Section 4.3, the cost of computing and storing $r_p(H)$ is orders of magnitude less than $\bm{r_p}$. Though $\bm{r_p^H}$ is only an approximation to $\bm{r_p}$, it may still capture significant personalization information for a properly-chosen hub set $H$, as $\bm{r_p^H}$ can be thought of as a ``projection'' of $\bm{r_p}$ onto $H$. For example, if $H$ contains pages from Open Directory, $\bm{r_p^H}$ can capture information about the broad topic of $\bm{r_p}$. Exploring the utility of the web skeleton $W = \{r_p(H) \,\vert\, p \in V\}$ is an area of future work.


5. Computation

In Section 4 we presented a way to construct hub vectors from partial vectors $(\bm{r_p}-\bm{r_p^H})$, for $p \in H$, and the hubs skeleton $S = \{r_p(H) \,\vert\, p \in H\}$. We also discussed the web skeleton $W = \{r_p(H) \,\vert\, p \in V\}$. Computing these partial quantities naively using a fixed-point iteration [11] for each $p$ would scale poorly with the number of hub pages. Here we present scalable algorithms that compute these quantities efficiently by using dynamic programming to leverage the interrelationships among them. We also show how PPV's can be constructed from partial vectors and the hubs skeleton at query time. All of our algorithms have the property that they can be stopped at any time (e.g., when resources are depleted), so that the current ``best results'' can be used as an approximation, or the computation can be resumed later for increased precision if resources permit.

We begin in Section 5.1 by presenting a theorem underlying all of the algorithms presented (as well as the connection between PageRank and inverse P-distance, as shown in the full version [7]). In Section 5.2, we present three algorithms, based on this theorem, for computing general basis vectors. The algorithms in Section 5.2 are not meant to be deployed, but are used as foundations for the algorithms in Section 5.3 for computing partial quantities. Section 5.4 discusses the construction of PPV's from partial vectors and the hubs skeleton.


5.1 Decomposition Theorem

Recall the random surfer model of Section 1, instantiated for preference vector $\bm{u} = \bm{x_p}$ (for page $p$'s view of the web). At each step, a surfer $s$ teleports to page $p$ with some probability $c$. If $s$ is at $p$, then at the next step, $s$ with probability $1-c$ will be at a random out-neighbor of $p$. That is, a fraction $(1-c)\frac{1}{\vert O(p)\vert}$ of the time, surfer $s$ will be at any given out-neighbor of $p$ one step after teleporting to $p$. This behavior is strikingly similar to the model instantiated for preference vector $\bm{u'} = \frac{1}{\vert O(p)\vert}\sum_{i = 1}^{\vert O(p)\vert}{\bm{x_{O_i(p)}}}$, where surfers teleport directly to each $O_i(p)$ with equal probability $\frac{1}{\vert O(p)\vert}$. The similarity is formalized by the following theorem.

Theorem 3 (Decomposition)   For any $p \in V$,
\begin{displaymath}
\bm{r_p} = \frac{(1-c)}{\vert O(p)\vert} \sum_{i=1}^{\vert O(p)\vert}\bm{r_{O_i(p)}} + c\bm{x_p}
\end{displaymath} (10)

The Decomposition Theorem says that the basis vector $\bm{r_p}$ for $p$ is an average of the basis vectors $\bm{r_{O_i(p)}}$ for its out-neighbors, plus a compensation factor $c\bm{x_p}$. The proof is in the full version [7].

The Decomposition Theorem gives another way to think about PPV's. It says that $p$'s view of the web ($\bm{r_p}$) is the average of the views of its out-neighbors, but with extra importance given to $p$ itself. That is, pages important in $p$'s view are either $p$ itself, or pages important in the view of $p$'s out-neighbors, which are themselves ``endorsed'' by $p$. In fact, this recursive intuition yields an equivalent way of formalizing personalized PageRank scoring: basis vectors can be defined as vectors satisfying the Decomposition Theorem.
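The theorem is easy to check numerically on a toy graph. The sketch below is an illustrative verification, not part of the paper's algorithms; it reuses the hypothetical personalized_pagerank sketch given after equation (1) to compute the basis vectors.

\begin{verbatim}
# Sketch: verify equation (10) numerically on a toy graph,
#   r_p = (1-c)/|O(p)| * sum_i r_{O_i(p)} + c * x_p,
# using the personalized_pagerank sketch from Section 2 for basis vectors.

c = 0.15
out = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
r = {p: personalized_pagerank(out, {p: 1.0}, c=c) for p in out}

p = 'a'
lhs = r[p]
rhs = {p: c}                                        # the c * x_p term
for q in out[p]:
    for page, score in r[q].items():
        rhs[page] = rhs.get(page, 0.0) + (1 - c) / len(out[p]) * score

pages = set(lhs) | set(rhs)
print(all(abs(lhs.get(x, 0.0) - rhs.get(x, 0.0)) < 1e-8 for x in pages))
\end{verbatim}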

While the Decomposition Theorem identifies relationships among basis vectors, a division of the computation of a basis vector $\bm{r_p}$ into related subproblems for dynamic programming is not inherent in the relationships. For example, it is possible to compute some basis vectors first and then to compute the rest using the former as solved subproblems. However, the presence of cycles in the graph makes this approach ineffective. Instead, our approach is to consider as a subproblem the computation of a vector to less precision. For example, having computed $\bm{r_{O_i(p)}}$ to a certain precision, we can use the Decomposition Theorem to combine the $\bm{r_{O_i(p)}}$'s to compute $\bm{r_p}$ to greater precision. This approach has the advantage that precision need not be fixed in advance: the process can be stopped at any time for the current best answer.


5.2 Algorithms for Computing Basis Vectors

We present three algorithms in the general context of computing full basis vectors. These algorithms are presented primarily to develop our algorithms for computing partial quantities, presented in Section 5.3. All three algorithms are iterative fixed-point computations that maintain a set of intermediate results $(\bm{D_k[*], E_k[*]})$. For each $p$, $\bm{D_k[p]}$ is a lower-approximation of $\bm{r_p}$ on iteration $k$, i.e., $D_k[p](q) \leq r_p(q)$ for all $q \in V$. We build solutions $\bm{D_k[p]}$ ($k = 0, 1, 2, \dots$) that are successively better approximations to $\bm{r_p}$, and simultaneously compute the error components $\bm{E_k[p]}$, where $\bm{E_k[p]}$ is the ``projection'' of the vector $(\bm{r_p} - \bm{D_k[p]})$ onto the (actual) basis vectors. That is, we maintain the invariant that for all $k \geq 0$ and all $p \in V$,
\begin{displaymath}
\bm{D_k[p]} + \sum_{q \in V}{E_k[p](q) \bm{r_q}} = \bm{r_p}
\end{displaymath} (11)

Thus, $\bm{D_k[p]}$ is a lower-approximation of $\bm{r_p}$ with error

\begin{displaymath}\left\vert \sum_{q\in V}{E_k[p](q)\bm{r_q}} \right\vert =
\left\vert\bm{E_k[p]}\right\vert\end{displaymath}

We begin with $\bm{D_0[p]} = \bm{0}$ and $\bm{E_0[p]} = \bm{x_p}$, so that logically, the approximation is initially $\bm{0}$ and the error is $\bm{r_p}$. To store $\bm{E_k[p]}$ and $\bm{D_k[p]}$ efficiently, we can represent them in an implementation as a list of their nonzero entries. While all three algorithms have in common the use of these intermediate results, they differ in how they use the Decomposition Theorem to refine intermediate results on successive iterations.

It is important to note that the algorithms presented in this section and their derivatives in Section 5.3 compute vectors to arbitrary precision; they are not approximations. In practice, the precision desired may vary depending on the application. Our focus is on algorithms that are efficient and scalable with the number of hub vectors, regardless of the precision to which vectors are computed.


5.2.1 Basic Dynamic Programming Algorithm

In the basic dynamic programming algorithm, a new basis vector for each page $p$ is computed on each iteration using the vectors computed for $p$'s out-neighbors on the previous iteration, via the Decomposition Theorem. On iteration $k$, we derive $(\bm{D_{k+1}[p]}, \bm{E_{k+1}[p]})$ from $(\bm{D_k[p]}, \bm{E_k[p]})$ using the equations:
\begin{displaymath}
\bm{D_{k+1}[p]} = \frac{1-c}{\vert O(p)\vert} \sum\limits_{i=1}^{\vert O(p)\vert}{\bm{D_{k}[O_i(p)]}} + c\bm{x_p}
\end{displaymath} (12)
\begin{displaymath}
\bm{E_{k+1}[p]} = \frac{1-c}{\vert O(p)\vert} \sum\limits_{i=1}^{\vert O(p)\vert}{\bm{E_{k}[O_i(p)]}}
\end{displaymath} (13)

A proof of the algorithm's correctness is given in the full version [7], where the error $\vert\bm{E_k[p]}\vert$ is shown to be reduced by a factor of $1-c$ on each iteration.

Note that although the $\bm{E_k[*]}$ values help us to see the correctness of the algorithm, they are not used here in the computation of $\bm{D_k[*]}$ and can be omitted in an implementation (although they will be used to compute partial quantities in Section 5.3). The sizes of $\bm{D_k[p]}$ and $\bm{E_k[p]}$ grow with the number of iterations, and in the limit they can be up to the size of $\bm{r_p}$, which is the number of pages reachable from $p$. Intermediate scores $(\bm{D_k[*]}, \bm{E_k[*]})$ will likely be much larger than available main memory, and in an implementation $(\bm{D_k[*]}, \bm{E_k[*]})$ could be read off disk and $(\bm{D_{k+1}[*]}, \bm{E_{k+1}[*]})$ written to disk on each iteration. When the data for one iteration has been computed, data from the previous iteration may be deleted. Specific details of our implementation are discussed in Section 6.
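A minimal in-memory sketch of one iteration of equations (12) and (13) follows (names are hypothetical; the disk-based partitioning used in our implementation is described in Section 6):

\begin{verbatim}
# Sketch of one iteration of the basic dynamic programming algorithm,
# equations (12) and (13). D[p] and E[p] are sparse dicts (page -> score).

def basic_dp_iteration(out, D, E, c=0.15):
    D_next, E_next = {}, {}
    for p in out:
        d, e = {p: c}, {}                            # the c * x_p term
        for q in out[p]:
            for v, s in D[q].items():
                d[v] = d.get(v, 0.0) + (1 - c) / len(out[p]) * s
            for v, s in E[q].items():
                e[v] = e.get(v, 0.0) + (1 - c) / len(out[p]) * s
        D_next[p], E_next[p] = d, e
    return D_next, E_next

# Initialization D_0[p] = 0, E_0[p] = x_p; the error shrinks by (1-c) per step.
out = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
D = {p: {} for p in out}
E = {p: {p: 1.0} for p in out}
for _ in range(20):
    D, E = basic_dp_iteration(out, D, E)
print(D['a'])                                        # approximates the basis vector r_a
\end{verbatim}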


5.2.2 Selective Expansion Algorithm

The selective expansion algorithm is essentially a version of the naive algorithm that can readily be modified to compute partial vectors, as we will see in Section 5.3.1.

We derive $(\bm{D_{k+1}[p], E_{k+1}[p]})$ by ``distributing'' the error at each page $q$ (that is, $E_k[p](q)$) to its out-neighbors via the Decomposition Theorem. Precisely, we compute the iteration-$(k+1)$ results using the equations:

\begin{displaymath}
\bm{D_{k+1}[p]} = \bm{D_k[p]} + \sum_{q \in Q_k(p)}{c \cdot E_k[p](q)\bm{x_q}}
\end{displaymath} (14)
\begin{displaymath}
\bm{E_{k+1}[p]} = \bm{E_k[p]} - \sum_{q \in Q_k(p)}{E_k[p](q)\bm{x_q}} +
\sum_{q \in Q_k(p)}{\frac{1-c}{\vert O(q)\vert}\sum_{i = 1}^{\vert O(q)\vert}{E_k[p](q)\bm{x_{O_i(q)}}}}
\end{displaymath} (15)

for a subset $Q_k(p) \subseteq V$. If $Q_k(p) = V$ for all $k$, then the error is reduced by a factor of $1-c$ on each iteration, as in the basic dynamic programming algorithm. However, it is often useful to choose a selected subset of $V$ as $Q_k(p)$. For example, if $Q_k(p)$ contains the $m$ pages $q$ for which the error $E_k[p](q)$ is highest, then this top-$m$ scheme limits the number of expansions and delays the growth in size of the intermediate results while still reducing much of the error. In Section 5.3.1, we will compute the hub vectors by choosing $Q_k(p) = H$. The correctness of selective expansion is proven in the full version [7].
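A sketch of one selective expansion step for a single page $p$, with the top-$m$ choice of $Q_k(p)$, might look as follows (illustrative names; D and E hold the intermediate results $\bm{D_k[p]}$ and $\bm{E_k[p]}$ as sparse dicts):

\begin{verbatim}
# Sketch of one selective expansion step, equations (14) and (15), for a
# single page p. Q is the set of pages whose error is expanded this step.

def selective_expansion_step(out, D, E, Q, c=0.15):
    D_next, E_next = dict(D), dict(E)
    for q in list(E):                       # iterate over iteration-k error entries
        err = E[q]
        if q not in Q or err == 0.0:
            continue
        D_next[q] = D_next.get(q, 0.0) + c * err          # absorb c * E_k[p](q)
        E_next[q] = E_next.get(q, 0.0) - err              # remove the expanded error
        for w in out[q]:                                  # redistribute the rest
            E_next[w] = E_next.get(w, 0.0) + (1 - c) * err / len(out[q])
    return D_next, E_next

def top_m(E, m):                            # the top-m scheme: largest-error pages
    return set(sorted(E, key=E.get, reverse=True)[:m])

out = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
D, E = {}, {'a': 1.0}                       # D_0 = 0, E_0 = x_a
for _ in range(30):
    D, E = selective_expansion_step(out, D, E, Q=set(out))   # Q_k(p) = V
print(D)                                    # approximates r_a
\end{verbatim}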


5.2.3 Repeated Squaring Algorithm

The repeated squaring algorithm is similar to the selective expansion algorithm, except that instead of extending $(\bm{D_{k+1}[*]}, \bm{E_{k+1}[*]})$ one step using equations (14) and (15), we compute what are essentially iteration-$2k$ results using the equations
\begin{displaymath}
\bm{D_{2k}[p]} = \bm{D_k[p]} + \sum_{q \in Q_k(p)}{E_k[p](q)\bm{D_k[q]}}
\end{displaymath} (16)
\begin{displaymath}
\bm{E_{2k}[p]} = \bm{E_k[p]} - \sum_{q \in Q_k(p)}{E_k[p](q)\bm{x_q}} +
\sum_{q \in Q_k(p)}{E_k[p](q)\bm{E_k[q]}}
\end{displaymath} (17)

where $Q_k(p) \subseteq V$. For now we can assume that $Q_k(p) = V$ for all $p$; we will set $Q_k(p) = H$ to compute the hubs skeleton in Section 5.3.2. The correctness of these equations is proven in the full version [7], where it is shown that repeated squaring reduces the error much faster than the basic dynamic programming or selective expansion algorithms. If $Q_k(p) = V$, the error is squared on each iteration, as equation (17) reduces to:
\begin{displaymath}
\bm{E_{2k}[p]} = \sum_{q \in V}{E_k[p](q)\bm{E_k[q]}}
\end{displaymath} (18)

As an alternative to taking $Q_k(p) = V$, we can also use the top-$m$ scheme of Section 5.2.2.
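For completeness, one repeated squaring step per equations (16) and (17) can be sketched as follows (hypothetical names; D[p] and E[p] hold iteration-$k$ results for every page, and only entries for pages in $Q$ are expanded):

\begin{verbatim}
# Sketch of one repeated squaring step, equations (16) and (17), producing
# iteration-2k results for page p from iteration-k results D[*], E[*].

def repeated_squaring_step(p, D, E, Q):
    D2, E2 = dict(D[p]), dict(E[p])
    for q, err in E[p].items():
        if q not in Q or err == 0.0:
            continue
        E2[q] = E2.get(q, 0.0) - err                 # remove expanded error at q
        for v, s in D[q].items():                    # add E_k[p](q) * D_k[q]
            D2[v] = D2.get(v, 0.0) + err * s
        for v, s in E[q].items():                    # add E_k[p](q) * E_k[q]
            E2[v] = E2.get(v, 0.0) + err * s
    return D2, E2

# With Q = V the error is squared each step; Section 5.3.2 uses Q = H and
# drops non-hub entries so that only hub scores are carried along.
\end{verbatim}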

Note that while all three algorithms presented can be used to compute the set of all basis vectors, they differ in their requirements on the computation of other vectors when computing $\bm{r_p}$: the basic dynamic programming algorithm requires the vectors of out-neighbors of $p$ to be computed as well, repeated squaring requires results $(\bm{D_k[q], E_k[q]})$ to be computed for $q$ such that $E_k[p](q) > 0$, and selective expansion computes $\bm{r_p}$ independently.


5.3 Computing Partial Quantities

In Section 5.2 we presented iterative algorithms for computing full basis vectors to arbitrary precision. Here we present modifications to these algorithms to compute the partial quantities:
$\bullet$ Partial vectors $(\bm{r_p-r_p^H})$, $p \in H$.
$\bullet$ The hubs skeleton $S = \{r_p(H) \,\vert\, p \in H\}$ (which can be computed more efficiently by itself than as part of the entire web skeleton).
$\bullet$ The web skeleton $W = \{r_p(H) \,\vert\, p \in V\}$.
Each partial quantity can be computed in time no greater than its size, which is far less than the size of the hub vectors.


5.3.1 Partial Vectors

Partial vectors can be computed using a simple specialization of the selective expansion algorithm (Section 5.2.2): we take $Q_0(p) = V$ and $Q_k(p) = V - H$ for $k > 0$, for all $p \in V$. That is, we never ``expand'' hub pages after the first step, so tours passing through a hub page $h \in H$ are never considered. Under this choice of $Q_k(p)$, $\bm{D_k[p]} + c\bm{E_k[p]}$ converges to $(\bm{r_p}-\bm{r_p^H})$ for all $p \in V$. Of course, only the intermediate results $(\bm{D_k[p]}, \bm{E_k[p]})$ for $p \in H$ should be computed. A proof is presented in the full version [7].
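Concretely, the only change relative to selective expansion is the choice of $Q_k(p)$ and the final combination $\bm{D_k[p]} + c\bm{E_k[p]}$. A brief sketch, reusing the hypothetical selective_expansion_step from the example in Section 5.2.2 and an in-memory representation (our actual implementation works over disk-resident partitions, Section 6):

\begin{verbatim}
# Sketch: compute the partial vector (r_p - r_p^H) for a page p by running
# selective expansion with Q_0 = V and Q_k = V - H for k > 0, then taking
# D_k[p] + c * E_k[p]. H is a set of hub pages.

def partial_vector(out, p, H, c=0.15, iters=30):
    D, E = {}, {p: 1.0}                              # D_0 = 0, E_0 = x_p
    all_pages = set(out)
    for k in range(iters):
        Q = all_pages if k == 0 else all_pages - H   # never expand hubs after step 0
        D, E = selective_expansion_step(out, D, E, Q, c)
    combined = {q: D.get(q, 0.0) + c * E.get(q, 0.0) for q in set(D) | set(E)}
    return {q: s for q, s in combined.items() if s > 0.0}
\end{verbatim}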

This algorithm makes it clear why using high-PageRank pages as hub pages improves performance: from a page $p$ we expect to reach a high-PageRank page $q$ sooner than a random page, so the expansion from $p$ will stop sooner and result in a shorter partial vector.


5.3.2 Hubs Skeleton

While the hubs skeleton is a subset of the complete web skeleton and can be computed as such using the technique to be presented in Section 5.3.3, it can be computed much faster by itself if we are not interested in the entire web skeleton, or if higher precision is desired for the hubs skeleton than can be computed for the entire web skeleton.

We use a specialization of the repeated squaring algorithm (Section 5.2.3) to compute the hubs skeleton, using the intermediate results from the computation of partial vectors. Suppose $(\bm{D_k[p]}, \bm{E_k[p]})$, for $k \geq 1$, have been computed by the algorithm of Section 5.3.1, so that $\sum_{q \notin H}{E_k[p](q)} < \epsilon$, for some error $\epsilon$. We apply the repeated squaring algorithm on these results using $Q_k(p) = H$ for all successive iterations. As shown in the full version [7], after $i$ iterations of repeated squaring, the total error $\vert\bm{E_i[p]}\vert$ is bounded by $(1-c)^{2^i} + \epsilon /c$. Thus, by varying $k$ and $i$, $r_p(H)$ can be computed to arbitrary precision.

Notice that only the intermediate results $(\bm{D_k[h], E_k[h]})$ for $h \in H$ are ever needed to update scores for $\bm{D_k[p]}$, and of the former, only the entries $D_k[h](q), E_k[h](q)$, for $q \in H$, are used to compute $D_k[p](q)$. Since we are only interested in the hub scores $D_k[p](q)$, we can simply drop all non-hub entries from the intermediate results. The running time and storage would then depend only on the size of $r_p(H)$ and not on the length of the entire hub vectors $\bm{r_p}$. If the restricted intermediate results fit in main memory, it is possible to defer the computation of the hubs skeleton to query time.


5.3.3 Web Skeleton

To compute the entire web skeleton, we modify the basic dynamic programming algorithm (Section 5.2.1) to compute only the hub scores $r_p(H)$, with corresponding savings in time and memory usage. We restrict the computation by eliminating entries $q \notin H$ from the intermediate results $(\bm{D_k[p]}, \bm{E_k[p]})$, similar to the technique used in computing the hubs skeleton.

The justification for this modification is that the hub score $D_{k+1}[p](h)$ is affected only by the hub scores $D_{k}[*](h)$ of the previous iteration, so that $D_{k+1}[p](h)$ in the modified algorithm is equal to that in the basic algorithm. Since $\vert H\vert$ is likely to be orders of magnitude less than $n$, the size of the intermediate results is reduced significantly.


5.4 Construction of PPV's

Finally, let us see how a PPV for preference vector $\bm{u}$ can be constructed directly from partial vectors and the hubs skeleton using the Hubs Equation. (Construction of a single hub vector is a specialization of the algorithm outlined here.) Let $\bm{u} = \alpha_1 \bm{x_{p_1}} + \dots + \alpha_z \bm{x_{p_z}}$ be a preference vector, where $p_i \in H$ for $1 \leq i \leq z$. Let $Q \subseteq H$, and let
\begin{displaymath}
r_u(h) = \sum_{i=1}^{z}\alpha_i \left(r_{p_i}(h)-c\cdot x_{p_i}(h)\right)
\end{displaymath} (19)

which can be computed from the hubs skeleton. Then the PPV $\bm{v}$ for $\bm{u}$ can be constructed as
\begin{displaymath}
\bm{v} = \sum_{i=1}^{z}{\alpha_i (\bm{r_{p_i}-r_{p_i}^H})} +
\frac{1}{c}\sum_{h \in Q,\, r_u(h) > 0}{r_u(h)\left[(\bm{r_h-r_h^H})-c\bm{x_h}\right]}
\end{displaymath} (20)

Both the terms $(\bm{r_{p_i}-r_{p_i}^H})$ and $(\bm{r_h-r_h^H})$ are partial vectors, which we assume have been precomputed. The term $c\bm{x_h}$ represents a simple subtraction from $(\bm{r_h-r_h^H})$. If $Q = H$, then equation (20) represents a full construction of $\bm{v}$. However, for some applications, it may suffice to use only parts of the hubs skeleton to compute $\bm{v}$ to less precision. For example, we can take $Q$ to be the $m$ hubs $h$ for which $r_u(h)$ is highest. Experimentation with this scheme is discussed in Section 6.3. Alternatively, the result can be improved incrementally (e.g., as time permits) by using a small subset $Q$ each time and accumulating the results.
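Equations (19) and (20) can be combined into a single construction routine; the sketch below (hypothetical names) assumes the partial vectors and hubs skeleton are held in memory as dictionaries and implements the top-$m$ option just described.

\begin{verbatim}
# Sketch of equations (19) and (20): construct the PPV for a preference
# vector u = sum_i alpha_i x_{p_i} (each p_i in H) from partial vectors and
# the hubs skeleton, optionally using only the m hubs with largest r_u(h).

def construct_ppv(weights, partial, skeleton, c=0.15, m=None):
    # Equation (19): r_u(h) = sum_i alpha_i (r_{p_i}(h) - c * x_{p_i}(h)).
    r_u = {}
    for p_i, alpha in weights.items():
        for h, score in skeleton[p_i].items():
            r_u[h] = r_u.get(h, 0.0) + alpha * (score - (c if h == p_i else 0.0))
    Q = sorted((h for h in r_u if r_u[h] > 0.0), key=r_u.get, reverse=True)
    if m is not None:
        Q = Q[:m]                                   # use only part of the skeleton
    # Equation (20): weighted partial vectors plus skeleton-weighted corrections.
    v = {}
    for p_i, alpha in weights.items():
        for q, s in partial[p_i].items():
            v[q] = v.get(q, 0.0) + alpha * s
    for h in Q:
        w = r_u[h] / c
        for q, s in partial[h].items():
            v[q] = v.get(q, 0.0) + w * s
        v[h] = v.get(h, 0.0) - r_u[h]               # the -(r_u(h)/c) * c*x_h term
    return v
\end{verbatim}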


6. Experiments

We performed experiments using real web data from Stanford's WebBase [6], a crawl of the web containing 120 million pages. Since the iterative computation of PageRank is unaffected by leaf pages (i.e., those with no out-neighbors), they can be removed from the graph and added back in after the computation [11]. After removing leaf pages, the graph consisted of 80 million pages.

Both the web graph and the intermediate results $(\bm{D_k[*], E_k[*]})$ were too large to fit in main memory, and a partitioning strategy, based on that presented in [4], was used to divide the computation into portions that can be carried out in memory. Specifically, the set of pages $V$ was partitioned into $k$ arbitrary sets $P_1, \dots, P_k$ of equal size ($k = 10$ in our experiments). The web graph, represented as an edge-list $E$, was partitioned into $k$ chunks $E_i$ ($1 \leq i \leq k$), where $E_i$ contains all edges $\langle p, q \rangle$ for which $p \in P_i$. Intermediate results $\bm{D_k[p]}$ and $\bm{E_k[p]}$ were represented together as a list $\bm{L_k[p]} = \langle (q_1, d_1, e_1), (q_2, d_2, e_2), \dots \rangle$ where $D_k[p](q_z) = d_z$ and $E_k[p](q_z) = e_z$, for $z = 1, 2, \dots$. Only pages $q_z$ for which either $d_z > 0$ or $e_z > 0$ were included. The set of intermediate results $\bm{L_k[*]}$ was partitioned into $k^2$ chunks $\bm{L_k^{i,j}[*]}$, so that $\bm{L_k^{i,j}[p]}$ contains triples $(q_z, d_z, e_z)$ of $\bm{L_k[p]}$ for which $p \in P_i$ and $q_z \in P_j$. In each of the algorithms for computing partial quantities, only a single column $\bm{L_k^{*,j}[*]}$ was kept in memory at any one time, and part of the next-iteration results $\bm{L_{k+1}[*]}$ were computed by successively reading in individual blocks of the graph or intermediate results as appropriate. Each iteration requires only one linear scan of the intermediate results and web graph, except for repeated squaring, which does not use the web graph explicitly.


6.1 Computing Partial Vectors

For comparison, we computed both (full) hub vectors and partial vectors for various sizes of $H$, using the selective expansion algorithm with $Q_k(p) = V$ (full hub vectors) and $Q_k(p) = V - H$ (partial vectors). As discussed in Section 4.4.2, we found the partial vectors approach to be much more effective when $H$ contains high-PageRank pages rather than random pages. In our experiments $H$ ranged from the top $1000$ to top $100,000$ pages with the highest PageRank. The constant $c$ was set to $0.15$.

To evaluate the performance and scalability of our strategy independently of implementation and platform, we focus on the size of the results rather than computation time, which is linear in the size of the results. Because of the number of trials we had to perform and limitations on resources, we computed results only up to 6 iterations, for $\vert H\vert$ up to $100,000$.

Figure 2: Average Vector Size vs. Number of Hubs
\scalebox{0.42}[0.4]{\includegraphics{numHubs}}
Figure 2 plots the average size of (full) hub vectors and partial vectors (recall that size is the number of nonzero entries), as computed after 6 iterations of the selective expansion algorithm, which for computing full hub vectors is equivalent to the basic dynamic programming algorithm. Note that the x-axis plots $\vert H\vert$ in logarithmic scale.

Experiments were run using a 1.4 gigahertz CPU on a machine with 3.5 gigabytes of memory. For $\vert H\vert = 50,000$, the computation of full hub vectors took about $2.8$ seconds per vector, and about $0.33$ seconds for each partial vector. We were unable to compute full hub vectors for $\vert H\vert = 100,000$ due to the time required, although the average vector size is expected not to vary significantly with $\vert H\vert$ for full hub vectors. In Figure 2 we see that the reduction in size from using our technique becomes more significant as $\vert H\vert$ increases, suggesting that our technique scales well with $\vert H\vert$.


6.2 Computing the Hubs Skeleton

We computed the hubs skeleton for $\vert H\vert = 10,000$ by running the selective expansion algorithm for $6$ iterations using $Q_k(p) = H$, and then running the repeated squaring algorithm for $10$ iterations (Section 5.3.2), where $Q_k(p)$ is chosen to be the top 50 entries under the top-$m$ scheme (Section 5.2.2). The average size of the hubs skeleton is $9021$ entries. Each iteration of the repeated squaring algorithm took about an hour, a cost that depends only on $\vert H\vert$ and is constant with respect to the precision to which the partial vectors are computed.


6.3 Constructing Hub Vectors from Partial Vectors

Next we measured the construction of (full) hub vectors from partial vectors and the hubs skeleton. Note that in practice we may construct PPV's directly from partial vectors, as discussed in Section 5.4. However, performance of the construction would depend heavily on the user's preference vector. We consider hub vector computation because it better measures the performance benefits of our partial vectors approach.

As suggested in Section 4.3, the precision of the hub vectors constructed from partial vectors can be varied at query time according to application and performance demands. That is, instead of using the entire set $r_p(H)$ in the construction of $\bm{r_p}$, we can use only the highest $m$ entries, for $m \leq \vert H\vert$.

Figure 3: Construction Time and Size vs. Hubs Skeleton Portion ($m$)
\scalebox{0.42}[0.4]{\includegraphics{m}}
Figure 3 plots the average size and time required to construct a full hub vector from partial vectors in memory versus $m$, for $\vert H\vert = 10,000$. Results are averaged over $50$ randomly-chosen hub vectors. Note that the x-axis is in logarithmic scale.

Recall from Section 6.1 that the partial vectors from which the hub vectors are constructed were computed using 6 iterations, limiting the precision. Thus, the error values in Figure 3 are roughly $16\%$ (ranging from $0.166$ for $m = 100$ to $0.163$ for $m = 10,000$). Nonetheless, this error is much smaller than that of the iteration-$6$ full hub vectors computed in Section 6.1, which have error $(1-c)^6 = 38\%$. Note, however, that the size of a vector is a better indicator of precision than the magnitude, since we are usually most interested in the number of pages with nonzero entries in the distribution vector. An iteration-6 full hub vector (from Section 6.1) for page $p$ contains nonzero entries for pages at most 6 links away from $p$, $93,993$ pages on average. In contrast, from Figure 3 we see that a hub vector containing 14 million nonzero entries can be constructed from partial vectors in 6 seconds.


7. Related Work

The use of personalized PageRank to enable personalized web search was first proposed in [11], where it was suggested as a modification of the global PageRank algorithm, which computes a universal notion of importance. The computation of (personalized) PageRank scores was not addressed beyond the naive algorithm.

In [5], personalized PageRank scores were used to enable ``topic-sensitive'' web search. Specifically, precomputed hub vectors corresponding to broad categories in Open Directory were used to bias importance scores, where the vectors and weights were selected according to the text query. Experiments in [5] concluded that the use of personalized PageRank scores can improve web search, but the number of hub vectors used was limited to 16 due to the computational requirements, which were not addressed in that work. Scaling the number of hub pages beyond 16 for finer-grained personalization is a direct application of our work.

Another technique for computing web-page importance, HITS, was presented in [9]. In HITS, an iterative computation similar in spirit to PageRank is applied at query time on a subgraph consisting of pages matching a text query and those ``nearby''. Personalizing based on user-specified web pages (and their linkage structure in the web graph) is not addressed by HITS. Moreover, the number of pages in the subgraphs used by HITS (on the order of thousands) is much smaller than the number we consider in this paper (on the order of millions), and the computation from scratch at query time makes the HITS approach difficult to scale.

Another algorithm that uses query-dependent importance scores to improve upon a global version of importance was presented in [12]. Like HITS, it first restricts the computation to a subgraph derived from text matching. (Personalizing based on user-specified web pages is not addressed.) Unlike HITS, [12] suggested that importance scores be precomputed offline for every possible text query, but the enormous number of possibilities makes this approach difficult to scale.

The concept of using ``hub nodes'' in a graph to enable partial computation of solutions to the shortest-path problem was used in [3] in the context of database search. That work deals with searches within databases, and on a scale far smaller than that of the web.

Some system aspects of (global) PageRank computation were addressed in [4]. The disk-based data-partitioning strategy used in the implementation of our algorithm is adopted from that presented therein.

Finally, the concept of inverse P-distance used in this paper is based on the concept of expected-$f$ distance introduced in [8], where it was presented as an intuitive model for a similarity measure in graph structures.


8. Summary

We have addressed the problem of scaling personalized web search: we encode personalized PageRank vectors (PPV's) as partial vectors and a hubs skeleton, components that are shared across PPV's and whose computation and storage costs scale well with the number of views; we give scalable dynamic programming algorithms for computing these partial quantities and an algorithm for constructing PPV's from them incrementally at query time; and we present experiments on real web data demonstrating the effectiveness and scalability of our techniques.

9. Acknowledgment

The authors thank Taher Haveliwala for many useful discussions and extensive help with implementation.

10. Bibliography

1
http://www.google.com.

2
http://dmoz.org.

3
R. Goldman, N. Shivakumar, S. Venkatasubramanian, and H. Garcia-Molina.
Proximity search in databases.
In Proceedings of the Twenty-Fourth International Conference on Very Large Databases, New York, New York, Aug. 1998.

4
T. H. Haveliwala.
Efficient computation of PageRank.
Technical report, Stanford University Database Group, 1999.
http://dbpubs.stanford.edu/pub/1999-31.

5
T. H. Haveliwala.
Topic-sensitive PageRank.
In Proceedings of the Eleventh International World Wide Web Conference, Honolulu, Hawaii, May 2002.

6
J. Hirai, S. Raghavan, A. Paepcke, and H. Garcia-Molina.
WebBase: A repository of web pages.
In Proceedings of the Ninth International World Wide Web Conference, Amsterdam, Netherlands, May 2000.
http://www-diglib.stanford.edu/~testbed/doc2/WebBase/.

7
G. Jeh and J. Widom.
Scaling personalized web search.
Technical report, Stanford University Database Group, 2002.
http://dbpubs.stanford.edu/pub/2002-12.

8
G. Jeh and J. Widom.
SimRank: A measure of structural-context similarity.
In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada, July 2002.

9
J. M. Kleinberg.
Authoritative sources in a hyperlinked environment.
In Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, Jan. 1998.

10
R. Motwani and P. Raghavan.
Randomized Algorithms.
Cambridge University Press, United Kingdom, 1995.

11
L. Page, S. Brin, R. Motwani, and T. Winograd.
The PageRank citation ranking: Bringing order to the Web.
Technical report, Stanford University Database Group, 1998.
http://citeseer.nj.nec.com/368196.html.

12
M. Richardson and P. Domingos.
The intelligent surfer: Probabilistic combination of link and content information in PageRank.
In Proceedings of Advances in Neural Information Processing Systems 14, Cambridge, Massachusetts, Dec. 2002.

About this paper

This work was supported by the National Science Foundation under grant IIS-9817799. This is an abbreviated version of the full paper that omits appendices. The full version is available on the web at http://dbpubs.stanford.edu/pub/2002-12.




Footnotes

[footnote 1]
Specifically, $\bm{v}$ corresponds to the steady-state distribution of an ergodic, aperiodic Markov chain.
[footnote 2]
More precisely, the transformation from personalization vectors $\bm{u}$ to their corresponding solution vectors $\bm{v}$ is linear.
[footnote 3]
Note that while the mathematics and computation strategies in this paper are presented in the specific context of the web graph, they are general graph-theoretical results that may be applicable in other scenarios involving stochastic processes, of which PageRank is one example.
[footnote 4]
The definition here of inverse P-distance differs slightly from the concept of expected-$f$ distance in [8], where tours are not allowed to visit $q$ multiple times. Note that general expected-$f$ distances have the form $\sum_{t}{P[t]f(l(t))}$; in our definition, $f(x) = c(1-c)^x$.

