Locality sensitive hashing

Locality Sensitive Hashing (LSH) is a method of performing probabilistic dimension reduction of high-dimensional data. The basic idea is to hash the input items so that similar items are mapped to the same buckets with high probability (the number of buckets being much smaller than the universe of possible input items).

Definition

An "LSH family mathcal F" is defined fora metric space mathcal M =(M,d), a thresholdR>0 and an approximation factor c>1.An LSH familycite journal
author= Gionis, A.
coauthors = Indyk, P., Motwani, R.
year = 1999
title = Similarity Search in High Dimensions via Hashing
url= http://people.csail.mit.edu/indyk/vldb99.ps ,
journal = Proceedings of the 25th Very Large Database (VLDB) Conference
] cite journal
author = Indyk, Piotr.
coauthors = Motwani, Rajeev.
year = 1998
title = Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality.
url= http://people.csail.mit.edu/indyk/nndraft.ps ,
journal = Proceedings of 30th Symposium on Theory of Computing
] mathcal F is afamily of functions h:{mathcal M} imes{mathcal M} o Ssatisfying the following conditions for any twopoints p,qin {mathcal M}, and a function h chosenuniformly at random from mathcal F:
* if d(p,q) le R, then h(p)=h(q) (i.e.,p and q collide) with probability at least P_1,
* if d(p,q) ge cR, then h(p)=h(q) with probability at most P_2.

A family is interesting when P_1 > P_2. Such a family \mathcal{F} is called "(R, cR, P_1, P_2)-sensitive".

An alternative definition (Charikar 2002, "Similarity Estimation Techniques from Rounding Algorithms", Proceedings of the 34th Annual ACM Symposium on Theory of Computing, http://portal.acm.org/citation.cfm?id=509965, doi:10.1145/509907.509965) is given with respect to a universe of items U that have a similarity function \phi : U \times U \to [0,1]. An LSH scheme is a family of hash functions H coupled with a probability distribution D over the functions, such that a function h \in H chosen according to D satisfies the property that \Pr_{h \in H}[h(a) = h(b)] = \phi(a,b) for any a, b \in U.

Applications

LSH has been applied to several problem domains, including:
*Near Duplicate Detection
*Image Similarity Identification
*Gene Expression Similarity Identification
*Audio Similarity Identification

Methods

Bit Sampling For Hamming Distance

One of the easiest ways to construct an LSH family is by bit sampling (Indyk & Motwani 1998). This approach works for the Hamming distance over d-dimensional binary vectors \{0,1\}^d. Here, the family \mathcal{F} of hash functions is simply the family of all projections of points onto one of the d coordinates, i.e., \mathcal{F} = \{ h : \{0,1\}^d \to \{0,1\} \mid h(x) = x_i, i = 1, \ldots, d \}, where x_i is the i-th coordinate of x. A random function h from \mathcal{F} simply selects a random bit from the input point. This family has the following parameters: P_1 = 1 - R/d, P_2 = 1 - cR/d.
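As an illustration, here is a minimal Python sketch of this family; the name sample_bit_hash is ours, not from the cited papers:

```python
import random

def sample_bit_hash(d, rng=random):
    """Draw h uniformly from F: h(x) = x_i for a random coordinate i."""
    i = rng.randrange(d)       # pick one of the d coordinates
    return lambda x: x[i]      # projection onto coordinate i

# Two binary vectors in {0,1}^8 at Hamming distance 1
p = (0, 1, 1, 0, 0, 1, 0, 1)
q = (0, 1, 1, 0, 1, 1, 0, 1)

h = sample_bit_hash(8)
print(h(p) == h(q))   # True with probability 1 - 1/8 over the choice of h
```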

Min-Wise Independent Permutations

Suppose U is composed of subsets of some ground set of enumerable items S, and the similarity function of interest is the Jaccard index J. If \pi is a permutation on the indices of S, for A \subseteq S let h(A) = \min_{a \in A} \pi(a). Each possible choice of \pi defines a single hash function h mapping input sets to integers.

Define the function family H to be the set of all such functions, and let D be the uniform distribution. Given two sets A, B \subseteq S, the event that h(A) = h(B) corresponds exactly to the event that the minimizer of \pi over A \cup B lies inside A \cap B. As h was chosen uniformly at random, \Pr[h(A) = h(B)] = J(A,B), and (H, D) defines an LSH scheme for the Jaccard index.
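A small Python sketch of this scheme, drawing a truly random permutation (feasible only for small ground sets; all names are illustrative):

```python
import random

def minhash_function(ground_set, rng=random):
    """Draw a permutation pi uniformly at random and return
    h(A) = min over a in A of pi(a)."""
    items = list(ground_set)
    rng.shuffle(items)                                  # uniformly random order
    pi = {item: rank for rank, item in enumerate(items)}
    return lambda A: min(pi[a] for a in A)

S = range(100)                 # ground set of enumerable items
A = set(range(0, 60))
B = set(range(40, 100))        # |A & B| = 20, |A | B| = 100, so J(A,B) = 0.2

# The empirical collision rate converges to the Jaccard index J(A, B)
trials = 20000
hits = 0
for _ in range(trials):
    h = minhash_function(S)
    hits += h(A) == h(B)
print(hits / trials)           # approximately 0.2
```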

Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent": a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen \pi. It has been established that a min-wise independent family of permutations has size at least \operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)} (Broder, Charikar, Frieze & Mitzenmacher 1998, "Min-wise independent permutations", Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pp. 327–336, http://www.cs.princeton.edu/~moses/papers/minwise.ps, doi:10.1145/276698.276781) and that this bound is tight (Takei, Itoh & Shinozaki 1998, "An optimal construction of exactly min-wise independent permutations", Technical Report COMP98-62, IEICE).

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to sets of cardinality at most k (Matousek & Stojakovic 2002, "On Restricted Min-Wise Independence of Permutations", preprint, http://citeseer.ist.psu.edu/689217.html). Approximate min-wise independence differs from the property by at most a fixed \epsilon (Saks, Srinivasan, Zhou & Zuckerman 2000, "Low discrepancy sets yield approximate min-wise independent permutation families", Information Processing Letters 73(1–2), pp. 29–32, http://citeseer.ist.psu.edu/saks99low.html, doi:10.1016/S0020-0190(99)00163-5).

Random Projection

The random projection method of LSH is designed to approximate the cosine distance between vectors. The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.

Given an input vector v and a hyperplane defined by r, we let h(v) = \operatorname{sgn}(v \cdot r). That is, h(v) = \pm 1 depending on which side of the hyperplane v lies on.

Each possible choice of r defines a single hash function. Let H be the set of all such functions and let D be the uniform distribution once again. It is not difficult to prove that, for two vectors u, v, \Pr[h(u) = h(v)] = 1 - \frac{\theta(u,v)}{\pi}, where \theta(u,v) is the angle between u and v. The quantity 1 - \frac{\theta(u,v)}{\pi} is closely related to \cos(\theta(u,v)).

In this instance hashing produces only a single bit. The bits of two vectors match with probability 1 - \theta(u,v)/\pi, which grows as the angle between the vectors shrinks and thus serves as a proxy for their cosine similarity.
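A minimal Python/NumPy sketch of a single random-hyperplane bit; the function name and the empirical check are illustrative:

```python
import numpy as np

def random_hyperplane_hash(d, rng):
    """Pick a random normal vector r; h(v) = sgn(v . r) in {-1, +1}."""
    r = rng.standard_normal(d)       # Gaussian entries => uniform direction
    return lambda v: 1 if np.dot(v, r) >= 0 else -1

rng = np.random.default_rng(0)
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])        # theta(u, v) = pi/4

# The empirical collision rate approaches 1 - (pi/4)/pi = 0.75
trials = 20000
hits = 0
for _ in range(trials):
    h = random_hyperplane_hash(3, rng)
    hits += h(u) == h(v)
print(hits / trials)                 # approximately 0.75
```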

Stable Distributions

The hash function (Datar, Immorlica, Indyk & Mirrokni 2004, "Locality-Sensitive Hashing Scheme Based on p-Stable Distributions", Proceedings of the Symposium on Computational Geometry, http://theory.csail.mit.edu/~mirrokni/pstable.ps) h_{\mathbf{a},b}(\boldsymbol{\upsilon}) : \mathbb{R}^d \to \mathbb{Z} maps a d-dimensional vector \boldsymbol{\upsilon} onto the set of integers. Each hash function in the family is indexed by a choice of random \mathbf{a} and b, where \mathbf{a} is a d-dimensional vector with entries chosen independently from a p-stable distribution and b is a real number chosen uniformly from the range [0, r]. For a fixed \mathbf{a}, b the hash function h_{\mathbf{a},b} is given by h_{\mathbf{a},b}(\boldsymbol{\upsilon}) = \left\lfloor \frac{\mathbf{a} \cdot \boldsymbol{\upsilon} + b}{r} \right\rfloor.
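For p = 2 the Gaussian distribution is 2-stable, so a sketch of this hash function for the Euclidean case might look as follows (helper names are ours):

```python
import numpy as np

def pstable_hash(d, r, rng):
    """h_{a,b}(v) = floor((a . v + b) / r), with the entries of a drawn
    i.i.d. from a 2-stable (Gaussian) distribution and b uniform on [0, r]."""
    a = rng.standard_normal(d)      # Gaussian is 2-stable (Euclidean distance)
    b = rng.uniform(0.0, r)         # random offset
    return lambda v: int(np.floor((np.dot(a, v) + b) / r))

rng = np.random.default_rng(1)
h = pstable_hash(d=4, r=1.0, rng=rng)
print(h(np.array([0.10, 0.20, 0.30, 0.40])))
print(h(np.array([0.11, 0.19, 0.31, 0.41])))   # nearby points often collide
```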

LSH Algorithm for the Nearest Neighbor Search

One of the main applications of LSH is to provide efficient nearest neighbor search algorithms. Consider an LSH family \mathcal{F}. The algorithm has two main parameters: the width parameter k and the number of hash tables L.

In the first step, we define a new family \mathcal{G} of hash functions g, where each function g is obtained by concatenating k functions h_1, \ldots, h_k from \mathcal{F}, i.e., g(p) = [h_1(p), \ldots, h_k(p)]. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from \mathcal{F}. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.

In the preprocessing step we hash all n points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to O(n) using standard hash functions.

Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found.
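A compact Python sketch of the table construction and query loop, reusing the bit-sampling family over \{0,1\}^d with Hamming distance; all names and parameter values are illustrative:

```python
import random

def build_tables(points, sample_h, k, L):
    """Construct L hash tables, each keyed by g(p) = (h_1(p), ..., h_k(p))
    with h_1, ..., h_k drawn independently from the family F."""
    tables = []
    for _ in range(L):
        hs = [sample_h() for _ in range(k)]
        g = lambda p, hs=hs: tuple(h(p) for h in hs)   # concatenated key
        table = {}
        for p in points:
            table.setdefault(g(p), []).append(p)       # preprocessing step
        tables.append((g, table))
    return tables

def query(q, tables, dist, cR):
    """Scan q's bucket in each table; stop at the first point within cR."""
    for g, table in tables:
        for p in table.get(g(q), []):
            if dist(p, q) <= cR:
                return p
    return None

# Example: bit-sampling family over {0,1}^8 with Hamming distance
d = 8
points = [tuple(random.randrange(2) for _ in range(d)) for _ in range(200)]
hamming = lambda p, q: sum(x != y for x, y in zip(p, q))
sample_h = lambda: (lambda x, i=random.randrange(d): x[i])
tables = build_tables(points, sample_h, k=4, L=8)
print(query(points[0], tables, hamming, cR=2))         # finds a nearby point
```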

Given the parameters k and L, the algorithm has the following performance guarantees:
* preprocessing time: O(nLkt), where t is the time to evaluate a function h \in \mathcal{F} on an input point p;
* space: O(nL), plus the space for storing data points;
* query time: O(L(kt + dnP_2^k));
* the algorithm succeeds in finding a point within distance cR from q (if such a point exists) with probability at least \Omega(\min\{1, LP_1^k\}).

For a fixed approximation ratio c = 1 + \epsilon and probabilities P_1 and P_2, one can set k = \frac{\log n}{\log 1/P_2} and L = n^{\rho}, where \rho = \frac{\log P_1}{\log P_2} (a worked calculation follows the list below). Then one obtains the following performance guarantees:
* preprocessing time: O(n^{1+\rho}kt);
* space: O(n^{1+\rho}), plus the space for storing data points;
* query time: O(n^{\rho}(kt + d)).
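As an illustrative calculation of these parameter settings (the numbers d = 128, R = 16, c = 2, n = 10^6 are ours, chosen for the bit-sampling family):

```python
import math

def lsh_parameters(n, P1, P2):
    """k = log n / log(1/P2), rho = log(1/P1) / log(1/P2), L = n^rho."""
    k = math.ceil(math.log(n) / math.log(1 / P2))
    rho = math.log(1 / P1) / math.log(1 / P2)
    L = math.ceil(n ** rho)
    return k, rho, L

# Bit sampling with d = 128, R = 16, c = 2 gives P1 = 0.875, P2 = 0.75
k, rho, L = lsh_parameters(n=10**6, P1=0.875, P2=0.75)
print(k, rho, L)   # roughly k = 49, rho = 0.464, L = 610
```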

See also

*Nearest neighbor search

Related Papers

* [http://web.mit.edu/andoni/www/LSH/index.html Alex Andoni's LSH homepage]

