Levenshtein distance
In information theory and computer science, the Levenshtein distance is a string metric for measuring the amount of difference between two sequences. The term edit distance is often used to refer specifically to Levenshtein distance.
The Levenshtein distance between two strings is defined as the minimum number of edits needed to transform one string into the other, with the allowable edit operations being insertion, deletion, or substitution of a single character. It is named after Vladimir Levenshtein, who considered this distance in 1965.[1]
Example
For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:
- kitten → sitten (substitution of 's' for 'k')
- sitten → sittin (substitution of 'i' for 'e')
- sittin → sitting (insertion of 'g' at the end).
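The definition above can be written down directly as a recursion on string prefixes. A minimal Python sketch for illustration (the function name is ours; memoization via functools.lru_cache keeps the naive recursion from being exponential):

```python
from functools import lru_cache

def levenshtein(s: str, t: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to transform s into t."""
    @lru_cache(maxsize=None)
    def dist(i: int, j: int) -> int:
        # Distance between the prefixes s[:i] and t[:j].
        if i == 0:
            return j  # insert all of t[:j]
        if j == 0:
            return i  # delete all of s[:i]
        cost = 0 if s[i - 1] == t[j - 1] else 1
        return min(dist(i - 1, j) + 1,         # deletion
                   dist(i, j - 1) + 1,         # insertion
                   dist(i - 1, j - 1) + cost)  # substitution (or match)
    return dist(len(s), len(t))

print(levenshtein("kitten", "sitting"))  # 3
```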
Applications
In approximate string matching, the objective is to find matches for short strings, for instance, strings from a dictionary, in many longer texts, in situations where a small number of differences is to be expected. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural language translation based on translation memory.
The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical.
Relationship with other edit distance metrics
Levenshtein distance is not the only popular notion of edit distance. Variations can be obtained by changing the set of allowable edit operations: for instance,
- the metric obtained by allowing only insertion and deletion, not substitution, equals |s| + |t| − 2·LCS(s, t), where LCS(s, t) is the length of the longest common subsequence;
- the Damerau–Levenshtein distance allows addition, deletion, substitution, and the transposition of two adjacent characters;
- the Hamming distance only allows substitution (and hence, only applies to strings of the same length).
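Two of these variants are simple enough to sketch in Python for comparison (function names are illustrative; lcs_distance implements the insertion-and-deletion-only metric via the longest common subsequence):

```python
def hamming(s: str, t: str) -> int:
    # Substitutions only; defined for equal-length strings.
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

def lcs_distance(s: str, t: str) -> int:
    # Insertions and deletions only: |s| + |t| - 2 * LCS(s, t).
    m, n = len(s), len(t)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            L[i + 1][j + 1] = (L[i][j] + 1 if s[i] == t[j]
                               else max(L[i][j + 1], L[i + 1][j]))
    return m + n - 2 * L[m][n]

print(hamming("flaw", "lawn"))       # 4
print(lcs_distance("flaw", "lawn"))  # 2
```

The Levenshtein distance between "flaw" and "lawn" is also 2 (delete 'f', append 'n'), illustrating how the choice of allowed operations changes the metric: with substitutions forbidden entirely except position-by-position, Hamming distance jumps to 4.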
Edit distance in general is usually defined as a parametrizable metric in which a repertoire of edit operations is available, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
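The parametrizable form described above can be sketched by attaching a cost to each operation; with all costs equal to 1 this reduces to the plain Levenshtein distance. The function name and uniform-cost parameters here are our own illustration (per-character cost functions, as in sequence alignment, would be a further step):

```python
def weighted_edit_distance(s: str, t: str,
                           ins: float = 1.0,
                           dele: float = 1.0,
                           sub: float = 1.0) -> float:
    # Generalized edit distance where each operation carries a cost.
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + dele   # delete all of s[:i]
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins    # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + dele,
                          d[i][j - 1] + ins,
                          d[i - 1][j - 1] + (0.0 if s[i - 1] == t[j - 1] else sub))
    return d[m][n]

print(weighted_edit_distance("kitten", "sitting"))  # 3.0
```

Raising the substitution cost above ins + dele makes the algorithm prefer a deletion-plus-insertion pair over a substitution, which is exactly the behaviour the general cost framework is meant to capture.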
Computing Levenshtein distance
Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.
This algorithm, an example of bottom-up dynamic programming, is discussed, with variants, in the 1974 article The String-to-string correction problem by Robert A. Wagner and Michael J. Fischer.
A straightforward implementation, as pseudocode for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:
int LevenshteinDistance(char s[1..m], char t[1..n]) {
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t;
    // note that d has (m+1)x(n+1) values
    declare int d[0..m, 0..n]

    for i from 0 to m
        d[i, 0] := i  // the distance of any first string to an empty second string
    for j from 0 to n
        d[0, j] := j  // the distance of any second string to an empty first string

    for j from 1 to n {
        for i from 1 to m {
            if s[i] = t[j] then
                d[i, j] := d[i-1, j-1]       // no operation required
            else
                d[i, j] := minimum(
                    d[i-1, j] + 1,    // a deletion
                    d[i, j-1] + 1,    // an insertion
                    d[i-1, j-1] + 1   // a substitution
                )
        }
    }

    return d[m, n]
}
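As a check, the pseudocode transcribes almost line for line into runnable Python (the names d, m, n mirror the pseudocode; since Python strings are 0-indexed, s[i-1] here corresponds to the pseudocode's s[i]):

```python
def levenshtein_distance(s: str, t: str) -> int:
    m, n = len(s), len(t)
    # d[i][j] holds the distance between the first i characters of s
    # and the first j characters of t.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # distance from s[:i] to the empty string
    for j in range(n + 1):
        d[0][j] = j  # distance from the empty string to t[:j]
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]  # no operation required
            else:
                d[i][j] = min(d[i - 1][j] + 1,      # deletion
                              d[i][j - 1] + 1,      # insertion
                              d[i - 1][j - 1] + 1)  # substitution
    return d[m][n]

print(levenshtein_distance("Saturday", "Sunday"))  # 3
```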
Two examples of the resulting matrix:

       k  i  t  t  e  n
    0  1  2  3  4  5  6
 s  1  1  2  3  4  5  6
 i  2  2  1  2  3  4  5
 t  3  3  2  1  2  3  4
 t  4  4  3  2  1  2  3
 i  5  5  4  3  2  2  3
 n  6  6  5  4  3  3  2
 g  7  7  6  5  4  4  3

       S  a  t  u  r  d  a  y
    0  1  2  3  4  5  6  7  8
 S  1  0  1  2  3  4  5  6  7
 u  2  1  1  2  2  3  4  5  6
 n  3  2  2  2  3  3  4  5  6
 d  4  3  3  3  3  4  3  4  5
 a  5  4  3  4  4  4  4  3  4
 y  6  5  4  4  5  5  5  4  3

The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.

Proof of correctness
As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:
- It is initially true on row and column 0, because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
- If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
- Otherwise, the distance is the minimum of the three possible ways to do the transformation:
  - If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
  - If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
  - If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).
- The number of operations required to transform s[1..m] into t[1..n] is of course the number required to transform all of s into all of t, and so d[m,n] holds our result.

This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.

Possible improvements
Possible improvements to this algorithm include:
- We can adapt the algorithm to use less space, O(min(n,m)) instead of O(mn), since it only requires that the previous row and current row be stored at any one time.
- We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur.
- We can normalize the distance to the interval [0,1].
- If we are only interested in the distance when it is smaller than a threshold k, then it suffices to compute a diagonal stripe of width 2k+1 in the matrix. In this way, the algorithm can be run in O(kl) time, where l is the length of the shorter string.[2]
- We can give different penalty costs to insertion, deletion and substitution. We can also give penalty costs that depend on which characters are inserted, deleted or substituted.
- By initializing the first row of the matrix with 0, the algorithm can be used for fuzzy string search of a string in a text.[3] This modification gives the end-position of matching substrings of the text. To determine the start-position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start-position from the end-position.[4]
- This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.
- By examining diagonals instead of rows, and by using lazy evaluation, we can find the Levenshtein distance in O(m(1 + d)) time (where d is the Levenshtein distance), which is much faster than the regular dynamic programming algorithm if the distance is small.[5]
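The first improvement above, storing only the previous and current rows, can be sketched in Python as follows (the function name is illustrative; swapping the shorter string into the column role keeps memory at O(min(n,m))):

```python
def levenshtein_two_rows(s: str, t: str) -> int:
    # Keep only two rows of the (m+1) x (n+1) matrix at a time.
    if len(s) < len(t):
        s, t = t, s  # make t the shorter string: rows have len(t)+1 cells
    prev = list(range(len(t) + 1))  # row 0: distances from the empty prefix
    for i, cs in enumerate(s, start=1):
        curr = [i]  # column 0: distance from s[:i] to the empty string
        for j, ct in enumerate(t, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (cs != ct)))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein_two_rows("kitten", "sitting"))  # 3
```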
Upper and lower bounds
The Levenshtein distance has several simple upper and lower bounds that are useful in applications that compute many of these distances and compare them. These include:
- It is always at least the difference of the sizes of the two strings.
- It is at most the length of the longer string.
- It is zero if and only if the strings are identical.
- If the strings are the same size, the Hamming distance is an upper bound on the Levenshtein distance.
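These bounds are easy to check empirically; a minimal sketch (the compact helper lev is our own, any correct implementation would do) asserting them on a sample pair:

```python
def lev(s: str, t: str) -> int:
    # Compact two-row dynamic-programming Levenshtein distance.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (cs != ct)))
        prev = curr
    return prev[-1]

s, t = "Saturday", "Sunday"
d = lev(s, t)
assert d >= abs(len(s) - len(t))  # at least the difference of the sizes
assert d <= max(len(s), len(t))   # at most the length of the longer string
assert (d == 0) == (s == t)       # zero if and only if the strings are identical
print(d)  # 3
```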
See also
- agrep
- Bitap algorithm
- Damerau–Levenshtein distance
- diff
- Dynamic time warping
- Euclidean distance
- Fuzzy string searching
- Hamming weight
- Hirschberg's algorithm
- Homology of sequences in genetics
- Hunt–McIlroy algorithm
- Jaccard index
- Jaro–Winkler distance
- Levenshtein automaton
- Longest common subsequence problem
- Lucene (an open source search engine that implements edit distance)
- Manhattan distance
- Metric space
- Needleman–Wunsch algorithm
- Sequence alignment
- Similarity (mathematics)
- Similarity space on Numerical taxonomy
- Smith–Waterman algorithm
- Sørensen similarity index
Notes
- ^ В.И. Левенштейн (1965). "Двоичные коды с исправлением выпадений, вставок и замещений символов". Доклады Академии наук СССР 163 (4): 845–8. Appeared in English as: Levenshtein VI (1966). "Binary codes capable of correcting deletions, insertions, and reversals". Soviet Physics Doklady 10: 707–10. http://www.scribd.com/doc/18654513/levenshtein?secret_password=1aycnw239qw4jqjtsm34#full.
- ^ Gusfield, Dan (1997). Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge, UK: Cambridge University Press. pp. 263-264. ISBN 0-521-58519-8.
- ^ Navarro G (2001). "A guided tour to approximate string matching". ACM Computing Surveys 33 (1): 31–88. doi:10.1145/375360.375365.
- ^ Bruno Woltzenlogel Paleo. An approximate gazetteer for GATE based on levenshtein distance. Student Section of the European Summer School in Logic, Language and Information (ESSLLI), 2007.
- ^ Allison L (September 1992). "Lazy Dynamic-Programming can be Eager". Inf. Proc. Letters 43 (4): 207–12. doi:10.1016/0020-0190(92)90202-7. http://www.csse.monash.edu.au/~lloyd/tildeStrings/Alignment/92.IPL.html.
Wikimedia Foundation. 2010.