Saturday, November 15, 2014
ShortStraw: A Simple and Effective Corner Finder for Polylines
Bibliographical Information:
A. Wolin, B. Eoff, and T. Hammond. 2008. ShortStraw: a simple and effective corner finder for polylines. In Proceedings of the Fifth Eurographics conference on Sketch-Based Interfaces and Modeling (SBM'08), Christine Alvarado and Marie-Paule Cani (Eds.). Eurographics Association, Aire-la-Ville, Switzerland, Switzerland, 33-40. DOI=10.2312/SBM/SBM08/033-040 http://dx.doi.org/10.2312/SBM/SBM08/033-040
URL:
http://dl.acm.org.lib-ezproxy.tamu.edu:2048/citation.cfm?id=2386308
ShortStraw is a corner-finding algorithm that relies entirely on a single feature. This feature was originally intended for use in conjunction with other features that help identify corners in a polyline, but it proved so accurate that corners could be identified using ShortStraw alone.
The technique behind ShortStraw is relatively simple. Similar to $1, the algorithm begins by resampling the stroke, except unlike $1, the points are resampled at a fixed interspacing distance derived from the stroke's bounding box (its diagonal divided by 40). From there, "straws" are created, which are essentially short Euclidean distances between pairs of points. One point is chosen as the "center", and a window of w=3 points is taken on either side of it: the "positive window" extends to the point in front of the center, p[i+w], and the "negative window" to the point behind it, p[i-w], where i is the index of the current center point. The straw at point i is then the Euclidean distance between p[i-w] and p[i+w].
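These two steps can be sketched roughly as follows. The function and variable names here are my own, not from the paper, and the spacing is left as a parameter (ShortStraw derives it from the bounding-box diagonal divided by 40):

```python
import math

def resample(points, spacing):
    """Resample a stroke so consecutive points are `spacing` apart.
    `points` is a list of (x, y) tuples tracing the stroke."""
    out = [points[0]]
    prev = points[0]
    accum = 0.0
    i = 0
    while i < len(points) - 1:
        nxt = points[i + 1]
        d = math.dist(prev, nxt)
        if accum + d >= spacing:
            # Interpolate a new evenly spaced point on this segment.
            t = (spacing - accum) / d
            q = (prev[0] + t * (nxt[0] - prev[0]),
                 prev[1] + t * (nxt[1] - prev[1]))
            out.append(q)
            prev = q       # stay on this segment; it may yield more points
            accum = 0.0
        else:
            accum += d
            prev = nxt
            i += 1
    return out

def straws(points, w=3):
    """Straw at point i: Euclidean distance between p[i-w] and p[i+w]."""
    return [math.dist(points[i - w], points[i + w])
            for i in range(w, len(points) - w)]
```

On a straight run of evenly spaced points, every straw comes out at 2w times the spacing; near a corner the straw "cuts across" and comes out shorter, which is what the next passes exploit.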
ShortStraw is divided into two approaches: top-down and bottom-up. Bottom-up is the process by which the straws are compared to first identify a "first pass" of detected corners, while the top-down approach takes these corners and runs another pass to eliminate false positives and identify missed corners.
The bottom-up pass iterates through the points and computes the straw for each one as described above. If a straw is nearly as long (within a 95% threshold) as the stroke distance between its endpoints, the straw is "lying flat" on top of the shape, so its center point cannot be a corner. If the straw spans a corner, however, its Euclidean length will be much shorter. The points where straw length reaches a local minimum are the ones considered corners.
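A minimal sketch of that bottom-up pass, assuming corners are local minima of straw length below 0.95 times the median straw length (the function name and the list-of-lengths interface are mine):

```python
import statistics

def bottom_up_corners(straw_lengths, w=3):
    """Bottom-up pass: candidate corners are local minima of straw
    length below a threshold (0.95 * median straw length). Returned
    indices refer to the resampled stroke; straw_lengths[j] belongs
    to resampled point j + w."""
    t = statistics.median(straw_lengths) * 0.95
    corners = [0]  # the first resampled point is always a corner
    j, n = 0, len(straw_lengths)
    while j < n:
        if straw_lengths[j] < t:
            # Walk the whole below-threshold run and keep its minimum.
            best_j = j
            while j < n and straw_lengths[j] < t:
                if straw_lengths[j] < straw_lengths[best_j]:
                    best_j = j
                j += 1
            corners.append(best_j + w)
        else:
            j += 1
    corners.append(n + 2 * w - 1)  # the last resampled point, too
    return corners
```

For an L-shaped stroke, the straws sit at full length along both legs and dip to a minimum exactly at the bend, so only that dip survives the threshold.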
The top-down pass then takes these corners and runs another check, ensuring that the stroke between each pair of consecutive corners forms a "flat" line. Where this is not the case, corners are removed or added depending on whether the fault was a false positive or a false negative. This top-down pass is vital for removing erroneous results from the initial bottom-up pass.
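A simplified sketch of the line test and the false-positive half of that pass (the false-negative side, which inserts missed corners between spans that fail the line test, is omitted; names are illustrative):

```python
import math

def path_length(points, a, b):
    """Stroke distance: sum of segment lengths from index a to b."""
    return sum(math.dist(points[i], points[i + 1]) for i in range(a, b))

def is_line(points, a, b, threshold=0.95):
    """A span is a 'line' if the straight-line distance between its
    endpoints is at least 95% of the stroke distance along it."""
    return math.dist(points[a], points[b]) / path_length(points, a, b) > threshold

def top_down_pass(points, corners):
    """Drop any corner whose two neighbours already form a line
    without it -- such a corner was a false positive."""
    changed = True
    while changed:
        changed = False
        for i in range(1, len(corners) - 1):
            if is_line(points, corners[i - 1], corners[i + 1]):
                del corners[i]
                changed = True
                break
    return corners
```

A spurious corner in the middle of a straight leg fails this check (its neighbours are collinear without it) and is pruned, while a genuine bend survives because removing it would leave a span that is far from straight.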
One of the points raised by this paper is that its definition of "accuracy" is strictly "all-or-nothing". Previous corner-finding papers reported very high accuracy rates, but the authors argue these are somewhat inflated because there is no "penalty" for false positives. By that logic, an algorithm could return every single point as a corner and still report 100% accuracy. While this may make a corner finder sound accurate on paper, in practice reporting too many corners is just as detrimental to the user experience as reporting too few, the wrong ones, or none at all: in any of those cases, the user has to redraw or correct their input diagram. The more false positives reported, the worse the user experience, even if the reported accuracy is 100%. Because of this, the algorithm is scrutinized under an all-or-nothing metric: if the corners found do not match the actual corners exactly (no more, no less), the stroke is counted as a "miss". Under this much more practical definition of accuracy, ShortStraw scores 0.741 out of 1, while Sezgin's algorithm scores 0.278 and Kim & Kim's scores 0.297. ShortStraw is clearly far more accurate at finding corners under an all-or-nothing accuracy model than previously established algorithms.
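The all-or-nothing metric itself is trivial to express; a sketch (the interface is assumed for illustration, not taken from the paper):

```python
def all_or_nothing_accuracy(found_corner_sets, truth_corner_sets):
    """All-or-nothing accuracy: a stroke counts as correct only when
    the found corners match the true corners exactly -- no extras and
    no misses. Returning every point as a 'corner' scores 0 here."""
    correct = sum(1 for found, truth in zip(found_corner_sets, truth_corner_sets)
                  if set(found) == set(truth))
    return correct / len(truth_corner_sets)
```

Note how unforgiving this is: a stroke with ten corners found perfectly plus one false positive counts exactly the same as a stroke with no corners found at all.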
I believe this paper to be a very interesting one, both in concept and in execution. The concept of "straws" and the identification of local minima of straw length may at first sound somewhat unorthodox, but in practice it proves highly effective. Most intriguing of all is that the algorithm uses only the length of straws to determine corners, whereas other corner-finding algorithms rely heavily on multiple features to make their decisions. Finally, the discussion of all-or-nothing accuracy is one that may be inconvenient for several established algorithms, but since these techniques are ultimately intended for use by laymen, a technical definition of accuracy that ignores false positives does not help the practical usability of these algorithms at all. In some ways this adoption of all-or-nothing changes the discussion of what it means for a recognition algorithm to be "accurate".