TOP (2008) 16:367–387, DOI 10.1007/s11750-008-0055-2. Original Paper
A practical algorithm for decomposing polygonal domains into convex polygons by diagonals José Fernández · Boglárka Tóth · Lázaro Cánovas · Blas Pelegrín
Received: 11 October 2007 / Accepted: 21 April 2008 / Published online: 16 May 2008 © Sociedad de Estadística e Investigación Operativa 2008
Abstract  We present algorithms for decomposing a polygon (with holes) into convex polygons by diagonals. The methods are computationally quick, and although the partitions they produce may not have minimal cardinality, they usually consist of a low number of convex pieces. The methods are therefore well suited to settings where a modest load on the CPU time is more important than finding optimal decompositions, as, for instance, in location problems.
Keywords  Convex polygon decomposition · Polygonal holes · Location
Mathematics Subject Classification (2000)  68U05 · 52B55 · 90B85
1 Introduction

A convex decomposition of a polygon P is a set of convex polygons whose union gives P and such that the intersection of any two of them, if non-empty, consists entirely of edges and vertices. The problem of decomposing a polygon into convex polygons
In memory of Lázaro Cánovas (1967–2006). Part of the results in this paper are from Fernández (1999) and were presented in Fernández et al. (1998). This work has been supported by the Ministry of Science and Technology of Spain under the research projects BEC2002-01026, SEJ2005-06273/ECON (J. Fernández, B. Tóth and B. Pelegrín) and TIC2003-05982-C05-03 (L. Cánovas), in part financed by the European Regional Development Fund (ERDF).
J. Fernández (corresponding author) · L. Cánovas · B. Pelegrín, Department of Statistics and Operations Research, University of Murcia, Murcia, Spain. e-mail:
[email protected]
B. Tóth, Department of Differential Equations, Budapest University of Technology and Economics, Budapest, Hungary
appears, for instance, in object representation, pattern recognition, database systems, computer graphics, or motion planning. References and other applications can be found in Fernández et al. (2000), Lien and Amato (2004). The problem that led us to consider such a decomposition was the definition of the feasible set in constrained planar location problems. Location science deals with the location of one or more facilities in a way that optimizes a certain objective, such as minimizing transportation costs, minimizing the undesirable effects produced by the facility, or capturing the largest market share. For more details on locational aspects see Drezner (1995). Whatever the location problem considered, the point(s) to be located must belong to a particular geographical area. That area may have a more or less complicated shape, but it can always be approximated by a polygon, possibly with polygonal holes (representing areas inside the polygon where the location is not possible), whose number of vertices will depend on the desired accuracy of the approximation. Unfortunately, those polygons, usually nonconvex, cannot be described analytically by a set of linear constraints. So, in order to obtain an analytical expression of the feasible set, the polygon must be decomposed into simpler sets. The most appropriate sets into which to decompose the polygon are convex polygons, both because they admit a simple analytical description and because of their good optimization properties.

When decomposing a polygon P into convex polygons we may pursue two different goals: (a) partition the polygon into as few convex pieces as possible, or (b) partition it as quickly as possible. The goals conflict, so we have to choose one of the two following approaches: (1) compromise on the number of pieces, i.e., find a quick algorithm whose inefficiency in terms of the number of pieces is bounded with respect to the optimum, or (2) compromise on the time complexity, i.e., find an algorithm that produces an optimal partition as quickly as possible. Furthermore, three different types of partition can be used: by diagonals (the endpoints of the edges of the partition must be vertices), by segments (the endpoints of the edges need only lie on ∂P), or free (the endpoints may be any points of P).

When convex decompositions are to be applied in location problems, the best compromise is to find a quick algorithm whose inefficiency in terms of the number of pieces is bounded with respect to the optimum: many location problems are easier to solve in convex regions and, making use of a branch-and-bound process or other techniques, it is not necessary to find the optimal solution in all the subsets, so finding convex decompositions of minimal cardinality may waste time. On the other hand, from the optimization point of view it is not advisable to add new (Steiner) vertices in the partition: in many location problems vertices must be treated with special attention, since they may be local (sometimes global) optimal points or give information about optimal points, and adding new vertices causes extra work in fathoming them. So, for location problems the partition by diagonals is recommended. Other decomposition techniques, such as the approximate convex decomposition of polygons presented in Lien and Amato (2004), are not suitable, since many optimization algorithms for location problems require the feasible set to be convex.
Unfortunately, most of the research on polygon decomposition is devoted to the second compromise solution (see, for instance, Chazelle and Dobkin 1985; Keil 1985;
Keil and Snoeyink 2002), and usually only the case without holes is considered: the problem of decomposing a polygon with holes into the minimum number of convex pieces is known to be NP-hard (see Keil 1985; Lingas 1982), so it does not have the theoretical attractiveness of finding polynomial algorithms to solve it. For the case without holes, Keil (1985) presented an exact O(nr^2 log r) time algorithm to decompose a polygon with n vertices, r of which are notches (their interior angles are reflex, i.e., greater than π), into the minimum number of convex polygons without adding new vertices. Nevertheless, it is a dynamic programming algorithm and, although he used the concept of equivalent states, as introduced in Elmaghraby (1970), to reduce the size of the state space and so achieve a polynomial algorithm, it suffers, like other dynamic programming algorithms, from the curse of dimensionality (Denardo 1982): when run on a computer it is time-consuming and needs much storage space, especially for polygons with many vertices like those appearing in location problems. Something similar happens with Greene's exact dynamic programming algorithm (Greene 1983), which runs in O(n^4) time and is part of the CGAL project (the Computational Geometry Algorithms Library open source project, http://www.cgal.org/; function optimal_convex_partition_2). On the other hand, we should remember that, in order to obtain the decomposition of a polygon into the minimum number of convex polygons, additional (Steiner) points must be introduced as vertices of newly generated polygons. This other problem was proved in Chazelle and Dobkin (1985) to be solvable in polynomial time, contrary to what was thought, although in that article the authors admitted that the O(n + r^3) time algorithm they proposed to show this is "inherently intricate and implementing it in its most elaborate form is certainly a formidable task". Unfortunately, that is true for most of the decomposition algorithms in the literature. As pointed out in O'Rourke (1996), researchers working on this kind of problem have usually pursued asymptotically optimal algorithms, and only now is greater emphasis being placed on computationally robust algorithms, on approximations, and on the need for simple, practical algorithms.

Hertel and Mehlhorn (1983) presented an algorithm suitable when approximately optimal convex partitions are enough (no optimal partitions are required), as in location problems. The idea of that algorithm (also available in CGAL as function approx_convex_partition_2) is basically this: triangulate the polygon to be decomposed and then remove inessential diagonals. Their technique obtains convex decompositions by diagonals in O(n + r log r) time, and its inefficiency in terms of the number of pieces is bounded with respect to the optimum: no more than four times the optimal number of convex pieces, as for any other algorithm producing partitions which do not contain inessential diagonals (see Corollary 2). Fernández et al. (2000) presented new algorithms with the same aim and bound. Although these have higher theoretical complexity (see Sect. 2), they proved to be better than Hertel and Mehlhorn's, both in the computational times they use and, more importantly, in the cardinality of the partitions they produce (see also Sect. 5). The reason for this is that the algorithms in Fernández et al. (2000) try to generate convex polygons with as many vertices as possible, and so the partitions they produce have low cardinality.
On the other hand, Hertel and Mehlhorn's algorithm does not follow
any strategy with that aim apart from removing inessential diagonals of the triangulation. An implementation of the algorithms in Fernández et al. (2000) can be found in Fernández et al. (1997).

For the case with holes, Narkhede and Manocha (1995) implemented the O(n log* n) algorithm described in Seidel (1991) for triangulating simple polygons without holes and extended the code to handle holes. Held (2001) has also presented an O(n^2 h) algorithm, FIST, able to triangulate polygons with n vertices and h polygonal holes, and in a comprehensive computational study it proved to be superior to Narkhede and Manocha's implementation (in its current implementation FIST seems to have O(nh + nr) complexity (Held 2008), and FIST's practical CPU-time consumption tends to be close to linear in n). If we apply a process for removing inessential diagonals to the triangulations obtained by both algorithms, we can obtain approximately optimal convex decompositions. However, as already shown in Fernández et al. (2000), the convex decompositions obtained by triangulating and then removing inessential diagonals are far from having minimal cardinality, and their cardinality is usually higher than that of the decompositions produced by our algorithms (see Sect. 5).

In this paper we modify the algorithms in Fernández et al. (2000) so that they can also decompose a special kind of non-simple polygon which we call a weakly-in-simple polygon. Then, with the help of this kind of polygon, we present methods for decomposing polygons with holes that produce partitions of low cardinality in a small amount of time, which is the target of this paper. To the best of our knowledge, these are the first algorithms proposed in the literature with that specific aim.

2 Decomposing polygons without holes

Throughout this paper, ang(a, b, c) denotes the angle between 0 and 360 degrees swept by a counterclockwise rotation from line segment ba to line segment bc. We next give one of the possible definitions of the notion of polygon.

Definition 1 Let v1, v2, ..., vn be n points in the plane (here and throughout, all index arithmetic will be mod n, implying a cyclic ordering of the points, with v1 following vn) and let e1 = v1 v2, e2 = v2 v3, ..., en = vn v1 be the n segments connecting the points. Suppose the following conditions hold:

(p1) The intersection of each pair of segments adjacent in the cyclic ordering is the single point shared between them: ei ∩ ei+1 = vi+1, ∀i = 1, ..., n.
(p2) Nonadjacent segments do not intersect: ei ∩ ej = ∅, ∀j ∉ {i − 1, i, i + 1}.

The closed bounded connected region of the plane enclosed by the segments e1, ..., en is said to be the simple polygon generated by the points v1, ..., vn. The points vi are called the vertices of the polygon, and the segments ei are called its edges.

The boundary of a simple polygon P, denoted by ∂P, is formed by the vertices and edges of P; notice that ∂P ⊂ P. The term "simple" is used to distinguish this kind of polygon from polygons that cross themselves, that is, from those that do not satisfy (p2). Nevertheless, throughout we will drop the modifier "simple"; unless otherwise stated, the term polygon will be used to refer to simple polygons.
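For concreteness, the angle ang(a, b, c) can be computed from the directions of the segments ba and bc. The following C++ fragment is a minimal sketch of one possible implementation (the names Point and ang are ours, and the fragment is illustrative, not the code used in our experiments):

    #include <cmath>

    struct Point { double x, y; };

    // Counterclockwise angle in [0, 360) swept from segment b->a to segment b->c.
    // A hypothetical helper illustrating the definition, not the authors' code.
    double ang(const Point& a, const Point& b, const Point& c) {
        double t = std::atan2(c.y - b.y, c.x - b.x)    // direction of bc
                 - std::atan2(a.y - b.y, a.x - b.x);   // minus direction of ba
        double deg = t * 180.0 / M_PI;                 // M_PI from POSIX <cmath>
        return deg < 0.0 ? deg + 360.0 : deg;          // normalize into [0, 360)
    }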
In Fernández et al. (2000), the authors presented different greedy-type algorithms for the decomposition of a simple polygon into convex polygons by diagonals, which proved to be very well suited for use when optimal decompositions are not required; they are quick, and their inefficiency in terms of the number of pieces is bounded with respect to the optimum (see Corollary 2). Those algorithms follow a "divide and conquer" scheme. They start with the whole polygon to be decomposed. Then, given an initial vertex, a convex polygon of the partition is generated using a procedure and is cut off from the initial polygon. This process is repeated with the remaining polygon until it is convex, in which case it becomes the last polygon of the partition.

The best algorithm described in Fernández et al. (2000), MP3, works as follows. The vertices of the next convex polygon of the partition are stored in a list, L. Initially, L consists of one vertex, say v1. Then we add the next consecutive vertex (in clockwise order) of P, v2, to L. If the last vertex added to L is vi, then we provisionally add vi+1 to L if the angles ang(vi−1, vi, vi+1), ang(vi, vi+1, v1) and ang(vi+1, v1, v2) are all less than or equal to 180° (this prevents vi, vi+1, and v1, respectively, from becoming notches; see the sketch below). We go on adding new vertices to L until all the vertices of P are in L or until we first find a vertex failing one of the three conditions. In the first case, the convex polygon generated is the whole polygon P, that is, P was convex, and the algorithm stops.

So, let us suppose that the final provisional list is L = {v1, ..., vk}, 2 ≤ k < n. If k > 2, then we have to check whether the convex polygon generated by the diagonal vk v1 contains vertices of P \ L. Notice that if there are vertices of P \ L inside the convex polygon, then at least one of them must be a notch, so we can restrict the check to the notches of P \ L. When the check starts, we first generate the smallest rectangle R with sides parallel to the axes containing all the vertices of L. If v is a notch of P \ L, we first check whether v is inside R. If so, we check whether v is inside the convex polygon generated by L by testing whether v satisfies all the linear constraints defining the polygon. If a vertex v is found to be inside the polygon generated by L, then we remove from L its last vertex vk and all the vertices of L in the half-plane generated by v1 v containing vk. This process is repeated with the new L until no vertex is inside the polygon generated by L. Then, so as to obtain a bigger polygon, we try to expand L by adding, in counterclockwise order, consecutive adjacent vertices at its first position v1, following a process similar to the one described above. If L then has more than two vertices, and at least one of the two endpoints of the diagonal joining the last and first vertices of L is a notch, it generates one of the polygons of the partition; otherwise the procedure does not generate a polygon in this call. The algorithm continues by calling MP3 again, taking as initial vertex the last vertex (in clockwise order) of L.

MP3 can be improved if, instead of applying MP3 from the last vertex of the last convex polygon generated, we apply it from the first notch (in clockwise order) found starting from the last vertex (in clockwise order) of the last convex polygon generated and, if no polygon is generated, in counterclockwise order. This modified procedure will be called MP4.
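The growth test of MP3 can be written down directly from the three angle conditions. The following fragment is a minimal sketch of one growth step, reusing Point and ang from the previous sketch (the name canExtend and the argument layout are ours):

    // One growth step of MP3: vi+1 may be provisionally appended to L only if
    // none of vi, vi+1, v1 would become a notch. Here vPrev = vi-1, vLast = vi,
    // vNext = vi+1; v1 and v2 are the first two vertices stored in L.
    bool canExtend(const Point& vPrev, const Point& vLast, const Point& vNext,
                   const Point& v1, const Point& v2) {
        return ang(vPrev, vLast, vNext) <= 180.0    // vi stays convex
            && ang(vLast, vNext, v1)    <= 180.0    // vi+1 stays convex
            && ang(vNext, v1, v2)       <= 180.0;   // v1 stays convex
    }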
Notice that with MP4 we have to add vertices to L in only one direction. Another new procedure, which will be called MP5, can be obtained by applying MP3 from the first notch (in clockwise order) found starting from the last vertex
of the last convex polygon. If no convex polygon is generated from a given notch, the procedure is repeated from the next one. However, it may happen that MP3 does not produce any convex polygon from any notch. In that case, we apply the same process counterclockwise, and so on. That is, in MP5 we change the direction only when it is not possible to generate any convex polygon from any notch.

The theoretical running times of those algorithms can be computed as follows. Assume that we want to decompose a polygon P with n vertices (r of them notches and the other n − r convex) with MP3. Adding vertices to the provisional list L can be done in O(n − r) time (there are n − r convex vertices, hence we can add at most n − r + 2 vertices, and each of them needs just three products to check the angles), the computation of the smallest rectangle R containing the convex polygon generated by the vertices in L is O(n − r), and finding the notches inside R is O(r). On the other hand, checking whether the notches are inside the convex polygon generated by L needs O(r(n − r)) time (we may have to check r notches, and for each of them we may need to do n − r products to check whether it is inside the polygon). Removing vertices from L can be done in O(n − r). Since this process may be repeated up to r times, we need O(n − r) + r · (O(n − r) + O(r) + O(r(n − r))) = O(r^2(n − r)) time to generate a polygon of the partition. This process may need to be repeated at most n − 2 times, but with a diminishing number of vertices: by cutting off the smallest possible polygon, a triangle, a convex vertex is eliminated, and the other two vertices may change from notches to convex vertices (if either of them was a notch). Thus, after the i-th step the complexity is in fact at most O(r^2(n − r − i)). Hence the total complexity is O(Σ_{i=0}^{n−3} r^2(n − r − i)) = O(nr^2(n/2 − r) + (5/2)nr^2 − 3r^2). In particular, if r is close to n/2 (as has happened in our computational studies), the complexity is about O(r^3). If we also apply the process to remove inessential diagonals, this needs at most O(n) additional time, hence the total complexity remains the same. Similar running times are obtained for the algorithms using MP4 and MP5.

It is important to note that, whatever procedure we choose, the choice of the initial vertex affects the final solution, i.e., different initial vertices may produce different partitions, even with different cardinality. On the other hand, the partitions produced by the algorithms may contain inessential diagonals.

Definition 2 Given a partition of a polygon into convex polygons, a diagonal d of the partition is said to be essential for vertex v if removal of d creates a piece that is nonconvex at v. A diagonal that is not essential for any vertex is called inessential.

If a diagonal is inessential, it can be removed from the partition, since the two convex polygons sharing the diagonal can be merged into a single one. In Fernández et al. (2000) the authors presented a merging process to remove inessential diagonals, which can be used after any partitioning process. The following result is well known (see, for instance, Fernández et al. 2000).

Theorem 1 Let OPT be the fewest number of convex subpolygons into which a polygon P may be partitioned. If polygon P has r notches, then ⌈r/2⌉ + 1 ≤ OPT ≤ 2r + 1.
Since at most two diagonals are essential for any reflex vertex (see Hertel and Mehlhorn 1983), the following result follows.

Corollary 2 Any decomposition without inessential diagonals contains at most four times as many pieces as a minimum decomposition. In particular, this holds for the partitions generated by any decomposition algorithm after applying the merging process.

Nevertheless, we think that this bound on the absolute quality of an algorithm is not a representative measure of its performance, since the bound must take into account the worst possible case, which hardly ever appears in practice. In fact, the computational studies presented in Fernández et al. (2000) clearly show that the performance of the algorithms is much better than suggested by Corollary 2 (see also Sect. 5).
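As an illustration of the containment test used by MP3 (checking whether a notch satisfies all the linear constraints defining the convex polygon generated by L), the following fragment is a minimal sketch based on signed areas. It assumes the vertices are stored in clockwise order; Point is as in the earlier sketch, and the helper names are ours:

    #include <vector>

    // Twice the signed area of triangle (a, b, c); negative when c lies to the
    // right of the directed segment from a to b.
    double signedArea2(const Point& a, const Point& b, const Point& c) {
        return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    }

    // True if v lies inside or on the boundary of the convex polygon whose
    // vertices are given in clockwise order.
    bool insideConvexClockwise(const std::vector<Point>& poly, const Point& v) {
        const std::size_t n = poly.size();
        for (std::size_t i = 0; i < n; ++i)
            if (signedArea2(poly[i], poly[(i + 1) % n], v) > 0.0)
                return false;   // v lies strictly to the left of one edge
        return true;
    }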
3 Decomposing weakly-in-simple polygons

Next we give the definition of weakly-simple polygon, a type of non-simple polygon introduced in Toussaint (1989).

Definition 3 Let C1 and C2 be two oriented (possibly self-intersecting) curves. We say that C1 and C2 have a proper crossing provided that, as we traverse C1 from its starting point to its finishing point, we encounter a neighborhood of C2 where C1 intersects C2 and actually switches from one side of C2 to the other (see Fig. 1).

Definition 4 Let C be a closed polygonal path (a closed path consisting of a sequence of line segments) such that:
Fig. 1 Example of intersecting curves (a) with proper crossing and (b) without proper crossing
Fig. 2 Polygon (a) is weakly-in-simple (notice, for instance, that ang(v1 , v2 , v3 ) and ang(v23 , v24 , v25 ) do not overlap) whereas polygon (b) is weakly-simple, but not weakly-in-simple (ang(v1 , v2 , v3 ) and ang(v9 , v10 , v11 ) do overlap)
1. every pair of distinct points of C partitions C into two polygonal chains that have no proper crossings, and
2. the sum of all the angles turned when C is completely traversed, starting and ending at any point on C, is equal to 360 degrees.

The (possibly disconnected) bounded region P of the plane defined by C is called a weakly-simple polygon (if you walked along C visiting the vertices in clockwise order, the interior of the polygon would always be to your right). In this paper we will use a particular type of weakly-simple polygon, which we call a weakly-in-simple polygon.

Definition 5 We say that two angles ang(a, b, c) and ang(d, b, e) incident at vertex b overlap if the intersection of the interiors of the cones that they generate is not empty.

Definition 6 Let P be a weakly-simple polygon. P is said to be a weakly-in-simple polygon provided that the internal angles of P do not overlap.

The polygons in Fig. 2 may help to clarify this definition. A modification of the algorithms described in Sect. 2, which we describe next, allows them to decompose this kind of non-simple polygon. In the procedures that generate the next convex polygon of the partition, a vertex v is considered to be inside the polygon generated by the vertices in L (which implies that some vertices of L will have to be removed) if v lies in its interior or on its boundary. We need to require this since, otherwise, we may make it impossible to resolve some vertices (see Fig. 3a). The modification is the following: now, a vertex v on the boundary of the polygon generated by the vertices in L is considered outside that polygon (so that no removal of vertices is necessary) if the co-ordinates of v coincide with those of one
Fig. 3 (a) The diagonal v5 v1 does not allow us to resolve v8 and v9. (b) The diagonal v4 v1 does not prevent any vertex from being resolved
of the vertices in L. In this way, double vertices (with the same co-ordinates) do not exclude each other from forming part of a convex polygon. No further modifications are necessary. For instance, in Fig. 3b, v10 and v11 are now considered to be outside the convex polygon generated by L = [v1, v2, v3, v4], so we can draw the diagonal d = v4 v1. The reason the modification allows us to decompose weakly-in-simple polygons is that the internal angles of double vertices do not overlap, so the diagonals and edges incident to one of the double vertices cannot intersect the diagonals or edges incident to the other.
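A minimal sketch of this modified test, building on the helpers from Sect. 2 (the name countsAsInside is ours, and exact coordinate equality is assumed for detecting double vertices):

    // A vertex v counts as "inside" the polygon generated by L (forcing removal
    // of vertices from L) only if it is inside or on the boundary AND it does
    // not duplicate a vertex already in L; double vertices count as outside.
    bool countsAsInside(const std::vector<Point>& L, const Point& v) {
        if (!insideConvexClockwise(L, v))
            return false;                       // genuinely outside
        for (const Point& u : L)
            if (u.x == v.x && u.y == v.y)
                return false;                   // double vertex: treated as outside
        return true;
    }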
4 Decomposing polygons with holes

To the best of our knowledge, apart from the algorithms described in Held (2001) and the implementation of Seidel's algorithm (Seidel 1991) given in Narkhede and Manocha (1995) to triangulate polygons with holes, Fernández (1999) is the only work where algorithms for decomposing a polygon with holes are described. In this paper, the most practical and efficient of the two algorithms presented in Fernández (1999) is described. First, we define what we mean by a polygon with holes.

Definition 7 Let P be a polygon enclosing other polygons H1, ..., Hh, each of which is empty (does not enclose any other polygon). Suppose the following conditions hold:

(h1) ∂P ∩ ∂Hi = ∅, ∀i = 1, ..., h.
(h2) Hi ∩ Hj = ∅, ∀i ≠ j.

The region of the plane interior to or on the boundary of P, but exterior to or on the boundary of H1, ..., Hh, is called polygon P with holes H1, ..., Hh.
Algorithm 1 AbsHol
Input: P, H1, ..., Hh
Output: LPCP
 1: Read the vertices of P (border polygon) and H1, ..., Hh (holes).
 2: LPCP ← ∅, Q ← P.
 3: while Q is not convex do
 4:   Obtain a convex polygon C of the partition of Q using one of the procedures described in Sect. 2, modified as in Sect. 3. Let d = vi vf be the diagonal generating the polygon.
 5:   if d is cut by a hole or there is a hole inside C then
 6:     if d is not cut by a hole then
 7:       d ← vi vhole, where vhole is a vertex of one of the holes inside C.
 8:     end if
 9:     (d = vi vf, H) ← DrawTrueDiagonal(d, C).
10:     Absorption of H: modify the list Q of vertices of the border polygon, adding counterclockwise, after vi, all the vertices of H (starting and finishing with vf), and then vi again.
11:   else
12:     LPCP ← LPCP ∪ {C}, Q ← Q \ C.
13:   end if
14: end while
15: LPCP ← LPCP ∪ {Q}.
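The absorption in Step 10 is essentially a list-splicing operation. The following fragment is a minimal sketch of it, with hypothetical names, reusing Point from the sketches of Sect. 2 and assuming the hole's vertices are stored in the counterclockwise order in which they must be inserted:

    #include <list>
    #include <vector>

    // Splice hole H into the border polygon Q right after vi: insert the
    // vertices of H starting and finishing with vf (m + 1 insertions), and
    // then vi again, so the diagonal vi-vf acts as a double-edged bridge.
    // itVi points at vi inside Q; kF is the index of vf within H.
    void absorbHole(std::list<Point>& Q, std::list<Point>::iterator itVi,
                    const std::vector<Point>& H, std::size_t kF) {
        Point vi = *itVi;
        auto pos = std::next(itVi);                  // insert just after vi
        const std::size_t m = H.size();
        for (std::size_t j = 0; j <= m; ++j)         // vf, ..., and vf again
            pos = std::next(Q.insert(pos, H[(kF + j) % m]));
        Q.insert(pos, vi);                           // ... and vi again
    }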
Fig. 4 Procedure DrawTrueDiagonal
The algorithm for the decomposition of a polygon with holes into convex polygons, named AbsHol (see Algorithm 1), reduces the problem with holes to the decomposition of weakly-in-simple polygons (without holes). Following the notation in Definition 7, let P be a polygon with holes H1, ..., Hh. The algorithm starts by applying any of the procedures described in Sect. 2, modified as in Sect. 3, to the border polygon P. Each convex polygon generated is cut off from P and stored in a list LPCP, together with the rest of the convex polygons generated so far by the algorithm. Let Q denote the remaining border polygon still to be decomposed. If a diagonal d = vi vf is intersected by any of the holes (see Fig. 4a), or the polygon that it generates contains a hole, then the algorithm draws a new diagonal (not intersected by any of the holes nor by the border polygon) from the initial vertex of the diagonal, vi, to one of the vertices of one of the holes, using the procedure DrawTrueDiagonal, which will be explained in detail later.
Let d = vi vf be the true diagonal so obtained, and let H be the hole containing vf. Using d as a bridge between the border polygon and the hole, H is then "absorbed" by Q, that is, H becomes part of the border polygon. Notice that the new border polygon Q is a weakly-in-simple polygon (see Fig. 4b). The algorithm then goes on generating convex polygons with the new Q until it is convex. The steps of AbsHol are detailed in Algorithm 1 (the symbol ← denotes an assignment).

Every time the algorithm draws a diagonal d and generates a polygon C, we have to check whether the diagonal is intersected by any of the non-absorbed holes. To do this, it is not necessary to check all the non-absorbed holes. If we denote by Ri the smallest box (rectangle with sides parallel to the axes) containing the hole Hi, i = 1, ..., h, and by RD the smallest box containing d, then the check can be restricted to the non-absorbed holes Hi for which RD ∩ Ri ≠ ∅. If Hi is a hole for which RD ∩ Ri ≠ ∅ holds, then to check whether d is intersected by Hi we check, for each edge of Hi, whether d intersects the edge, using the technique of signed areas (see, for instance, Sect. 1.5 of O'Rourke 1994). Alternatively, a ray shooting technique can be applied (Havran and Purgathofer 2003). If none of the holes cuts d, then we have to check whether C contains a non-absorbed hole. Let RC be the smallest box containing C. Only the non-absorbed holes Hi for which Ri ⊆ RC can be contained in C. Let us suppose that Hi is a hole for which Ri ⊆ RC. Then, since d is not cut by any hole, Ri is inside C if and only if one of the vertices of Ri is inside C. This can easily be examined, just by choosing a vertex of Ri and checking whether it is to the right of each of the edges of C.

The procedure DrawTrueDiagonal is described in Algorithm 2.

Algorithm 2 Procedure DrawTrueDiagonal
Input: (d, C)
Output: (d, H)
1: Read d = vi vf and the vertices of C.
2: while d is intersected by the holes do
3:   Find all the edges of holes which intersect d, and calculate the corresponding intersection points.
4:   Find the intersection point pc closest to vi. Let e denote the edge containing pc, and let es denote the endpoint of e that lies inside C and is closest to vi.
5:   d ← vi es.
6: end while
7: d is now a true diagonal. Set H equal to the hole containing es.

Notice that in Step 4 of Algorithm 2, the vertex es chosen as the final vertex of the new provisional diagonal must lie inside C. In this way we are sure that the new diagonal vi es is not cut by any of the edges of the border polygon P. Nevertheless, it may be cut by edges of holes inside C (see Fig. 4a), even by edges of the polygon containing es. That is why the process is repeated with the new provisional diagonal until it is not cut by any hole.
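For reference, the signed-area intersection test and the bounding-box filter mentioned above can be sketched as follows, reusing Point and signedArea2 from the sketches in Sect. 2 (the names Box, boxesOverlap, and segmentsCross are ours; degenerate collinear cases are ignored in this sketch):

    struct Box { double xmin, ymin, xmax, ymax; };

    // Cheap filter: d needs testing against hole Hi only if the box RD of d
    // overlaps the box Ri of Hi.
    bool boxesOverlap(const Box& a, const Box& b) {
        return a.xmin <= b.xmax && b.xmin <= a.xmax &&
               a.ymin <= b.ymax && b.ymin <= a.ymax;
    }

    // Proper intersection of segments ab and cd via signed areas (cf. Sect. 1.5
    // of O'Rourke 1994): the endpoints of each segment must lie on opposite
    // sides of the other segment's supporting line.
    bool segmentsCross(const Point& a, const Point& b,
                       const Point& c, const Point& d) {
        return (signedArea2(a, b, c) > 0) != (signedArea2(a, b, d) > 0) &&
               (signedArea2(c, d, a) > 0) != (signedArea2(c, d, b) > 0);
    }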
The theoretical running time of AbsHol can be computed as follows. Assume that we want to decompose a polygon P enclosing h polygonal holes, H1, ..., Hh. The border polygon P has n vertices (r notches and n − r convex). Analogously, polygon Hi has ni vertices (ri notches and ni − ri convex). Hence, considering all the holes together, they have nH = n1 + ... + nh vertices and rH = r1 + ... + rh notches. Thus, in all, considering the border polygon and the holes, we have N = n + nH vertices and, from the point of view of the polygon to be decomposed, R = r + (nH − rH) of them are notches and N − R are convex. For generating a convex decomposition of the polygon obtained from P by absorbing the holes we need O(NR^2(N/2 − R) + (5/2)NR^2 − 3R^2) time. But every time that we draw a diagonal, we have to make sure that the diagonal is not cut by a hole and that the polygon generated by the diagonal does not contain any hole; otherwise we absorb the corresponding hole. Generating the rectangle RD containing the diagonal d needs constant time, generating the rectangle Ri containing the hole Hi needs O(ni) time, checking whether Ri intersects RD needs constant time, checking whether Hi cuts d and computing the corresponding cut-points needs O(ni) time, and finding the cut-point closest to the initial vertex of the diagonal d needs O(ni) time. Hence, for a given hole, this process needs O(ni) + O(ni) + O(ni) = O(ni). We may need to repeat this process for all the holes, hence we may need O(n1) + ... + O(nh) = O(nH) time. The process described has to be repeated with each new provisional diagonal, at most n1 + ... + nh = nH times, hence we may need O(nH^2) time. Absorbing the hole can be done in O(nmax) time, where nmax = max{ni : i = 1, ..., h}. Hence the whole process, from the moment the initial diagonal d is generated until the hole is absorbed, needs O(nH^2) + O(nmax) = O(nH^2) time. The absorption has to be done h times, but it may be necessary to repeat the process O(N) times. Thus, the total complexity of the algorithm is O(NR^2(N/2 − R) + (5/2)NR^2 − 3R^2) + O(N·nH^2).

The algorithm for the decomposition of polygons with holes that we have described may produce partitions with inessential diagonals, since it uses algorithms which in turn may produce inessential diagonals. The merging process described in Fernández et al. (2000) and mentioned in Sect. 2 can be used to remove the inessential diagonals of a partition. No major modifications are required; just notice that the external angles of the holes are the corresponding internal angles of the polygon with holes, and that we also have to check whether the diagonals drawn with the procedure DrawTrueDiagonal to absorb holes can be removed. The following results about the quality of the partitions after applying the merging process can be obtained.

Theorem 3 Let P be a polygon containing h holes, and let r be its number of notches (considering both vertices of the border polygon and vertices of the holes). Let OPT be the fewest number of convex subpolygons into which such a polygon may be partitioned. Then ⌈r/2⌉ + 1 − h ≤ OPT ≤ 2r + 1 − h.

Proof We know that the smallest number of diagonals required for resolving the notches is ⌈r/2⌉, and that in the worst case we need at most 2r (see Fernández et al. 2000). On the other hand, the first diagonal incident to any of the holes does not produce
Fig. 5 Optimal convex decompositions: (a) r = 8 and OPT = ⌈r/2⌉ + 1 − h = 4; (b) r = 32 and OPT = ⌈r/2⌉ + 1 − h = 12; (c) r = 4 and OPT = 2r + 1 − h = 8; (d) r = 10 and OPT = 2r + 1 − h = 8
any convex polygon; it just connects the hole with another hole or with the border polygon. So, h of the diagonals do not produce any convex polygon. As we can see in Fig. 5, both bounds can be attained for some polygons.

Corollary 4 The number of diagonals (including the diagonals connecting the holes) of any decomposition without inessential diagonals is within four times the number of diagonals of an optimal decomposition. In particular, this holds for any convex decomposition after applying the merging process.

Proof It suffices to make the following observation: to resolve a reflex angle we need at most two diagonals, which would be essential for that vertex (see Fernández et al. 2000), so to decompose a polygon into convex polygons we need at most 2r (essential) diagonals; on the other hand, a diagonal can resolve at most two reflex angles, so
to decompose a polygon into convex polygons we need at least ⌈r/2⌉ diagonals. Since 2r ≤ 4⌈r/2⌉, the assertion follows.

However, as we will see in the next section, the performance of the algorithm is much better than suggested by Corollary 4.
5 Computational experiments

Most of the articles dealing with convex decompositions do not include computational experiments. This is due, in part, to the fact that some of the algorithms proposed in the literature are theoretical, their only purpose being to offer low complexity, but in practice they are extremely difficult to implement in any programming language (see Chazelle and Dobkin 1985). Another reason is that there are no published test problems with which to compare running times and the cardinality of the partitions when the algorithms are not exact.

To the best of our knowledge, only two papers include computational studies. Held (2001) compared four existing algorithms for the triangulation of polygons with his own code, named FIST. Notice that the aim of all the tested algorithms is to decompose the polygon into triangles; furthermore, all of them are designed to handle simple polygons, with the exception of Narkhede and Manocha's implementation of Seidel's algorithm (which, in its current implementation, can handle polygons with holes) and FIST, which can triangulate polygons with holes and other non-simple polygons. As suggested in the introduction, if we apply the process for removing inessential diagonals to the triangulations obtained by these two algorithms, we obtain convex decompositions whose bound with respect to the optimum is the same as that of the algorithms proposed in this paper. However, similarly to what happened in Fernández et al. (2000) for the case without holes, it is to be expected that those decompositions will not only be more time-consuming to obtain (due to the process for removing inessential diagonals, which has to check many more diagonals than in the decompositions obtained with the algorithms presented in this paper) but also, and most importantly, will have higher cardinality. The results in Sect. 5.2 confirm this point. This is not surprising: the algorithms presented in this paper try to generate convex polygons with as many vertices as possible, and so the partitions they produce have low cardinality; the triangulating algorithms, on the other hand, follow no strategy with that aim apart from the merging process (which just removes the inessential diagonals of the triangulation).

The other paper which includes computational studies is Fernández et al. (2000) (see also Fernández 1999), and in a sense the results given next can be seen as a continuation of those shown there for polygons without holes. To carry out our study we have generated random polygons without and with holes by means of RPG (Random Polygon Generator, Auer and Held 1996), a tool designed for the generation of pseudorandom polygons. In particular, we have generated 5 polygons with each of the algorithms 2-opt Moves (o), Steady Growth (g), Quick Star (q), Space Partitioning (s), and Triangle (t) of RPG; another 25 polygons were generated by smoothing the output of polygons obtained
with the mentioned procedures twice by means of RPG's algorithm Smooth (the corresponding types of polygons will be denoted by oS2, gS2, qS2, sS2, and tS2, respectively; Smooth produces a new polygon with twice the number of vertices by replacing each vertex vi of the polygon by the two new vertices (vi−1 + 3vi)/4 and (vi+1 + 3vi)/4), and another 25 polygons by applying Smooth four times (denoted by oS4, gS4, qS4, sS4, and tS4, respectively). This has been done to generate polygons with 400, 2000, and 8000 vertices (75 polygons of each size). For polygons with more vertices we did the same, except that we did not generate polygons of type 'g', since the resulting polygons with holes were not simple. Hence, we have generated 70 polygons with 16000 vertices and another 70 polygons with 32000 vertices. In all, we have generated 365 polygons without holes and another 365 with holes. The polygons have complicated shapes, making them appropriate for testing decomposition algorithms. Notice that the complexity of a polygon depends not only on its number of vertices, but also on its number of notches and on its shape. All the computational results have been obtained under Linux on a Pentium IV with 1.6 GHz CPU and 2 GB memory. The codes, which can be obtained from the authors upon request (as well as the test problems), were implemented in C++. All the CPU times are given in seconds.

5.1 Decomposing polygons without holes

In Fernández et al. (2000), it was already shown that the algorithm MP3 described in Sect. 2 was better than Hertel and Mehlhorn's algorithm, another simple practical algorithm for decomposing polygons without holes into convex polygons, since the cardinality of the partitions that it produces is lower and its CPU times are smaller. However, those results were obtained by decomposing polygons with up to 150 vertices. The aim of this subsection is twofold. First, we want to investigate whether those results remain valid for bigger polygons. Second, we want to compare the cardinality of the partitions obtained by both algorithms with the cardinality of the optimal decompositions. To do so, we have decomposed the polygons using four different algorithms, namely:

– MP5, a modification of MP3 described in Sect. 2 (we have used neither MP3 nor MP4, because MP4 is better than MP3, and preliminary studies showed that the implementation of MP5 was quicker than that of MP4, although both produced partitions of similar cardinality).
– MP5 with the merging process. It will be denoted by '5m'.
– Hertel and Mehlhorn's (1983) algorithm (in particular, we have used the implementation in CGAL, function approx_convex_partition_2, which removes inessential diagonals). It will be denoted by 'HM'.
– Greene's (1983) algorithm, a decomposition algorithm which obtains decompositions of minimal cardinality; in particular, we have used the implementation in CGAL (function optimal_convex_partition_2). It will be denoted by 'Gr'.

For the sake of brevity, we present the results for the 365 polygons grouped according to the number of vertices. The results are summarized in Table 1. For every
Table 1 Decomposition of polygons without holes

Vertices   notch   C(MP5)   C(5m)   T(5m)    C(HM)/C(5m)   T(HM)/T(5m)   C(Gr)/C(5m)   T(Gr)/T(5m)
400        181     174      151     0.007    1.25          39.64         0.87          6222.85
2000       954     918      798     0.070    1.22          35.43         0.85          11045.46
8000       3855    3685     3197    1.137    1.22          17.88         –             –
16000      7727    7097     6230    2.906    1.23          20.57         –             –
32000      15442   14130    12415   11.905   1.23          15.54         –             –
group of polygons we give, after the mean number of notches in the polygons (notch), the mean number of polygons in the decompositions obtained by MP5 and 5m (C(MP5) and C(5m), respectively) and the mean CPU time employed by 5m, in seconds (T(5m)). For the other two algorithms we give the ratio between the number of polygons that they generate (C(HM) and C(Gr), respectively) and C(5m), and the ratio between the CPU time that they need (T(HM) and T(Gr), respectively) and T(5m). However, since Greene's algorithm was so time-consuming, we have only been able to decompose the polygons with up to 2000 vertices with it. That is why for the bigger problems we do not present the corresponding results for that algorithm.

As we can see, the merging process is very effective at reducing the cardinality of the partitions produced by MP5 (see columns C(MP5) and C(5m)), with relative improvements around 13% for all the groups of polygons. On the other hand, the use of the merging process is not too time-consuming: for instance, on average, only 6.8% of the CPU time employed by 5m on the polygons with 32000 vertices is due to the merging process. Moreover, since after the merging process the partitions do not contain unnecessary edges, it is theoretically guaranteed that they are within four times the optimal solution. However, the performance of 5m is much better in practice. As we can see, for the polygons with 400 and 2000 vertices, the mean of the ratio C(Gr)/C(5m) is 0.86, which is not only very far from the theoretical lower bound 0.25, but close to the ratio 1.0 that would be produced by an algorithm yielding optimal decompositions. As expected, Hertel and Mehlhorn's algorithm produces decompositions with higher cardinality: on average, the ratio C(HM)/C(5m) is 1.23, i.e., it produces 23% more polygons than 5m, and this ratio is similar regardless of the number of vertices of the polygons.

The average time needed by 5m is small (see column T(5m)), although for a fixed number of vertices it may vary considerably depending on the heuristic used to generate the polygon. For instance, among the polygons with 32000 vertices, those generated with 2-opt Moves (type 'o') are decomposed in 39.05 seconds, whereas 3.14 seconds are required for those of type 'qS4'. In any case, 5m is much quicker than HM (see column T(HM)/T(5m)), and although the difference tends to diminish as the number of vertices increases, 5m is 15 times quicker than HM for the polygons with 32000 vertices. Of course, Greene's algorithm is considerably slower (see column T(Gr)/T(5m)): it needs nearly 13 minutes to decompose polygons with 2000 vertices, whereas 5m only needs 0.070 seconds, but recall that Greene's algorithm obtains optimal decompositions.
Table 2 Decomposition of polygons with holes

Vertices   V.H.   notch   C(A5m)   T(A5m)   C(Fm)/C(A5m)   T(Fm)/T(A5m)   C(Sm)/C(A5m)   T(Sm)/T(A5m)
400        97     192     163      0.007    1.03           1.90           1.13           5.38
2000       326    973     818      0.076    1.04           1.51           1.11           1.83
8000       798    3882    3228     1.237    1.06           1.18           1.14           1.62
16000      1208   7766    6261     3.194    1.07           1.73           1.14           1.72
32000      1755   15506   12492    13.151   1.07           1.66           1.15           1.71
5.2 Decomposing polygons with holes

As already mentioned in Sect. 1, apart from the algorithm AbsHol described in this paper, the only available algorithms in the literature able to obtain approximately optimal convex decompositions of polygons with holes seem to be triangulating algorithms that can cope with polygons with holes, followed by a process for removing inessential diagonals. To compare the different approaches, in this subsection the polygons have been decomposed using three different algorithms:

– AbsHol, in which MP5 is used as the procedure for generating convex polygons, with the merging process. It will be denoted by A5m.
– FIST, the triangulating algorithm by Held (2001) (we used the default triangulating option '-top'), with the merging process, denoted by 'Fm'.
– Narkhede and Manocha's (1995) implementation of Seidel's (1991) triangulating algorithm, with the merging process, denoted by 'Sm'.

We have to mention that, contrary to our algorithm, for Fm and Sm the merging process takes more time than the corresponding triangulating algorithm. This is because triangulating is very fast, but the number of inessential diagonals is much larger than in our case; it is also because the diagonals first have to be found: the triangulating algorithms do not store the diagonals, whereas MP5 saves them during the decomposition process. This means that the triangulating approaches could be somewhat faster if the merging process were done in the same program and the diagonals were saved during the triangulating process. Note also that now we cannot compare the cardinality of the partitions produced by the algorithms with the cardinality of the optimal decompositions, since there is no algorithm in the literature designed to compute the latter.

The results are summarized in Table 2, grouped again according to the number of vertices of the polygons. After the average number of vertices which are part of the holes (V.H.) and the number of notches in the polygon (considering both the border polygon and the holes), we give, as in the case without holes, the mean number of polygons in the decompositions obtained by A5m, C(A5m), and the mean CPU time employed by the algorithm, in seconds, T(A5m). For the other two algorithms we give the ratio between the number of polygons that they generate (C(Fm) and C(Sm), respectively) and C(A5m), and the ratio between the CPU time that they need (T(Fm) and T(Sm), respectively) and T(A5m).
Table 3 Decomposition of polygons with holes with 32000 vertices

Proced.   V.H.   notch   C(A5m)   T(A5m)   C(Fm)/C(A5m)   T(Fm)/T(A5m)   C(Sm)/C(A5m)   T(Sm)/T(A5m)
o         1119   16002   16789    42.901   0.92           0.50           0.99           0.50
oS2       1496   16028   11569    9.793    1.03           2.20           1.17           2.20
oS4       3249   15950   10846    6.600    1.18           3.29           1.25           3.29
gS2       1410   15068   11474    15.482   1.10           1.40           1.14           1.11
gS4       4987   15243   10345    5.707    1.21           3.80           1.25           3.82
q         105    14286   16691    22.007   1.04           1.10           1.06           1.06
qS2       173    14317   10446    6.879    1.16           3.18           1.20           3.11
qS4       801    14458   9176     4.091    1.30           5.31           1.34           5.44
s         531    16065   17317    20.610   0.97           1.05           0.99           1.06
sS2       1887   16017   11244    6.675    1.10           3.26           1.20           3.23
sS4       3113   15980   10425    4.935    1.21           4.36           1.29           4.40
t         690    15886   16612    26.645   0.88           0.81           0.98           0.82
tS2       2710   15890   11265    7.615    1.02           2.86           1.19           2.88
tS4       2227   15802   10484    4.638    1.17           4.67           1.28           4.68
Aver      1755   15506   12492    13.151   1.07           1.66           1.15           1.71
As the results show, A5m is the best algorithm: on average it produces the partitions with the lowest cardinality (see columns C(Fm)/C(A5m) and C(Sm)/C(A5m)) and needs less time (see columns T(Fm)/T(A5m) and T(Sm)/T(A5m)). Notice, however, that the differences with the triangulating algorithms are now much smaller. This is not due to the introduction of holes. In fact, if we compare the results obtained by 5m for the polygons without holes with those obtained by A5m for the polygons with holes (see Tables 1 and 2), we can see that there is only a slight increase in the number of polygons and in the CPU time, and this is related to the increase in the number of notches. The reason is rather that the triangulating algorithms used in this subsection produce triangulations in which the minimum interior angle of the triangles is, on average, not as small as in those produced by the implementation of Hertel and Mehlhorn's algorithm used in the previous subsection, and the merging process can then remove more inessential diagonals from these better-quality triangulations.

In Table 3 we can see the results for the polygons with 32000 vertices grouped according to the types of polygons generated by RPG. Analyzing these more detailed results, we can see that the smoother the polygon, the better A5m performs, both in the cardinality of the partitions that it produces and in the CPU time that it needs to decompose the polygon. Similar conclusions can be inferred for the other sizes of polygons.

6 Conclusions

In this paper, first, we have presented practical algorithms for decomposing a polygon without holes into convex polygons by diagonals, and their theoretical running
Fig. 6 Decomposition of Finland
times have been assessed. The procedures have then been modified so that they can decompose weakly-in-simple polygons. Using them, we have presented greedy-type algorithms for the decomposition of polygons with holes into convex polygons that produce partitions of low cardinality in a small amount of time, which is the target of this paper; for some application areas, such as facility location, achieving a modest load on the CPU time is more important than finding minimum decompositions. The idea of the algorithms is to apply the procedures for polygons without holes to the border polygon and, if in the process a diagonal is intersected by a hole, to let the hole be absorbed by the border polygon, producing a weakly-in-simple polygon. The procedure for removing inessential diagonals described in Fernández et al. (2000) has also been adapted to handle decompositions of polygons with holes.

The computational studies on polygons without holes have corroborated the two findings in Fernández et al. (2000): (i) the use of the merging process is recommended, since it is quick and can reduce the number of polygons in the partitions considerably, and (ii) the decomposition algorithms presented in this paper produce approximately optimal convex decompositions of lower cardinality than those obtained by triangulating algorithms after removing inessential diagonals, and they do so using less CPU time. Furthermore, in this paper we have also compared the cardinality of the decompositions obtained by the algorithms with the cardinality of the optimal decompositions: although after the merging process the partitions are theoretically guaranteed to be within four times the optimal solution, the computational studies carried out show that the performance is much better in practice. On the other hand, the introduction of holes does not make the decomposition problem more difficult for our absorption-based algorithms, either in CPU time or in the cardinality of the partitions; the difficulty comes mainly from the number of notches in the polygon and from its shape. The results have also shown that the decomposition algorithms presented in this paper are better on average than the other triangulating algorithms available in the literature that can handle polygons with holes, and that the smoother the polygon, the better the behavior of the algorithms.
When solving constrained location problems with optimization techniques such as branch-and-bound, many of the subproblems (subpolygons) can be eliminated easily. Hence, for those problems, decompositions of minimal cardinality are not a must. In fact, since obtaining optimal decompositions is time-consuming, the use of such decompositions in the whole optimization process is not recommended; it is better to use quick algorithms which obtain approximately optimal convex decompositions, such as the ones presented in this paper. As a case study, Fig. 6 shows the decomposition of a polygon approximating the shape of Finland, obtained with the recommended algorithm A5m. The non-convex polygon considered has 7 holes and 355 vertices, 184 of them notches. The number of polygons in the partition is 154, and the CPU time employed was 5.2 milliseconds. Notice also that the greater the precision with which the polygon approximates the feasible set, the smoother the polygon will be and, hence, the better our algorithms will behave.

The algorithms presented in this paper can be improved in several ways. For instance, to check whether a diagonal is intersected by a hole, a ray shooting technique could be applied, and the check to see whether a vertex is inside a convex polygon can be done in O(log n) time. Those modifications would make the algorithms even faster, but this is left for future research.

Acknowledgements The authors are indebted to Martin Held, who provided us with the RPG and FIST codes used in the computational studies; his help is gratefully acknowledged. We also want to thank José Miguel Díaz-Báñez for his comments on the complexity of the algorithms, and the referees for their valuable remarks.
References

Auer T, Held M (1996) RPG—Heuristics for the generation of random polygons. In: Proc 8th Canad conf comput geom. Carleton University Press, Ottawa, pp 38–44
Chazelle B, Dobkin DP (1985) Optimal convex decompositions. In: Computational geometry. Elsevier, Amsterdam, pp 63–133
Denardo EV (1982) Dynamic programming: models and applications. Prentice-Hall, New York
Drezner Z (ed) (1995) Facility location. A survey of applications and methods. Springer series in operations research. Springer, New York
Elmaghraby SE (1970) The concept of 'State' in discrete dynamic programming. J Math Anal Appl 29:523–557
Fernández J (1999) New techniques for design and solution of continuous location models (in Spanish). PhD dissertation, Dpto. Estadística e Investigación Operativa, Universidad de Murcia, Murcia, Spain
Fernández J, Cánovas L, Pelegrín B (1997) DECOPOL—codes for decomposing a polygon into convex subpolygons. Eur J Oper Res 102:242–243 (ORSEP section)
Fernández J, Cánovas L, Pelegrín B (1998) Decomposing a polygonal region with holes into convex polygonal subregions. In: X meeting of the Euro working group on locational analysis (EWGLA10), Murcia, Spain
Fernández J, Cánovas L, Pelegrín B (2000) Algorithms for the decomposition of a polygon into convex polygons. Eur J Oper Res 121:330–342
Greene DH (1983) The decomposition of polygons into convex parts. In: Preparata FP (ed) Computational geometry. Adv comput res, vol 1. JAI Press, Greenwich, pp 235–259
Havran V, Purgathofer W (2003) On comparing ray shooting algorithms. Comput Graph 27:593–604
Held M (2001) FIST: Fast industrial-strength triangulation of polygons. Algorithmica 30:563–596
Held M (2008) Private communication
Hertel S, Mehlhorn K (1983) Fast triangulation of simple polygons. In: Proc 4th internat conf found comput theory. Lecture notes in computer science, vol 158. Springer, New York, pp 207–218
Keil JM (1985) Decomposing a polygon into simpler components. SIAM J Comput 14:799–817
Keil JM, Snoeyink J (2002) On the time bound for convex decomposition of simple polygons. Int J Comput Geom Appl 12:181–192
Lien JM, Amato NM (2004) Approximate convex decomposition of polygons. In: Proceedings of the 20th annual ACM symposium on computational geometry, pp 17–26
Lingas A (1982) The power of non-rectilinear holes. In: Automata, languages and programming, Aarhus, 1982. Springer, Berlin, pp 369–383
Narkhede A, Manocha D (1995) Fast polygon triangulation based on Seidel's algorithm. In: Paeth AW (ed) Graphics gems V. Academic Press, San Diego, pp 394–397
O'Rourke J (1994) Computational geometry in C. Cambridge University Press, Cambridge
O'Rourke J (1996) Computational geometry column 29. Int J Comput Geom Appl 6:507–511
Seidel R (1991) A simple and fast incremental randomized algorithm for computing trapezoidal decompositions and for triangulating polygons. Comput Geom Theory Appl 1:51–64
Toussaint GT (1989) Computing geodesic properties inside a simple polygon. Rev d'Intell Artif 3:9–42