For some time I've been working on an alternative approach to 'simplifying' polygons (where polygons in clipping solutions are split when vertices touch etc.) and recently got sidetracked implementing triangulation. The triangulation code now works, but it's still some way off being ready for alpha testing. Nevertheless, before I spend much more time on this, I thought I'd gauge the likely interest in triangulation as an extension to the Clipper library. Would you use triangulation if it were in the library and, if so, how would it be useful to you?
Last edit: Angus Johnson 2015-03-17
Hi Angus, and thanks for this great library. Can we get in touch, please? I have some questions about its usage; if you could reach me on Twitter we could chat a bit (if you have a free 20 minutes).
https://twitter.com/oliveridario89
We could use Twitter messages.
Basically I'm building a polygonal level editor. I already have triangulation working (with some tweaks), and I need to use polygon offsetting in order to add "decorative meshes". So basically I need a walkthrough on the best way to do this using your library along with Unity3D.
What I need is the ability to offset/inset a polygon and take, for each edge, a trapezoid so I can use it as a "rectangle" and put some nice crisp texture in it.
Basically, after some thought, I just need to find the correspondence between an offset edge and its original edge (in some cases the number of offset edges is less than the number of original edges), so it would be nice to have a function that maps an offset edge to its original counterpart.
In some other cases there are multiple edges linked to a single vertex (rounded offsets on acute angles); I also have to identify these cases.
Any idea where the best place is to hack the Clipper library to do that?
Last edit: Demone 2017-04-07
Maybe I found a solution; I'm implementing it now, hoping it will work. Great library, anyway. I just started polishing it to use Unity C# code conventions.
Angus, please post the triangle code.
Last edit: Ignorant 2017-05-18
That's fine to hear! I would definitely use Clipper's triangulation and truly simple polygons to make a Three.js (WebGL) workflow simpler, faster and less error prone. The workflow needs truly simple polygons, which Clipper cannot currently guarantee; to overcome this the polygons have to be offset by a small negative amount, which slows things down. And after this Three.js has to do a triangulation (https://www.google.fi/search?q=three.js+triangulate), which also has some performance and degenerate-case issues, especially when dealing with holes. I assume that if simplification and triangulation are handled at an early stage inside Clipper, we get a significant speedup and an increase in reliability.
So this sounds like a very welcome addition.
An example of our workflow (more here):
Angus,
I have been waiting for that from you. No practical use for me, only exploratory.
I have been playing around with another Vatti implementation with a curious bug (some output triangles are eaten away). What I liked was that it has no holes AND no merge operations on output contours.
However, the triangulated collection shows intersections in practice, just like the clipped contours. Resequencing of non-horizontal edge intersections is still required!
We are currently using Clipper and poly2tri in our C# project. If Clipper had its own triangulation it would be easier for us.
Chiming in a bit late to say that triangulation in Clipper would be awesome. It's the most solid polygon cruncher around, and having that kind of stability in my vector engine would be some peace of mind as compared to the finickier floating-point algorithms which are otherwise available.
<3 <3
It's still on the drawing board but I've been side-tracked by other things for the time being.
Also, to do it properly I first have to remove mini self-intersections that arise from rounding, and that's proving much trickier than I'd expected. It's quite doable, but the code required is considerable.
Hey, Angus --
http://stackoverflow.com/questions/18249055/tesselation-in-opengl
I found this old topic while researching libtess2 (for a long time I've been using a copy of GLU libtess that I modified to use floats). Pending the emergence of a triangulating Clipper, what are your impressions of that library?
"Also, to do it properly I first have to remove mini self-intersections that arise from rounding"
I have been playing with 6.2.1 C#. I simply converted all ints to doubles, and it seems to work without any tolerance/ULP checks. I have to play a bit more to be sure that we can say goodbye to ints and Int128s.
Timo,
You are quite right.
However, integers have their own place in such algorithms, even when the methods are polluted with some doubles.
In the bad old days integers were used for big speedups on the processors CPU vendors offered.
(Now I hear that most Intel processors are faster with doubles than with 64-bit integers!)
64-bit integer code allows a much wider range of uniquely represented values compared to double.
I vote for both versions being available, at least for C/C++.
A double in JavaScript (as well as anywhere the IEEE 754 binary standard format is used) has
18,437,736,874,454,810,627 (2^64 - 2^53 + 3) distinct values of which 3 are special: NaN, -Infinity, +Infinity.
Of these
18,014,398,509,481,984 (2^53 * 2) are integers of which
9,007,199,254,740,992 (2^53) are positive integers and
9,007,199,254,740,992 (2^53) are negative integers.
(http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf)
(http://en.wikipedia.org/wiki/Floating_point#Internal_representation)
Int128 ranges from
−170,141,183,460,469,231,731,687,303,715,884,105,728
to
170,141,183,460,469,231,731,687,303,715,884,105,727
but to prevent overflows Clipper can handle "only"
9,223,372,036,854,775,806 distinct values using 128-bit math and
2,147,483,646 values using 64-bit math.
Of course not all 18,437,736,874,454,810,627 double values can be used, because of calculation overflows, but fortunately in the range where coordinates usually lie (about ±2500) there is plenty of precision. A little "problem" is that there is "too much" precision in the range −1 to 1. Whoever needs such precision there may be a nuclear scientist...
To my eyes the range of 2,147,483,646 integer values seems rather limited, and in many cases using slower 128-bit math is a necessity.
Angus said in https://sourceforge.net/p/polyclipping/discussion/1148419/thread/0abedc6a/?page=1 that a floating point range of 1.1e-162 to 6.7e+153 should be safe. Hmm, how many distinct values are in that range? Wait a minute...
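The counts above hinge on one boundary: 2^53, the point past which IEEE 754 doubles stop representing every integer exactly. This can be checked directly (Python floats are IEEE 754 doubles, the same format as a C# double or a JavaScript Number, so the demonstration carries over):

```python
# 2**53 is the last point where every integer is exactly representable
# as an IEEE 754 double; beyond it, only every second integer is.
LIMIT = 2**53  # 9,007,199,254,740,992

assert float(LIMIT - 1) != float(LIMIT)    # still distinguishable below the limit
assert float(LIMIT) + 1.0 == float(LIMIT)  # 2**53 + 1 rounds back down
assert float(LIMIT) + 2.0 != float(LIMIT)  # 2**53 + 2 is representable again

# A 64-bit integer type keeps exact values far beyond this point, which
# is the wider-range argument made for integer coordinates above.
assert (LIMIT + 1) - LIMIT == 1            # exact in integer arithmetic
```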
I think you're quoting me out of context. I was referring specifically to "overflow" errors close to the extremes of precision. Nevertheless I still believe that using floating point coordinates (in any range, not just at extremes of precision) will expose the library to (admittedly rare) error conditions. Again I refer to http://www.mpi-inf.mpg.de/~kettner/pub/nonrobust_cgta_06.pdf as one of many good articles that highlight the problems of numerical robustness with floating point calculations.
Angus, I may be a bit off here. I tried to figure out the allowed range when using doubles, the way you arrived at 9,223,372,036,854,775,806 as the allowed range when using 128-bit arithmetic.
I'm not sure if the (rare) error conditions are specifically due to using floating point calculations, because you said earlier in this thread that you "first have to remove mini self-intersections that arise from rounding", which I think I have also found earlier: there are cases where polygons need cleaning before executing offset operations. Do you mean that mini self-intersections also occur when executing boolean operations? I have not found any, although I have tested hundreds of thousands of different polygons (generated by rotating and skewing). If I remember right, all issues have been related to offsetting without cleaning the polygons first.
I have a problem "validating floating point operations". The only meaningful way I can think of is to generate hundreds of thousands of random polygons, execute boolean operations in both IntPoint Clipper and FloatingPoint Clipper, and compare the results. The problem is again rounding: to compare points one has to convert floats to ints and vice versa, and this divide/multiply operation causes rounding differences, which makes it impossible to compare them reliably. When I compare using a tolerance check the results are identical, but that may hide some errors. I have also tested the bug-report issues from the time, a few years ago, when only a floating point Clipper was available, and all of them succeeded in my C# testing environment, which uses Clipper 6.2.1 (everything int-related converted strictly to doubles without any additional tolerance checks).
Last edit: Timo Kähkönen 2015-04-30
Timo,
I do a lot of such testing to pass the time! Equality of area coverage is one such test. For integers an upscaling of at least 8 binary bits is usually required, so you lose that much of the integer range!
Comparing area coverage may be enough to quickly find differences between Float Clipper (FC) and Int Clipper (IC), although the comparison still has to be made using a tolerance, because scaling doubles to ints and back nearly always causes some rounding differences.
And still it may be like comparing apples to oranges:
FC gives an exact solution: [[{"X":3.0,"Y":2.0},{"X":1.0,"Y":2.0},{"X":0.50000000000001621,"Y":0.50000000000001243}]], but IC gives an approximated, truncated solution: [[{"X":3.0,"Y":2.0},{"X":1.0,"Y":2.0},{"X":0.5,"Y":0.5}]].
It seems that achieving numerical robustness via the workaround of converting doubles to ints and back is not a viable solution if exact solutions are needed. It is an acceptable workaround if the source coordinates are ints and there is no need for doubles at all, but in real life this is unusual. The oddness of this workaround becomes visible when offsetting is used: although the source coordinates are ints, without upscaling and downscaling the result of rounded joins is jaggy, because although the offset function uses doubles internally, it converts them to ints before returning the solution.
Angus referred earlier in this thread to "mini self-intersections that arise from rounding"; I assume those are due to double-to-int rounding, and my assumption is that these issues are not present in FC.
Angus refers to A Case Study in Geometric Nonrobustness. I downloaded the data from http://people.mpi-inf.mpg.de/~kettner/proj/NonRobust/nonrobust_06.tgz. There is a data folder. I have now tested many of those data sets in FC, and the only failures so far are due to the fact that C# System.Drawing.Drawing2D uses floats and cannot handle the double precision that FC produces.
It seems to me that Angus has done such good work creating and bug-fixing Clipper that the robustness issues described in the mentioned study have no equivalents in Clipper.
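The double/int round trip described above can be sketched in a few lines. This is only an illustration of the comparison problem, not Clipper code; `SCALE`, `snap` and `points_match` are hypothetical names:

```python
SCALE = 10**6  # hypothetical upscaling factor, not a Clipper constant

def snap(x: float) -> float:
    """Scale to the integer grid and back, as the int workaround does."""
    return round(x * SCALE) / SCALE

def points_match(a, b, tol=1.0 / SCALE):
    """Tolerance comparison, since exact comparison fails after snapping."""
    return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

# The FC vertex from the example above, and what IC-style snapping keeps:
exact = (0.50000000000001621, 0.50000000000001243)
snapped = (snap(exact[0]), snap(exact[1]))

assert snapped == (0.5, 0.5)         # the tiny excess is rounded away
assert exact != snapped              # exact comparison reports a difference
assert points_match(exact, snapped)  # tolerance comparison says "identical"
```

This is exactly the apples-to-oranges situation: the tolerance check makes FC and IC results comparable, but it can also hide genuine small errors.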
To each problem, its own suitable data type!
Regarding Clipper and floating point coords, I said a little while ago (above)...
"Nevertheless I still believe that using floating point coordinates (in any range, not just at extremes of precision) will expose the library to (admittedly rare) error conditions."
Anyhow, I'm coming around to the possibility that Clipper could be made robust with floating points after all.
Clipper uses a "sweep line" approach to solving clipping operations, where an imaginary line (scanline) passes from bottom to top over the clipping paths. At every position of the scanline in its upward travel, edges (path contours) are ordered left to right according to their positions relative to this scanline. Due to integer rounding (since Clipper coords are currently integers) there's a degree of imprecision in calculating both relative edge positions (and hence left-to-right order) and in assigning intersection points, and accommodating this imprecision is necessary to ensure robustness.
If, in swapping to floating point coords, they continue to be treated like integer coords with regard to imprecision (and they should, since they also represent discrete values on the number plane), then I think it's reasonable to hope that the clipping algorithm will remain robust.
I will investigate.
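A minimal sketch of the ordering problem described above (illustrative only, with a hypothetical `edge_x` helper; this is not Clipper code): at each scanline height every active edge has a current x, and with inexact coordinates two nearly coincident edges can only be ordered reliably within a tolerance.

```python
def edge_x(p1, p2, y):
    """x coordinate of segment p1-p2 at scanline height y (requires p1.y != p2.y)."""
    (x1, y1), (x2, y2) = p1, p2
    return x1 + (x2 - x1) * (y - y1) / (y2 - y1)

# Two edges that are geometrically "the same" up to one ULP of noise:
a = ((0.0, 0.0), (10.0, 10.0))
b = ((0.0, 0.0), (10.000000000000002, 10.0))

ax = edge_x(*a, 5.0)
bx = edge_x(*b, 5.0)

assert ax != bx              # exact comparison: the left-right order is ambiguous
assert abs(ax - bx) <= 1e-9  # tolerance comparison: treat the edges as coincident
```

Treating float coordinates with the same kind of tolerance currently applied to rounded integer coordinates is the accommodation for imprecision being described.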
Also, with regard to triangulation (the original reason for this thread), there will be a very considerable delay on that (if it happens at all), as I'm about to start another project which is completely unrelated to Clipper. The new project will take an indefinite amount of time, so sorry to disappoint anyone hoping to see triangulation any time soon.
"Anyhow, I'm coming around to the possibility that Clipper could be made robust with floating points after all."
This sounds good! I have continued my investigations using the newest C# Clipper, version 6.4.0 rev 496. Previously I made my investigations using version 6.2.9. I converted both to use doubles instead of ints. The conversion is made without any additional tolerance checks, and to get identical offsetting I had to modify OffsetPoint() a bit.
Attached is the version used: clipperF.cs.
I have a testbed which is a customized version of GuiDemo. It uses the Float and Int versions side by side, so I can compare various details in both. The solutions of the two versions seem to be identical in nearly every case.
The only differences seem to be related to polygons with common edges. The attached images show an example - one with zero offset and one with a bigger offset.
If the edges are horizontal or vertical there are no differences, but if the edges are rotated to some arbitrary angle, the joining of common edges is seemingly random. This is not only the case in the Float version; the Int version also joins common edges in a non-predictable way. I understand that common horizontal and vertical edges are easy to detect exactly, but arbitrary angles need a tolerance check, and this tolerance check is needed in both the Float and Int versions. How and in which part of the code this tolerance check is needed is a question I have no answer to. The possible candidates are JoinCommonEdges(), PointInPolygon(), SlopesEqual() and PointOnLineSegment().
The fine thing is that I have not found any robustness problems when using doubles instead of ints. My conclusion is that the only remaining problem - common edges - has nothing to do with robustness, because it is present in the int version too.
Feel free to test clipperF.cs!
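Of the candidates listed above, SlopesEqual() illustrates the issue well: with integer coordinates, collinearity is an exact cross-product test, while a double version needs an explicit tolerance. A sketch with hypothetical helper names (not the actual Clipper implementation):

```python
def slopes_equal_int(d1, d2):
    """Exact test on integer direction vectors: zero cross product."""
    return d1[1] * d2[0] == d2[1] * d1[0]

def slopes_equal_float(d1, d2, tol=1e-9):
    """Double version: the cross product is only 'near' zero."""
    return abs(d1[1] * d2[0] - d2[1] * d1[0]) <= tol

assert slopes_equal_int((3, 6), (5, 10))      # both slope 2: exactly equal
assert not slopes_equal_int((3, 6), (5, 11))  # cross product is 3, not 0

# After float arithmetic, a "collinear" pair is usually off by a few ULPs,
# so the exact test fails and only the tolerance test succeeds:
d1, d2 = (3.0, 6.0), (5.0, 10.000000000000002)
assert not slopes_equal_int(d1, d2)
assert slopes_equal_float(d1, d2)
```

Choosing `tol` is the judgement call: too small and common edges at arbitrary angles go undetected, too large and distinct edges are merged, which matches the seemingly random joining behavior reported above.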
Angus,
I would certainly use your triangulation code in Slic3r as long as it is numerically stable.
In regard to a floating point implementation, it is certainly possible to evaluate geometric predicates in floating point arithmetic. The famous Triangle library
https://www.cs.cmu.edu/~quake/triangle.html
is based on a free set of adaptive exact 2D predicates, written specifically for triangulation.
https://www.cs.cmu.edu/~quake/robust.html
The author gave the code out for free, and I was happy to use it in my own triangulation implementation for my former employer.
I also had the privilege of working on a 3D mesh boolean library; we used filtered predicates, first calculating floating point intervals using the boost::interval library
http://www.boost.org/doc/libs/1_64_0/libs/numeric/interval/doc/interval.htm
and then if the interval test failed, we recalculated the result using the GNU MP library
https://gmplib.org/
Both the methods described above (Jonathan Shewchuk's robust predicates and the filtered interval / GMP predicates) work directly with floats, and both certainly work. I have quite a bit of experience with both approaches, so please feel free to ask.
Vojtech
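The filtering pattern described above can be sketched in a few lines: evaluate the orientation predicate in floating point, trust the sign only when its magnitude exceeds a conservative error bound, and otherwise fall back to exact arithmetic. This is a toy illustration, not production code: Python's Fraction stands in for GMP, and the error bound is deliberately crude where Shewchuk derives tight ones.

```python
from fractions import Fraction

def orient_fast(a, b, c):
    """Floating point orientation with a crude error filter.
    Returns the signed-area determinant, or None when it is too close
    to zero for the float result to be trusted."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    err = 1e-12 * (abs(b[0] - a[0]) + abs(b[1] - a[1])) \
                * (abs(c[0] - a[0]) + abs(c[1] - a[1]))
    return det if abs(det) > err else None

def orient_exact(a, b, c):
    """Exact fallback using rational arithmetic (GMP's role in the story)."""
    (ax, ay), (bx, by), (cx, cy) = [tuple(map(Fraction, p)) for p in (a, b, c)]
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

def orientation(a, b, c):
    fast = orient_fast(a, b, c)
    return fast if fast is not None else orient_exact(a, b, c)

assert orientation((0, 0), (4, 0), (0, 4)) > 0   # clearly CCW: fast path decides
assert orientation((0, 0), (1, 1), (2, 2)) == 0  # collinear: exact fallback decides
```

Most inputs are decided cheaply on the fast path, and the exact fallback only runs on the rare near-degenerate cases, which is what makes the filtered approach practical.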
Release your triangulation code - let us have some more fun!
I found an interesting bit of code https://github.com/LUXOPHIA/Triangulation I am going to have a bit of a play with it. Thanks Angus for sharing this amazing clipping code.
I'm planning to add triangulation to the new Clipper, which is also not too far off beta testing.
Thanks a lot, Angus! I'm also very interested in this feature. Please update us when it's implemented.