Freestyle integration into Blender

January 25, 2010

Weekly update January 18-24

Filed under: Update — The dev team @ 11:11 PM

During the last week, the question of how to implement the clipping of imported meshes was investigated.  Mesh clipping is necessary for two reasons.  One reason is that the internal renderer does clipping.  The camera has two properties called near and far clipping distances, which define the locations of near and far XY planes (referred to as view planes), and only the objects that reside between the two view planes are rendered.  When objects cross the view planes, the parts outside the interval are clipped and become invisible.  It is remarked that the clipping distance properties were previously not respected by the Freestyle renderer.  The other reason is that objects that (partially) come behind the camera cause a crash of the Freestyle renderer during the view map creation.  The near view plane can be effectively used to exclude the objects behind the camera before the view map creation.

For these reasons, a straightforward clipping algorithm was implemented in revision 26233 of the branch.  Mesh objects are clipped by the near and far view planes specified with the clipping distance properties of the active camera.  When meshes are partially clipped, new edges are created at the intersections with the view planes.  These edges can result in visible strokes if they are within the camera view.
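The idea of clipping an edge to the slab between the near and far view planes can be sketched as follows.  This is a hypothetical Python illustration, not the branch's actual code: points are given as (x, y, depth) tuples in camera space, and new endpoints are created by linear interpolation where the segment crosses a view plane.

```python
def clip_segment(p0, p1, near, far):
    """Clip the segment p0-p1 to the depth slab [near, far].

    Points are (x, y, depth) tuples in camera space.  Returns the
    clipped segment as a pair of points, or None if it lies entirely
    outside the slab (e.g., fully behind the camera).
    """
    d0, d1 = p0[2], p1[2]
    # Liang-Barsky-style parametric clipping: keep t in [t_min, t_max].
    t_min, t_max = 0.0, 1.0
    for plane_d, sign in ((near, 1.0), (far, -1.0)):
        # sign = +1 keeps depth >= near; sign = -1 keeps depth <= far.
        f0 = sign * (d0 - plane_d)
        f1 = sign * (d1 - plane_d)
        if f0 < 0 and f1 < 0:
            return None            # both endpoints outside this plane
        if f0 < 0 or f1 < 0:
            t = f0 / (f0 - f1)     # intersection parameter on the plane
            if f0 < 0:
                t_min = max(t_min, t)
            else:
                t_max = min(t_max, t)
    if t_min > t_max:
        return None

    def lerp(a, b, t):
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))

    return lerp(p0, p1, t_min), lerp(p0, p1, t_max)
```

The interpolated endpoints correspond to the new edges mentioned above, which can then produce visible strokes at the intersections with the view planes.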

As of this writing, simple test scenes with objects behind the camera are successfully rendered.  More complex scenes pose a problem in stroke rendering and possibly result in a crash, which seems related to the many “strip vertex 0 non valid” warning messages often observed in complex scenes (the number in the warning may vary).  This problem is apparently independent of the instability issues we have been addressing with regard to the view map creation.  Further consolidation of the stroke rendering will be attempted in the coming weeks.  Branch users are encouraged to test Freestyle with complex scenes (possibly having objects behind the camera).  Problem reports are always very welcome.

January 18, 2010

Weekly Update January 11-17

Filed under: Update — The dev team @ 11:12 PM

The 2D-to-3D inverse projection transformation problem was finally solved in a direct manner after several unsuccessful attempts to solve it without relying on an iterative method.  The problem and the direct solution are summarized in a separate document.  The only remaining issue with the direct method is the possibility of division by zero (see the “Remarks” section of the summary for more information).  It is unclear how often division by zero may occur.  Since the iterative solver implemented and reported in the previous blog article does not have any risk of division-by-zero errors, the best approach seems to be to combine the two, i.e., to try applying the direct method first and to switch to the iterative solver if necessary.  This approach has been implemented in the latest revision 26086 of the branch.  Now the bug in the 2D-to-3D inverse projection transformation is considered fully fixed.  As planned in the blog article on December 28, we move on to the clipping issue in the coming weeks with the aim of further improving the robustness of the view map creation.
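The combined strategy can be sketched as a simple fallback pattern.  The solver functions below are hypothetical stand-ins for the direct and iterative methods, used only to illustrate the control flow:

```python
def unproject(point_2d, direct_solver, iterative_solver):
    """Try the direct method first; fall back to the iterative solver
    when the direct formula would divide by zero."""
    try:
        return direct_solver(point_2d)
    except ZeroDivisionError:
        # The iterative solver carries no risk of division-by-zero errors.
        return iterative_solver(point_2d)
```

This way the cheap direct method handles the common case, and the robust iterative solver is only paid for when the direct denominator vanishes.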

January 10, 2010

Weekly update December 28-January 10

Filed under: Update — The dev team @ 7:33 PM

During the New Year holidays, the development on the Freestyle branch was quiet as the dev team was looking into the longstanding instability issues regarding the view map creation.  As described in the previous blog post, the main cause of the instabilities concerning the view map construction was a bug in the previous image-to-world inverse transformation algorithm.  In general, a 2D point in the image space cannot be transformed into a corresponding 3D point in the world space by means of the inverse of a projection transformation matrix, since the 2D point does not have a Z component.  The previous image-to-world conversion algorithm did the job by relying on an assumption that the unknown Z component of a 2D point to be converted could be substituted with a single global value that depended on the spatial extent of the scene being rendered.  This assumption works well when the 2D point is transformed into a 3D point near the center of the scene.  If this is not the case, however, the previous 2D-to-3D conversion yields an erroneous 3D point that leads to a fatal “out of range” error.
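A toy pinhole model (not the actual Freestyle transformation matrices) makes the problem concrete: every 3D point on a viewing ray projects to the same 2D point, so inverting the projection requires choosing a depth, and a single global depth is only correct for points that actually lie at that depth.

```python
def project(p, f=1.0):
    """Toy perspective projection of a camera-space point (x, y, z)."""
    x, y, z = p
    return (f * x / z, f * y / z)

def unproject_fixed_depth(q, z_global, f=1.0):
    """Inverse projection under the old assumption of one global depth."""
    u, v = q
    return (u * z_global / f, v * z_global / f, z_global)

# A point at depth 2 is recovered exactly only if z_global == 2;
# with z_global == 5 we get a different point on the same viewing ray.
q = project((1.0, 1.0, 2.0))  # -> (0.5, 0.5)
```

The farther the true depth is from the assumed global value, the larger the error of the recovered 3D point, which is exactly how the erroneous points leading to the “out of range” error arise.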

In order to address the issue above, a new iterative solver of the image-to-world inverse transformation problem was implemented.  Instead of directly solving the problem in the backward direction from the 2D to 3D space, the new solver starts with an initial guess of an approximated solution and asymptotically approaches the true solution by iteratively performing the forward 3D-to-2D transformation and improving the approximation.  Preliminary tests with one simple scene and one complex scene showed that the solver is stable and converges quickly.  The following two images are experimental results with the latter test scene (consisting of 71.7K vertices and 70.6K faces).
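The post does not show the solver itself, so the following is only a sketch of the iterative idea under an assumed OpenGL-style depth mapping: the image-space point carries a nonlinear NDC depth, and the camera-space depth is refined (here by bisection, chosen for clarity rather than speed) until the forward transformation reproduces that depth within the stated 1e-6 tolerance.

```python
def ndc_depth(d, near, far):
    """Standard perspective depth mapping (OpenGL-style); maps a
    camera-space depth d in [near, far] monotonically onto [-1, 1]."""
    return (far + near) / (far - near) - 2.0 * far * near / ((far - near) * d)

def invert_depth(z_ndc, near, far, tol=1e-6, max_iter=100):
    """Iteratively recover the camera-space depth whose forward
    transformation matches the given NDC depth."""
    lo, hi = near, far
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        residual = ndc_depth(mid, near, far) - z_ndc
        if abs(residual) < tol:
            return mid
        if residual < 0:
            lo = mid   # forward transform undershoots: depth is larger
        else:
            hi = mid   # forward transform overshoots: depth is smaller
    return 0.5 * (lo + hi)
```

Like the solver described above, this starts from a guess, applies the forward transformation, and refines the approximation until the residual is negligible; the actual branch code may use a different (and faster-converging) update rule.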

[Two images: a render of the test scene with the internal renderer plus Freestyle, and a histogram of the solver's iteration counts.]

The left image is a render of the scene with the internal renderer plus Freestyle.  During the rendering with Freestyle, 8193 2D points (i.e., intersections of feature edges in the 2D image space) were converted to 3D points by the new solver.  The right image shows the distribution of iteration counts.  As you can see, the solver converges to a solution within more or less 20 iterations.  The stopping criterion is a residual distance between the true and approximated solutions of less than 1e-6 Blender Unit (BU).  It is remarked that mesh data in Blender is represented with single-precision floating-point numbers (i.e., about 6 significant digits), so that a residual distance less than 1e-6 BU is negligible.  A major drawback of the new algorithm is its computational cost.  The previous algorithm required 106 floating-point operations for a 2D-to-3D conversion, while the new algorithm requires 30N + 9 operations where N is the number of iterations, meaning that the new algorithm carries out 609 operations (about 6 times more) in the case of N = 20.  The higher computational cost is compensated by the numerical stability the new solver provides.
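The cost figures above can be checked in a few lines (the formulas are taken directly from the paragraph; the names `direct_ops` and `iterative_ops` are just labels for this illustration):

```python
direct_ops = 106               # fixed cost of the previous direct conversion

def iterative_ops(n):
    """Cost of the iterative solver for n iterations (30N + 9)."""
    return 30 * n + 9

print(iterative_ops(20))                # 609
print(iterative_ops(20) / direct_ops)   # about 5.7, i.e. roughly 6 times more
```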

Branch users are encouraged to experiment with the latest revision of the branch and see if the stability with regard to the view map creation has improved.  If you run into new problems, please let us know (through blog comments, by email, via the BA Freestyle thread, and so on).  At the moment no clipping takes place, so objects behind the camera may still result in unexpected strokes.
