Freestyle integration into Blender

January 10, 2010

Weekly update December 28-January 10

Filed under: Update — The dev team @ 7:33 PM

During the new year holidays, development on the Freestyle branch was quiet as the dev team was looking into the longstanding instability issues concerning view map creation. As described in the previous blog post, the main cause of the instabilities was a bug in the previous image-to-world inverse transformation algorithm. In general, a 2D point in the image space cannot be transformed into a corresponding 3D point in the world space simply by applying the inverse of the projection transformation matrix, since the 2D point has no Z component. The previous image-to-world conversion algorithm worked around this by assuming that the unknown Z component of a 2D point to be converted could be substituted with a single global value derived from the spatial extent of the scene being rendered. This assumption works well when the 2D point maps to a 3D point near the center of the scene. If that is not the case, however, the conversion yields an erroneous 3D point that leads to a fatal “out of range” error.
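
To illustrate the failure mode, here is a minimal sketch of the fixed-depth approach, with a simplified pinhole camera standing in for Blender's full projection matrix (all names and values are illustrative, not the actual branch code):

    #include <cstdio>

    struct Vec2 { double x, y; };
    struct Vec3 { double x, y, z; };

    // Simplified pinhole camera looking down +Z; the focal length "f"
    // stands in for Blender's full projection transformation matrix.
    static const double f = 35.0;

    // Forward 3D-to-2D perspective projection.
    Vec2 project(const Vec3 &p) {
        return { f * p.x / p.z, f * p.y / p.z };
    }

    // The old approach: a 2D point has no Z component, so substitute a
    // single global depth derived from the spatial extent of the scene.
    static const double zScene = 10.0;

    Vec3 unprojectFixedZ(const Vec2 &q) {
        return { q.x * zScene / f, q.y * zScene / f, zScene };
    }

    int main() {
        Vec3 nearCenter = { 1.0, 2.0, 10.0 };  // depth happens to match zScene
        Vec3 farAway    = { 1.0, 2.0, 50.0 };  // depth differs from zScene

        Vec3 a = unprojectFixedZ(project(nearCenter));  // recovers (1, 2, 10)
        Vec3 b = unprojectFixedZ(project(farAway));     // yields (0.2, 0.4, 10)
        std::printf("near: (%g, %g, %g)\n", a.x, a.y, a.z);
        std::printf("far:  (%g, %g, %g)\n", b.x, b.y, b.z);
    }

The second point lies on the correct viewing ray but at the wrong depth, so any computation that consumes the 3D position afterwards inherits the error.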

In order to address the issue above, a new iterative solver of the image-to-world inverse transformation problem was implemented. Instead of directly solving the problem in the backward direction from the 2D to the 3D space, the new solver starts with an initial guess of an approximate solution and asymptotically approaches the true solution by iteratively performing the forward 3D-to-2D transformation and improving the approximation (a minimal sketch of the scheme is given after the results below). Preliminary tests with a simple scene and a complex one showed that the solver is stable and converges quickly. The following two images are experimental results with the latter test scene (consisting of 71.7K vertices and 70.6K faces).

[Figure: left, the test scene rendered with the internal renderer plus Freestyle; right, the distribution of iteration counts of the new solver.]

The left image is a render of the scene with the internal renderer plus Freestyle. During the rendering with Freestyle, 8193 2D points (i.e., intersections of feature edges in the 2D image space) were converted to 3D points by the new solver. The right image shows the distribution of iteration counts. As you can see, the solver converges to a solution in more or less 20 iterations. The stopping criterion is a residual distance of less than 1e-6 Blender Units (BU) between the true and approximated solutions. Note that mesh data in Blender is represented with single-precision floating-point numbers (i.e., about 6 significant digits), so a residual distance of less than 1e-6 BU is negligible. A major drawback of the new algorithm is its computational cost. The previous algorithm required 106 floating-point operations per 2D-to-3D conversion, while the new algorithm requires 30N + 9 operations, where N is the number of iterations; this means the new algorithm carries out 609 operations (about 6 times more) in the case of N = 20. The higher computational cost is compensated by the numerical stability the new solver provides.
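
To make the iteration concrete, here is a minimal sketch of such a solver, under two simplifying assumptions: the same pinhole camera as in the sketch above stands in for Blender's full projection matrix, and the 2D point is known to lie on the projection of a 3D edge [a, b] that is entirely in front of the camera (as the comments below explain, the converted points are intersections of feature edges). This is illustrative, not the actual branch code:

    #include <cmath>
    #include <cstdio>

    struct Vec2 { double x, y; };
    struct Vec3 { double x, y, z; };

    static const double f = 35.0;  // simplified pinhole camera, as above

    Vec2 project(const Vec3 &p) {
        return { f * p.x / p.z, f * p.y / p.z };
    }

    Vec3 lerp(const Vec3 &a, const Vec3 &b, double t) {
        return { a.x + t * (b.x - a.x),
                 a.y + t * (b.y - a.y),
                 a.z + t * (b.z - a.z) };
    }

    double dist2d(const Vec2 &a, const Vec2 &b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    double length3d(const Vec3 &a, const Vec3 &b) {
        return std::sqrt((b.x - a.x) * (b.x - a.x) +
                         (b.y - a.y) * (b.y - a.y) +
                         (b.z - a.z) * (b.z - a.z));
    }

    // Refine the edge parameter t by bisection, using only the forward
    // 3D-to-2D transformation; no matrix inverse is needed. Stops when the
    // remaining 3D bracket is shorter than 1e-6 BU, which bounds the residual
    // distance between the approximation and the true solution.
    Vec3 imageToWorld(const Vec2 &target, const Vec3 &a, const Vec3 &b, int &n) {
        const Vec2 pa = project(a);
        const double segLen = length3d(a, b);
        double lo = 0.0, hi = 1.0;
        for (n = 0; (hi - lo) * segLen >= 1e-6; ++n) {
            const double mid = 0.5 * (lo + hi);
            const Vec2 q = project(lerp(a, b, mid));
            // The edge projects to a straight 2D segment traversed
            // monotonically in t, so comparing distances from project(a)
            // tells us which half of the bracket contains the solution.
            if (dist2d(pa, q) > dist2d(pa, target))
                hi = mid;
            else
                lo = mid;
        }
        return lerp(a, b, 0.5 * (lo + hi));
    }

    int main() {
        const Vec3 a = { -3.0, 1.0, 12.0 }, b = { 4.0, -2.0, 45.0 };
        const Vec2 target = project(lerp(a, b, 0.3));  // lies on the edge
        int n = 0;
        const Vec3 p = imageToWorld(target, a, b, n);
        std::printf("recovered (%g, %g, %g) in %d iterations\n", p.x, p.y, p.z, n);
    }

Since the bracket halves at every step, the iteration count grows only logarithmically with the edge length, which is consistent with the roughly 20 iterations observed above.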

Branch users are encouraged to experiment with the latest revision of the branch and see if the stability of the view map creation has improved. If you run into new problems, please let us know (through blog comments, by email, via the BA Freestyle thread, and so on). At the moment no clipping takes place, so objects behind the camera may still result in unexpected strokes.

9 Comments »

  1. can't wait to test the new branch updates! thanks for your hard work. I'm sooo looking forward to using freestyle/blender for production use.

    you devs rock!

    Comment by Blenderificus — January 11, 2010 @ 2:16 AM

  2. Can you really not get the 3d points directly? Are they necessarily delivered to you in canvas space? I saw some code in GeomUtils doing a 3d to 2d conversion, suggesting that the 3d points are available…

    Comment by yoff — January 11, 2010 @ 3:29 PM

  3. Hi yoff, thank you for the comments. You reminded me to revisit a direct solution to the 2D-to-3D transformation problem. In fact I gave up on solving it directly several times, but this time I finally got a direct solution! I am going to write about it in the next weekly update. Thanks again for the feedback.

    Comment by The dev team — January 15, 2010 @ 5:02 PM

  4. Nice, but what I actually meant was: do you need to do any translation at all? Blender knows the 3d points, they have to be stored somewhere, so why can you not just read them off?

    Comment by yoff — January 15, 2010 @ 6:20 PM

  5. The reason is that during the view map creation, new 2D points are generated at intersections of feature edges. These new 2D points do not exist in the original geometry data of the scene. That is why the 2D-to-3D inverse transformation is necessary.

    Comment by The dev team — January 15, 2010 @ 6:31 PM

    • Ah, I see. So for each intersection point, there are actually two 3d points, one on each intersecting edge. I guess you need the 3d representation of these points in order to do distance-from-camera dependent stuff…

      I feel that the conventional 3d to 2d conversion could be replaced by one which preserves the depth (z?) component, resulting in a linear invertible map. Projecting out the depth component would give a 2d graph on which to find intersection points. With the edges identified, the two distorted 3d points corresponding to an intersection point could be computed, and the inverse linear map could then give the 3d locations of these points.

      Did that make sense? Is it close to what you ended up doing?

      Comment by yoff — January 16, 2010 @ 12:35 AM

  6. Yes, yoff, that makes sense. The idea is to rely not only on the perspective projection matrix but also on the fact that an intersection point lies on a feature edge. I prepared a summary of the problem and solution. I hope it explains the issue better.

    Comment by The dev team — January 16, 2010 @ 11:33 AM

  7. Very nice document! That is almost exactly what I suggested, except I imagine that t would be known from the sweep albeit in the wrong space. But the inverse to the pseudo-3d-to-2d projection would account for that.
    It might be a slight optimization, let me know if you are interested…though I would expect any direct translation to run fast enough :-)

    Comment by yoff — January 16, 2010 @ 12:32 PM

  8. Thanks yoff for your interest in this matter. I think the 2D-to-3D inverse projection transformation finally became stable in the latest revision 26086 of the branch. Please feel free to go through the code (i.e., SilhouetteGeomEngine::ImageToWorldParameter() in source/blender/freestyle/intern/view_map/SilhouetteGeomEngine.cpp) and let me know in case some improvement is possible. A condensed sketch of the direct idea is given after the comments below.

    Comment by The dev team — January 18, 2010 @ 11:21 PM
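
For readers who do not want to dig through the branch, here is a condensed sketch of the direct solution discussed in the thread above, reusing the Vec2/Vec3 types and the pinhole focal length f from the earlier sketches. It is illustrative only; the actual ImageToWorldParameter() works with Blender's full projection matrix and has to handle degenerate configurations:

    // Since the 2D intersection point q lies on the projection of the 3D
    // edge [a, b], substituting p(t) = a + t * (b - a) into
    //     q.x = f * p(t).x / p(t).z
    // yields an equation linear in t, so t can be solved in closed form.
    // Note that t parameterizes the edge in 3D (world) space; it generally
    // differs from the parameter along the projected 2D segment, because
    // perspective projection does not preserve ratios along an edge.
    double directEdgeParameter(const Vec2 &q, const Vec3 &a, const Vec3 &b) {
        const double dx = b.x - a.x, dz = b.z - a.z;
        // f * (a.x + t*dx) = q.x * (a.z + t*dz)
        //   =>  t * (f*dx - q.x*dz) = q.x*a.z - f*a.x
        // If the denominator is near zero (the edge is nearly vertical in
        // the image), the analogous equation in y should be used instead.
        return (q.x * a.z - f * a.x) / (f * dx - q.x * dz);
    }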

