Freestyle integration into Blender

December 28, 2009

Weekly update December 21-27

Filed under: Update — The dev team @ 8:17 PM

In the last week the dev team worked on instability issues regarding view map creation.  The work started with a thorough review of the entire view map construction process, which consists of 1) importing meshes from a 3D scene, 2) detecting feature edges, and 3) building a view map.  These steps are briefly summarized as follows.

  1. In the first step, Freestyle receives mesh data from Blender in the internal mesh structure called vlak.  In the vlak data structure, mesh vertices are in the camera coordinate system.  Since Freestyle’s feature edge detection algorithm expects mesh data to be in the world coordinate system, the vertices are transformed from the camera to the world coordinate system during mesh import.
  2. In the second step, the nature of each edge of the imported meshes is examined and relevant feature edge types (such as borders, crease edges, ridges/valleys, and suggestive contours) are assigned to the edge.
  3. In the third step, a 2D representation of the scene in the camera view is constructed.  All 3D vertices are transformed into the 2D space by means of a model-view matrix (from world to camera), a projection matrix (from camera to retina), and a viewport specification (from retina to image).  An important task in this step is to detect intersections of feature edges in the 2D space.  In the case of Blender’s default cube and camera, for example, we have two intersections of edges in the 2D space from the camera’s viewpoint (shown by circles in the figure below).  The intersections of feature edges are computed using a traditional sweep line algorithm (see the sketch after this list).  In addition, for each intersection of two edges in the 2D space, we find the corresponding 3D point on each edge by inversely applying the viewport specification, projection matrix, and model-view matrix.
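
To make step 3 concrete, here is a minimal Python sketch of the transformation chain, together with the basic segment-segment test that a sweep line algorithm applies to candidate pairs of projected edges.  This is an illustration only, not Freestyle code: the function names, the use of numpy, and the column-vector homogeneous matrix convention are our assumptions.

  import numpy as np

  def world_to_image(p, mv, proj, viewport):
      # world -> camera
      p1 = mv @ np.append(p, 1.0)
      # camera -> retina: apply the projection matrix, then the
      # perspective divide to get normalized device coordinates
      p2 = proj @ p1
      p2 = p2 / p2[3]
      # retina -> image via the viewport specification (x, y, width, height)
      qx = viewport[0] + viewport[2] * (p2[0] + 1.0) / 2.0
      qy = viewport[1] + viewport[3] * (p2[1] + 1.0) / 2.0
      return np.array([qx, qy])

  def intersect_2d(a, b, c, d):
      # Intersection of 2D segments a-b and c-d; returns the parameters
      # (t, u) along each segment, or None if the segments do not intersect.
      r, s = b - a, d - c
      denom = r[0] * s[1] - r[1] * s[0]
      if abs(denom) < 1e-12:  # parallel or degenerate
          return None
      ac = c - a
      t = (ac[0] * s[1] - ac[1] * s[0]) / denom
      u = (ac[0] * r[1] - ac[1] * r[0]) / denom
      if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
          return (t, u)
      return None

The parameters t and u locate the intersection along each projected edge; the 2D-to-3D conversion discussed below then has to find the matching 3D point on each of the two original edges.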

We have identified two implementation issues in the 2D-to-3D conversion of edge intersections.  First, this conversion obviously does not work when edges are out of the viewing frustum.  One solution to address this issue is to clip the imported meshes by the viewing frustum.  A drawback of this approach is that feature edges are no longer available outside the camera view.  For instance, if we have feature edges outside the camera view, we can obtain a continuous stroke that goes out of and comes back into the camera view, as illustrated below.  The left image shows a 3D scene with two rectangular parallelepiped meshes and the viewing frustum.  The center image shows the camera view, in which the rectangular parallelepiped meshes are partly sticking out of the viewing frustum.  The right image shows a rendering result using external_contour_sketchy.py (with minor changes).  This style module draws a continuous stroke around external contours several times.  Since feature edges are available outside the viewing frustum, the resulting render gives the impression that the strokes continue outside the camera view.

[Figure: the 3D scene with the two meshes and the viewing frustum (left), the camera view (center), and the sketchy rendering result (right).]

If we proceed with the clipping of imported meshes by the viewing frustum, we get a different rendering result, as illustrated below.  The left image shows a modified version of the same 3D scene as in the previous example.  The meshes have been clipped by the viewing frustum.  The center image shows the camera view.  Note that we have edges at the border of the camera view and no edges outside it.  The right image shows a rendering result.  Now strokes turn at the border of the image instead of going out of and coming back into the image.

[Figure: the clipped 3D scene (left), the camera view (center), and the rendering result with strokes turning at the image border (right).]

Clearly, the clipping of meshes by the viewing frustum makes a big difference.  Ideally, only a minimal amount of clipping (e.g., excluding only those vertices that fall behind the camera) would take place by default, with full clipping by the viewing frustum performed when users prefer it.  We will look into this issue more closely in the coming weeks.  The second issue we have identified is a bug in the current implementation of the 2D-to-3D intersection conversion, which fails in some apparently trivial cases (even when only dealing with vertices within the viewing frustum).  We will address this latter issue before moving on to the clipping problem.
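
As an illustration of what the minimal clipping policy could look like, the following sketch clips a feature edge against the camera's near plane alone, discarding only the part behind the camera.  This is our own sketch under simple assumptions (camera coordinates, with the camera looking down the negative Z axis), not the current implementation:

  import numpy as np

  def clip_near(p0, p1, z_near=-1e-5):
      # Clip the segment p0-p1 (camera coordinates, camera looking down -Z)
      # against the near plane z = z_near; return the kept part of the
      # segment, or None if it lies entirely behind the camera.
      in0, in1 = p0[2] <= z_near, p1[2] <= z_near
      if in0 and in1:
          return p0, p1              # entirely in front of the camera
      if not in0 and not in1:
          return None                # entirely behind the camera
      t = (z_near - p0[2]) / (p1[2] - p0[2])
      q = p0 + t * (p1 - p0)         # intersection with the near plane
      return (p0, q) if in0 else (q, p1)

Full clipping by the viewing frustum would apply the same kind of test to the remaining five planes.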

8 Comments »

  1. Lots of interesting information here; sounds like some time-consuming work. Thank you all for your time and talents!

    Comment by Blenderificus — December 28, 2009 @ 8:58 PM

  2. Oh man I CAN’T WAIT! (to use it)

    Comment by Craigsnedeker — December 29, 2009 @ 1:43 PM

  3. Applying the inverse matrix should work fine outside the viewing frustum? It is a linear map defined on the entire 3D space, I would think?
    Can you explain why the math of it fails? It seems you should be able to set it up so that even vertices behind the camera get coordinates and just have to be filtered out…

    Comment by yoff — December 30, 2009 @ 11:40 AM

  4. Ok, some imprecisions there… the inverse matrix corresponds to a map defined on the 2D space, and presumably maps everything to 3D points in front of the camera. Still, it should be able to map points outside of the canvas to points outside of the viewing frustum, maintaining continuity…

    Comment by yoff — December 30, 2009 @ 5:45 PM

    • Thanks for the comments, yoff. I agree that applying the inverse matrix should work fine outside the viewing frustum. An issue with the mapping from world to image in the current implementation is that there is a non-linear operation that prevents linear inversion. The world-to-image coordinate system conversion is currently implemented as follows:

      Input: float p[3] (3D point in world coordinates),
             float mv[4][4] (model-view matrix, world to camera),
             float proj[4][4] (projection matrix, camera to retina),
             float viewport[4] (viewport specification, retina to image)
      Output: float q[3]

      p1 := mv * p     -- world to camera
      p2 := proj * p1  -- camera to retina
      q[0] := viewport[0] + viewport[2] * (p2[0] + 1.0) / 2.0  -- retina to image (X)
      q[1] := viewport[1] + viewport[3] * (p2[1] + 1.0) / 2.0  -- retina to image (Y)
      q[2] := p1[2]    -- camera-space depth; p2[2] is discarded

      As you can see, the conversion is not linear, since p2[2] is lost in the last assignment. In addition, intersections of feature edges are computed in the 2D space (i.e., on the XY plane), meaning that the Z components of the intersections are unknown. Since these computations are primitive, I am afraid of breaking something unexpected elsewhere in the code by modifying these low-level calculations. I feel I have to carefully go through the code to fix these and related issues.
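
      For what it is worth, here is a hypothetical Python sketch of how the camera-space depth kept in q[2] would in principle allow an image-to-world inversion. It assumes a standard OpenGL-style perspective matrix (scale factors in proj[0][0] and proj[1][1], with w = -z), which is not necessarily the matrix Freestyle builds, and the function name is ours:

      import numpy as np

      def image_to_world(q, mv, proj, viewport):
          # image -> retina (NDC), inverting the viewport specification
          ndc_x = 2.0 * (q[0] - viewport[0]) / viewport[2] - 1.0
          ndc_y = 2.0 * (q[1] - viewport[1]) / viewport[3] - 1.0
          z = q[2]  # camera-space depth preserved by the forward conversion
          # retina -> camera: the perspective divide can be undone given z
          x = ndc_x * (-z) / proj[0][0]
          y = ndc_y * (-z) / proj[1][1]
          # camera -> world
          p = np.linalg.inv(mv) @ np.array([x, y, z, 1.0])
          return p[:3]

      This works for vertices, where q[2] is available; for the 2D intersections of feature edges, however, the depth is precisely what is unknown and has to be recovered from the intersected edges.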

      Comment by The dev team — December 31, 2009 @ 7:51 PM

      • Thank you for the answer. As far as I can see, though, the last assignment is not part of the map. The 3D-to-2D conversion should output a 2D point, which is here (q[0], q[1]), and the map p[3] -> q[2] is linear.

        I am guessing that the 4th part of mv is for rotations, and that of proj is for convenience? What is q[2] used for later on?

        Oh, I should have the sources now, I will see if I can find it :-)

        Comment by yoff — January 2, 2010 @ 12:34 PM

  5. Hi yoff, thank you for your interest. You may find it interesting to look into the actual code. In the Freestyle branch source tree, the view map creation is done in source/blender/freestyle/intern/view_map/ViewMapBuilder.cpp. Feature edge intersections are computed in the ViewMapBuilder::ComputeSweepLineIntersections() method, where the conversion of feature edge intersections from the 2D to the 3D space takes place. Two calls of SilhouetteGeomEngine::ImageToWorldParameter() in that method do the conversion. Feel free to write to me if you have something to discuss.

    Comment by The dev team — January 2, 2010 @ 6:33 PM

  6. […] including Curve, Mesh Deform, Cloth and Soft Body.  Previously, mesh vertices imported from vlak nodes were transformed from the camera coordinate system to the object local coordinate system.  This […]

    Pingback by Weekly update January 25-31 « Freestyle integration into Blender — February 1, 2010 @ 1:10 AM

