Freestyle integration into Blender

March 18, 2011

Development updates on March 17

Filed under: Update — The dev team @ 2:48 AM

View map construction is the most time-consuming part of the Freestyle rendering process.  In fact, view map construction was prohibitively slow for large scenes.  This was a known issue that, mainly because of the lack of development resources, was meant to be addressed only after all to-do items were finished and the long-awaited merge into the trunk was done.  Last December, however, the Freestyle integration project received a big code contribution from Alexander Beels, whose focus was precisely on optimizing the view map construction code.  He achieved this by identifying the performance bottlenecks and carefully redesigning the internal data processing.  Starting from the initial version in December, Alex’s optimization patch underwent several rounds of code review and testing in collaboration with the dev team.  He put substantial effort into examining different optimization strategies and implementations.  After more than three months of dedicated research and development, his work has resulted in highly efficient view map construction code that is now available in revision 35525 of the Freestyle branch.  As illustrated with several test scenes below, the optimized view map code offers impressive performance gains.  Many thanks to Alex for this excellent achievement!

Technical overview of the optimization

Now let us describe Alex’s optimization work from a technical perspective.  The performance improvements have been mainly done in two major components of the view map construction: silhouette edge detection, and edge visibility calculation.

Silhouette edge detection finds the feature edges of interest that form the basis of line drawing in Freestyle.  To speed up this data processing, Alex made careful changes to the code base so as to exploit the optimization capabilities of compilers.  This modification shortens the rendering time independently of the spatial structure of a given scene.

The detected feature edges then undergo the edge visibility calculation, where an integer value expressing quantitative visibility is assigned to each feature edge.  The edge visibility is determined by a raycasting algorithm with the help of a spatial grid structure that allows efficient traversal of faces (i.e., tris and quads) in 3D space from the viewpoint of the camera.  The non-optimized old view map code relied on a grid data structure that was implemented under the assumption that faces were evenly distributed within the 3D bounding box.  In addition, the old code internally created a list of occluders by copying polygons over and over, which slowed down the view map construction.  Alex devised new grid data structures (SphericalGrid for perspective cameras and BoxGrid for orthographic cameras; the appropriate one is selected automatically based on the camera type) that do not assume an ideal even distribution of polygons, as well as a heuristic grid density calculation algorithm that enables adaptive population of the grid structures with polygons.  Moreover, an iterator is employed to obtain the occluders on demand instead of making a local copy of them.

Based on these internal improvements, two optimized edge visibility calculation algorithms have been added: a “traditional” algorithm that emulates the old visibility algorithms, and a “cumulative” algorithm for improved, more consistent line visibility.  Both algorithms exploit the new spatial grid data structures for fast raycasting.  Each optimized algorithm comes in two variations: culled and unculled.  The culled visibility algorithms exclude most of the feature edges outside the image boundary, which results in much faster visibility calculation at the cost of possible alterations to the results of edge chaining.  The unculled algorithms take all feature edges into account, which is useful when off-image edges matter for chaining.

New rendering options

A new option “Raycasting Algorithm” has been added to the Freestyle tab to allow users to choose a raycasting algorithm.  Available choices are:

  • Normal Ray Casting
  • Fast Ray Casting
  • Very Fast Ray Casting
  • Culled Traditional Visibility Detection
  • Unculled Traditional Visibility Detection
  • Culled Cumulative Visibility Detection
  • Unculled Cumulative Visibility Detection

The first three algorithms are those available in the original Freestyle (although the “normal” raycasting was used unconditionally).  The “fast” and “very fast” raycasting algorithms achieve a faster calculation at the cost of less visibility accuracy.  The other four are the newly introduced optimized options.  The visibility accuracy of the “traditional” algorithms is the same as that of the “normal” algorithm, while the accuracy of the “cumulative” algorithms is expected to be the same as or better than in the “normal” case.  Performance improvements over the old algorithms depend on the scene to be rendered.  The recommended options for branch users are the culled/unculled cumulative visibility algorithms.  These two options are intended to replace the other algorithms in the future.

Performance results

To conclude this brief introduction of Alex’s wonderful optimization work, some performance results are shown below using four test scenes.  In all cases, the non-optimized “normal” visibility algorithm in revision 35506 was compared with the culled cumulative visibility algorithm.

Ichiotsu (orthographic)

73448 vertices, 72260 faces, orthographic camera.

                             Normal       Culled cumulative
                             Time [sec]   Time [sec]   Speedup
  Silhouette edge detection  5.062        3.993        1.268
  View map building          14.27        1.126        12.67
  Sum                        19.33        5.119        3.777

Ichiotsu (perspective)

73448 vertices, 72260 faces, perspective camera.

                             Normal       Culled cumulative
                             Time [sec]   Time [sec]   Speedup
  Silhouette edge detection  5.084        4.182        1.216
  View map building          15.98        1.301        12.28
  Sum                        21.06        5.483        3.841


64011 vertices, 60346 faces, perspective camera.

                             Normal       Culled cumulative
                             Time [sec]   Time [sec]   Speedup
  Silhouette edge detection  17.66        0.4280       41.27
  View map building          13.07        2.422        5.397
  Sum                        30.74        2.850        10.78

A scene by Greg Sandor (from Lighting Challenge #15: Film Noir)

109180 vertices, 189191 faces, perspective camera.

                             Normal       Culled cumulative
                             Time [sec]   Time [sec]   Speedup
  Silhouette edge detection  0.5670       0.3080       1.841
  View map building          2045.0       10.74        190.5
  Sum                        2045.5       11.04        185.2

The first two examples indicate that the new grid data structures for perspective and orthographic cameras have comparable performance.  The third test showed a huge performance gain in silhouette edge detection.  The last big scene resulted in an incredible speedup in the view map building phase.  Remarkably, these performance results are due solely to Alex’s optimization work.  Again, thank you Alex!  This is really great!

Branch users are encouraged to test the new raycasting algorithms from both performance and artistic perspectives.  Performance reports and rendering results are highly welcome, possibly through the Freestyle thread.

