Helpful Hints

Quick Renderings

Other Speed Tips

Memory Utilization

Image Quality

Motion Blur

High Resolution Images

Light Intensities

Shadows

Order of Transformations

Filters


There are several techniques for using the RenderMan Interface and PhotoRealistic RenderMan that are helpful but may not be obvious to the novice user. This section describes some of them.

Quick Renderings

There are many times, such as when checking geometry, placing the camera, or adjusting the lights, when it is useful to trade off image quality for rendering time. The most effective means of achieving this is to modify the shading rate, which controls the number of times the shading computation is performed across an image. These computations consume a significant percentage of the time required to produce a high-quality image, so reducing them can significantly speed up image generation. The cost is less smooth shading, and curved geometric primitives will look faceted. The shading rate is controlled using the RiShadingRate command, which specifies the maximum area of a micropolygon in pixels. Because the renderer calls a shader only once for each micropolygon, a large shading rate such as 64 will render much faster than a shading rate that produces a very smooth image, such as 1.
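For example, a coarse preview pass might use a call such as:

RiShadingRate(64.0);   /* coarse, fast, faceted preview */

with a value such as RiShadingRate(1.0) restored for the final rendering.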

The amount of time required to render is roughly proportional to the number of pixels covered by geometry in the image. Therefore, another way to reduce the time required to render an image is to reduce its size, using the RiFormat command; two of its arguments are the x and y resolutions of the desired image. Rendering a 256x256 image will take approximately one quarter the time required to compute its 512x512 equivalent.
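For example (the third argument is the pixel aspect ratio):

RiFormat(256, 256, 1.0);   /* one quarter the pixels of a 512x512 image */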

Reducing the sub-pixel sampling rate will also speed up the rendering of an image. Antialiasing is performed by supersampling the image and then filtering to produce the final pixels. One can effectively turn off antialiasing by reducing the sampling rate to one sample per pixel, by calling the RiPixelSamples command with both xsamples and ysamples set to 1. (With only one sample per pixel, the results will look very noisy unless the "hidden" hider's jitter option is also disabled, as shown below.)
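A minimal preview setup combining these two controls might look like this, using the same jitter-disabling pattern shown later in the Shadows section:

RtInt off = 0;
RiPixelSamples(1, 1);                                   /* one sample per pixel */
RiHider("hidden", "jitter", (RtPointer)&off, RI_NULL);  /* jitter off */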

PhotoRealistic RenderMan contains an alternative hider that uses a z-buffer algorithm instead of the default stochastic algorithm. It does not handle transparency, motion blur, antialiasing, or depth of field. However, it does run faster, especially if the shading rate has been set to a large value. This hider may be specified by:

RiHider("zbuffer", RI_NULL);

Other Speed Tips

Using crop windows is one of the most effective methods to reduce image rendering time. Large or complex models will often cause excessive virtual memory paging during rendering. Using crop windows to break a rendering job into several smaller tasks can eliminate excessive paging and decrease rendering time significantly (see High Resolution Images). Using crop windows will have no effect on image quality.

The processing of patches and patch meshes is somewhat more efficient than the processing of polygons and points polygons. For this reason, we recommend representing quadrilaterals as bilinear patches whenever possible, as in the sketch below.
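As a sketch, the same unit quadrilateral can be sent either way. Note that a bilinear patch takes its four points in row (u-v) order rather than perimeter order, so the last two vertices are swapped relative to the polygon:

static RtPoint corners[4] = {
    {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0}    /* perimeter order */
};
static RtPoint rows[4] = {
    {0,0,0}, {1,0,0}, {0,1,0}, {1,1,0}    /* row (u-v) order */
};

RiPolygon(4, RI_P, (RtPointer)corners, RI_NULL);       /* less efficient */
RiPatch(RI_BILINEAR, RI_P, (RtPointer)rows, RI_NULL);  /* preferred */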

Because shading rate is an attribute, it can differ from object to object in a scene. At times this property can be used to speed up rendering without detracting from image quality: if an object is flat and uniformly shaded, or if it is extremely motion-blurred, it will not suffer from a high shading rate. For example, if a uniform patch is used as a background in a scene, it can be given a large shading rate; rendering then speeds up significantly because the patch covers a large percentage of the pixels in the image. Using motionfactor will adaptively raise the shading rate for moving objects, which can greatly improve rendering speed. See PRMan Attributes: Motion Factor for more details.
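As a sketch, the background patch could carry its own coarse shading rate, and a moving object could be given a motion factor. Here we assume motionfactor is set through RiGeometricApproximation (see PRMan Attributes: Motion Factor), and bg stands for the patch's four corner points, declared elsewhere:

RiAttributeBegin();
    RiShadingRate(64.0);   /* coarse shading for the flat, uniform background */
    RiPatch(RI_BILINEAR, RI_P, (RtPointer)bg, RI_NULL);
RiAttributeEnd();

RiAttributeBegin();
    RiGeometricApproximation("motionfactor", 1.0);  /* coarsen shading as blur increases */
    /* ... moving primitives ... */
RiAttributeEnd();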

In general, for a fixed shading rate, the computational complexity of a shader will determine the rendering time for an object. Environment maps are slower than texture maps, which are in turn slower than simple procedural shaders.

Rendering time is not directly affected by distances between objects or locations of objects in the scene, except as this positioning affects the sizes of the objects in the image. Portions of an object which extend beyond the boundaries of the viewing pyramid will have a relatively small effect on the rendering time.

Setting RiSides to 1 will make the renderer discard primitives that face away from the camera before they are shaded. This can speed up rendering significantly because only about half the shading calculations are performed. However, if the objects are defined with the wrong orientation (normals pointing inward instead of outward), the wrong half of the primitives will be culled and the image will contain only the back halves of objects. This can be corrected by using the attribute RiReverseOrientation on primitives with this orientation problem.
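In outline, assuming a model whose normals point inward:

RiSides(1);   /* cull primitives facing away from the camera */
RiAttributeBegin();
    RiReverseOrientation();   /* flip the inward-pointing normals */
    /* ... primitives with the orientation problem ... */
RiAttributeEnd();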

Memory Utilization

It is possible to control, to a certain degree, the amount of memory PhotoRealistic RenderMan uses. This is particularly important for systems that have limited physical memory: on such systems, performance will degrade sharply, or PhotoRealistic RenderMan will fail to complete its rendering, once it surpasses the physical memory available. The most effective means of controlling memory usage is to modify the bucketsize and gridsize options and to use crop windows to render the image in small sections. See PRMan Options: Limits: Bucket Size, PRMan Options: Limits: Grid Size, and High Resolution Images for a discussion of these options.
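As a sketch, both limits can be lowered through RiOption; the values below are illustrative only, and the "bucketsize" and "gridsize" tokens are assumed to be predeclared by the renderer:

RtInt bucket[2] = { 8, 8 };   /* bucket size in pixels */
RtInt grid = 128;             /* maximum micropolygons per grid */
RiOption("limits", "bucketsize", (RtPointer)bucket, RI_NULL);
RiOption("limits", "gridsize", (RtPointer)&grid, RI_NULL);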

Another way of reducing memory usage is to increase the value passed to RiShadingRate. This has limited utility in that it typically degrades the quality of the image. However, it is very useful when one is performing motion blur. Motion blur can dramatically increase memory usage. Degrading the quality of the blurred objects by increasing the shading rate is usually quite acceptable since the details of the blurred objects are rather difficult to see anyway. This will both reduce memory usage and increase performance.

Using motionfactor and extremedisplacement can often save memory when motion blur or displacements are in use. See PRMan Attributes: Motion Factor and PRMan Options: Limits: Extreme Displacement for more details.

Image Quality

Under some conditions, geometric inaccuracies at the boundaries of grids may cause small holes to appear in curved primitives, especially bicubic patches and other primitives that are represented internally with bicubic patches, e.g., spheres. These "cracks" will be visible as bright or dark pixels. In addition, using a displacement shader on this type of primitive can create cracks or make them significantly worse. The problem can be alleviated to some degree by setting the binary dicing attribute (see PRMan Attributes: Binary Dicing). If this fails to solve the problem, lowering the shading rate and increasing the number of pixel samples can help, as can turning jitter off (see PRMan Options: Hider Sampling).
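Assuming binary dicing is exposed to the C binding as an integer "binary" parameter of the "dice" attribute (see PRMan Attributes: Binary Dicing for the authoritative spelling), it might be enabled with:

RtInt on = 1;
RiDeclare("binary", "integer");   /* declare the token if it is not predeclared */
RiAttribute("dice", "binary", (RtPointer)&on, RI_NULL);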

For the renderer to compute with the most accurate floating-point depth values, always set the near and far clipping planes to bound the scene as tightly as possible. This will reduce the range of z values that need to be represented. Otherwise, cracking and other numerical artifacts may appear.
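For example, if everything in the scene lies between 5 and 75 camera-space units from the eye (illustrative values), the clipping planes would be set with:

RiClipping(5.0, 75.0);   /* near and far planes bounding the scene tightly */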

Motion Blur

Use motion blur when producing frames for an animation, because without it "strobing" (temporal aliasing) effects will be visible. Motion blur extends the bounding boxes of all the moving primitives, so it slows down the renderer and uses more memory. However, because moving primitives are blurred, a higher shading rate can be used to speed things up without sacrificing image quality. Remember that jitter is necessary for motion blur to work. In addition, if the number of pixel samples is too low, the moving primitives will look noisy instead of blurred.

A motion-picture camera does not have its shutter open at all times; the shutter is closed to control the exposure and to allow the film to advance from frame to frame. Similarly, when using motion blur with PhotoRealistic RenderMan, one should not specify that the shutter is open at all times. Doing so would produce very smooth motion, but the moving objects would look very fuzzy. Instead, the shutter should be open for no more than 50% of the time: the min argument to RiShutter for the current frame should fall after, not at, the max argument for the previous frame.
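For example, where f is the current frame's start time, a 50% shutter might be specified as:

RiShutter((RtFloat)f, (RtFloat)f + 0.5);   /* shutter closed for the second half of each frame */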

High Resolution Images

There are several issues that can arise when rendering high resolution images. Because the sheer pixel area of these images is so great, a much larger amount of time and memory is required to render them. In theory, if everything in a scene except the output size is held constant, the rendering time is directly proportional to the number of pixels in the image. Unfortunately, this relation breaks down once the image size exceeds a certain threshold, because the increased number of micropolygons, which is also proportional to the number of pixels when the shading rate is held constant, runs up against the system's memory limitations. In addition, if there are texture maps in the scene, their resolution will have to be increased as well to maintain optimum image quality, and these larger texture maps will slow the rendering down even further. For most purposes, this limit does not pose a problem, but if images that make the most of 35mm or higher format film (2K x 1.5K pixels or larger) are required, a simple rendering will be prohibitively slow. In this case, there are several tricks that can be used to improve performance. Each can be used independently, or the techniques can be combined.

The simplest solution is to render the image at a lower resolution and scale it up to the desired output resolution using the tiffsize utility. This is easy to do, but the resulting image quality will not be as good as that of an image rendered at the full size.

Another technique that is quite useful is to use the RiCropWindow procedure to break the image into manageable pieces. By rendering only one piece of the image at a time, all of the system's resources are brought to bear on a smaller problem. Once all of the pieces are rendered, they need to be put together to make the full-sized image. If your frame buffer is large enough, you can just render the pieces into the frame buffer and save the resulting image as a single file. Otherwise, you can use the tiffjoin utility to combine several TIFF image files into one large, single TIFF image file.
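As a sketch, the image could be rendered in four quadrant passes; each call below belongs to its own rendering pass, and the resulting files are then combined with tiffjoin:

RiCropWindow(0.0, 0.5, 0.0, 0.5);   /* pass 1: top-left quadrant */
RiCropWindow(0.5, 1.0, 0.0, 0.5);   /* pass 2: top-right quadrant */
RiCropWindow(0.0, 0.5, 0.5, 1.0);   /* pass 3: bottom-left quadrant */
RiCropWindow(0.5, 1.0, 0.5, 1.0);   /* pass 4: bottom-right quadrant */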

The last technique that we will recommend is the image compositing method. This method requires the use of an image compositing utility, such as tiffcomp. If your scene geometry can be broken into sections that occur at different, non-overlapping depths, you can render each of these parts of the scene separately, using the full resolution for each one. This gives you several images that can be composited together to form the final image. If you are using this method, you need to render each component image with an alpha channel so that compositing can be done correctly.

Light Intensities

Many light shaders obey the inverse-square law: their intensity drops off proportionally to the square of the distance from their location. A light of intensity one will thus only contribute an intensity of one one-hundredth to a surface at a distance of ten world-coordinate units. Therefore, it is common to have to set the intensities of lights with locations, such as point lights and spotlights, to large numbers; the actual numbers depend on the size of the model being lit. Lights that do not have locations, such as distant lights and ambient lights, don't obey the inverse-square law and therefore typically have intensities less than one.
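For example, using the standard pointlight and distantlight shaders on a model roughly ten units across (the intensities here are illustrative):

RtFloat pointintensity = 100.0;   /* compensates for inverse-square falloff */
RtFloat distintensity  = 0.8;     /* distant lights have no falloff */
RiLightSource("pointlight", "intensity", (RtPointer)&pointintensity, RI_NULL);
RiLightSource("distantlight", "intensity", (RtPointer)&distintensity, RI_NULL);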

Shadows

The production of shadows is a two-step process. First, a shadow depth map is generated by PhotoRealistic RenderMan that contains the relative distance from a light source to all shadow-casting objects. Second, this map is used by PhotoRealistic RenderMan during the rendering of a frame to determine whether a piece of geometry is the closest thing to the light, and therefore illuminated, or whether something else is closer and the geometry is therefore in shadow.

The shadow depth map is generated by rendering the scene from the point of view of the light with the following display specification:

RiDisplay("filename.shd", "shadow", RI_Z, RI_NULL);

Notice that this specifies the shadow display driver instead of the default file driver. The only file drivers distributed with PhotoRealistic RenderMan that accept depth data are the shadow driver and the zfile driver. This may change in the future, at which time the default file driver may be used.

It is important to remember that shadow depth maps cannot be generated at arbitrary resolution. The resolution (height and width) of a shadow depth map must be a power of two (e.g., 1024 x 1024 or 512 x 256). In addition, shadow depth maps should be generated with the following image options:

RtInt off = 0;
RiPixelSamples(1, 1);                                   /* one depth sample per pixel */
RiPixelFilter(RiBoxFilter, 1.0, 1.0);                   /* no filtering across pixels */
RiHider("hidden", "jitter", (RtPointer)&off, RI_NULL);  /* jitter off */

The trickiest part of producing the shadow depth map is setting up the camera to have the same view as the light. Light shaders typically have from and to parameters to specify their position and aim point. The following subroutine will generate an appropriate camera transformation from these parameters (this subroutine is a modification of the subroutine PlaceCamera specified on page 142 of The RenderMan Companion):

#include <ri.h>
#include <math.h>
#define PI 3.14159265359

/* Position the camera at "from", aimed at "to". */
void PutCamera(RtPoint from, RtPoint to)
{
    RtPoint direction;
    float xzlen, yzlen, yrot, xrot;

    /* The aim direction, from the camera position to the target. */
    direction[0] = to[0] - from[0];
    direction[1] = to[1] - from[1];
    direction[2] = to[2] - from[2];

    /* A zero direction gives no usable orientation; leave the
       transformation unchanged. */
    if (direction[0]==0 && direction[1]==0 && direction[2]==0)
        return;

    RiIdentity();

    /* yrot is the angle, about the y axis, between the aim
       direction's projection in the x-z plane and the +z axis. */
    xzlen = sqrt(direction[0]*direction[0] + direction[2]*direction[2]);
    if (xzlen == 0)
        yrot = (direction[1] < 0.0)? 180.0 : 0.0;
    else
        yrot = 180.0*acos(direction[2]/xzlen)/PI;

    /* xrot is the angle, about the x axis, that tilts the aim
       direction out of the x-z plane. */
    yzlen = sqrt(direction[1]*direction[1] + xzlen*xzlen);
    xrot = 180.0*acos(xzlen/yzlen)/PI;

    if (direction[1] > 0.0)
        RiRotate(xrot, 1.0, 0.0, 0.0);
    else
        RiRotate(-xrot, 1.0, 0.0, 0.0);

    if (direction[0] > 0.0)
        RiRotate(-yrot, 0.0, 1.0, 0.0);
    else
        RiRotate(yrot, 0.0, 1.0, 0.0);

    /* Finally, translate so that the camera sits at "from". */
    RiTranslate(-from[0], -from[1], -from[2]);
}

If one is generating the shadow depth map for a distant light, i.e. a light with parallel rays, the shadow depth map should be generated with an orthographic projection. This is specified by:

RiProjection("orthographic", RI_NULL);

If one is generating the shadow depth map for a spotlight or pointlight, the shadow depth map should be generated with a perspective projection, i.e.:

RiProjection("perspective", RI_NULL);

In both cases the field of view must be sufficient to enclose all the geometry that will cast shadows; to preserve the greatest precision in the shadow map, it should also be the smallest field of view that contains this geometry. The field of view is controlled by RiScreenWindow and, for perspective projections, the RI_FOV parameter to RiProjection.
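For a spotlight, a reasonable sketch is to derive the field of view from the shader's cone angle. The standard spotlight's coneangle is in radians while RI_FOV is in degrees, and a small amount of padding (5% here, an arbitrary choice) guards against clipping shadow casters at the edge of the cone. The coneangle variable is assumed to come from the scene description:

RtFloat fov = 2.0 * coneangle * (180.0/PI) * 1.05;   /* cone angle, converted and padded */
RiProjection("perspective", RI_FOV, (RtPointer)&fov, RI_NULL);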

Pointlights present a difficult problem in that they may cast shadows in a 360-degree field of view. Such a field of view cannot be specified with RenderMan for producing the shadow depth map. There are two approaches to overcoming this problem. First, if the geometry is such that shadows are only cast in a sufficiently narrow field of view, a pointlight may be treated in the same manner as spotlights. Any geometry that falls outside the view of the shadow depth map will be treated as if it were fully illuminated by the pointlight. The other approach is to generate a set of shadow depth maps that view the geometry from the light but in different directions. This is similar to the approach taken for environment maps. A special light shader, shadowpoint, has been written that selects among the set of shadow maps depending on the direction to the point being shaded.

Since PhotoRealistic RenderMan only needs to generate depth information when producing the shadow depth map, it is not necessary to include surface shaders or lights. Omitting them will improve the performance of generating the shadow depth map, as will removing all geometry that doesn't cast a shadow. Of course, one should not include a light shader that references the shadow map being generated.

An alternative to generating a shadow depth map file directly using the "shadow" display driver is to generate a simple depth file using the "zfile" display driver. A zfile must be turned into a shadow map by using either a call to RiMakeShadow or the stand-alone utility txmake. This method still requires that the height and width of the shadow texture (and hence the zfile) be powers of two.
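For example, a zfile written as light1.zfile (a hypothetical name) could be converted in a preprocessing pass with:

RiMakeShadow("light1.zfile", "light1.shd", RI_NULL);

The stand-alone txmake utility performs the same conversion from the command line using its shadow-conversion option.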

The shadow map can then be accessed in subsequent frames by using a light source shader that contains a call to the built-in shadow function. Notice that shadow returns the amount of shadow at a point in space rather than the amount of light; therefore, light shaders typically multiply the light intensity by 1-shadow(...). The following is an example of a distant light shader that uses a shadow map:

light
distshad(
    float  intensity=1;
    color  lightcolor=1;
    point from = point "camera" (0, 0, 0);
    point to   = point "camera" (0, 0, 1);
    string shadowname="";
)
{
    solar( to - from, 0.0 ) {
        Cl = intensity * lightcolor;
        /* Only attenuate by the shadow map if one was supplied. */
        if (shadowname != "")
            Cl *= 1 - shadow(shadowname, Ps);
    }
}

Order of Transformations

The RenderMan Interface specifies that the transformation hierarchy is maintained as a stack of world-to-object matrices, which can be easily pushed and popped to move subobjects. This is counter-intuitive to some people and can easily lead to confusion. One must always keep in mind that in this type of transformation hierarchy, the transformations are applied to objects in reverse order of their specification. That is, to transform an object from its "object" coordinate system to the "world" coordinate system, one applies the final transformation first.
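As a sketch, the following places a sphere that is first rotated about its own origin and then translated, even though the calls appear in the opposite order:

RiTransformBegin();
    RiTranslate(5.0, 0.0, 0.0);      /* applied to the sphere second */
    RiRotate(45.0, 0.0, 0.0, 1.0);   /* applied to the sphere first */
    RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
RiTransformEnd();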

Filters

The choice of pixel filter, combined with whether or not jitter is enabled, can have a profound effect on the "look" of the resulting image. In general, a soft, photographic look can be obtained by using jitter and a wide gaussian filter. This will produce good-looking antialiased results at a relatively low supersampling rate. A harder-edged look can be obtained by turning jitter off, increasing the amount of supersampling, and using a narrow box filter. The judicious use of these controls can allow one to develop one's own artistic style.
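As a sketch, the two looks might be set up as follows; the sample counts are illustrative:

/* Soft, photographic look: jitter on (the default), wide gaussian filter. */
RiPixelSamples(2, 2);
RiPixelFilter(RiGaussianFilter, 3.0, 3.0);

/* Harder-edged look: jitter off, more samples, narrow box filter. */
RtInt off = 0;
RiHider("hidden", "jitter", (RtPointer)&off, RI_NULL);
RiPixelSamples(4, 4);
RiPixelFilter(RiBoxFilter, 1.0, 1.0);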

 
