Pixar
January, 1996
Organized as a hands-on exercise guide to The RenderMan Companion, this document walks the reader through all the examples in the book to demonstrate the use of PhotoRealistic RenderMan to render images from programs written with the RenderMan Interface.
The reader should be very familiar with The RenderMan Companion by Steve Upstill (Addison-Wesley, 1990) and have a copy on hand when going through the tutorial. The relevant listings of each example program, as well as the accompanying descriptions and illustrations in the book, are not duplicated here. Instead, the reader is referred to the source code for the example programs, which is provided as part of the software distribution.
Besides an understanding of the computer graphics concepts presented in The RenderMan Companion, this document also assumes that the reader has a working knowledge of the C programming language and the UNIX operating system.
This document also assumes that the PhotoRealistic RenderMan software has been properly installed.
Here are some revisions to The RenderMan Companion issued over the years:
It's time to get our hands dirty. The RenderMan Interface outlined in Steve Upstill's The RenderMan Companion offers a standard for describing scenes to be rendered and specifying the appearance of these scenes in our final images. PhotoRealistic RenderMan is Pixar's implementation of this standard and allows us to put the concepts addressed by RenderMan into action. In this tutorial we will re-examine the example programs presented in The RenderMan Companion and turn them into working programs that generate rendered images with the help of the PhotoRealistic RenderMan software.
Although we will follow the outline of The RenderMan Companion, we will not duplicate listings of the example programs or the discussion about them. Consider this document a companion to the Companion -- an exercise document designed to make the examples in the book into hands-on exercises to illustrate the concepts of the book while acquainting you with the operation of PhotoRealistic RenderMan.
The RenderMan Companion
Before starting this tutorial, the reader really should read the book -- or at least those parts relevant to the reader's applications. As described in the Preface, the book is divided into four parts.
The example programs in the tutorial directory are divided into chapters to correspond with the book. Likewise, this tutorial is organized to follow this chapter layout.
The RenderMan Interface Specification
The reader should also be acquainted with the RenderMan Interface Specification as this is the official word on the RenderMan standard and the expected behavior of RenderMan interpreters such as PhotoRealistic RenderMan. It is divided into two parts, one describing the RenderMan Interface itself (types and functions, arguments to procedures, the Graphics State, definition of geometric primitives, motion blocks, error detection, texture map facilities, etc.), and the other describing the RenderMan Shading Language (types, predefined variables, expressions, control flow, shaders and functions, built-in functions, example shaders, etc.).
Any question about the expected behavior of a particular procedure call or shader construct should be answered using the Specification as the official reference.
The example programs for this tutorial are located in the directory /usr/local/prman/tutorial (which we will refer to as the tutorial directory from now on). Go to this directory by typing:
cd /usr/local/prman/tutorial
In the tutorial directory you will find subdirectories ch1 through ch16 corresponding to the 16 chapters of The RenderMan Companion. Each of these contains the example programs listed in the text of the associated chapter, plus enough additional material to compile and run them. The name of each file indicates which listing in the book it corresponds to.
The programs may not be identical to the actual listings, since additions and corrections are needed to make them actually compile and execute. For example, most of the listings beyond Chapter 2 are program fragments without essential initialization (such as camera positioning) or a main() routine. To accommodate these program fragments, the viewing boilerplate from Chapter 8 (Listing 8.5) has been made into a prototype main() routine. This main() routine is found in the main.c program in the parent directory. It does the essential initialization and camera positioning, sets renderer options, and calls a generic routine named Go(). Each of the example programs in the subdirectories is amended to define this Go() function as a call to the appropriate routine(s).
The details of how these programs are compiled and linked to build complete executables are handled by the makefile in each directory.
In each of the subdirectories you may find some or all of the following files:
There are two ways to render an image from a RenderMan program:
The use of intermediate RIB files has several advantages:
On the other hand, there are some constructs (such as RiProcedural() or RiTransformPoints()) which can't be handled without linking the application directly to the renderer. Likewise, a custom error handler set by the application cannot affect the actual rendering.
Linking directly to the renderer is also marginally more efficient.
Nevertheless, the examples in this tutorial are all executed using RIB files.
For each chapter in the book -- and thus for each section in this tutorial -- you will have to change to the appropriate directory among ch1 to ch16. There is also an additional ch17 directory which doesn't correspond to a chapter in the book, but to section 17 of this tutorial, which shows the use of options and attributes to control the behavior of the PhotoRealistic RenderMan renderer.
In each directory typing make all will compile and render all the programs in the chapter. In most cases (depending on the speed of your renderer), you can read through the tutorial text while the make runs through all the examples. Alternatively, you can compile and render each example independently, allowing a bit more time to examine the resulting images.
The tutorial text will walk you through the individual examples and point out the key concepts. Essentially, for each example, the RenderMan Interface program is compiled and linked with the necessary main() routine, header files, and libraries, and executed to generate a RIB file. The render utility takes this RIB file and generates an image.
Note: If the early examples seem to take a long time to render, don't be discouraged by thinking that the more complex images will take considerably longer. The length of time needed for a rendering can be related more to the size of the image than its complexity.
Note: The images are generated in the "framebuffer". A place holder exists in all the makefiles for you to insert a call to any utility you may have to save an image in a file. If /dev/framebuffer is not installed on your system, the RiDisplay call in main() can be amended to generate the image directly into a file. For example, just replace the RiDisplay call with:

RiDisplay("filename", RI_FILE, RI_RGBA, RI_NULL);
All the makefiles have a few choices in common.

make all
        compiles and renders all the examples in the directory. This is the default action and may be all you want to use.
make pics
        generates all the pictures in the directory -- which is equivalent to a make all in most cases.
make rib
        generates the RIB files, but does not render them.
make exec
        compiles the executable files without executing them.
make clean
        cleans the directory of all files that can be regenerated using the makefile. This includes the .rib, .slo, and .o files.

The parent directory (the tutorial directory itself) offers an additional choice which we should execute now. At the prompt, type the following:
% make camera
This generates two object files. These contain the FrameCamera() and PlaceCamera() routines used by the generic main() routine that completes most of the example programs in the book. The source for these is taken from the boilerplate listings in Chapter 8 and will be discussed in a bit more detail when we get there.
This tutorial does not have to be rigidly followed. There are some things you can do to make learning RenderMan with the tutorial more interesting. For example, you can improve the antialiasing of the rendered images by inserting the line:

RiPixelSamples(3.0, 3.0);
before RiWorldBegin in the main program (main.c). You can turn off jitter if the image still looks noisy by inserting the lines:
RtInt flag = 0;
RiHider("hidden", "jitter", (RtPointer)&flag, RI_NULL);
into the main program, again before RiWorldBegin.
We don't encounter any program listings in The RenderMan Companion until Chapter 2, which gets our feet wet with a few simple RenderMan programs. In this section we will find out what it takes to compile the listings in the book and actually generate an image using PhotoRealistic RenderMan.
We should note that the examples in this chapter are different from the rest of the book in that they provide their own main() functions, so we don't need the main() function defined for the other chapters in the main.c program described above.
Because all three of the alternative cube definitions share the same main() function, cubemain.c is used to call cube2_3.c, cube2_4.c, and cube2_5.c. The same is done for the ColorCube Listings 2.6 and 2.7.
One thing to note about these example programs is that "framebuffer" is specified as the destination of the rendered images. This is a slight deviation from the text on page 19 of The RenderMan Companion that says that the images would be rendered to the file ri.pic. This is to make the Chapter 2 examples consistent with the rest of the book and to avoid unnecessary consumption of disk space and concerns over file formats used to store the image. Besides, it is a bit more entertaining to watch the rendering on the monitor as it happens.
Finally, we should note that to increase efficiency, a call to RiSides(1) is added to the colorcube and animation code. RiSides(1) causes only the sides which face the camera to be rendered, and ignores the backfaces.
If you haven't already done so, move to the ch2 directory.
The first example in The RenderMan Companion is essentially a Hello World program, intended more to illustrate the minimum ingredients needed to compile and run a program than to do anything interesting. Compile and execute it by typing
% make basic2_1.pic
The make utility will let you know what it is doing:
The result on the screen will be a single large polygon -- a white square in the middle of the image.
The example in Listing 2.2 of The RenderMan Companion adds a few parameters to the first example. A distant light source is added, a perspective projection is declared, the viewer location and direction are changed, and the polygon is shaded as a matte surface with a greenish-blue color.
Typing
% make color2_2.pic
will run through the same steps as in the first example to generate the rendered image.
Listing 2.3 of The RenderMan Companion exists as two files in this directory. The main() routine has been split off into the file cubemain.c to allow it to be used with the other cube examples in this chapter. The file cube2_3.c is essentially identical to the rest of Listing 2.3 with a few minor corrections (described in the readme file).
Typing
% make cube2_3.pic
will render the six-polygon cube specified by the Cube[] array.
The next two cube examples use the same main() routine from Listing 2.3. The programs cube2_4.c and cube2_5.c are identical to Listings 2.4 and 2.5 respectively, and offer alternative definitions of our original cube.
Typing
% make cube2_4.pic cube2_5.pic
will compile these and link them with the object file compiled for the previous example.
The two images rendered will be identical to the cube rendered with Listing 2.3.
The two "color cube" examples also share the same main() routine, this time defined in colrmain.c.
Typing
% make ccube2_6.pic
or
% make ccube2_7.pic
should yield the same image: a cube made up of 4x4x4 smaller cubes each with its own color.
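The coloring scheme can be sketched in a few lines of C. This is an illustrative reconstruction (the book's ColorCube() may scale the components differently): each color component is simply the cube's grid coordinate normalized to the range [0, 1].

```c
/* Hypothetical sketch of ColorCube's per-cube coloring: the RGB color of
 * the small cube at grid position (x, y, z) in an n x n x n arrangement is
 * derived from its coordinates, normalized to the range [0, 1].
 * (Reconstruction for illustration; the book's exact scaling may differ.) */
typedef struct { float r, g, b; } Color;

Color CubeColor(int x, int y, int z, int n)
{
    Color c;
    c.r = (float)x / (n - 1);   /* red grows along x   */
    c.g = (float)y / (n - 1);   /* green grows along y */
    c.b = (float)z / (n - 1);   /* blue grows along z  */
    return c;
}
```

For n = 4, the corner cube at (0, 0, 0) comes out black and the one at (3, 3, 3) white, so each of the 64 small cubes gets a distinct color.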
The next example redefines the main() routine to create a simple animation. It uses the cube from Listing 2.7, varying the size of the smaller cubes, and the angle of rotation of the larger cube between frames.
Typing
% make anim2_8.pic
will render each frame in the frame buffer without saving it. The make can be stopped after the first few frames have been rendered and the nature of the animation is clear.
The program as listed in the book saves each frame in a separate file. If you have a utility for getting these back into the frame buffer and looping through the animation, feel free to change the program back to save the frames.
There is only one listing in Chapter 3: a boilerplate program illustrating the overall structure of a RenderMan program. In this boilerplate (Listing 3.1 on page 55) there are no parameters in the function calls, and lots of ellipses (...) used to represent omitted code. It is not something that is meant for execution, but rather just for visual inspection.
The first example in Chapter 4 of The RenderMan Companion generates the quadric surfaces illustrated in Figure 4.1: a sphere, a cone, a cylinder, a hyperboloid, a paraboloid and a torus. Type:
% make quads4_1.pic
to render the illustration.
This example, as well as all subsequent examples, provides a Go() function which calls the routine defined in the example -- ShowQuads() in this case. A main() function which calls Go() must be defined in a separate file and linked in with this and each subsequent example program. The program main.c, found in the parent directory of all the tutorial examples, is provided for this purpose. It is based on the viewing boilerplate program from Chapter 8, with viewing parameters redefinable for each example.
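Schematically, the linkage looks like the following sketch. This is a hypothetical, stubbed-out version: the camera and scene routines are replaced with counters so the sketch compiles standalone, while the real main.c of course performs the actual RenderMan initialization.

```c
/* Hypothetical sketch of the division of labor between the shared main.c
 * and an example file. The real routines make Ri* calls; here they are
 * stubs with counters so the structure can be checked standalone. */
int frames_begun = 0, scenes_drawn = 0;

void FrameCamera(void) { frames_begun++; }  /* stub for frame8_3.c's routine  */
void PlaceCamera(void) { }                  /* stub for place8_2.c's routine  */
void ShowQuads(void)   { scenes_drawn++; }  /* stub for the example's own code */

/* Each example file supplies a Go() that calls its own routine(s)... */
void Go(void) { ShowQuads(); }

/* ...while the shared boilerplate does the setup and then calls Go(). */
void RunExample(void)
{
    FrameCamera();   /* set up the projection / screen window */
    PlaceCamera();   /* position and aim the camera           */
    Go();            /* example-specific scene description    */
}
```

This is why each example needs only to define Go(): the boilerplate never changes from chapter to chapter.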
The header file quads4_1.h is provided for you to fill in these viewing parameters for the example in quads4_1.c (Listing 4.1). This header file and the ones provided for subsequent examples do not appear in The RenderMan Companion.
The next example in The RenderMan Companion generates a surface of revolution using an array of points defining a hyperboloid. Type:
% make surf4_2.pic
to start the rendering.
The RenderMan Companion doesn't provide any data for the surface profile. (I.e., it doesn't define the point array points[].) This is provided in hcontour.h along with the type definition for Point2D.
Also, Listing 4.2 doesn't have a Go() function, so surfor4_2.c has one added. This will be a common addition in subsequent examples.
The resulting rendering will be the bowling pin illustrated on page 64, composed of a series of bands (hyperboloids) assembled top to bottom.
The example in Listing 4.3 generates the circular wave pattern illustrated in Figure 4.2 on page 68 of The RenderMan Companion, using calls to RiTorus(). Type
% make wave4_3.pic
to generate the image.
The examples in Chapter 5 of The RenderMan Companion use polygons defined with RiPolygon() to create surfaces of revolution.
Listing 5.1 defines a function, PolyBoid(), which approximates a hyperboloid using triangular polygons. This pseudo-hyperboloid takes the form of a ring centered around the z axis composed of triangles, with each two adjacent triangles sharing two vertices. The bowling pin illustrated on page 71 of The RenderMan Companion is composed of a series of such rings, and we duplicate it in this example.
Type
% make flat5_1.pic
to render the image.
The Go() and PolySurfOR() functions have been added to Listing 5.1 to create flat5_1.c. The PolySurfOR() function is based on the one used in Listing 5.2, but simplified a little bit to demonstrate the effects of PolyBoid().
The hyperboloid contour data (the points[] array) and the surface color data (the colors[] array) used by PolySurfOR() are provided in hcontour.h. This header file is identical to the hcontour.h used in the Chapter 4 examples.
The result is the bowling pin illustrated on page 71, composed of flat-shaded, triangular polygons.
The next example defines another version of the hyperboloid-approximating function, PolyBoid(), which has the added capability to specify surface normals at each vertex of the band to RiPolygon(). This allows the smooth-shaded bowling pin on page 74 to be generated. Type
% make norml5_2.pic
to generate the image.
Listing 5.2 provides its own PolySurfOR() function to build the bowling pin using the pseudo-hyperboloid bands created by PolyBoid(). Nevertheless, there are a few corrections evident in norml5_2.c.
PolyBoid() is supposed to be provided as a triangular polygon approximation to hyperboloids, with the addition of suitable surface normals. It is meant as an improvement to the PolyBoid() function in Listing 5.1. What is provided in Listing 5.2, however, is a function called PolyBand(), which is semantically identical to the PolyBoid() function presented in Listing 5.3. The file norml5_2.c, however, defines the function described in the text, not the one in the listing.
One thing to note about the smooth-shaded image is the profile of the bowling pin, which is identical to the flat-shaded version of the previous example. Smooth shading will not hide the tell-tale profile of an object made up of polygons.
Listing 5.3 in the book demonstrates the use of the points-polygon form to define a set of adjoining polygons from their shared vertices. It uses RiPointsPolygons() in an alternative version of the previous example. Typing
% make point5_3.pic
will generate the same bowling pin, but with a much more efficient and economical version of the PolyBoid() function.
As in 5.1, the Go() and PolySurfOR() functions are missing from this listing, as is the contour data (the points[] array). The PolySurfOR() function provided in point5_3.c is borrowed from Listing 5.2. The contour data is provided in hcontour.h.
The examples in Chapter 6 of The RenderMan Companion demonstrate the use of parametric surfaces -- bicubic patches and patch meshes -- to specify surfaces. It contrasts this with the use of quadric surfaces (Chapter 4) and polygons (Chapter 5), by offering yet another alternative to our surface of revolution, the bowling pin.
The first example in Chapter 6 generates a bicubic patch from a control hull specified by the 4x4 geometry matrix Patch[]. It can also be used to render a representation of the control hull as a set of nine bilinear patches. The combination of these images is illustrated in Figure 6.2 of The RenderMan Companion.
Type
% make patch6_1.pic
to render the bicubic patch.
Note that for this example to do anything, either HULL or PATCH must be #defined, so such a line must be added to the file. We've defined PATCH for the first rendering. When this has finished and you've examined the resulting bicubic patch, change the first line in patch6_1.c to read:
#define HULL 1
and re-render the image by retyping
% make patch6_1.pic
to render the control hull.
As a final example, add back the line
#define PATCH 1
so that both HULL and PATCH are #defined and re-render the image to see the control points at work.
Listing 6.2 in The RenderMan Companion uses the same geometry matrix as Listing 6.1 with a different basis matrix, to generate a different bicubic patch from the same control hull. The previous example did not specify a basis matrix, so the default Bezier basis was used. In this example Catmull-Rom basis matrices are added to the current attributes in the graphics environment using a call to RiBasis(). The subsequent call to RiPatch() uses these basis matrices to generate the bicubic patch illustrated in Figure 6.4.
Type
% make catml6_2.pic
to generate the image. Note how the different interpretation of the control points results in a very different looking patch.
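The different interpretation can be checked numerically. Below is a one-dimensional sketch (illustrative only; RiBasis applies the same idea per coordinate of each control point): the same four control values evaluated under the Bezier basis and under the Catmull-Rom basis.

```c
/* Evaluate one cubic segment of a 1-D curve at parameter t in [0, 1]
 * from four control values, under the two bases. (Illustrative sketch;
 * these are the standard basis polynomials, not tutorial code.) */
double Bezier(double p0, double p1, double p2, double p3, double t)
{
    double s = 1.0 - t;
    return s*s*s*p0 + 3.0*s*s*t*p1 + 3.0*s*t*t*p2 + t*t*t*p3;
}

double CatmullRom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0*p1) + (-p0 + p2)*t
                + (2.0*p0 - 5.0*p1 + 4.0*p2 - p3)*t*t
                + (-p0 + 3.0*p1 - 3.0*p2 + p3)*t*t*t);
}
```

At t = 0 the Bezier segment passes through p0, while the Catmull-Rom segment passes through p1 (and at t = 1 through p3 and p2 respectively). The same control hull therefore yields two very different patches.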
Listing 6.3 offers an alternative to the surface-of-revolution-generator SurfOR() introduced in the last chapter. In this case, the surface of revolution is specified as a Bezier patch mesh. The control points of the mesh are computed by taking the original control points (the bowling pin profile as specified in the points[] array) and rotating them about the y axis. The rotation is performed using a set of coefficients representing four Bezier curves approximating a circle. Note that the first and last of these coefficients (in the coeff[] array) are redundant -- the patch mesh does not wrap.
Type
% make surf6_3.pic
to start the rendering.
A new set of control points is needed to produce the Bezier patches for the bowling pin; this is provided in pcontour.h, which is #included at the top of the file. Also, a Go() function is provided to call SurfOR().
There is a consistency problem between this listing and the other surface-of-revolution listings: this listing revolves about the y axis, while everything else revolves about the z axis. To make this example also revolve about the z axis, we have changed the assignments to mesh[][][] to read as follows:
mesh[v][u][0] = points[v].x * coeff[u][0];
mesh[v][u][1] = points[v].x * coeff[u][1];
mesh[v][u][2] = points[v].y;
The comments referring to XY plane and Y axis were also changed to XZ plane and Z axis, respectively.
When the rendering is completed you will end up with the bowling pin illustrated on page 100 of The RenderMan Companion. Note the smooth profile -- actually a spline approximating our contour data points.
Listing 6.4 is a variation on the previous example -- this time using a wrapped patch mesh. Note that the coeff[] array now contains 12 points instead of 13 as in the previous example -- the last point for the fourth patch in a row is taken from the beginning of the row, making a wrapped patch mesh.
Type
% make wrap6_4.pic
to render the image.
As in Listing 6.3, a new set of control points is needed to produce the Bezier patches; these points are in pcontour.h. A Go() function is also provided.
Another thing to note is the use of the tokens RI_PERIODIC and RI_NONPERIODIC instead of RI_WRAP and RI_NOWRAP as in the listing in the book. This is to correspond with the latest RenderMan Specification which doesn't use the WRAP/NOWRAP terminology.
The resulting image will be identical to the previous example: the smooth bowling pin illustrated on page 100 of the book.
The examples in Chapter 7 illustrate some techniques of combining the primitive surfaces described in the last three chapters -- quadrics, polygons, and parametric surfaces -- to create more complex objects and scenes.
Note that some of the Constructive Solid Geometry examples may take a bit longer to render.
This listing uses some files from Chapter 4 to create a hyperboloid bowling pin, which is then capped on the bottom with a disk to form a solid.
Type
% make surf7_1.pic
to generate the first image. Note that this uses surf_go.c which #includes surf7_1.c (Listing 7.1).
The other files needed are hcontour.h and the SurfOR() function from Listing 4.2. Note that a new Go() function is provided here, which is not in The RenderMan Companion. This Go() function calls the SolidSurfOR() function presented in Listing 7.1.
The resulting image will be our familiar surface of revolution bowling pin.
Listing 7.2 illustrates some of the techniques and uses of Constructive Solid Geometry, by defining a number of composite shapes from a few simple primitives. The strategy is to construct a closed-surface hemisphere as the intersection between a sphere and a strategically sized and placed cylinder. Two of these composite solids (two hemispheres) can then be combined to create a wedge with an arbitrary angle.
Type
% make wedge7_2.pic
to start the rendering.
The program wedge7_2.c is identical to Listing 7.2. The program wedge_go.c provides the necessary Go() function to generate the image illustrated in Figure 7.6 on page 130.
Listing 7.3 illustrates the use of the RI_DIFFERENCE operation of Constructive Solid Geometry, as an added tool to the RI_INTERSECTION and RI_UNION operations used in the last example. Type
% make ball7_3.pic
to generate the image.
A Go() function is added to this listing, and the wedge routines from Listing 7.2 are #included at the bottom of the file.
A couple of cosmetic changes have been made to the description of the bowling ball. The ball is assigned a light blue color, to help contrast against the black background. Another change is made in the second call to RiRotate(), so that it reads
RiRotate( 30.0, -1.0, 1.0, 0.0);
This makes the holes seem more naturally spaced.
The most dramatic change to this listing is based upon the fact that the current implementation of PhotoRealistic RenderMan allows only for binary solid operations, i.e., we cannot simply take the union of three solids. We get around this by nesting the RI_UNION operations in BowlingBall(). Rather than taking the union of three objects (the BowlingBallPlug()) we take the union of two of them, and combine it (union again) with the third.
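The nesting can be sketched structurally. In this hypothetical sketch the RiSolidBegin()/RiSolidEnd() pair is stubbed with counters so the bracketing can be verified standalone; the real code brackets actual geometry.

```c
/* Stubs standing in for RiSolidBegin()/RiSolidEnd() (hypothetical, so the
 * bracketing structure can be checked without the renderer). */
int depth = 0, maxdepth = 0, leaves = 0;

void SolidBegin(const char *op)
{
    (void)op;
    if (++depth > maxdepth) maxdepth = depth;
}
void SolidEnd(void)        { --depth; }
void BowlingBallPlug(void) { leaves++; }   /* one primitive solid */

/* Union of three solids expressed with binary unions only:
 * (plug1 U plug2) U plug3 */
void ThreeWayUnion(void)
{
    SolidBegin("union");        /* outer union: two operands          */
      SolidBegin("union");      /*   inner union: plug1 and plug2     */
        BowlingBallPlug();
        BowlingBallPlug();
      SolidEnd();
      BowlingBallPlug();        /*   third plug is the second operand */
    SolidEnd();
}
```

Every solid block thus has exactly two operands, which is all the current implementation requires.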
The next example in The RenderMan Companion illustrates the technique of object instancing. A retained model (BowlingPin()) is defined within an RiObjectBegin()/RiObjectEnd() block and given the object handle phandle. This handle is then passed in ten calls to RiObjectInstance() to generate ten instances of the retained model.
Type
% make pins7_4.pic
to start the rendering.
Note that a Go() function is provided, and the apocryphal routine BowlingPin() is defined. BowlingPin() simply calls the hyperboloid surface of revolution example from Listing 4.2. A call to RiColor() is added to give the pins an appropriate color.
This is the chapter which defines the viewing boilerplates that all the other chapters rely on. main8_1.c is a simple boilerplate, place8_2.c provides for camera placement and aiming, frame8_3.c sets the projection plane by using RiScreenWindow(), frame8_4.c uses RiFrameAspectRatio() as an alternative to RiScreenWindow(), and boiler8_5.c provides the main() function for calling the camera placement and framing routines.
Thus, what we have in this chapter are really three executable examples: the first is the simple boilerplate of Listing 8.1 enhanced with a simple scene to view; the second combines Listings 8.2, 8.3, and 8.5 for a better viewing boilerplate; and the third replaces Listing 8.3 with Listing 8.4 to show an alternative way to specify the view window.
In order to have this listing do something visible, a torus was put in front of the camera. The lines presented below were inserted between the RiWorldBegin() and RiWorldEnd() (replacing the line <Scene is described here>):
RiTranslate(0.0, 0.0, 5.0);
RiTorus(0.75, 0.4, 0.0, 360.0, 360.0, RI_NULL);
To compile the boilerplate program and render the torus, type
% make main8_1.pic
It should run fairly quickly as there is not much to render, just a white torus.
The example program place8_2.c is identical to Listing 8.2 in The RenderMan Companion, and shows a useful setup procedure for a viewing transformation based on the position, direction and orientation of a camera. By itself it doesn't render anything (it just positions a camera), and must be combined with the code from Listing 8.5 to do anything interesting. We'll combine this with the next example (the camera specification routine, FrameCamera()) to make a more interesting camera description.
Note that this listing was used to create the object file in the parent tutorial directory that is used with the examples in the other sections.
The example program frame8_3.c is identical to Listing 8.3 in The RenderMan Companion, which takes a camera description in terms of the focal length of a lens and the width and height of the film and manipulates the screen window using RiScreenWindow() to simulate that camera, leaving the field of view fixed. It also must be combined with the code from Listing 8.5 to compile anything interesting.
Typing
% make frame8_3.pic
will generate our next image. This combines Listing 8.3 with the camera placement routine in the previous example and the boilerplate program in Listing 8.5 to render another torus.
Note that this listing was used to create the object file in the parent tutorial directory that is used with the examples in the other sections.
Listing 8.4 offers an alternative camera description to the one presented in the previous example. The difference is that rather than using RiScreenWindow() to select the part of the world along the direction of view for rendering, FrameCamera() now explicitly sets the field of view using the RI_FOV parameter to RiProjection().
Typing
% make frame8_4.pic
repeats our last exercise, but substitutes Listing 8.4 for Listing 8.3, (i.e. changes the camera specification FrameCamera()).
Note that the caption to Listing 8.4 in The RenderMan Companion is misleading. RiPerspective() is not used in the listing at all, although it could be with similar results. As described back in Chapter 7 (pages 113-115), RiPerspective() adds a perspective transformation onto the current transformation. Note that this has the effect of making camera space into a perspective space, which is not our intention in this example. Instead, we use RiProjection("perspective", ...) with the RI_FOV parameter to specify a perspective projection from camera space to screen space without this undesirable side effect.
Listing 8.5 offers a revised version of Listing 8.1, using the FrameCamera() and PlaceCamera() routines defined in the other listings to define and locate the camera.
In order to have this listing do something visible with the other listings, a torus was put in front of the camera. The lines presented below were inserted between the RiWorldBegin() and RiWorldEnd() (replacing the line ...Your scene here...):
RiTranslate(0.0, 0.0, 5.0);
RiTorus(0.75, 0.4, 0.0, 360.0, 360.0, RI_NULL);
This listing was used with the frame8_3.pic and frame8_4.pic examples, so there is nothing new to make here.
There are no code listings in Chapter 9, which introduces a large number of tools to improve on our virtual camera model of the viewing process. This includes the following constructs with which you can experiment.
RiGeometricApproximation(RI_FLATNESS, 2.0);
RiPixelSamples(1, 1);
RiPixelFilter(RiCatmullRomFilter, 5, 5);
RiPixelVariance(1.0/16.0);
RiExposure(0.5, 0.5);
RiImager(imager);
RiQuantize(RI_RGBA, 2048, 0, 2048, 1.0);
RiDepthOfField(1.4, ...);
RiTransformBegin();
RiMotionBegin(2, 0.0, 1.0);
RiRotate(10.0, 1.0, 0.0, 0.0);
RiRotate(20.0, 1.0, 0.0, 0.0);
RiMotionEnd();
RiSphere(1.0, -.7, .7, 270.0, RI_NULL);
RiTransformEnd();
RiShutter(0, 2);
There are only two examples that can be effectively executed in this chapter: Listing 10.1 and Listing 10.3. Because PhotoRealistic RenderMan currently doesn't support multiple levels of detail, Listing 10.1 is not fully functional and Listing 10.2 is not provided here.
Only Listing 10.3 (fractals) provides a Go() function to be called by our generic main(). Since Listing 10.1 uses the ColorCube() model from Chapter 2, the simple viewing setup from that example is provided instead of the standard Go() function.
PhotoRealistic RenderMan currently doesn't support multiple levels of detail. What it does when presented with models defined in this manner is to use the model defined for the detail range RI_INFINITY. Since the effect described in the book is not supported, and since the mathematics required to generate a generalized geodesic dome is non-trivial, the Dome() function is not presented here. Instead, the color cube from Chapter 2 is used as the model. The following replacements in Domes() have been made:
Replace        With
-----------    --------------------
Dome(4);       ColorCube(2, 0.9);
Dome(8);       ColorCube(3, 0.9);
Dome(16);      ColorCube(4, 0.9);
Dome(32);      ColorCube(5, 0.9);
As noted above, the standard Go() function is not provided for this example. A main() function has been borrowed from the Chapter 2 examples to call the Domes() function.
Type
% make dome10_1.pic
to generate the image.
Listing 10.3 is a demonstration of the use of procedural models, which by definition are called during rendering to specify object geometry. The use of RIB files, however, prevents us from using RiProcedural() calls*. Our amended example program skirts around this limitation by #defining any calls to RiProcedural() to be calls to FractalDiv(). An important implication of this is that the renderer is not maintaining the data structure. (*Without linking directly to the renderer, we cannot pass function pointers to it.)
Type
% make frac10_3.pic
to start the rendering.
We should note that there are many other differences between what is presented in this directory and what is in The RenderMan Companion.
First, a FractalFree() function is provided here that does not appear in the book's example. Second, there is a substantial Go() function that is not provided in the book. Third, there are faults in both the syntax and semantics of the listing as presented in the book. These are described in the readme file in this directory if you are interested.
The two listings in this chapter are for illustrative purposes rather than to provide something which actually runs. However, they are very effective in showing the differences made by using different types of lights and different constants for surfaces. Thus there is one source file in this directory, which combines the lighting (11.1) and surface (11.2) listings from this chapter with the demonstration of quadric surfaces from Chapter 4. Type
% make comp11.pic
to run the example.
Note that because this example uses a sequence of frames, it needs its own copy of main.c. This special copy does not enclose the call to Go() within RiWorldBegin() and RiWorldEnd().
In all four images, the sphere shows only ambient light, the cone illustrates the diffuse contribution, and the cylinder has only specular highlights. All three objects in the bottom row of each image use the ambient, diffuse, and specular components.
Be aware that several of the surface constants are being set to zero, which will result in "invisible" surfaces. For example, the cone and cylinder in the ambient light image cannot be seen because their surfaces are declared with no ambient component. Also notice that the intensity is boosted for the point and spot lights, because their intensity falls off with the square of the distance.
The examples in Chapter 12 demonstrate the use of texture maps and shadow maps to enhance the appearance of surfaces in an image. The texture map used in the early examples is a simple grid allowing easy illustration of the use of texture coordinates to map sections of the texture map to areas of the surface.
The first example in Chapter 12 uses RiSurface() to establish the shader "mytexture" as the current shader, and passes to it a pointer to the name of the texture map it expects ("grid.txt"). A bilinear patch is then rendered twice, once with the default texture coordinates (the entire texture map) and a second time with different coordinates (the textcoords[] array). The result is the two patches illustrated in Figure 12.4.
Type
% make parm12_1.pic
to generate the image.
The next example in the book wraps the same texture map just used around our bowling pin from Chapter 4. This is done by using RiTextureCoordinates() to map the appropriate part of the texture map around each hyperboloid band of the surface of revolution (the bowling pin).
Type
% make pins12_2.pic
to start the rendering.
As usual Go() needs to be provided. It simply sets the proper object space rotation, and calls MapSurfOR(). Of course, hcontour.h is needed again to create the bowling pins, and <math.h> must be #included because of the call to sqrt().
The next two examples in Chapter 12 demonstrate the creation of an environment map (an image of the world from the point of view of an object in the scene) and its use as a reflection map. Listing 12.3 creates an environment map which is used in Listing 12.4.
Type
% make envr12_3.pic
to render the example. Nothing visible will happen while the environment map is being created. This example combines the two listings into one program, which uses the shader shiny.sl, borrowed from Chapter 16.
Listing 12.5 in The RenderMan Companion illustrates a simple example for creating and using a shadow map using RiMakeShadow() and RiLightSource() respectively.
Type
% make shdw12_5.pic
to compile and render the image.
Note the presence of SetupInv(), Setup(), and DoScene() in shdw12_5.c. The scene is our familiar ColorCube, which casts shadows on itself. In shdw12_5.c, there is also the introduction of a specular constant, Ks, for use with the plastic surface of the colorcube, and an intensity for use with the ambient light source. A new function, View(), is added to place the eye at a strategic location for viewing the shadows on the colorcube. This example does not use Go().
Chapters 13, 14 and 15 introduce the use of shaders with RenderMan and show numerous examples written in the RenderMan Shading Language. These shaders are in files suffixed .sl and compiled by the shader utility into .slo files. Only Chapter 13 has any listings to execute.
The only example in Chapter 13 is the application of a "clouds" shader to the Bezier bowling pin created in Chapter 6.
Type
% make surf13_1.pic
to render the example (the bowling pin illustrated on page 283).
This is the standard Bezier bowling pin from Chapter 6 with two changes: the color is set to a light blue, and a call to RiSurface() is used to access the "clouds" shader.
The "clouds" shader in this directory differs from the presentation in The RenderMan Companion. The book's listing results in an extremely dark image because the sum of the noise rarely exceeds 0.1. Although lighting can be used here, a more effective looking shader ignores the lights and scales the sum somewhat.
As an example of a RenderMan program that references a shader, let's walk through the compilation of this example. Type
% make surf13_1.pic
and the makefile, as usual, will list out what it is doing.
Chapter 14 presents a somewhat formal description of concepts and constructs of the RenderMan Shading Language. There are no listings to execute.
Chapter 15 continues our description of the RenderMan Shading Language with a discussion of a number of built-in shading functions that shaders and other functions can call. There are no listings to execute in this chapter.
Chapter 16 provides a large assortment of shaders written in the RenderMan Shading Language -- 38 of them to be exact. The listings contain only the shaders without any RenderMan code to exercise them. The example programs in ch16 provide a few objects and scenes that use these shaders.
There are a number of fixes and additions to the shaders as presented in the listings. These are described in the readme file if you are interested.
Type
% make all
to generate all the examples in the directory, or you can generate any or all of the images independently as described in the following sections.
The first few examples are intended to show off some of our Gallery of Shaders by using some of the simple quadric shapes from Chapter 4.
Type
% make quads_a.pic
to generate the first example. This is the six quadrics in Figure 4.1 with the following shaders applied:
In addition, two light sources are defined using the shaders:
Type
% make quads_b.pic
to generate the next example. It applies the following shaders to the same quadrics illustration:
The two light sources defined use the following shaders.
Type
% make quads_c.pic
This example uses the following shaders:
The two light sources defined use the following shaders.
Type
% make pins_a.pic
This is the bowling pins scene generated in Chapter 7 with a number of shaders added.
The pins use the plastic.sl shader and the floor of the scene uses wood.sl. There are three light sources defined using the ambientlight.sl and pointlight.sl shaders from previous examples, plus the spotlight.sl shader (Listing 16.8).
Type
% make pins_b.pic
The same scene is rendered again, adding the atmosphere shader depthcue.sl, which can be seen in the way the pins in the background appear much darker than the front pin.
It should be noted that the depth-cue shader varies the shading over a range between the near and far clipping planes (using the built-in shader function depth()). As a result, in order for the shader to have any meaning, the near and far clipping planes must be set to something reasonable, rather than the default values of EPSILON and INFINITY. This is done with the definitions of the CLIPNEAR and CLIPFAR values in pins_b.h, which are used in a call to RiClipping() in the main() routine in the parent directory.
Type
% make pins_c.pic
The third pins example renders the same scene, but replaces the depthcue.sl atmosphere shader with fog.sl (Listing 16.10). The effect is one of a white haze which permeates the scene and makes the background pins appear darker than the front pin.
Type
% make pins_d.pic
The next pins example renders the same scene, but replaces the spotlight.sl shader with a light source which uses the windowlight.sl (Listing 16.21) shader.
You may find that this image has a green shadow on the bowling pins. This is because, in the windowlight light shader, the darkcolor parameter is defined to be (.05, .02, .1), which is distinctly green. A better color choice would be (.1, .1, .1), or something similar. You can experiment.
Type
% make pins_e.pic
The last pins example renders the same scene and introduces the use of shadows into the picture. First the pinshad.c program (based on Listing 7.4) is compiled and rendered to generate the basic bowling pins scene. The image is rendered directly into the file pinshad.pic.
The utility txmake is then executed to generate a shadow map from pinshad.pic. This is kept in the texture map file pinshad.txt.
As always, the shaders used are compiled with the shader utility. In this case the only shader we are adding from previous examples is the shadowspot.sl shader (Listing 16.33). This is compiled and saved in the shadowspot.slo file.
Finally, the pins_e.c source file is compiled and run and the resulting RIB file is rendered. The result is the bowling pins scene, with a version of the spotlight that casts shadows added.
You may find the shadows somewhat ragged. You can improve their appearance by, for example, raising the resolution of the shadow map by changing the RiFormat in pinshad.c. Try a resolution of 256x256. Alternatively, you can blur out the ragged edges by raising the swidth and twidth parameters (described on page 129 of the RenderMan Interface Specification) to the shadow() function in the shadowspot shader. Since 1.0 is the default, try 2.0.
Type
% make bulb.pic
This renders an object with a number of different shaders and displacement maps applied to different parts of it. The shaders used are:
The image also uses three light sources: an ambient light source using the ambient.sl shader, and two point light sources at different locations, both using the same pointlight.sl shader.
Type
% make realpin.pic
This example generates a single bowling pin from the cover of The RenderMan Companion as illustrated in Plate 12.
It uses only the shader pin_color.sl and the displacement map shader gouge.sl to shade the surface of the pin.
The image also uses four light sources: an ambient light source using the ambient.sl shader, and three directional light sources at the same location and using the same pointlight.sl shader, but with different directions.
Type
% make pencil.pic
This example generates a single pencil -- the middle of the three pencils in Plate 16 of the book.
The shaders used here are:
The image also uses two light sources -- an ambient light source using the ambient.sl shader, and a distant light source using the distantlight.sl shader.
We've now run through all the examples in The RenderMan Companion. This section enters into some of the details of the rendering algorithm implemented in PhotoRealistic RenderMan, and how to affect the behavior of the renderer by setting renderer options and attributes.
Effective use of PhotoRealistic RenderMan, Pixar's implementation of the RenderMan Interface, requires some knowledge about its rendering algorithm. PhotoRealistic RenderMan uses a scanline-type algorithm designed to be most effective for scenes with very large numbers of primitives, where ray-tracing and radiosity algorithms are less useful because of problems in managing this quantity of data.
The central concept of the PhotoRealistic RenderMan rendering algorithm is that every primitive is broken down into a common representation consisting of very small quadrilaterals known as micropolygons. The actual sizes of the micropolygons are determined by the shading rate (see Chapter 11) which can be thought of as the area of an average micropolygon in pixels.
The first step in the breakdown process is to estimate the size of a primitive with a bound routine, which computes a bounding box for the primitive. The primitive is then diced into a grid of micropolygons based on the screen area covered by the bounding box. If a primitive is too big or complex to be bounded or diced, it is first split into other (hopefully now diceable) primitives. Some of these may in turn require further splitting, and this continues until all primitives are diceable. During this process, any primitive with a bounding box that lies entirely outside the clipping window is culled, or thrown away.
After a primitive is diced, each new micropolygon is displaced and shaded by the displacement and surface shaders that are attached to the primitive. The micropolygons are then converted to screen space and scan-converted using an extended form of z-buffer that does depth antialiasing and transparency. To perform spatial antialiasing, each sample point on the micropolygons is jittered, or displaced from a regular grid by a small random amount. This replaces the aliasing that is a result of regular sampling with noise, which often looks better to the eye. PhotoRealistic RenderMan also implements motion blur by jittering the sample points in time. See Chapter 9 for more information on sampling and antialiasing.
To use memory more efficiently, the image is divided into small rectangular regions known as buckets. Instead of processing all of the micropolygons and sample points at the same time, PhotoRealistic RenderMan processes one bucket at a time, producing a rectangle of pixel values and passing any unfinished micropolygons on to the next bucket.
This section is an overview of various RiOptions that affect the steps in the rendering process. More detailed information may be found in Section 4.1 of the User's Manual, along with information about other options not described here. To run the examples, go to the ch17 directory.
The size of the buckets used by the renderer may be controlled by the user. The bucketsize does not affect the quality of the image, but does affect the speed of the renderer. Large buckets can permit more efficient processing, especially when the scene contains large primitives, because they can process a larger area of the primitive in one "scoop". However, they also use up more memory because they require a larger z-buffer.
The optimal bucketsize is the largest that can be accommodated without causing the renderer to thrash or run out of memory, and this size will vary from system to system and from scene to scene. The best way to determine a good bucketsize is to find one that works for most scenes, and change it only when a particularly slow or memory-intensive scene is encountered.
To render a scene with a relatively large bucketsize, type
% make buck17_1.pic
This example uses the file opt_1.c containing the RenderMan calls
static RtInt bucketsize[2] = {24, 24};
RtInt gridsize = 144;

RiOption("limits", "gridsize", (RtPointer)&gridsize,
         "bucketsize", (RtPointer)bucketsize, RI_NULL);
This example sets the bucketsize to 24 × 24 and also sets the gridsize (see below) since the two are closely related.
Now to render an image with smaller buckets, type
% make buck17_2.pic
This example uses the file opt_2.c with the RenderMan calls
static RtInt bucketsize[2] = {6, 6};
RtInt gridsize = 9;

RiOption("limits", "gridsize", (RtPointer)&gridsize,
         "bucketsize", (RtPointer)bucketsize, RI_NULL);
Here the bucketsize is set to 6 × 6, and the gridsize is again set accordingly.
The maximum number of micropolygons that will be produced in a grid is given by the gridsize. If dicing a primitive into micropolygons with sizes given by the shading rate would produce more than this number of micropolygons, the primitive will be split up further. Larger grids are shaded more efficiently because they can take better advantage of coherence, but they produce more micropolygons. If any of these micropolygons lie outside of the bucket where they are first produced, they will remain in memory until they are processed. As with bucketsize, the value of gridsize thus reflects a trade-off between efficiency and memory usage. In general it is best to use the largest gridsize possible, but if memory requirements demand a small gridsize it is useless to reduce it below the product of the bucketsize dimensions divided by the shading rate. Grid sizes smaller than this save no memory. Note that in the two examples from the previous section the gridsizes were set to this number (144 and 9, respectively).
Similar to bucketsize, the gridsize does not affect the quality of the image, except that grids that are too large or small will make surfaces that have problems with patch cracking look even worse.
Another speed/memory tradeoff occurs in handling texture, shadow, and environment map files. Data from these files are cached by the texture system of the renderer, and the size of the cache is set with the parameter texturememory. Larger values of texturememory increase texture mapping efficiency, but they also increase the total memory usage. This option, like bucketsize, does not affect the quality of the picture.
Another option, eyesplits, controls the processing of primitives that cross both the eye plane and the near clipping plane. Because these primitives fall within the visible part of the scene, they will not be culled, but the points on them that lie too near the eye plane will have bogus values after the perspective divide. To solve this problem, the renderer recursively splits all such primitives until the ones that are left in the visible window no longer have this problem.
Nevertheless, there are times when no amount of splitting will work. The most common example of this situation is a primitive that spans the eye plane and the near clipping plane because of its motion in a motion-blurred frame. No matter how small the renderer splits the primitive in space, each piece of it will still cross these planes. The renderer deals with this problem by splitting for several levels of recursion and then culling any primitives that have not been resolved. The eyesplits parameter controls the number of levels of recursion, and selection of an appropriate value is again a tradeoff. Too many levels of recursion may create an excessive number of primitives which will slow everything down and require a large amount of memory. Fewer levels will require less memory and less computation, but the culled primitives will be larger and can create noticeable holes in the objects.
To see an example of a scene where eyesplits is set correctly, type
% make splt17_3.pic
This example uses the file opt_3.c, which makes the RenderMan calls
RtInt splits = 8;
RiOption("limits", "eyesplits", (RtPointer)&splits, RI_NULL);
This sets the recursion limit to eight levels. To see some of the scene geometry disappear, type
% make splt17_4.pic
The RenderMan calls (in opt_4.c) are
RtInt splits = 4;
RiOption("limits", "eyesplits", (RtPointer)&splits, RI_NULL);
Here the renderer prints warnings that some patches wouldn't split nicely at only 4 levels of recursion. There are recursion levels between 4 and 8 for which such warnings are printed but the scene is not visibly affected. The lowest of these (6 in this example) is actually the optimum level for eyesplits: it should be as small as possible without visible effects, even if warnings are printed.
Although not an RiOption, it is possible to choose whether or not pixel sample points will be jittered by using the "jitter" parameter to the RiHider "hidden". The amount of jitter is determined by the renderer and is not controllable by the user. Using jitter substitutes noise for aliasing, as described in the Overview above, but it can make surfaces that have patch-cracking problems look even worse. It is very important to use jitter with motion blur -- motion blur doesn't work otherwise.
For an example of a scene with aliasing problems (with no jitter), type
% make jitt17_5.pic
This example contains the RenderMan calls (in opt_5.c)
RtInt flag = 0;
RiPixelSamples(3.0, 3.0);
RiHider("hidden", "jitter", (RtPointer)&flag, RI_NULL);
Here the number of pixel samples has been turned up to highlight the difference between this image, with no jitter, and the next, where jitter is turned on. To see this, type
% make jitt17_6.pic
This contains (in opt_6.c)
RtInt flag = 1;
RiHider("hidden", "jitter", (RtPointer)&flag, RI_NULL);
The user can control the way the renderer calculates shadows (Chapters 12 and 15) with the two parameters bias0 and bias1. If these parameters are not set correctly, scenes with shadow maps can exhibit problems with numerical accuracy because a depth value stored in the shadow map for a surface can be slightly different than the value calculated by the renderer for the surface when the shadow map is used. This can cause a surface to shadow itself. The bias values lift the shadow map slightly away from surfaces by adding a small value to the depth stored in the shadow map. To avoid aliasing, the user gives a range of bias values and the renderer picks a random value in that range.
For an example of a scene with a bad bias setting, type
% make shad17_7.pic
This example makes the RenderMan calls (in opt_7.c)
RtFloat Bias0 = 0.0, Bias1 = 0.0;
RiOption("shadow", "bias0", (RtPointer)&Bias0,
         "bias1", (RtPointer)&Bias1, RI_NULL);
The symptoms of the problem are the little grey spots all over the spheres. To see them disappear, type
% make shad17_8.pic
This now contains (in opt_8.c) the RenderMan calls
RtFloat Bias0 = 0.3, Bias1 = 0.4;
RiOption("shadow", "bias0", (RtPointer)&Bias0,
         "bias1", (RtPointer)&Bias1, RI_NULL);
The user can exert further control over shadow calculations using the "samples", "swidth", and "twidth" parameters to the shadow() function described in Chapter 15. Increasing "samples" can prevent some types of aliasing, and "swidth" and "twidth" can be used to spread out and soften the shadow. These parameters are included in the "shadowlight" shader used in these examples, so if you would like to see their effects you can change the values of NumSamples and ResFactor in opt_8.c and remake the last example. For more information, see the [REEVES87] reference listed in The RenderMan Companion bibliography.
It is possible to control the way the rendered image will be output to a framebuffer by using the special RiDisplay parameter "origin". This maps the origin of the rendered image to a given pixel in the framebuffer. To see an example, type
% make off17_9.pic
This example contains the RenderMan calls (in opt_9.c)
RtInt offset[2];

offset[0] = 0;   offset[1] = 0;
RiDisplay(filename, "framebuffer", "rgba",
          "origin", (RtPointer)offset, RI_NULL);
/* other calls to render a cyan sphere */

offset[0] = 128; offset[1] = 0;
RiDisplay(filename, "framebuffer", "rgba",
          "origin", (RtPointer)offset, RI_NULL);
/* other calls to render a magenta sphere */

offset[0] = 0;   offset[1] = 96;
RiDisplay(filename, "framebuffer", "rgba",
          "origin", (RtPointer)offset, RI_NULL);
/* other calls to render a yellow sphere */

offset[0] = 128; offset[1] = 96;
RiDisplay(filename, "framebuffer", "rgba",
          "origin", (RtPointer)offset, RI_NULL);
/* other calls to render a white sphere */
This section describes RiAttributes specific to PhotoRealistic RenderMan that may be associated with individual primitives. For more detailed information see Section 4.2 of the User's Manual.
When dicing primitives, it is sometimes desirable to produce grids in which the number of micropolygons along a side is a power of two (16 × 32, for example). The binary dicing attribute will force the renderer to create grids in this manner. This is especially useful in preventing cracking on high-curvature patches because it guarantees that, for a given primitive, the edges of the micropolygons will line up at boundaries between grids of different-sized micropolygons.
To see a scene with patch cracks, type
% make bnd17_10.pic
This example makes the RenderMan calls (in opt_10.c)
RtInt flag = 0;
RiAttribute("dice", "binary", (RtPointer)&flag, RI_NULL);
The cracks are the bright and dark dots. To make them go away, type
% make bnd17_11.pic
This example contains (in opt_11.c)
RtInt flag = 1;
RiAttribute("dice", "binary", (RtPointer)&flag, RI_NULL);
Every time that a displacement shader is used, it is necessary to let the bounding routine know about the maximum potential displacement for that shader so that it can alter the size of the bounding box accordingly. Setting the displacement bound attribute too low, or failing to set it all, will cause micropolygons that are displaced beyond the limits of the bounding box to be thrown away, producing holes in the displaced surface. Setting the bound too high can keep primitives around needlessly because the renderer thinks they are bigger than they really are. This slows things down and eats up memory. For a good bound value use the actual maximum displacement the shader will produce.
To see an example displacement shader, type
% make dsp17_12.pic
This contains the calls (in opt_12.c)
RtFloat maxdisplacement = 0.25;
RtString space = "current";

RiAttribute("displacementbound", "sphere", (RtPointer)&maxdisplacement,
            "coordinatesystem", (RtPointer)&space, RI_NULL);
RiDisplacement("wave", "amplitude", (RtPointer)&maxdisplacement, RI_NULL);
The "wave" shader puts sinusoidal waves of amplitude 0.25 on a surface. To give this shader a bound problem, type
% make dsp17_13.pic
This contains only these calls (in opt_13.c)
RtFloat maxdisplacement = 0.25;
RiDisplacement("wave", "amplitude", (RtPointer)&maxdisplacement, RI_NULL);
Because the displacementbound attribute is left out, the image is rendered incorrectly.