Graphics State


The RenderMan Interface is similar to other graphics packages in that it maintains a graphics state. The graphics state contains all the information needed to render a geometric primitive. RenderMan Interface commands either change the graphics state or render a geometric primitive. The graphics state is divided into two parts: a global state that remains constant while rendering a single image or frame of a sequence, and a current state that changes from geometric primitive to geometric primitive. Parameters in the global state are referred to as options, whereas parameters in the current state are referred to as attributes. Options include the camera and display parameters, and other parameters that affect the quality or type of rendering in general (e.g., global level of detail, number of color samples, etc.). Attributes include the parameters controlling appearance or shading (e.g., color, opacity, surface shading model, light sources, etc.), how geometry is interpreted (e.g., orientation, subdivision level, bounding box, etc.), and the current modeling matrix. To aid in specifying hierarchical models, the attributes in the graphics state may be pushed and popped on a graphics state stack.

The graphics state also maintains the interface mode. The different modes of the interface are entered and exited by matching Begin-End command sequences.


RiBegin( RtToken name )
RiEnd( void )

RiBegin creates and initializes a new rendering context, setting all graphics state variables to their default values, and makes the new context the active one to which subsequent Ri routines will apply. Any previously active rendering context still exists, but is no longer the active one. The name may be the name of a renderer, to select among various implementations that may be available, or the name of the file to write (in the case of a RIB generator). RI_NULL indicates that the default implementation and/or output file should be used. 

RiEnd terminates the active rendering context, including performing any cleanup operations that need to be done. After RiEnd is called, there is no active rendering context until another RiBegin or RiContext is called. All other RenderMan Interface procedures must be called within an active context (the only exceptions are RiErrorHandler, RiOption, and RiContext).


RtContextHandle RiGetContext ( void )
RiContext ( RtContextHandle handle )

RiGetContext returns a handle for the current active rendering context. If there is no active rendering context, RI_NULL will be returned. RiContext sets the current active rendering context to be the one pointed to by handle. Any previously active context is not destroyed. There is no RIB equivalent for these routines. Additionally, other language bindings may have no need for these routines, or may provide an obvious mechanism in the language for this facility (such as class instances and methods in C++).
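
The following fragment is a minimal sketch of how these routines might be used in the C binding to interleave work on two contexts. It assumes a RIB-generating implementation in which RiBegin takes the name of the file to write; the file names are illustrative only.

     #include <ri.h>

     int main(void)
     {
         RtContextHandle left, right;

         RiBegin("left.rib");          /* first context becomes active */
         left = RiGetContext();        /* remember it */

         RiBegin("right.rib");         /* second context is now active */
         right = RiGetContext();

         RiContext(left);              /* switch back to the first context */
         RiFrameBegin(1);
         /* ... options, camera, and world block for the left output ... */
         RiFrameEnd();
         RiEnd();                      /* terminates the active (left) context */

         RiContext(right);
         RiFrameBegin(1);
         /* ... right output ... */
         RiFrameEnd();
         RiEnd();
         return 0;
     }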


RiFrameBegin( RtInt frame )
RiFrameEnd( void )

The bracketed set of commands RiFrameBegin-RiFrameEnd mark the beginning and end of a single frame of an animated sequence. frame is the number of this frame. The values of all of the rendering options are saved when RiFrameBegin is called, and these values are restored when RiFrameEnd is called.

All lights and retained objects defined inside the RiFrameBegin-RiFrameEnd frame block are removed and their storage reclaimed when RiFrameEnd is called (thus invalidating their handles).

All of the information that changes from frame to frame should be inside a frame block. In this way, all of the information that is necessary to produce a single frame of an animated sequence may be extracted from a command stream by retaining only those commands within the appropriate frame block and any commands outside all of the frame blocks. This command need not be used if the application is producing a single image.

RIB BINDING

     FrameBegin int
     FrameEnd -

EXAMPLE

     RiFrameBegin(14);

SEE ALSO

     RiWorldBegin

RiWorldBegin()
RiWorldEnd()  

When RiWorldBegin is invoked, all rendering options are frozen and cannot be changed until the picture is finished. The world-to-camera transformation is set to the current transformation and the current transformation is reinitialized to the identity. Inside an RiWorldBegin-RiWorldEnd block, the current transformation is interpreted to be the object-to-world transformation. After an RiWorldBegin, the interface can accept geometric primitives that define the scene. (The only other mode in which geometric primitives may be defined is inside a RiObjectBegin-RiObjectEnd block.) Some rendering programs may immediately begin rendering geometric primitives as they are defined, whereas other rendering programs may wait until the entire scene has been defined.

RiWorldEnd does not normally return until the rendering program has completed drawing the image. If the image is to be saved in a file, this is done automatically by RiWorldEnd.

All lights and retained objects defined inside the RiWorldBegin-RiWorldEnd world block are removed and their storage reclaimed when RiWorldEnd is called (thus invalidating their handles).

RIB BINDING

     WorldBegin -
     WorldEnd -

EXAMPLE
  
     RiWorldEnd();

SEE ALSO

     RiFrameBegin

The following is an example of the use of these procedures, showing how an application constructing an animation might be structured. In the example, an object is defined once and instanced in subsequent frames at different positions.

RtObjectHandle BigUglyObject;

RiBegin(RI_NULL);
    BigUglyObject = RiObjectBegin();
        ...
    RiObjectEnd();

    /* Display commands */
    RiDisplayChannel(...);
    RiDisplay(...);
    RiFormat(...);
    RiFrameAspectRatio(1.0);
    RiScreenWindow(...);

    RiFrameBegin(0);
        /* Camera commands */
        RiProjection(RI_PERSPECTIVE, ...);
        RiRotate(...);
        RiWorldBegin();
            ...
            RiColor(...);
            RiTranslate(...);
            RiObjectInstance(BigUglyObject);
            ...
        RiWorldEnd();
    RiFrameEnd();

    RiFrameBegin(1);
        /* Camera commands */
        RiProjection(RI_PERSPECTIVE, ...);
        RiRotate(...);
        RiWorldBegin();
            ...
            RiColor(...);
            RiTranslate(...);
            RiObjectInstance(BigUglyObject);
            ...
        RiWorldEnd();
    RiFrameEnd();
    ...
RiEnd();

The following begin-end pairs also place the interface into special modes.

RiSolidBegin()
RiSolidEnd()

RiMotionBegin()
RiMotionEnd()

RiObjectBegin()
RiObjectEnd()

The properties of these modes are described in the appropriate sections (see the sections on Solids and Spatial Set Operations; Motion; and Retained Geometry).

Two other begin-end pairs:

RiAttributeBegin()
RiAttributeEnd()

RiTransformBegin()
RiTransformEnd() 

save and restore the attributes in the graphics state, and save and restore the current transformation, respectively.

All begin-end pairs (except RiTransformBegin-RiTransformEnd and RiMotionBegin-RiMotionEnd) implicitly save and restore attributes. Begin-End blocks of the various types may be nested to any depth, subject to their individual restrictions, but it is never legal for the blocks to overlap.
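
As an illustration of the difference noted above, the fragment below is a sketch (not part of the specification; it is assumed to be issued inside a world block) showing that RiTransformBegin-RiTransformEnd restores only the transformation, while RiAttributeBegin-RiAttributeEnd restores the full attribute state:

     #include <ri.h>

     static void NestedBlocks(void)
     {
         RtColor red = { 1.0, 0.0, 0.0 };

         RiAttributeBegin();
             RiColor(red);
             RiTransformBegin();
                 RiTranslate(1.0, 0.0, 0.0);
                 RiSphere(0.5, -0.5, 0.5, 360.0, RI_NULL);   /* red, translated */
             RiTransformEnd();               /* translation undone; color kept */
             RiSphere(0.5, -0.5, 0.5, 360.0, RI_NULL);       /* red, at the origin */
         RiAttributeEnd();                   /* color and transformation restored */
     }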

4.1 Options

The graphics state has various options that must be set before rendering a frame. The complete set of options includes: a description of the camera, which controls all aspects of the imaging process (including the camera position and the type of projection); a description of the display, which controls the output of pixels (including the types of images desired, how they are quantized and which device they are displayed on); as well as renderer run-time controls (such as the hidden surface algorithm to use).

4.1.1 Camera

The graphics state contains a set of parameters that define the properties of the camera. The complete set of camera options is described in Table 4.1, Camera Options.

The viewing transformation specifies the coordinate transformations involved with imaging the scene onto an image plane and sampling that image at integer locations to form a raster of pixel values. A few of these procedures set display parameters such as resolution and pixel aspect ratio. If the rendering program is designed to output to a particular display device, these parameters are initialized in advance. Explicitly setting these parameters makes the specification of an image more device dependent and should only be done if necessary. The defaults given in the Camera Options table characterize a hypothetical framebuffer and are the defaults for picture files.

Table 4.1 Camera Options

Camera Option           Type             Default               Description
Horizontal Resolution   integer          640*                  The horizontal resolution in the output image.
Vertical Resolution     integer          480*                  The vertical resolution in the output image.
Pixel Aspect Ratio      float            1.0*                  The ratio of the width to the height of a single pixel.
Crop Window             4 floats         (0,1,0,1)             The region of the raster that is actually rendered.
Frame Aspect Ratio      float            4/3*                  The aspect ratio of the desired image.
Screen Window           4 floats         (-4/3,4/3,-1,1)*      The screen coordinates (coordinates after the projection) of the area to be rendered.
Camera Projection       token            "orthographic"        The camera to screen projection.
World to Camera         transform        identity              The world to camera transformation.
Clipping Planes         2 floats         (epsilon, infinity)   The positions of the near and far clipping planes.
Other Clipping Planes   list of planes   -                     Additional planes that clip geometry from the scene.
f-Stop                  float            infinity              Parameters controlling depth of field.
Focal Length            float            -
Focal Distance          float            -
Shutter Open            float            0                     The times when the shutter opens and closes.
Shutter Close           float            0

* Interrelated defaults

The camera model supports near and far clipping planes that are perpendicular to the viewing direction, as well as any number of arbitrary user-specified clipping planes. Depth of field is specified by setting an f-stop, focal length, and focal distance just as in a real camera. Objects located at the focal distance will be sharp and in focus while other objects will be out of focus. The shutter is specified by giving opening and closing times. Moving objects will blur while the camera shutter is open.

The imaging transformation proceeds in several stages. Geometric primitives are specified in the object coordinate system. This canonical coordinate system is the one in which the object is most naturally described. The object coordinates are converted to the world coordinate system by a sequence of modeling transformations. The world coordinate system is converted to the camera coordinate system by the camera transformation. Once in camera coordinates, points are projected onto the image plane or screen coordinate system by the projection and its following screen transformation. Points on the screen are finally mapped to a device dependent, integer coordinate system in which the image is sampled. This is referred to as the raster coordinate system and this transformation is referred to as the raster transformation. These various coordinate systems are summarized in Table 4.2 Point Coordinate Systems.

Table 4.2 Point Coordinate Systems

Coordinate System       Description
"object"
The coordinate system in which the current geometric primitive is defined. The modeling transformation converts from object coordinates to world coordinates.
"world"
The standard reference coordinate system. The camera transformation converts from world coordinates to camera coordinates.
"camera"
A coordinate system with the vantage point at the origin and the direction of view along the positive z-axis. The projection and screen transformation convert from camera coordinates to screen coordinates.
"screen"
The 2-D normalized coordinate system corresponding to the image plane. The raster transformation converts to raster coordinates.
"raster"
The raster or pixel coordinate system. An area of 1 in this coordinate system corresponds to the area of a single pixel. This coordinate system is either inherited from the display or set by selecting the resolution of the image desired.
"NDC"
Normalized device coordinates — like "raster" space, but normalized so that x and y both run from 0 to 1 across the whole (un-cropped) image, with (0,0) being at the upper left of the image, and (1,1) being at the lower right (regardless of the actual aspect ratio).

These various coordinate systems are established by camera and transformation commands. The order in which camera parameters are set is the opposite of the order in which the imaging process was described above. When RiBegin is executed it establishes a complete set of defaults. If the rendering program is designed to produce pictures for a particular piece of hardware, display parameters associated with that piece of hardware are used. If the rendering program is designed to produce picture files, the parameters are set to generate a video-size image. If these are not sufficient, the resolution and pixel aspect ratio can be set to generate a picture for any display device. RiBegin also establishes default screen and camera coordinate systems. The default projection is orthographic and the screen coordinates assigned to the display are roughly between ±1.0. The initial camera coordinate system is mapped onto the display such that the +x axis points right, the +y axis points up, and the +z axis points inward, perpendicular to the display surface. Note that this is a left-handed coordinate system.

Before any transformation commands are made, the current transformation matrix contains the identity matrix as the screen transformation. Usually the first transformation command is an RiProjection, which appends the projection matrix onto the screen transformation, saves it, and reinitializes the current transformation matrix as the identity camera transformation. This marks the current coordinate system as the camera coordinate system. After the camera coordinate system is established, future transformations move the world coordinate system relative to the camera coordinate system. When an RiWorldBegin is executed, the current transformation matrix is saved as the camera transformation, and thus the world coordinate system is established. Subsequent transformations inside of an RiWorldBegin-RiWorldEnd establish different object coordinate systems.


Figure 4.1 Camera-to-Raster Projection Geometry


The following example shows how to position a camera:

RiBegin(RI_NULL);
    RiFormat(xres, yres, 1.0);              /* Raster coordinate system */
    RiFrameAspectRatio(4.0/3.0);            /* Screen coordinate system */
    RiFrameBegin(0);
        RiProjection("perspective", ...);   /* Camera coordinate system */
        RiRotate(...);
        RiWorldBegin();                     /* World coordinate system */
            ...
            RiTransform(...);               /* Object coordinate system */
        RiWorldEnd();
    RiFrameEnd();
RiEnd();

The various camera procedures are described below, with some of the concepts illustrated in Figure 4.1, Camera-to-Raster Projection Geometry.


RiFormat( RtInt xresolution, RtInt yresolution, RtFloat pixelaspectratio ) 

Set the horizontal (xresolution) and vertical (yresolution) resolution (in pixels) of the image to be rendered. The upper left hand corner of the image has coordinates (0,0) and the lower right hand corner of the image has coordinates (xresolution, yresolution). If the resolution is greater than the maximum resolution of the device, the desired image is clipped to the device boundaries (rather than being shrunk to fit inside the device). This command also sets the pixel aspect ratio. The pixel aspect ratio is the ratio of the physical width to the height of a single pixel. The pixel aspect ratio should normally be set to 1 unless a picture is being computed specifically for a display device with non-square pixels.

Implicit in this command is the creation of a display viewport with a viewport aspect ratio of

     frameaspectratio = xresolution * pixelaspectratio / yresolution

The viewport aspect ratio is the ratio of the physical width to the height of the entire image.

An image of the desired aspect ratio can be specified in a device independent way using the procedure RiFrameAspectRatio described below. The RiFormat command should only be used when an image of a specified resolution is needed or an image file is being created.

If this command is not given, the resolution defaults to that of the display device being used (see the Displays section, p. 27). Also, if xresolution, yresolution or pixelaspectratio is specified as a nonpositive value, the resolution defaults to that of the display device for that particular parameter.

RIB BINDING

     Format xresolution yresolution pixelaspectratio

EXAMPLE

     Format 512 512 1

SEE ALSO

     RiDisplay, RiFrameAspectRatio
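
For concreteness, the relation above between resolution, pixel aspect ratio, and frame aspect ratio can be sketched in C as follows (the helper name is illustrative and not part of the Interface):

     #include <ri.h>

     /* Frame aspect ratio implied by a Format request, per the relation above. */
     static RtFloat ImpliedFrameAspect(RtInt xres, RtInt yres, RtFloat pixelaspect)
     {
         return (RtFloat)xres * pixelaspect / (RtFloat)yres;
     }

     /* e.g. ImpliedFrameAspect(640, 480, 1.0) yields 4/3, matching the
        defaults in Table 4.1. */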

RiFrameAspectRatio( RtFloat frameaspectratio )

frameaspectratio is the ratio of the width to the height of the desired image. The picture produced is adjusted in size so that it fits into the display area specified with RiDisplay or RiFormat with the specified frame aspect ratio and is such that the upper left corner is aligned with the upper left corner of the display.

If this procedure is not called, the frame aspect ratio defaults to that determined from the resolution and pixel aspect ratio.

RIB BINDING

     FrameAspectRatio frameaspectratio

EXAMPLE

     RiFrameAspectRatio (4.0/3.0);

SEE ALSO

     RiDisplay, RiFormat

RiScreenWindow( RtFloat left, RtFloat right, RtFloat bottom, RtFloat top ) 

This procedure defines a rectangle in the image plane that gets mapped to the raster coordinate system and that corresponds to the display area selected. The rectangle specified is in the screen coordinate system. The values left, right, bottom, and top are mapped to the respective edges of the display.

The default values for the screen window coordinates are

     (-frameaspectratio, frameaspectratio, -1, 1)

if frameaspectratio is greater than or equal to one, or

     (-1, 1, -1/frameaspectratio, 1/frameaspectratio)

if frameaspectratio is less than or equal to one. For perspective projections, this default gives a centered image with the smaller of the horizontal and vertical fields of view equal to the field of view specified with RiProjection. Note that if the camera transformation preserves relative x and y distances, and if the ratio

     (right - left) / (top - bottom)

is not the same as the frame aspect ratio of the display area, the displayed image will be distorted.

RIB BINDING

     ScreenWindow left right bottom top
     ScreenWindow [left right bottom top]

EXAMPLE

     ScreenWindow -1 1 -1 1

SEE ALSO

     RiCropWindow, RiFormat, RiFrameAspectRatio, RiProjection
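
The default screen window described above can be computed as in the following sketch (the helper name is illustrative and not part of the Interface):

     #include <ri.h>

     /* Default screen window for a given frame aspect ratio, per the text above. */
     static void DefaultScreenWindow(RtFloat aspect,
                                     RtFloat *left, RtFloat *right,
                                     RtFloat *bottom, RtFloat *top)
     {
         if (aspect >= 1.0) {
             *left = -aspect;  *right = aspect;  *bottom = -1.0;          *top = 1.0;
         } else {
             *left = -1.0;     *right = 1.0;     *bottom = -1.0/aspect;   *top = 1.0/aspect;
         }
     }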

RiCropWindow( RtFloat xmin, RtFloat xmax, RtFloat ymin, RtFloat ymax )  

Render only a sub-rectangle of the image. This command does not affect the mapping from screen to raster coordinates. This command is used to facilitate debugging regions of an image, and to help in generating panels of a larger image. These values are specified as fractions of the raster window defined by RiFormat and RiFrameAspectRatio, and therefore lie between 0 and 1. By default the entire raster window is rendered. The integer image locations corresponding to these limits are given by

     rxmin = clamp( ceil( xresolution*xmin ),     0, xresolution-1 );
     rxmax = clamp( ceil( xresolution*xmax - 1 ), 0, xresolution-1 );
     rymin = clamp( ceil( yresolution*ymin ),     0, yresolution-1 );
     rymax = clamp( ceil( yresolution*ymax - 1 ), 0, yresolution-1 );

These regions are defined so that if a large image is generated with tiles of abutting but non-overlapping crop windows, the subimages produced will tile the display with abutting and non-overlapping regions.

RIB BINDING

     CropWindow xmin xmax ymin ymax
     CropWindow [xmin xmax ymin ymax]

EXAMPLE

     RiCropWindow (0.0, 0.3, 0.0, 0.5);

SEE ALSO

     RiFrameAspectRatio, RiFormat 
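
The crop-window formulas above can be sketched in C as follows; the helper names are illustrative and the fragment assumes only the standard math library:

     #include <math.h>

     static int Clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

     /* Raster limits of a crop window, following the formulas above. */
     static void CropToRaster(float xmin, float xmax, float ymin, float ymax,
                              int xres, int yres,
                              int *rxmin, int *rxmax, int *rymin, int *rymax)
     {
         *rxmin = Clamp((int)ceil(xres * xmin),       0, xres - 1);
         *rxmax = Clamp((int)ceil(xres * xmax - 1.0), 0, xres - 1);
         *rymin = Clamp((int)ceil(yres * ymin),       0, yres - 1);
         *rymax = Clamp((int)ceil(yres * ymax - 1.0), 0, yres - 1);
     }

     /* With xres = 640, crop windows of (0, 0.5) and (0.5, 1) in x cover
        columns 0..319 and 320..639: abutting and non-overlapping. */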

RiProjection( RtToken name, ...parameterlist... ) 

The projection determines how camera coordinates are converted to screen coordinates, using the type of projection and the near/far clipping planes to generate a projection matrix. It appends this projection matrix to the current transformation matrix and stores this as the screen transformation, then marks the current coordinate system as the camera coordinate system and reinitializes the current transformation matrix to the identity camera transformation. The required types of projection are "perspective," "orthographic," and RI_NULL.

"perspective" builds a projection matrix that does a perspective projection along the z-axis, using the RiClipping values, so that points on the near clipping plane project to z=0 and points on the far clipping plane project to z=1. "perspective" takes one optional parameter, "fov," a single RtFloat that indicates the full angle perspective field of view (in degrees) between screen space coordinates (-1,0) and (1,0) (equivalently between (0,-1) and (0,1)). The default is 90 degrees.

Note that there is a redundancy in the focal length implied by this procedure and the one set by RiDepthOfField. Given the definition of "fov" above (measured across the default screen window, which spans -1 to 1), the focal length implied by this command is

     focallength = 1 / tan( fov/2 )

in camera-space units.

"orthographic" builds a simple orthographic projection that scales z using the RiClipping values as above. "orthographic" takes no parameters.

RI_NULL uses an identity projection matrix, and simply marks camera space in situations where the user has generated a custom projection matrix using RiPerspective or RiTransform.

This command can also be used to select implementation-specific projections or special projections written in the Shading Language. If a particular implementation does not support the special projection specified, it is ignored and an orthographic projection is used. If RiProjection is not called, the screen transformation defaults to the identity matrix, so screen space and camera space are identical.

RIB BINDING

     Projection "perspective" ...parameterlist...
     Projection "orthographic"
     Projection name ...parameterlist...

EXAMPLE

     RiProjection (RI_PERSPECTIVE, "fov", &fov, RI_NULL);

SEE ALSO

     RiPerspective, RiClipping

RiClipping( RtFloat near, RtFloat far ) 

Sets the position of the near and far clipping planes along the direction of view. near and far must both be positive numbers. near must be greater than or equal to RI_EPSILON and less than far. far must be greater than near and may be equal to RI_INFINITY. These values are used by RiProjection to generate a screen projection such that depth values are scaled to equal zero at z=near and one at z=far. Notice that the rendering system will actually clip geometry which lies outside of z=(0,1) in the screen coordinate system, so non-identity screen transforms may affect which objects are actually clipped.

For reasons of efficiency, it is generally a good idea to bound the scene tightly with the near and far clipping planes.

RIB BINDING

     Clipping near far

EXAMPLE

     Clipping  .1 10000

SEE ALSO

     RiBound, RiProjection, RiClippingPlane

RiClippingPlane ( RtFloat nx, RtFloat ny, RtFloat nz, RtFloat x, RtFloat y, RtFloat z)

Adds a user-specified clipping plane. The plane is specified by giving the normal, (nx, ny, nz), and any point on its surface, (x, y, z). All geometry on the positive side of the plane (that is, in the direction that the normal points) will be clipped from the scene. The point and normal parameters are interpreted as being in the active local coordinate system at the time that the RiClippingPlane statement is issued.

Multiple calls to RiClippingPlane will establish multiple clipping planes.

RIB BINDING

     ClippingPlane nx ny nz x y z

EXAMPLE

     ClippingPlane 0 0 -1 3 0 0

SEE ALSO

     RiClipping

RiDepthOfField( RtFloat fstop, RtFloat focallength, RtFloat focaldistance )

focaldistance sets the distance along the direction of view at which objects will be in focus. focallength sets the focal length of the camera. These two parameters should have the units of distance along the view direction in camera coordinates. fstop, or aperture number, determines the lens diameter:

     lensdiameter = focallength / fstop

If fstop is RI_INFINITY, a pin-hole camera is used and depth of field is effectively turned off. If the Depth of Field capability is not supported by a particular implementation, a pin-hole camera model is always used.

If depth of field is turned on, points at a particular depth will not image to a single point on the view plane but rather a circle. This circle is called the circle of confusion. Under the standard thin-lens model, the diameter of this circle is equal to

     lensdiameter * focallength * |focaldistance - depth| / ( depth * (focaldistance - focallength) )

where depth is the distance of the point from the camera along the direction of view.
Note that there is a redundancy in the focal length as specified in this procedure and the one implied by RiProjection.

RIB BINDING

     DepthOfField fstop focallength focaldistance
     DepthOfField -

The second form specifies a pin-hole camera with infinite fstop, for which the focallength and focaldistance parameters are meaningless.

EXAMPLE

     DepthOfField 22 45 1200

SEE ALSO

     RiProjection
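
The lens diameter relation above can be sketched in C as follows (the helper name is illustrative; RI_INFINITY is the standard constant from ri.h):

     #include <ri.h>

     /* Lens diameter implied by the RiDepthOfField parameters, per the
        relation above; RI_INFINITY selects a pin-hole camera. */
     static RtFloat LensDiameter(RtFloat fstop, RtFloat focallength)
     {
         if (fstop >= RI_INFINITY)
             return 0.0;                 /* pin-hole: no depth of field */
         return focallength / fstop;
     }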

RiShutter( RtFloat min, RtFloat max )

This procedure sets the times at which the shutter opens and closes. min should be less than max. If min==max, no motion blur is done.

RIB BINDING

     Shutter min max

EXAMPLE

     RiShutter(0.1, 0.9);

SEE ALSO

     RiMotionBegin

4.1.2 Displays

The graphics state contains a set of parameters that control the properties of the display process. The complete set of display options is given in Table 4.3, Display Options.

Table 4.3 Display Options

Display Option       Type       Default            Description
Pixel Variance       float      -                  Estimated variance of the computed pixel value from the true pixel value.
Sampling Rates       2 floats   2, 2               Effective sampling rate in the horizontal and vertical directions.
Filter               function   RiGaussianFilter   Type of filtering and the width of the filter
Filter Widths        2 floats   2, 2               in the horizontal and vertical directions.
Exposure
  gain               float      1.0                Gain and gamma of the exposure process.
  gamma              float      1.0
Imager               shader     "null"             A procedure defining an image or pixel operator.
Color Quantizer                                    Color and opacity quantization parameters.
  one                int        255
  minimum            int        0
  maximum            int        255
  dither amplitude   float      0.5
Depth Quantizer                                    Depth quantization parameters.
  one                int        0
  minimum            int        -
  maximum            int        -
  dither amplitude   float      -
Display Type         token      *                  Whether the display is a frame-buffer or a file.
Display Name         string     *                  Name of the display device or file.
Display Mode         token      *                  Image output type.

* Implementation-specific

Rendering programs must be able to produce color, opacity (alpha), and depth images. Display parameters control how the values in these images are converted into a displayable form. Many times it is possible to use none of the procedures described in this section. If this is done, the rendering process and the images it produces are described in a completely device-independent way. If a rendering program is designed for a specific display, it has appropriate defaults for all display parameters. The defaults given in Table 4.3, Display Options characterize a file to be displayed on a hypothetical video framebuffer.

The output process is different for color, alpha, and depth information. (See Figure 4.2, Imaging Pipeline). The hidden-surface algorithm will produce a representation of the light incident on the image plane. This color image is either continuous or sampled at a rate that may be higher than the resolution of the final image. The minimum sampling rate can be controlled directly, or can be indicated by the estimated variance of the pixel values. These color values are filtered with a user-selectable filter and filterwidth, and sampled at the pixel centers. The resulting color values are then multiplied by the gain and passed through an inverse gamma function to simulate the exposure process. The resulting colors are then passed to a quantizer which scales the values and optionally dithers them before converting them to a fixed-point integer. It is also possible to interpose a programmable imager (written in the Shading Language) between the exposure process and quantizer. This imager can be used to perform special effects processing, to compensate for non-linearities in the display media, and to convert to device dependent color spaces (such as CMYK or pseudocolor).

Final output alpha is computed by multiplying the coverage of the pixel (i.e., the sub-pixel area actually covered by a geometric primitive) by the average of the color opacity components. If an alpha image is being output, the color values will be multiplied by this alpha before being passed to the quantizer. Color and alpha use the same quantizer.

Output depth values are the screen-space z values, which lie in the range 0 to 1. Generally, these correspond to camera-space values between the near and far clipping planes. Depth values bypass all the above steps except for the imager and quantization. The depth quantizer has an independent set of parameters from those of the color quantizer.


RiPixelVariance ( RtFloat variation )

The color of a pixel computed by the rendering program is an estimate of the true pixel value: the convolution of the continuous image with the filter specified by RiPixelFilter. This routine sets the upper bound on the acceptable estimated variance of the pixel values from the true pixel values.

RIB BINDING

     PixelVariance variation

EXAMPLE

     RiPixelVariance(.01);

 SEE ALSO

     RiPixelFilter, RiPixelSamples

Figure 4.2 Imaging Pipeline

RiPixelSamples( RtFloat xsamples, RtFloat ysamples ) 

Set the effective hider sampling rate in the horizontal and vertical directions. The effective number of samples per pixel is xsamples*ysamples. If an analytic hidden surface calculation is being done, the effective sampling rate is RI_INFINITY. Sampling rates less than 1 are clamped to 1.

RIB BINDING

     PixelSamples xsamples ysamples

EXAMPLE

     PixelSamples 2 2

SEE ALSO

     RiPixelFilter, RiPixelVariance

RiPixelFilter( RtFilterFunc filterfunc, RtFloat xwidth, RtFloat ywidth )

Anti-aliasing is performed by filtering the geometry (or super-sampling) and then sampling at pixel locations. The filterfunc controls the type of filter, while xwidth and ywidth specify the width of the filter in pixels. A value of 1 indicates that the support of the filter is one pixel. RenderMan supports nonrecursive, linear shift-invariant filters. The type of the filter is set by passing a reference to a function that returns a filter kernel value; i.e.,

     filterkernelvalue = (*filterfunc)( x, y, xwidth, ywidth );

(where (x,y) is the point at which the filter should be evaluated). The rendering program only requests values in the ranges -xwidth/2 to xwidth/2 and -ywidth/2 to ywidth/2. The values returned need not be normalized.

The following standard filter functions are available:

RtFloat RiBoxFilter (RtFloat, RtFloat, RtFloat, RtFloat);
RtFloat RiTriangleFilter (RtFloat, RtFloat, RtFloat, RtFloat);
RtFloat RiCatmullRomFilter (RtFloat, RtFloat, RtFloat, RtFloat);
RtFloat RiGaussianFilter (RtFloat, RtFloat, RtFloat, RtFloat);
RtFloat RiSincFilter (RtFloat, RtFloat, RtFloat, RtFloat);

A particular renderer implementation may also choose to provide additional built-in filters. The standard filters are described in Appendix E.

A high-resolution picture is often computed in sections or panels. Each panel is a subrectangle of the final image. It is important that separately computed panels join together without a visible discontinuity or seam. If the filter width is greater than 1 pixel, the rendering program must compute samples outside the visible window to properly filter before sampling.

RIB BINDING

     PixelFilter type xwidth ywidth

The type is one of: "box," "triangle," "catmull-rom" (cubic), "sinc" and "gaussian."

EXAMPLE

     RiPixelFilter(RiGaussianFilter,  2.0, 1.0);

     PixelFilter "gaussian" 2 1

SEE ALSO

     RiPixelSamples, RiPixelVariance
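
A user-supplied filter function follows the calling convention described above. The following sketch is illustrative only (a real application would normally use the built-in RiTriangleFilter) and shows the shape of such a function:

     #include <ri.h>
     #include <math.h>

     /* A user-defined filter kernel; it is only called for |x| <= xwidth/2
        and |y| <= ywidth/2, and the result need not be normalized. */
     RtFloat MyTriangleFilter(RtFloat x, RtFloat y, RtFloat xwidth, RtFloat ywidth)
     {
         return (1.0 - fabs(2.0 * x / xwidth)) * (1.0 - fabs(2.0 * y / ywidth));
     }

     /* Usage: RiPixelFilter(MyTriangleFilter, 2.0, 2.0); */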

RiExposure( RtFloat gain, RtFloat gamma )

This function controls the sensitivity and non-linearity of the exposure process. Each component of color is passed through the following function:

     color = ( color * gain ) ^ (1/gamma)

RIB BINDING

     Exposure gain gamma

EXAMPLE

     Exposure 1.5 2.3

SEE ALSO

     RiImager
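
The exposure mapping above can be sketched as:

     #include <math.h>

     /* Exposure mapping applied to each color component, per the formula above. */
     static float Expose(float c, float gain, float gamma)
     {
         return (float)pow(c * gain, 1.0 / gamma);
     }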

RiImager( RtToken name, parameterlist )

Select an imager function programmed in the Shading Language. name is the name of an imager shader. If name is RI_NULL, no imager shader is used.

RIB BINDING

     Imager name ...parameterlist...

EXAMPLE

     RiImager("cmyk," RI_NULL);

SEE ALSO

     RiExposure
 

RiQuantize( RtToken type, RtInt one, RtInt min, RtInt max, RtFloat ditheramplitude )

Set the quantization parameters for colors or depth. If type is "rgba," then color and opacity quantization are set. If type is "z," then depth quantization is set. The value one defines the mapping from floating-point values to fixed point values. If one is 0, then quantization is not done and values are output as floating point numbers.

Dithering is performed by adding a random number to the floating-point values before they are rounded to the nearest integer. The added value is scaled to lie between plus and minus the dither amplitude. If ditheramplitude is 0, dithering is turned off.

Quantized values are computed using the following formula:

     value = round( one * value + ditheramplitude * random() );
     value = clamp( value, min, max ); 

where random returns a random number between ± 1.0, and clamp clips its first argument so that it lies between min and max.

By default color pixel values are dithered with an amplitude of 0.5 and quantization is performed for an 8-bit display with a one of 255. Quantization and dithering are not performed for depth values (by default).

RIB BINDING

     Quantize type one min max ditheramplitude

EXAMPLE

     RiQuantize(RI_RGBA,  2048, -1024, 3071, 1.0);

SEE ALSO

     RiDisplay, RiImager  
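
The quantization formula above can be sketched in C as follows; RandomSigned is an illustrative stand-in for random(), uniform in the range -1 to 1:

     #include <math.h>
     #include <stdlib.h>

     /* Illustrative stand-in for random(): uniform in [-1, 1]. */
     static float RandomSigned(void)
     {
         return 2.0f * (float)rand() / (float)RAND_MAX - 1.0f;
     }

     /* The quantization formula above; round is implemented as floor(x + 0.5).
        Defaults for color: one = 255, min = 0, max = 255, ditheramplitude = 0.5. */
     static int Quantize(float value, int one, int min, int max, float dither)
     {
         float q = (float)floor(one * value + dither * RandomSigned() + 0.5);
         if (q < min) q = (float)min;
         if (q > max) q = (float)max;
         return (int)q;
     }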

RiDisplayChannel( RtToken channel, ...parameterlist... )

Defines a new display channel for the purposes of output by a single display stream. Channels defined by this call can then be subsequently passed as part of the mode parameter to RiDisplay.

Channels are uniquely specified for each frame using the channel parameter; its value should be the name of a known geometric quantity or the name of a shader output variable, along with an inline declaration of its type; for example, varying color arbcolor. Future references to the channel (i.e. in RiDisplay) require only the name and not the type (arbcolor). Channels may be further qualified by renderer specific options which may control how the data is to be filtered, quantized, or filled by the display or renderer; see RiDisplay for information on these options. Any such per-channel options should appear in the parameter list. If they are not present, then the equivalent option specified in RiDisplay will be applied.

    DisplayChannel "varying point P" "string filter" "box" "float[2] filterwidth" [1 1] "point fill" [1 0 0] 
    DisplayChannel "varying normal N" 
    DisplayChannel "varying float s" "string filter" "gaussian" "float[2] filterwidth" [5 5] "float fill" [1]
    DisplayChannel "varying color arbcolor"
        
    Display "+output.tif" "tiff" "P,N,s,arbcolor" "string filter" "catmull-rom" "float[2] filterwidth" [2 2]

In this example, four channels P, N, s, and arbcolor are defined. P and s have channel options which control the pixel filter and default fill value. These four channels are then passed to RiDisplay via the mode parameter as a comma separated list. Because the DisplayChannel lines for N and arbcolor did not specify pixel filters, the filter specified on the Display line ("catmull-rom") will be applied to those two channels.

RIB BINDING

     DisplayChannel ...parameterlist...

SEE ALSO

     RiDisplay

RiDisplay( RtToken name, RtToken type, RtToken mode, ...parameterlist... )

Choose a display by name and set the type of output being generated. name is either the name of a picture file or the name of the framebuffer, depending on type. The type of display is the display format, output device, or output driver. All implementations must support the type names "framebuffer" and "file", which indicate that the renderer should select the default framebuffer or default file format, respectively. Implementations may support any number of particular formats or devices (for example, "tiff" might indicate that a TIFF file should be written), and may allow the supported formats to be user-extensible in an implementation-specific manner.

The mode indicates what data are to be output in this display stream. All renderers must support any combination (string concatenation) of "rgb" for color (usually red, green and blue intensities unless there are more or less than 3 color samples; see the next section, Additional options), "a" for alpha, and "z" for depth values, in that order. Renderers may additionally produce "images" consisting of arbitrary data, by using a mode that is the name of a known geometric quantity, the name of a shader output variable, or a comma separated list of display channels (all of which must be previously defined with RiDisplayChannel).

Shader output variables may optionally be prefaced with the shader type ("volume", "atmosphere", "displacement", "surface", or "light") followed by a colon; if prefaced with "light", the prefix may also include a light handle name. These prefixes serve to disambiguate the source of the variable data. For example, "surface:foo", "light:bar", or "light(myhandle):Cl" will cause the variables to be searched for in the surface shader, the first light shader to match, or the light with handle "myhandle", respectively.

Note also that multiple displays can be specified, by prepending the + character to the name. For example,

RiDisplay ("out.tif," "file," "rgba", RI NULL);
RiDisplay ("+normal.tif," "file," "N", RI NULL);

will produce a four-channel image consisting of the filtered color and alpha in out.tif, and also a second three-channel image file normal.tif consisting of the surface normal of the nearest surface behind each pixel. (This would, of course, only be useful if RiQuantize were instructed to output floating point data or otherwise scale the data.) Renderers which support RiDisplayChannel should expect displays of the form:

RiDisplay ("+bake.tif," "file," "_occlusion,_irradiance", RI NULL);

Assuming _occlusion and _irradiance were both previously declared as floats using RiDisplayChannel, this RiDisplay line will produce a two-channel image.

Display options or device-dependent display modes or functions may be set using the parameterlist. One such option is required: "origin", which takes an array of two RtInts and sets the x and y position of the upper left hand corner of the image in the display's coordinate system; by default the origin is set to (0,0). The default display device is renderer implementation-specific.

RIB BINDING

     Display name type mode ...parameterlist...

EXAMPLE

     RtInt origin[2] = { 10, 10 };
     RiDisplay("pixar0," "framebuffer," "rgba," "origin," (RtPointer)origin, RI_NULL);

SEE ALSO

     RiDisplayChannel, RiFormat, RiQuantize

4.1.3 Additional options

Table 4.4 Additional RenderMan Interface Options

Option            Type    Default    Description
Hider             token   "hidden"   The type of hidden surface algorithm that is performed.
Color Samples     int     3          Number of color components in colors. The default is 3 for RGB.
Relative Detail   float   1.0        A multiplicative factor that can be used to increase or decrease the effective level of detail used to render an object.

The hider type and parameters control the hidden-surface algorithm.


RiHider( RtToken type, ...parameterlist... )

The standard types are "hidden," "paint," and "null." "hidden" performs standard hidden-surface computations. "paint" draws the objects in the order in which they are defined. The hider "null" performs no pixel computation and hence produces no output. Other implementation-specific hidden-surface algorithms can also be selected using this routine.

RIB BINDING

     Hider type parameterlist

EXAMPLE

     RiHider "paint"
 

Rendering programs compute color values in some spectral color space. This implies that multiplying two colors corresponds to interpreting one of the colors as a light and the other as a filter and passing light through the filter. Adding two colors corresponds to adding two lights. The default color space is NTSC-standard RGB; this color space has three samples. Color values of 0 are interpreted as black (or transparent) and values of 1 are interpreted as white (or opaque), although values outside this range are allowed.


RiColorSamples( RtInt n, RtFloat nRGB[], RtFloat RGBn[] )

This function controls the number of color components or samples to be used in specifying colors. By default, n is 3, which is appropriate for RGB color values. Setting n to 1 forces the rendering program to use only a single color component. The array nRGB is an n by 3 transformation matrix that is used to convert n component colors to 3 component NTSC-standard RGB colors. This is needed if the rendering program cannot handle multiple components. The array RGBn is a 3 by n transformation matrix that is used to convert 3 component NTSC-standard RGB colors to n component colors. This is mainly used for transforming constant colors specified as color triples in the Shading Language to the representation being used by the RenderMan Interface.

Calling this procedure effectively redefines the type RtColor to be

     typedef RtFloat	RtColor[n];

After a call to RiColorSamples, all subsequent color arguments are assumed to be this size.

If the Spectral Color capability is not supported by a particular implementation, that implementation will still accept multiple component colors, but will immediately convert them to RGB color space and do all internal calculations with 3 component colors.

RIB BINDING

     ColorSamples nRGB RGBn

The number of color components, n, is derived from the lengths of the nRGB and RGBn arrays, as described above.

EXAMPLE

     ColorSamples [.3 .3 .4] [1 1 1]

     RtFloat frommonochr[] = {.3, .3, .4};
     RtFloat tomonochr[] = {1., 1., 1.};
     RiColorSamples(1, frommonochr, tomonochr);

SEE ALSO

     RiColor, RiOpacity  
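
The role of the nRGB matrix described above can be sketched as follows (an illustrative helper, not part of the Interface), converting an n-component color to NTSC-standard RGB:

     #include <ri.h>

     /* Convert an n-component color to NTSC-standard RGB using the nRGB
        matrix (n rows by 3 columns), as described above. */
     static void NToRGB(RtInt n, const RtFloat nRGB[], const RtFloat c[], RtFloat rgb[3])
     {
         RtInt i, j;
         for (j = 0; j < 3; j++) {
             rgb[j] = 0.0;
             for (i = 0; i < n; i++)
                 rgb[j] += c[i] * nRGB[i*3 + j];
         }
     }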

The method of specifying and using level of detail is discussed in the section on Detail.


RiRelativeDetail( RtFloat relativedetail )

The relative level of detail scales the results of all level of detail calculations. The level of detail is used to select between different representations of an object. If relativedetail is greater than 1, the effective level of detail is increased, and a more detailed representation of all objects will be drawn. If relativedetail is less than 1, the effective level of detail is decreased, and a less detailed representation of all objects will be drawn.

RIB BINDING

     RelativeDetail relativedetail

EXAMPLE

     RelativeDetail 0.6

SEE ALSO

     RiDetail, RiDetailRange

4.1.4 Implementation-specific options

Rendering programs may have additional implementation-specific options that control parameters affecting either their performance or operation. These are all set by the following procedure. In addition, a user can specify rendering options by prepending the string "user:" onto the option name. While these options are not expected to have any meaning to a renderer, user options should not be ignored. Rather, they must be tracked according to standard option scoping rules and made available to shaders via the option function.


RiOption( RtToken name, parameterlist )

Sets the named implementation-specific option. A rendering system may have certain options that must be set before the renderer is initialized. In this case, RiOption may be called before RiBegin to set those options only.

Although RiOption is intended to allow implementation-specific options, there are a number of options that we expect nearly all implementations will need to support. It is intended that, when identical functionality is required, all implementations use the option names listed in Table 4.5.

RIB BINDING

     Option name ...parameterlist...

EXAMPLE

     Option "limits" "gridsize" [32] "bucketsize" [12 12]

SEE ALSO

     RiAttribute

 Table 4.5: Typical implementation-specific options

Option name/param               Type     Default   Description
"searchpath" "archive" [s]      string   ""        List of directories to search for RIB archives.
"searchpath" "texture" [s]      string   ""        List of directories to search for texture files.
"searchpath" "shader" [s]       string   ""        List of directories to search for shaders.
"searchpath" "procedural" [s]   string   ""        List of directories to search for dynamically-loaded RiProcedural primitives.
"statistics" "endofframe" [i]   string   ""        If nonzero, print runtime statistics when the frame is finished rendering.

4.2 Attributes

Attributes are parameters in the graphics state that may change while geometric primitives are being defined. The complete set of standard attributes is described in two tables: Table 4.6, Shading Attributes, and the Geometry Attributes table given later in this chapter.

Attributes can be explicitly saved and restored with the following commands. All begin-end blocks implicitly do a save and restore.


RiAttributeBegin()
RiAttributeEnd()

Push and pop the current set of attributes. Pushing attributes also pushes the current transformation. Pushing and popping of attributes must be properly nested with respect to various begin-end constructs.

RIB BINDING

     AttributeBegin -
     AttributeEnd -

EXAMPLE

     RiAttributeBegin();

SEE ALSO

     RiFrameBegin, RiTransformBegin, RiWorldBegin

The process of shading is described in detail in Part II: The RenderMan Shading Language. The complete list of attributes related to shading is given in Table 4.6, Shading Attributes.

The graphics state maintains a list of attributes related to shading. Associated with the shading state are a current color and a current opacity. The graphics state also contains a current surface shader, a current atmosphere shader, a current interior volume shader, and a current exterior volume shader.

All geometric primitives use the current surface shader for computing the color (shading) of their surfaces and the current atmosphere shader for computing the attenuation of light towards the viewer. Solid primitives attach the current interior and exterior volume shaders to their interior and exterior. The graphics state also contains a current list of light sources that are used to illuminate the geometric primitive. Finally, there is a current area light source. Geometric primitives can be added to a list of primitives defining this light source.

Table 4.6 Shading Attributes

Shading Attribute        Type          Default                   Description
Color                    color         color "rgb" (1,1,1)       The reflective color of the object.
Opacity                  color         color "rgb" (1,1,1)       The opacity of the object.
Texture Coordinates      8 floats      (0,0),(1,0),(0,1),(1,1)   The texture coordinates (s,t) at the 4 corners of a parametric primitive.
Light Sources            shader list   -                         A list of light source shaders that illuminate subsequent primitives.
Area Light Source        shader        -                         An area light source which is being defined.
Surface                  shader        default surface           A shader controlling the surface shading model.
Atmosphere               shader        -                         A volume shader that specifies how the color of light is changed as it travels from a visible surface to the eye.
Interior Volume          shader        -                         A volume shader that specifies how the color of light is changed as it traverses a volume in space.
Exterior Volume          shader        -
Effective Shading Rate   float         .25                       Minimum rate of surface shading.
Shading Interpolation    token         "constant"                How the results of shading are interpolated across a polygon.
Matte Surface Flag       boolean       false                     A flag indicating the surfaces of the subsequent primitives are opaque to the rendering program, but transparent on output.

4.2.1 Color and opacity

All geometric primitives inherit the current color and opacity from the graphics state, unless color or opacity are defined as part of the primitive. Colors are passed in arrays that are assumed to contain the number of color samples being used (see the section on Additional options).


RiColor( RtColor color )

Set the current color to color. Normally there are three components in the color (red, green, and blue), but this may be changed with RiColorSamples.

RIB BINDING

     Color c0 c1... cn 
     Color [c0 c1... cn]

EXAMPLE

     RtColor blue = { .2, .3, .9};
     RiColor(blue);

     Color [.2 .3 .9]

SEE ALSO

     RiOpacity, RiColorSamples 

RiOpacity( RtColor color )

Set the current opacity to color. The color component values must be in the range [0,1]. Normally there are three components in the color (red, green, and blue), but this may be changed with RiColorSamples. If the opacity is 1, the object is completely opaque; if the opacity is 0, the object is completely transparent.

RIB BINDING

     Opacity c0 c1... cn
     Opacity [c0 c1... cn]

EXAMPLE

     Opacity .5 1 1

SEE ALSO

     RiColorSamples, RiColor

4.2.2 Texture coordinates

The Shading Language allows precalculated images to be accessed by a set of two-dimensional texture coordinates. This general process is referred to as texture mapping. Texture access in the Shading Language is very general since the coordinates are allowed to be any legal expression. However, the texture and bump access functions (in Part II, see the sections on Basic texture maps and Bump maps) often use default texture coordinates related to the surface parameters.

All the parametric geometric primitives have surface parameters (u,v) that can be used as their texture coordinates (s,t). Surface parameters for different primitives are normally defined to lie in the range 0 to 1. This defines a unit square in parameter space. Section 5, Geometric Primitives, defines where on each surface primitive the corners of this unit square lie. The texture coordinates at each corner of this unit square are given by providing a corresponding set of (s,t) values. This correspondence uniquely defines a 3x3 homogeneous two-dimensional mapping from parameter space to texture space. Special cases of this mapping occur when the transformation reduces to a scale and an offset, which is often used to piece patches together, or to an affine transformation, which is used to map a collection of triangles onto a common planar texture.

The graphics state maintains a current set of texture coordinates. The correspondence between these texture coordinates and the corners of the unit square is given by the following table.

Surface Parameters (u,v)    Texture Coordinates (s,t)
(0,0)                       (s1,t1)
(1,0)                       (s2,t2)
(0,1)                       (s3,t3)
(1,1)                       (s4,t4)

By default, the texture coordinates at each corner are the same as the surface parameters (s=u, t=v). Note that texture coordinates can also be explicitly attached to geometric primitives. Note also that polygonal primitives are not parametric, and the current set of texture coordinates do not apply to them.


RiTextureCoordinates( RtFloat s1, RtFloat t1, RtFloat s2, RtFloat t2,
                      RtFloat s3, RtFloat t3, RtFloat s4, RtFloat t4 )

Set the current set of texture coordinates to the values passed as arguments according to the above table.

RIB BINDING

     TextureCoordinates s1 t1 s2 t2 s3 t3 s4 t4
     TextureCoordinates [s1 t1 s2 t2 s3 t3 s4 t4]

EXAMPLE

     RiTextureCoordinates(0.0,0.0,  2.0,-0.5, -0.5,1.75,  3.0,3.0);

SEE ALSO

     texture() and bump() in the Shading Language

4.2.3 Light sources

The graphics state maintains a current light source list. The lights in this list illuminate subsequent surfaces. By making this list an attribute, different light sources can be used to illuminate different surfaces. Light sources can be added to this list by turning them on and removed from this list by turning them off. Note that popping to a previous graphics state also has the effect of returning the current light list to its previous value. Initially the graphics state does not contain any lights.

An area light source is defined by a shader and a collection of geometric primitives. The association between the shader and the geometric primitives is done by having the graphics state maintain a single current area light source. Each time a primitive is defined it is added to the list of primitives that define the current area light source.  An area light source may be turned on and off just like other light sources.

The RenderMan Interface includes four standard types of light sources: "ambientlight," "pointlight," "distantlight," and "spotlight." The definitions of these light sources are given in Appendix A, Standard RenderMan Interface Shaders. The parameters controlling these light sources are given in Table 4.7, Standard Light Source Shader Parameters.

Table 4.7 Standard Light Source Shader Parameters

Light Source   Parameter          Type    Default                   Description
ambientlight   intensity          float   1.0                       Light intensity
               lightcolor         color   color "rgb" (1,1,1)       Light color
distantlight   intensity          float   1.0                       Light intensity
               lightcolor         color   color "rgb" (1,1,1)       Light color
               from               point   point "shader" (0,0,0)    Light position
               to                 point   point "shader" (0,0,1)    Light direction is from-to
pointlight     intensity          float   1.0                       Light intensity
               lightcolor         color   color "rgb" (1,1,1)       Light color
               from               point   point "shader" (0,0,0)    Light position
spotlight      intensity          float   1.0                       Light intensity
               lightcolor         color   color "rgb" (1,1,1)       Light color
               from               point   point "shader" (0,0,0)    Light position
               to                 point   point "shader" (0,0,1)    Light direction is from-to
               coneangle          float   radians(30)               Light cone angle
               conedeltaangle     float   radians(5)                Light soft edge angle
               beamdistribution   float   2.0                       Light beam distribution

RtLightHandle RiLightSource(RtToken shadername, ...parameterlist... )

shadername is the name of a light source shader. This procedure creates a non-area light, turns it on, and adds it to the current light source list. An RtLightHandle value is returned that can be used to turn the light off or on again.

RIB BINDING

     LightSource name handle ...parameterlist...

The handle is a unique light identification number or string which is provided by the RIB client to the RIB server. Both client and server maintain independent mappings between the handle and their corresponding RtLightHandles. When specified as a number it must be in the range 0 to 65535.

EXAMPLE

     LightSource "spotlight" 2 "coneangle" [5]
     LightSource "ambientlight" 3 "lightcolor" [.5 0 0] "intensity" [.6]
     LightSource "blacklight" "a-unique-string-handle" "lightcolor" [.5 0 0] "intensity" [.6]
     

SEE ALSO

     RiAreaLightSource, RiIlluminate, RiFrameEnd, RiWorldEnd

RtLightHandle RiAreaLightSource( RtToken shadername, ...parameterlist... )

shadername is the name of a light source shader. This procedure creates an area light and makes it the current area light source. Each subsequent geometric primitive is added to the list of surfaces that define the area light. RiAttributeEnd ends the assembly of the area light source.

The light is also turned on and added to the current light source list. An RtLightHandle value is returned which can be used to turn the light off or on again.

If the Area Light Source capability is not supported by a particular implementation, this subroutine is equivalent to RiLightSource.

RIB BINDING

     AreaLightSource name handle parameterlist

The handle is a unique light identification number or string which is provided by the RIB client to the RIB server. Both client and server maintain independent mappings between the handle and their corresponding RtLightHandles. When specified as a number it must be in the range 0 to 65535.

EXAMPLE

     RtFloat decay = .5, intensity = .6;
     RtColor color = {.5,0,0};

     RiAreaLightSource( "finite," "decayexponent," (RtPointer)&decay, RI_NULL);
     RiAreaLightSource "ambientlight," "lightcolor," (RtPointer)color, "intensity,"
     (RtPointer)&intensity, RI_NULL);

SEE ALSO

     RiFrameEnd, RiLightSource, RiIlluminate, RiWorldEnd

RiIlluminate( RtLightHandle light, RtBoolean onoff )

If onoff is RI_TRUE and the light source referred to by the RtLightHandle is not currently in the current light source list, add it to the list. If onoff is RI_FALSE and the light source referred to by the RtLightHandle is currently in the current light source list, remove it from the list. Note that popping the graphics state restores the onoff value of all lights to their previous values.

RIB BINDING

     Illuminate handle onoff

The handle is the integer or string light handle defined in a LightSource or AreaLightSource request.

EXAMPLE

     LightSource "main" 3
     Illuminate 3 0

SEE ALSO

     RiAttributeEnd, RiAreaLightSource, RiLightSource 
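
The following fragment is an illustrative sketch (parameter values and geometry are arbitrary) of using the returned light handle to light only some of the primitives in a world block:

     #include <ri.h>

     /* Light only the second sphere with "key"; to be issued inside a world block. */
     static void TwoSpheresOneLight(void)
     {
         RtFloat intensity = 0.6;
         RtLightHandle key = RiLightSource("pointlight",
                                           "intensity", (RtPointer)&intensity, RI_NULL);

         RiIlluminate(key, RI_FALSE);                /* not lit by key */
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);

         RiIlluminate(key, RI_TRUE);                 /* lit by key again */
         RiTranslate(2.5, 0.0, 0.0);
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
     }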

4.2.4 Surface shading

The graphics state maintains a current surface shader. The current surface shader is used to specify the surface properties of subsequent geometric primitives. Initially the current surface shader is set to an implementation-dependent default surface shader (but not "null").

The RenderMan Interface includes six standard types of surfaces: "constant," "matte," "metal," "shinymetal," "plastic," and "paintedplastic." The definitions of these surface shading procedures are given in Appendix A, Standard RenderMan Interface Shaders. The parameters controlling these surfaces are given in Table 4.8, Standard Surface Shader Parameters.


RiSurface( RtToken shadername, ...parameterlist... )

shadername is the name of a surface shader. This procedure sets the current surface shader to be shadername. If the surface shader shadername is not defined, some implementation-dependent default surface shader (but not "null") is used.

RIB BINDING

     Surface shadername parameterlist

EXAMPLE

     RtFloat rough = 0.3, kd = 1.0;

     RiSurface("wood", "roughness",(RtPointer)&rough, "Kd", (RtPointer)&kd,
          RI_NULL);

SEE ALSO

     RiAtmosphere, RiDisplacement 

Table 4.8 Standard Surface Shader Parameters

Surface Name     Parameter       Type     Default               Description
constant         -               -        -                     -
matte            Ka              float    1.0                   Ambient coefficient
                 Kd              float    1.0                   Diffuse coefficient
metal            Ka              float    1.0                   Ambient coefficient
                 Ks              float    1.0                   Specular coefficient
                 roughness       float    0.1                   Surface roughness
shinymetal       Ka              float    1.0                   Ambient coefficient
                 Ks              float    1.0                   Specular coefficient
                 Kr              float    1.0                   Reflection coefficient
                 roughness       float    0.1                   Surface roughness
                 texturename     string   ""                    Environment map name
plastic          Ka              float    1.0                   Ambient coefficient
                 Kd              float    0.5                   Diffuse coefficient
                 Ks              float    0.5                   Specular coefficient
                 roughness       float    0.1                   Surface roughness
                 specularcolor   color    color "rgb" (1,1,1)   Specular color
paintedplastic   Ka              float    1.0                   Ambient coefficient
                 Kd              float    0.5                   Diffuse coefficient
                 Ks              float    0.5                   Specular coefficient
                 roughness       float    0.1                   Surface roughness
                 specularcolor   color    color "rgb" (1,1,1)   Specular color
                 texturename     string   ""                    Texture map name

4.2.5 Displacement shading

The graphics state maintains a current displacement shader. Displacement shaders are procedures that can be used to modify geometry before the lighting stage. The RenderMan Interface includes one standard displacement shader: "bumpy". The definition of this displacement shader is given in Appendix A, Standard RenderMan Interface Shaders. The parameters controlling this displacement shader are given in Table 4.9.


RiDisplacement( RtToken shadername, ...parameterlist...)

Set the current displacement shader to the named shader. shadername is the name of a displacement shader.

If a particular implementation does not support the Displacements capability, displacement shaders can only change the normal vectors to generate bump mapping, and the surface geometry itself is not modified (see Displacement Shaders).

RIB BINDING

     Displacement shadername ...parameterlist...

EXAMPLE

     RiDisplacement("displaceit", RI_NULL);

SEE ALSO

     RiSurface

Table 4.9 Standard Displacement Shader Parameters

Displacement Name   Parameter     Type     Default   Description
bumpy               amplitude     float    1.0       Bump scaling factor
                    texturename   string   ""        Displacement map name

 


4.2.6 Co-shaders

In addition to the light list described in section 4.2.3, the graphics state maintains a list of co-shaders. Co-shaders are not directly executed by the renderer, but can be called by other shaders (like co-routines), as described in the Shader Objects and Co-Shaders application note. As with lights, popping to a previous graphics state returns the current co-shader list to its previous value.


RiShader( RtToken shadername, RtToken handlename, ...parameterlist... )

shadername is the name of a shader definition. The handlename is used as a key when calling this co-shader from another shader.

RIB BINDING

     Shader shadername handlename parameterlist

EXAMPLE

     RtFloat Ks = 0.0;

     RiShader("rust", "rustlayer", "Ks",(RtPointer)&Ks, RI_NULL);

SEE ALSO

     RiSurface, RiLightSource 

 


4.2.7 Volume shading

The graphics state contains a current interior volume shader, a current exterior volume shader, and a current atmosphere shader. These shaders are used to modify the colors of rays traveling through volumes in space. 

The interior and exterior shaders define the material properties on the interior and exterior volumes adjacent to the surface of a geometric primitive. The exterior volume relative to a surface is the region into which the natural surface normal points; the interior is the opposite side. Interior and exterior shaders are applied to rays spawned by trace() calls in a surface shader. Renderers that do not support the optional Ray Tracing capability will also not support interior and exterior shaders.

An atmosphere shader is a volume shader that is used to modify rays traveling towards the eye (i.e., camera rays). Even renderers that do not support the optional Ray Tracing capability can still apply atmosphere shaders to any objects directly visible to the camera.

The RenderMan Interface includes two standard volume shaders: "fog" and "depthcue". The definitions of these volume shaders are given in Appendix A, Standard RenderMan Interface Shaders. The parameters controlling these volumes are given in Table 4.10, Standard Volume Shader Parameters.
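
For illustration, a hedged C sketch attaching all three kinds of volume shaders to a transparent primitive; the "glass" surface shader name is an assumption (not a standard shader), and renderers without the Ray Tracing or Volume Shading capabilities may ignore the interior and exterior calls:

     RtFloat distance = 100.0;

     RiAtmosphere("fog", RI_NULL);                 /* applied to rays traveling toward the camera */
     RiAttributeBegin();
         RiSurface("glass", RI_NULL);              /* illustrative surface shader */
         RiInterior("water", RI_NULL);             /* volume on the inside of the primitive */
         RiExterior("fog", "distance", (RtPointer)&distance, RI_NULL);
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
     RiAttributeEnd();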
 
RiAtmosphere( RtToken shadername, ...parameterlist... )

This procedure sets the current atmosphere shader. shadername is the name of an atmosphere shader. If shadername is RI_NULL, no atmosphere shader is used.

RIB BINDING

     Atmosphere shadername parameterlist

EXAMPLE

     Atmosphere "fog"

SEE ALSO

     RiDisplacement, RiSurface

Table 4.10 Standard Volume Shader Parameters

Volume Name   Parameter     Type    Default               Description
depthcue      mindistance   float   0.0                   Distance where brightest
              maxdistance   float   1.0                   Distance where dimmest
              background    color   color "rgb" (0,0,0)   Background color
fog           distance      float   1.0                   Exponential extinction distance
              background    color   color "rgb" (0,0,0)   Background color
 

RiInterior( RtToken shadername, ...parameterlist... );

This procedure sets the current interior volume shader. shadername is the name of a volume or atmosphere shader. If shadername is RI_NULL, the surface will not have an interior shader.

RIB BINDING

     Interior shadername parameterlist

EXAMPLE

     Interior "water"

SEE ALSO

     RiExterior, RiAtmosphere

RiExterior( RtToken shadername, ...parameterlist... );

This procedure sets the current exterior volume shader. shadername is the name of a volume or atmosphere shader. If shadername is RI_NULL, the surface will not have an exterior shader.

RIB BINDING

     Exterior shadername parameterlist

EXAMPLE

     RiExterior( "fog," RI_NULL );

SEE ALSO

     RiInterior, RiAtmosphere

If a particular implementation does not support the Volume Shading capability, RiInterior and RiExterior are ignored; however, RiAtmosphere will be available in all implementations.

4.2.8 Shading Rate

The number of shading calculations per primitive is controlled by the current shading rate. The shading rate is expressed in pixel area. If geometric primitives are being broken down into polygons and each polygon is shaded once, the shading rate is interpreted as the maximum size of a polygon in pixels. A rendering program will shade at least at this rate, although it may shade more often. Whatever the value of the shading rate, at least one shading calculation is done per primitive.


RiShadingRate( RtFloat size )

Set the current shading rate to size. The current shading rate is specified as an area in pixels. A shading rate of RI_INFINITY specifies that shading need only be done once per polygon. A shading rate of 1 specifies that shading is done at least once per pixel. This second case is often referred to as Phong shading.

RIB BINDING

     ShadingRate size

EXAMPLE

     RiShadingRate(1.0);
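
Because the shading rate is part of the attribute state, it can be varied per object; a hedged sketch (the rate values and the sphere are illustrative):

     RiShadingRate(1.0);                 /* shade roughly once per pixel by default */
     RiAttributeBegin();
         RiShadingRate(16.0);            /* coarser shading for a distant background object */
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
     RiAttributeEnd();                   /* the finer rate is restored here */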

SEE ALSO

     RiGeometricApproximation

4.2.9 Shading interpolation

Shading calculations are performed at discrete positions on surface elements or in screen space (at a frequency determined by the shading rate). The results can then either be interpolated or held constant over some region of the screen or the interior of a surface element corresponding to one shading sample. This is controlled by the following procedure:


RiShadingInterpolation( RtToken type )

This function controls how values are interpolated between shading samples (usually across a polygon). If type is "constant," the color and opacity of all the pixels inside the polygon are the same. This is often referred to as flat or facetted shading. If type is "smooth," the color and opacity of all the pixels between shaded values are interpolated from the calculated values. This is often referred to as Gouraud shading.

RIB BINDING

     ShadingInterpolation "constant"
     ShadingInterpolation "smooth"

EXAMPLE

     ShadingInterpolation "smooth"

4.2.10 Matte objects

Matte objects are the functional equivalent of three-dimensional hold-out mattes. Matte objects are not shaded and are set to be completely opaque so that they hide objects behind them. However, regions in the output image where a matte object is visible are treated as transparent.


RiMatte( RtBoolean onoff )

Indicates whether subsequent primitives are matte objects.

RIB BINDING

     Matte onoff

EXAMPLE

     RiMatte(RI_TRUE);
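
A hedged sketch of scoping a hold-out object so that only one primitive is treated as a matte (the cylinder dimensions are illustrative):

     RiAttributeBegin();
         RiMatte(RI_TRUE);               /* the cylinder cuts a hole in the image */
         RiCylinder(0.5, 0.0, 1.0, 360.0, RI_NULL);
     RiAttributeEnd();                   /* subsequent primitives are shaded normally */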

SEE ALSO

     RiSurface

Table 4.11 Geometry Attributes

Attribute                 Type          Default                   Description
Object-to-World           transform     identity                  Transformation from object or model coordinates to world coordinates.
Bound                     6 floats      infinite                  Subsequent geometric primitives lie inside this box.
Detail Range              4 floats      (0,0,infinity,infinity)   Current range of detail. If the current detail is in this range, geometric primitives are rendered.
Geometric Approximation   token value   -                         The largest deviation of an approximation of a surface from the true surface, in raster coordinates.
Cubic Basis Matrices      2 matrices    Bezier, Bezier            Basis matrices for bicubic patches. There is a separate basis matrix for both the u and the v directions.
Cubic Basis Steps         2 ints        3, 3                      Patchmesh basis increments.
Trim Curves               -             -                         A list of trim curves which bound NURBS.
Orientation               token         "outside"                 Whether primitives are defined in a left-handed or right-handed coordinate system.
Number of Sides           integer       2                         Whether subsequent surfaces are considered to have one or two sides.
Displacement              shader        "null"                    A displacement shader that specifies small changes in surface geometry.

 


4.2.11 Bound

The graphics state maintains a bounding box called the current bound. The rendering program may clip or cull primitives to this bound.


RiBound( RtBound bound )

This procedure sets the current bound to bound. The bounding box bound is specified in the current object coordinate system. Subsequent output primitives should all lie within this bounding box. This allows the efficient specification of a bounding box for a collection of output primitives.

RIB BINDING

     Bound xmin xmax ymin ymax zmin zmax
     Bound [xmin xmax ymin ymax zmin zmax]

EXAMPLE

     Bound [0 0.5 0 0.5 0.9 1]

SEE ALSO

       RiDetail

4.2.12 Detail

The graphics state maintains a relative detail, a current detail, and a current detail range. The current detail is used to select between multiple representations of objects each characterized by a different range of detail. The current detail range is given by 4 values. These four numbers define transition ranges between this range of detail and the neighboring representations. If the current detail lies inside the current detail range, geometric primitives comprising this representation will be drawn.

Suppose there are two object definitions, foo1 and foo2, for an object. The first contains more detail and the second less. These are communicated to the rendering program using the following sequence of calls.

     RiDetail( bound );
         RiDetailRange( 0., 0., 10., 20. );
             RiObjectInstance( foo1 );
         RiDetailRange( 10., 20., RI_INFINITY, RI_INFINITY );
             RiObjectInstance( foo2 );

The current detail is set by RiDetail. The detail ranges indicate that object foo1 will be drawn when the current detail is below 10 (thus it is the low detail representation) and that object foo2 will be drawn when the current detail is above 20 (thus it is the high detail representation). If the current detail is between 10 and 20, the rendering program will provide a smooth transition between the low and high detail representations.


RiDetail( RtBound bound )

Set the current bound to bound. The bounding box bound is specified in the current coordinate system. The current detail is set to the area of this bounding box as projected into the raster coordinate system, times the relative detail. Before computing the raster area, the bounding box is clipped to the near clipping plane but not to the edges of the display or the far clipping plane. The raster area outside the field of view is computed so that if the camera zooms in on an object the detail will increase smoothly. Detail is expressed in raster coordinates so that increasing the resolution of the output image will increase the detail.

RIB BINDING

     Detail minx maxx miny maxy minz maxz
     Detail [minx maxx miny maxy minz maxz]

EXAMPLE

     RtBound box = { 10.0, 20.0, 42.0, 69.0, 0.0, 1.0 };

     RiDetail(box);

SEE ALSO

     RiBound, RiDetailRange, RiRelativeDetail 

RiDetailRange( RtFloat minvisible, RtFloat lowertransition, RtFloat uppertransition, RtFloat maxvisible )

Set the current detail range. Primitives are never drawn if the current detail is less than minvisible or greater than maxvisible. Primitives are always drawn if the current detail is between lowertransition and uppertransition. All these numbers should be non-negative and satisfy the following ordering:

     minvisible <= lowertransition <= uppertransition <= maxvisible

RIB BINDING

     DetailRange minvisible lowertransition uppertransition maxvisible
     DetailRange [minvisible lowertransition uppertransition maxvisible]

EXAMPLE

     DetailRange [0 0 10 20]

SEE ALSO

     RiDetail, RiRelativeDetail

If the Detail capability is not supported by a particular implementation, all object representations which include RI_INFINITY in their detail ranges are rendered.


4.2.13 Geometric approximation

Geometric primitives are typically approximated by using small surface elements or polygons. The size of these surface elements affects the accuracy of the geometry since large surface elements may introduce straight edges at the silhouettes of curved surfaces or cause particular points on a surface to be projected to the wrong point in the final image.


RiGeometricApproximation( RtToken type, RtFloat value )

The predefined geometric approximation is "flatness." Flatness is expressed as a distance from the true surface to the approximated surface in pixels. Flatness is sometimes called chordal deviation.

RIB BINDING

     GeometricApproximation "flatness" value
     GeometricApproximation type value

EXAMPLE

     GeometricApproximation "flatness" 2.5

SEE ALSO

     RiShadingRate

4.2.14 Orientation and Sides

The handedness of a coordinate system is referred to as its orientation. The initial "camera" coordinate system is left-handed: x points right, y points up, and z points in. Transformations, however, can flip the orientation of the current coordinate system. An example of a transformation that does not preserve orientation is a reflection. (More generally, a transformation does not preserve orientation if its Jacobian is negative.)

Similarly, geometric primitives have an orientation, which determines whether their surface normals are defined using a right-handed or left-handed rule in their object coordinate system. Defining the orientation of a primitive to be opposite that of the object coordinate system causes the primitive to be turned inside-out. If a primitive is inside-out, its normal will be computed so that it points in the opposite direction. This has implications for culling, shading, and solids (see the section on Solids and Spatial Set Operations). The outside surface of a primitive is the side from which the normal points outward; the inside surface is the opposite side. The interior of a solid is the volume that is adjacent to the inside surface and the exterior is the region adjacent to the outside. This is discussed further in the section on Geometric Primitives.

The current orientation of primitives is maintained as part of the graphics state independent of the orientation of the current coordinate system. The current orientation is initially set to match the orientation of the initial coordinate system, and always flips whenever the orientation of the current coordinate system flips. It can also be modified directly with RiOrientation and RiReverseOrientation. If the current orientation is not the same as the orientation of the current coordinate system, geometric primitives are turned inside out, and their normals are automatically flipped.
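
For example, a hedged sketch of placing a mirrored copy of an object (the scale and sphere are illustrative); the negative scale flips the handedness of the current coordinate system, and the current orientation flips with it, so the primitive is not turned inside out:

     RiAttributeBegin();
         RiScale(-1.0, 1.0, 1.0);        /* reflection: flips the coordinate system's handedness */
         /* The current orientation flips automatically; to turn the primitive
            inside out instead, RiReverseOrientation() could be called here. */
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
     RiAttributeEnd();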


RiOrientation( RtToken orientation )

This procedure sets the current orientation to be either "outside" (to match the current coordinate system), "inside" (to be the inverse of the current coordinate system), "lh" (for explicit left-handed orientation) or "rh" (for explicit right-handed orientation).

RIB BINDING

     Orientation orientation

EXAMPLE

     Orientation "lh"

SEE ALSO

     RiReverseOrientation
 

RiReverseOrientation()

Causes the current orientation to be toggled. If the orientation was right-handed it is now left-handed, and vice versa.

RIB BINDING

     ReverseOrientation -

EXAMPLE

     RiReverseOrientation();

SEE ALSO

     RiOrientation

Objects can be two-sided or one-sided. Both the inside and the outside surface of two-sided objects are visible, whereas only the outside surface of a one-sided object is visible. If the outside of a one-sided surface faces the viewer, the surface is said to be frontfacing, and if the outside surface faces away from the viewer, the surface is backfacing. Normally closed surfaces should be defined as one-sided and open surfaces should be defined as two-sided. The major exception to this rule is transparent closed objects, where both the inside and the outside are visible.


RiSides( RtInt sides )

If sides is 2, subsequent surfaces are considered two-sided and both the inside and the outside of the surface will be visible. If sides is 1, subsequent surfaces are considered one-sided and only the outside of the surface will be visible.

RIB BINDING

     Sides sides

EXAMPLE

     Sides 1

SEE ALSO

     RiOrientation

4.3 Transformations

Transformations are used to transform points between coordinate systems. At various points when defining a scene the current transformation is used to define a particular coordinate system. For example, RiProjection establishes the camera coordinate system, and RiWorldBegin establishes the world coordinate system.

The current transformation is maintained as part of the graphics state. Commands exist to set and to concatenate specific transformations onto the current transformation. These include the basic linear transformations translation, rotation, skew, scale and perspective, and non-linear transformations programmed in the Shading Language. Concatenating transformations implies that the current transformation is updated in such a way that the new transformation is applied to points before the old current transformation. Standard linear transformations are given by 4x4 matrices. These matrices are premultiplied by 4-vectors in row format to transform them.
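
For example, in the hedged sketch below (using procedures defined later in this section), the rotation is concatenated after the translation, so it is applied to the primitive's points first: the sphere is rotated about its own origin and then translated.

     RiTranslate(0.0, 0.0, 5.0);         /* applied second to the sphere's points */
     RiRotate(90.0, 0.0, 1.0, 0.0);      /* concatenated last, applied first */
     RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);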

The following three transformation commands set or concatenate a 4x4 matrix onto the current transformation:


RiIdentity()

Set the current transformation to the identity.

RIB BINDING

     Identity -

EXAMPLE

     RiIdentity( );

SEE ALSO

     RiTransform

RiTransform( RtMatrix transform )

Set the current transformation to the transformation transform.

RIB BINDING

     Transform transform

EXAMPLE

     Transform [.5 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]

SEE ALSO

     RiIdentity, RiConcatTransform

RiConcatTransform( RtMatrix transform )

Concatenate the transformation transform onto the current transformation. The transformation is applied before all previously applied transformations, that is, before the current transformation.

RIB BINDING

     ConcatTransform transform

EXAMPLE

     RtMatrix foo = { 2.0, 0.0, 0.0, 0.0,
                      0.0, 2.0, 0.0, 0.0,
                      0.0, 0.0, 2.0, 0.0,
                      0.0, 0.0, 0.0, 1.0 };

     RiConcatTransform( foo );

SEE ALSO

     RiIdentity, RiTransform, RiRotate, RiScale, RiSkew  

The following commands perform local concatenations of common linear transformations onto the current transformation.


RiPerspective( RtFloat fov )

Concatenate a perspective transformation onto the current transformation. The focal point of the perspective is at the origin and its direction is along the z-axis. The field of view angle, fov, specifies the full horizontal field of view.

The user must exercise caution when using this transformation, since points behind the eye will generate invalid perspective divides which are dealt with in a renderer-specific manner.

To request a perspective projection from camera space to screen space, an RiProjection request should be used; RiPerspective is used to request a perspective modeling transformation from object space to world space, or from world space to camera space.

RIB BINDING

     Perspective fov

EXAMPLE

     Perspective 90

SEE ALSO

     RiConcatTransform, RiDepthOfField, RiProjection

RiTranslate( RtFloat dx, RtFloat dy, RtFloat dz )

Concatenate a translation onto the current transformation.

RIB BINDING

     Translate dx dy dz

EXAMPLE

     RiTranslate(0.0, 1.0, 0.0);

SEE ALSO

     RiConcatTransform, RiRotate, RiScale

RiRotate( RtFloat angle, RtFloat dx, RtFloat dy, RtFloat dz )

Concatenate a rotation of angle degrees about the given axis onto the current transformation.

RIB BINDING

     Rotate angle dx dy dz

EXAMPLE

     RiRotate(90.0,  0.0, 1.0, 0.0);

SEE ALSO

     RiConcatTransform, RiScale, RiTranslate 

RiScale( RtFloat sx, RtFloat sy, RtFloat sz )

Concatenate a scaling onto the current transformation.

RIB BINDING

     Scale sx sy sz

EXAMPLE

     Scale .5 1 1

SEE ALSO

     RiConcatTransform, RiRotate, RiSkew, RiTranslate  

RiSkew( RtFloat angle, RtFloat dx1, RtFloat dy1, RtFloat dz1, 
 	RtFloat dx2, RtFloat dy2, RtFloat dz2 )

Concatenate a skew onto the current transformation. This operation shifts all points along lines parallel to the axis vector (dx2, dy2, dz2). Points along the axis vector (dx1, dy1, dz1) are mapped onto the vector (x, y, z), where angle specifies the angle (in degrees) between the vectors (dx1, dy1, dz1) and (x, y, z). The two axes are not required to be perpendicular; however, it is an error to specify an angle that is greater than or equal to the angle between them. A negative angle can be specified, but it must be greater than 180 degrees minus the angle between the two axes.

RIB BINDING

     Skew angle dx1 dy1 dz1 dx2 dy2 dz2
     Skew [angle dx1 dy1 dz1 dx2 dy2 dz2]

EXAMPLE

     RiSkew(45.0,  0.0, 1.0, 0.0,  1.0, 0.0, 0.0);

SEE ALSO

     RiRotate, RiScale, RiTransform

4.3.1 Named coordinate systems

Shaders often need to perform calculations in non-standard coordinate systems. The coordinate systems with predefined names are: "raster," "screen," "camera," "world," and "object." At any time, the current coordinate system can be marked for future reference.
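
A hedged sketch combining the procedures described below: the current transformation is marked with the name "lamptop" and later referred to both to place geometry and to transform points (the translation, sphere, and point array are illustrative):

     RtPoint corners[4];    /* assumed to be filled in elsewhere */

     RiTransformBegin();
         RiTranslate(0.0, 2.0, 0.0);
         RiCoordinateSystem("lamptop");          /* mark the current coordinate system */
     RiTransformEnd();

     /* later: place an object directly in the marked space ... */
     RiTransformBegin();
         RiCoordSysTransform("lamptop");
         RiSphere(0.1, -0.1, 0.1, 360.0, RI_NULL);
     RiTransformEnd();

     /* ... or transform points into it */
     RiTransformPoints("current", "lamptop", 4, corners);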


RiCoordinateSystem( RtToken space )

This function marks the coordinate system defined by the current transformation with the name space and saves it. This coordinate system can then be referred to by name in subsequent shaders, or in RiTransformPoints. A shader cannot refer to a coordinate system that has not already been named. The list of named coordinate systems is global.

RIB BINDING

     CoordinateSystem space

EXAMPLE

     CoordinateSystem "lamptop"

SEE ALSO

     RiTransformPoints, RiCoordSysTransform

RiScopedCoordinateSystem( RtToken name )

Like RiCoordinateSystem, this function marks the coordinate system defined by the current transformation with the indicated name and saves it. Unlike that call, the marked transformation is saved on a separate stack, independent of the global list maintained by RiCoordinateSystem. This stack is pushed and popped by RiAttributeBegin and RiAttributeEnd calls (but not by RiTransformBegin and RiTransformEnd). Scoped coordinate systems can then be referred to by name in subsequent shaders, or in RiTransformPoints and RiCoordSysTransform, just like global coordinate systems. When searching for a named coordinate system, a renderer should first check the scoped coordinate system stack; failing that, the global coordinate system list should be checked.

RIB BINDING

     ScopedCoordinateSystem space

EXAMPLE

     ScopedCoordinateSystem "lamptop"

SEE ALSO

     RiTransformPoints, RiCoordSysTransform

RiCoordSysTransform( RtToken name )

This function replaces the current transformation matrix with the matrix that defines the named coordinate system. This permits objects to be placed directly into special or user-defined coordinate systems by their names.

RIB BINDING

     CoordSysTransform name

EXAMPLE

     CoordSysTransform "lamptop"

SEE ALSO

     RiCoordinateSystem

RtPoint *
RiTransformPoints( RtToken fromspace, RtToken tospace, 
			RtInt n, RtPoint points )

This procedure transforms the array of points from the coordinate system fromspace to the coordinate system tospace. The array contains n points. If the transformation is successful, the array points is returned. If the transformation cannot be computed for any reason (e.g., one of the space names is unknown or the transformation requires the inversion of a noninvertible transformation), NULL is returned.

EXAMPLE

     RtPoint four_points[4];
     RiTransformPoints("current," "lamptop," 4, four_points);

SEE ALSO

     RiCoordinateSystem, RiProjection, RiWorldBegin
 

4.3.2 Transformation stack

Transformations can be saved and restored recursively. Note that pushing and popping the attributes also pushes and pops the current transformation.


RiTransformBegin()
RiTransformEnd()

Push and pop the current transformation. Pushing and popping must be properly nested with respect to the various begin-end constructs.

RIB BINDING

     TransformBegin -
     TransformEnd -

EXAMPLE

     RiTransformBegin();
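
A slightly fuller hedged sketch (the translation and sphere are illustrative), showing that the transformation applies only to the bracketed primitive:

     RiTransformBegin();
         RiTranslate(1.0, 0.0, 0.0);
         RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
     RiTransformEnd();       /* the current transformation is restored here */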

SEE ALSO

     RiAttributeBegin

4.4 Resources

Resources generally encapsulate some part of the graphics state, or other information specific to the renderer such as an in-memory RIB archive. Resources are always named and have a type associated with them. Resources are unique in that they can exist outside the rest of the graphics state, and are thus not subject to standard scoping rules; instead, they have their own scoping block mechanism. An example of a resource is the ability to save the entirety of the current attribute state, and restore it at a future point, independent of the current attribute stack.


RiResource( RtToken handle,  RtToken type, ...)

Creates or operates on a named resource (with name handle) of a particular type. The allowed operations for the resource are specified in the parameter list, and are specific to the type of resource being manipulated.

A named resource type which is recommended for all implementations of the RenderMan Interface is the encapsulation of the entirety of the current attribute state. This resource is selected by specifying "attributes" for the type. In this case, the parameter list must contain at least the parameter "string operation", which takes a value of "save" (to create the saved attribute state with the given handle) or "restore" (to restore a previously saved attribute state). When restoring the state, an additional optional parameter is accepted: "string subset", which specifies the subset of the saved attribute state to restore (e.g., "shading", "transform", or "all"). The actual supported subsets and what parts of the attribute state they affect are implementation dependent. (For PRMan, see RI Extensions: Saved Attributes.)

RIB BINDING

     Resource handle type ...parameterlist...

EXAMPLE
 
     Color 0 1 0
     Surface "marble"
     Resource "greenmarble" "attributes" "string operation" "save"
     Sphere 1 0 1 360
     Color 1 0 0
     Surface "plastic"
     Cone 0.5 0.5 360
     Resource "greenmarble" "attributes" "string operation" "restore" "string subset" "shading"
     Cylinder 0.5 0 1 360

In this example, a resource named "greenmarble" of type "attributes" has been created with the "save" operation. A green marble sphere is then immediately defined. The attribute state is then altered and a red plastic cone is created. Finally, the previously saved resource "greenmarble" is restored with the "restore" operation. Depending on the implementation, this restores the shading part of the attribute state such that the subsequent cylinder is green and uses a marble shader, instead of being red and plastic.
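
A hedged sketch of the same sequence in the C binding; the string parameter values are passed as pointers to RtString variables, and the inline declarations ("string operation", "string subset") are assumed to be accepted just as in the RIB example:

     RtColor green = { 0.0, 1.0, 0.0 }, red = { 1.0, 0.0, 0.0 };
     RtString op_save = "save", op_restore = "restore", subset = "shading";

     RiColor(green);
     RiSurface("marble", RI_NULL);
     RiResource("greenmarble", "attributes",
          "string operation", (RtPointer)&op_save, RI_NULL);
     RiSphere(1.0, 0.0, 1.0, 360.0, RI_NULL);

     RiColor(red);
     RiSurface("plastic", RI_NULL);
     RiCone(0.5, 0.5, 360.0, RI_NULL);

     RiResource("greenmarble", "attributes",
          "string operation", (RtPointer)&op_restore,
          "string subset", (RtPointer)&subset, RI_NULL);
     RiCylinder(0.5, 0.0, 1.0, 360.0, RI_NULL);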

SEE ALSO

     RiResourceBegin, RiResourceEnd


Resources can be explicitly saved and restored with the following commands.


RiResourceBegin()
RiResourceEnd()

Push and pop the current set of resources. Resources defined (named) in the current ResourceBegin scope will cease to exist at ResourceEnd. If a resource is defined outside any ResourceBegin/End scope, that resource is deemed to be global and will persist indefinitely, or at least until FrameEnd depending on the nature of the resource. Otherwise, pushing and popping of resources must be properly nested with respect to various Begin-End constructs.

RIB BINDING

     ResourceBegin -
     ResourceEnd -

EXAMPLE

     Color 1 0 0
     Surface "plastic"
     Resource "foo" "attributes" "string operation" "save"
     ResourceBegin
       Color 0 1 0
       Surface "marble"
       Resource "foo" "attributes" "string operation" "save"
     ResourceEnd

In this example, two resources, both named "foo" and of type "attributes", have been created with the "save" operation. The first resource is global and (depending on the implementation) stores attribute state: namely, that the color is red and the surface is plastic. The second resource's lifetime is scoped by the ResourceBegin and ResourceEnd calls and stores attribute state: green color and marble surface. Hence due to the scoping, references to "foo" within the ResourceBegin/End block will resolve against the second resource (green, marble). After the ResourceEnd, the second resource has been destroyed and the first resource is again in scope, and hence references to "foo" will resolve against the first resource (red, plastic).


4.5 Implementation-specific Attributes

Rendering programs may have additional implementation-specific attributes that control parameters affecting primitive appearance or interpretation. These are all set by the following procedure. In addition, a user can specify graphics state attributes by prepending the string "user:" to the attribute name. While these attributes are not expected to have any meaning to a renderer, user attributes should not be ignored. Rather, they must be tracked according to standard attribute scoping rules and made available to shaders via the attribute function.


RiAttribute( RtToken name, ...parameterlist... );

Set the parameters of the attribute name, using the values specified in the token-value list parameterlist.

RIB BINDING

     Attribute name parameterlist

EXAMPLE

     Attribute "bound" "displacement" [2.0]

SEE ALSO

     RiAttributeBegin

Table 4.12 Typical Implementation-specific Attributes

Attribute name/param                          Type     Default    Description
"displacementbound" "sphere" [s]              float    0          Amount to pad the bounding box for displacement.
"displacementbound" "coordinatesystem" [c]    string   "object"   The name of the coordinate system that the displacement bound is measured in.
"identifier" "name" [n]                       string   ""         The name of the object (helpful for reporting errors).
"trimcurve" "sense" [n]                       string   "inside"   If "inside", trim the interior of Trim Curve regions. If "outside", trim the exterior of the trim region.

Copyright © 1996- Pixar. All rights reserved.
RenderMan® is a registered trademark of Pixar.