Houdini Help Documentation

Mantra

Renders the scene using Houdini’s standard mantra renderer and generates IFD files.

Overview

The mantra output driver node uses mantra (Houdini’s built-in renderer) to render your scene. You can create a new mantra node by choosing Render > Create render node > Mantra from the main menus. You can edit an existing render node with Render > Edit render node > node name. To see the actual network of render driver nodes, click the path at the top of a network editor pane and choose Other networks > out.

You can add and remove properties from the output driver just like you can for objects. If you add object properties to the render driver, they define defaults for all objects in the scene. Select a render node, and in the parameter editor click the Gear menu and choose Edit rendering parameters to edit the properties on the node. For more information on properties, see properties. For more information on adding properties to a node, see the Edit Parameter Interface.

For complex scenes involving multiple render passes, separate lighting and shadow passes, and so on, you can set up dependency relationships between render drivers by connecting driver nodes together. See render dependencies.

Important parameters

Some important parameters on the render driver include:

The camera to render from (Main tab, Camera).

The image to render to. (Properties tab, Output sub-tab, Output picture).

Use ip instead of a filename to render directly into the MPlay image viewer (this is the default). This will not save the image to disk, but you can save manually from MPlay.

The frame range to render for an animation (Valid frame range and Start/end/inc).

Turn on motion blur for all objects (Properties tab, Sampling sub-tab, Allow motion blur).

Adjust render quality (Properties tab, Sampling sub-tab). The Pixel samples and ray sample controls are good places to start. See the documentation for these options below.

Scripts

Each script parameter refers to an hscript command. Regardless of the expression language selected for the parameter, the resulting string is run as an hscript command.

Note

It is possible to use the python, unix or source hscript commands to perform complex processing.

The commands are always run when rendering occurs, whether the output driver is rendering a frame range or sending output to a command.

Before the render occurs, Houdini will automatically set the current hscript directory to point to the output driver.

Pre-Render Script: This command is run before any IFDs are generated. It is only run once per render.
Pre-Frame Script: This command is run before each IFD is generated.
Post-Frame Script: This command is run after each IFD is generated.

Note

Although the IFD may have been generated, this does not necessarily mean that mantra has finished rendering the image when this command is run.

Post-Render Script: This command is run once, after all IFDs have been generated.

Note

Although the IFD may have been generated, this does not necessarily mean that mantra has finished rendering the image when this command is run.
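For example, these script parameters can be set from Python using the hou module. This is a minimal sketch for illustration only; the node path /out/mantra1 and the parameter names prerender and postframe are assumptions, so check the parameter names on your own Mantra node.

import hou

# Hypothetical mantra output driver; adjust the path to match your scene.
rop = hou.node("/out/mantra1")

# Assumed script parameter names; the strings are run as hscript commands.
rop.parm("prerender").set("echo Starting render of $HIPNAME")
rop.parm("postframe").set("echo Finished frame $F")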

Mantra Attributes

The following attributes on geometry control how mantra renders the geometry.

orient: Orientation of curve/point primitives. Curves and points will be oriented so that their normals point in the direction of the orient vector attribute.
v: (velocity) Used for velocity motion blur computations.
uv: Default attribute for the -u command-line option.
vm_photon, vm_surface, vm_displace, shop_vm_photon, shop_vm_surface, shop_vm_displace: Shader overrides (per primitive).
width, pscale: Controls the width of curve/point primitives (see below).
scale: Not used by mantra.

Note

When mantra decides the size of point primitives, it looks for the following attributes in order:

Point attribute width

Point attribute pscale

Detail attribute width

Detail attribute pscale

To decide the width of curve primitives, mantra looks for the following attributes in order:

Vertex attribute width

Point attribute width

Primitive attribute width

Detail attribute width

Point attribute pscale

The first attribute mantra finds controls the size/width of the point/curve.
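For example, a point width can be authored from a Python SOP before rendering. This is a minimal sketch; the pscale attribute name follows the lookup order above, and the constant value 0.02 is just an illustration.

import hou

# Inside a Python SOP: add a pscale point attribute that mantra will
# pick up when sizing point primitives (see the lookup order above).
geo = hou.pwd().geometry()
pscale = geo.addAttrib(hou.attribType.Point, "pscale", 1.0)
for point in geo.points():
    point.setAttribValue(pscale, 0.02)  # render each point 0.02 units wide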

Technical Tips for Advanced Users

The default set of properties for the output driver can be found in $HH/soho/parameters/IFDmantra.ds.

You can define your own set of properties without modifying this file. Look for the #sinclude line and the parameter definition for default_output. These parameters are added in the creation script for the output driver.

Some properties on the camera will be overridden from the viewport menu or the render state. See $HH/soho/overrides. By inspecting these files, you can see that when rendering from the viewport menu, the output image is always set to ip and the output is always sent to a command (not to a disk file).

Parameters

Render: Begins the render with the last render control settings.
Render Control: Opens the render control dialog to allow adjustments of the render parameters before rendering.
Valid Frame Range: Controls whether this render node outputs the current frame (Render Any Frame) or the image sequence specified in the Start/End/Inc parameters (Render Frame Range).

Render Frame Range (strict) will render frames START to END when it is rendered, but will not allow frames outside this range to be rendered at all. Render Frame Range will allow outside frames to be rendered. This is used in conjunction with render dependencies. It also affects the behavior of the ‘Override Frame Range’ in the Render Control dialog.

Two possible cases where you’d want the strict behavior:

A 60 frame walk cycle written out to a geo, but part of a larger ROP net to render out a larger frame range.

A texture loop from 1-20.

Otherwise, you will usually set this to non-strict.

Render Any Frame: Renders a single frame, based on the value in the playbar or the frame that is requested by a connected output render node.
Render Frame Range: Renders a sequence of frames. If an output render node is connected, this range is generally ignored in favor of frames requested by the output render node.
Render Frame Range (Strict): Renders a sequence of frames. If an output render node is connected, the frames it requests are restricted to this frame range.
Start/End/Inc: Specifies the range of frames to render (start frame, end frame, and increment). All values may be floating point values. The range is inclusive.

These parameters determine the values of the local variables for the output driver.

For example, if the parameters are set to:

Start    End    Inc
10.5     12     0.5

There will be 4 frames rendered (10.5, 11, 11.5, and 12), so $NRENDER will have a value of 4. $N will have the value:

Frame    10.5    11    11.5    12
$N       1       2     3       4
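A minimal Python sketch of how the inclusive range produces this frame list and the $NRENDER/$N values (written to match the example above, not taken from Houdini itself):

start, end, inc = 10.5, 12.0, 0.5

frames = []
f = start
while f <= end + 1e-6:          # the range is inclusive; tolerance guards float error
    frames.append(f)
    f += inc

print(len(frames))               # 4, the value of $NRENDER
for n, frame in enumerate(frames, 1):
    print(n, frame)              # $N, frame: (1, 10.5) (2, 11.0) (3, 11.5) (4, 12.0)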
Render With Take: The output driver will switch to this take before rendering and then restore the current take when rendering is done.

Tip

Use chs("take") to use this value in other parameters. See the chs expression function for more information.

Main

The output driver is responsible for generating IFD files, which describe the Houdini scene to mantra. The Main tab determines how IFD generation is performed.

Camera: The camera object which defines the scene.
Command: The command (i.e. mantra) where the IFD file is sent. This will be disabled if the IFD file is saved to disk.

Note

The Mantra ROP will not automatically gzip based on the file extension of the .ifd file. A file saved with a .ifd.gz extension will still contain uncompressed data. However, you can set your render command to something like gzip > foo$F4.ifd.gz to compress the file.

Disk File: The location where the IFD file is saved to disk. You must turn on the Disk File checkbox to enable this parameter.
Block Until Render Complete: When sending the output to a command, Houdini will normally return control after it is finished writing the IFD. This allows the render process to complete in the background. Turning on this parameter will force Houdini to block until mantra finishes rendering the frame.

When rendering a frame range, this option is automatically turned on. However, the option is not automatically turned on when rendering in an hscript or python loop construct, so use caution or you may end up starting multiple background renders.

Note

The rps and rkill hscript commands can be used to query or kill background renders.

See the Troubleshooting section for more information.

Create Intermediate Directories: Create intermediate parent directories for output files as needed. This currently only applies to generated scripts, images, and shadow maps.
Initialize Simulation OPs: If this option is turned on, POP and DOP simulations will be initialized before rendering.
Show In Viewport: Enabling this checkbox will cause the driver to show up in the viewport menu. By default, SOHO output drivers do not appear in the viewport menu.

Objects

The parameters on this tab determine which objects and lights are included in the IFD.

Candidate Objects: The geometry objects in this parameter will be included in the IFD if their display flags are turned on and their display channel is enabled.
Force Objects: Objects in this parameter are added to the IFD regardless of the state of their display. Objects can only be added to the IFD once.
Forced Matte: Objects forced to be output as matte objects.
Forced Phantom: Objects forced to be output as phantom objects.
Exclude Objects: Objects in this parameter are excluded from the scene, regardless of whether they are selected in Candidate Objects or Force Objects.
Solo Light: Only lights in this parameter will be included in the IFD. This includes shadow map generation and illumination. If this parameter is set, the candidate, forced, and exclusion parameters are ignored.

Note

Using this parameter in conjunction with the render_viewcamera property provides a quick way of generating shadow maps for selected lights.

Candidate Lights: Each light in this parameter is added to the IFD if the dimmer channel of the light is not 0. The standard light sets the dimmer channel to 0 when the light is not enabled.
Force Lights: The lights in this parameter are added to the IFD regardless of the value in their dimmer channels.
Exclude Lights: These lights will be excluded from the scene, even if they are selected in Candidate Lights or Force Lights.
Visible Fog: The fog/atmosphere objects in this parameter are included in the IFD if their display flags are turned on and their display channel is enabled.
Headlight Creation: If there are no lights in the scene, a headlight is created by default. To disable this, turn off this checkbox.

Properties

You can add any rendering property to the mantra output driver. Properties change the behavior of how mantra will interpret the scene. You can add any property, even if it might not make sense to add to an output driver. For example, you can add a surface shader even though the output driver does not have any surfaces. These values will be used as default values for objects which do not have these properties defined. However, if an object has a property defined, the value assigned to the object will be used instead of the value assigned on the output driver.

See rendering properties and Mantra rendering properties for more information.

Output

Output picture: The image or device where the resulting image will be rendered. You can set this value to ip, which renders the image in MPlay, or you can save it to an image file. The following image types are supported: .pic, .tif, .sgi, .pic.gz, .rat, .jpg, .cin, .rta, .bmp, .tga, .rad, .exr, and .png.

Include $F in the file name to insert the frame number. This is necessary when rendering animation. See expressions in file names for more information.

Output device: The image format or device for the output image. If you leave this at the default value of Infer from filename, the image format will be selected based on the file extension (e.g. .pic will automatically generate a Houdini format image).
Pixel filter: Specifies the pixel filter used to combine sub-pixel samples to generate the value for a single pixel. The filter is normally specified as a filter type (e.g. gauss) followed by an x and y filter width in pixels. To blur the image, increase the filter width.

There are several different pixel filters available.

minmax style: The style may be one of:

min – Choose the minimum value from all sub-pixels.

max – Choose the maximum value from all sub-pixels.

ocover – First, choose the object which covers most of the pixel, then take the value from the sub-pixels of that object only.

point: Choose the sub-pixel closest to the center of the pixel.
box [width height]: Use a box filter to combine the sub-pixels with a filter size given by width/height.
gauss [width height]: Use a Gaussian filter to combine the sub-pixels with a filter size given by width/height.
bartlett [width height]: Use a Bartlett (cone) filter to combine the sub-pixels with a filter size given by width/height.
blackman [width height]: Use a Blackman filter to combine the sub-pixels with a filter size given by width/height.
catrom [width height]: Use a Catmull-Rom filter to combine the sub-pixels with a filter size given by width/height.
hanning [width height]: Use a Hanning filter to combine the sub-pixels with a filter size given by width/height.
mitchell [width height]: Use a Mitchell filter to combine the sub-pixels with a filter size given by width/height.
Sample filter: Controls how transparent samples are combined to produce the color values for individual pixel samples. The sample filter is used to composite transparent surfaces before the pixel filter produces final pixel colors.

Opacity Filtering (alpha): Uses the opacity (Of) values for transparent samples for compositing. This option should be used whenever correct transparent compositing is required. For example, when rendering volumes, sprites, or transparency.
Full Opacity Filtering (fullopacity): When stochastic transparency is enabled, this option causes a channel to be evaluated and composited with every opacity evaluation, as opposed to only being composited with the samples that are selected for full shading. It can be used to produce smoother results for channels that are fast to evaluate such as Ce or direct_emission. When stochastic transparency is disabled, this option behaves the same way as Opacity Filtering.
Closest Surface (closest): Ignores the opacity values and just copies the color for the closest transparent sample into the image. This option disables transparency for a given deep raster plane and will only produce the closest sample results.
Quantization: The storage type for the main image. The type of quantization used will affect image quality and size. If you need to adjust the image’s dynamic range in compositing, you should normally leave this value at the default of 16-bit floating point.

The default is "float16" for the first plane, and "float" for secondary planes. You can override the first plane’s value with the -b command line argument to mantra.

White point: White point for the image plane.
Sub-pixel output: Normally, sub-pixel samples are filtered using the pixel filter defined on an image plane. When this is turned on, each sub-pixel is output without any pixel filtering performed.

The image:resolution property will be scaled by the image:samples property to determine the actual output image resolution. For example, if image:resolution was (1024,512) and image:samples was (4,6), the image rendered would have a resolution of 4096 by 3072. Each pixel would represent a single unfiltered sub-pixel sample.
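As a quick check of the arithmetic, the output size is simply the per-axis product:

res = (1024, 512)        # image:resolution
samples = (4, 6)         # image:samples
out = (res[0] * samples[0], res[1] * samples[1])
print(out)               # (4096, 3072), as in the example above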

Override camera resolution: Normally, the resolution channels on the camera determine the output resolution. Enabling this parameter allows an alternate resolution to be used.
Resolution scale: A specific resolution can be specified using this parameter, or alternatively, a fraction of the camera’s resolution.
Resolution override: Allows you to override the camera resolution.
Pixel Aspect Ratio: The pixel aspect ratio represents the width of a pixel divided by the height of a pixel. It is not the aspect ratio of the image (which is determined by the resolution of the image). This parameter does not affect rendering; it is only used to change how images are displayed, by stretching the pixels by this factor.
Tiled render: When you render a target node with this option on using HQueue, the server will split frames to render into separate tiles and render each tile as a separate job. When you render locally with this option on, Mantra will render a single tile instead of the entire frame.
Horizontal tiles: Split the frame into this number of tiles horizontally, when Tiled render is on.
Vertical tiles: Split the frame into this number of tiles vertically, when Tiled render is on.
Tile index: Which tile to render, when rendering locally with Tiled render on. Tile numbers start at 0 in the top left and increase left to right, top to bottom.
Deep resolver: When generating an image, mantra runs the sample filter to composite samples to a single color, and then runs the pixel filter to produce the final color for a pixel. A deep resolver is used to store information about each individual sample prior to sample filtering and compositing.

Options:

filename (default = “”) – The filename to output the deep shadow information.

ofstorage (default = “real16”) – The storage format for Of. The value should be one of…

real16 – 16 bit floating point values.

real32 – 32 bit floating point values.

real64 – 64 bit floating point values.

pzstorage (default = “real32”) – The storage format for Pz. The value should be one of…

real16 – 16 bit floating point values.

real32 – 32 bit floating point values.

real64 – 64 bit floating point values.

ofsize (default = 3) – The number of components to store for opacity. This should be either 1 for monochrome (stored as the average value) or 3 for full RGB color.

compression (default = 4) – Compression value between 0 and 10. Used to limit the number of samples which are stored in a lossy compression mode. The compression parameter applies to opacity values, and determines the maximum possible error in opacity for each sample. For compression greater than 0, the following relationship holds

OfError = 1/(2^(10-compression))
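For example, at the default compression of 4, OfError = 1/(2^6) = 1/64, so each sample’s opacity can be off by at most roughly 0.016.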

zbias (default = 0.001) – Used in compression to “merge” samples which are closer than some threshold. Samples that are closer together than the zbias are merged into a single sample and compute an average z value.

depth_mode (default = “nearest”) – Used in compression to determine whether to keep the nearest, the farthest or the midpoint of samples. The possible choices for depth_mode are…

nearest – Choose the smallest Pz value.

farthest – Choose the largest Pz value.

midpoint – Choose the midpoint of Pz values.

depth_interp (default = “discrete”)

discrete – Each depth sample represents a discrete surface.

continuous – Each depth sample is part of a continuum (i.e. volume).

Example: shadow filename test.rat ofsize 1

vm_dcmfilename: The .rat file to generate when the Deep Camera Map resolver is used.

Extra Image Planes: Any number of extra image planes may be generated simultaneously. If the primary output image format supports multiple image planes, the plane name will be used to define the name of the deep raster plane. If the primary output device does not support multiple image planes, each image plane will be output to an individual file, with the name of the plane defining the file name. The formats that support multiple image planes are OpenEXR and Houdini .pic (including the "ip" device).
Image Plane: The name of the plane.
VEX Variable: The VEX variable to be output to the image plane. The variable must be either a global variable, or an exported parameter.

When the N variable is output, its value may not be normalized, resulting in either very small or very large values.

VEX Type: The correct type of the variable must also be specified.
Channel Name: Name of the channel to write the variable data to in the output file (if the file format supports multiple named channels). Leave this field blank to use the VEX variable name as the channel name.
Different File: Turn on the checkbox next to this field to write the variable data to an output file other than the rendered image.

Note

You can specify the same “Different file” for multiple extra image planes with different channel names (if the file format supports multiple named channels).

Quantize: The storage type for the output.
Sample Filter
Opacity Filtering: Transparent surfaces will be composited using Of.
Closest Surface: Only the value of the closest surface will be output, regardless of the opacity of the surface.
Pixel Filter: Specifies the pixel filter used to combine sub-pixel samples to generate the value for a single pixel.
Gamma: Specifies the gamma correction for the image.
Gain: Each color value is multiplied by the gain prior to being quantized.
Dither: The dither amount to be applied. The dither is specified as a fraction of the quantization step (i.e. 0.5 will be one half of a quantization step). The option is ignored for floating point output.
White Point: The white point of the image used during quantization.
Export variable for each component: Enable per-component export planes for variables that support this feature. When enabled, multiple export planes will be produced, one for each of the vm_components specified on the ROP. Per-component exports may be combined with per-light exports to split up lighting components on a per-component and per-light basis.
Light Exports: Controls whether light exports are produced.
Export variable for each light: Creates a separate deep raster plane for each light that matches the criteria defined by the Light Mask and the Light Selection.
Merge all lights into single channel: Creates a single deep raster plane storing the sum of the export variable for all lights that match the criteria defined by the Light Mask and the Light Selection.
Light Mask: A list of light objects by name/bundle.
Light Selection: A list of light objects by category tags (see the Categories parameter on the light).

Note

For each light the deep raster plane name is prefixed with a mangled version of the path to the light. This can be specified manually by adding the rendering parameter Export Plane Prefix to the light sources. You can set the export prefix to an empty string on the output driver if you are generating deep rasters for a single light source in all cases. This will eliminate the prefix for all light sources.

Output Options

Artist: The name of the image creator. By default this is the current user’s login name.

Houdini, TIFF, PNG formats

Comment: A text comment to include in the output file.

Houdini, OpenEXR, PNG formats

Hostname: The name of the computer where this image was created.

Houdini format

MPlay tile order: The direction in which MPlay renders the image. Possible values are "middle" (middle out), "top" (top down), or "bottom" (bottom up).
MPlay session label: When rendering to MPlay, all Houdini sessions will send the output to the same MPlay flipbook. This can be problematic when running multiple Houdini sessions. The MPlay Label lets you specify a label for the MPlay associated with the output driver. Only renders which match the given label will be sent to that MPlay.

Houdini Process ID: Uses the operating system process identifier so that the MPlay flipbook will only accept renders from that Houdini session.
HIP Name: Uses the $HIPNAME variable so that MPlay will only accept renders from the running $HIP file.
Output Driver Name: The MPlay flipbook will only accept renders from the given output driver. For example, if you copy and paste the output driver, each output driver will be sent to a different MPlay flipbook because the operators will have different names.

Note

If there are multiple Houdini sessions, there may be output drivers in the other session which match the same operator name.

For example, say you have two output drivers: “High quality” and “Low Quality”. If you set the MPlay Label to different values for the two output drivers, each render will be sent to different MPlay sessions.

MPlay gamma: Display gamma for MPlay, from 0.0 to 4.0.
JPEG quality: JPEG quality, an integer from 10 to 100.
TIFF compression: Type of image compression to use in TIFF files. Possible values are "None", "LZW", "AdobeDeflate", "Deflate", "PackBits", "JPEG", "PixarLog", "SGILog", "SGILog24".

Render

vm_renderengine / renderer:renderengine: See understanding mantra rendering for more information.

Micropolygon Rendering: Each primitive is diced up into micropolygons which are shaded and sampled independently.
Ray Tracing: The scene is sampled by sending rays from the camera. Each surface hit by a ray will trigger a surface shader execution.
Micropolygon Physically Based Rendering: Sampling is performed on micropolygons; however, all shading and illumination is performed using physically based rendering.

The number of rays used to compute shading is determined by the maximum ray-samples.

Physically Based Rendering: Sampling of the scene is performed using ray-tracing and shading is computed using physically based rendering.

In this case, the pixel samples determine the shading quality of the PBR engine.

Photon Map Generation: Rather than rendering an image, a photon map will be generated by sending photons from light sources into the scene. The photon map file to be generated is specified on the PBR tab.

Though this IFD token has an integer value, it’s also possible to set it using one of the following string values:

micropoly – Micropolygon scanline rendering (default).

raytrace – All rendering will be performed using ray-tracing.

pbrmicropoly – Physically Based Rendering using micro-polygon scanline rendering

pbrraytrace – Physically Based Rendering using ray-tracing only.

photon – Photon map generation.

Tile size: The size (in pixels) of the tiles rendered by mantra. Larger tile sizes may consume more memory.
Opacity limit: When processing many layers of transparent surfaces or rendering with volume primitives, mantra assumes the accumulated result is fully opaque once the cumulative opacity exceeds this limit. This prevents additional rendering behind opaque layers.
Use max processors: When enabled, automatically sets the thread count (renderer:threadcount IFD property) to the number of CPUs of the rendering machine.
Thread count: When Use Max Processors (renderer:usemaxthreads IFD property) is disabled, sets the number of threads Mantra uses for rendering.
Cache ratio: The proportion of physical memory Mantra will use for its unified cache.

For example, with the default vm_cacheratio of 0.25 and 16 GB of physical memory, Mantra will use 4 GB for its unified cache.

The unified cache stores dynamic, unloadable data used by the render including the following:

2D .rat texture tiles

3D .i3d texture tiles

3D .pc point cloud pages (when not preloaded into memory)

Tessellated meshes required by ray tracing

Displacements

Subdivision surfaces

Bezier and NURBS primitives

Ray tracing accelerator: Controls the type of ray tracing accelerator used by mantra. A ray tracing accelerator is a spatial data structure used to optimize the performance of ray intersection tests against complex geometry.

KD-Tree ("kdtree"): Ray trace using a KD-Tree. Normally, the KD-Tree will produce the fastest raytracing performance at a modest initialization time. It is possible to control the performance/quality tradeoff for KD-Tree construction with the KD-Tree Memory Factor parameter (vm_kdmemfactor).
Bounding Volume Hierarchy ("bboxtree"): Ray trace using a bounding volume hierarchy. Sometimes a bounding volume hierarchy will be faster to construct and/or faster to raytrace than a KD-Tree.
KD-Tree memory factor: Change the memory/performance tradeoff used when constructing KD-Tree acceleration data structures. Values larger than 1 will cause mantra to use proportionally more memory and take longer to optimize the tree in an attempt to make ray tracing faster. Smaller values will cause mantra to use proportionally less memory and take less time to optimize the tree, while possibly compromising ray tracing performance. The default value of 1 will try to balance the amount of memory used by ray tracing data structures with the amount of memory used by geometry.

If you are noticing long tree construction times, try decreasing the KD memory factor to 0.1. If your render is too slow after tree construction, increase the value until you find a good balance of tree construction time vs. render performance.

UV Render object: The name of the object to be rendered in UV space. Mantra is able to render an object in UV space rather than 3D space. Only one object may be rendered in UV space.
UV Attribute: The name of the attribute used for UV unwrapping.
Enable hiding: Perform hidden surface removal. When hidden surface removal is disabled, all surfaces in the camera’s frustum will be rendered, regardless of whether they are occluded. This can impact render time significantly.
Create image from viewing camera: Renders an image from the viewing camera. Sometimes, it is useful to skip this render, for example, when rendering shadow maps.
Auto-generate shadow maps: Enable or disable shadow map generation. Each light also has its own controls to determine whether shadow maps will be generated.
Auto-generate environment maps: Enable or disable environment map generation. Each object can be set up to generate an environment map of all the other objects in the scene.
Auto-generate photon maps: Enable or disable photon map generation.
Output OTLs with full paths: Enabling this checkbox will expand any variables in OTL paths, breaking the dependency on Houdini environment variables, but possibly making the IFD less portable.
Force VEX shader embedding: Mantra is able to load the shader directly from the OTL when Houdini uses a shader defined in an OTL. When shaders are built using VOPs, the shader must be embedded in the IFD. Enabling this option will force Houdini to embed the shaders defined by OTLs.

This option makes the IFD more self-contained so that machines which don’t have the OTL installed (or a different version of the OTL) are able to evaluate the shaders correctly.

However, if you have complicated shaders, embedding them will bloat the size of the IFD significantly.

Sampling

Enable depth of field: Mantra will render with depth of field. The parameters controlling depth of field may be found on the camera object.
Allow motion blur: Mantra will render the image using motion blur. The shutter parameter on the camera determines the duration of the shutter, specified as a fraction of the frame. The Xform Time Samples and Geo Time Samples parameters should be used in motion blur renders to control how motion blur is computed. By default, only transformation motion blur with 2 segments will be computed, meaning that animated SOPs will not produce blur in the render. To enable motion blur for moving geometry, it’s necessary to increase the Geo Time Samples.
Raytrace motion blur: Enable or disable raytrace motion blur for micro-polygon rendering and photon map generation. By default, raytrace motion blur is disabled. This setting has no effect on the ray tracing rendering engines.
Motion factor: Automatically adjusts the shading quality for objects which are significantly blurred. Increasing the motion factor of an object will dynamically decrease the shading quality based on the rate of motion. This can significantly speed up renderings of rapidly moving objects. It also affects depth of field and may improve the speed of scenes with deep depth of focus.

Motion factor reduces shading quality using the following formula:

new shading quality = shading quality / max(pixels of motion * (1/16), 1)

Objects traveling more than 16 pixels within the frame will have their shading quality reduced by the above factor. For example, an object blurred over 32 pixels with a shading quality of 1 will have the quality reduced to 0.5. You should not use very large values for this parameter. Values between 0 and 1 are reasonable.

When using the Ray Tracing or Physically Based Rendering rendering engine, motion factor will only affect the geometric subdivision for subdivision surfaces, NURBS/beziers, or displacements and will not change the amount of shading.
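A minimal sketch of the formula above, using the worked example from the text:

def adjusted_shading_quality(shading_quality, pixels_of_motion):
    # new shading quality = shading quality / max(pixels of motion * (1/16), 1)
    return shading_quality / max(pixels_of_motion / 16.0, 1.0)

print(adjusted_shading_quality(1.0, 32.0))   # 0.5, matching the example above
print(adjusted_shading_quality(1.0, 8.0))    # 1.0: below 16 pixels, quality is unchanged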

Xform time samples: The number of transformation blur motion samples. Each object (unless the parameter exists on the object) will have this many transforms output over the shutter duration. Increasing this number will result in smoother sub-frame motion, at a small memory and compute expense. Any number of segments may be specified, though the default of 2 is often adequate unless significant nonlinear motion occurs within the shutter time for a frame.
Geo time samples: The number of deformation blur motion samples. Each object (unless the parameter exists on the object) will have this many copies of the geometry included in the IFD. When an object is deforming, it is necessary to increase this parameter to a value of 2 to see motion blurred geometry. Any number of segments may be specified, though a value of 2 is often adequate unless significant nonlinear motion occurs within the shutter time for a frame.

This option has no effect on objects which use velocity blur, since velocity blur is linear by nature.

Note

Any number of segments may be specified; however, duplicate geometry is sent down for each sample which may significantly impact the memory footprint of mantra.

Shutter offset: Controls where the blur occurs in the image relative to the position of the object at the current frame. A value of -1 blurs from the position at the previous frame to the position at the current frame. A value of 0 blurs from halfway to the previous frame to halfway to the next frame. A value of 1 blurs from the current position to the position at the next frame. You can use fractional values, and values beyond -1 or 1, to move the blur less or more.

To change the size of the blur, change the Shutter time (shutter property).

This parameter replaces the old Motion blur style (motionstyle) parameter, which only allowed values of "before" (shutter offset = -1), "center" (shutter offset = 0), and "after" (shutter offset = 1).
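The following sketch shows one way to interpret the offset; it is derived from the description above (offsets of -1, 0, and 1), not from mantra’s internals, and uses a shutter of 1 frame purely for illustration:

def blur_interval(frame, shutter, offset):
    # The blur window is shutter frames wide; the offset shifts its centre
    # by offset * shutter / 2 relative to the current frame.
    center = frame + offset * shutter * 0.5
    return (center - shutter * 0.5, center + shutter * 0.5)

print(blur_interval(10, 1.0, -1))   # (9.0, 10.0): previous frame to current frame
print(blur_interval(10, 1.0, 0))    # (9.5, 10.5): halfway before to halfway after
print(blur_interval(10, 1.0, 1))    # (10.0, 11.0): current frame to next frame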

Allow image motion blur: This checkbox controls whether mantra computes motion blur for direct visibility. If this checkbox is disabled, mantra will render without motion blur but still compute motion blurred positions, so that you can use getBlurP() in shaders to export motion vectors.
Pixel samples: The number of samples which are filtered to compute the color for each pixel. Increasing the samples will improve motion blur and depth of field quality. Two sample values are specified for x and y, and the total number of pixel samples is determined by multiplying these values together. Normally both values are set to the same amount, with common pixel sampling settings being 3×3, 8×8, and 16×16.

When using the ray-tracing engines, each ray is shaded independently. Very large values for pixel samples in this case may significantly increase rendering time. However, often large pixel sample values are necessary with the Physically Based Rendering engine to eliminate noise.

Jitter: A floating point value which adds noise to pixel sampling patterns. A value of 1 will fully randomize pixel sampling patterns, while a value of 0 turns off pixel jitter, resulting in stairstep artifacts when too few pixel samples are used. Jitter only applies to pixel antialiasing and does not apply to motion blur or depth of field sampling (which are always randomized).
Sample lock: This property will "lock" the sampling patterns from frame to frame. This minimizes the buzzing caused by noise when rendering animations. The noise is still present, but is more consistent from frame to frame.
Ray variance anti-aliasing: When ray-tracing VEX functions are invoked, send out additional rays to perform anti-aliasing of ray-traced effects. This will typically generate higher quality ray-tracing. The sampling is determined by image:minraysamples and image:maxraysamples.

When the micro-polygon rendering engine is used, each vertex of each micro-polygon is shaded one time. When ray-tracing is invoked in shading, mantra is able to automatically anti-alias the ray-tracing by oversampling. When variance anti-aliasing is used, there will be at least the minimum rays sent for each shade and at most the maximum number of rays. Mantra uses variance analysis on the shading results to determine the actual number of rays.

If variance anti-aliasing is disabled, the minimum number of rays will be sent for each shading sample. When primary ray-tracing engines are used, these parameters are ignored and anti-aliasing is done using the pixel samples.

Min ray samples: The minimum number of ray-tracing samples used in variance anti-aliasing.

For Physically Based Rendering, this parameter can also be used to boost the number of samples taken for each shading point without increasing the pixel samples.

Max ray samples: The maximum number of ray-tracing samples used in variance anti-aliasing.
Noise level: The variance threshold used to decide whether to send out additional anti-aliasing rays. When nearby samples are very similar, fewer anti-aliasing rays will be sent. When nearby samples differ, more rays will be sent.
Volume quality: An overall scale to adjust the ray march quality for volume rendering. A value of 1 means that mantra should use the native step size specified by the volume primitive type being rendered. For voxel grid volumes, the native step size is determined based on the voxel size, while for procedural volumes it is usually fixed at an object space distance of 0.1. Larger values for the volume quality will result in a smaller ray march distance, leading to more accurate renders. Lowering the volume quality can improve performance but can cause the render to lose definition and to appear overly transparent. The default value of 0.25 will use a step size of 4 voxel widths for voxel grid volumes. When optimizing render performance, you should start at a high volume quality and decrease the value until you see an unacceptable drop in quality.
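A minimal sketch of the relationship described above, assuming the step size scales inversely with the volume quality:

def ray_march_step(native_step, volume_quality):
    # Higher quality -> smaller step -> more accurate but slower ray marching.
    return native_step / volume_quality

voxel_size = 1.0
print(ray_march_step(voxel_size, 0.25))   # 4.0 voxel widths, the default described above
print(ray_march_step(voxel_size, 1.0))    # 1.0 voxel width, the native step size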
Volume shadow quality: A factor to proportionally decrease the volume quality only for shadows. Smaller values will cause mantra to use a larger ray march step size for shadow rays. A value of 1 will produce equal quality for shadow rays and shading rays.
Stochastic transparency: Uses an optimization to speed up raytracing of translucent objects (volumes, sprites, and transparent surfaces) which normally contain many highly transparent samples per pixel. This may make the image noisier than without stochastic transparency, so you may need to compensate by, for example, increasing the pixel samples. You should generally leave this option on.

When this option is on, mantra randomly selects a certain number of transparent samples per pixel to shade (controlled by the Transparent Samples parameter). When this option is off (the default prior to Houdini 12), mantra shades all transparent samples.

The renderer ignores this option for micropolygon rendering (except for secondary ray tracing) and for renders that only generate opacity (such as deep shadow maps). In those cases it is more efficient to composite all the transparent shading results.

Added in Houdini 12.

Transparent samples: The number of transparent samples to shade when Stochastic Transparency is on. Higher values improve shading quality for volumes and transparent surfaces, but are slower to render.
Random seed: An integer value that controls initialization of pixel sampling patterns. Different random seeds will produce different pixel sampling patterns.

Shading

Reflect limit: Limit the number of specular bounces for physically based rendering. Increasing this limit will produce more accurate rendering of scenes that have many inter-reflections (such as scenes containing glass or fluids).

To see glossy or diffuse bounces, it is also necessary to increase the reflect limit. The reflect limit is a global limiter on all types of bounces, not just reflections and refractions.

Refract limit: The maximum number of refraction bounces.
Diffuse limit: Limit the number of diffuse bounces. Diffuse bounces usually contribute the majority of global illumination in a scene, so increasing the diffuse limit will produce more accurate global illumination.

This only works with PBR and with the GI Light object in raytrace/micropolygon because diffuse bounces need to be traced by a shader, and when you’re rendering with micropolygon/raytrace, most shaders do not calculate diffuse bounces themselves. The PBR shader and the GI Light both include a computation of diffuse lighting.

Volume limit: The maximum number of volume bounces allowed in PBR mode.
Raytracing bias: Global raytracing bias to be used for PBR rendering, specified in world space units. Increase the raytracing bias only if doing so eliminates artifacts in your render. For very small scenes (1 unit in size), it is sometimes necessary to decrease the raytracing bias.
Bias along normal: When biases are used in VEX shaders, the bias can either be performed along the ray direction or along the surface normal. If this parameter is turned on, biasing will be along the surface normal, using the "Ng" VEX variable.

If the ray direction and normal point in different directions, the normal will first be flipped and then biasing will be performed in the direction of the flipped normal. This setting is particularly useful for ray traced surfaces that are seen edge-on.

Color space: Sampling color space for variance antialiasing. Setting this to Gamma 2.2 will cause darker parts of the image to receive more samples.
At ray limit: Controls how PBR deals with rays that reach the ray tracing limit (for example the reflect or refract limit).
Smooth grid colors: When micro-polygon rendering, shading normally occurs at micro-polygon vertices at the beginning of the frame. Enabling this checkbox causes the vertex colors to be Gouraud shaded to determine the color for a sample.

Turn this checkbox off when you are trying to match a background plate to eliminate any filtering which might occur on the plate. The Gouraud interpolation will cause softening of the map.

PBR

PBR shader: The shader to use for physically based rendering. If no shader is specified, the default VEX Pathtracer shader is used. Normally you should not need to assign a shader for this parameter unless you are a shader writer and need to make adjustments to the PBR shading algorithm.
PBR photon shader: The shader to use for photon map generation. If no shader is specified, the default VEX Photon Tracer shader is used. Normally you should not need to assign a shader for this parameter unless you are a shader writer and need to make adjustments to the PBR photon tracing algorithm.
Allowable paths: The type of path tracing to perform in PBR mode.

diffuse – Trace all diffuse and specular bounces, but once a diffuse bounce is encountered continue tracing only diffuse reflections.

all – All paths are traced. This option can be used to enable rendering of caustics without the use of photon maps – however when using point or small area lights, the rendered result can turn out to be extremely noisy.

Color limit: When performing shading, mantra places no limits on values which may be returned. However, when performing Physically Based Rendering, it’s possible to get very large values for some indirect rays. These extrema will cause color spikes which cannot be smoothed out without sending huge numbers of rays. The color limit is used to clamp the color values of these indirect samples to avoid these spikes.
Min reflection ratio: Enables a minimum amount of secondary ray tracing even when the surface BSDF reflects only a small amount of light. Normally, when mantra encounters a surface that reflects only 10% of the incoming light, ray tracing will be optimized by sending 10x fewer secondary rays, potentially leading to unwanted noise in the image. Increasing the minimum reflection ratio ensures that at least the specified proportion of rays is traced, even for surfaces that have a low reflectivity.

Statistics

Verbose level: Increasing this value will cause more information to be printed out during rendering. To see render time and memory information, set the verbosity level to 1. To see more detailed information about the render, set this parameter to 3. Larger values print out progressively more information.
VEX profiling: VEX profiling lets you analyze shader performance. Turning this on will slow down shading, especially when NaN detection is turned on.

No VEX Profiling (0): No VEX profiling is performed.
Execution Profiling (1): Mantra will print out information about shading computations at the conclusion of the render. This helps in identifying bottlenecks in the shading process.
Profiling and NAN detection (2): Prints out the shading information and also reports instructions that generate bad values (Not A Number). The output is cryptic, but can help track down errors in shaders.

When NAN detection is turned on, each instruction executed in VEX will be checked for invalid arithmetic operations, such as division by 0, numeric overflow, and other invalid operations. Errors like this will typically result in white or black pixels in the resulting image.

Alfred style progress: A percentage complete value is printed out as tiles are finished. This is in the style expected by Pixar’s Alfred render queue.

The following is an example of timing information and how to read it.

Render Time: 4.52u 6.30s 4.24r Memory:  23.40 MB of 23.59 MB arena size.  VM Size: 345.11 MB

The u value is the user time in seconds mantra took to render the image.

Note

This value might not be 100% accurate depending on the OS and other system variables. On Linux, this value will indicate the total time for all threads to render, so rendering with more than one processor may inflate the user time.

The s value is the system overhead incurred in rendering the frame (disk I/O, swap, etc.). A large system time may indicate a large amount of time spent reading or writing files.

The r value is the wall clock time to render. This is the most important value as it gives a clear indication for the total amount of time spent rendering.

Arena size is the amount of memory mantra allocated to actually render the image. It does not reflect how much memory mantra actually used.

VM size is the virtual memory size for the mantra program. This is the amount of memory reported by the operating system and may significantly exceed the amount of memory that mantra is actually using.

Mantra needs to grab contiguous chunks of memory as it builds its data structures. Once mantra frees the data, the operating system controls the arena size, shrinking it by returning contiguous chunks of memory to the free pool of available memory. This is memory allocation and deallocation. You don’t want the arena size to be much larger than the actual memory used.

Python tile callback: This property specifies a Python callback which is called at the completion of each rendered tile. A built-in mantra module allows information to be queried. There is a single function available in the mantra module: the property function allows querying of any global rendering property as well as some other special properties. The result of the property call is always a list of values.

The special properties queried may be…

tile:ncomplete – The number of tiles which have been completed.

tile:ntiles – The total number of tiles in the image.

tile:laptime – The number of seconds taken to render the last tile.

tile:totaltime – The total number of seconds to render since the render began. This does not include time to load the scene, but rather is defined as the time since the first tile began rendering.

tile:coords – The tile bounding box (in pixels).

tile:memory – The amount of RAM in use by mantra.

import mantra

# property() always returns a list of values; take the first element.
tile = mantra.property("tile:ncomplete")[0]

# On the first completed tile, report which renderer is running.
if tile == 1:
    print(mantra.property("renderer:name"))
    print(mantra.property("renderer:version"))

Dicing

Shading quality multiplier: A global multiplier on all per-object shading quality (vm_shadingquality) parameters in the scene. This parameter can be used to globally increase or decrease shading quality. The shading quality used for an object is determined by:

shadingquality = object:shadingquality * renderer:shadingfactor
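For example, an object with vm_shadingquality 2 rendered with a global multiplier of 0.5 gets an effective shading quality of 1.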

Geometry measuring: When primitives are rendered in mantra, they are split into smaller primitives if they are too big to be rendered. The primitives are measured using the measurer to determine whether they are too big.

There are several different measurers available, each of which takes some optional arguments.

Non-Raster Measuring (nonraster [-z importance]): This measures geometry in 3D. The Z-Importance can be used to bias the z-component of the surface. A Z-Importance of 0 means that the x and y components of the object will be the only metric in determining the size of the object. This is roughly equivalent to raster space measurement.

By increasing the Z-Importance to 1, the z measurement becomes more meaningful. It is possible to increase the Z-Importance beyond 1.

If you think of a grid in the XY plane, the z-importance has no effect. However, if the grid is nearly in the XZ plane, z-importance has more influence on the dicing. With a Z-Importance of 0, only the projected measurements will be used, which will result in long, thin strips being created. With a Z-Importance of 1, the grid will be more uniformly sub-divided. With a value greater than 1, more divisions will be performed in Z.

This is important when displacement mapping is being performed. Increasing the Z-Importance will improve quality on displacement shaded ground planes (for example). The default value of 1 generally will result in sharp, high quality displacements at a shading quality of 1 for all incident angles.

Note

This is mantra’s equivalent to prman’s raster-orient flag.

Raster Space Measuring (raster): Measures geometry in screen space. This is roughly equivalent to the "nonraster -z 0" measurer, so is deprecated in favor of that approach.
Uniform Measuring (uniform): Generates uniform divisions. The size of the divisions is controlled by the Geometry Quality or Shading Quality in micropolygon renders.
Z-importance: This parameter controls the z-importance for nonraster measuring. See vm_measure above.
Ray measuring: This parameter is obsolete in Houdini 12. See vm_measure.
Ray Z-importance: This parameter is obsolete in Houdini 12. See vm_measure.

Geometry

Save binary geometry: Saves binary geometry in the IFD. If this option is turned off, ASCII geometry is saved in the IFD instead. Binary is much more efficient; ASCII is human-readable.
Save geometry groups: Controls whether geometry groups are saved to the IFD. Groups can require a significant amount of storage and are normally unused during rendering, so leaving this option disabled will improve IFD generation time and reduce file size.

Irradiance

Enable irradiance cache: Enables the irradiance cache, which can significantly improve the performance of the irradiance and occlusion VEX function calls.

This has no effect when using area lights or the PBR rendering engines. Normally the irradiance cache should only be used with a VEX Global Illumination light or with shaders that use the occlusion() or irradiance() functions.

Irradiance error: The maximum error tolerance between samples in the irradiance map. Normally you should only decrease this value from the default of 0.1 if there are artifacts in the render.
Max spacing (pixels): The maximum screen space distance between irradiance samples. Lowering this value will force more samples to be computed, creating a more accurate representation of the irradiance information.
Min spacing (pixels): In some cases, you can get a very dense clustering of irradiance samples in the cache. This parameter prevents the clustering by ensuring that there are at least this many pixels between samples. Clustering usually occurs near non-smooth intersections between different primitives.
Irradiance cache file: The file to store the irradiance cache. If multiple objects specify the same file, the cache file will contain samples from all objects.
Read/write file mode: The read-write mode for the global irradiance file.

r: Read only.
w: Write only. This will generate a new irradiance cache file on disk at the specified sampling rate.
rw: Read and write. This will load an irradiance cache file from disk and use the pre-existing results where they exist. It will also generate new samples for parts of the image that were not rendered in the original cache file generation.