
Combining Zbrush and Mari displacement maps by Xuan Prada

Short and sweet (hopefully).
It seems to be quite a common topic these days. Mari and Zbrush are commonly used by texture artists, and combining displacement maps in look-dev is a must.

I'll be using Maya and Arnold for this demo, but the same workflow applies to any 3D software and renderer.

  • Using Zbrush displacements is a no-brainer. Just export them as 32-bit .exr and that's it. Set your render subdivisions in Arnold and leave the default settings for displacement. The zero value is always 0 and the height should be 1 to match your Zbrush sculpt.
  • These are the maps that I'm using. First the Zbrush map and below the Mari map.
  • No displacement at all in this render. This is just the base geometry.
  • In this render I'm only using the Zbrush displacement.
  • In order to combine Zbrush and Mari displacement maps you need to normalise the ranges. If you used the same range, your Mari displacement would be huge compared with the Zbrush one.
  • A multiply node makes it easy to control the strength of the Mari displacement. Connect the map to input1 and play with the values in input2.
  • To mix both displacement maps you can use an average node. Connect the Zbrush map to input0 and the Mari map (multiply node) to input1.
  • The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, and then connect the ramp to the displacement default input.
  • In this render I'm combining both the Zbrush map and the Mari map.
  • In this other example I'm about to combine two displacements using a mask. I'll be using a Zbrush displacement as general displacement, and then I'm going to use a mask painted in Mari to reveal another displacement painted in Mari as well.
  • As mask I'm going to use the same symbol that I used before as displacement 2.
  • And as new displacement I'm going to use a procedural map painted in Mari.
  • The first thing to do is exactly the same operation that we did before: control the strength of the Mari displacement using a multiply node.
  • Then use another multiply node with the Mari map (multiply) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
  • And the rest is exactly the same as before. Connect the Zbrush displacement to input0 of the average node and the Mari displacement (multiply) to input1 of the average node. Then connect the average node to the ramp's color and the ramp to the displacement default input. There is a quick Maya Python sketch of this network after this list.
  • This is the final render.
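
For reference, this is a rough Maya Python (maya.cmds) sketch of the multiply/average/ramp network described above. The file node names (zbrush_disp, mari_disp), the shading group (my_shader_SG) and the 0.1 strength value are placeholders, and I'm assuming the "average" node is Maya's plusMinusAverage utility set to Average.

    import maya.cmds as cmds

    # Multiply node to bring the Mari map down to the Zbrush range.
    mult = cmds.shadingNode('multiplyDivide', asUtility=True, name='mari_disp_strength')
    cmds.connectAttr('mari_disp.outColor', mult + '.input1')
    cmds.setAttr(mult + '.input2', 0.1, 0.1, 0.1, type='double3')  # example strength

    # Average the Zbrush map (input 0) and the scaled Mari map (input 1).
    avg = cmds.shadingNode('plusMinusAverage', asUtility=True, name='disp_average')
    cmds.setAttr(avg + '.operation', 3)  # 3 = Average
    cmds.connectAttr('zbrush_disp.outColor', avg + '.input3D[0]')
    cmds.connectAttr(mult + '.output', avg + '.input3D[1]')

    # Ramp in between, then into the displacement shader's default input.
    ramp = cmds.shadingNode('ramp', asTexture=True, name='disp_ramp')
    cmds.connectAttr(avg + '.output3D', ramp + '.colorEntryList[0].color')
    disp = cmds.shadingNode('displacementShader', asShader=True, name='combined_disp')
    cmds.connectAttr(ramp + '.outColorR', disp + '.displacement')
    cmds.connectAttr(disp + '.displacement', 'my_shader_SG.displacementShader')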

VFX footage input/output by Xuan Prada

This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.

Shooting camera to Lab
The RAW material recorded on-set goes to the lab. In the lab it is converted to .dpx, which is the standard film format. Sometimes they might use .exr, but it's not that common.
A lot of movies are still being filmed with film cameras; in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.

Shooting camera to Dailies
The RAW material recorded on-set goes to dailies. The cinematographer, DP, or DI applies a primary LUT or color grading to be used throughout the project.
Original scans with LUT applied are converted to low quality scans and .mov files are generated for distribution.

Dailies to Editorial
The editorial department receives the low quality scans (Quicktimes) with the LUT applied.
They use these files to make the initial cuts and bidding.

Editorial to VFX
VFX facilities receive the low quality scans (Quicktimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for color grading.

Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material and the LUT needs to be applied.
The VFX facility will have to apply the film LUT to the work they create from scratch.
When the VFX work is done, the VFX facility renders out exr files.

VFX to DI
DI will do the final grading to match the Editorial Quicktimes.

VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.


The basic practical workflow would be:

  • Read raw scan data.
  • Read Quicktime scan data.
  • .dpx scans are usually in LOG color space.
  • .exr scans are usually in LIN color space.
  • Apply LUT and other color grading to the RAW scans to match the Quicktime scans.
  • Render out to Editorial using the same color space used for bringing in footage.
  • Render out Quicktimes using the same color space used for viewing. If viewing, for example, in sRGB you will have to bake in the LUT.
  • Good Quicktime settings: colorspace sRGB, codec Avid DNxHD, 23.976 fps, depth millions of colors, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit. A rough Nuke sketch of these I/O steps follows this list.
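
As a rough illustration, the read/write steps above might look like this in Nuke's Python API. The file paths are placeholders and the exact colorspace names depend on your project configuration.

    import nuke

    # Raw lab scans: .dpx usually comes in as log (Cineon).
    plate = nuke.nodes.Read(file='/shows/myShow/sh010/plate.%04d.dpx')
    plate['colorspace'].setValue('Cineon')

    # Editorial Quicktime reference, with the LUT already baked in.
    ref = nuke.nodes.Read(file='/shows/myShow/sh010/editorial_ref.mov')
    ref['colorspace'].setValue('sRGB')

    # Render back out to Editorial in the same colorspace the plate came in with.
    out = nuke.nodes.Write(file='/shows/myShow/sh010/comp_v001.%04d.dpx')
    out.setInput(0, plate)
    out['colorspace'].setValue('Cineon')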

Introduction to scatterers in Clarisse by Xuan Prada

Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes: just feed them a bunch of different trees to create a forest in two minutes. Add some nice lighting and render at an insane resolution. Then use all the 3D material with all the needed AOVs in Nuke and you'll have full control to create stunning matte paintings.

To make this demo a bit more fun, instead of trees I'm using cool Lego pieces :)

  • Create a context called obj and import the grid.obj and the toy_man.obj
  • Create another context called shaders and create generic shaders for the objs.
  • Also create two textures and load the images from the hard drive.
  • Assign the textures to the diffuse input of each shader and then assign each shader to the corresponding obj.
  • Set the camera to see the Lego logo.
  • Create a new context called crowd, and inside of it create a point cloud and a scatterer.
  • In the point cloud set the parent to be the grid.
  • In the scatterer set the parent to be the grid as well.
  • In the scatterer set the point cloud as geometry support.
  • In the geometry section of the scatterer add the toy_man.
  • Go back to the point cloud and in the scattering geometry add the grid.
  • Now play with the density. In this case I’m using a value of 0.7

  • As you can see all the toy_men start to populate the image.

  • In the decimate texture add the Lego logo. Now the toy_men stick to the Logo.
  • Add some variation in the scatterer position and rotation.
  • That’s it. Did you realise how easy it was to set up this cool effect? And did you check the polycount? 108.5 million :)
  • In order to make this look a little bit better, we can remove the default lighting and do some quick IBL setup.

Final render.

Photography assembly for matte painters by Xuan Prada

In this post I'm going to explain my methodology for merging different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting; for that I use ptGui and there is no better tool for the job.

I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed using only 2 images or maybe 15, it doesn't matter.
This method is much more complicated and requires more human time than using ptGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.

The pictures that I'm using for this tutorial were taken with a nodal point base, but they are not calibrated or anything like that. In fact they don't need to be. Obviously taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, shot with different focal lengths and different film backs from various digital cameras.

  • I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
  • All the pictures were taken with the same focal length, the same exposure, and with the ISO and white balance locked.
  • We need to know some information about these pictures. In order to blend the images in to a panoramic image we need to know the focal length and the film back or sensor size.
  • Connect a ViewMetaData node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure about the settings, check them one by one.
  • I can see that the focal length is 280/10 which means the images were taken using a 28mm lens.
  • I don't see film back information but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
  • Create a camera node with the information of the film back and the focal length.
  • At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just eyeball it.
  • In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
  • Connect a card node to the image and remove all the subdivisions.
  • Also deactivate the image aspect to have 1:1 cards. We will fix this later.
  • Connect a transform geo node to the card, and its axis input to the camera.
  • If we move the camera, the card is attached to it all the time.
  • Now we are about to create a custom parameter to keep the card aligned to the camera all the time, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
  • In the transform geo parameters, RMB and select manage user knobs and add a floating point slider. Call it distance. Set the min to 0 and the max to 10
  • This will allow us to place the card in space always relative to the camera.
  • In the transform geo translate z, press = to type an expression and write -distance
  • Now if we play with the custom distance value it works.
  • Now we have to refer to the film back and focal length so the card matches the camera information when it's moved or rotated.
  • In the x scale of the transform geo node type this expression: (input1.haperture/input1.focal)*distance, and in the y scale type: (input1.vaperture/input1.focal)*distance, where input1 is the camera connected to the axis input. There is a Nuke Python sketch of this rig after this list.
  • Now if we play with the distance custom parameter everything is perfectly aligned.
  • Create a group with the card, camera and transform geo nodes.
  • Remove the input2 and input3 and connect the input1 to the card instead of the camera.
  • Step out of the group and connect it to the image. There are usually refreshing issues, so cut the whole group node and paste it back. This will fix the problem.
  • Manage knobs here and pick the focal length and film back from the camera (just for checking purposes)
  • Also pick the rotation from the camera and the distance from the transform geo.
  • With these controls exposed here we won't have to go inside the group when we need them. And we will.
  • Create a project 3D node and connect the camera to its camera input and the group's input1 to its input.
  • Create a switch node below the transform geo node and connect its input1 to the project3D node.
  • Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
  • In the switch node add an expression: parent.mode
  • Put the mode to project 3D.
  • Add a sphere node, scale it big and connect it to the camera projector.
  • You will see the image projected onto the sphere instead of being rendered on a flat card.
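
Here is a rough Nuke Python sketch of the card/camera part of this rig, using the Nikon D800 film back and the 28mm focal length from earlier. Node classes and knob names are the stock Nuke ones (Camera2, Card2, TransformGeo); treat it as a starting point rather than a drop-in setup, and double-check the TransformGeo input order in your version. The group, project3D and switch steps are left out.

    import nuke

    # Camera with the film back (mm) and focal length from the metadata.
    cam = nuke.nodes.Camera2(name='stills_camera')
    cam['focal'].setValue(28)
    cam['haperture'].setValue(35.9)
    cam['vaperture'].setValue(24)

    # Card with no subdivisions and the image aspect turned off.
    card = nuke.nodes.Card2(name='image_card')
    card['rows'].setValue(1)
    card['columns'].setValue(1)
    card['image_aspect'].setValue(False)

    # TransformGeo: geometry input to the card, axis input to the camera.
    tgeo = nuke.nodes.TransformGeo(name='card_to_camera')
    tgeo.setInput(0, card)
    tgeo.setInput(1, cam)  # axis input

    # Custom distance knob driving translate z and the x/y scale expressions.
    tgeo.addKnob(nuke.Double_Knob('distance', 'distance'))
    tgeo['distance'].setRange(0, 10)
    tgeo['translate'].setExpression('-distance', 2)  # z
    tgeo['scaling'].setExpression('(input1.haperture/input1.focal)*distance', 0)  # x
    tgeo['scaling'].setExpression('(input1.vaperture/input1.focal)*distance', 1)  # y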

Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it's nice to have quick controls to switch between them.

In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.

  • Set the camera in the viewport and lock it.
  • Now you can zoom in and out without losing the camera.
  • Set the horizon line by playing with the rotation.
  • Copy and paste the camera projector group and set the horizon in the next image by doing the same as before: locking the camera and playing with the camera rotation.
  • Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
  • Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
  • Repeat the process with every single image. Just do the same as before: look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
  • Create a camera node and call it shot camera.
  • Create a scanline render node.
  • Create a reformat node and type the format of your shot. In this case I'm using a Super 35 format, which means 1920x817. There is a short Nuke Python sketch of this render setup after this list.
  • Connect the obj/scene input of the scanline render to the scene node.
  • Connect the camera input of the scanline render to the shot camera.
  • Connect the reformat node to the bg input of the scanline render node.
  • Look through the scanline render in 2D and you will see the panorama through the shot camera.
  • Play with the rotation of the camera in order to place the panorama in the desired position.
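
As a rough sketch, the shot camera render setup from the list above might look like this in Nuke Python. The Scene node name is a placeholder, and the ScanlineRender input order (bg, obj/scn, cam) is an assumption worth double-checking in your version.

    import nuke

    scene = nuke.toNode('Scene1')  # placeholder: the Scene node holding all the projections

    shot_cam = nuke.nodes.Camera2(name='shot_camera')

    # Super 35 shot format, 1920x817.
    nuke.addFormat('1920 817 super35')
    fmt = nuke.nodes.Reformat(format='super35')

    # ScanlineRender: bg -> reformat, obj/scn -> scene, cam -> shot camera.
    render = nuke.nodes.ScanlineRender(name='shot_render')
    render.setInput(0, fmt)       # bg
    render.setInput(1, scene)     # obj/scn
    render.setInput(2, shot_cam)  # cam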

That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.

  • Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
  • Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
  • Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normal of the sphere.
  • Create another scanline render and connect its camera input to the shot camera.
  • Connect the bg input of the new scanline render to the shot reformat node (super 35).
  • Connect the scn/obj input of the new scanline render to the sphere node.
  • That's all that you need.
  • You can look through the scanline render in the 2D and 3D viewport. We got all the images projected in 3D and rendered through the shot camera.

You can download the sample scene here.

Linear Workflow in Maya with Vray 2.0 by Xuan Prada

I’m starting a new job with V-Ray 2.0 for Maya. I’ve never worked with this render engine before, so first things first.
One of my first tasks is to create a nice neutral light rig for testing shaders and textures. Setting up a linear workflow is one of my priorities at this point.
Find below a quick way to set this up.

  • Set up your gamma. In this case I’m using 2.2.
  • Click on “don’t affect colors” if you want to keep the render linear and correct your gamma in post. If you don’t click on it, the gamma correction will be baked into the final render. Either way, no big deal.
  • The linear workflow option is something created by Chaos Group to fix old V-Ray scenes which don’t use lwf. You shouldn’t use this at all.
  • Click on affect swatches to see color pickers with the gamma applied.
  • Once you are working with gamma applied, you need to correct your color textures. There are two different options to do it.
  • First option: add a gamma correct node to each color texture node. In this case I’m using gamma 2.2, which means that I need to use a value of 0.455 (1/2.2) on my gamma node. There is a short sketch of this setup after this list.
  • Second option: Instead of using gamma correction nodes for each color texture node, you can click on the texture node and add a V-Ray attribute to control this.
  • By default all the texture nodes are being read as linear. Change your color textures to be read as sRGB.
  • Click on view as sRGB in the V-Ray frame buffer, otherwise you’ll see your renders in the wrong color space.
  • This is the difference between rendering with the option “don’t affect colors” enabled or disabled. As I said, no big deal.
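
This is a minimal Maya Python sketch of the first option (one gamma correct node per colour texture). The file node and material names are placeholders, and I'm assuming the VRayMtl diffuse plug is .color.

    import maya.cmds as cmds

    # De-gamma a colour texture before it hits the V-Ray material.
    gamma = cmds.shadingNode('gammaCorrect', asUtility=True, name='diffuse_degamma')
    cmds.setAttr(gamma + '.gamma', 0.455, 0.455, 0.455, type='double3')  # 1 / 2.2
    cmds.connectAttr('diffuse_file.outColor', gamma + '.value')
    cmds.connectAttr(gamma + '.outValue', 'my_vray_mtl.color')  # diffuse colour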

Find references with Google sets by Xuan Prada

Some time ago I wrote a post about my workflow to find image references and general information, useful when you are in the researching phase of your work.

I’ve found a nice tool from Google which helps you find information related to a topic that you type in.

For example, if you need references of American cars but you only know the companies “Chevrolet” and “Buick”, Google gives you other related American car companies.
It’s very useful!

Google Sets.