look-dev

Combining Zbrush and Mari displacements in Clarisse by Xuan Prada

We all have to work with displacement maps painted in both Zbrush and Mari.
Sometimes we use 32-bit floating point maps, sometimes 16-bit maps, etc. Combining different displacement depths and scales is a common task for a look-dev artist working in the film industry.

Let's see how to set up different displacement maps exported from Zbrush and Mari in Isotropix Clarisse.

  • First of all, have a look at all the individual displacement maps to be used.
  • The first one has been sculpted in Zbrush and exported as a 32-bit .exr displacement map. The non-displacement value is zero.
  • The second one has been painted in Mari and also exported as a 32-bit .exr displacement map. Technically this map is exactly the same as the Zbrush one; the only difference here is the scale.
  • The third displacement map in this exercise also comes from Mari, but in this case it's a 16-bit .tif displacement map, which means that the mid-point will be 0.5 instead of zero.
  • We need to combine all of them in Clarisse and get the expected result.
  • Start creating a displacement node and assigning it to the mesh.
  • We consider the Zbrush displacement our main displacement layer. That said, the displacement node has to be set up like the image below. The offset or non-displacement value has to be zero, and the front value 1. This will give us exactly the same look that we have in Zbrush.
  • In the material editor I'm connecting a multiply node after every single displacement layer. Input 2 is 1,1,1 by default; increasing or reducing this value controls the strength of each displacement layer. It is not necessary to control the intensity of the Zbrush layer unless you want to, but it is necessary to reduce the intensity of the Mari displacement layers, as they are way too strong compared with the Zbrush one.
  • I also added an add node right after the 16-bit Mari displacement, adding -0.5 in order to remap it to the same level as the other 32-bit maps, whose non-displacement value is zero (see the sketch after this list).
  • Finally I used add nodes to mix all the displacement layers.
  • It is a good idea to set up and check each layer individually to find the right look.
  • No displacement at all.
  • Zbrush displacement.
  • Mari high frequency detail.
  • Mari low frequency detail.
  • All displacement layers combined.
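
Outside Clarisse, the whole combine boils down to very simple per-texel arithmetic. Here is a minimal Python sketch of what the multiply and add nodes are doing; the strength values are hypothetical and only there to illustrate the idea.

def combine_displacement(zbrush_32, mari_32, mari_16,
                         zbrush_strength=1.0,
                         mari_32_strength=0.1,
                         mari_16_strength=0.1):
    """Combine raw texel values coming from each displacement map."""
    # The 32-bit maps already have their non-displacement value at zero.
    zbrush_layer = zbrush_32 * zbrush_strength
    mari_layer_a = mari_32 * mari_32_strength
    # The 16-bit map sits around a 0.5 mid-point, so shift it back to zero first.
    mari_layer_b = (mari_16 - 0.5) * mari_16_strength
    # The add nodes simply sum the remapped layers.
    return zbrush_layer + mari_layer_a + mari_layer_b

# A texel that is flat in the 16-bit map (value 0.5) contributes nothing.
print(combine_displacement(0.02, 0.4, 0.5))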

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we can get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better interaction between the environment and our 3D assets, which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information needed later for the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not just texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • For the photogrammetry process I took around 500 shots, each one composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for photogrammetry (see the sketch after this list).
  • I also shot a few equirectangular HDRIs with an even higher dynamic range. I projected these in Mari using the environment projection feature and, once the projections from the different tripod positions were completed, covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. With the 3D scan this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images, applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally I exported all the UDIMs (around 70) to Maya, all of them 16-bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I also did a few render tests with this old character.
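
As a reference for the bracket merging step, here is a minimal Python sketch using OpenCV (not the tool I actually used on this project): it merges three exposures into a linear HDR and writes out both an HDR for texturing and an LDR for the photogrammetry solve. File names and exposure times are hypothetical.

import cv2
import numpy as np

# Three brackets of the same camera position, 3 stops apart (hypothetical names/times).
files = ['shot_001_under.jpg', 'shot_001_mid.jpg', 'shot_001_over.jpg']
times = np.array([1 / 1000.0, 1 / 125.0, 1 / 15.0], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Debevec merge produces a linear, high dynamic range image.
hdr = cv2.createMergeDebevec().process(images, times=times)
cv2.imwrite('shot_001.hdr', hdr)  # full dynamic range, for texturing/projections

# Quick tonemap to get an 8-bit version for the photogrammetry software.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite('shot_001_ldr.jpg', np.clip(ldr * 255, 0, 255).astype('uint8'))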

Subdivide multiple objects in Arnold by Xuan Prada

As you probably know, Arnold manages subdivision individually per object; there is no way to subdivide multiple objects at once. Obviously, if you have a lot of different objects in a scene, going one by one adding Arnold's subdivision properties doesn't sound like a good idea.

This is the easiest way I found to solve this problem and subdivide tons of objects at once.
I have no idea about scripting, so if you have a better solution, please let me know :)

  • This is the character that I want to subdivide. As you can see it has a lot of small pieces. I'd like to keep them separate and subdivide every single one of them.

Model by SgtHK.

  • First of all, you need to select all the geometry shapes. To do this, select all the geometry objects in the Outliner and paste these lines in the script editor.

/* Select all the objects you want to subdivide first; this doesn't work with
groups or locators. Once the shapes are selected, just change aiSubdivType and
aiSubdivIterations in the Attribute Spread Sheet. */

// Walk down from the selected transforms to their shape nodes.
pickWalk -d down;

// Keep the selected shapes in a list in case you want to reuse them.
string $shapesSelected[] = `ls -sl`;

  • Once all the shapes are selected, go to the Attribute Spread Sheet editor.
  • Filter by ai subd.
  • Just type the subdivision method and iterations (or set them directly with the script, as in the Python sketch after this list).
  • This is it, the whole character is now subdivided.
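
If you prefer to skip the spreadsheet completely, here is a minimal Python sketch that sets the subdivision attributes directly on every selected mesh shape. It assumes MtoA is loaded, so the shapes actually carry the aiSubdivType and aiSubdivIterations attributes; the iteration count is just an example.

import maya.cmds as cmds

# Grab every mesh shape under the current selection.
shapes = cmds.ls(selection=True, dag=True, type='mesh', noIntermediate=True)

for shape in shapes:
    # Only touch shapes that have the Arnold attributes (MtoA loaded).
    if cmds.attributeQuery('aiSubdivType', node=shape, exists=True):
        cmds.setAttr(shape + '.aiSubdivType', 1)        # 1 = catclark
        cmds.setAttr(shape + '.aiSubdivIterations', 2)  # render-time iterations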

Rendering OpenVDB in Clarisse by Xuan Prada

Clarisse is perfectly capable of rendering volumes while maintaining its flexible rendering options like instances and scatterers. In this particular example I'm going to render a very simple smoke simulation.

Start by creating an IBL setup. Clarisse allows you to do it with just one click.

Using a couple of matte and chrome spheres will help to establish the desired lighting situation.

To import the volume simulation just go to import -> volume.

Clarisse will show you a basic representation of the volume in the viewport, always in real time.

To improve the visual representation of the volume in the viewport just click on Progressive Rendering. Lighting will also affect the volume in the viewport.

Volumes are treated pretty much like geometry in Clarisse. You can render volumes with standard shaders if you wish.

The ideal situation, of course, would be to use volume shaders for volume simulations.

In the material editor I'm using a utility -> extract property node to read any property embedded in the simulation. In this case I'm reading the temperature.

Finally I drive the temperature color with a gradient map.
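
If you are not sure which properties are actually embedded in the cache, you can list the grids of the .vdb file before wiring the extract property node. A minimal Python sketch, assuming the pyopenvdb module is available and with a hypothetical file name:

import pyopenvdb as vdb

# Print the name and value type of every grid stored in the cache
# (typically density, temperature, velocity...).
for grid in vdb.readAllGridMetadata('smoke_sim.0050.vdb'):
    print(grid.name, grid.valueTypeName)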

If you get a lot of noise in your renders, don't forget to increase the volume sampling of your light sources.

Final render.

Dealing with Ptex displacement by Xuan Prada

Render using Ptex displacement.

What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?

As you probably know, Zbrush doesn't support Ptex. I'm not a super fan of Ptex (but I will be soon), but sometimes I don't have time, or simply don't want, to do proper UV mapping. So if Zbrush doesn't export Ptex and my assets don't have any sort of UV coordinates, can't I use Ptex at all for my displacement information?

Yes, you can use Ptex.

Base geometry render. No displacement.

  • In the image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy amount of polygons.
  • Now I have imported the model into Zbrush via .obj. Only 500,000 polys, but it still looks great.
  • We are going to use Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production-ready model.
  • Using the Zremesher tool, which is great for some types of retopology tasks, we get this low-res model. Good enough for our purpose here.
  • The next step is to export both models, high and low resolution, as .obj.
  • We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
  • Once imported keep both of them visible.
  • Export displacement maps. Have a look in the image below at the options you need to tweak.
  • Basically you need to activate Ptex displacement, 32-bit depth, the texel resolution, etc.
  • And that's it. You should be able to render your Zbrush details using Ptex now.

Combining Zbrush and Mari displacement maps by Xuan Prada

Short and sweet (hopefully).
It seems to be quite a normal topic these days. Mari and Zbrush are commonly used by texture artists. Combining displacement maps in look-dev is a must.

I'll be using Maya and Arnold for this demo but any 3D software and renderer is welcome to use the same workflow.

  • Using Zbrush displacements is a no-brainer: just export them as 32-bit .exr and that's it. Set your render subdivisions in Arnold and leave the default settings for displacement. The zero value is always 0 and the height should be 1 to match your Zbrush sculpt.
  • These are the maps that I'm using. First the Zbrush map and below the Mari map.
  • No displacement at all in this render. This is just the base geometry.
  • In this render I'm only using the Zbrush displacement.
  • In order to combine Zbrush displacement maps and Mari displacement maps you need to normalise the ranges. If you use the same range, your Mari displacement will be huge compared with the Zbrush one.
  • Using a multiply node it is very easy to control the strength of the Mari displacement: connect the map to input1 and play with the values in input2.
  • To mix both displacement maps you can use an average node. Connect the Zbrush map to input0 and the Mari map (multiply node) to input1.
  • The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, and then connect the ramp to the displacement's default input (there is also a scripted sketch at the end of this section).
  • In this render I'm combining both, Zbrush map and Mari map.
  • In this other example I'm going to combine two displacements using a mask. I'll use a Zbrush displacement as the general displacement, and then a mask painted in Mari to reveal another displacement, also painted in Mari.
  • As the mask I'm going to use the same symbol that I used before as the second displacement.
  • And as new displacement I'm going to use a procedural map painted in Mari.
  • The first thing to do is exactly the same operation as before: control the strength of the Mari displacement using a multiply node.
  • Then use another multiply node with the Mari map (multiply) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
  • And the rest is exactly the same as before: connect the Zbrush displacement to input0 of the average node and the Mari displacement (multiply) to input1, then the average node to the ramp's color and the ramp to the displacement's default input.
  • This is the final render.
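
For reference, the whole network can also be wired from Maya's script editor. This is just a minimal Python sketch with hypothetical node and file names; note that instead of the ramp workaround described above it connects the scalar X output of the average node straight to the displacement shader, which Maya accepts as well.

import maya.cmds as cmds

# Zbrush displacement map (32-bit .exr, zero based).
zbrush_file = cmds.shadingNode('file', asTexture=True, name='zbrushDisp')
cmds.setAttr(zbrush_file + '.fileTextureName', 'zbrush_disp.exr', type='string')

# Mari displacement map.
mari_file = cmds.shadingNode('file', asTexture=True, name='mariDisp')
cmds.setAttr(mari_file + '.fileTextureName', 'mari_disp.exr', type='string')

# Multiply node to bring the Mari displacement down to the Zbrush range.
mult = cmds.shadingNode('multiplyDivide', asUtility=True, name='mariStrength')
cmds.connectAttr(mari_file + '.outColor', mult + '.input1')
for axis in 'XYZ':
    cmds.setAttr(mult + '.input2' + axis, 0.1)  # strength, tweak to taste

# Average node to mix both layers.
avg = cmds.shadingNode('plusMinusAverage', asUtility=True, name='dispMix')
cmds.setAttr(avg + '.operation', 3)  # 3 = average
cmds.connectAttr(zbrush_file + '.outColor', avg + '.input3D[0]')
cmds.connectAttr(mult + '.output', avg + '.input3D[1]')

# Displacement shader driven by the mixed result.
disp = cmds.shadingNode('displacementShader', asShader=True, name='dispCombined')
cmds.connectAttr(avg + '.output3Dx', disp + '.displacement')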

RGB masks by Xuan Prada

We use RGB masks all the time in VFX, don't we?
They are very handy and we can save a lot of extra texture maps by combining 4 channels in one single RGB+A texture map.

We use them to mix shaders at the look-dev stage, or as IDs for compositing, or maybe as utility passes for things like motion blur or depth.

Let's see how I use RGB masks in my common software: Maya, Clarisse, Mari and Nuke.

Maya

  • I use a surface shader with a layered texture connected.
  • I connect all the shaders that I need to mix to the layered texture.
  • Then I use a remapColor node, with the RGB mask connected, as the mask for each one of the shaders.

This is the RGB mask that I'm using.

  • We need to indicate which RGB channel we want to use in each remapColor node.
  • Then just use the output as mask for the shaders.

Clarisse

  • In Clarisse I use a reorder node connected to my RGB mask.
  • Just indicate the desired channel in the channel order parameter.
  • To convert the RGB channel to alpha just type it in the channel order field.

Mari

  • You only need a shuffle adjustment layer; just select the required channel.

Nuke

  • You can use a shuffle node and select the channel.
  • Or a keyer node, selecting the channel in the operation parameter (this will place the channel only in the alpha). See the sketch below.
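
A minimal Nuke Python sketch of the shuffle approach, with a hypothetical file name: it routes the red channel of the mask into every output, including the alpha, so it can be used directly as a matte.

import nuke

mask = nuke.nodes.Read(file='rgb_mask_1001.exr')

# Copy the red channel of the mask into rgb and alpha.
shuffle = nuke.nodes.Shuffle(inputs=[mask])
for knob in ('red', 'green', 'blue', 'alpha'):
    shuffle[knob].setValue('red')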

Clarisse UV interpolation by Xuan Prada

When subdividing models in Clarisse for rendering displacement maps, the software subdivides both geometry and UVs. Sometimes we might need to subdivide only the mesh while keeping the UVs as they originally are.

This depends on production requirements and obviously on how the displacement maps were extracted from Zbrush or any other sculpting package.

If you don't need to subdivide the UVs, first of all you should extract the displacement map with the SmoothUV option turned off.
Then in Clarisse, select the UV Interpolation Linear option.

By default Clarisse sets the UVs to Smooth.

You can easily change it to Linear.

Render with smooth UVs.

Render with linear UVs.

IBL and sampling in Clarisse by Xuan Prada

Using IBLs with huge ranges for natural light (sun) is just great. They give you very consistent lighting conditions and the behaviour of the shadows is fantastic.
But sampling those massive values can be a bit tricky sometimes. Your render will have a lot of noise and artifacts, and you will have to deal with tricks like creating cropped versions of the HDRIs or clamping values in Nuke.
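
For reference, the Nuke clamping trick mentioned above is usually just a Clamp node over the HDRI before it reaches the renderer; a minimal sketch, with a hypothetical file name and threshold:

import nuke

hdri = nuke.nodes.Read(file='exterior_sun.exr')

# Clamp the hottest values (typically the sun) to a more manageable maximum.
clamped = nuke.nodes.Clamp(inputs=[hdri], channels='rgb', maximum=50)
nuke.nodes.Write(inputs=[clamped], file='exterior_sun_clamped.exr', file_type='exr')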

Fortunately in Clarisse we can deal with this issue quite easily.
Shading, lighting and anti-aliasing are completely independent in Clarisse. You can tweak one of them without affecting the others, saving a lot of rendering time. In many renderers shading sampling is multiplied by anti-aliasing sampling, which forces users to tweak all the shaders in order to get decent render times.

  • We are going to start with this noisy scene.
  • The first thing you should do is change the Interpolation Mode to MipMapping in the Map File of your HDRI.
  • Then we need to tweak the shading sampling.
  • Go to the raytracer and activate previz mode. This will remove lighting information from the scene. All the noise here comes from the shaders.
  • In this case we get a lot of noise from the sphere. Just go to the sphere's material and increase the reflection quality under sampling.
  • I increased the reflection quality to 10 and can't see any noise in the scene any more. 
  • Select the raytracer again and deactivate the previz mode. All the remaining noise is now coming from lighting.
  • Go to the GI Monte Carlo and disable affect diffuse. Doing this, GI won't affect lighting and we only have direct lighting here. If you see some noise, just increase the sampling of your direct lights.
  • Go to the GI Monte Carlo and re-enable affect diffuse. Increase the quality until the noise disappears.
  • The render is noise free now but it still looks a bit low res, this is because of the anti-aliasing. Go to raytracer and increase the samples. Now the render looks just perfect.
  • Finally there is a global sampling setting that usually you won't have to play with. But just for your information, the shading oversampling set to 100% will multiply the shading rays by the anti-aliasing samples, like most of the render engines out there. This will help to refine the render but rendering times will increase quite a bit.
  • Now if you want quick and dirty results for look-dev or lighting, just play with the image quality. You will not get pristine renders but they will be good enough for establishing looks.

Zbrush displacement in Clarisse by Xuan Prada

This is a very quick guide to setting up Zbrush displacements in Clarisse.
As usual, the most important thing is to extract the displacement map from Zbrush correctly. To do so just check my previous post about this procedure.

Once your displacement maps are exported follow this mini tutorial.

  • In order to keep everything tidy and clean I will put all the stuff related to this tutorial inside a new context called "hand".
  • In this case I imported the base geometry and created a standard shader with a gray color.
  • I'm just using a very simple Image Based Lighting set-up.
  • Then I created a map file and a displacement node. Rename everything to keep it tidy.
  • Select the displacement texture for the hand and set the image to raw/linear (I'm using 32-bit .exr files).
  • In the displacement node set the bounding box to something like 1 to start with.
  • Add the displacement map to the front value, leave the value at 1m (which is not actually 1 metre, it's more like a global unit), and set the front offset to 0.
  • Finally add the displacement node to the geometry.
  • That's it. Render and you will get a nice displacement.

Render with displacement map.

Render without displacement map.

  • If you are still working with 16-bit displacement maps, remember to set the displacement node offset to 0.5 and play with the value until you find the correct behaviour.

Image Based Lighting in Clarisse by Xuan Prada

I've been using Isotropix Clarisse in production for a little while now. Recently the VFX facility where I work announced the adoption of Clarisse as its primary look-dev and lighting tool, so I decided to start talking about this powerful raytracer on my blog.

Today I'm writing about how to set up Image Based Lighting.

  • We can start by creating a new context called ibl. We will put all the elements needed for ibl inside this context.
  • Now we need to create a sphere to use as "world" for the scene.
  • This sphere will be the support for the equirectangular HDRI texture.
  • I just increased the radius a lot. Keep in mind that this sphere will be covering all your assets inside of it.
  • In the image view tab we can see the render in real time.
  • Right now the sphere is lit by the default directional light.
  • Delete that light.
  • Create a new matte material. This material won't be affected by lighting.
  • Assign it to the sphere.
  • Once assigned the sphere will look black.
  • Create an image to load the HDRI texture.
  • Connect the texture to the color input of the matte shader.
  • Select the desired HDRI map in the texture path.
  • Change the projection type to "parametric".
  • HDRI textures are usually 32-bit linear images, so you need to indicate this in the texture properties.
  • I created two spheres to check the lighting. Just press "f" to fit them in the viewport.
  • I also created two standard materials, one for each sphere. I'm creating lighting checkers here.
  • And a plane, just to check the shadows.
  • If I go back to the image view, I can see that the HDRI is already affecting the spheres.
  • Right now, only the secondary rays are being affected, like the reflection.
  • In order to create proper lighting, we need to use a light called "gi_monte_carlo".
  • Right now the noise in the scene is insane. This is because of all the crazy detail in the HDRI map.
  • The first thing to do to reduce noise is to change the interpolation of the texture to MipMapping.
  • To have a noise free image we will have to increase the sampling quality of the "gi_monte_carlo" light.
  • Noise reduction can be also managed with the anti aliasing sampling of the raytracer.
  • The most common approach is to combine raytracer sampling, lighting sampling and shading sampling.
  • Around 8 raytracing samples and something around 12 lighting samples are common settings in production.
  • There is another method to do IBL in Clarisse without the cost of GI.
  • Delete the "gi_monte_carlo" light.
  • Create an "ambient_occlusion" light.
  • Connect the HDRI texture to the color input.
  • In the render only the secondary rays are affected.
  • Select the environment sphere and deactivate the "cast shadows" option.
  • Now everything works fine.
  • To clean the noise increase the sampling of the "ambient_occlusion" light.
  • This is a cheaper IBL method.

Colorway in VFX - chapter 2 by Xuan Prada

A few days ago I did my first tests in Colorway. My idea is to use Colorway as texturing and look-development tool for VFX projects.

I think it can be a really powerful and artist-friendly software to work on different types of assets.
It is also a great tool to present individual assets, because you can do quick and simple post-processing tasks like color correction, lens effects, etc. And of course Colorway allows you to create different variations of the same asset in no time.

With this second test I wanted to create an entire asset for VFX, make different variations and put everything together in a dailies template or similar to showcase the work.

At the end of the day I'm quite happy with the result and the workflow combining Modo, Mari and Colorway. I found some limitations, but I truly believe that Colorway will soon fit my needs as a texture painter and look-dev artist.

Transferring textures

One of the limitations that I found as a texture painter is that Colorway doesn't manage UDIMs yet. I textured this character some time ago at home using Mari, following VFX standards, and of course I'm using UDIMs, around 50 4K UDIMs actually.

I had to create a new UV mapping using the 1001 UDIM only. In order to keep enough texture resolution I divided the asset into different parts: head, both arms, both legs, pelvis and torso.
Then, using the great "transfer" tool in Mari, I baked the high resolution textures based on UDIMs onto the low resolution UVs based on one single UV space. I created one 8K resolution texture for each part of the asset. I'm using only 3 texture channels: Color, Specular and Bump.

Layer Transfer tool in Mari.

All the new textures already baked into the default UV space, 1001.

My lighting setup in Modo couldn't be simpler. I'm just using an equirectangular HDRI map of Beverly Hills; this image is actually shipped with Modo.
Image Based Lighting works great in Modo and it is also very easy to mix different IBLs in the same scene. It just works great.

Shading-wise it is also quite simple: just one shading layer with Color, Specular and Bump maps connected. I'm using one shader for each part of the asset.

The render takes only around 3 minutes on my tiny MacBook Air.
Rendering for Colorway takes more than that but obviously you will save a lot of time later.
Once in Colorway I can easily play with colours and textures. I created a color texture variation in Mari and now in Colorway I can plug it and see the shading changes in no time.

All the different parts exported from Modo are on the left side toolbar.

On the right side all the lights will be available to play with. In this case I only have the IBL.

All the materials are listed on the right side. It is possible to change color, intensity and diffuse textures. This gives you a huge amount of freedom to create different variations of the same asset.

I really like the possibility of using post-processing effects like lens distortion or dispersion. You get quick visual feedback on very common lens effects used on VFX projects.

Finally I created a couple of color variations for this asset.

Notes

A couple of things that I noticed while working on this asset:

  • I had one part of the asset with the normals flipped. I didn't realise this, and when rendering for Colorway, Modo crashed. Once I inverted the normals of that part, it never crashed again.
  • It would be nice to store looks, or to have the option to export looks from one project to another. Let's say that I'm working only on the upper part of the character, render for Colorway and create some nice looks (including effects like lens distortions, color corrections, etc.). It would be great to keep that for the next time I export the whole character to Colorway.

Colorway for Look-Development in VFX by Xuan Prada

A few days ago (or weeks) The Foundry released their latest cool product called "Colorway", and they did it for free.

Colorway is a product created to help designers with their workflow, especially when dealing with color changes, texture updates, lighting, etc.: looks in general.
This software allows us to change those small things once the render is done. We can do it in real time, without waiting long hours to render again, and we can change different things related to shading and lighting.

This is obviously quite an advantage when we are dealing with clients and they ask us for small changes related to color, saturation, brightness, etc. We don't need to render again anymore; just use Colorway to make those changes live in no time.
Even the clients can change some stuff and send us back a file with their changes.

Great idea, great product.

I'm not a designer, I'm a vfx artist doing mainly textures and look-development, and even if Colorway wasn't designed for vfx, it can potentially be used in the vfx industry, at least for some tasks.

There are a few things that I'd like to have inside Colorway in order for it to be a more productive texturing & look-dev tool, but so far it can be used in some ways to create different versions of the same asset.

To test Colorway I used my model of War Machine.

  • Colorway allows us to render an asset using a base shader. Later we can apply different versions of the same textures, or just flat colors.
  • It all begins inside Modo (Cinema4D is on its way).
  • How you organize your asset and shaders inside Modo is very important. If you want to have a lot of control in Colorway you will have to split your scene into different parts.
  • In this example, I separated the head in different parts, so I can select them individually later on in Colorway.
  • Even if I'm using the same shader for the whole head, I made different copies so I can tweak them one by one if I want to have even more control in Colorway.
  • In Modo work on the look as you usually do. Once you are happy with the results export to Colorway.
  • In this case I'm using textures to create the look. Maybe you can do it without textures and apply them later in Colorway. You can also remove all the textures in Colorway and start from scratch there; this is a matter of personal taste.
  • Once happy just click on the Colorway button.
  • You can export all the materials and lights used in the scene or only those selected.
  • Click on the render button and that's it.
  • Once the render is done, just open the file exported from Modo and Colorway should pop up.
  • The workspace is super simple and well organized. There are selection groups and looks on the right and shaders, lighting and effects on the left.
  • Just select one of the parts on your left, one of the shaders on your right, or simply select in the viewport.
  • Automatically the controls for the material will pop up.
  • In the material options you can change the textures used by the shaders, or remove them if you want to start with a flat color.
  • Here I'm changing the textures for just one of the materials, and later for all of them, creating a new version of my asset.
  • As I said before we can remove all the textures and use only the base shaders plus flat colors in order to create a new version of the asset.
  • Finally the versions that I created for this post :)

A few things that I'd like to see in Colorway in future versions in order to have more control and power for look-dev tasks.

  • Right now we can only change RGB textures. It would be nice to have control over secondary maps. Blending textures with masks would also be great.
  • We can't control the shader parameters. Having that control for look-dev would be amazing.
  • UDIM support is a must.
  • Not sure how Colorway manages IBLs. If you are using different lights it seems to be ok, but when using only an IBL it doesn't seem to work quite right.
  • Transparency, glow and other shading options don't work in the current version.

Mari to Modo with just one click by Xuan Prada

UDIM workflow has been around for the last 10 years or so. It became more popular when Mari came out and these days it’s being used by everyone in the vfx industry.

In this blog you can find different ways to setup UDIMs in different software and render engines.
With Modo 801 it has never been so easy, fast and great!
With just one click you are ready to go!

  • Export your textures from Mari. I always use the naming "component_UDIM.exr", e.g. "RGB_1001.exr".
  • Once in Modo, assign a new shader to your asset.
  • Add a new layer with a texture map, as usual. Add layer -> image map -> load udims.
  • Select the UDIM sequence that you exported from Mari.
  • Change the “effect” to point to the desired shader channel.
  • By default Modo enables the option “use clip udim”. You can check this in the “uv” properties. This means that you don’t need to do anything, Modo will handle the UDIM stuff by itself.
  • That’s it, all done :)
  • As an extra, you can go to the image manager, select a single map and check its UDIM coordinate (see the sketch after this list for how the tile number relates to the UVs).
  • Another cool thing is that you can select the whole UDIM sequence in the image manager and change the color space with one single click! This is great if you are working with a linear workflow or another color space.
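
As a reference for the naming convention above, the four-digit UDIM number is just an encoding of the UV tile. A quick Python sketch of the relationship (not tied to Mari or Modo):

def udim_from_uv(u, v):
    """Return the UDIM tile number that contains the UV coordinate (u, v)."""
    # Tiles run 1001..1010 along U, then jump by 10 for every row in V.
    return 1001 + int(u) + 10 * int(v)

print(udim_from_uv(0.5, 0.5))  # 1001, the default tile
print(udim_from_uv(1.2, 0.0))  # 1002
print(udim_from_uv(0.3, 1.7))  # 1011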

Vector displacement in Modo by Xuan Prada

Another quick entry with my tips & tricks for Modo.
This time I’m going to write about setting up Mudbox’s vector displacements in Modo.

  • Check your displacement in Mudbox and clean your layer stack as much as you can. This will make the extraction process faster.
  • The extraction process is very simple. Just select your low and high resolution meshes.
  • Set the vector space to Absolute if your asset is a static element, like props or environments.
  • Set the vector space to Relative if your asset will be deformed. Like characters.
  • Always use 32-bit images.
  • As I said, export the maps as 32-bit EXR.
  • Before moving to Modo or any other 3D package, check your maps in Nuke (see the sketch after this list).
  • Once in Modo, select your asset and go to the geometry options.
  • Check Linear UVs and set the render subdivision level.
  • Assign a new shader to your asset.
  • Add a new texture layer with your vector displacement map.
  • Set it up as the Displacement effect.
  • Set the low and high value to 0 and 100.
  • You will see a displacement preview in viewport.
  • Set the gamma to 1.0. Remember that 32-bit images shouldn't be gamma corrected in a linear workflow.
  • In the shader options set the Displacement Distance to 1m; this should give you the same result as Mudbox.
  • In the render options you can control the displacement rate, which is more or less your displacement quality.
  • 1.0 is fine; play with it. Lower values will give you sharper results but will need more time to render.
  • Finally render a quick test to see if everything looks as expected.
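
A quick way to do the Nuke check mentioned earlier in the list: a minimal sketch with a hypothetical file name that samples the values at the centre of the frame.

import nuke

read = nuke.nodes.Read(file='vector_disp_1001.exr')

# Sample the centre pixel of each channel; undisplaced areas of a Relative
# (tangent space) map should read close to (0, 0, 0).
x, y = read.width() / 2, read.height() / 2
print(read.sample('red', x, y),
      read.sample('green', x, y),
      read.sample('blue', x, y))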

Zbrush displacement in Modo by Xuan Prada

Another of those steps that I need to do when I'm working on any kind of vfx project and that I consider a must.
This is how I set up my Zbrush displacements in Modo.

  • Once you have finished your sculpting work in Zbrush, with all the layers activated go to the lowest subdivision level.
  • Go to the morph target panel, click on StoreMT and import your base geometry. Omit this step if you started your model in Zbrush.
  • Once the morph target is created, you will see it in the viewport. Go back to your sculpted mesh by clicking on the switch button.
  • Export all the displacement maps using the multi map exporter. I would recommend always using 32-bit maps.
  • Check my settings to export the maps. The most important parameters are scale and intensity. Scale should be 1 and intensity will be calculated automatically.
  • Check the maps in Nuke and use the roto paint tool to fix small issues.
  • Once in Modo, import your original asset. Select your asset in the item list, check Linear UVs and set the number of subdivisions that you want to use.
  • Assign a new shader to your asset, add the displacement texture as texture layer and set the effect as displacement.
  • Low value and high value should be set to 0 and 100.
  • In the gamma texture options, set the value to 1.0.
  • We are working in a linear workflow, which means that scalar textures don’t need to be gamma corrected.
  • In the shader options, go to the surface normal options and use 1m as the value for the displacement distance. If you are using 32-bit displacements this should be the standard value.
  • Finally in the render options, play with the displacement rate to increase the quality of your displacement maps.
  • Values between 0.5 and 1 are fine. Lower values are great but take more time to render, so be careful.
  • Render a displacement checker to see if everything works fine.