
Houdini as scene assembler part 03 by Xuan Prada

In this post I will talk about using texture bitmaps and subdivision surfaces.
I have a material network with a couple of shaders, one for the body of this character and another one for the rest. If using Arnold I would have a shop network.

To bring in texture bitmaps I use texture nodes when working with Mantra and image nodes when working with Arnold. The principled shader has tabs with inputs for textures, but I rarely use them; I always create nodes to take care of the texturing. At the end of the day I never use only one texture per channel. More on this in future posts.
In Mantra, textures are multiplied by the albedo color. Be careful with this.

With Mantra the UDIM tag is textureName.%(UDIM)d.exr; with Arnold it is textureName.<UDIM>.exr.
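Just to make the difference concrete, here is a minimal Python sketch, assuming a Mantra texture VOP and an Arnold image node at hypothetical paths; only the UDIM token in the file name changes between the two renderers.

    import hou

    # Mantra texture VOP: %(UDIM)d token.
    mantra_tex = hou.node("/mat/body_material/basecolor_texture")  # hypothetical node
    mantra_tex.parm("map").set("$HIP/tex/textureName.%(UDIM)d.exr")

    # Arnold image node: <UDIM> token.
    arnold_img = hou.node("/mat/body_arnold/basecolor_image")  # hypothetical node
    arnold_img.parm("filename").set("$HIP/tex/textureName.<UDIM>.exr")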

There is a triplanar node that can be used with Arnold and a different one called UV triplanar projection for Mantra. I don’t usually work without UVs, but these nodes can be useful when working with terrains or other large surfaces.

To subdivide geometry at object level you can just go to the Arnold tab and select the type of subdivision and the amount. If you need to subdivide only a few parts of your alembic asset, create an unpack node (transferring attributes and groups) and then a subdivide node. This works with both Mantra and Arnold, although there is a better way of doing this with Arnold. We will talk about it in the future.
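As a rough idea of that unpack + subdivide setup, here is a minimal hou sketch; the asset path, the existing alembic node and the group pattern are all assumptions.

    import hou

    geo = hou.node("/obj/asset")                  # hypothetical geo container
    abc = geo.node("import_cache")                # hypothetical existing alembic SOP

    unpack = geo.createNode("unpack")
    unpack.setFirstInput(abc)
    unpack.parm("transfer_attributes").set("*")   # transfer attributes
    unpack.parm("transfer_groups").set("*")       # transfer groups

    subdivide = geo.createNode("subdivide")
    subdivide.setFirstInput(unpack)
    subdivide.parm("group").set("*head*")         # hypothetical group pattern
    subdivide.parm("iterations").set(2)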

Houdini as scene assembler, part 01 (of many) by Xuan Prada

It’s been a while since I used Houdini at work. The very first time I used Houdini on a show was while working on Happy Feet 2, where it was our main scene assembler. Look-dev, lighting and rendering were all done in Houdini and 3Delight.

After that I never used Houdini again until I was working on Geostorm at Dneg, where most of the shots were managed with Houdini and PRMan. That is all my experience with Houdini in a professional environment. Needless to say, I have only used Houdini for assembly tasks, look-dev, lighting and rendering, nothing like fx or other fancy stuff.

The common thing between the two shows where I used Houdini as assembler is that we had pretty neat tools to take care of most of the steps through the pipeline. Because of that I can barely use Houdini out of the box, so I’m going to try to learn how to use it and share the process here for future reference.

During my time working at facilities like MPC, Dneg or Framestore, I have used different scene assemblers like Katana, Clarisse or other proprietary tools. My goal is to extrapolate my knowledge and experience with those packages to Houdini. I’m pretty sure I’ll be using some tools and techniques in the wrong way, just because Houdini has a different philosophy than other tools, or simply because of my general lack of knowledge about Houdini and proceduralism. But anyway, I’ll try to make it work; if you see anything that I’m doing terribly wrong, please let me know, I’ll be listening.

I’ll be posting about stuff that I’m dealing with in no particular order, but always assembly oriented; do not expect to see here anything related to fx or more “traditional” uses of Houdini. Most of the stuff is going to be very basic, especially at the beginning, but please bear with me, it will get more interesting in the future.

If you are assembling a scene, one of the first steps is to bring in all your assets from other applications. You can of course generate content in Houdini, but usually most of your assets will be created in other packages, Maya being the most common one. So I guess the very first thing you’d have to deal with is how to import alembic caches. If you are working in a vfx facility, the chances of having automated tools to set up your shots for you are pretty high; launching Houdini from a context in a terminal will take care of everything. If you are at home or starting to use Houdini in a vfx boutique, you will have to set up your shots manually. There are clever and easy ways to create Houdini templates for your show/shot, but we will leave that topic for future posts.

To bring in your assets as alembic caches, just create a file node, step inside and replace the existing file node with another file node pointing to your alembic cache, or just use the existing file node and change the path so it reads your alembic cache.
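A minimal Python sketch of the same idea, this time using the Alembic SOP inside a fresh geo container; the cache path is hypothetical, and a plain file node pointing to the .abc works just as well.

    import hou

    obj = hou.node("/obj")
    geo = obj.createNode("geo", "asset_geo")
    abc = geo.createNode("alembic", "import_cache")
    abc.parm("fileName").set("$HIP/cache/asset.abc")  # hypothetical cache path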

If you are look-deving a character, let’s say, it is completely fine to look at the full geometry in the viewport. If you are assembling a big scene like a city or a spaceship, you’d probably want to change your viewport settings to something like bounding boxes. There are better ways of dealing with bounding boxes without loading the geo; more to come soon.
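If you want to flip that setting from Python rather than from the parameter pane, a minimal sketch would look like this; the object path is hypothetical.

    import hou

    asset = hou.node("/obj/city_block_A")   # hypothetical heavy asset
    asset.parm("viewportlod").set("box")    # Display As: Bounding Box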

Assets are usually complex, and we try to keep everything tidy and organised by naming everything properly and structuring groups and hierarchies in a particular way that makes sense for our purposes. The unpack node will allow you to access all the different parts and components of the alembic caches and to perform different operations on them later on. Groups can be selected based on the hierarchies created in Maya or based on wildcards. It is extremely important to use clever naming and to structure groups following a certain logic, to make the assembly process easier and faster.

The blast node also helps you access the information contained in the alembic cache and remove whatever you don’t need for a particular operation. You can also invert the selection to keep the items that you wrote in the group field and get rid of the rest.

The group node is another very useful node to point to different groups in your alembic caches, again based on Maya grouping and wildcards.
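As a small illustration of wildcard selections, here is a hedged sketch using a blast node on unpacked geometry; the node names and the pattern are assumptions.

    import hou

    geo = hou.node("/obj/asset")
    unpack = geo.node("unpack1")                        # hypothetical existing unpack node

    blast = geo.createNode("blast")
    blast.setFirstInput(unpack)
    blast.parm("group").set("*buildingA* *lampPost*")   # wildcard selection
    blast.parm("negate").set(True)                      # keep the selection, delete the rest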

That is it for now in that sense; there are many ways to manipulate alembic caches, but we don’t need to talk about that just yet. In these first posts I will be talking mostly about bringing in assets, working with textures and look-dev. That is the first step for assembling a shot: we need assets ready to travel through the pipeline.

UV mapping is key for us. A lot of tasks performed in Houdini use procedural UVs or no UVs at all, but this is not the case for us; assets always have proper UV mapping. Generally speaking you will do all the UV related tasks in Maya, UV Layout or similar tools. In order to see the UVs in Houdini we need to unpack the alembic cache first; then we will be able to press “5” and look at the UVs.

Use a quick uv shade node to display a checkered texture in the viewport. You can easily change the size of the checker or use a different texture. There is also a group field that you can use for filtering.

Not ideal, but if you are working on extremely simple assets like walls, grounds or maybe terrains, it is totally fine to create the UVs in Houdini. Houdini's UV tools are not the best, but you will find yourself using them at some point. The uv texture node creates basic projections like cylindrical, orthographic, etc.

The uv unwrap node creates automatic UVs based on projection planes.

The uv layout node is a tool for packing your UVs. Using a fixed scale you can distribute the UVs across different UDIMs.

The auto uv node is actually pretty good. It is part of the game development tools shipped with Houdini. You need to activate this package first: just go to the shelf, click on the plus button and look for the game development tools. Then click on the update toolset icon to get the latest version.

The auto uv tool has different methods for UVing and for packing; they are worth trying, and it works really well, especially with messy objects.

The uv transform node deals with anything related to translating, rotating and scaling UVs. You don’t really want to do this in Houdini, but if you have to, this is the tool. I use it a lot if I need to re-distribute UDIM tiles.

An attribute create node (with the following parameters) allows you to create an attribute used to move UVs to a specific UDIM. Then add a uv layout node and set the packing method to UDIM attribute.
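If you prefer to see the underlying maths, here is a hedged Python SOP sketch that shifts 0-1 UVs into a target UDIM tile by hand. It only illustrates the tile offset, not the attribute create + uv layout setup itself, and it assumes a vertex attribute called uv.

    # Python SOP: UDIM 1001 is tile (0,0); each +1 moves one tile in U, each +10 one tile in V.
    import hou

    node = hou.pwd()
    geo = node.geometry()

    target_udim = 1012                       # hypothetical target tile
    u_offset = (target_udim - 1001) % 10
    v_offset = (target_udim - 1001) // 10

    uv_attrib = geo.findVertexAttrib("uv")
    if uv_attrib is not None:
        for prim in geo.prims():
            for vertex in prim.vertices():
                u, v, w = vertex.attribValue(uv_attrib)
                vertex.setAttribValue(uv_attrib, (u % 1.0 + u_offset, v % 1.0 + v_offset, w))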

Mixing displacement and multiple bump maps by Xuan Prada

A very common situation when look-deving an asset is combining various displacement and bump maps. Having them in different texture maps gives you the possibility to play with them and make very fast changes without going back to Mari or Zbrush and wasting a lot of time going back and forth until you reach the right look. You also want to keep your look-dev team busy, of course.

A while ago I showed you how to combine different displacement maps coming from different sources; today I want to show you how to combine multiple bump maps with different scales and values. This is a very common situation in vfx. I would say every single asset has at least one displacement layer and one bump layer, but usually you will have more than one. This is how you can combine multiple bump layers in Maya/Arnold.

  • The first thing I'm going to do is add a displacement layer. To keep this post simple I'm using a single displacement layer. Refer back to the tutorial I mentioned previously in this post to mix more than one displacement layer.
  • Now connect your first bump map layer as usual, plugging the red channel into the bump input of the shader.
  • In the Hypershade, create a file texture for your second bump layer. In this case a low frequency noise.
  • Create an average node and two multiply nodes.
  • Connect the red channel of the first bump layer to the input 1 of the multiply node. Control the intensity of this layer with the input 2 of the multiply node.
  • Repeat the previous step with the second bump layer.
  • Connect the outputs of both multiply nodes to the inputs 3D0 and 3D1 of the average node.
  • It is extremely important to leave the bump depth at 1 in order to make this work (a hedged sketch of this graph follows below).
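For reference, here is a hedged maya.cmds version of that graph; the texture names (bump_high, bump_low) and the shader name (myShader) are assumptions, while the utility nodes are standard Maya ones (multiplyDivide and plusMinusAverage set to average).

    import maya.cmds as cmds

    avg = cmds.shadingNode("plusMinusAverage", asUtility=True)
    cmds.setAttr(avg + ".operation", 3)  # 3 = average

    for i, (tex, strength) in enumerate([("bump_high", 1.0), ("bump_low", 0.2)]):
        mult = cmds.shadingNode("multiplyDivide", asUtility=True)
        cmds.connectAttr(tex + ".outColorR", mult + ".input1X")
        cmds.setAttr(mult + ".input2X", strength)          # per-layer intensity
        cmds.connectAttr(mult + ".outputX", "%s.input3D[%d].input3Dx" % (avg, i))

    # Route the averaged result through a bump2d node, leaving the bump depth at 1.
    bump = cmds.shadingNode("bump2d", asUtility=True)
    cmds.connectAttr(avg + ".output3Dx", bump + ".bumpValue")
    cmds.setAttr(bump + ".bumpDepth", 1.0)
    cmds.connectAttr(bump + ".outNormal", "myShader.normalCamera")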

Clarisse shading layers: Crowd in 5 minutes by Xuan Prada

One feature that I really like in Clarisse is the shading layers. With them you can drive shaders based on naming conventions or the location of assets in the scene. With this method you can assign shaders to a very complex scene structure in no time. In this particular case I'll be showing you how to shade an entire army and create shading/texturing variations in just a few minutes.

I'll be using an alembic cache simulation exported from Maya using Golaem. Usually you will get thousands of objects with different naming conventions, which makes the shading assignment task a bit laborious. With shading layer rules in Clarisse we can speed up this tedious process a lot.

  • Import an alembic cache with the crowd simulation through file -> import -> scene
  • In this scene I have 1518 different objects.
  • I'm going to create an IBL rig with one of my HDRIs to get some decent lighting in the scene.
  • I created a new context called geometry where I placed the army and also created a ground plane.
  • I also created another context called shaders where I'm going to place all my shaders for the soldiers.
  • In the shaders context I created a new material called dummy, just a lambertian grey shader.
  • We are going to be using shading layers to apply shaders globally based on context and naming convention. I created a shading layer called army (new -> shading layer).
  • With the pass (image) selected, select the 3D layer and apply the shading layer.
  • Using the shading layer editor, add a new rule to apply the dummy shader to everything in the scene.
  • I'm going to add a rule for everything called heavyArmor.
  • Then just configure the shader for the heavyArmor with metal properties and its corresponding textures.
  • Create a new rule for the helmets and apply the shader that contains the proper textures for the helmets.
  • I keep adding rules and shaders for the different parts of the soldiers.
  • If I want to create random variation, I can create shading layers for specific part names or, even easier and faster, I can put a few items in a new context and create a new shading rule for them. For the bodies I want to use caucasian and black skin soldiers. I grabbed a few bodies and placed them inside a new context called black, then created a new shading rule that applies a shader with different skin textures to all the bodies in that context.
  • I repeated the same process for the shields and other elements.
  • At the end of the process I can have a very populated army with a lot of random texture variations in just a few minutes.
  • This is what my shading layers look like at the end of the process.

UDIM workflow in Nuke by Xuan Prada

Texture artists, matte painters and environment artists often have to deal with UDIMs in Nuke. This is a very basic template that hopefully can illustrate how we usually handle this situation.

Cons

  • Slower than using Mari. Each UDIM is treated individually.
  • No virtual texturing, slower workflow. Yes, you can use Nuke's proxies but they are not as good as virtual texturing.

Pros

  • Not paint-buffer dependent. Always the best resolution available.
  • Non destructive workflow, nodes!
  • Save around £1,233 on Mari's license.

Workflow

  • I'll be using this simple footage as base for my matte.
  • We need to project this in Nuke and bake it on to different UDIMs to use it later in a 3D package.
  • As geometry support I'm using this plane with 5 UDIMs.
  • In Nuke, import the geometry support and the footage.
  • Create a camera.
  • Connect the camera and footage using a Project 3D node.
  • Disable the crop option of the Project 3D node. Otherwise the projection won't go any further than the 0-1 UV range.
  • Use a UV Tile node to point to the UDIM that you need to work on.
  • Connect the img input of the UV Tile node to the geometry support.
  • Use a UV Project node to connect the camera and the geometry support.
  • Set projection to off.
  • Import the camera of the shot.
  • Look through the camera in the 3D view and the matte should be projected on to the geometry support.
  • Connect a Scanline Render to the UV Project.
  • Set the projection model to UV.
  • In the 2D view you should see the UDIM projection that we set previously.
  • If you need to work with a different UDIM just change the UV Tile.
  • So this is the basic setup. Do whatever you need in between like projections, painting and so on to finish your matte.
  • Then export all your UDIMs individually as texture maps to be used in the 3D software.
  • Here I just rendered the UDIMs extracted from Nuke in Maya/Arnold.

RAW lighting and albedo AOVs in Arnold by Xuan Prada

If you are new to Arnold you are probably looking for RAW lighting and albedo AOVs in the AOV editor. And yes you are right, they are not there. At least when using AiStandard shaders.
The easiest and fastest solution would be to use AlShaders, which include both RAW lighting and albedo AOVs. But if you need to use AiStandard shaders, you can create your own AOVs quite easily.

  • In this capture you can see available AOVs for RAW lighting and albedo for the AlShaders.
  • If you are using AiStandard shaders you won't see those AOVs.
  • If you still want/need to use AiStandard shaders, you will have to render your beauty pass with the standard AOVs and utility passes, and you will have to create the albedo pass by hand. You can easily do this by replacing the AiStandard shaders with Surface shaders.
  • If we have a look at them in Nuke, they will look like this.
  • If we divide the beauty pass by the albedo pass we get the RAW lighting (see the sketch after this list).
  • We can now modify only the lighting without affecting the colour.
  • We can also modify the colour component without modifying the lighting.
  • In this case I'm color correcting and cloning some stuff in the color pass.
  • With a multiply operation I can combine both elements again to obtain the beauty render.
  • If I disable all the modifications to both lighting and colour, I should get exactly the same result as the original beauty pass.
  • Finally I'm adding a ground using my shadow catcher information.
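The divide/multiply relationship is easy to sketch with Nuke's Python API; this minimal example assumes Read nodes named beauty and albedo already exist in the script.

    import nuke

    beauty = nuke.toNode("beauty")
    albedo = nuke.toNode("albedo")

    # A/B with A = beauty and B = albedo gives the RAW lighting.
    raw_lighting = nuke.nodes.Merge2(operation="divide", inputs=[albedo, beauty])

    # Multiplying the RAW lighting back by the albedo rebuilds the original beauty.
    rebuilt_beauty = nuke.nodes.Merge2(operation="multiply", inputs=[albedo, raw_lighting])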

Import UDIMs in Zbrush by Xuan Prada

One of the most common tasks once your colour textures are painted is going to Zbrush or Mudbox to sculpt some heavy details based on what you have painted in your colour textures.
We all use UDIMs of course, but importing UDIMs in Zbrush is not that easy. Let's see how this works.

  • Export all your colour UDIMs out of Mari.
  • Import your 3D asset in Zbrush and go to Polygroups -> UV Groups. This will create polygroups based on UDIMs.
  • With ctrl+shift you can isolate UDIMs.
  • Now you have to import the texture that corresponds to the isolated UDIM.
  • Go to Texture -> Import. Do not forget to flip it vertically.
  • Go to Texture Map and activate the texture.
  • At this point you are only viewing the texture, not applying it.
  • Go to Polypaint, enable Colorize and click on Polypaint from texture.
  • This will apply the texture to the mesh. As it's based on polypaint, the resolution of the texture will be based on the resolution of the mesh. If it doesn't look right, just go and subdivide the mesh.
  • Repeat the same process for all the UDIMs and you'll be ready to start sculpting.

Export from Maya to Mari by Xuan Prada

Yes, I know that Mari 3.x supports OpenSubdiv, but I've had some bad experiences already where Mari creates artefacts on the meshes.
So for now, I will be using the traditional way of exporting subdivided meshes from Maya to Mari. These are the settings that I usually use to avoid distortions, stretching and other common issues.

Combining Zbrush and Mari displacements in Clarisse by Xuan Prada

We all have to work with displacement maps painted in both Zbrush and Mari.
Sometimes we use 32 bits floating point maps, sometimes 16 bits maps, etc. Combining different displacement depths and scales is a common task for a look-dev artist working in the film industry.

Let's see how to setup different displacement maps exported from Zbrush and Mari in Isotropix Clarisse.

  • First of all, have a look at all the individual displacement maps to be used.
  • The first one has been sculpted in Zbrush and exported as a 32-bit .exr displacement map. The non-displacement value is zero.
  • The second one has been painted in Mari and also exported as a 32-bit .exr displacement map. Technically this map is exactly the same as the Zbrush one; the only difference here is the scale.
  • The third displacement map in this exercise also comes from Mari, but in this case it's a 16-bit .tif displacement map, which means that the mid-point will be 0.5 instead of zero.
  • We need to combine all of them in Clarisse and get the expected result.
  • Start creating a displacement node and assigning it to the mesh.
  • We consider the Zbrush displacement our main displacement layer. That said, the displacement node has to be set up like the image below. The offset or non-displacement value has to be zero, and the front value 1. This will give us exactly the same look that we have in Zbrush.
  • In the material editor I'm connecting a multiply node after every single displacement layer. The input 2 is 1,1,1 by default; increasing or reducing this value will control the strength of each displacement layer. It is not necessary to control the intensity of the Zbrush layer unless you want to, but it is necessary to reduce the intensity of the Mari displacement layers, as they are way off compared with the Zbrush intensity.
  • I also added an add node right after the 16-bit Mari displacement, adding -0.5 in order to remap its values to the same zero midpoint as the other 32-bit maps (see the sketch after this list).
  • Finally I used add nodes to mix all the displacement layers.
  • It is a good idea to setup all the layers individually to find the right look.
  • No displacement at all.
  • Zbrush displacement.
  • Mari high frequency detail.
  • Mari low frequency detail.
  • All displacement layers combined.
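To make the remapping explicit, here is a small illustration of the maths only (this is not Clarisse API); the gain values stand in for the multiply nodes' input 2.

    # Illustrative maths: how the three layers end up combined by the add nodes.
    def combine_displacement(zbrush_32, mari_32, mari_16, gain_hi=0.1, gain_lo=0.1):
        mari_16_zero_centred = mari_16 - 0.5      # remap the 0.5 midpoint back to zero
        return zbrush_32 + gain_hi * mari_32 + gain_lo * mari_16_zero_centred

    # A sample where both Mari maps sit at their rest values adds nothing on top of Zbrush.
    print(combine_displacement(0.02, 0.0, 0.5))   # -> 0.02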

Bake from Nuke to UVs by Xuan Prada

  • Export your scene from Maya with the geometry and camera animation.
  • Import the geometry and camera in Nuke.
  • Import the footage that you want to project and connect it to a Project 3D node.


  • Connect the cam input of the Project 3D node to the previously imported camera.
  • Connect the img input of the ReadGeo node to the Project 3D node.
  • Look through the camera and you will see the image projected on to the geometry through the camera.
  • Paint or tweak whatever you need.
  • Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
  • The projection option of the UVProject node should be set to off.


  • Use a ScanlineRender node and connect its obj/scene input to the UVProject.
  • Set the projection mode to UV.
  • If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
  • Finally use a Write node to output your DMP work (see the sketch after this list).
  • Render in Maya as expected.
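A minimal Nuke Python sketch of those last two steps; the ScanlineRender node name and the output path are assumptions.

    import nuke

    scanline = nuke.toNode("ScanlineRender1")         # hypothetical node name
    scanline["projection_mode"].setValue("uv")

    write = nuke.nodes.Write(inputs=[scanline])
    write["file"].setValue("/tmp/dmp_bake_1001.exr")  # hypothetical output path
    write["file_type"].setValue("exr")
    nuke.execute(write, 1, 1)                         # bake a single frame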

Dealing with Ptex displacement by Xuan Prada

Render using Ptex displacement.

What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?

As you probably know, Zbrush doesn't support Ptex. I'm not a super fan of Ptex (but I will be soon), but sometimes I don't have the time, or simply don't want, to do proper UV mapping. So, if Zbrush doesn't export Ptex and my assets don't have any sort of UV coordinates, can't I use Ptex at all for my displacement information?

Yes, you can use Ptex.

Base geometry render. No displacement.

  • In this image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy amount of polygons.
  • Now I have imported the model via obj into Zbrush. Only 500,000 polys, but it looks great.
  • We are going to be using Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production ready model.
  • Using the ZRemesher tool, which is great for some types of retopology tasks, we get this low-res model. Good enough for our purpose here.
  • The next step is exporting both models, high and low resolution, as .obj.
  • We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
  • Once imported keep both of them visible.
  • Export displacement maps. Have a look in the image below at the options you need to tweak.
  • Basically you need to activate Ptex displacement, 32 bits, the texel resolution, etc.
  • And that's it. You should be able to render your Zbrush details using Ptex now.

Combining Zbrush and Mari displacement maps by Xuan Prada

Short and sweet (hopefully).
It seems to be quite a normal topic these days. Mari and Zbrush are commonly used by texture artists. Combining displacement maps in look-dev is a must.

I'll be using Maya and Arnold for this demo but any 3D software and renderer is welcome to use the same workflow.

  • Using Zbrush displacements is a no-brainer. Just export them as 32-bit .exr and that's it. Set your render subdivisions in Arnold and leave the default settings for displacement. The zero value is always 0 and the height should be 1 to match your Zbrush sculpt.
  • These are the maps that I'm using. First the Zbrush map and below the Mari map.
  • No displacement at all in this render. This is just the base geometry.
  • In this render I'm only using the Zbrush displacement.
  • In order to combine Zbrush displacement maps and Mari displacement maps you need to normalise the ranges. If you use the same range your Mari displacement would be huge compared with the Zbrush one.
  • Using a multiply node it is easy to control the strength of the Mari displacement. Connect the map to input1 and play with the values in input2.
  • To mix both displacement maps you can use an average node. Connect the Zbrush map to the input0 and the Mari map (multiply node) to the input1.
  • The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, and then connect the ramp to the displacement default input.
  • In this render I'm combining both, Zbrush map and Mari map.
  • In this other example I'm about to combine two displacements using a mask. I'll be using a Zbrush displacement as general displacement, and then I'm going to use a mask painted in Mari to reveal another displacement painted in Mari as well.
  • As mask I'm going to use the same symbol that I used before as displacement 2.
  • And as new displacement I'm going to use a procedural map painted in Mari.
  • The first thing to do is exactly the same operation that we did before: control the strength of the Mari displacement using a multiply node.
  • Then use another multiply node with the Mari map (multiply) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
  • And the rest is exactly the same as before. Connect the Zbrush displacement to the input0 of the average node and the Mari displacement (multiply) to the input1 of the average node. Then connect the average node to the ramp's color and the ramp to the displacement default input (a hedged node-graph sketch follows this list).
  • This is the final render.
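If you prefer to wire this up with code, here is a hedged maya.cmds sketch of the multiply + average chain. It routes the result through a standard displacementShader node instead of the ramp trick described above, and the node names (disp_zbrush, disp_mari, mySG) are assumptions.

    import maya.cmds as cmds

    # Bring the Mari displacement into the same range as the Zbrush one.
    scale = cmds.shadingNode("multiplyDivide", asUtility=True)
    cmds.connectAttr("disp_mari.outColor", scale + ".input1")
    cmds.setAttr(scale + ".input2X", 0.1)   # hypothetical strength
    cmds.setAttr(scale + ".input2Y", 0.1)
    cmds.setAttr(scale + ".input2Z", 0.1)

    # Average both displacement layers.
    avg = cmds.shadingNode("plusMinusAverage", asUtility=True)
    cmds.setAttr(avg + ".operation", 3)     # average
    cmds.connectAttr("disp_zbrush.outColor", avg + ".input3D[0]")
    cmds.connectAttr(scale + ".output", avg + ".input3D[1]")

    # Feed the result to the shading group through a displacementShader node.
    disp = cmds.shadingNode("displacementShader", asShader=True)
    cmds.connectAttr(avg + ".output3Dx", disp + ".displacement")
    cmds.connectAttr(disp + ".displacement", "mySG.displacementShader")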

RGB masks by Xuan Prada

We use RGB masks all the time in VFX, don't we?
They are very handy and we can save a lot of extra texture maps combining 4 channels in one single texture map RGB+A.

We use them to mix shaders at the look-dev stage, or as IDs for compositing, or maybe as utility passes for things like motion blur or depth.

Let's see how I use RGB masks in my common software: Maya, Clarisse, Mari and Nuke.

Maya

  • I use a surface shader with a layered texture connected.
  • I connect all the shaders that I need to mix to the layered texture.
  • Then I use a remapColor node with the RGB mask connected as mask for each one of the shaders.

This is the RGB mask that I'm using.

  • We need to indicate which RGB channel we want to use in each remapColor node.
  • Then just use the output as a mask for the shaders (see the sketch below).
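Here is a hedged maya.cmds sketch of a simplified variant that plugs the mask channels straight into the layer alphas instead of going through remapColor nodes; the shader and file node names are assumptions.

    import maya.cmds as cmds

    layered = cmds.shadingNode("layeredTexture", asTexture=True)
    shaders = ["shaderA", "shaderB", "shaderC"]   # hypothetical shader names
    channels = ["R", "G", "B"]

    for i, (shader, channel) in enumerate(zip(shaders, channels)):
        cmds.connectAttr(shader + ".outColor", "%s.inputs[%d].color" % (layered, i))
        # One channel of the packed mask drives each layer's alpha.
        cmds.connectAttr("rgbMask.outColor" + channel, "%s.inputs[%d].alpha" % (layered, i))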

Clarisse

  • In Clarisse I use a reorder node connected to my RGB mask.
  • Just indicate the desired channel in the channel order parameter.
  • To convert the RGB channel to alpha just type it in the channel order field.

Mari

  • You will only need a shuffle adjustment layer and select the required channel.

Nuke

  • You can use a shuffle node and select the channel.
  • Or maybe a Keyer node, selecting the channel in the operation parameter (this will place the channel only in the alpha).
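A minimal Nuke Python sketch of the shuffle option, assuming a Read node named rgbMask holding the packed mask.

    import nuke

    mask = nuke.toNode("rgbMask")
    shuffle = nuke.nodes.Shuffle(inputs=[mask])
    shuffle["alpha"].setValue("red")   # RGB stay as they are, red is copied into the alpha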

Clarisse UV interpolation by Xuan Prada

When subdividing models in Clarisse for rendering displacement maps, the software subdivides both geometry and UVs. Sometimes we might need to subdivide only the mesh but keeping the UVs as they are originally.

This depends on production requirements and obviously on how the displacement maps were extracted from Zbrush or any other sculpting package.

If you don't need to subdivide the UVs, first of all you should extract the displacement map with the SmoothUV option turned off.
Then in Clarisse, select the option UV Interpolation Linear.

By default Clarisse sets the UVs to Smooth.

You can easily change it to Linear.

Render with smooth UVs.

Render with linear UVs.