
Houdini as scene assembler part 05. User attributes by Xuan Prada

Sometimes, especially during the layout/set dressing stage, artists have to define certain rules or patterns to compose a shot. Take a football stadium, for example: imagine that the first row of seats is blue, the next row is red and the third row is green.
There are many ways of doing this, but let's say we have thousands of seats and we know which colors they should have. It is then easy to set up rules and patterns that allow total flexibility later on when texturing and look-deving.

In this example I’m using my favourite tool to explain 3D stuff, Lego figurines. I have 4 rows of Lego heads and I want each row to have a different Lego face, but at the same time I want to use the same shader for all of them; I just want different textures. By doing this I will end up with a very simple and tidy setup, and iteration won’t be a pain.

Doing this in Maya is quite straightforward and I explained the process some time ago on this blog. What I want to illustrate now is another common situation that we face in production: layout artists and set dressers usually do their work in Maya and then pass it on to look-dev artists and lighting TDs, who usually use scene assemblers like Katana, Clarisse, Houdini, or Gaffer.

In this example I want to show you how to handle user attributes from Maya in Houdini to create texture and shader variations.

  • In Maya select all the shapes and add a custom attribute (a Python sketch of the Maya steps follows this list).

  • Call it “variation”

  • Data type integer

  • Default value 0

  • Add a different value to each Lego head, as many values as texture variations you need.

  • Export all the Lego heads as Alembic; remember to add the attributes that you want to export to Houdini.

  • Import the alembic file in Houdini

  • Connect all the texture variations to a switch node

  • This can also be done with shaders, following exactly the same workflow.

  • Connect a user data int node to the index input of the switch node and type the name of your attribute.

  • Finally the render comes out as expected without any further tweaks: just one shader that automatically picks up different textures based on the layout artist's criteria.
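
The sketch below shows, in Maya Python, roughly what the Maya steps above describe: tagging shapes with a "variation" integer and exporting them with the attribute included. The group names, frame range and output path are hypothetical, so adapt them to your scene and double-check the AbcExport flags against your Maya version.

```python
# Minimal sketch of the Maya side of this setup (hypothetical group names and paths).
from maya import cmds

# One hypothetical group of Lego heads per texture variation.
rows = ['row_A_GRP', 'row_B_GRP', 'row_C_GRP', 'row_D_GRP']

for variation, row in enumerate(rows):
    shapes = cmds.listRelatives(row, allDescendents=True, type='mesh', fullPath=True) or []
    for shape in shapes:
        if not cmds.attributeQuery('variation', node=shape, exists=True):
            cmds.addAttr(shape, longName='variation', attributeType='long', defaultValue=0)
        cmds.setAttr(shape + '.variation', variation)

# Export with the custom attribute included (-attr) so it travels with the alembic cache.
cmds.AbcExport(j='-frameRange 1 1 -attr variation -uvWrite -worldSpace '
               '-root |lego_heads_GRP -file /projects/lego/cache/lego_heads.abc')
```

On the Houdini side the alembic should come in with "variation" available as an attribute, which is what the user data int node reads to drive the switch.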

Houdini as scene assembler part 03 by Xuan Prada

In this post I will talk about using texture bitmaps and subdivision surfaces.
I have a material network with a couple of shaders, one for the body of this character and another one for the rest. If I were using Arnold I would have a shop network instead.

To bring in texture bitmaps I use texture nodes when working with Mantra and image nodes when working with Arnold. The principled shader has tabs with inputs for textures. I rarely use these; I always create nodes to take care of the texturing, because at the end of the day I never use only one texture per channel. More on this in future posts.
In Mantra, textures are multiplied by the albedo color. Be careful with this.

With Mantra the UDIM tag is textureName.%(UDIM)d.exr, with Arnold it is textureName.<UDIM>.exr

There is a triplanar node that can be used with Arnold and a different one called UV triplanar projection for Mantra. I don’t usually work without UVs, but these nodes can be useful when working with terrains or other large surfaces.

To subdivide geometry at object level, you can just go to the Arnold tab and select the type of subdivision and the amount. If you need to subdivide only a few parts of your alembic asset, create an unpack node (transferring attributes and groups) and then a subdivide node. This works with both Mantra and Arnold, although there is a better way of doing it with Arnold. We will talk about it in the future.
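
If you prefer to wire this up with Python instead of by hand, a rough hou sketch of the unpack + subdivide chain could look like the one below. The node and parameter names (unpack, subdivide, transfer_attributes, iterations) are from memory and may differ slightly between Houdini versions, and the asset paths are hypothetical.

```python
# Rough sketch: unpack part of an alembic asset and subdivide only that part.
import hou

geo = hou.node('/obj/lego_head')             # hypothetical geometry object
abc = geo.node('lego_head_abc')              # hypothetical alembic SOP already in the network

unpack = geo.createNode('unpack', 'unpack_body')
unpack.setFirstInput(abc)
unpack.parm('transfer_attributes').set('*')  # keep attributes through the unpack
unpack.parm('transfer_groups').set('*')      # keep groups so parts can be isolated

subdiv = geo.createNode('subdivide', 'subdivide_body')
subdiv.setFirstInput(unpack)
subdiv.parm('iterations').set(2)             # subdivision depth

subdiv.setDisplayFlag(True)
subdiv.setRenderFlag(True)
```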

Houdini as scene assembler part 02 by Xuan Prada

In the previous post I showed you how to load alembic caches using the file node and then change the viewport visualization to bounding box. This is good enough if you are, let's say, look-deving a character. If you want to load a heavy alembic cache, like a very detailed city with a lot of buildings, or a huge spaceship, you might want to use a different approach.

Instead of using a file node, it is better to use the alembic node to load your assets, then set the option Load As: alembic delayed load primitives, and display as bounding box. This doesn't actually load the geometry into memory and it will be way more efficient down the line.

In this post I'm just talking about shading assignment in Houdini.
The easiest and simplest way to assign shaders is to select the asset node in the /obj context and assign a shader in the render tab. Your Mantra shaders should be placed in the /mat context and your Arnold shaders in the /shop context, as /mat is not fully supported yet.

In the /mat context you can just go and create a Mantra Principled Shader. For Arnold, it is better to create an Arnold shader network and then connect any Arnold shader inside it to the surface input.

Houdini doesn't have an isolation mode for shading components like Maya (as far as I know) but you can drag and drop shaders and textures onto the viewport or IPR while look-deving. This only works in the /mat context (again, as far as I know).

Another way of assigning shaders is creating material nodes inside the alembic node. These material nodes can be assigned to different parts of your asset using wildcards. To assign multiple materials you can create different tabs in the material node or you can just concatenate material nodes (which I prefer). This technique works with both Mantra and Arnold.
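
As a sketch of the wildcard idea, the snippet below builds a material SOP with two entries inside the alembic object using hou Python. The multiparm names (num_materials, group1, shop_materialpath1, ...) are taken from the Material SOP as I remember it, and the node and shader paths are hypothetical.

```python
# Sketch of wildcard-based material assignment inside the alembic object.
import hou

geo = hou.node('/obj/lego_figurine')        # hypothetical alembic object
upstream = geo.node('unpack1')              # hypothetical upstream SOP

mat = geo.createNode('material', 'assign_materials')
mat.setFirstInput(upstream)
mat.parm('num_materials').set(2)

mat.parm('group1').set('*head*')            # wildcard matching the head primitives
mat.parm('shop_materialpath1').set('/mat/head_shader')

mat.parm('group2').set('*body*')
mat.parm('shop_materialpath2').set('/mat/body_shader')

mat.setDisplayFlag(True)
mat.setRenderFlag(True)
```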

You will find yourself most of the time creating material networks (Mantra) or shop networks (Arnold) containing all the shaders of your asset. In a lighting shot you will end up with different subnetworks for each asset in the shot.
These subnetworks of shaders can be placed at the /obj level or inside the alembics containing your assets.

Another clever way of assigning shaders is using the data tree -> object appearance. This only works at object level. If you want to go deeper into your alembic asset, you first need to add a node called packed edit. Then in the data tree you will have access to all the different parts of your asset.

There is another way of controlling looks in Houdini, and that is using the material style sheets. We will cover this tool in future posts.

Introduction to Gaffer by Xuan Prada

From Gaffer HQ: Gaffer is a free, open-source, node-based VFX application that enables look developers, lighters, and compositors to easily build, tweak, iterate, and render scenes. Built with flexibility in mind, Gaffer supports in-application scripting in Python and OSL, so VFX artists and technical directors can design shaders, automate processes, and build production workflows.

With hooks in both C++ and Python, Gaffer's readily extensible API provides both professional studios and enthusiasts with the tools to add their own custom modules, nodes, and UI.

The workhorse of the production pipeline at Image Engine Design Inc., Gaffer has been used to build award-winning VFX for shows such as Jurassic World: Fallen Kingdom, Lost in Space, Logan, and Game of Thrones.

Houdini as scene assembler, part 01 (of many) by Xuan Prada

It’s been a while since I used Houdini at work. The very first time I used Houdini on a show was while working on Happy Feet 2, where it was our main scene assembler: look-dev, lighting and rendering were all done in Houdini and 3Delight.

After that I didn't use Houdini again until I was working on Geostorm at Dneg, where most of the shots were managed with Houdini and PrMan. That is all my experience with Houdini in a professional environment. Needless to say, I have only used Houdini for assembly tasks, look-dev, lighting and rendering, nothing like fx or other fancy stuff.

The common thing between the two shows where I used Houdini as an assembler is that we had pretty neat tools to take care of most of the steps through the pipeline. Because of that I can barely use Houdini out of the box, so I'm going to try to learn how to use it and share it here for future reference.

During my time working at facilities like MPC, Dneg or Framestore, I have used different scene assemblers like Katana, Clarisse or other proprietary tools. My goal is to extrapolate my knowledge and experience with those packages to Houdini. I'm pretty sure that I'll be using tools and techniques in the wrong way, just because Houdini has a different philosophy than other tools, or simply because of my general lack of knowledge about Houdini and proceduralism. But anyway, I'll try to make it work; if you see anything that I'm doing terribly wrong, please let me know, I'll be listening.

I’ll be posting about stuff that I’m dealing with in no particular order but always assembly oriented; do not expect to see here anything related to fx or more “traditional” uses of Houdini. Most of the stuff is going to be very basic, especially at the beginning, but please bear with me, it will get more interesting in the future.

If you are assembling a scene, one of the first steps is to bring in all your assets from other applications. You can of course generate content in Houdini, but usually most of your assets will be created in other packages, Maya being the most common one. So I guess the very first thing you'll have to deal with is how to import alembic caches. If you are working at a vfx facility, chances are pretty high that you have automated tools to set up your shots for you; launching Houdini from a context in a terminal will take care of everything. If you are at home or starting to use Houdini in a vfx boutique, you will have to set up your shots manually. There are clever and easy ways to create Houdini templates for your show/shot, but we will leave this topic for future posts.

To bring in your assets as alembic caches just create a file node, step inside and replace the existing file node with another one pointing to your alembic cache, or just use the existing file node and change the path to read your alembic cache.
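
For reference, here is a small hou Python sketch of both options, using a hypothetical cache path; the parameter names (file, fileName) are the ones I remember from the file and alembic SOPs.

```python
# Quick sketch of bringing an alembic cache into a fresh geometry object.
import hou

geo = hou.node('/obj').createNode('geo', 'city_asset')

# Option A: a plain file SOP pointing at the cache.
file_sop = geo.createNode('file', 'city_file')
file_sop.parm('file').set('/projects/city/cache/city_layout.abc')

# Option B: an alembic SOP, which exposes the alembic-specific options
# (delayed load primitives, hierarchy filtering, etc.) mentioned earlier.
abc_sop = geo.createNode('alembic', 'city_abc')
abc_sop.parm('fileName').set('/projects/city/cache/city_layout.abc')

abc_sop.setDisplayFlag(True)
abc_sop.setRenderFlag(True)
```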

If you are look-deving a character, let's say, it is completely fine to look at the full geometry in the viewport. If you are assembling a big scene like a city or a spaceship, you'll probably want to change your viewport settings to something like bounding boxes. There are better ways of dealing with bounding boxes without loading the geo, more to come soon.

Assets are usually complex and we try to keep everything tidy and organised by naming everything properly and structuring groups and hierarchies in a particular way that makes sense for our purposes. The unpack node will allow you to access all the different parts and components of the alembic caches and to perform different operations later on. The groups can be selected based on the hierarchies created in Maya or based on wildcards. It is extremely important to use clever naming and to structure groups following a certain logic, to make the assembly process easier and faster.

The blast node will also help you access the information contained in the alembic cache and remove whatever you don't need for a particular operation. You can also invert the selection to keep the items that you wrote in the group field and get rid of the rest.

The group node is another very useful node to point to different groups in your alembic caches. Again based on Maya grouping and wildcards.

That is it for now in that sense; there are many ways to manipulate alembic caches but we don't need to talk about that just yet. In these first posts I will be talking mostly about bringing in assets, working with textures and look-dev. That is the first step for assembling a shot: we need assets ready to travel through the pipeline.

UV mapping is key for us. A lot of tasks performed in Houdini use procedural UVs or no UVs at all, but this is not the case for us: assets always have proper UV mapping. Generally speaking you will do all the UV related tasks in Maya, UV Layout or similar tools. In order to see the UVs in Houdini we need to unpack the alembic cache first; then we will be able to press “5” and look at the UVs.

Use a quick uv shade node to display a checkered texture in the viewport. You can easily change the size of the checker or use a different texture. There is also a group field that you can use for filtering.
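
A tiny hou Python sketch of that preview setup, assuming a hypothetical asset; the uvquickshade type name is from memory, so check the tab menu if it differs in your build.

```python
# Sketch: unpack the alembic and preview the UVs with a checker texture.
import hou

geo = hou.node('/obj/lego_head')                  # hypothetical asset
unpack = geo.createNode('unpack', 'unpack_for_uvs')
unpack.setFirstInput(geo.node('lego_head_abc'))   # hypothetical alembic SOP

checker = geo.createNode('uvquickshade', 'uv_preview')
checker.setFirstInput(unpack)
checker.setDisplayFlag(True)                      # press "5" to inspect the UV layout
```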

Not ideal, but if you are working on extremely simple assets like walls, grounds, maybe terrains, it is totally fine to create the UVs in Houdini. Houdini's UV tools are not the best but you will find yourself using them at some point. The uv texture node creates basic projections like cylindrical, orthographic, etc.

The uv unwrap node creates automatic UVs based on projection planes.

The uv layout node is a tool for packing your UVs. Using a fixed scale you can distribute the UVs across different UDIMs.

The auto uv node is actually pretty good. It is part of the game development tools shipped with Houdini. You need to activate this package first: just go to the shelf, click on the plus button and look for the game development tools. Then click on the update toolset icon to get the latest version.

The auto uv tool has different methods for UVing and for packing; they are worth trying, and it works really well, especially with messy objects.

The uv transform node deals with anything related to translating, rotating and scaling UVs. You don’t really want to do this here in Houdini, but if you have to, this is the tool. I use it a lot if I need to re-distribute UDIM tiles.

An attribute create node (with the following parameters) allows you to create an attribute to move UVs to a specific UDIM. Then add a uv layout node and set the packing method to UDIM attribute.

On-set tips: Creating high frequency detail by Xuan Prada

In a previous post I mentioned the importance of having high frequency detail when scanning assets on-set. Sometimes if we don't have that detail we can just create it. Actually, sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.

If we are dealing with assets that are being used on set but won't appear in the final edit, it is probable that those assets are not painted at all. There is no need to spend resources on that, right? But we might still need to scan those assets to create a virtual asset that will ultimately be used on screen.

As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create the high frequency detail ourselves.

Let's say we need to create a virtual asset of this physical mask. It is completely plain and white; we don't see much detail on its surface. We can create high frequency detail just by painting some dots or placing small stickers across the surface.

In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.

I created this support a while ago to scan fruits and other organic assets.

The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing against gray than against white.

Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.

Once the material has been processed you should get a pretty decent scan. Probably an impossible task without creating all the high frequency detail first.

Export from Maya to Mari by Xuan Prada

Yes, I know that Mari 3.x supports OpenSubdiv, but I've already had some bad experiences where Mari creates artefacts on the meshes.
So for now, I will be using the traditional way of exporting subdivided meshes from Maya to Mari. These are the settings that I usually use to avoid distortions, stretching and other common issues.

STmaps by Xuan Prada

One of the first treatments that you will have to do to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. When working with different software like 3DEqualizer, Nuke, Flame, etc., having a common and standard way to copy lens information seems like a good idea. UV maps are probably the easiest way to do this, as they are plain 32 bit exr images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the uvs in the viewport.
  • Write the image as a 32 bit .exr.
  • This will output the uv information and can be read in any software.
  • To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the uv map (see the sketch below).
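
A minimal Nuke Python sketch of that STMap hookup, with hypothetical file paths; check the input labels in the DAG if your Nuke version orders them differently.

```python
# Apply the lens information stored in a uv map to a plate.
import nuke

plate = nuke.nodes.Read(file='/shots/sh010/plate/sh010_plate.%04d.exr')
uv_map = nuke.nodes.Read(file='/shots/sh010/lens/sh010_distortion_uv.exr')

stmap = nuke.nodes.STMap(uv='rgb')   # warp the plate using the rgb channels of the uv map
stmap.setInput(0, plate)             # source footage
stmap.setInput(1, uv_map)            # uv/st map
```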

Bake from Nuke to UVs by Xuan Prada

  • Export your scene from Maya with the geometry and camera animation.
  • Import the geometry and camera in Nuke (a Python sketch of the full node graph follows this list).
  • Import the footage that you want to project and connect it to a Project 3D node.

 

  • Connect the cam input of the Project 3D node to the previously imported camera.
  • Connect the img input of the ReadGeo node to the Project 3D node.
  • Look through the camera and you will see the image projected on to the geometry through the camera.
  • Paint or tweak whatever you need.
  • Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
  • The projection option of the UVProject node should be set to off.

 

  • Use a ScanlineRender node and connect its obj/scene input to the UVProject.
  • Set the projection mode to UV.
  • If you swap from the 3D view to the 2D view you will see your paint work projected on to the geometry uvs.
  • Finally use a write node to output your DMP work.
  • Render in Maya as expected.
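
Below is a rough Nuke Python sketch of the graph described in this list. Node class names (Camera2, ReadGeo2, Project3D) and input indices vary between Nuke versions, so treat it as a guide and verify the connections against the input labels in the DAG; all file paths are hypothetical.

```python
# Project footage through the shot camera and bake it into the geometry's UVs.
import nuke

cam = nuke.nodes.Camera2(read_from_file=True, file='/shots/sh010/cam/sh010_cam.abc')
geo = nuke.nodes.ReadGeo2(file='/shots/sh010/geo/sh010_hero.abc')
plate = nuke.nodes.Read(file='/shots/sh010/plate/sh010_plate.%04d.exr')

proj = nuke.nodes.Project3D()
proj.setInput(0, plate)                  # img input: the footage to project
proj.setInput(1, cam)                    # cam input: the shot camera
geo.setInput(0, proj)                    # feed the projection into the geo's img input

uvp = nuke.nodes.UVProject(projection='off')
uvp.setInput(0, geo)                     # geometry stream
uvp.setInput(1, cam)                     # axis/cam input

scanline = nuke.nodes.ScanlineRender(projection_mode='uv')
scanline.setInput(1, uvp)                # obj/scn input
scanline.setInput(2, cam)                # cam input

bake = nuke.nodes.Write(file='/shots/sh010/dmp/sh010_bake_uv.%04d.exr')
bake.setInput(0, scanline)
```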

Rendering Maya particles in Clarisse by Xuan Prada

This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you check the tag "Clarisse" you will find all the previous posts. I hope to publish more soon.

In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons very fast and nicely.

  • Once your particle system has been simulated in Maya, export it via Alembic, one of the standard 3D formats for exchanging information in VFX.
  • Create an IBL rig in Clarisse. In a previous post I explained how to do it; it is quite simple.
  • With Clarisse 2.0 it is so simple to do, just one click and you are ready to go.
  • Go to File -> Import -> Scene and select the Alembic file exported from Maya.
  • It comes with 2 types of particles, a grid acting as ground and the render camera.
  • Create a few contexts to keep everything tidy. Geo, particles, cameras and materials.
  • In the geo context I imported the toy_man and the toy_truck models (.obj) and moved the grid from the main context to the geo context.
  • Moved the 2 particle systems and the camera to their corresponding contexts.
  • In the materials context I created 2 materials and 2 color textures for the models. Very simple shaders and textures.
  • In the particles context I created a new scatterer called scatterer_typeA.
  • In the geometry support of the scatterer add the particles_typeA and in the geometry section add the toy_man model.
  • I’m also adding some variation to the rotation.
  • If I move my timeline I will see the particle animation using the toy_man model.
  • Do not forget to assign the material created before.
  • Create another scatterer for the particles_typeB and configure the geometry support and the geometry to be used.
  • Add also some rotation and position variation.
  • As these models are quite big compared with the toy figurine, I’m offsetting the particle effect to reduce the presence of toy_trucks in the scene.
  • Before rendering, I’d like to add some motion blur to the scene. Go to raytracer -> Motion Blur -> 3D motion blur. Now you are ready to render the whole animation.

Meshlab align to ground by Xuan Prada

If you deal a lot with 3D scans, Lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" software is great at managing 75 million polygon Lidars and other complex meshes. Experienced Photoscan users usually play with the align to ground tool to establish the correct axis for their resulting meshes.

If you look for this option in Meshlab you won't find it, at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.

  • Import your Lidar or photogrammetry, and also import a ground plane exported from Maya. This is going to be your floor, ground or base axis.
  • This is a very simple example. The goal is to align the sneaker to the ground. I would like to deal with such simple lidars at work :)
  • Click on the align icon.
  • In the align tool window, select the ground object and click on glue here mesh.
  • Notice the star that appears before the name of the object indicating that the mesh has been selected as base.
  • Select the lidar, photogrammetry or whatever geometry needs to be aligned and click on point based glueing.
  • In this little window you can see both objects. Feel free to navigate around; it behaves like a normal viewport.
  • Select one point at the base of the lidar by double clicking on top of it. Then do the same on one point of the base geo.
  • Repeat the same process. You'll need at least 4 points.
  • Done :)

Dealing with Ptex displacement by Xuan Prada

Render using Ptex displacement.

What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?

As you probably know, Zbrush doesn't support Ptex. I'm not a super fan of Ptex (but I will be soon), but sometimes I don't have time or simply don't want to do proper UV mapping. So, if Zbrush doesn't export Ptex and my assets don't have any sort of UV coordinates, can't I use Ptex at all for my displacement information?

Yes, you can use Ptex.

Base geometry render. No displacement.

  • In the image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy amount of polygons.
  • Now I have imported the model via obj into Zbrush. Only 500,000 polys but it looks great though.
  • We are going to be using Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production ready model.
  • Using the Zremesher tool which is great for some type of retopology tasks, we get this low res model. Good enough for our purpose here.
  • The next step would be exporting both models, high and low resolution, as .obj
  • We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
  • Once imported keep both of them visible.
  • Export displacement maps. Have a look in the image below at the options you need to tweak.
  • Basically you need to activate Ptex displacement, 32 bits, the texel resolution, etc.
  • And that's it. You should be able to render your Zbrush details using Ptex now.

Combining Zbrush and Mari displacement maps by Xuan Prada

Short and sweet (hopefully).
It seems to be quite a normal topic these days. Mari and Zbrush are commonly used by texture artists. Combining displacement maps in look-dev is a must.

I'll be using Maya and Arnold for this demo but any 3D software and renderer is welcome to use the same workflow.

  • Using Zbrush displacements is a no-brainer. Just export them as 32 bit .exr and that's it. Set your render subdivisions in Arnold and leave the default settings for displacement. Zero value is always 0 and height should be 1 to match your Zbrush sculpt.
  • These are the maps that I'm using. First the Zbrush map and below the Mari map.
  • No displacement at all in this render. This is just the base geometry.
  • In this render I'm only using the Zbrush displacement.
  • In order to combine Zbrush displacement maps and Mari displacement maps you need to normalise the ranges. If you use the same range your Mari displacement would be huge compared with the Zbrush one.
  • Using a multiply node it is easy to control the strength of the Mari displacement. Connect the map to input1 and play with the values in input2.
  • To mix both displacement maps you can use an average node. Connect the Zbrush map to the input0 and the Mari map (multiply node) to the input1.
  • The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, and then connect the ramp to the displacement default input (see the sketch after this list).
  • In this render I'm combining both, Zbrush map and Mari map.
  • In this other example I'm about to combine two displacements using a mask. I'll be using a Zbrush displacement as general displacement, and then I'm going to use a mask painted in Mari to reveal another displacement painted in Mari as well.
  • As a mask I'm going to use the same symbol that I used before as displacement 2.
  • And as new displacement I'm going to use a procedural map painted in Mari.
  • The first thing to do is exactly the same operation we did before: control the strength of the Mari displacement using a multiply node.
  • Then use another multiply node with the Mari map (multiply) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
  • And the rest is exactly the same as before. Connect the Zbrush displacement to input0 of the average node and the Mari displacement (multiply) to input1 of the average node. Then connect the average node to the ramp's color and the ramp to the displacement default input.
  • This is the final render.
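
As a sketch of the mixing logic above, here is a Maya Python version that uses the standard multiplyDivide and plusMinusAverage utility nodes as stand-ins for the multiply/average nodes from the post; texture paths and the shading group name are hypothetical.

```python
# Combine a Zbrush displacement and a scaled Mari displacement into one signal.
from maya import cmds

zbrush_map = cmds.shadingNode('file', asTexture=True, name='zbrushDisp_tex')
cmds.setAttr(zbrush_map + '.fileTextureName', '/textures/asset_zbrush_disp.exr', type='string')

mari_map = cmds.shadingNode('file', asTexture=True, name='mariDisp_tex')
cmds.setAttr(mari_map + '.fileTextureName', '/textures/asset_mari_disp.exr', type='string')

# Scale the Mari displacement so it sits in a sensible range next to the Zbrush one.
scale = cmds.shadingNode('multiplyDivide', asUtility=True, name='mariDisp_scale')
cmds.connectAttr(mari_map + '.outColor', scale + '.input1')
cmds.setAttr(scale + '.input2', 0.1, 0.1, 0.1, type='double3')

# Average the two displacement signals.
average = cmds.shadingNode('plusMinusAverage', asUtility=True, name='disp_average')
cmds.setAttr(average + '.operation', 3)  # 3 = average
cmds.connectAttr(zbrush_map + '.outColor', average + '.input3D[0]')
cmds.connectAttr(scale + '.output', average + '.input3D[1]')

# Feed one channel of the result into a displacement shader and the shading group.
disp = cmds.shadingNode('displacementShader', asShader=True, name='combined_disp')
cmds.connectAttr(average + '.output3Dx', disp + '.displacement')
cmds.connectAttr(disp + '.displacement', 'asset_SG.displacementShader', force=True)
```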

VFX footage input/output by Xuan Prada

This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.

Shooting camera to Lab
The RAW material recorded on-set goes to the lab. In the lab it is converted to .dpx, which is the standard film format. Sometimes they might use exr but it's not that common.
A lot of movies are still being filmed with film cameras, in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.

Shooting camera to Dailies
The RAW material recorded on-set goes to dailies. The cinematographer, DP, or DI applies a primary LUT or color grading to be used throughout the project.
Original scans with LUT applied are converted to low quality scans and .mov files are generated for distribution.

Dailies to Editorial
The editorial department receives the low quality scans (Quicktimes) with the LUT applied.
They use these files to make the initial cuts and bidding.

Editorial to VFX
VFX facilities receive the low quality scans (Quicktimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for color grading.

Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material and the LUT needs to be applied.
The VFX facility will have to apply the film LUT to the work they have done from scratch.
When the VFX work is done, the VFX facility renders out exr files.

VFX to DI
DI will do the final grading to match the Editorial Quicktimes.

VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.


The basic practical workflow would be:

  • Read raw scan data.
  • Read Quicktime scan data.
  • Dpx scans usually are in LOG color space.
  • Exr scans usually are in LIN color space.
  • Apply LUT and other color grading to the RAW scans to match the Quicktime scans.
  • Render out to Editorial using the same color space used for bringing in footage.
  • Render out Quicktimes using the same color space used for viewing. If watching for example in sRGB you will have to bake in the LUT (see the sketch after this list).
  • Good Quicktime settings: colorspace sRGB, codec Avid DNxHD, 23.98 fps, depth millions of colors, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit.
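
A hedged Nuke Python sketch of this in/out handling is below. The colorspace names come from Nuke's default (non-OCIO) LUT settings and will differ under OCIO configs; all paths are hypothetical.

```python
# Read log dpx scans, keep an sRGB editorial reference, and write linear exr + sRGB quicktime.
import nuke

scan = nuke.nodes.Read(file='/shots/sh010/scan/sh010_scan.%04d.dpx')
scan['colorspace'].setValue('Cineon')      # dpx scans usually come in as log

ref = nuke.nodes.Read(file='/shots/sh010/editorial/sh010_edit_ref.mov')
ref['colorspace'].setValue('sRGB')         # editorial quicktime with the LUT baked in

exr_out = nuke.nodes.Write(file='/shots/sh010/comp/sh010_comp_v001.%04d.exr')
exr_out['colorspace'].setValue('linear')   # deliver back to the pipeline the way it came in
exr_out.setInput(0, scan)

mov_out = nuke.nodes.Write(file='/shots/sh010/mov/sh010_comp_v001.mov', file_type='mov')
mov_out['colorspace'].setValue('sRGB')     # review quicktime with the viewing transform baked in
mov_out.setInput(0, scan)
```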

Introduction to scatterers in Clarisse by Xuan Prada

Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes: just feed them with a bunch of different trees to create a forest in 2 minutes, add some nice lighting and render at insane resolution. Then use all the 3D material with all the needed AOVs in Nuke and you'll have full control to create stunning matte paintings.

To make this demo a bit more fun, instead of trees I'm using cool Lego pieces :)

  • Create a context called obj and import the grid.obj and the toy_man.obj
  • Create another context called shaders and create generic shaders for the objs.
  • Also create two textures and load the images from the hard drive.
  • Assign the textures to the diffuse input of each shader and then assign each shader to the corresponding obj.
  • Set the camera to see the Lego logo.
  • Create a new context called crowd, and inside of it create a point cloud and a scatterer.
  • In the point cloud set the parent to be the grid.
  • In the scatterer set the parent to be the grid as well.
  • In the scatterer set the point cloud as geometry support.
  • In the geometry section of the scatterer add the toy_man.
  • Go back to the point cloud and in the scattering geometry add the grid.
  • Now play with the density. In this case I’m using a value of 0.7

  • As you can see all the toy_men start to populate the image.

  • In the decimate texture add the Lego logo. Now the toy_men stick to the Logo.
  • Add some variation in the scatterer position and rotation.
  • That’s it. Did you realise how easy it was to set up this cool effect? And did you check the polycount? 108.5 million :)
  • In order to make this look a little bit better, we can remove the default lighting and do some quick IBL setup.

Final render.