
Dummy crowds in Houdini Solaris by Xuan Prada

Quick video showing how to use one of my oldest tricks to create “dummy crowds”, but this time in Solaris, which makes this even easier and faster than using SOPs or MOPs.

I’ve used a flavour of this dummy crowds technique in many shots for a bunch of movies. It is very limited in so many ways, but when it works it works. You can populate your environments in minutes!

Please consider subscribing to my Patreon so I can keep making professional VFX training. Thanks. www.patreon.com/elephantvfx

Houdini Solaris and Katana. Custom attributes by Xuan Prada

This is a quick video showcasing how to use custom attributes in Houdini Solaris for USD scattering systems. The USD layer is then exported to Katana to do procedural look-dev using those custom attributes.
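If you want to experiment with the idea on your own, here is a minimal sketch (not the setup from the video) of authoring a custom attribute as a USD primvar with the pxr Python API, so a downstream app like Katana can pick it up for look-dev. The stage path, prim path and attribute name are just examples.

```python
# Minimal sketch: author a custom constant primvar on a USD prim so it can
# drive texture/shader variations downstream. Names are hypothetical.
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("scatter_example.usda")
prim = UsdGeom.Xform.Define(stage, "/scatter/instance_001").GetPrim()

primvars = UsdGeom.PrimvarsAPI(prim)
variation = primvars.CreatePrimvar("variation",
                                   Sdf.ValueTypeNames.Int,
                                   UsdGeom.Tokens.constant)
variation.Set(3)   # e.g. a per-instance variation index

stage.GetRootLayer().Save()
```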

This technique will be explored in depth in my upcoming video about Houdini Solaris and Katana interoperability.

Subscribe to my Patreon to have full access to my entire library of visual effects training.
www.patreon.com/elephantvfx

Mix 04 by Xuan Prada

Hello patrons,

First video of 2022 will be a mix of topics.

The first part of the video will be dedicated to face building and face tracking in Nuke. Using these tools and techniques will allow us to generate 3D heads and faces from only a few photos with the help of AI. Once we have the 3D model, we should be able to track and matchmove a shot to do a full head replacement or to extend/enhance some facial features.

In the second part of the video I will show you a technique that I used while working on Happy Feet to generate footprints and foot trails. It is a pretty neat technique that relies on transferring information between surfaces instead of going all in on complex simulations.
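To give you a rough idea of the general approach (this is not the exact Happy Feet setup from the video), here is a small Houdini Python sketch that transfers an attribute from animated feet onto the ground, where it could drive prints and trails. Node and attribute names are made up for the example, and parameter names may vary by Houdini version.

```python
import hou

geo = hou.node("/obj").createNode("geo", "footprints_demo")
ground = geo.createNode("grid", "ground")
feet = geo.createNode("file", "feet_cache")        # animated feet geometry (hypothetical cache)

imprint = geo.createNode("attribwrangle", "mark_contact")
imprint.setInput(0, feet)
imprint.parm("snippet").set("f@imprint = 1.0;")    # flag the contact geometry

transfer = geo.createNode("attribtransfer", "transfer_imprint")
transfer.setInput(0, ground)   # destination surface
transfer.setInput(1, imprint)  # source surface
transfer.parm("pointattriblist").set("imprint")    # parm name may vary by Houdini version

transfer.setDisplayFlag(True)
```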

This is a three-and-a-half-hour video, so grab yourself a cup of coffee and enjoy!
All the information on my Patreon channel.

As always, thanks for your support!

Xuan.

Houdini scatterers 2/2 by Xuan Prada

Hello patrons,

This is the second (and last) part of Houdini scatterers. We are going to take the tools that we made in the first video and see how we can use them to create complex and efficient scattering systems for your VFX shots, especially useful when dealing with huge environments.

I will show you some of my favourite workflows and share some techniques that I've used in the past in combination with the HDAs that we created in this series.

For those of you in tiers with downloadable material, you will have access to another post with some links to get the files.

As usual feel free to contact me with questions, suggestions, ideas, etc.
And if you like my content, please help me out and recommend it to your workmates.

All the info on my Patreon feed.

Thanks a lot for your support!
Xuan.

Simple spatial lighting by Xuan Prada

Hello patrons,

I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.

- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.

Thanks,
Xuan.

Head over to my Patreon site to access this video and many more.

Intro to LOPs and USD by Xuan Prada

My introduction to Houdini Solaris LOPs and USD is already available on my Patreon feed.
These are the topics that we are going to be covering.

- Introduction to USD and LOPs
- Asset creation workflow
- Simple assets
- Complex assets
- Manual layout and set dressing
- Using instances in LOPs
- Set dressing using information from Maya
- Using departments inputs/outputs
- Publishing system
- Setup for sequence lighting
- Random bits

This introduction is around four and a half hours long.
Check it out here.

Camera projection masterclass, episode 01 by Xuan Prada

The very first episode of "Camera projection masterclass" has dropped. For more than two and a half hours I will be introducing you to the fascinating world of camera projection for visual effects. This is a long-format series where I will be covering many concepts, ideas and practical exercises. Let's see what today's episode is all about.

- Introduction to the course
- Matte painting evolution
- Matte painting in the visual effects pipeline
- Matte painting workflows and tools
- Camera projection fundamentals
- Types of camera projections
- Common issues
- Camera projection elements in Nuke and Maya
- Recipes for all types of camera projections in Nuke
- A few words about Photoshop

Downloadable material will be available for certain tiers.

As always, thanks a lot for your support, you make this channel.
Check out my Patreon for more information.
Xuan.

Introduction to heightfields by Xuan Prada

This is the first part of the "Redshift little project" that we are doing to conclude the "Introduction to Redshift for VFX" series. In this case, I will explain the basics of Houdini's heightfields: the most common tools, different workflows, how to export attributes, geometry, and textures, how to use real-world data, and many more things. In the end, it is about three hours of video training that will set you up quickly to start working with heightfields.
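As a taste of the building blocks, here is a minimal Python sketch (not the network from the video) of a basic heightfield chain in Houdini. The node types are standard heightfield SOPs; values are arbitrary and some parameter names may differ slightly between Houdini versions.

```python
import hou

geo = hou.node("/obj").createNode("geo", "terrain_demo")

hf = geo.createNode("heightfield", "base_terrain")
hf.parm("gridspacing").set(1.0)     # parm name may vary by Houdini version

noise = geo.createNode("heightfield_noise", "large_shapes")
noise.setInput(0, hf)

erode = geo.createNode("heightfield_erode", "erosion")
erode.setInput(0, noise)

# a heightfield_output SOP could then be appended to bake layers out as textures
erode.setDisplayFlag(True)
```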

The video will be available for subscribers on my Patreon site.
Xuan.

Introduction to Redshift - little project by Xuan Prada

My Patreon series “Introduction to Redshift for VFX” is coming to an end. We have already discussed in depth the most basic features, like global illumination and sampling, and I shared with you my own “cheat sheets” for dealing with GI and sampling. We also talked about Redshift's lighting tools, built-in atmospheric effects, and cameras. In the third episode we talked about camera mapping, surface shaders, texturing, displacement maps from Mari and Zbrush, and how to ingest Substance Painter textures, and we did a few surfacing exercises.
This should give you a pretty good base to start your projects in Houdini and Redshift, or whatever 3D app you want to use with Redshift.

The next couple of videos in this series are going to be dedicated to doing a little project in Redshift from start to finish. We are going to be able to cover more features of the render engine and also discover broader techniques that hopefully you will find interesting. Let me explain what all of this is about.

We’ll be doing the simple shot below from start to finish. It is quite simple and graphic, I know, but to get there I’m going to explain many things that you are going to use quite a lot in visual effects shots, more than we actually end up using in this one.

We are going to start with a quick introduction to SpeedTree Cinema 8 to see how to create procedural trees. We will create a few trees from scratch that will later be used in Houdini. Once we have all the models ready, we will see how to deal with SpeedTree textures to use them in Redshift in an ACES pipeline.
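For reference, this is roughly what a texture conversion to ACEScg can look like with OpenImageIO's Python bindings (this is not the exact step from the video). The colour space names below assume an ACES OCIO config is active; yours may use different names, and the file paths are hypothetical.

```python
import OpenImageIO as oiio

src = oiio.ImageBuf("leaf_albedo_srgb.png")   # hypothetical sRGB texture from SpeedTree
dst = oiio.ImageBufAlgo.colorconvert(src, "Utility - sRGB - Texture", "ACES - ACEScg")
if dst.has_error:
    raise RuntimeError(dst.geterror())
dst.write("leaf_albedo_acescg.exr")           # EXR keeps the result linear/float
```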

These trees will be used in Houdini to create re-usable asset libraries, and later converted to Redshift proxies for memory efficiency and scattering, and also so they can easily be picked up by lighting artists when working on shots.

With all these trees we will take a look at how to create procedural scattering systems in Houdini using Redshift proxies. We will create multiple configurations depending on our needs. We are also going to learn how to ingest Quixel Megascans assets, again preparing them to work with ACES and creating an additional asset for our library. We will also re-use the scatterers made for trees to scatter rocks and pebbles.
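As a generic sketch of what a scattering setup like this can look like in Houdini (not the course files), the Python snippet below scatters points over a ground surface, gives each point a random variant index, and copies tree geometry onto them. Names and counts are arbitrary; in production the variant attribute would drive which proxy or texture each copy uses.

```python
import hou

geo = hou.node("/obj").createNode("geo", "scatter_demo")

ground = geo.createNode("grid", "ground")
scatter = geo.createNode("scatter", "scatter_points")
scatter.setInput(0, ground)
scatter.parm("npts").set(500)

# random per-point variant index, to drive tree/texture variations downstream
variant = geo.createNode("attribwrangle", "assign_variant")
variant.setInput(0, scatter)
variant.parm("snippet").set("i@variant = int(rand(@ptnum) * 4);")

tree = geo.createNode("file", "tree_proxy")   # a packed tree asset or Redshift proxy
copy = geo.createNode("copytopoints", "copy_trees")
copy.setInput(0, tree)
copy.setInput(1, variant)
copy.setDisplayFlag(True)
```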

As a base for scattering all of that, we will use Houdini’s height fields. For this particular shot, we are going to build a very simple ground made with height fields and Megascans, but I’m going to give you a pretty comprehensive introduction to height fields, way more than what you see in the final shot.

Once all the natural assets are created, we’ll be looking at the textures and look-dev of the character. Yes, there is a character in the shot; you don’t see much of it, but hey, this is what happens in VFX all the time. You spend months working on something barely noticeable. We will look into speed texturing and how to use Substance Painter with Redshift.


Now that we are dealing with characters, what if I show you how to deal with motion capture “guerrilla” style? That way you can grab some random motion capture from any source and apply it to your characters. Look at the clip below; there is nothing better than a moving character to see if the look actually works.

It looks better when moving, doesn’t it? There is no cloth simulation, by the way; it is a Redshift course, we are not going that far! Not yet.

Any environment work, of course, needs some kind of volumetrics. They create nice lighting effects, give a sense of scale, look good, and ruin your render times. We need to know how to deal with different types of volumetrics in Redshift, so I’m going to show you how to create a couple of different atmospherics using Houdini’s volumes. Quite simple but effective.

Finally, we will combine everything in a shot. I will show you how to organize everything properly using bundles and smart bundles to configure your render passes. We will take a look at how Redshift deals with AOVs, render settings, etc. Then we will put everything together in Nuke to output a nice render.
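To give a rough idea of the bundle side of this (this is not the course setup), here is a small Python sketch using Houdini node bundles to group objects for render passes. Bundle names and patterns are made up, and the pattern API detail is worth double-checking in the HOM docs.

```python
import hou

# A regular bundle: membership is explicit
trees_bundle = hou.addNodeBundle("trees_pass")
for node in hou.node("/obj").children():
    if node.name().startswith("tree_"):
        trees_bundle.addNode(node)

# A pattern-based ("smart") bundle: membership follows the pattern automatically
rocks_bundle = hou.addNodeBundle("rocks_pass")
rocks_bundle.setPattern("/obj/rock_*")   # check the HOM docs for the exact pattern API

# These bundles can then be referenced in a ROP's candidate/forced objects
# fields as @trees_pass and @rocks_pass.
```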

Just to summarize, this is what I’m planning to show you while working on this little project. My guess is that it will take me a couple of sessions to deliver all this video training.

  • SpeedTree introduction and tree creation

  • ACES texture conversion

  • ACES introduction in Houdini and Redshift

  • Creation of tree assets library in Houdini

  • Megascans ingestion

  • Character texturing and look-dev

  • Guerrilla techniques to apply mocap

  • Introduction to Houdini’s height fields

  • Redshift proxies

  • Scattering systems in Houdini

  • Volume creation in Houdini for atmospherics

  • Scene assembly

  • Redshift render settings

  • Compositing

  • Something that I probably forgot

All of this and much more training will be published on my Patreon. Please consider supporting me.

Thanks,
Xuan.

Arnold interoperability by Xuan Prada

In this video I will guide you through Arnold operators in both Maya and Houdini to show you advanced methods for creating looks, and potentially for anything Arnold related. Working with Arnold operators can be very beneficial in your visual effects pipeline; among other things, you are going to be able to transfer pretty much anything "for free" from one 3D package to another, in this case from Maya to Houdini and vice versa.
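To illustrate the operator idea at the Arnold scene level (this is not the Maya/Houdini setup from the video), here is a minimal sketch using the Arnold Python API: a set_parameter operator that overrides subdivision settings and is written out to an .ass file. The selection expression and file names are just examples, and exact API signatures vary a little between Arnold versions.

```python
from arnold import *

AiBegin()

op = AiNode("set_parameter")
AiNodeSetStr(op, "name", "subdiv_override")
AiNodeSetStr(op, "selection", "*")   # match everything; in practice you would narrow this
assignments = AiArrayAllocate(2, 1, AI_TYPE_STRING)
AiArraySetStr(assignments, 0, "subdiv_type = 'catclark'")
AiArraySetStr(assignments, 1, "subdiv_iterations = 2")
AiNodeSetArray(op, "assignment", assignments)

# hook the operator into the options so any app loading this .ass applies it
options = AiNodeLookUpByName("options")
AiNodeSetPtr(options, "operator", op)

AiASSWrite("look_override.ass", AI_NODE_ALL, False)
AiEnd()
```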

These days it is very common to work in a traditional 3D package like Maya when creating assets and then move to a scene assembler like Houdini or Katana to do shots. With this workflow you are going to be able to do so in a very clean, tidy and efficient way.

On top of that, I'm going to show you how to create look files that can easily be exported and used in lighting shots, whether in Maya or Houdini. You will also be able to override looks, version looks in Shotgun, and much more.

This is a two-plus-hour video tutorial posted on my Patreon feed.

Thanks a lot for your support.
Xuan.

Introduction to Reality Capture by Xuan Prada

In this 3-hour tutorial I go through my photogrammetry workflow using Reality Capture in conjunction with Maya, Zbrush, Mari and UV Layout.

I will guide you through the entire process, from capturing footage on set to asset completion. I will explain the most basic settings needed to process your images in Reality Capture, to create point clouds, high-resolution meshes and placeholder textures.
Then I will continue to develop the asset in order to make it suitable for any visual effects production.

These are the topics included in this tutorial.

- Camera gear.
- Camera settings.
- Shooting patterns.
- Footage preparation.
- Photogrammetry software.
- Photogrammetry process in Reality Capture.
- Model clean up.
- Retopology.
- UV mapping.
- Texture re-projection, displacement and color maps.
- High resolution texturing in Mari.
- Render tests.

Check it out on my Patreon feed.

Creases from Maya to Houdini by Xuan Prada

This is a quick tip on how to take crease information from Maya to Houdini to be rendered with Arnold. If you are like me and use Houdini as a scene assembler, this is something that you will have to deal with sooner or later.

  • In Maya, I have a simple cube with creases; on the right side you can see how it looks once subdivided twice.

  • Not only can you take crease information into Houdini, you can also export subdivision settings, and HtoA will interpret them automatically. Make sure you set the subdivision type to catclark and the iterations to 2, or whatever you need.

  • When exporting the alembic caches, you need to include the Arnold parameters that take care of subdivision and creases. Actually, there is no extra parameter for creases; by including the subdivision parameters you already get the crease information (see the sketch after this list).

  • Note that the Arnold parameters in Maya use the ar_ prefix, for example ar_subdiv_iterations, but in Houdini Arnold parameters don’t use the ar prefix. Because of that, make sure you export the parameters without the ar prefix.

  • All of this can, of course, happen automatically in your pipeline when publishing assets. It actually should, to make artists’ lives easier and avoid mistakes.

  • That’s it. If you import the alembic cache in Houdini, both creases and subdivisions should render as expected. This information can be overridden in SOPs with Arnold parameters.
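Here is a rough illustration of the export side in Maya (not a production publish script): adding subdivision attributes to a creased shape and asking AbcExport to write them with the cache. The attribute names follow the post's advice of exporting without the ar prefix; your pipeline's exact naming conventions may differ, and the shape/file paths are hypothetical.

```python
import maya.cmds as cmds

shape = "creaseCubeShape"   # hypothetical creased shape

# attributes describing the subdivision settings
cmds.addAttr(shape, longName="subdiv_type", dataType="string")
cmds.setAttr(shape + ".subdiv_type", "catclark", type="string")
cmds.addAttr(shape, longName="subdiv_iterations", attributeType="long", defaultValue=2)

# export the cache, explicitly asking AbcExport to write those attributes
job = ("-frameRange 1 1 -uvWrite -worldSpace "
       "-attr subdiv_type -attr subdiv_iterations "
       "-root |creaseCube -file /tmp/creaseCube.abc")
cmds.AbcExport(j=job)
```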

Mari 4.6 new features and production template by Xuan Prada

Hello patrons,

I recorded a new video about the new features in Mari 4.6, released just a few weeks ago. I will also talk about some of the new features in Extension Pack 5, and finally I will show you the production template that I've been using lately to do all the texturing and pre-lookDev on many assets for film and TV projects.

Here is a big-picture overview of the topics covered in this video. It will be about 2.5 hours long and will be published on my Patreon site.

- Mari 4.6 new features
- New material system explained in depth
- Material ingestion tool
- Optimization settings
- How and where to use geo channels
- New camera projection tools
- Extension pack 5 new features (or my most used tools)
- Production template for texturing and pre-lookDev

All the information on my Patreon feed.

Thanks for your support!
Xuan.

Render mask in HtoA by Xuan Prada

This is how to set up a render mask, or render patch, or whatever you want to call it, in Houdini using Arnold.
Render patches are generally used when a high-cost render needs a fix that only affects a small portion of the frame, or when most of the frame is going to be covered by a foreground plate.

In these scenarios there is no need to waste render time and render the whole frame, but just what is needed to finalize the shot.

  • This is the scene that I’m going to use for this example. Let’s pretend that we have already rendered this shot at 4K for the full frame range. All of a sudden, we need to make some changes to the rubber toy screen left.

  • The best way to create a render mask is using Nuke. You can use an old render as a template to make sure everything you need in the frame is covered by the mask. RotoPaint nodes are very useful, especially if you need to animate your mask.

  • Create a camera shader and connect the render mask to its filter map.

  • Connect the shader to the camera shader input of the camera, in the Arnold tab.

  • If you render now, only the mask area will be rendered, saving us a lot of render time (see the sketch after this list for how the connection looks at the Arnold scene level).
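For reference, this is roughly what the same hookup looks like at the Arnold scene level, using the Arnold Python API (this is not an HtoA screenshot, and exact API signatures vary a little between Arnold versions): an image node with the mask texture connected to the camera's filtermap. Node names and file paths are made up for the example.

```python
from arnold import *

AiBegin()
AiASSLoad("/tmp/shot.ass", AI_NODE_ALL)          # scene exported from Houdini (hypothetical)

mask = AiNode("image")
AiNodeSetStr(mask, "name", "render_mask")
AiNodeSetStr(mask, "filename", "/tmp/render_mask.exr")

cam = AiNodeLookUpByName("/obj/render_cam")      # hypothetical camera name
AiNodeSetPtr(cam, "filtermap", mask)             # black areas of the map receive no samples

AiASSWrite("/tmp/shot_masked.ass", AI_NODE_ALL, False)
AiEnd()
```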

There is a huge limitation that I don’t know how to fix, and I’m hoping for someone to throw some light on this topic: if you are rendering with overscan, this won’t work nicely. Let me show you why.

  • I’m rendering with 120 pixels of overscan. I know that is, generally speaking, a lot, but I just want to illustrate this example very clearly.

  • Now, if you render with the same overscan and the render mask applied, you will get a black border around the render. Below is the render patch comped over the full-frame render.

I’m pretty sure the issue is related to the wrap options of the render mask. By changing the wrapping mode you can get away with it in some shots, but in an example like the one in this post, there is no fix by playing with the wrapping modes.

Any ideas?

You can definitely use the camera crop options and it will work perfectly fine, no issues at all. It is not as flexible as using your own textures, but it will do in most cases.

Katana Fastrack episode 06 by Xuan Prada

Episode 06 of "Katana Fastrack" is already available.

In this episode we will light the first of two shots that I've prepared for this series. A simple full CG shot that will guide you through the lighting workflow in Katana using the lighting template that we created in the previous episode.

On top of that, we also cover the look-dev of the environment used in this shot.
We'll take a look at how to implement delivery requirements in our lighting template, such as specific resolutions based on production decisions.

We will also take a look at how to create and use interactive render filters, a very powerful feature in Katana. And finally, we will do the lighting and slap comp of the first shot of this course.

All the info on my Patreon feed.

Lighting a full cg shot in Houdini, part 01 by Xuan Prada

Part 01 of "Lighting a full cg shot in Houdini" is out.

In this first episode I go through everything you need to convert Houdini into a powerful scene assembler, especially focused on look-dev. I will go through other assembly capabilities and lighting/rendering in future videos.

In this episode we will cover:

- How to organize and prepare assets in Maya to be used in Houdini for assembly and render
- Good UV workflows for VFX and animation productions
- How to assemble multiple assets in Houdini in a scene assembly fashion
- Quick look at speed texturing in Substance Painter
- How to create digital assets and presets in Houdini to re-use in your projects
- Look-dev workflow in Houdini and Arnold

All the information on my Patreon feed.

Thanks for your support,
Xuan.

Katana Fastrack episode 04 by Xuan Prada

Katana Fastrack episode 04 is already available.
In this episode, we will finish the Ant-Man lookDev by tweaking all the shaders and texture maps created in Mari.

Then we will do a very quick slap comp in Katana and Nuke to check that everything works as expected and looks good. We will do this by rendering the full motion range of Ant-Man's walk cycle. And finally, we will write a Katana look file to be used by the lighters in their shots.

Check it out on my Patreon feed.

Houdini as scene assembler part 05. User attributes by Xuan Prada

Sometimes, especially during the layout/set-dressing stage, artists have to decide on certain rules or patterns to compose a shot. Take a football stadium, for example: imagine that the first row of seats is blue, the next row is red and the third row is green.
There are many ways of doing this, but let's say that we have thousands of seats and we know the colours they should have. Then it is easy to make rules and patterns that allow total flexibility later on when texturing and look-deving.

In this example I’m using my favourite tool for explaining 3D stuff, Lego figurines. I have 4 rows of Lego heads and I want each of them to have a different Lego face. At the same time, I want to use the same shader for all of them; I just want different textures. By doing this I will end up with a very simple and tidy setup, and iterating won’t be a pain.

Doing this in Maya is quite straightforward and I explained the process some time ago in this blog. What I want to illustrate now is another common situation that we face in production. Layout artists and set dressers usually do their work in Maya and then pass it on to look-dev artists and lighting TDs, who usually use scene assemblers like Katana, Clarisse, Houdini, or Gaffer.

In this example I want to show you how to handle user attributes coming from Maya in Houdini to create texture and shader variations. The steps are below, followed by a small script sketch of the Maya side.

  • In Maya select all the shapes and add a custom attribute.

  • Call it “variation”

  • Data type integer

  • Default value 0

  • Add a different value to each Lego head. Add as many values as texture variations you need to have

  • Export all the Lego heads as Alembic; remember to add the attributes that you want to export to Houdini

  • Import the alembic file in Houdini

  • Connect all the texture variations to a switch node

  • This can be done also with shaders following exactly the same workflow

  • Connect a user data int node to the index input of the switch node and type in the name of your attribute

  • Finally, the render comes out as expected without any further tweaks: just one shader that automatically picks up different textures based on the layout artist's criteria
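Here is a small sketch of the Maya side of these steps. The attribute name comes from the example above; the shape naming pattern and file paths are hypothetical, so adapt them to your scene.

```python
import maya.cmds as cmds

heads = cmds.ls("legoHead_*Shape", long=True)   # hypothetical shape naming

for i, shape in enumerate(heads):
    if not cmds.attributeQuery("variation", node=shape, exists=True):
        cmds.addAttr(shape, longName="variation", attributeType="long", defaultValue=0)
    cmds.setAttr(shape + ".variation", i % 4)   # four texture variations

# "-attr variation" tells AbcExport to write the attribute with the geometry
roots = " ".join("-root " + cmds.listRelatives(s, parent=True, fullPath=True)[0]
                 for s in heads)
job = "-frameRange 1 1 -uvWrite -attr variation {} -file /tmp/legoHeads.abc".format(roots)
cmds.AbcExport(j=job)
```

In Houdini, the imported "variation" attribute is then read with a user data int node and piped into the index of the switch node, exactly as in the steps above.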

Houdini as scene assembler part 03 by Xuan Prada

In this post I will talk about using texture bitmaps and subdivision surfaces.
I have a material network with a couple of shaders, one for the body of this character and another one for the rest. If I were using Arnold I would have a SHOP network instead.

To bring in texture bitmaps I use texture nodes when working with Mantra and image nodes when working with Arnold. The Principled Shader has tabs with inputs for textures, but I rarely use these; I always create nodes to take care of the texturing, since at the end of the day I never use only one texture per channel. More on this in future posts.
In Mantra, textures are multiplied by the albedo colour. Be careful with this.

In Mantra, the UDIM tag is textureName.%(UDIM)d.exr, while in Arnold it is textureName.<UDIM>.exr.

There is a triplanar node that can be used with Arnold and a different one called UV triplanar projection for Mantra. I don’t usually work without UVs, but these nodes can be useful when working with terrains or other large surfaces.

To subdivide geometry, at object level you can just go to the Arnold tab and select the type of subdivision and the amount. If you need to subdivide only a few parts of you alembic asset, create an unpack node (transfer attributes and groups) and then a subdivide node. This works with both Mantra and Arnold, although there is a better way of doing this with Arnold. We will talk about it in the future.