
Intro to VEX for artists 01 by Xuan Prada

Hello Patrons,

I'm starting the new year with an introduction to VEX for artists.
VEX is a crucial tool for any Houdini artist. It doesn't matter whether you use Houdini for procedural modelling, environment creation, scene assembly and rendering, or FX: at some point you will have to use some VEX.

This video covers the basics of VEX from scratch. There is no need to know any coding at all, and I will be building some basic examples along the way, from something very simple to more complicated setups.

My idea is to record two or three videos about VEX over the next few months.
This is the time to take your Houdini skills to the next level by understanding the most common uses of VEX.

Thanks!

Info on my Patreon.

Houdini's window box by Xuan Prada

Hello,

I just published a video on my Patreon about Houdini's window box system.
It is one of those tools that I've been using for many years at different VFX studios, but it now works out of the box with Houdini and Karma. I hope you find it useful and that you can start using it in your projects soon!

All the info on my Patreon site.

Solaris Katana interoperability part 2/2 by Xuan Prada

Hello patrons,

In this video we will finish this mini-series about Solaris and Katana interoperability.
I'll be covering the topics that I didn't have time to cover in the first video, including:

- Manual set dressing from Solaris to Katana.
- Hero instancing from Solaris to Katana.
- Background instancing and custom attributes from Solaris to Katana.
- Dummy crowds from Solaris to Katana.
- Everything using USD of course.

There are many more things that could be covered when it comes to Solaris and Katana interoperability; I'm pretty sure I'll cover some of them in future USD videos.

All the info on my Patreon.

Solaris Katana interoperability part 1/2 by Xuan Prada

Hello patrons,

This is a small trailer for the video Houdini Solaris / Katana interoperability part 1/2.
The full video is published only for Patrons.
The whole thing is divided into two videos. The first one is around 2.5 hours, and hopefully next month I can publish the second one, covering the rest of the topics.

In this first video we are covering:

- Working template in Solaris.
- Working template in Katana.
- Full assets from Solaris to Katana.
- Modifying/overriding looks in Katana.
- Geometry assets from Solaris to Katana.
- Publishing looks as KLF.
- Publishing looks as USD files.
- Full assembly USD files.

All the information on my Patreon.

Thanks!
Xuan.

Dummy crowds in Houdini Solaris by Xuan Prada

Quick video showing how to use one of my oldest tricks to create “dummy crowds“, but this time in Solaris, which makes this even easier and faster than using SOPs or MOPs.

I’ve used a flavour of this dummy crowds technique in many shots for a bunch of movies. It is very limited in so many ways, but when it works it works. You can populate your environments in minutes!

Please consider subscribing to my Patreon so I can keep making professional VFX training. Thanks. www.patreon.com/elephantvfx

Houdini Solaris and Katana. Custom attributes by Xuan Prada

This is a quick video showcasing how to use custom attributes in Houdini Solaris for USD scattering systems. The USD layer will be exported to Katana to do procedural look-dev using the exported custom parameters.
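
As a rough illustration of the idea (not the exact setup from the video), a point wrangle on the scatter points could create custom attributes like the ones below; once the points are brought into Solaris, these become USD primvars that Katana can read for procedural look-dev. The attribute names here are just placeholders.

  // Point wrangle on the scatter points (attribute names are only examples).
  // Each point gets a random variant id and a random tint multiplier.
  i@variant = int(rand(@ptnum * 12.345) * 4);          // pick one of four asset variants
  f@tint    = fit01(rand(@ptnum * 67.891), 0.2, 1.0);  // per-instance tint for look-dev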

This technique will be explored in depth in my upcoming video about Houdini Solaris and Katana interoperability.

Subscribe to my Patreon to have full access to my entire library of visual effects training.
www.patreon.com/elephantvfx

USD in depth, part 01 by Xuan Prada

Hello Patrons,

I'm starting a new series called "USD in depth", where I will be exploring everything USD related. As you probably know, USD is a new standard pipeline system/file format based on layers that contain scene descriptions. It is supported by the VFX platform and is, or will be, at the core of many visual effects studios and animation companies around the globe.

In this first video (more than 3 hours) we will be talking about what defines USD: types of data, USD structure, terminology, attributes, and layers. This is a very dense video, so grab a pot of coffee and enjoy.

All the info on my Patreon.

Thanks,
Xuan.

Deep compositing - going deeper by Xuan Prada

Hello patrons,

This is a continuation of the intro to deep compositing, where we go deeper into compositing workflows using depth.

I will show you how to properly use deep information in a flat composition, so you can work fast and efficiently with all the benefits of depth data and none of the caveats.

The video is more than 3 hours long and we will explore:

- Quick recap of pros and cons of using deep comp.
- Quick recap of basic deep tools.
- Setting up render passes in a 3D software for deep.
- Deep holdouts.
- Organizing deep comps.
- How to use AOVs in deep.
- How to work with precomps.
- Creating deep templates.
- Using 3D geometry in deep.
- Using 2D elements in deep.
- Using particles in deep.
- Zdepth from deep information.

Thanks for your support!
Head over to my Patreon for all the info.

Xuan.

Mix 04 by Xuan Prada

Hello patrons,

The first video of 2022 will be a mix of topics.

The first part of the video will be dedicated to face building and face tracking in Nuke. These tools and techniques allow us to generate 3D heads and faces from only a few photos with the help of AI. Once we have the 3D model, we should be able to track and matchmove a shot to do a full head replacement or to extend/enhance some facial features.

In the second part of the video I will show you a technique that I used while working on Happy Feet to generate footprints and foot trails. A pretty neat technique that relies on transferring information between surfaces instead of going all-in with complex simulations.

This is a 3.5-hour video, so grab yourself a cup of coffee and enjoy!
All the information on my Patreon channel.

As always, thanks for your support!

Xuan.

VDB as displacement by Xuan Prada

The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically, just using a displacement map, but of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt.

The cones are growing from frame 0 to 60 and moving around randomly.

I'm adding a for-each connected piece loop, and inside the loop an Edit node to increase the volume of the original cones a little bit.

Just select all in the group field, and set the transform space to local origin by connectivity, so each cone scales from its own center.

Add a VDB From Polygons node, set it to distance VDB, and add some resolution; it doesn't need to be super high.

Then I just cache the VDB sequence.

Create an Attribute From Volume node to pass the Cd attribute from the VDB cache to the sphere.

To visualize it better you can just add a visualizer mapped to the attribute.

In shading, create a user data float to read the Cd attribute and connect it to the displacement.

If you are looking for the opposite effect, you can easily invert the displacement map.
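
For reference, here is a minimal VEX sketch of the volume-sampling step, as an alternative to the Attribute From Volume node. It assumes a point wrangle with the sphere in the first input and the cached distance VDB in the second, and that the VDB primitive is named "surface"; the remap range is just an example value.

  // Point wrangle: sphere in input 0, cached distance VDB in input 1.
  // Sample the signed distance of the VDB at each point of the sphere.
  float d = volumesample(1, "surface", @P);

  // Remap the distance into a 0-1 mask near the cones (0.05 is an example falloff).
  float mask = fit(d, 0.05, 0.0, 0.0, 1.0);

  // Store it in Cd so the shader's user data float can read it as displacement.
  @Cd = mask;

  // For the opposite effect, simply invert the mask:
  // @Cd = 1 - mask;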

Detailing digi doubles using generic humans by Xuan Prada

This is probably the last video of the year; let's see about that.

This time it is all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid would have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.

This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.

In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. Once we have that, we will re-project all the details from the sculpt and apply high-resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.

The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.

All the info on my Patreon site.

Thanks!

Xuan.

Lookdev rig for Houdini by Xuan Prada

Hello patrons,

In this video I show you how to create a production-ready lookdev rig for Houdini, or what I like to call a single-click render solution for your lookdevs.

It is in a way similar to the one we did for Katana a while ago, but using all the power and flexibility of Houdini's HDA system.

Speaking of HDAs, I will be introducing the new HDA features that come with Houdini 18.5.633, which I think are really nice, especially for smaller studios that don't have enough resources to build a pipeline around HDAs.

By the end of this video you should be able to build your own lookdev tool and adapt it to the needs of your projects.

We'll be working with the latest versions of Houdini, Arnold and ACES.

As usual, the video starts with some slides where I try to explain why building a lookdev rig is a must before you do any work on your project. Don't skip it; I know it is boring, but it is very much needed. Downloadable material will be attached in the next post.

Thank you very much for your support!

Head over to my Patreon feed.

Xuan.

Small dynamic clouds by Xuan Prada

Hello,

I don't think I will be able to publish a video this month (let's see), but in the meantime you can download five caches of small dynamic clouds that I simulated in Houdini.
They are 1000-frame simulations and should work pretty well for creating vast cloudscapes.

They are .bgeo caches; feel free to convert them to .vdb if you want to use them in any other software.


The videos below are flipbooks of the animated clouds, not renders.

The downloadable link will be published in the next post.
This is free of charge for all tiers with downloadable resources.

Thanks,
Xuan.

Camera projection masterclass, episode 03 by Xuan Prada

Hello patrons,

I'm about to post "Camera projection masterclass, episode 03".
In this episode we are going to create a nested projection setup, where the camera moves from far away at the beginning of the shot to end up closer to the subject by the end of the shot. A very common setup that you will see a lot in matte painting and environment tasks.

Then, we are going to take a look at the concept of overscan for camera projection. I will show you different ways of creating overscan, I will explain why overscan is extremely important for all your camera projection setups, and finally we will do a complex overscan camera projection exercise, using an impossible camera.

Make a big pot of black coffee because this is around 5 hours of professional training divided into two videos. Oh, and remember that you can download supporting files if your tier includes downloadable material.

All the info on my patreon feed.

Thanks!
Xuan.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap3. I'm not saying that I can fully replace Wrap3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use Houdini now instead of Wrap3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in ZBrush. These photogrammetry scans are usually somewhere around 13–18 million polygons, too dense for the wrapping process. You can just decimate the model and export it as .obj.

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to match (roughly) the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap3, or Houdini in this case, but I think it is better to fix them beforehand; in a VFX pipeline you will be publishing assets from Maya anyway. Then export both models as .obj.

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan into Houdini.

  • Create a topo transfer node.

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I have had issues in the past using Maya units (decimeters), so it's better to scale by 0.1 just in case.

  • Enable the topo transfer and press Enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then press Ctrl+G to switch to the scan mesh and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or, in case you are using this technique for texturing, move into ZBrush, Mari and/or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign a classic shader with the photogrammetry texture connected to its emission color to the scan. Disable the diffuse component.

  • Create a bakeTexture ROP with the following settings:

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Tick off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.

Simple spatial lighting by Xuan Prada

Hello patrons,

I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.

- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.

Thanks,
Xuan.

Head over to my Patreon site to access this video and many more.

Camera projection masterclass, episode 02 by Xuan Prada

Hello Patrons,

Camera projection masterclass, episode 02 is here! In this video, we are going to be doing two different exercises involving camera projection work in Nuke. The first exercise is a simple one, but also one you'll be doing all the time: a layering camera projection.

The second exercise is a more complex one: we'll be dealing with multiple geometries, different footage, and different approaches to building our camera projection setup. This will be a coverage projection exercise.

More than two hours of content, also providing working files for you to practice.

As always, I can't thank you enough for your support.

Questions, suggestions, and critiques are always welcome.

All the info on my Patreon site.

Thanks!
Xuan.

Introduction to heightfields by Xuan Prada

This is the first part of the "Redshift little project" we are doing to conclude the "Introduction to Redshift for VFX" series. In this case, I will explain the basics of Houdini's heightfields: the most common tools, different workflows, how to export attributes, geometry, and textures, how to use real-world data, and many more things. In the end, it is about three hours of video training that will set you up quickly to start working with heightfields.

The video will be available for subscribers on my Patreon site.
Xuan.

Wade Tillman - spec job by Xuan Prada

This is just a spec job for Wade Tillman’s character on HBO’s Watchmen. After watching the series, I enjoyed the work done by Marz VFX on Tillman’s mask so much that I wanted to do my own. Unfortunately, I don’t have much time, so creating this asset seemed like something doable in a few hours over the weekend. It is just a simple test; it would require a lot more work to be a production-ready asset, of course. I’m just playing the role of a visual effects designer here, trying to come up with an idea of how to implement this mask into the VFX production pipeline.

I’m planning to do more work with this asset in the future, including mocap, cloth simulation, proper animated HDRI lighting, etc. I also changed the design that they used in the series. Instead of having the seams in the middle of the head from ear to ear, I placed my seams in the middle of the face, dividing it in two. I believe the one they did for the real series works much better, but I just wanted to try something different. I will definitely do another test mimicking the other design.

So far I have tried one design in two different stages: the mask covering the entire head, and the mask pulled up to reveal the mouth and chin of the character, as seen many times in the series. I also tried a couple of looks, one more mirror-like with small imperfections in the reflections, and another one rougher. I believe they tried similar looks, but in the end they went with the one with more pristine reflections.

I think it would be interesting to see another test with different types of materials; introducing some iridescence would also be fun. I will try something else next time.

Capturing lighting and reflections to light this asset properly has to be the most exciting part of this task. That is something that I haven’t done yet, but I will try as soon as I can. It is pretty much like having a mirror ball in the shots. Capturing animated panoramic HDRIs is definitely the way to go, or at least the simplest one. Let’s try it next time.

Finally, I did a couple of cloth simulation tests for both stages of the mask. Just playing a bit with vellum in Houdini.

References from the series.

Just trying different looks here for both stages of the mask.

Simple cloth simulation test. From t-pose to anim pose and walk cycle.