UV mapping

Detailing digi doubles using generic humans by Xuan Prada

This is probably the last video of the year, let's see about that.

This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid will have perfect topology, great UV mapping, standard skin shaders, isolation maps to control different areas, grooming templates, etc.

This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.

In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. Once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.

The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.

All the info on my Patreon site.

Thanks!

Xuan.

Introduction to Reality Capture by Xuan Prada

In this 3 hour tutorial I go through my photogrammetry workflow using Reality Capture in conjunction with Maya, ZBrush, Mari and UV Layout.

I will guide you through the entire process, from capturing footage on set to asset completion. I will explain the most basic settings needed to process your images in Reality Capture to create point clouds, high resolution meshes and placeholder textures.
Then I will continue to develop the asset in order to make it suitable for any visual effects production.

These are the topics included in this tutorial.

- Camera gear.
- Camera settings.
- Shooting patterns.
- Footage preparation.
- Photogrammetry software.
- Photogrammetry process in Reality Capture.
- Model clean up.
- Retopology.
- UV mapping.
- Texture re-projection, displacement and color maps.
- High resolution texturing in Mari.
- Render tests.

Check it out on my Patreon feed.

Houdini as scene assembler, part 01 (of many) by Xuan Prada

It’s been a while since I used Houdini at work. The very first time I used Houdini on a show was while working on Happy Feet 2, where it was our main scene assembler. Look-dev, lighting and rendering were all done in Houdini and 3Delight.

After that I never used Houdini again until I was working on Geostorm at Dneg, where most of the shots were managed with Houdini and PrMan. That is all my experience with Houdini in a professional environment. Needless to say, I have only used Houdini for assembly tasks, look-dev, lighting and rendering, nothing like FX or other fancy stuff.

The common thing between the two shows where I used Houdini as an assembler is that we had pretty neat tools to take care of most of the steps through the pipeline. Because of that I can barely use Houdini out of the box, so I’m going to try to learn how to use it and share it here for future reference.

During my time working at facilities like MPC, Dneg or Framestore, I have used different scene assemblers like Katana, Clarisse or other proprietary tools. My goal is to extrapolate my knowledge and experience with those tools to Houdini. I’m pretty sure I’ll be using tools and techniques in the wrong way, just because Houdini has a different philosophy than other tools, or just because of my lack of knowledge about Houdini and proceduralism in general. But anyway, I’ll try to make it work; if you see anything that I’m doing terribly wrong, please let me know, I’ll be listening.

I’ll be posting about stuff that I’m dealing with in no particular order but always assembly oriented, so do not expect to see here anything related to FX or more “traditional” uses of Houdini. Most of the stuff is going to be very basic, especially at the beginning, but please bear with me, it will get more interesting in the future.

If you are assembling a scene, one of the first steps is to bring in all your assets from other applications. You can of course generate content in Houdini, but usually most of your assets will be created in other packages, Maya being the most common one. So I guess the very first thing you’d have to deal with is how to import alembic caches. If you are working in a VFX facility, the chances of having automated tools to set up your shots for you are pretty high; launching Houdini from a context in a terminal will take care of everything. If you are at home or starting to use Houdini in a VFX boutique, you will have to set up your shots manually. There are clever and easy ways to create Houdini templates for your show/shot, but we will leave this topic for future posts.

To bring your assets in as alembic caches, just create a geometry node, step inside and replace the existing file node with another file node pointing to your alembic cache, or just use the existing file node and change the path to read your alembic cache.
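If you prefer to set this up with Python, here is a minimal sketch. The node type names ("geo", "alembic") and the "fileName" parameter are how I remember them, and the path is a placeholder, so double check against your Houdini version.

```python
# Minimal sketch: importing an Alembic cache at SOP level with Python.
# Node/parameter names are from memory and the path is a placeholder.
import hou

obj = hou.node("/obj")
geo = obj.createNode("geo", "asset_geo")

# Some Houdini versions create a default file SOP inside new geo nodes; clear it.
for child in geo.children():
    child.destroy()

abc = geo.createNode("alembic", "import_cache")
abc.parm("fileName").set("/path/to/asset.abc")  # placeholder path
abc.setDisplayFlag(True)
abc.setRenderFlag(True)
```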

If you are look-deving a character, let’s say, it is completely fine to look at the full geometry in the viewport. If you are assembling a big scene like a city or a spaceship, you’d probably want to change your viewport settings to something like bounding boxes. There are better ways of dealing with bounding boxes without loading the geo; more to come soon.

Assets are usually complex, and we try to keep everything tidy and organised by naming everything properly and structuring groups and hierarchies in a particular way that makes sense for our purposes. The unpack node will allow you to access all the different parts and components of the alembic cache and to perform different operations on them later on. Groups can be selected based on the hierarchies created in Maya or based on wildcards. It is extremely important to use clever naming and to structure groups following certain logic to make the assembly process easier and faster.

The blast node will also help you access the information contained in the alembic cache and remove whatever you don’t need for a particular operation. You can also invert the selection to keep the items that you wrote in the group field and get rid of the rest.

The group node is another very useful node for pointing to different groups in your alembic caches, again based on Maya grouping and wildcards.
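As a rough illustration of how these nodes fit together, here is a small Python sketch chaining an unpack and a blast after the alembic import from before. The wildcard pattern is hypothetical and depends entirely on how the asset was grouped and named in Maya.

```python
# Sketch: unpack the Alembic cache and isolate part of the hierarchy.
# The group pattern is hypothetical; it mirrors whatever grouping came from Maya.
import hou

geo = hou.node("/obj/asset_geo")
abc = geo.node("import_cache")

unpack = geo.createNode("unpack", "unpack_cache")
unpack.setInput(0, abc)

blast = geo.createNode("blast", "isolate_body")
blast.setInput(0, unpack)
blast.parm("group").set("*body*")   # wildcard based on the Maya naming
blast.parm("negate").set(True)      # invert: keep the selection, delete the rest
blast.setDisplayFlag(True)
```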

That is it for now in that sense; there are many ways to manipulate alembic caches, but we don’t need to talk about that just yet. In these first posts I will be talking mostly about bringing in assets, working with textures and look-dev. That is the first step for assembling a shot: we need assets ready to travel through the pipeline.

UV mapping is key for us. A lot of tasks performed in Houdini use procedural UVs or no UVs at all, but this is not the case for us; assets always have proper UV mapping. Generally speaking you will do all the UV related tasks in Maya, UV Layout or similar tools. In order to see the UVs in Houdini we need to unpack the alembic cache first; then we will be able to press “5” and look at the UVs.

Use a uv quick shade node to display a checkered texture in the viewport. You can easily change the size of the checker or use a different texture. There is also a group field that you can use for filtering.
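A quick Python version of that UV check, assuming the unpack node from the previous sketch; the SOP type name "uvquickshade" is how I remember it, so verify it in your build.

```python
# Sketch: display a checker texture to inspect UVs after unpacking.
import hou

geo = hou.node("/obj/asset_geo")
unpack = geo.node("unpack_cache")

checker = geo.createNode("uvquickshade", "uv_check")
checker.setInput(0, unpack)
checker.setDisplayFlag(True)
# With this node displayed, press "5" in the viewport to look at the UVs.
```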

It is not ideal, but if you are working on extremely simple assets like walls, grounds or maybe terrains, it is totally fine to create the UVs in Houdini. Houdini’s UV tools are not the best, but you will find yourself using them at some point. The uv texture node creates basic projections like cylindrical, orthographic, etc.

The uv unwrap node creates automatic UVs based on projection planes.

The uv layout node is a tool for packing your UVs. Using a fixed scale you can distribute the UVs across different UDIMs.
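If you do end up UVing a simple asset in Houdini, the chain is as trivial as it sounds. A hedged sketch using default parameters only, since exact parm names vary between versions and the asset path is hypothetical:

```python
# Sketch: quick UVs for a simple asset using Houdini's own tools.
import hou

geo = hou.node("/obj/simple_wall")      # hypothetical simple asset
last = geo.displayNode()

unwrap = geo.createNode("uvunwrap", "projection_uvs")
unwrap.setInput(0, last)

layout = geo.createNode("uvlayout", "pack_uvs")
layout.setInput(0, unwrap)
layout.setDisplayFlag(True)
```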

The auto uv node is actually pretty good. It is part of the game development tools shipped with Houdini. You need to activate this package first: just go to the shelf, click on the plus button and look for the game development tools. Then click on the “update toolset” icon to get the latest version.

The auto uv node has different methods for UVing and for packing; it is worth trying them. It works really well, especially with messy objects.

The uv transform node deals with anything related to translating, rotating and scaling UVs. You don’t really want to do this here in Houdini, but if you have to, this is the tool. I use it a lot if I need to re-distribute UDIM tiles.

The attribute create node allows you to create an attribute to move UVs to a specific UDIM. Then add a uv layout node and set the packing method to UDIM attribute.
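The original post showed the attribute create parameters in a screenshot, so take this Python sketch as my reading of it: the attribute name, the menu indices and the target UDIM are all assumptions.

```python
# Hedged sketch: tag primitives with an attribute, then pack by UDIM attribute.
import hou

geo = hou.node("/obj/asset_geo")
last = geo.displayNode()

tag = geo.createNode("attribcreate", "udim_tag")
tag.setInput(0, last)
tag.parm("name1").set("udim")      # assumed attribute name the uv layout will read
tag.parm("class1").set(1)          # primitive class (menu index may differ)
tag.parm("type1").set(1)           # integer type (menu index may differ)
tag.parm("value1v1").set(1002)     # hypothetical target UDIM

layout = geo.createNode("uvlayout", "pack_by_udim_attribute")
layout.setInput(0, tag)
# Set the uv layout packing method to "UDIM attribute" in its parameters.
```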

UV to Mesh by Xuan Prada

My friend David Munoz Velazquez just pointed me to this great script to flatten geometry based on UV mapping, pretty useful for re-topology tasks. In this demo I use it to create nice topology for 3D garments made in Marvelous Designer. Then I can apply any new simulation changes to the final mesh using morphs. Check it out.

UDIM workflow in Nuke by Xuan Prada

Texture artists, matte painters and environment artists often have to deal with UDIMs in Nuke. This is a very basic template that hopefully can illustrate how we usually handle this situation.

Cons

  • Slower than using Mari. Each UDIM is treated individually.
  • No virtual texturing, slower workflow. Yes, you can use Nuke's proxies but they are not as good as virtual texturing.

Pros

  • Not paint buffer dependent; always the best resolution available.
  • Non destructive workflow, nodes!
  • Save around £1,233 on Mari's license.

Workflow

  • I'll be using this simple footage as base for my matte.
  • We need to project this in Nuke and bake it on to different UDIMs to use it later in a 3D package.
  • As geometry support I'm using this plane with 5 UDIMs.
  • In Nuke, import the geometry support and the footage.
  • Create a camera.
  • Connect the camera and footage using a Project 3D node.
  • Disable the crop option of the Project 3D node. Otherwise the projection won't go any further than the 0-1 UV range.
  • Use a UV Tile node to point to the UDIM that you need to work on.
  • Connect the img input of the UV Tile node to the geometry support.
  • Use a UV Project node to connect the camera and the geometry support.
  • Set projection to off.
  • Import the camera of the shot.
  • Look through the camera in the 3D view and the matte should be projected on to the geometry support.
  • Connect a Scanline Render to the UV Project.
  • Set the projection mode to UV.
  • In the 2D view you should see the UDIM projection that we set previously.
  • If you need to work with a different UDIM just change the UV Tile.
  • So this is the basic setup (there is a Python sketch of this graph after this list). Do whatever you need in between, like projections, painting and so on, to finish your matte.
  • Then export all your UDIMs individually as texture maps to be used in the 3D software.
  • Here I just rendered the UDIMs extracted from Nuke in Maya/Arnold.
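Here is the Python sketch mentioned above, wiring up the same graph. Treat every string as an assumption: some node classes carry version suffixes (Project3D2, Camera2, etc.), knob names are from memory, and the input indices should be checked against the input labels on the nodes in the DAG.

```python
# Hedged sketch of the UDIM projection/bake graph described above.
# Node class names, knob names and input indices may differ per Nuke version.
import nuke

geo = nuke.createNode("ReadGeo2")            # the UV'd plane with 5 UDIMs
geo["file"].setValue("/path/to/plane.abc")   # placeholder path

plate = nuke.createNode("Read")
plate["file"].setValue("/path/to/matte.exr") # placeholder path

cam = nuke.createNode("Camera2")             # the shot camera

project = nuke.createNode("Project3D")
project.setInput(0, plate)                   # img input (check the label)
project.setInput(1, cam)                     # cam input (check the label)
project["crop"].setValue(False)              # let the projection leave the 0-1 range

uv_tile = nuke.createNode("UVTile")          # choose the target UDIM in its parameters
uv_tile.setInput(0, geo)

uv_project = nuke.createNode("UVProject")
uv_project.setInput(0, uv_tile)
uv_project.setInput(1, cam)
uv_project["projection"].setValue("off")

render = nuke.createNode("ScanlineRender")
render.setInput(1, uv_project)               # obj/scene input
render["projection_mode"].setValue("uv")     # bake into UV space
```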

STMaps by Xuan Prada

One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different footage, or between footage and 3D renders, is also very common. Since we work with different software like 3DEqualizer, Nuke, Flame, etc, having a common and standard way to copy lens information is a good idea. UV maps are probably the easiest way to do this, as they are plain 32 bit EXR images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
  • Write the image as a 32 bit .exr.
  • This will output the UV information and can be read in any software.
  • To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the UV map (see the sketch below).
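As a minimal example, this is how that last step could look in Nuke's Python, assuming the UV map was written out as described; knob names, input order and the file paths are assumptions from memory.

```python
# Sketch: apply a UV/ST map to footage with an STMap node.
import nuke

plate = nuke.createNode("Read")
plate["file"].setValue("/path/to/plate.exr")       # placeholder path

uv_map = nuke.createNode("Read")
uv_map["file"].setValue("/path/to/lens_uv.exr")    # the 32 bit EXR UV map

stmap = nuke.createNode("STMap")
stmap.setInput(0, plate)     # src input: the footage or render
stmap.setInput(1, uv_map)    # stmap input: the UV map
stmap["uv"].setValue("rgb")  # read the UVs from the map's rgb channels
```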

New features in UV Layout v2.08.06 by Xuan Prada

The new version of UV Layout, v2.08.06, was released a few weeks ago and it is time to talk about some of the exciting new features. I'll also mention older tools and features from previous versions that I'm starting to use now and didn't use much before.

  • Display -> Light: It changes the way lighting affects the scene; it is very useful when some parts are occluded in the checking window. It has been there for a while but I just started using it not long ago.
  • Settings -> F1 F2 F3 F4 F5: These buttons will allow you to create shortcuts for other tools, so instead of using the menus you can map one of the function keys to use that tool.
  • Preferences -> Max shells: This option will allow you to increase the number of shells that UV Layout can handle. This is a very, very important feature. I use it a lot, especially when working with crazy data like 3D scans and photogrammetry.
  • Flatten multiple objects at once: It didn't work before but it does now. Just select a bunch of shells and press "r".
  • Pack -> Align shells to axes: Select your shells, enable the option "align shells to axes" and click on pack.
  • Pack by tiles: Now UDIM organization can be done inside UV Layout. Just need to specify the number of UDIMs in X and Y and click on pack.
  • Pack -> Move, scale, rotate: As part of UDIM organization now you can move whole tiles around.
  • Trace masks: This is a great feature! Especially useful if you already have nice UV mapping and suddenly need to add more pieces to the existing UV layout. Just mask out the existing UVs and place the new ones in the free space. To do so, just place the new UVs in boxes, go to display -> trace and select your mask. Click on pack and that's it, your new UVs will be placed in the proper space.

  • Segment marked polys: This is great, especially for very quick UV mapping. Just select a few faces, click on segment marked polys and UV Layout will create flat projections for them.

  • Set size: This is terrific! One of my favourite options. Make the UVs for one object and check the scale under Move/Scale/Rotate -> Set size. Then use that information in the preferences. If you later import a completely different object, UV Layout will use the size of the previous object to match the scale between objects. That means all your objects will have exactly the same scale and resolution, UV wise. Amazing for texture artists!
  • Pin edges: A classic one. When you are relaxing a shell and want to keep the shape, press "pp" on the outer edges to pin them. Then around the eyes or other interior holes press "shift+t" around the edges. Then you can relax the shell keeping the shape of the object.
  • Anchor points: Move one point on the corner with "ctrl+MMB" and press "a" to make it anchor point. Then move another point in the opposite corner and do the same. Then press "s" on top of each anchor point. Then "ss" on any point in between the anchors to align them. Combining this with pinned edges will give you perfect shapes.

Mari to Softimage by Xuan Prada

Recently I was involved in a master class about texturing and shading for animation movies, and as promised I’m posting here the technical way to set up different UV sets inside Softimage.
Super simple process and a really efficient methodology.

  • I’m using this simple asset.
  • These are the UVs of the asset. I’m using different UV sets to increase the quality. In this particular asset you can find four 4k textures for each channel: Color, Specular and Bump.
  • You probably noticed that I’m using my own background image in the texture editor. I think this one is clearer for UV mapping than the default one. If you want, you can download the image, convert it to .pic and replace the original one located in C:\Program Files\Autodesk\Softimage 2012\Application\rsrc
  • This is the render tree set-up. Four 4k textures for color, specular and bump. Each set of four textures is mixed by a mix8color node.
  • Once everything is connected, you still need to offset each image node to match the UV ranges.
  • I know that the UV coordinates in Softimage are a bit weird, so find below a nice chart which will be helpful for further tasks.
  • Keep in mind that you should turn off wrap U and wrap V for each texture in the UV editor.
  • Really quick render set-up for testing purposes.

UDIMs workflow, Maya to Mari by Xuan Prada

Sometimes it is very useful to work with different UV ranges, especially when you are working with huge assets and a high level of detail is needed.
Find below my workflow for dealing with this kind of stuff.

  • Unfold the UVs in different ranges.
  • If you need to bake procedurals, dirtmaps or whatever, keep in mind to change the UV range in the baking options.
  • I always use the same naming convention (there is a small numbering helper after this list).
  • UxxVxx.tiff
  • 0101.tif
  • 2301.tif
  • Create a new project in Mari.
  • Check that the UVs are placed correctly.
  • Create a new channel called “base” and import your baked textures into it.
  • Ready to keep working.
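As a side note on the naming above, this is the standard relationship between 1-based U/V tile indices and Mari's UDIM numbering. Reading the examples as UxxVxx is my interpretation of the convention, and keep in mind that UDIM numbering only covers U tiles 1 to 10.

```python
# Hedged helper: map 1-based U/V tile indices to file names and UDIM numbers.
def tile_name(u, v, ext="tif"):
    return f"{u:02d}{v:02d}.{ext}"        # e.g. 0101.tif for U01 V01

def udim_number(u, v):
    # Standard UDIM numbering: 1001 is U01 V01, valid for u in 1..10.
    return 1000 + u + (v - 1) * 10

for v in range(1, 3):
    for u in range(1, 5):
        print(tile_name(u, v), "->", udim_number(u, v))
```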