Import UDIMs in Zbrush by Xuan Prada

One of the most common tasks once your colour textures are painted is going into Zbrush or Mudbox to sculpt heavy details based on what you have painted.
We all use UDIMs of course, but importing UDIMs into Zbrush is not that straightforward. Let's see how this works.

  • Export all your colour UDIMs out of Mari.
  • Import your 3D asset in Zbrush and go to Polygroups -> UV Groups. This will create polygroups based on your UDIMs.
  • With Ctrl+Shift you can isolate individual UDIMs.
  • Now you have to import the texture that corresponds to the isolated UDIM.
  • Go to Texture -> Import. Do not forget to flip it vertically (if you have many tiles, the sketch after this list batch-flips them for you).
  • Go to Texture Map and activate the texture.
  • At this point you are only viewing the texture, not applying it.
  • Go to Polypaint, enable Colorize and click on Polypaint from texture.
  • This will apply the texture to the mesh. As it's based on polypaint, the resolution of the texture will be based on the resolution of the mesh. If it doesn't look right, just go and subdivide the mesh.
  • Repeat the same process for all the UDIMs and you'll be ready to start sculpting.
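If you have a lot of UDIM tiles, flipping every texture by hand gets tedious. Below is a minimal sketch, not part of Zbrush or Mari, that batch-flips the exported tiles vertically with Pillow before you import them; the folder path and the "<name>.<udim>.tif" naming pattern are assumptions, so adjust them to your own pipeline.

    # Hypothetical helper: batch-flip Mari UDIM exports vertically so they
    # can be imported straight into Zbrush. Requires Pillow.
    import glob
    import os
    from PIL import Image

    def flip_udims(folder, pattern="*.10*.tif", suffix="_flipped"):
        for path in glob.glob(os.path.join(folder, pattern)):
            img = Image.open(path)
            flipped = img.transpose(Image.FLIP_TOP_BOTTOM)  # vertical flip
            root, ext = os.path.splitext(path)
            flipped.save(root + suffix + ext)

    flip_udims("/path/to/mari/exports")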

Manfrotto Befree for visual effects by Xuan Prada

I've been using Manfrotto Befree tripods for a while now, and I just realised that they are a perfect tool for my on-set work.
I rarely use them as my primary tripod, especially when working with big, heavy professional DSLRs and zoom lenses. In my opinion these tripods are not stable enough to support such heavy pieces of gear.

I mean, they are if you are taking "normal" photos, but in VFX we do bracketing all the time, whether for texture references or HDRIs. The combination of the heavy gear, the rotation of the mirror and the quick pace of the bracketing results in slightly misaligned brackets, which obviously means that the alignment process will not be perfect. I wouldn't recommend using these tripods for bracketing with big camera bodies and zoom lenses. I do use them for bracketing with prime lenses such as a 28mm or 50mm; they are not that heavy and the tripods seem to be stable enough with these lenses.

I do strongly recommend these tripods for photogrammetry purposes when you have to move around the subject or set. Mirrorless cameras such as a Sony A7 or Sony a6000 plus prime lenses are the best combination when you need to move a lot around the set.

I also use Befrees a lot as support tripods. They fit my Akromatic kits perfectly, both Mono and Twins. Befree tripods are tiny and light, so I can easily move around with two or three at once, and they even fit in my backpacks or hard cases.

As you can see below, these tripods offer great flexibility in terms of height and extension. They are tiny when collapsed and medium sized when fully extended. Check the features on Manfrotto's site.

I also use these tripods as support for my photogrammetry turntable.
Moving around with such a small setup has never been so easy.

Obviously I also use them for regular photography. Just attach my camera to the provided ball head and start shooting around the set.
Finally, I also use the Befree to mount my Nodal Ninja. Again, you need to be careful while bracketing and always use a remote trigger, but having the possibility to move around with two or three of these tripods is just great.

There are two different versions (both available in aluminium and carbon fibre). Both come with a ball head and quick release plate, but the ball head on the smallest tripod is fixed and can't be removed, which is a big limitation because you won't be able to attach most of the accessories normally used for VFX.

Export from Maya to Mari by Xuan Prada

Yes, I know that Mari 3.x supports OpenSubdiv, but I've already had some bad experiences where Mari creates artefacts on the meshes.
So for now, I will be using the traditional way of exporting subdivided meshes from Maya to Mari. These are the settings that I usually use to avoid distortions, stretching and other common issues.
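As a rough reference, here is a minimal sketch of that export step scripted with maya.cmds. It's only an illustration of the idea: the mesh name, output path and subdivision level are hypothetical placeholders rather than my actual settings.

    # Duplicate the asset, subdivide the copy and export it as an OBJ for Mari.
    # Mesh name, path and divisions are placeholder values.
    import maya.cmds as cmds

    def export_subdivided(mesh="asset_geo", out_path="/path/to/asset_subdiv.obj", divisions=2):
        cmds.loadPlugin("objExport", quiet=True)
        dup = cmds.duplicate(mesh, name=mesh + "_subdiv")[0]  # keep the original untouched
        cmds.polySmooth(dup, divisions=divisions, keepBorder=True, ch=False)
        cmds.select(dup, replace=True)
        cmds.file(out_path, force=True, exportSelected=True, type="OBJexport",
                  options="groups=1;ptgroups=1;materials=0;smoothing=1;normals=1")

    export_subdivided()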

Quick renders by Xuan Prada

Quick and dirty exercises that I do when I have nothing else to do.

Combining Zbrush and Mari displacements in Clarisse by Xuan Prada

We all have to work with displacement maps painted in both Zbrush and Mari.
Sometimes we use 32-bit floating point maps, sometimes 16-bit maps, etc. Combining different displacement depths and scales is a common task for a look-dev artist working in the film industry.

Let's see how to setup different displacement maps exported from Zbrush and Mari in Isotropix Clarisse.

  • First of all, have a look at all the individual displacement maps to be used.
  • The first one has been sculpted in Zbrush and exported as a 32-bit .exr displacement map. The non-displacement value is zero.
  • The second one has been painted in Mari and also exported as a 32-bit .exr displacement map. Technically this map is exactly the same as the Zbrush one; the only difference is the scale.
  • The third displacement map in this exercise also comes from Mari, but in this case it's a 16-bit .tif displacement map, which means that the mid-point will be 0.5 instead of zero.
  • We need to combine all of them in Clarisse and get the expected result.
  • Start by creating a displacement node and assigning it to the mesh.
  • We consider the Zbrush displacement our main displacement layer. That said, the displacement node has to be set up like the image below. The offset or non-displacement value has to be zero, and the front value 1. This will give us exactly the same look that we have in Zbrush.
  • In the material editor I'm connecting a multiply node after every single displacement layer. Input 2 is 1,1,1 by default; increasing or reducing this value controls the strength of each displacement layer. It is not necessary to control the intensity of the Zbrush layer unless you want to, but it is necessary to reduce the intensity of the Mari displacement layers, as they are way off compared with the Zbrush intensity.
  • I also added an add node with a value of -0.5 right after the 16-bit Mari displacement, in order to remap its mid-point to zero so it sits at the same level as the other 32-bit maps (the sketch after this list shows the same maths in plain numbers).
  • Finally I used add nodes to mix all the displacement layers.
  • It is a good idea to set up all the layers individually to find the right look.
  • No displacement at all.
  • Zbrush displacement.
  • Mari high frequency detail.
  • Mari low frequency detail.
  • All displacement layers combined.
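Outside of Clarisse, the node setup above boils down to a simple expression. Here is a minimal sketch of the same maths in plain Python, handy as a sanity check; the layer scales are hypothetical values, not the ones used on this asset.

    # Combine one zero-centred Zbrush map, one zero-centred Mari map and one
    # 0.5-centred Mari map into a single displacement value per sample.
    def combine_displacement(zbrush_32, mari_32, mari_16,
                             zbrush_scale=1.0, mari_32_scale=0.1, mari_16_scale=0.1):
        # 32-bit maps are already centred on zero, so they only need a scale.
        # The 16-bit map is centred on 0.5, so subtract 0.5 before scaling.
        return (zbrush_32 * zbrush_scale
                + mari_32 * mari_32_scale
                + (mari_16 - 0.5) * mari_16_scale)

    # A sample with no detail in any layer stays at zero displacement.
    print(combine_displacement(0.0, 0.0, 0.5))  # -> 0.0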

Film dictionary by Xuan Prada

As every Wednesday, a couple more cinematic words for my film dictionary.

Reaction shot
A shot of a character, generally a close-up, reacting to someone or something seen in the preceding shot. The shot is generally a cutaway from the main action.

Smoke pot
A small container that produces smoke for mechanical effects. The container holds some chemical, such as naphthalene or bitumen, which is fired either by electricity or a burning fuse.

Film dictionary by Xuan Prada

It's Wednesday again. Let's write another couple of cinematic words.

Closed set
A set, either in the studio or on location, that is not open to any visitors, including studio executives, and is open only to the director, performers and crew. Sets are closed if a particularly intimate or controversial scene is being photographed, if the subject or treatment is to be kept secret, or if there are problems in the production itself that must be worked out.

Stunt person
An individual who substitutes for an actor or actress to perform some difficult or dangerous action. This person must, of course, have some resemblance to the original performer and be dressed in an identical manner. Shots of such action are taken so that the identity of the stunt person is hidden. Stunt performers are especially adept at taking falls, surviving crashes or playing piano. When a film requires a group of such people performing a number of these actions, a stunt coordinator is hired.

Film dictionary by Xuan Prada

A couple of cinematic words every Wednesday. I won't be following any theme or order in particular, just for the fun of learning new film related stuff.

American Museum of the Moving Image
Founded in 1988 and located in Astoria, New York, the first museum in the United States devoted to the history of the production, distribution, and exhibition of film, television, and video art. The museum is concerned with all types of work employing the moving image: fictional, documentary, avant-garde, network television, commercials, etc. Abutting the old Astoria Studios, the museum features changing exhibitions while also presenting permanent displays relating to all aspects of the industry.
Especially impressive is its collection of cameras, projectors, television sets, and equipment from the entire history of both cinema and television. The museum also presents screenings of old and new films in two theatres, often featuring the director or someone involved with the production.


Dot
A small, circular gobo or scrim, from 10 to 20 cm in diameter, that blocks part of a luminaire's light from falling on a specific area of the set or on the lens of the camera. Also called a target.

More next week :)

STMaps by Xuan Prada

One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. Since we work with different software like 3D Equalizer, Nuke, Flame, etc., having a common, standard way to copy lens information is a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit .exr images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
  • Write the image as a 32-bit .exr.
  • This will output the UV information and can be read in any software.
  • To apply the lensing information to your footage or renders, just use an STMap node connected to the footage and to the UV map, as in the sketch below.
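Here is a minimal sketch of that last step using Nuke's Python API. The file paths are hypothetical placeholders, and it's worth double checking the input order of the STMap node in your Nuke version before relying on the indices below.

    # Read the plate and the 32-bit UV map, then warp the plate through an STMap node.
    import nuke

    plate = nuke.nodes.Read(file="/path/to/plate.####.exr")
    uv_map = nuke.nodes.Read(file="/path/to/lens_uv_map.exr")

    stmap = nuke.nodes.STMap()
    stmap.setInput(0, plate)     # the "src" input (verify the index order in your version)
    stmap.setInput(1, uv_map)    # the "stmap" input driving the warp
    stmap["uv"].setValue("rgb")  # channels holding the forward UVs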

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and colour bleeding, and much better interaction between the environment and the 3D assets, which basically means better integration for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was to draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information I would need later for the virtual reconstruction.
  • Then I made a quick map of the environment in Photoshop with all the relevant information, just to keep all my annotations clean and tidy.
  • The drawings and annotations would have been good enough for this environment because it's quite simple, but in order to do it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not only texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • For the photogrammetry process I took around 500 shots, each one composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for photogrammetry purposes.
  • I also shot a few equirectangular HDRIs with an even higher dynamic range, then projected these in Mari using the environment projection feature. Once I had completed the projections from the different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. Having the 3D scan, this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also build a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan, along with the corresponding rectilinear HDR images. I applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally, I exported all the UDIMs (around 70) to Maya, all of them 16-bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
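The mipmapping step itself can easily be batched. Below is a minimal sketch of how that could be done, assuming the maketx binary that ships with Arnold/OpenImageIO is available on the PATH; the folder path is a hypothetical placeholder.

    # Convert every exported UDIM .exr into a mipmapped .tx texture for Arnold.
    import glob
    import subprocess

    for exr in sorted(glob.glob("/path/to/udims/*.exr")):
        tx = exr.replace(".exr", ".tx")
        subprocess.run(["maketx", "-o", tx, "--oiio", exr], check=True)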