In this video I will show you my process to convert 3D scans into assets ready for production. I believe the audio is in Spanish, so feel free to mute it or try to learn the language of Cervantes :)
Meshlab is probably the only available solution (proprietary Lidar software doesn't count) when you have to deal with very heavy poly counts. I'm working with some complex terrains, some of them up to 50 million polys, and Maya or Zbrush just can't handle that. I can reduce the poly count considerably fast in Meshlab with its polygon reduction tools.
- This terrain has more than 16 million polys. Maya can't handle it very well, and Zbrush runs out of memory before it can even open it. Just import it in Meshlab.
- You will be using the Quadric Edge Collapse Decimation tool a lot.
- There are different reduction strategies available; I like to use the percentage one, in this case 0.5.
- I'll be getting an 8 million poly terrain.
- I just run the tool one more time to get a 4 million poly terrain. I can work in Maya with this :)
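The arithmetic behind those two decimation passes is simple, and it's handy for planning how many runs you need. A tiny pure-Python sketch (the function name is mine, not a Meshlab tool):

```python
def passes_to_target(poly_count, target, percentage=0.5):
    """How many Quadric Edge Collapse passes at a fixed keep-percentage
    are needed to bring poly_count at or below target.
    Returns (number_of_passes, final_poly_count)."""
    passes = 0
    while poly_count > target:
        poly_count = int(poly_count * percentage)
        passes += 1
    return passes, poly_count

# 16M polys, keeping 50% each pass, aiming for a 4M terrain Maya can handle:
print(passes_to_target(16_000_000, 4_000_000))  # -> (2, 4000000)
```

Two passes at 0.5 give exactly the 16M -> 8M -> 4M reduction described above.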
Having some fun with these small guys.
I will try to do a couple of these a month.
Quick tip here. Whenever possible, use some kind of high frequency detail to capture references for your assets. In this scenario I'm photo-scanning this huge rock with only 50 images and in very bad conditions: low lighting, shot hand-held with no tripod, very windy and rainy.
Thanks to all the great high frequency detail on the surface of this rock, the output is quite good to use as a modeling reference, even to extract highly detailed displacement maps.
Notice in the image below that I'm using only 50 pictures. Not much, you might say. But thanks to all the tiny detail, the photogrammetry software does a very good job reconstructing the point cloud to generate the 3D model. There is a lot of information to find common points between photos.
The shooting pattern couldn't be simpler: just one figure eight all around the subject. The alignment was completely successful in Photoscan.
As you can see here, even with a small number of photos and not the best lighting conditions, the output is quite good.
I did an automatic retopology in Zbrush. I don't care much about the topology; this asset is not going to be animated at all. I just need a manageable topology to create nice UV mapping, reproject all the fine detail in Zbrush and use it later as a displacement map.
A few render tests.
My friend David Munoz Velazquez just pointed me to this great script to flatten geometry based on UV mapping, pretty useful for re-topology tasks. In this demo I use it to create nice topology for 3D garments in Marvelous Designer. Then I can apply any new simulation changes to the final mesh using morphs. Check it out.
One feature that I really like in Clarisse is the shading layers. With them you can drive shaders based on naming conventions or the location of assets in the scene. With this method you can assign shaders to a very complex scene structure in no time. In this particular case I'll be showing you how to shade an entire army and create shading/texturing variations in just a few minutes.
I'll be using an alembic cache simulation exported from Maya using Golaem. Usually you will get thousands of objects with different naming conventions, which makes the shader assignment task a bit laborious. With shading layer rules in Clarisse we can speed up this tedious process a lot.
- Import the alembic cache with the crowd simulation through File -> Import -> Scene.
- In this scene I have 1518 different objects.
- I'm going to create an IBL rig with one of my HDRIs to get some decent lighting in the scene.
- I created a new context called geometry where I placed the army and also created a ground plane.
- I also created another context called shaders where I'm going to place all my shaders for the soldiers.
- In the shaders context I created a new material called dummy, just a lambertian grey shader.
- We are going to use shading layers to apply shaders globally based on context and naming convention. I created a shading layer called army (New -> Shading Layer).
- With the pass (image) selected, select the 3D layer and apply the shading layer.
- Using the shading layer editor, add a new rule to apply the dummy shader to everything in the scene.
- I'm going to add a rule for everything called heavyArmor.
- Then just configure the shader for the heavyArmor with metal properties and its corresponding textures.
- Create a new rule for the helmets and apply the shader that contains the proper textures for the helmets.
- I keep adding rules and shaders for different parts of the soldiers.
- If I want to create random variation, I can create shading layers for specific part names, or, even easier and faster, I can put a few items in a new context and create a new shading rule for them. For the bodies I want to use both caucasian and black skin soldiers. I grabbed a few bodies and placed them inside a new context called black, then created a new shading rule that applies a shader with different skin textures to all the bodies in that context.
- I repeated the same process for the shields and other elements.
- At the end of the process I can have a very populated army with a lot of random texture variations in just a few minutes.
- This is what my shading layers look like at the end of the process.
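The idea behind these rules can be sketched outside Clarisse too. This is not the Clarisse API, just a pure-Python illustration of name-based shader assignment using wildcard patterns; the shader names mirror the ones above, and the last-match-wins ordering is my assumption about how a layer stack resolves overlapping rules:

```python
import fnmatch

def assign_shaders(object_paths, rules):
    """Mimic name-based shading rules: each rule is (pattern, shader_name).
    Rules are evaluated top to bottom; the last matching rule wins."""
    assignments = {}
    for path in object_paths:
        for pattern, shader in rules:
            if fnmatch.fnmatch(path, pattern):
                assignments[path] = shader  # later rules override earlier ones
    return assignments

objects = [
    "/geometry/soldier_01/heavyArmor",
    "/geometry/soldier_01/helmet",
    "/geometry/black/soldier_02/body",
]
rules = [
    ("*", "dummy"),                            # base rule: grey lambert on everything
    ("*heavyArmor*", "metal"),                 # armor pieces get the metal shader
    ("*helmet*", "helmet_shader"),             # helmets get their own textures
    ("/geometry/black/*body*", "skin_dark"),   # context-based skin variation
]
print(assign_shaders(objects, rules))
```

With a rule set like this, adding a thousand more soldiers costs nothing: any object whose name or context matches a pattern picks up the right shader automatically.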
Texture artists, matte painters and environment artists often have to deal with UDIMs in Nuke. This is a very basic template that hopefully can illustrate how we usually handle this situation.
- Slower than using Mari, as each UDIM is treated individually.
- No virtual texturing, slower workflow. Yes, you can use Nuke's proxies but they are not as good as virtual texturing.
- Not dependent on a paint buffer: you always have the best resolution available.
- Non destructive workflow, nodes!
- Save around £1,233 on Mari's license.
- I'll be using this simple footage as base for my matte.
- We need to project this in Nuke and bake it onto different UDIMs to use it later in a 3D package.
- As geometry support I'm using this plane with 5 UDIMs.
- In Nuke, import the geometry support and the footage.
- Create a camera.
- Connect the camera and footage using a Project 3D node.
- Disable the crop option of the Project 3D node. Otherwise the projection won't go any further than the 0-1 UV range.
- Use a UV Tile node to point to the UDIM that you need to work on.
- Connect the img input of the UV Tile node to the geometry support.
- Use a UV Project node to connect the camera and the geometry support.
- Set projection to off.
- Import the camera of the shot.
- Look through the camera in the 3D view and the matte should be projected on to the geometry support.
- Connect a Scanline Render to the UV Project.
- Set the projection model to UV.
- In the 2D view you should see the UDIM projection that we set previously.
- If you need to work with a different UDIM just change the UV Tile.
- So this is the basic setup. Do whatever you need in between like projections, painting and so on to finish your matte.
- Then export all your UDIMs individually as texture maps to be used in the 3D software.
- Here I just rendered the UDIMs extracted from Nuke in Maya/Arnold.
If you are new to Arnold you are probably looking for RAW lighting and albedo AOVs in the AOV editor. And yes, you are right, they are not there. At least not when using AiStandard shaders.
The easiest and fastest solution would be to use AlShaders; they include both RAW lighting and albedo AOVs. But if you need to use AiStandard shaders, you can create your own AOVs quite easily.
- In this capture you can see available AOVs for RAW lighting and albedo for the AlShaders.
- If you are using AiStandard shaders you won't see those AOVs.
- If you still want/need to use AiStandard shaders, you will have to render your beauty pass with the standard AOVs and utility passes, and create the albedo pass by hand. You can easily do this by replacing the AiStandard shaders with Surface shaders.
- If we have a look at them in Nuke, they will look like this.
- If we divide the beauty pass by the albedo pass we will get the RAW lighting.
- We can now modify only the lighting without affecting the colour.
- We can also modify the colour component without modifying the lighting.
- In this case I'm color correcting and cloning some stuff in the color pass.
- With a multiply operation I can combine both elements again to obtain the beauty render.
- If I disable all the modifications to both lighting and colour, I should get exactly the same result as the original beauty pass.
- Finally I'm adding a ground using my shadow catcher information.
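The divide/multiply trick above is simple per-pixel arithmetic: beauty = albedo x raw lighting, so dividing the beauty by the albedo recovers the raw lighting, and multiplying them back reconstructs the beauty. A minimal pure-Python sketch per channel value (the epsilon guard against zero albedo is my addition, not part of the Nuke setup):

```python
def raw_lighting(beauty, albedo, eps=1e-8):
    """Recover raw lighting per pixel: beauty / albedo (guarding zero albedo)."""
    return [b / max(a, eps) for b, a in zip(beauty, albedo)]

def recombine(albedo, lighting):
    """Multiply albedo by lighting to rebuild the beauty pass."""
    return [a * l for a, l in zip(albedo, lighting)]

beauty = [0.25, 0.5]   # two sample pixel values from the beauty pass
albedo = [0.5, 0.5]    # matching albedo values

light = raw_lighting(beauty, albedo)  # -> [0.5, 1.0]
# Multiplying back gives the original beauty, as in the Nuke merge:
print(recombine(albedo, light))       # -> [0.25, 0.5]
```

This is why you can grade the lighting and the colour independently: as long as you only touch one factor, the multiply at the end still reassembles a consistent beauty.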
As every Wednesday, two new words for our dictionary.
(1) The term sometimes used for the editor. (2) A large flag, long and narrow in shape, used for blocking light from the camera or an area of the set.
A 200 Watt spotlight with a three inch Fresnel lens.
This video was created for elephantvfx.com which means that it's only available in Spanish. But you can mute it and follow what I do visually :)
I basically explain how to render UV AOVs in a couple of different ways using Maya and Arnold.
The group of directors with the highest level of achievement (Chaplin, Griffith, Hitchcock, Welles, etc.). The term is used in auteur criticism (Andrew Sarris).
A second shot of a scene made either as insurance in case the previous shot might be faulty or as an alternative to offer the editor another camera angle or camera distance.
I've been working on this character for a texturing and look-dev course that I'll be presenting pretty soon. Hope you like it.
One of the most common tasks once your colour textures are painted is going to Zbrush or Mudbox to sculpt some heavy details based on what you have painted in your colour textures.
We all use UDIMs of course, but importing UDIMs in Zbrush is not that easy. Let's see how this works.
- Export all your colour UDIMs out of Mari.
- Import your 3D asset in Zbrush and go to Polygroups -> UV Groups. This will create polygroups based on UDIMs.
- With ctrl+shift you can isolate UDIMs.
- Now you have to import the texture that corresponds to the isolated UDIM.
- Go to Texture -> Import. Do not forget to flip it vertically.
- Go to Texture Map and activate the texture.
- At this point you are only viewing the texture, not applying it.
- Go to Polypaint, enable Colorize and click on Polypaint from texture.
- This will apply the texture to the mesh. As it's based on polypaint, the resolution of the texture will be based on the resolution of the mesh. If it doesn't look right, just go and subdivide the mesh.
- Repeat the same process for all the UDIMs and you'll be ready to start sculpting.
I've been using Manfrotto Befree tripods for a while now, and I just realized that they are a perfect tool for my on-set work.
I rarely use them as a primary tripod, especially when working with big and heavy professional DSLRs and zoom lenses. In my opinion these tripods are not stable enough to support such heavy pieces of gear.
I mean, they are if you are taking "normal" photos, but in VFX we do bracketing all the time, for texturing references or HDRIs. The combination of the gear, plus the rotation of the mirror, plus the quick pace of the bracketing, will result in slightly different brackets, which obviously means that the alignment process will not be perfect. I wouldn't recommend using these tripods for bracketing with big camera bodies and zoom lenses. I do use them for bracketing with prime lenses such as a 28mm or 50mm; they are not that heavy, and the tripods seem stable enough with these lenses.
I do strongly recommend these tripods for photogrammetry purposes when you have to move around the subject or set. Mirrorless cameras such as the Sony A7 or Sony a6000 plus prime lenses are the best combination when you need to move a lot around the set.
I also use Befrees a lot as support tripods. They fit my Akromatic kits perfectly, both Mono and Twins. Befree tripods are tiny and light, so I can easily move around with two or three of them, and they even fit in my backpacks or hard cases.
As you can see below, these tripods offer great flexibility in terms of height and extension. They are tiny when collapsed and medium sized when fully extended. Check the features on Manfrotto's site.
I also use these tripods as support for my photogrammetry turntable.
Moving around with such a small setup has never been so easy.
Obviously I also use them for regular photography. I just attach my camera to the provided ball head and start shooting around the set.
Finally, I also use a Befree to mount my Nodal Ninja. Again, you need to be careful while bracketing and always use a remote trigger, but having the possibility to move around with two or three of these tripods is just great.
There are two different versions, both available in aluminium and carbon fibre. Both of them come with a ball head and quick release plate, but the ball head on the smallest tripod is fixed and can't be removed. This means a lot of limitations, because you won't be able to attach most of the accessories normally used for VFX.
Yes, I know that Mari 3.x supports OpenSubdiv, but I've had some bad experiences already where Mari creates artefacts on the meshes.
So for now I will stick to the traditional way of exporting subdivided meshes from Maya to Mari. These are the settings that I usually use to avoid distortions, stretching and other common issues.
The real ones :)