In this video I will show you my process for converting 3D scans into assets ready for production. The audio is in Spanish, so feel free to mute it or try to learn some of Cervantes' language :)
- Export your scene from Maya with the geometry and camera animation.
- Import the geometry and camera in Nuke.
- Import the footage that you want to project and connect it to a Project 3D node.
- Connect the cam input of the Project 3D node to the previously imported camera.
- Connect the img input of the ReadGeo node to the Project 3D node.
- Look through the camera and you will see the image projected onto the geometry.
- Paint or tweak whatever you need.
- Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
- Set the projection option of the UVProject node to off.
- Use a ScanlineRender node and connect its obj/scene input to the UVProject.
- Set the projection mode to UV.
- If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry's UVs.
- Finally use a write node to output your DMP work.
- Render in Maya as expected.
What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?
As you probably know, Zbrush doesn't support Ptex. I'm not a huge fan of Ptex (though I may become one), but sometimes I don't have the time, or simply don't want, to make proper UV mapping. So if Zbrush doesn't export Ptex and my assets don't have any UV coordinates, can't I use Ptex at all for my displacement information?
Yes, you can use Ptex.
- In the image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy number of polygons.
- Now I have imported the model into Zbrush via .obj. Only 500,000 polys, but it still looks great.
- We are going to be using Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production ready model.
- Using the Zremesher tool, which is great for some types of retopology tasks, we get this low-res model. Good enough for our purpose here.
- The next step is exporting both models, high and low resolution, as .obj files.
- We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
- Once imported keep both of them visible.
- Export displacement maps. Have a look in the image below at the options you need to tweak.
- Basically you need to activate Ptex displacement, 32 bits, the texel resolution, etc.
- To set up your displacement in Maya and V-Ray, just follow the 32-bit displacement rule.
- And that's it. You should be able to render your Zbrush details using Ptex now.
These days we use a lot of 3D scans in VFX productions.
They are very useful: they've got a lot of detail and we can use them for different purposes. 3D scans are great.
But obviously, a 3D scan needs to be processed in so many ways, depending on the use you are looking for. It can be used for modelling reference, for displacement extraction, for colour and surface properties references, etc.
One of the most common uses is as a base for modelling tasks.
If so, you would need to retopologize the scan to convert it into a proper 3D model, ready to be mapped, textured and so on.
In Maya 2014 we have a few tools that are great and easy to use.
I’ve been using them for quite a while now when processing my 3D scans, so let me explain which tools I use and how I use them.
- In this 3D scan you can see the amount of nice details. They are very useful for a lot of different tasks.
- But if you check the actual topology you will realize it is quite useless at this point.
- Create a new layer and put the 3D scan inside it.
- Activate the reference option so we can’t select the 3D scan in the viewport, which is quite handy.
- In the snap options, select the 3D scan as Live Surface.
- Enable the Modeling Toolkit.
- Use live surface as transform constraints.
- This will help us to stick the new geometry on top of the 3D scan with total precision.
- Use the Quad Draw tool to draw polygons.
- You will need 4 points to create a polygon face.
- Once you have 4 points, hold Shift to preview the actual polygon.
- Shift click will create the polygon face.
- Draw as many polygons as you need.
- LMB to create points. MMB to move points/edges/polys. CTRL+LMB to delete points/edges/polys.
- CTRL+MMB to move edge loops.
- If you want to extrude edges, just select one, then CTRL+SHIFT+LMB and drag in the desired direction.
- To add edge loops SHIFT+LMB.
- To add edge loops in the exact center SHIFT+MMB.
- To draw polygons on the fly, click CTRL+SHIFT+LMB and draw in any direction.
- To change the size of the polygons CTRL+SHIFT+MMB.
- To fill an empty space with a new polygon click on SHIFT+LMB.
- To weld points CTRL+MMB.
- If you need to do retopology for cylindrical, tubular or similar surfaces, it is even easier and faster.
- Just create a volume big enough to contain the reference model.
- Then go to Modeling Toolkit, edit -> Shrinkwrap Selection.
- The new geometry will stick on to the 3D scan.
- The new topology will be clean, but maybe you were hoping for something more tidy and organized.
- No problem, just select the Quad Draw tool. By default the relax tool is activated. Paint wherever needed and voilà: clean and tidy geometry following the 3D scan.
Recently, working with V-Ray, I discovered that these are the render passes I use most often.
Simple scene, simple asset, simple texturing, shading and lighting, just to show my render passes and pre-compositing stuff.
- Global Illumination
- Direct lighting
- Snow (or up/down)
- XYZ (or global position)
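A minimal pure-Python sketch of what two of these passes are for, assuming a simple additive light decomposition (the pixel values and helper names are illustrative, not V-Ray API):

```python
# Minimal sketch of how these passes relate in pre-comp.
# Pixel values and function names are illustrative, not V-Ray API.

def rebuild_beauty(gi, direct):
    """At its simplest, light passes are additive: beauty ~= GI + direct."""
    return tuple(g + d for g, d in zip(gi, direct))

def snow_mask(world_normal_y):
    """The 'snow' (up/down) pass encodes how much a surface faces upwards,
    clamped to 0-1, which makes a handy mask for dirt, dust or snow."""
    return max(0.0, min(1.0, world_normal_y))

pixel_gi = (0.10, 0.12, 0.15)      # bounced light contribution
pixel_direct = (0.40, 0.38, 0.35)  # direct light contribution
print(rebuild_beauty(pixel_gi, pixel_direct))  # roughly (0.5, 0.5, 0.5)
print(snow_mask(0.75))  # mostly upward-facing surface
```

In a real comp you would do this per pixel with Merge (plus) nodes, but the arithmetic is the same.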
I was lucky enough to find this simple but effective script to import your Mari textures into Maya really quickly.
It is a Python script created by Kushal Goenka.
Follow these instructions to install the script.
# This Script Automates the Process of Setting up given MARI Texture Patches
# into one Single Layered Texture in Maya.
# Copy the script to your '\maya\2012-x64\scripts' folder. '2012-x64' might be different.
# Source the script. (Script Editor > File > Source Script...)
# Call the Python command: 'Mari2Maya()' (or add it to a shelf).
# Export textures from MARI with '$UDIM.extension' at the end.
# For example: $ENTITY_$CHANNEL_$UDIM.tif >> Castle07_color_1003.tif
# 1. Drag the texture files into the Hypershade.
# 2. Drag-select all imported texture file nodes in the Hypershade work area.
# 3. Run the script via 'Mari2Maya()'. Let the magic happen.
- Select the object whose shading information you want to bake.
- Go to Lighting&Shading -> Bake (RenderMan).
- Set up the render settings and the outputs that you want to bake.
- Your baked textures will be placed here project/renderman/sceneName/bakedMaps/
- Select the object’s shape node in the Attribute Editor and then go to Attributes -> RenderMan -> Add Subdivision Scheme. This will create a smooth surface.
- Load your displacement texture in the Hypershade.
- Play with the Alpha Gain and Alpha Offset to scale the image.
- Alpha Offset should be the negative half of Alpha Gain. So if Alpha Gain is 2, Alpha Offset should be -1.
- Drag the displacement texture on to the displacement material in the shading group attributes.
- This will create a displacement node.
- Select the displacement node and go to Attributes -> RenderMan -> Add Displacement Attributes.
- Set the displacement bound to something similar to your highest displacement value.
- If you are using ray trace rendering you need to add ray traced attributes to your displacement.
- Select your shape node and go to Attribute -> RenderMan -> Manage attributes and select TraceDisplacement.
- Turn the shading rate down to increase the quality of your displacement. You can add a RenderMan attribute to control this instead of changing the global render options, which will save you a lot of render time.
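The Alpha Gain / Alpha Offset rule above is easy to sanity-check with a few lines of plain Python (the helper names are hypothetical; the dictionary keys mirror Maya's file node attributes, this is just the arithmetic, not Maya API):

```python
# The Alpha Gain / Alpha Offset rule, as plain arithmetic.
# Helper names are hypothetical; keys mirror Maya's file node attributes.

def alpha_settings(alpha_gain):
    """Alpha Offset is the negative half of Alpha Gain."""
    return {"alphaGain": alpha_gain, "alphaOffset": -alpha_gain / 2.0}

def displaced_range(alpha_gain, tex_min=0.0, tex_max=1.0):
    """Displacement range produced for a texture in [tex_min, tex_max].
    With offset = -gain/2, a 0-1 map pushes and pulls symmetrically
    around the surface, and mid-grey (0.5) means no displacement."""
    offset = alpha_settings(alpha_gain)["alphaOffset"]
    return (tex_min * alpha_gain + offset, tex_max * alpha_gain + offset)

print(alpha_settings(2.0))   # {'alphaGain': 2.0, 'alphaOffset': -1.0}
print(displaced_range(2.0))  # (-1.0, 1.0)
```

The upper end of that range is also a sensible starting point for the displacement bound mentioned above.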
I’m starting new work with V-Ray 2.0 for Maya. I have never worked with this render engine before, so first things first.
One of my first tasks is to create a nice neutral light rig for testing shaders and textures. Setting up a linear workflow is one of my priorities at this point.
Below is a quick way to set this up.
- Set up your gamma. In this case I’m using 2.2.
- Click on “don’t affect colors” if you want to bake your gamma correction into the final render. If you don’t click it, you’ll have to correct your gamma in post. No big deal.
- The linear workflow option is something created by Chaos Group to fix old V-Ray scenes that don’t use a linear workflow. You shouldn’t use it at all.
- Click on affect swatches to see the color pickers with the gamma applied.
- Once you are working with gamma applied, you need to correct your color textures. There are two different ways to do it.
- First option: add a gamma correction node to each color texture node. In this case I’m using gamma 2.2, which means I need to use a value of 0.455 in my gamma node.
- Second option: Instead of using gamma correction nodes for each color texture node, you can click on the texture node and add a V-Ray attribute to control this.
- By default all texture nodes are read as linear. Change your color textures to be read as sRGB.
- Click on view as sRGB in the V-Ray frame buffer; otherwise you’ll see your renders in the wrong color space.
- This is the difference between rendering with the option “don’t affect colors” enabled or disabled. As I said, no big deal.
- First of all activate Mental Ray in the Rendering Options.
- Create a Physical Sun and Sky system.
- Activate Final Gather. For now it should be enough to select the Preview Final Gather preset. It’s just for testing purposes.
- Check that the mia_exposure_simple lens shader has been added to the camera. And Check that the gamma is set to 2.2
- Launch a render and you’ll notice that everything looks washed out.
- We need to add a gamma correction node after each texture node, even procedural color shaders.
- Connect the texture file’s outColor to the gamma correction node’s value, then connect the gamma correction node’s outValue to the shader’s diffuse.
- Use the value 0.455 in the gamma node.
- The gamma correction for sRGB devices (with a gamma of approximately 2.2) is 1/2.2 = 0.4545. If your texture files are gamma corrected for gamma 2.2, put 0.455 into the Gamma attribute text boxes.
- If you launch a render again, everything should look fine.
- Once you are happy with the look of your scene, to do a batch render you need to set the gamma value of the camera lens shader to 1.
- Under the quality tab, in the framebuffer options, select RGBA float, set the gamma to 1 and the colorspace to raw.
- Render using OpenEXR and that’s it.
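The 1/2.2 arithmetic above can be sketched in a few lines of plain Python (just the math, no Maya involved; the function names are illustrative):

```python
# Where the 0.455 value comes from, as plain Python (no Maya involved).

def inverse_gamma(display_gamma=2.2):
    """Value to type into a gamma-correct node for textures baked at 2.2."""
    return 1.0 / display_gamma

def linearize(texture_value, display_gamma=2.2):
    """Undo the baked-in gamma so the renderer can work in linear light."""
    return texture_value ** display_gamma

print(round(inverse_gamma(), 4))   # 0.4545, usually typed as 0.455
print(round(linearize(0.5), 3))    # a mid-grey texture value is ~0.218 linear
```

This is also why renders look washed out before the correction: every texture is effectively brighter than its linear value.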
Yes I know, making your Mari textures work inside Maya can be a bit weird, especially if you have never worked with multiple UV spaces before.
I hope to give you some clues with this quick and dirty step by step tutorial.
I’m using the blacksmith guy from The Foundry who has 40 textures with 4k resolution each.
- First of all check your model and UVs.
- Export all your textures from Mari. You know, right click on the desired channel and export.
- Now you can type the naming convention that you want to use. I like to use COMPONENT_UDIM.tif COL_1001.tif for example.
- Check your output folder. All your textures should have been exported.
- Import your model into Maya and check the UV mapping. You need to understand how the UV tiles are numbered inside Maya in order to offset your texture maps.
- The default UV tile is 0-0; the next one to the right is 1-0, the one above it is 0-1, and so on.
- Open the first texture map called COL_1001.tif in the hypershade and rename the image node to COL_1001 and the 2D placement node to UDIM_1001.
- Do the same with all the textures.
- Select all the texture nodes and open the attribute spread sheet.
- Set the default color RGB to 0.
- Select all the 2D place texture nodes and open again the attribute spread sheet.
- Switch off wrapU and wrapV.
- Type the properly offsets in the translate frameU and translate frameV.
- Create a layered texture node.
- Select all the texture images nodes and click and drag with MMB from an empty space of the hypershade to the layered texture node attributes tab. This will create one layer with each texture map.
- Delete the default layer.
- Set the blending mode of all the layers to lighten.
- Connect the layered texture to the input color of a shader of your choice.
- Repeat the whole process with all your channels. (SPEC, BUMP, DISP, etc)
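The frame offsets in the steps above follow directly from the UDIM number in each texture's name. A small pure-Python helper (hypothetical function; the dictionary keys are the real place2dTexture attribute names you type into the spread sheet) shows the pattern:

```python
# How Mari UDIM numbers map to Maya place2dTexture frame offsets.
# udim_offsets is a hypothetical helper; the dict keys are the
# place2dTexture attributes edited in the attribute spread sheet.

def udim_offsets(udim):
    """UDIM 1001 is tile (0, 0); tiles run ten across before stepping up in V."""
    index = udim - 1001
    return {"translateFrameU": index % 10, "translateFrameV": index // 10}

for udim in (1001, 1002, 1011, 1012):
    print(udim, udim_offsets(udim))
# 1001 -> U=0 V=0, 1002 -> U=1 V=0, 1011 -> U=0 V=1, 1012 -> U=1 V=1
```

So COL_1003.tif, for example, gets translate frameU 2 and translate frameV 0.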
One of the most useful workflows when texturing is baking your textures from one UV set to another. You may need to do this for different reasons; one of them, for example, could be using different-resolution models with different UV mapping, or using a different UV mapping for grooming, etc.
The first time I tried this in Maya I realized that the Mental Ray Batch Bake tool doesn't work properly; I don't know why, but I couldn't use it.
I solved the problem using Maya's Transfer Maps tool and decided to write it down for future reference.
- Check the different UV sets in Maya.
- Apply your textures and shaders.
- I'm using six different shaders with six different texture maps.
- If you use the Mental Ray Batch Bake tool (commonly used for baking purposes) and configure all the parameters, you'll realize that the baked maps are completely black. Something is wrong related to UV sets. A bug? I don't know.
- You need to use the Maya Transfer Maps tool. Lighting/Shading -> Transfer Maps.
- Duplicate the mesh and rename the copies to source and target.
- Select the target and its UV set.
- Select source.
- Select desired map to bake. (probably diffuse)
- Select the path.
- Select resolution.
- Your baked texture is ready.
Black holes are a key feature in 3D lighting and compositing, but black holes with bounced information are super!
- Apply a Mental Ray Production Shader called “mip_rayswitch_advanced” to your black hole object.
- In the “eye” channel, connect a surface shader with its “out_matte_opacity” parameter set to pure black.
- In the Final Gather input, connect your object's original shader (a blinn shader, for example).