This is a very quick demo of how to install and use the mmColorTarget gizmo on Mac, or at least how I use it in my texturing/reference and lighting process. The gizmo itself was created by Marco Meyer.
This is a very quick overview of how to use AOVs in Clarisse.
I started from this very simple scene.
Select your render image and then the 3D layer.
Open the AOV editor and select the components that you need for your compositing. In my case I only need diffuse, reflection and SSS.
Click on the plus button to enable them.
Now you can check every single AOV in the image view frame buffer.
Create a new context called "compositing" and inside of it create a new image called "comp_image".
Add a black color layer.
Add an add filter and texture it using a constant color. This will be the entry point for our comp.
Drag and drop the constant color to the material editor.
Drag and drop the image render to the material editor.
If you connect the image render to the constant color input, you will see the beauty pass. Let's split it into AOVs.
Rename the map to diffuse and select the diffuse channel.
Repeat the process with all the AOVs, you can copy and paste the map node.
Add a few add nodes to merge all the AOVs until you get the beauty pass. That's it: your comp in a real-time 3D environment. Whatever you change or add in your scene will update automatically.
Let's say that you don't need your comp inside Clarisse. Fine, just select your render image, configure the output and open the render manager to output your final render.
- Just do the comp in Nuke as usual.
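Whether the comp lives in Clarisse or in Nuke, rebuilding the beauty from additive AOVs is the same operation: a per-channel sum. Here is a minimal plain-Python sketch of that merge logic (the pixel values are invented for illustration; this is not the Clarisse or Nuke API):

```python
# Rebuilding the beauty pass from additive AOVs is a per-channel sum,
# exactly what the chain of add nodes (or plus merges in Nuke) does.

def merge_plus(*aovs):
    """Sum any number of RGB samples channel by channel."""
    return tuple(sum(channel) for channel in zip(*aovs))

# Hypothetical values for a single pixel:
diffuse    = (0.30, 0.22, 0.18)
reflection = (0.05, 0.05, 0.06)
sss        = (0.10, 0.04, 0.03)

beauty = merge_plus(diffuse, reflection, sss)
print(tuple(round(c, 2) for c in beauty))  # (0.45, 0.31, 0.27)
```

A handy sanity check: if the summed result matches the beauty render, you know no AOV is missing from the comp.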
I recorded this video a while ago for another website. It's in Spanish, but you won't have any problem following the instructions just by watching the video.
Two quick shots that I did for this character.
This is a work in progress image.
More shots on their way.
Just for fun :)
Short and sweet (hopefully).
This seems to be quite a common topic these days. Mari and Zbrush are widely used by texture artists, and combining displacement maps during look-dev is a must.
I'll be using Maya and Arnold for this demo, but the same workflow applies to any 3D software and renderer.
- Using Zbrush displacements is a no-brainer. Just export them as 32-bit .exr and that's it. Set your render subdivisions in Arnold and leave the default displacement settings: the zero value is always 0, and the height should be 1 to match your Zbrush sculpt.
- These are the maps that I'm using. First the Zbrush map and below the Mari map.
- No displacement at all in this render. This is just the base geometry.
- In this render I'm only using the Zbrush displacement.
- In order to combine Zbrush displacement maps and Mari displacement maps, you need to normalise the ranges. If you used the same range, your Mari displacement would be huge compared to the Zbrush one.
- A multiply node makes it easy to control the strength of the Mari displacement. Connect the map to input1 and play with the values in input2.
- To mix both displacement maps you can use an average node: connect the Zbrush map to input0 and the Mari map (via the multiply node) to input1.
- The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, then connect the ramp to the displacement's default input.
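Numerically, the graph above is just a multiply followed by an average. Here is a plain-Python sketch of what happens per point (the strength value is hypothetical, something you would tune by eye; this is not the Maya/Arnold API):

```python
# multiply -> average, as in the node graph described above.
# Assumed ranges: the Zbrush map is 32-bit float with zero at 0 and height 1;
# the raw Mari paint sits in 0-1 and needs scaling down to a sensible height.

MARI_STRENGTH = 0.05  # input2 of the multiply node (hypothetical value)

def combine_displacement(zbrush_value, mari_value, strength=MARI_STRENGTH):
    mari_scaled = mari_value * strength        # multiply node
    return (zbrush_value + mari_scaled) / 2.0  # average node

print(round(combine_displacement(0.2, 1.0), 3))  # (0.2 + 0.05) / 2 = 0.125
```

Keep in mind that the average node also halves the Zbrush contribution, so the overall displacement height changes slightly once the maps are mixed.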
- In this render I'm combining both, Zbrush map and Mari map.
- In this other example I'm going to combine two displacements using a mask. I'll use a Zbrush displacement as the general displacement, then use a mask painted in Mari to reveal another displacement, also painted in Mari.
- As the mask, I'm going to use the same symbol that I used before as displacement 2.
- And as new displacement I'm going to use a procedural map painted in Mari.
- The first thing to do is exactly the same operation we did before: control the strength of the Mari displacement using a multiply node.
- Then use another multiply node with the Mari map (the multiply node) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
- The rest is exactly the same as before: connect the Zbrush displacement to input0 of the average node and the Mari displacement (the multiply node) to input1. Then connect the average node to the ramp's color and the ramp to the displacement's default input.
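The masked setup only adds one more multiply before the average. Per point it works out like this (a plain-Python sketch with hypothetical values, not the actual node APIs):

```python
def masked_displacement(zbrush_value, mari_value, mask_value, strength=0.05):
    mari_scaled = mari_value * strength         # first multiply: normalise the range
    mari_masked = mari_scaled * mask_value      # second multiply: the mask gates it
    return (zbrush_value + mari_masked) / 2.0   # average with the Zbrush map

# Black mask: only the Zbrush displacement survives (halved by the average).
print(round(masked_displacement(0.2, 1.0, 0.0), 3))  # 0.1
# White mask: same result as the unmasked combine.
print(round(masked_displacement(0.2, 1.0, 1.0), 3))  # 0.125
```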
- This is the final render.
This is the new trailer for Mission Impossible 5. I've been working on this over the last few months.
Great job by our team at Double Negative.
We use RGB masks all the time in VFX, don't we?
They are very handy, and by combining four channels (RGB+A) into a single texture map we can save a lot of extra texture maps.
We use them to mix shaders at the look-dev stage, as IDs for compositing, or as utility passes for things like motion blur or depth.
Let's see how I use RGB masks in my usual software: Maya, Clarisse, Mari and Nuke.
- In Maya I use a surface shader with a layered texture connected.
- I connect all the shaders that I need to mix to the layered texture.
- Then I use a remapColor node, with the RGB mask connected, as the mask for each of the shaders.
- We need to indicate which RGB channel we want to use in each remapColor node.
- Then just use the output as mask for the shaders.
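Per pixel, the remapColor plus layered-texture setup boils down to picking one channel of the mask and using it as a blend weight. A small plain-Python sketch (invented values; not the Maya API):

```python
def pick_channel(rgb_mask, channel):
    """Equivalent to selecting R, G or B in the remapColor node."""
    return rgb_mask["RGB".index(channel)]

def blend(shader_a, shader_b, weight):
    """Layered-texture mix: weight 1.0 shows shader_a, 0.0 shows shader_b."""
    return tuple(a * weight + b * (1.0 - weight)
                 for a, b in zip(shader_a, shader_b))

mask_pixel = (1.0, 0.0, 0.0)       # pure red: this pixel belongs to shader A
w = pick_channel(mask_pixel, "R")
print(blend((0.8, 0.1, 0.1), (0.1, 0.1, 0.8), w))  # (0.8, 0.1, 0.1)
```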
- In Clarisse I use a reorder node connected to my RGB mask.
- Just indicate the desired channel in the channel order parameter.
- To convert the RGB channel to alpha just type it in the channel order field.
- In Mari you will only need a shuffle adjustment layer, where you can select the required channel.
- In Nuke you can use a shuffle node and select the channel.
- Or maybe a keyer node, selecting the channel in the operation parameter (this will place the channel in the alpha only).
As VFX artists we always need to place our color charts and lighting checkers (or practical spheres) somewhere on the ground while shooting bracketed images for panoramic HDRI creation. And we know that every single look-development and / or lighting artist is going to request at least all these references for their tasks back at the facility.
I'm tired of seeing my VFX peers on set placing their lighting checkers and color charts on top of their backpacks or hard cases to make them visible in their HDRIs. In the best scenario, they put the lighting checkers on a tripod with its legs bent.
I've been using my own base to place my lighting checkers and all my workmates keep asking me about it, so it's time to make it available for all of you working on set on a daily basis.
The akromatic base is light, robust and made of high-quality stainless steel. It is super simple to attach our lighting checkers to it and keep them safe and, more importantly, visible in all your images. Moving around the set with your lighting checkers and color charts from take to take is now simple, quick and safe.
The akromatic base is compatible with our lighting checkers "Mono" and "Twins".
The new version of UV Layout, v2.08.06, was released a few weeks ago, and it is time to talk about some of its exciting new features. I'll also mention older tools and features from previous versions that I didn't use much before but am starting to use now.
- Display -> Light: This changes the way lighting affects the scene. It is very useful when some parts are occluded in the checking window. It has been there for a while, but I only started using it recently.
- Settings -> F1 F2 F3 F4 F5: These buttons allow you to create shortcuts for other tools, so instead of using the menus you can map one of the function keys to a tool.
- Preferences -> Max shells: This option allows you to increase the number of shells that UV Layout can handle. It is a very important feature; I use it a lot, especially when working with heavy data like 3D scans and photogrammetry.
- Flatten multiple objects at once: It didn't work before but it does now. Just select a bunch of shells and press "r".
- Pack -> Align shells to axes: Select your shells, enable the option "align shells to axes" and click on pack.
- Pack by tiles: Now UDIM organization can be done inside UV Layout. Just need to specify the number of UDIMs in X and Y and click on pack.
- Pack -> Move, scale, rotate: As part of UDIM organization now you can move whole tiles around.
- Trace masks: This is a great feature! Especially useful if you already have nice UV mapping and suddenly need to add more pieces to the existing layout. Just mask out the existing UVs and place the new ones in the free space. To do so, place the new UVs in boxes, go to displace -> trace and select your mask. Click on pack and that's it: your new UVs will be placed in the proper space.
- Segment marked polys: This is great, especially for very quick UV mapping. Just select a few faces, click on segment marked polys, and UV Layout will create flat projections for them.
- Set size: This is terrific, one of my favourite options. Make the UVs for one object and check the scale under Move/Scale/Rotate -> Set size. Then use that information in the preferences. If you later import a completely different object, UV Layout will use the size of the previous object to match the scale between them. That means all your objects will have exactly the same scale and resolution, UV-wise. Amazing for texture artists!
- Pin edges: A classic one. When you are relaxing a shell and want to keep its shape, press "pp" on the outer edges to pin them, and press "shift+t" around the edges of the eyes or other interior holes. Then you can relax the shell while keeping the shape of the object.
- Anchor points: Move one point in a corner with "ctrl+MMB" and press "a" to make it an anchor point. Then move another point in the opposite corner and do the same. Press "s" on top of each anchor point, then "ss" on any point in between the anchors to align them. Combining this with pinned edges will give you perfect shapes.
When subdividing models in Clarisse for rendering displacement maps, the software subdivides both geometry and UVs. Sometimes we might need to subdivide only the mesh but keeping the UVs as they are originally.
This depends on production requirements and obviously on how the displacement maps were extracted from Zbrush or any other sculpting package.
If you don't need to subdivide the UVs, you should first extract the displacement map with the SmoothUV option turned off.
Then in Clarisse, select the option UV Interpolation Linear.
The very first teaser trailer for Mission Impossible 5 is out!
I've been working on this project for a while :)
The other day I published a very simple image that I did just to test a few things: a couple of photographic techniques, my new Promote Control, procedural masks done in Substance Designer and some grading-related stuff in Nuke. Just a few things I had wanted to try for a while.
This is a quick breakdown, as simple as the image itself.
- The very first thing I did was take a few stills of the plate that I wanted to use as the background for my CG elements. From the very beginning I wanted to create an extremely simple image, something I could finish in a few hours. At the same time I wanted it to feel very realistic, and I'm not talking about lighting or rendering but about the general feeling of a real picture: bad framing, bad composition, a total lack of lighting intention, no cinematic components at all. The usual bad picture that everyone posts on social networks once in a while, without any narrative or visual value.
- In order to create good and realistic CG compositions we need to gather a lot of information on set. In this case everything is very simple. When you take pictures you can read the metadata later on the computer. This tells you the sensor size of your digital camera and the focal length used to take the pictures. With this data we can replicate the camera in Maya or any other 3D package.
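From the sensor width and focal length in the metadata, the camera's angle of view follows directly, which is what lets you rebuild the 3D camera. A quick sketch (the 36 mm / 50 mm values below are hypothetical, not from this shoot):

```python
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees for a rectilinear lens."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# e.g. a full-frame sensor (36 mm wide) with a 50 mm lens:
print(round(horizontal_fov(36.0, 50.0), 1))  # 39.6
```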
- It is also very important to get good color references. Just using a Macbeth Chart, we can neutral grade the plate and everything we want to re-create from scratch in CG.
- The next step is to gather lighting information on set. As you can imagine, everything is simple here because this is a tiny, simple image. There are no practical lights on set, just a couple of tiny bulbs on the ceiling, but they don't affect the subject so I didn't worry much about them. The main light source is the sun (although it was pretty cloudy) coming through the big glass door on the right side of the image, out of camera. So we could say the lighting here is pretty much ambient light.
- With only an equirectangular HDRI we can easily reproduce the lighting conditions on set. We won't need CG lights or anything like that.
- This is possible because I'm using a very nice HDRI with a huge dynamic range: the linear values go up to 252.0.
- I didn't even bother cleaning up the HDRI map. I left the tripod back there and didn't fix some ghosting issues. These little issues didn't affect my CG elements at all.
- It is very important to have lighting and color references inside the HDRI. If you pay attention you will see a Macbeth Chart and my akromatic lighting checkers placed in the same spot where the CG elements will be placed later.
- Once the HDRI is finished, it is very important to have color and lighting references in the context of the shot. I took some pictures with the Macbeth Chart and the akromatic lighting checkers framed in the shot.
- It is not exactly the same framing as the actual shot, but the placement of the checkers, the light source and the middle exposure remain the same.
- For this simple image we don't need any tracking or rotoscoping work. It is a single-frame job and we have a 90-degree angle between the floor and the shelf. With that in mind, plus the metadata from the camera, reproducing the 3D camera is extremely simple.
- As you probably expected, modelling was very simple and basic.
- With these basic models I also tried to keep the texturing very simple. I just found a few references on the internet and tried to match them as closely as I could. I only needed three texture channels (diffuse, specular and bump). Every single object has a 4K texture map with only one UDIM; I didn't need more than that.
- As I said before, lighting wise I only needed an IBL setup, so simple and neat. Just an environment light with my HDRI connected to it.
- It is very important that your HDRI map and your plate share a similar exposure so you can neutral grade them. With the same or similar exposure and Macbeth Charts in all your sequences, it is very simple to copy/paste gradings.
- The akromatic lighting checkers help a lot to place all the reflections correctly and to regulate the lighting intensity. They also help to establish the penumbra area and the behaviour of the light decay.
- Once the placement, intensity and grading of the IBL are working fine, it is a good idea to render a "clay" version of the scene. This is a very smart way to check the behaviour of the shadows.
- In this particular example they work very well, thanks to the huge range in my HDRI. With a clamped HDRI this wouldn't work as well, and you would probably have to recreate the shadows using CG lights.
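A toy calculation shows why the unclamped range matters for shadows. Treat the light reaching a point as the sun pixel times a tiny solid angle, plus the ambient contribution of the rest of the dome; all the numbers below are invented except the 252.0 peak mentioned earlier:

```python
SUN_SOLID_ANGLE = 0.001  # the sun covers a tiny fraction of the dome (made up)
AMBIENT = 1.0            # integrated contribution of the rest of the HDRI

def shaded_ratio(sun_peak):
    """Fraction of light surviving when the sun is occluded (1.0 = no shadow)."""
    lit = sun_peak * SUN_SOLID_ANGLE + AMBIENT
    return AMBIENT / lit

print(round(shaded_ratio(252.0), 2))  # unclamped peak: a clearly visible shadow
print(round(shaded_ratio(1.0), 3))    # clamped to 1.0: the shadow almost disappears
```

With the peak clamped to 1.0, occluding the "sun" barely darkens the point, which is why clamped HDRIs need helper CG lights to fake the shadows.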
- The render was quick. I don't know exactly, but something around 4 or 5 minutes at a resolution of 4000x3000.
- I tried to keep the 3D compositing simple: just one render pass with a few AOVs. Direct diffuse, indirect diffuse, direct specular, indirect specular, refraction and three IDs to play with some objects individually.
- And this is it :)