VFX footage input/output by Xuan Prada

This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.

Shooting camera to Lab
The RAW material recorded on-set goes to the lab. In the lab it is converted to .dpx, which is the standard film format. Sometimes they might use .exr, but it's not that common.
A lot of movies are still being filmed with film cameras, in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.

Shooting camera to Dailies
The RAW material recorded on-set goes to dailies. The cinematographer, DP, or DI applies a primary LUT or color grading to be used along the project.
Original scans with LUT applied are converted to low quality scans and .mov files are generated for distribution.

Dailies to Editorial
The editorial department receives the low quality scans (QuickTimes) with the LUT applied.
They use these files to make the initial cuts and bidding.

Editorial to VFX
VFX facilities receive the low quality scans (QuickTimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for color grading.

Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material and the LUT needs to be applied.
The VFX facility will have to apply the film LUT to the work they create from scratch.
When the VFX work is done, the VFX facility renders out exr files.

VFX to DI
DI will do the final grading to match the Editorial Quicktimes.

VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.


The basic practical workflow would be:

  • Read raw scan data.
  • Read Quicktime scan data.
  • Dpx scans usually are in LOG color space.
  • Exr scans usually are in LIN color space.
  • Apply LUT and other color grading to the RAW scans to match the Quicktime scans.
  • Render out to Editorial using the same color space used for bringing in footage.
  • Render out Quicktimes using the same color space used for viewing. If watching, for example, in sRGB you will have to bake the LUT.
  • Good Quicktime settings: Colorspace sRGB, Codec Avid DNxHD, 23.98 fps, depth Millions of Colors, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit
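As a rough reference for the LOG/LIN distinction above: the log encoding in .dpx scans is normally Cineon log. Below is a minimal Python sketch of the classic Cineon-to-linear conversion, assuming the standard default parameters (black point 95, white point 685, 0.6 negative gamma). In a real pipeline you would of course use the LUTs supplied by the lab, not this formula.

```python
def cineon_to_linear(code, black=95, white=685, density=0.002, gamma=0.6):
    """Convert a 10-bit Cineon log code value to scene-linear light.

    Defaults are the classic Cineon parameters: black point 95,
    white point 685, 0.002 printing density per code value,
    0.6 negative gamma.
    """
    black_offset = 10 ** ((black - white) * density / gamma)
    lin = 10 ** ((code - white) * density / gamma)
    # Remap so the black point sits at 0.0 and the white point at 1.0
    return (lin - black_offset) / (1.0 - black_offset)

print(round(cineon_to_linear(685), 4))  # white point -> 1.0
print(round(cineon_to_linear(95), 4))   # black point -> 0.0
```

This is why log scans look washed out when viewed raw: most of the code range maps to a small slice of linear light.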

Iron Man Mark 7 by Xuan Prada

Speed texturing & look-deving session for this fella.
It will be used for testing my IBLs and light-rigs.
Renders with different lighting conditions and backplates on their way.

These are the texture channels that I painted for this suit. Tried to keep everything simple. Only 6 texture channels, 3 shaders and 10 UDIMs.

Color

Specular

Mask 1

Color 2

Roughness

Fine displacement

Zbrush displacement in Clarisse by Xuan Prada

This is a very quick guide to set-up Zbrush displacements in Clarisse.
As usual, the most important thing is to extract the displacement map from Zbrush correctly. To do so, just check my previous post about this procedure.

Once your displacement maps are exported follow this mini tutorial.

  • In order to keep everything tidy and clean I will put all the stuff related to this tutorial inside a new context called "hand".
  • In this case I imported the base geometry and created a standard shader with a gray color.
  • I'm just using a very simple Image Based Lighting set-up.
  • Then I created a map file and a displacement node. Rename everything to keep it tidy.
  • Select the displacement texture for the hand and set-up the image to raw/linear. (I'm using 32bit .exr files).
  • In the displacement node set the bounding box to something like 1 to start with.
  • Add the displacement map to the front value, leave the value at 1m (which is not actually 1m, it's more like a global unit), and set the front offset to 0.
  • Finally add the displacement node to the geometry.
  • That's it. Render and you will get a nice displacement.

Render with displacement map.

Render without displacement map.

  • If you are still working with 16 bits displacement maps, remember to set-up the displacement node offset to 0.5 and play with the value until you find the correct behaviour.
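The 0.5 offset in that last point is just re-centring: a 16-bit integer map stores its "no displacement" level at mid-grey, while a 32-bit float map stores it at 0. A tiny sketch of the arithmetic (a hypothetical helper, not Clarisse's internal code):

```python
def displaced_amount(texture_value, front_value=1.0, offset=0.0):
    """How far a point moves along its normal for a given map sample.

    32-bit float maps store signed values around 0, so offset=0.
    16-bit integer maps store [0, 1] with the midpoint at 0.5,
    so offset=0.5 re-centres them.
    """
    return (texture_value - offset) * front_value

# 32-bit map: a stored value of 0.0 means "don't move"
print(displaced_amount(0.0, front_value=1.0, offset=0.0))   # 0.0
# 16-bit map: mid-grey 0.5 means "don't move" once offset is 0.5
print(displaced_amount(0.5, front_value=1.0, offset=0.5))   # 0.0
```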

Colour Spaces in Mari by Xuan Prada

Mari is the standard tool these days for texturing in VFX facilities. There are many reasons for this, but one of the most important is that Mari is probably the only dedicated texturing software that can handle colour spaces. In a film environment this is a very important feature, because working without control over colour profiles is pretty much like working blind.
That's why Mari and Nuke are the standard tools for texturing. We also include Zbrush as a standard tool for texture artists, but only for displacement map work, where colour management doesn't play a key role.

Right now colour management in Mari is not complete, at least it is not as good as Nuke's, where you can control colour spaces for input and output sources. But Mari offers basic colour management tools that are really useful for film environments. We have Mari Colour Profiles and OpenColorIO (OCIO).

As texture artists we usually work with Float Linear and 8-bit Gamma sources.

  • I've loaded two different images in Mari. One of them is a Linear .exr and the other one is a Gamma 2.2 .tif
  • With the colour management set to none, we can check both images to see the differences between them.
  • We'll get the same results in Nuke. Consistency is extremely important in a film pipeline.
  • The first way to manage color spaces in Mari is via LUT's. Go to the color space section and choose the LUT of your project, usually provided by the cinematographer. Then change the Display Device and select your calibrated monitor. Change the Input Color Space to Linear or sRGB depending on your source material. Finally change the View Transform to your desired output like Gamma 2.2, Film, etc.
  • The second method and recommended for colour management in Mari is using OCIO files. We can load these kind of files in Mari in the Color Manager window. These files are usually provided by the cinematographer or production company in general. Then just change the Display Device to your calibrated monitor, the Input Color Space to your source material and finally the View Transform to your desired output.
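When deciding whether a source is Linear or sRGB, it helps to remember that sRGB is not exactly Gamma 2.2: it is a piecewise curve with a linear toe, defined in IEC 61966-2-1. The exact transfer functions in Python:

```python
def srgb_to_linear(c):
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transfer: linear light back to sRGB-encoded values."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Mid-grey in sRGB is only about 21% linear light
print(round(srgb_to_linear(0.5), 4))  # 0.214
```

This is the difference you see when an 8-bit Gamma source is read as Linear by mistake: shadows lift and the whole image looks washed out.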

Breaking a character's FACE IN MODO by Xuan Prada

A few years ago I worked on Tim Burton's Dark Shadows at MPC. We created a full CG face for Eva Green's character Angelique.
Angelique had a fight with Johnny Depp's character Barnabas Collins, and her face and upper body gets destroyed during the action.

In that case, all the broken parts were painted by hand as texture masks, and then the FX team generated 3D geometry and simulations based on those maps, using them as guides.

Recently I had to do a similar effect, but in this particular case, the work didn't require hand painting textures for the broken pieces, just random cracks here and there.
I did some research about how to create this quickly and easily, and found out that Modo's shatter command was probably the best way to go.

This is how I achieved the effect in no time.

First of all, let's have a look at Angelique, played by Eva Green.


  • Once in Modo, import the geometry. The only requirement to use this tool is that the geometry has to be closed. You can close the geometry quick and dirty, this is just to create the broken pieces, later on you can remove all the unwanted faces.
  • I already painted texture maps for this character. I have a good UV layout as you can see here. This breaking tool is going to generate additional faces, adding new uv coordinates. But the existing UV's will remain as they are.
  • In the setup tab you will find the Shatter command tool.
  • Apply for example uniform type.
  • There are some cool options like number of broken pieces, etc.
  • Modo will create a material for all the interior pieces that are going to be generated. So cool.
  • Here you can see all the broken pieces generated in no time.
  • I'm going to scale down all the pieces in order to create a tiny gap between them. Now I can see them easily.
  • In this particular case (as we did with Angelique) I don't need the interior faces at all. I can easily select all of them using the material that Modo generated automatically.
  • Once selected all the faces just delete them.
  • If I check the UVs, they seem to be perfectly fine. I can see some weird stuff caused by the fact that I quickly closed the mesh, but I don't worry about it at all; I will never see these faces.
  • I'm going to start again from scratch.
  • The uniform type is very quick to generate, but all the pieces are very similar in scale.
  • In this case I'm going to use the cluster type. It will generate more random pieces, creating nicer results.
  • As you can see, it looks a bit better now.
  • Now I'd like to generate local damage in one of the broken areas. Let's say that a bullet hits the piece and it falls apart.
  • Select the fragment and apply another shatter command. In this case I'm using cluster type.
  • Select all the small pieces and disable the gravity parameter under dynamics tab.
  • Also set the collision set to mesh.
  • I placed a sphere on top of the fragments, then activated its rigid body component. With the gravity force activated by default, the sphere will hit the fragments, creating a nice effect.
  • Play with the collision options of the fragments to get different results.
  • You can see the simple but effective simulation here.
  • This is a quick clay render showing the broken pieces. You can easily increase the complexity of this effect with little extra cost.
  • This is the generated model, with the original UV mapping with high resolution textures applied in Mari.
  • Works like a charm.

HDRI shooting (quick guide) by Xuan Prada

This is a quick introduction to HDRI shooting on set for visual effects projects.
If you want to go deeper on this topic please check my DT course here.

Equipment

The list below is professional equipment for HDRI shooting. Good results can be achieved using amateur gear, and you don't necessarily need to spend a lot of money on HDRI capturing, but the better the equipment you own, the easier, faster and better the results you'll get. Obviously this gear is based on my taste.

  • Lowepro Vertex 100 AW backpack
  • Lowepro Flipside Sport 15L AW backpack
  • Full frame digital DSLR (Nikon D800)
  • Fish-eye lens (Nikkor 10.5mm)
  • Multi purpose lens (Nikkor 28-300mm)
  • Remote trigger
  • Tripod
  • Panoramic head (360 precision Atome or MK2)
  • akromatic kit (grey ball, chrome ball, tripod plates)
  • Lowepro Nova Sport 35L AW shoulder bag (for akromatic kit)
  • Macbeth chart
  • Material samples (plastic, metal, fabric, etc)
  • Tape measure
  • Gaffer tape
  • Additional tripod for akromatic kit
  • Cleaning kit
  • Knife
  • Gloves
  • iPad or laptop
  • External hard drive
  • CF memory cards
  • Extra batteries
  • Data cables
  • Witness camera and/or second camera body for stills

All the equipment packed up. Try to keep everything small and tidy.

All your items should be easy to pick up.

Most important assets are: Camera body, fish-eye lens, multi purpose lens, tripod, nodal head, macbeth chart and lighting checkers.

Shooting checklist

  • Full coverage of the scene (fish-eye shots)
  • Backplates for look-development (including ground or floor)
  • Macbeth chart for white balance
  • Grey ball for lighting calibration 
  • Chrome ball for lighting orientation
  • Basic scene measurements
  • Material samples
  • Individual HDR artificial lighting sources if required

Grey and chrome spheres, extremely important for lighting calibration.

Macbeth chart is necessary for white balance correction.

Before shooting

  • Try to carry only the indispensable equipment. Leave cables and other stuff in the van, don’t carry extra weight on set.
  • Set up the camera, clean lenses, format memory cards, etc, before you start shooting. Extra camera adjustments may be required at the moment of shooting, but try to establish exposure, white balance and other settings before the action. Know your lighting conditions.
  • Have more than one CF memory card with you all the time ready to be used.
  • Have a small cleaning kit with you all the time.
  • Plan the shoot: Write a shooting diagram with your own checklist, with the strategies that you would need to cover the whole thing, knowing the lighting conditions, etc.
  • Try to plant your tripod where the action happens or where your 3D asset will be placed.
  • Try to reduce the cleaning area. Don’t put anything at your feet or around the tripod; you will have to hand paint it out later in Nuke.
  • When shooting backplates for look-dev use a wide lens, something around 24mm to 28mm and cover always more space, not only where the action occurs.
  • When shooting textures for scene reconstruction always use a Macbeth chart and at least 3 exposures.

Methodology

  • Plant the tripod where the action happens, stabilise it and level it
  • Set manual focus
  • Set white balance
  • Set ISO
  • Set raw+jpg
  • Set aperture
  • Meter exposure
  • Set neutral exposure
  • Read histogram and adjust neutral exposure if necessary
  • Shoot slate (operator name, location, date, time, project code name, etc)
  • Set auto bracketing
  • Shoot 5 to 7 exposures with 3 stops difference covering the whole environment
  • Place the akromatic kit where the tripod was placed, and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
  • Place the Macbeth chart 1m away from tripod on the floor and take 3 exposures
  • Take backplates and ground/floor texture references
  • Shoot reference materials
  • Write down measurements of the scene, specially if you are shooting interiors.
  • If shooting artificial lights take HDR samples of each individual lighting source.

Final HDRI equirectangular panorama.

Exposures starting point

  • Day light sun visible ISO 100 F22
  • Day light sun hidden ISO 100 F16
  • Cloudy ISO 320 F16
  • Sunrise/Sunset ISO 100 F11
  • Interior well lit ISO 320 F16
  • Interior ambient bright ISO 320 F10
  • Interior bad light ISO 640 F10
  • Interior ambient dark ISO 640 F8
  • Low light situation ISO 640 F5
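These starting points can be compared using the standard exposure value formula, EV = log2(N²/t), adjusted for ISO. A small helper (the function names are my own, purely illustrative) that also generates the shutter times for the 5-frame, 3-stop brackets described in the methodology above:

```python
import math

def exposure_value(aperture, shutter, iso=100):
    """EV relative to ISO 100: log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

def bracket(shutter, stops=3, frames=5):
    """Shutter times for an auto-bracketing run centred on `shutter`."""
    half = frames // 2
    return [shutter * 2 ** (stops * i) for i in range(-half, half + 1)]

# Stop offsets of a 5-frame bracket centred on 1/60s
print([round(math.log2(t * 60)) for t in bracket(1 / 60)])  # [-6, -3, 0, 3, 6]
# Comparing "day light, sun visible" against "cloudy" at a fixed shutter
print(round(exposure_value(22, 1 / 60) - exposure_value(16, 1 / 60, 320), 2))
```

A spread like this (roughly 12 to 18 stops total) is what lets the merged HDRI hold detail from the sun disc down to the shadows.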

That should be it for now, happy shooting :)

Cubes tutorial by Xuan Prada

A few months ago, when my workmates from Double Negative were working on Transcendence, I saw them using Houdini to create such beautiful animations using tiny geometries: millions of small cubes building shapes and forms.
Some time later other people started doing similar stuff with Maya's XGen and other tools. I tried it and it works like a charm.

Frame from Transcendence.


I was curious about these images and then decided to recreate something similar, but I wanted to do it in a simpler and quicker way. I found out that combining Cinema 4D and Maya is probably the easiest way to create this effect.

If you have any idea how to do the same in Modo or Softimage, please let me know, I'm really curious.
This is my current approach.

  • In Cinema 4D create a plane with a lot of subdivisions. Each one of those subdivisions will generate a cube. In this case I’m using a 1000cm x 1000cm plane with 500 subdivisions.

  • Create a new material and assign it to the plane.

  • Select the plane and go to the menu Simulate -> Hair objects -> Add hair.

  • If you zoom in you will see that one hair guide is generated by each vertex of the plane.

  • In the hair options reduce the guide segments to 1; we just need straight guides and don’t care about hair resolution.

  • Also change the root to polygon center. Now the guides grow from each polygon center instead of each vertex of the plane.

  • Remove the option render hair (we are not going to be rendering hairs) from the generate tab. Also switch the type to square.

  • Right now we can see cubes instead of hair guides, but they are so thin.

  • We can control the thickness using the hair material. In this case I’m using 1.9 cm

  • Next thing would be randomising the height. Using a procedural noise would be enough to get nice results. We can also create animations very quickly, just play with the noise values.

  • Remove the noise for now. We want to control the length using a bitmap.

  • Also hide the hair, it’s quicker to setup if we don’t see the hair in viewport.

  • In the Plane material, go to luminance and select a bitmap. Adjust the UV Mapping to place the bitmap in your desired place.

  • In the hair material, use the same image for the length parameter.

  • Copy the same uv coordinates from the plane material.

  • Add a pixel effect to the texture and type the number of pixels based on the resolution of the plane. In this case 500

  • Do this in both materials, the plane and the hair. Now each cube will be mapped with a small portion of the bitmap.

  • Display the hair system and voila, that’s it.

  • Obviously, the greater the contrast in your image, the better. I strongly recommend using high dynamic range images; as you know, their contrast ratio is huge compared with low dynamic range images.

  • At this point you can render it here in C4D or just export the geometry to another 3D software and render engine.

  • Select the hair system and make it editable. Now you are ready to export it as .obj

  • Import the .obj in your favourite 3D software. Then apply your lighting and shaders, and connect the image that you used before to generate the hair system. Of course, you can control the color of the hair system using any other bitmap or procedurals.

  • In order to keep this work very simple, I’m just rendering a beauty pass and an ambient occlusion pass, but of course you can render as many aov’s as you need.

  • I also quickly animated the translation of the hair system and added motion blur and depth of field to the camera to get a more dynamic image, but this is really up to you.

  • This is just the tip of the iceberg, with this quick and easy technique you can create beautiful images combining it with your expertise.
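The whole trick — one cube per subdivision, length driven by the bitmap — boils down to a luminance-to-height lookup. A toy Python stand-in (no C4D involved; the 4x4 "bitmap" is hand-written for illustration):

```python
# A toy 4x4 "bitmap": luminance values in [0, 1], one per cube
bitmap = [
    [0.0, 0.2, 0.2, 0.0],
    [0.2, 1.0, 1.0, 0.2],
    [0.2, 1.0, 1.0, 0.2],
    [0.0, 0.2, 0.2, 0.0],
]

def cube_heights(image, max_height=10.0):
    """One cube per pixel; its height is the pixel luminance scaled to
    max_height, just like the hair length parameter driven by a texture."""
    return [[px * max_height for px in row] for row in image]

heights = cube_heights(bitmap)
print(heights[1][1], heights[0][0])  # 10.0 0.0
```

This also makes it obvious why high dynamic range sources work so well: the luminance range directly becomes the height range.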

Introduction to scatterers in Clarisse by Xuan Prada

Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes, just feed them with a bunch of different trees to create a forest in 2 minutes. Add some nice lighting and render insane resolution. Then use all the 3D material with all the needed AOV's in Nuke and you'll have full control to create stunning matte paintings.

To make this demo a bit funnier instead of trees I'm using cool Lego pieces :)

  • Create a context called obj and import the grid.obj and the toy_man.obj
  • Create another context called shaders and create generic shaders for the objs.
  • Also create two textures and load the images from the hard drive.
  • Assign the textures to the diffuse input of each shader and then assign each shader to the correspondent obj.
  • Set the camera to see the Lego logo.
  • Create a new context called crowd, and inside of it create a point cloud and a scatterer.
  • In the point cloud set the parent to be the grid.
  • In the scatterer set the parent to be the grid as well.
  • In the scatterer set the point cloud as geometry support.
  • In the geometry section of the scatterer add the toy_man.
  • Go back to the point cloud and in the scattering geometry add the grid.
  • Now play with the density. In this case I’m using a value of 0.7

  • As you can see all the toy_men start to populate the image.

  • In the decimate texture add the Lego logo. Now the toy_men stick to the Logo.
  • Add some variation in the scatterer position and rotation.
  • That’s it. Did you realise how easy it was to set up this cool effect? And did you check the polycount? 108.5 million :)
  • In order to make this look a little bit better, we can remove the default lighting and do some quick IBL setup.
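Conceptually, the decimate texture is just a mask test on every scattered point. A plain-Python sketch of that behaviour (this is not Clarisse's API; all the names here are made up for illustration):

```python
import random

def scatter(width, height, density, mask, seed=0):
    """Scatter random points on a rectangle, keeping only those where
    the decimate mask is 'on' (mask returns a value in [0, 1])."""
    rng = random.Random(seed)
    count = int(width * height * density)
    points = []
    for _ in range(count):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if mask(x / width, y / height) > 0.5:  # decimated elsewhere
            points.append((x, y))
    return points

# Stand-in "logo": points survive only inside a centred square
logo = lambda u, v: 1.0 if 0.25 < u < 0.75 and 0.25 < v < 0.75 else 0.0
pts = scatter(100, 100, density=0.7, mask=logo)
print(all(25 < x < 75 and 25 < y < 75 for x, y in pts))  # True
```

Swap the square for the Lego logo bitmap and you get exactly the toy_men sticking to the logo.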

Final render.

Photography assembly for matte painters by Xuan Prada

In this post I'm going to explain my methodology for merging different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting; for that I use ptGui and there is no better tool for it.

I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed using only 2 images or maybe 15, it doesn't matter.
This method is much more complicated and requires more human time than using ptGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.

The pictures that I'm using for this tutorial were taken with a nodal point base, but they are not calibrated or similar. In fact they don't need to be. Obviously taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, and also different focal lengths and different film backs from various digital cameras.

  • I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
  • All the pictures were taken with same focal, same exposure and with the ISO and White Balance locked.
  • We need to know some information about these pictures. In order to blend the images in to a panoramic image we need to know the focal length and the film back or sensor size.
  • Connect a ViewMetaData node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure about the settings, check one by one.
  • I can see that the focal length is 280/10 which means the images were taken using a 28mm lens.
  • I don't see film back information but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
  • Create a camera node with the information of the film back and the focal length.
  • At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just do eyeballing.
  • In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
  • Connect a card node to the image and remove all the subdivisions.
  • Also deactivate the image aspect to have 1:1 cards. We will fix this later.
  • Connect a transform geo node to the card, and its axis input to the camera.
  • If we move the camera, the card is attached to it all the time.
  • Now we are about to create a custom parameter to keep the card aligned to the camera all the time, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
  • In the transform geo parameters, RMB and select manage user knobs and add a floating point slider. Call it distance. Set the min to 0 and the max to 10
  • This will allow us to place the card in space always relative to the camera.
  • In the transform geo translate z, press = to type an expression and write -distance
  • Now if we play with the custom distance value it works.
  • Now we have to refer to the film back and focal length so the card matches the camera information when it's moved or rotated.
  • In the x scale of the transform geo node type this expression: (input1.haperture/input1.focal)*distance and in the y scale type: (input1.vaperture/input1.focal)*distance, where input1 is the camera axis.
  • Now if we play with the distance custom parameter everything is perfectly aligned.
  • Create a group with the card, camera and transform geo nodes.
  • Remove the input2 and input3 and connect the input1 to the card instead of the camera.
  • Go out of the group and connect it to the image. There are usually refreshing issues so cut the whole group node and paste it. This will fix the problem.
  • Manage knobs here and pick the focal length and film back from the camera (just for checking purposes)
  • Also pick the rotation from the camera and the distance from the transform geo.
  • Having these controls here we won't have to go inside of the group if we need to use them. And we will.
  • Create a project 3D node and connect the camera to the camera input and the input1 to the input.
  • Create a switch node below the transform geo node and connect its input1 to the project3D node.
  • Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
  • In the switch node add an expression: parent.mode
  • Put the mode to project 3D.
  • Add a sphere node, scale it big and connect it to the camera projector.
  • You will see the image projected on the sphere instead of being rendered on a flat card.
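The two scale expressions in the steps above are simply the size of the camera frustum at a given distance. A quick Python check of the maths, using the Nikon D800 film back and 28mm lens mentioned earlier:

```python
def card_scale(haperture, vaperture, focal, distance):
    """x/y scale that makes a unit card exactly fill the camera frustum
    at `distance` (apertures and focal length in the same units, mm)."""
    return (haperture / focal) * distance, (vaperture / focal) * distance

# 35.9mm x 24mm film back, 28mm lens, card 5 units from the camera
sx, sy = card_scale(35.9, 24.0, 28.0, 5.0)
print(round(sx, 4), round(sy, 4))  # 6.4107 4.2857
```

Because both scales grow linearly with distance, sliding the card along the camera axis never changes what the camera sees, which is exactly why the custom distance knob works.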

Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it's nice to have quick controls to switch between them.

In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.

  • Set the camera in the viewport and lock it.
  • Now you can zoom in and out without losing the camera.
  • Set the horizon line playing with the rotation.
  • Copy and paste the camera projector group and set the horizon in the next image by doing the same as before: locking the camera and playing with the camera rotation.
  • Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
  • Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
  • Repeat the process with every single image. Just do the same as before: look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
  • Create a camera node and call it shot camera.
  • Create a scanline render node.
  • Create a reformat node and type the format of your shot. In this case I'm using a super 35 format which means 1920x817
  • Connect the obj/scene input of the scanline render to the scene node.
  • Connect the camera input of the scanline render to the shot camera.
  • Connect the reformat node to the bg input of the scanline render node.
  • Look through the scanline render in 2D and you will see the panorama through the shot camera.
  • Play with the rotation of the camera in order to place the panorama in the desired position.

That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.

  • Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
  • Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
  • Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normal of the sphere.
  • Create another scanline render and connect it's camera input to the shot camera.
  • Connect the bg input of the new scanline render to the shot reformat node (super 35).
  • Connect the scn/obj input of the new scanline render to the sphere node.
  • That's all that you need.
  • You can look through the scanline render in the 2D and 3D viewport. We got all the images projected in 3D and rendered through the shot camera.

You can download the sample scene here.

akromatic Digital Lighting Checkers for arnold by Xuan Prada

Every single facility and 3D artist around the globe has their own way of working with our Lighting Checkers, based on the render engine they use, their shaders, and their pipeline in general. But just to make your life a bit easier, akromatic wants to provide you with a digital version of our Lighting Checkers to quickly match our physical version.

In this case we are offering you digital akromatic Lighting Checkers for arnold render.
We'll be posting other render engines soon.

Download here.

Clarisse, layers and passes by Xuan Prada

I will continue writing about my experiences working with Clarisse. This time I'm gonna talk about working with layers and passes, a very common topic in the rendering world no matter what software you are using.

Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition to that we can compose all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.

  • For this tutorial I'm going to use a very simple scene. The goal is to create one render layer for each element of the scene. At the end of this article we will have the foreground, midground, background, floor and shadows isolated.
  • At this point I have an image with a 3DLayer containing all the elements of the scene.
  • I've created 3 different contexts for foreground, midground and background.
  • Inside each context I put the correspondent geometry.
  • Inside each context I created an empty image.
  • I created a 3DLayer for each image.
  • We need to indicate which camera and renderer need to be used in each 3DLayer.
  • We also need to indicate which lights are going to be used in each layer.
  • At this point you probably realized how powerful Clarisse can be for organization purposes.
  • In the background context I'm rendering both the sphere and the floor.
  • In the scene context I've created a new image. This image will be the recipient for all the other images created before.
  • In this case I'm not creating 3DLayers but Image Layers.
  • In the layers options select each one of the layers created before.
  • I put the background on the bottom and the foreground on the top.
  • We face the problem that only the sphere has working shadows. This is because there is no floor in the other contexts.
  • In order to fix this I moved the floor to another context called shadow_catcher.
  • I created a new 3DLayer where I selected the camera and renderer.
  • I created a group with the sphere, cube and cylinder.
  • I moved the group to the shadows parameter of the 3DLayer.
  • In the recipient image I place the shadows at the bottom. That's it, we have shadows working now.
  • Oh wait, not that fast. If you check the first image of this post you will realize that the cube is actually intersecting the floor. But in this render that is not happening at all. This is because the floor is not in the cube context acting as a matte object.
  • To fix this just create an instance of the floor in the cube context.
  • In the shading options of the floor I localize the parameters matte and alpha (RMB and click on localize).
  • Then I activated those options and set the alpha to 0%
  • That's it, working perfectly.
  • At this point everything is working fine, but we have the floor and the shadows together. Maybe you would like to have them separated so you can tweak both of them independently.
  • To do this, I created a new context only with the floor.
  • In the shadows context I created a new "decal" material and assigned it to the floor.
  • In the decal material I activated receive illumination.
  • And finally I added the new image to the recipient image.
  • You can download the sample scene here.
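Stacking image layers in the recipient image behaves like the classic premultiplied "over" operation, which is also why setting the floor's alpha to 0% lets the layers below show through. A per-pixel sketch in Python (not Clarisse code, just the compositing maths):

```python
def over(fg, bg):
    """Premultiplied 'over' for one RGBA pixel:
    out = fg + bg * (1 - fg.alpha), applied to all four channels."""
    a = fg[3]
    return tuple(f + b * (1.0 - a) for f, b in zip(fg, bg))

# A solid foreground layer (alpha 1) completely hides the background
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.0, 1.0)))  # (1.0, 0.0, 0.0, 1.0)
# A matte layer with alpha 0 (like the floor set to 0%) lets it through
print(over((0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0)))  # (0.0, 0.0, 1.0, 1.0)
```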

Image Based Lighting in Clarisse by Xuan Prada

I've been using Isotropix Clarisse in production for a little while now. Recently the VFX facility where I work announced the adoption of Clarisse as its primary look-dev and lighting tool, so I decided to start talking about this powerful raytracer on my blog.

Today I'm writing about how to set-up Image Based Lighting.

  • We can start by creating a new context called ibl. We will put all the elements needed for ibl inside this context.
  • Now we need to create a sphere to use as "world" for the scene.
  • This sphere will be the support for the equirectangular HDRI texture.
  • I just increased the radius a lot. Keep in mind that this sphere will be covering all your assets inside of it.
  • In the image view tab we can see the render in real time.
  • Right now the sphere is lit by the default directional light.
  • Delete that light.
  • Create a new matte material. This material won't be affected by lighting.
  • Assign it to the sphere.
  • Once assigned the sphere will look black.
  • Create an image to load the HDRI texture.
  • Connect the texture to the color input of the matte shader.
  • Select the desired HDRI map in the texture path.
  • Change the projection type to "parametric".
  • HDRI textures are usually 32bit linear images. So you need to indicate this in the texture properties.
  • I created two spheres to check the lighting. Just press "f" to fit them in the viewport.
  • I also created two standard materials, one for each sphere. I'm creating lighting checkers here.
  • And a plane, just to check the shadows.
  • If I go back to the image view, I can see that the HDRI is already affecting the spheres.
  • Right now, only the secondary rays are being affected, like the reflection.
  • In order to create proper lighting, we need to use a light called "gi_monte_carlo".
  • Right now the noise in the scene is insane. This is because of all the crazy detail in the HDRI map.
  • First thing to reduce noise would be to change the interpolation of the texture to Mipmapping.
  • To have a noise free image we will have to increase the sampling quality of the "gi_monte_carlo" light.
  • Noise reduction can be also managed with the anti aliasing sampling of the raytracer.
  • The most common approach is to combine raytracer sampling, lighting sampling and shading sampling.
  • Around 8 raytracing samples and something around 12 lighting samples are common settings in production.
  • There is another method to do IBL in Clarisse without the cost of GI.
  • Delete the "gi_monte_carlo" light.
  • Create an "ambient_occlusion" light.
  • Connect the HDRI texture to the color input.
  • In the render only the secondary rays are affected.
  • Select the environment sphere and deactivate the "cast shadows" option.
  • Now everything works fine.
  • To clean the noise increase the sampling of the "ambient_occlusion" light.
  • This is a cheaper IBL method.
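The sampling advice above is the usual Monte Carlo trade-off: noise falls with the square root of the sample count, so quadrupling the light samples roughly halves the grain. A toy demonstration in plain Python (uniform random numbers standing in for HDRI lookups; nothing here is Clarisse's API):

```python
import random
import statistics

def render_pixel(samples, rng):
    """Monte Carlo estimate of incoming light: the average of random
    'environment' samples (a stand-in for HDRI texture lookups)."""
    return sum(rng.random() for _ in range(samples)) / samples

def noise(samples, trials=2000, seed=0):
    """Standard deviation of the estimate across many repeated renders."""
    rng = random.Random(seed)
    return statistics.stdev(render_pixel(samples, rng) for _ in range(trials))

# 16 samples vs 4 samples: the estimate is about twice as clean (1/sqrt(n))
print(round(noise(4) / noise(16), 1))
```

This is why cranking a single sampling control quickly hits diminishing returns, and why combining raytracer, lighting and shading sampling is the usual production approach.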