When subdividing models in Clarisse for rendering displacement maps, the software subdivides both geometry and UVs. Sometimes we might need to subdivide only the mesh while keeping the UVs as they originally are.
This depends on production requirements and obviously on how the displacement maps were extracted from Zbrush or any other sculpting package.
If you don't need to subdivide the UVs, first of all you should extract the displacement map with the SmoothUV option turned off.
Then in Clarisse, select the option UV Interpolation Linear.
The very first teaser trailer for Mission Impossible 5 is out!
I've been working on this project for a while :)
The other day I published a very simple image that I did just to test a few things: a couple of photographic techniques, my new Promote Control, procedural masks done in Substance Designer and other grading-related stuff in Nuke. Just a few things that I had wanted to try for a while.
This is a quick breakdown, as simple as the image itself.
- The very first thing that I did was take a few stills of the plate that I wanted to use as background to place my CG elements. From the very beginning I wanted to create an extremely simple image, something that I could finish in a few hours. With that in mind I wanted to create a very realistic image, and I'm not talking about lighting or rendering, I'm talking about the general feeling of being realistic. With bad framing, bad compositing, a total lack of lighting intention, and no cinematic components at all. The usual bad picture that everyone posts on social networks once in a while, without any narrative or visual value.
- In order to create good and realistic CG compositions we need to gather a lot of information on-set. In this case everything is very simple. When you take pictures you can read the metadata later on the computer. This will show you the size of your digital camera's sensor and the focal length used to take the pictures. With this data we can replicate the 3D camera in Maya or any other 3D package.
- It is also very important to get good color references. Just using a Macbeth Chart we can neutral grade the plate and everything we want to re-create from scratch in CG.
- The next step is to gather lighting information on-set. As you can imagine everything is very simple because this is a very tiny and simple image. There are no practical lights on-set, just a couple of tiny bulbs on the ceiling. But they don't affect the subject, so I didn't worry much about them. The main lighting source is the sun (although it was pretty cloudy) coming through the big glass door on the right side of the image, out of camera. So we could say the lighting here is pretty much ambient light.
- With only an equirectangular HDRI we can easily reproduce the lighting conditions on set. We won't need CG lights or anything like that.
- This is possible because I'm using a very nice HDRI with a huge range. Linear values go up to 252.0.
- I didn't even care about cleaning up the HDRI map. I left the tripod back there and didn't fix some ghosting issues. These little issues didn't affect my CG elements at all.
- It is very important to have lighting and color references inside the HDRI. If you pay attention you will see a Macbeth Chart and my akromatic lighting checkers placed in the same spot where the CG elements will be placed later.
- Once the HDRI is finished, it is very important to have color and lighting references in the shot context. I took some pictures with the Macbeth chart and the akromatic lighting checkers framed in the shot.
- Actually it is not exactly the same framing as the actual shot, but the placement of the checkers, the lighting source and the middle exposure remain the same.
- For this simple image we don't need any tracking or rotoscoping work. This is a single-frame job and we have a 90-degree angle between the floor and the shelf. With that in mind, plus the metadata from the camera, reproducing the 3D camera is extremely simple.
- As you probably expected, modelling was very simple and basic.
- With these basic models I also tried to keep texturing very simple. I just found a few references on the internet and tried to match them as closely as I could. I only needed 3 texture channels (diffuse, specular and bump). Every single object has a 4k texture map with only 1 UDIM. I didn't need more than that.
- As I said before, lighting wise I only needed an IBL setup, so simple and neat. Just an environment light with my HDRI connected to it.
- It is very important that your HDRI map and your plate share a similar exposure so you can neutral grade them. Having the same or similar exposure, plus Macbeth Charts, in all your sequences makes it very simple to copy/paste gradings.
- Akromatic lighting checkers help a lot to place all the reflections correctly and regulate lighting intensity. They also help to establish the penumbra area and the behaviour of the lighting decay.
- Once the placement, intensity and grading of the IBL are working fine, it is a good idea to render a "clay" version of the scene. This is a very smart way to check the behaviour of the shadows.
- In this particular example they work very well. This is because of the huge range that I have in my HDRI. With a clamped HDRI this wouldn't work that well and you would probably have to recreate the shadows using CG lights.
- The render was very quick. I don't know exactly, but something around 4 or 5 minutes at a resolution of 4000x3000.
- I tried to keep 3D compositing simple. Just one render pass with a few AOVs: direct diffuse, indirect diffuse, direct specular, indirect specular, refraction and 3 IDs to individually play with some objects.
- And this is it :)
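The camera-matching step mentioned in the breakdown (sensor size and focal length read from the image metadata) boils down to a simple field-of-view calculation. Here is a minimal Python sketch; the full-frame sensor width and the 28mm focal are just example values, not the actual numbers from my shoot:

```python
import math

def horizontal_fov(sensor_width_mm, focal_mm):
    """Horizontal field of view in degrees from sensor width and focal length."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# A full-frame sensor is roughly 36mm wide; assume the still was shot at 28mm.
fov = horizontal_fov(36.0, 28.0)
print(round(fov, 1))
```

Plugging the resulting angle (or simply the same sensor/focal pair) into the 3D camera of Maya or any other package gives a matching virtual camera.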
Quick and dirty render that I did the other day.
Just testing my Promote Control for bracketing the exposures for the HDRI that I created for this image. Tried to do something very simple, to be achieved in just a few hours. Trying to keep realism, tiny details, bad framing and total lack of lighting intention.
Just wanted to create a very simple and realistic image, without any cinematic components. At the end, that's reality, isn't it?
Using IBLs with huge ranges for natural light (sun) is just great. They give you very consistent lighting conditions and the behaviour of the shadows is fantastic.
But sampling those massive values can be a bit tricky sometimes. Your render will have a lot of noise and artifacts, and you will have to deal with tricks like creating cropped versions of the HDRIs or clamping values in Nuke.
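To get a feel for how much light such a clamp throws away, here is a toy Python calculation. The 252 value is the range of my HDRI mentioned earlier; the clamp threshold of 8 is just an example, not a recommendation:

```python
import math

sun = 252.0                  # linear value of the brightest sun pixel in the HDRI
clamped_sun = min(sun, 8.0)  # a typical quick-fix clamp applied in comp
stops_lost = math.log2(sun / clamped_sun)
print(round(stops_lost, 2))
```

Throwing away around five stops of sun intensity is exactly why clamped HDRIs produce soft, unconvincing shadows.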
Fortunately in Clarisse we can deal with this issue quite easily.
Shading, lighting and anti-aliasing are completely independent in Clarisse. You can tweak one of them without affecting the others, saving a lot of rendering time. In many renderers shading sampling is multiplied by anti-aliasing sampling, which forces the users to tweak all the shaders in order to have decent render times.
- We are going to start with this noisy scene.
- The first thing you should do is change the Interpolation Mode to MipMapping in the Map File of your HDRI.
- Then we need to tweak the shading sampling.
- Go to the raytracer and activate previz mode. This will remove lighting information from the scene. All the noise here comes from the shaders.
- In this case we get a lot of noise from the sphere. Just go to the sphere's material and increase the reflection quality under sampling.
- I increased the reflection quality to 10 and can't see any noise in the scene any more.
- Select the raytracer again and deactivate previz mode. All the noise here now comes from lighting.
- Go to the GI Monte Carlo and disable affect diffuse. Doing this, GI won't affect lighting, and we have only direct lighting here. If you see some noise, just increase the sampling of your direct lights.
- Go to the GI Monte Carlo and re-enable affect diffuse. Increase the quality until the noise disappears.
- The render is noise free now but it still looks a bit low-res; this is because of the anti-aliasing. Go to the raytracer and increase the samples. Now the render looks just perfect.
- Finally, there is a global sampling setting that you usually won't have to play with. But just for your information: shading oversampling set to 100% will multiply the shading rays by the anti-aliasing samples, like most of the render engines out there. This will help to refine the render but rendering times will increase quite a bit.
- Now, if you want quick and dirty results for look-dev or lighting, just play with the image quality. You will not get pristine renders but they will be good enough for establishing looks.
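The cost difference behind the oversampling note above can be sketched with a toy ray count. The numbers are purely illustrative, not Clarisse defaults:

```python
aa_samples = 16        # anti-aliasing samples per pixel
shading_samples = 10   # reflection/GI samples on a material

# Decoupled sampling (Clarisse's normal behaviour): shading rays are
# independent of anti-aliasing, so the shading cost stays flat per pixel.
decoupled_rays = shading_samples

# Shading oversampling at 100%: shading rays are multiplied by the
# anti-aliasing samples, as in most other render engines.
oversampled_rays = aa_samples * shading_samples

print(decoupled_rays, oversampled_rays)
```

In this toy case oversampling costs 16 times more shading work per pixel, which is why you leave it off until the very final refinement.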
This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.
Shooting camera to Lab
The RAW material recorded on-set goes to the lab. In the lab it is converted to .dpx, which is the standard film format. Sometimes they might use .exr but it's not that common.
A lot of movies are still being filmed with film cameras, in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.
Shooting camera to Dailies
The RAW material recorded on-set goes to dailies. The cinematographer, DP, or DI applies a primary LUT or color grading to be used throughout the project.
Original scans with LUT applied are converted to low quality scans and .mov files are generated for distribution.
Dailies to Editorial
The editorial department receives the low quality scans (Quicktimes) with the LUT applied.
They use these files to make the initial cuts and bidding.
Editorial to VFX
VFX facilities receive the low quality scans (Quicktimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for color grading.
Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material and the LUT needs to be applied.
The VFX facility will have to apply the film LUTs to the work they create from scratch.
When the VFX work is done, the VFX facility renders out exr files.
VFX to DI
DI will do the final grading to match the Editorial Quicktimes.
VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.
The basic practical workflow would be:
- Read raw scan data.
- Read Quicktime scan data.
- Dpx scans usually are in LOG color space.
- Exr scans usually are in LIN color space.
- Apply LUT and other color grading to the RAW scans to match the Quicktime scans.
- Render out to Editorial using the same color space used for bringing in footage.
- Render out Quicktimes using the same color space used for viewing. If viewing, for example, in sRGB you will have to bake the LUT.
- Good Quicktime settings: Colorspace sRGB, Codec Avid DNxHD, 23.98 fps, depth: millions of colors, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit.
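The LIN/LOG distinction in the workflow above is just a per-channel transfer function. As an illustration, this is the standard sRGB encoding you would bake in when rendering Quicktimes for sRGB viewing; a generic sketch of the published curve, not any facility's actual LUT:

```python
def linear_to_srgb(x):
    """Standard sRGB encoding of a linear value in [0, 1]."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

# Linear middle grey (0.18) encodes to roughly 0.46 in sRGB.
mid_grey = linear_to_srgb(0.18)
```

A show LUT works the same way conceptually, only the curve (and often a 3D colour component) is supplied by the production instead of a standard.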
Speed texturing & look-dev session for this fella.
It will be used for testing my IBLs and light-rigs.
Renders with different lighting conditions and backplates on their way.
These are the texture channels that I painted for this suit. Tried to keep everything simple. Only 6 texture channels, 3 shaders and 10 UDIMs.
This is a very quick guide to set-up Zbrush displacements in Clarisse.
As usual, the most important thing is to extract the displacement map from Zbrush correctly. To do so, just check my previous post about this procedure.
Once your displacement maps are exported follow this mini tutorial.
- In order to keep everything tidy and clean I will put all the stuff related to this tutorial inside a new context called "hand".
- In this case I imported the base geometry and created a standard shader with a gray color.
- I'm just using a very simple Image Based Lighting set-up.
- Then I created a map file and a displacement node. Rename everything to keep it tidy.
- Select the displacement texture for the hand and set the image to raw/linear. (I'm using 32-bit .exr files.)
- In the displacement node set the bounding box to something like 1 to start with.
- Add the displacement map to the front value, leave the value at 1m (which is not actually 1m, it's more like a global unit), and set the front offset to 0.
- Finally add the displacement node to the geometry.
- That's it. Render and you will get a nice displacement.
- If you are still working with 16-bit displacement maps, remember to set the displacement node offset to 0.5 and play with the value until you find the correct behaviour.
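The offset logic in that last step can be pictured with a tiny Python sketch of my own (not Clarisse code): a 32-bit float map stores "no displacement" as 0.0, while a 16-bit map stores it as mid-grey, hence the 0.5 offset.

```python
def displace(sample, value, offset):
    """Displacement distance for one map sample, scaled by the front value."""
    return (sample - offset) * value

# 32-bit float map: 0.0 means no displacement, so the offset is 0.
flat_32 = displace(0.0, 1.0, offset=0.0)

# 16-bit map: mid-grey (0.5) means no displacement, hence offset 0.5.
flat_16 = displace(0.5, 1.0, offset=0.5)
```

Both `flat_32` and `flat_16` come out as zero, which is exactly the behaviour you want on undisplaced areas of the mesh.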
Mari is the standard tool these days for texturing in VFX facilities. There are many reasons for this, but one of the most important is that Mari is probably the only dedicated texturing software that can handle colour spaces. In a film environment this is a very important feature, because working without control over colour profiles is pretty much like working blind.
That's why Mari and Nuke are the standard tools for texturing. We also include Zbrush as a standard tool for texture artists, but only for displacement map work, where colour management doesn't play a key role.
Right now colour management in Mari is not complete, at least not as good as Nuke's, where you can control colour spaces for input and output sources. But Mari offers basic colour management tools that are really useful for film environments. We have Mari Colour Profiles and OpenColorIO (OCIO).
As texture artists we usually work with Float Linear and 8-bit Gamma sources.
- I've loaded two different images in Mari. One of them is a Linear .exr and the other one is a Gamma2.2 .tif
- With colour management set to none, we can check both images to see the differences between them.
- We get the same results in Nuke. Consistency is extremely important in a film pipeline.
- The first way to manage color spaces in Mari is via LUT's. Go to the color space section and choose the LUT of your project, usually provided by the cinematographer. Then change the Display Device and select your calibrated monitor. Change the Input Color Space to Linear or sRGB depending on your source material. Finally change the View Transform to your desired output like Gamma 2.2, Film, etc.
- The second method and recommended for colour management in Mari is using OCIO files. We can load these kind of files in Mari in the Color Manager window. These files are usually provided by the cinematographer or production company in general. Then just change the Display Device to your calibrated monitor, the Input Color Space to your source material and finally the View Transform to your desired output.
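The difference between the two test images above is nothing more than a transfer function. A toy sketch of why the Gamma 2.2 .tif and the linear .exr store different numbers for the same scene value (my own illustration, not Mari internals):

```python
def gamma_encode(linear, gamma=2.2):
    """Encode a linear value the way a Gamma 2.2 file stores it."""
    return linear ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Recover the linear value from the gamma-encoded one."""
    return encoded ** gamma

# Linear middle grey (0.18) is stored around 0.46 in a Gamma 2.2 file.
stored = gamma_encode(0.18)
```

Without colour management the viewer shows both files as-is, so the .tif looks "right" on a monitor while the .exr looks dark; the Input Color Space setting tells Mari which decode to apply so they match.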
I just published another high resolution HDRI panorama for VFX.
This set includes all the original brackets, HDRI panoramas, lighting and color references for look-development and 3D lighting and an IBL setup ready to use.
Check akromatic's site for information and downloads.
A few years ago I worked on Tim Burton's Dark Shadows at MPC. We created a full CG face for Eva Green's character Angelique.
Angelique had a fight with Johnny Depp's character Barnabas Collins, and her face and upper body were destroyed during the action.
In that case, all the broken parts were painted by hand as texture masks, and then the FX team generated 3D geometry and simulations based on those maps, using them as guides.
Recently I had to do a similar effect, but in this particular case, the work didn't require hand painting textures for the broken pieces, just random cracks here and there.
I did some research about how to create this quickly and easily, and found out that Modo's shatter command was probably the best way to go.
This is how I achieved the effect in no time.
First of all, let's have a look at Angelique, played by Eva Green.
- Once in Modo, import the geometry. The only requirement for using this tool is that the geometry has to be closed. You can close the geometry quick and dirty; this is just to create the broken pieces, and later on you can remove all the unwanted faces.
- I had already painted texture maps for this character, and I have a good UV layout as you can see here. This breaking tool is going to generate additional faces, adding new UV coordinates, but the existing UVs will remain as they are.
- In the Setup tab you will find the Shatter command.
- Apply, for example, the uniform type.
- There are some cool options like number of broken pieces, etc.
- Modo will create a material for all the interior pieces that are going to be generated. So cool.
- Here you can see all the broken pieces generated in no time.
- I'm going to scale down all the pieces in order to create a tiny gap between them. Now I can see them easily.
- In this particular case (as we did with Angelique) I don't need the interior faces at all. I can easily select all of them using the material that Modo generated automatically.
- Once all the faces are selected, just delete them.
- If I check the UVs, they seem to be perfectly fine. I can see some weird stuff caused by the fact that I quickly closed the mesh, but I don't worry about it at all; these faces will never be seen.
- I'm going to start again from scratch.
- The uniform type is very quick to generate, but all the pieces are very similar in scale.
- In this case I'm going to use the cluster type. It will generate more random pieces, creating nicer results.
- As you can see, it looks a bit better now.
- Now I'd like to generate local damage in one of the broken areas. Let's say that a bullet hits the piece and it falls apart.
- Select the fragment and apply another shatter command. In this case I'm using cluster type.
- Select all the small pieces and disable the gravity parameter under dynamics tab.
- Also set the collision set to mesh.
- I placed a sphere on top of the fragments, then activated its rigid body component. With the gravity force activated by default, the sphere will hit the fragments, creating a nice effect.
- Play with the collision options of the fragments to get different results.
- You can see the simple but effective simulation here.
- This is a quick clay render showing the broken pieces. You can easily increase the complexity of this effect with little extra cost.
- This is the generated model, with the original UV mapping and high resolution textures applied in Mari.
- Works like a charm.
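The "scale down all the pieces to create a tiny gap" step is just a uniform scale of each fragment about its own centroid. A hedged Python sketch of the idea (my own illustration, not Modo API code):

```python
def shrink_fragment(points, factor=0.95):
    """Scale a fragment's points toward their centroid to open a small gap."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(cx + (x - cx) * factor,
             cy + (y - cy) * factor,
             cz + (z - cz) * factor) for x, y, z in points]

# A triangle shrinks slightly toward its own centre; its centroid stays put.
tri = shrink_fragment([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
```

Because each fragment scales about its own centroid rather than the world origin, the pieces stay in place and only the seams open up.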
This is the final trailer for Exodus: Gods and Kings. One of my latest projects at Double Negative.
This is a quick introduction to HDRI shooting on set for visual effects projects.
If you want to go deeper on this topic please check my DT course here.
The list below is professional equipment for HDRI shooting. Good results can be achieved using amateur gear (you don't necessarily need to spend a lot of money on HDRI capturing), but the better the equipment you own, the easier, faster and better the results you'll get. Obviously this gear selection is based on my taste.
- Lowepro Vertex 100 AW backpack
- Lowepro Flipside Sport 15L AW backpack
- Full frame digital DSLR (Nikon D800)
- Fish-eye lens (Nikkor 10.5mm)
- Multi purpose lens (Nikkor 28-300mm)
- Remote trigger
- Panoramic head (360 precision Atome or MK2)
- akromatic kit (grey ball, chrome ball, tripod plates)
- Lowepro Nova Sport 35L AW shoulder bag (for akromatic kit)
- Macbeth chart
- Material samples (plastic, metal, fabric, etc)
- Tape measure
- Gaffer tape
- Additional tripod for akromatic kit
- Cleaning kit
- iPad or laptop
- External hard drive
- CF memory cards
- Extra batteries
- Data cables
- Witness camera and/or second camera body for stills
- Full coverage of the scene (fish-eye shots)
- Backplates for look-development (including ground or floor)
- Macbeth chart for white balance
- Grey ball for lighting calibration
- Chrome ball for lighting orientation
- Basic scene measurements
- Material samples
- Individual HDR artificial lighting sources if required
- Try to carry only the indispensable equipment. Leave cables and other stuff in the van, don’t carry extra weight on set.
- Set up the camera, clean lenses, format memory cards, etc., before you start shooting. Extra camera adjustments may be required at the moment of shooting, but try to establish exposure, white balance and other settings before the action. Know your lighting conditions.
- Have more than one CF memory card with you all the time ready to be used.
- Have a small cleaning kit with you all the time.
- Plan the shoot: Write a shooting diagram with your own checklist, with the strategies that you would need to cover the whole thing, knowing the lighting conditions, etc.
- Try to plant your tripod where the action happens or where your 3D asset will be placed.
- Try to reduce the cleaning area. Don't put anything at your feet or around the tripod; you will have to hand paint it out later in Nuke.
- When shooting backplates for look-dev use a wide lens, something around 24mm to 28mm, and always cover more space than just where the action occurs.
- When shooting textures for scene reconstruction always use a Macbeth chart and at least 3 exposures.
- Plant the tripod where the action happens, stabilise it and level it
- Set manual focus
- Set white balance
- Set ISO
- Set raw+jpg
- Set aperture
- Meter exposure
- Set neutral exposure
- Read histogram and adjust neutral exposure if necessary
- Shoot slate (operator name, location, date, time, project code name, etc)
- Set auto bracketing
- Shoot 5 to 7 exposures with 3 stops difference, covering the whole environment
- Place the akromatic kit where the tripod was placed, and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
- Place the Macbeth chart 1m away from tripod on the floor and take 3 exposures
- Take backplates and ground/floor texture references
- Shoot reference materials
- Write down measurements of the scene, especially if you are shooting interiors.
- If shooting artificial lights take HDR samples of each individual lighting source.
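The bracketing step in the checklist above ("5 to 7 exposures with 3 stops difference") can be sketched as a list of shutter speeds around a metered base. A toy Python helper; the 1/125s base is just an example value:

```python
def bracket_shutters(base_shutter, brackets=5, step_stops=3):
    """Shutter speeds (seconds) for a bracketing run centred on the base."""
    half = brackets // 2
    return [base_shutter * 2.0 ** (step_stops * i) for i in range(-half, half + 1)]

# 5 exposures 3 stops apart cover a 12-stop range around the base exposure.
speeds = bracket_shutters(1.0 / 125.0, brackets=5, step_stops=3)
```

This is the range a device like the Promote Control automates for you: ISO and aperture stay fixed, and only the shutter speed walks up and down in stops.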
Exposures starting point
- Daylight, sun visible: ISO 100, f/22
- Daylight, sun hidden: ISO 100, f/16
- Cloudy: ISO 320, f/16
- Sunrise/sunset: ISO 100, f/11
- Interior, well lit: ISO 320, f/16
- Interior, ambient bright: ISO 320, f/10
- Interior, bad light: ISO 640, f/10
- Interior, ambient dark: ISO 640, f/8
- Low light situation: ISO 640, f/5
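To compare any two of the starting points above, here is a quick Python helper of my own that counts how many stops brighter setting B exposes than setting A at the same shutter speed:

```python
import math

def stops_difference(iso_a, f_a, iso_b, f_b):
    """Stops of extra exposure B gives over A at the same shutter speed."""
    return 2.0 * math.log2(f_a / f_b) + math.log2(iso_b / iso_a)

# e.g. the low-light interior setting vs the sunny daylight setting:
diff = stops_difference(100, 22, 640, 10)
```

Opening the aperture counts twice per halving of the f-number (it's an area), while ISO counts once per doubling, which is why the two terms have different weights.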
That should be it for now, happy shooting :)
A few months ago, when my workmates from Double Negative were working on Transcendence, I saw them using Houdini to create beautiful animations using tiny geometries. They looked like millions of small cubes building shapes and forms.
Some time later other people started doing similar stuff with Maya's XGen and other tools. I tried it and it works like a charm.
I was curious about these images and then decided to recreate something similar, but I wanted to do it in a simpler and quicker way. I found out that combining Cinema 4D and Maya is probably the easiest way to create this effect.
If you have any clue to do the same in Modo or Softimage, please let me know, I'm really curious.
This is my current approach.
In Cinema 4D create a plane with a lot of subdivisions. Each one of those subdivisions will generate a cube. In this case I’m using a 1000cm x 1000cm plane with 500 subdivisions.
Create a new material and assign it to the plane.
Select the plane and go to the menu Simulate -> Hair objects -> Add hair.
If you zoom in you will see that one hair guide is generated by each vertex of the plane.
In the hair options, reduce the guide segments to 1, because we just need straight guides; we don't care about hair resolution.
Also change the root to polygon center. Now the guides grow from each polygon center instead of each vertex of the plane.
Disable the render hair option (we are not going to render hairs) in the Generate tab. Also switch the type to square.
Right now we can see cubes instead of hair guides, but they are very thin.
We can control the thickness using the hair material. In this case I’m using 1.9 cm.
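As a sanity check, the numbers used so far work out like this (a trivial Python sketch):

```python
plane_size = 1000.0               # plane width in cm
subdivisions = 500
cell = plane_size / subdivisions  # each polygon, and therefore each cube, is 2cm wide
thickness = 1.9                   # cube thickness set in the hair material, in cm
gap = cell - thickness            # visible gap between neighbouring cubes
cubes = subdivisions ** 2         # one guide, hence one cube, per polygon
print(cell, round(gap, 2), cubes)
```

So the 1.9cm thickness leaves a 0.1cm gap between 2cm cells, and the plane generates a quarter of a million cubes.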
The next thing is to randomise the height. Using a procedural noise is enough to get nice results. We can also create animations very quickly; just play with the noise values.
Remove the noise for now. We want to control the length using a bitmap.
Also hide the hair; it’s quicker to set up if we don’t see the hair in the viewport.
In the Plane material, go to luminance and select a bitmap. Adjust the UV Mapping to place the bitmap in your desired place.
In the hair material, use the same image for the length parameter.
Copy the same uv coordinates from the plane material.
Add a pixel effect to the texture and set the number of pixels based on the resolution of the plane, in this case 500.
Do this in both materials, the plane and the hair. Now each cube will be mapped with a small portion of the bitmap.
Display the hair system and voila, that’s it.
Obviously, the greater the contrast in your image, the better. I strongly recommend using high dynamic range images; as you know, their contrast ratio is huge compared with low dynamic range images.
At this point you can render it here in C4D or just export the geometry to another 3D software and render engine.
Select the hair system and make it editable. Now you are ready to export it as .obj
Import the .obj in your favourite 3D software. Then apply your lighting and shaders, and connect the image that you used before to generate the hair system. Of course, you can control the color of the hair system using any other bitmap or procedurals.
In order to keep this exercise very simple, I’m just rendering a beauty pass and an ambient occlusion pass, but of course you can render as many AOVs as you need.
I also quickly animated the translation of the hair system and added motion blur and depth of field to the camera to get a more dynamic image, but this is really up to you.
This is just the tip of the iceberg, with this quick and easy technique you can create beautiful images combining it with your expertise.
Just some experiments using cubes, a lot of cubes.
First attempt at creating a shader that looks like rough 2D sketches.
I will definitely put more effort into this in the future.
I'm pretty much combining three different pen strokes.