Let's say it is finished for now. It will be the asset used in my next texturing and look-dev course.
Another wip of this character. More to come.
I'm working on a texturing and look-dev course for Elephant VFX and this is the asset I'm preparing. Just an arm for now, but I guess it's time to show you guys something. Stay tuned.
I recently started using Blackmagic's Fusion at home (budget reasons) and I'm liking it so far. But one of the most important features coming from Nuke is obviously the ability to shuffle between all the AOVs of your multi-channel EXRs, and unfortunately Fusion doesn't support this. It has something called Channel Booleans to separate RGB channels, but not AOVs.
Chad Ashley pointed me to this third party script that splits a multi-channel EXR into separate Loaders, one for each of your AOVs. Not as good as Nuke's shuffle, but good enough!
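If you are curious what such a split actually does, the grouping logic is simple enough to sketch in Python. This is only an illustration (the names below are mine, not from the actual script), and the file-reading helper assumes the third-party OpenEXR bindings:

```python
def group_channels_by_aov(channel_names):
    """Map AOV name -> its channels. EXR channels are usually named
    'aov.component' (e.g. 'diffuse.R'); bare R/G/B/A channels belong
    to the beauty layer."""
    aovs = {}
    for name in channel_names:
        aov, sep, _component = name.rpartition(".")
        aovs.setdefault(aov if sep else "rgba", []).append(name)
    return aovs

def list_aovs(path):
    """List the AOVs of a multi-channel EXR. Needs the third-party
    OpenEXR bindings (pip install OpenEXR)."""
    import OpenEXR
    header = OpenEXR.InputFile(path).header()
    return group_channels_by_aov(header["channels"].keys())
```

A Fusion (or Nuke) splitter then just creates one Loader/Read per key of that mapping.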
In a previous post I mentioned the importance of having high frequency detail when scanning assets on set. If that detail isn't there, we can simply create it. Sometimes this is actually the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.
If we are dealing with assets that are used on set but won't appear in the final edit, they probably won't be painted at all. There is no need to spend resources on that, right? But we might still need to scan them to create the virtual versions that will ultimately appear on screen.
As mentioned before, without enough surface detail it is very difficult to scan assets using photogrammetry, so we need to create high frequency detail ourselves.
Let's say we need to create a virtual asset of this physical mask. It is completely plain and white; there isn't much detail on its surface. We can create high frequency detail just by painting small dots or placing small stickers across the surface.
In this particular case I'm using a regular DSLR with a zoom lens, a tripod, a support for the mask and some washable paint. I prefer small round stickers because they create fewer artifacts in the scan, but I ran out of them.
I built this support a while ago to scan fruits and other organic assets.
The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing gray than white.
Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.
Once the material has been processed you should get a pretty decent scan, probably an impossible task without creating all that high frequency detail first.
These days we are lucky enough to be able to apply the same photographic and cinematographic principles to our work as visual effects artists lighting shots. That's why we are always talking about cinematography and cinematic language. Today we are going to cover some very common techniques from the cinematography world: hard light, soft light, specular light and diffuse light.
The main difference between hard light and soft light does not lie in the light itself but in the shadows. When the shadow is perfectly defined and opaque we talk about hard light. When the shadows are diffuse, and also less opaque, we call it soft light.
Is there any specific lighting source that creates hard or soft lighting? The answer is no. Any light can create hard or soft lighting depending on two factors.
- Size: not only the size of the practical lighting source, but also its size in relation to the subject being illuminated.
- Distance: the placement of the lighting source in relation to the subject.
Diffraction refers to various phenomena that occur when a wave encounters an obstacle or a slit. It is defined as the bending of light around the corners of an obstacle or aperture into the region of geometrical shadow of the obstacle.
When a light beam hits an object, if the size of the lighting source is similar to the size of the object, the light rays travel parallel and bend slightly towards the interior.
If the lighting source is smaller than the object, or placed far away from it, the light rays won't bend, creating very hard and defined shadows.
If the lighting source is bigger than the subject and placed near it, the light rays bend a lot, generating soft shadows.
If the lighting source is much bigger than the subject and placed near it, the light rays bend so much that they mix at some point. Consequently the profile of the subject is no longer represented in the shadows.
If a big lighting source is placed very far from the subject, its apparent size relative to the subject shrinks and it behaves like a small lighting source, generating hard shadows. The most common example is the sun: it is huge but very far away, so it still generates hard light. Only on cloudy days does the sunlight get diffused by the clouds.
In two lines:
- Soft light: big lighting sources and/or placed close to the subject.
- Hard light: small lighting sources and/or placed far from the subject.
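For the more technically minded, the same size/distance rule can be quantified with simple umbra/penumbra geometry (similar triangles, leaving diffraction aside). A quick sketch with made-up but realistic numbers:

```python
def penumbra_width(source_diameter, source_distance, shadow_distance):
    """Width of the shadow's soft edge (penumbra), by similar triangles.

    source_diameter -- diameter of the light source
    source_distance -- light-to-occluder distance
    shadow_distance -- occluder-to-shadow-surface distance
    All distances in the same units; returns the same units.
    """
    return source_diameter * shadow_distance / source_distance

# A 0.5 m softbox 1 m from the subject, shadow cast 1 m behind it:
softbox = penumbra_width(0.5, 1.0, 1.0)      # 0.5 m of blur: soft shadows

# The sun: enormous, but so far away it acts like a tiny source.
sun = penumbra_width(1.39e9, 1.5e11, 1.0)    # around 9 mm of blur: hard shadows
```

Bigger source or closer source means a wider penumbra, which is exactly the summary above.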
Specular light: a lighting source that is very powerful at its center and gradually loses energy towards its edges, like a traditional torch. It generates very exposed, bright areas on the subject, like the lights used at photo calls and interviews.
Diffuse light: a lighting source with uniform energy all over its surface. The lighting is more even when it reaches the subject's surface.
Diffuse light and soft light are not the same thing. When we talk about soft light we are talking about soft shadows. When we mention diffuse light we are talking about the distribution of the light, spread equally across the source's surface.
Some 3D samples with Legos.
- Here the character is lit by a small lighting source, smaller than the character itself and placed far from the subject. We get hard light and hard shadows.
- Here we have a bigger lighting source, pretty much the same size as the character and placed close to it. We get soft light, soft shadows.
- This is a big lighting source, much bigger than the subject. We now get extra soft light, losing the shape of the shadows.
- Now the character is lit by the sun. The sun is a huge lighting source, but being placed far, far away from the subject it behaves like a small one, generating hard light.
- Finally, another example of very hard light caused by the camera flash, a very powerful and concentrated point of light placed very close to the subject. You can mimic this in 3D by greatly reducing the light's spread value.
- Now a couple of images for specular and diffuse light.
In this video I will show you my process for converting 3D scans into assets ready for production. The audio is in Spanish, so feel free to mute it or try to learn some of the language of Cervantes :)
Meshlab is probably the only available solution (proprietary Lidar software doesn't count) when you have to deal with a very heavy poly count. I'm working with some complex terrains, some of them up to 50 million polys, and Maya or Zbrush just can't handle that. I'm reducing the poly count considerably fast in Meshlab with its polygon reduction tools.
- This terrain has more than 16 million polys. Maya can't handle it very well, and Zbrush can't manage the memory to even open it. Just import it into Meshlab.
- You will be using the Quadric Edge Collapse Decimation tool a lot.
- There are different reduction strategies available; I like to use the percentage one, in this case 0.5.
- That gives me an 8 million poly terrain.
- I just run the tool one more time to get a 4 million poly terrain. I can work with this in Maya :)
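By the way, if you need to batch this over many terrains, MeshLab's own Python bindings (pymeshlab) expose the same filter. A sketch, assuming a recent pymeshlab release (the filter name changed in older versions, so double check yours):

```python
def decimate_terrain(src_path, dst_path, passes=2, percentage=0.5):
    """Quadric Edge Collapse Decimation via pymeshlab (third-party:
    pip install pymeshlab). Filter/parameter names are from recent
    releases and may differ in older ones."""
    import pymeshlab
    ms = pymeshlab.MeshSet()
    ms.load_new_mesh(src_path)
    for _ in range(passes):  # e.g. 16M -> 8M -> 4M faces at 0.5
        ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                        targetperc=percentage)
    ms.save_current_mesh(dst_path)

def faces_after(faces, passes, percentage=0.5):
    """Predicted face count after repeated percentage reductions."""
    for _ in range(passes):
        faces = int(faces * percentage)
    return faces
```

Two passes at 0.5 take the 16 million poly terrain down to the 4 million I can work with in Maya.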
Having some fun with these small guys.
I will try to do a couple of these a month.
Quick tip here. Whenever possible, use some kind of high frequency detail to capture references for your assets. In this scenario I'm scanning this huge rock from photos: only 50 images, and in very bad conditions. Low light, shot hand-held with no tripod at all, very windy and rainy.
Thanks to all the great high frequency detail on the surface of this rock, the output is quite good to use as a modeling reference, even to extract highly detailed displacement maps.
Notice in the image below that I'm using only 50 pictures. Not much, you might say. But thanks to all the tiny detail, the photogrammetry software does very well reconstructing the point cloud to generate the 3D model. There is a lot of information for finding common points between photos.
The shooting pattern couldn't be simpler: just one figure eight all around the subject. The alignment was completely successful in Photoscan.
As you can see here, even with a small number of photos and not the best lighting conditions, the output is quite good.
I did an automatic retopology in Zbrush. I don't care much about the topology; this asset is not going to be animated at all. I just need a manageable topology to create nice UV mapping and reproject all the fine detail in Zbrush, to use later as a displacement map.
A few render tests.
My friend David Munoz Velazquez just pointed me to this great script that flattens geometry based on its UV mapping, pretty useful for retopology tasks. In this demo I use it to create nice topology for 3D garments made in Marvelous Designer. Then I can apply any new simulation changes to the final mesh using morphs. Check it out.
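The core idea behind a UV-flatten script like this is tiny: move every vertex to its UV coordinate on a flat plane, then morph between the flat mesh and the original. A minimal sketch with a hypothetical per-vertex UV layout (not the actual script; real meshes with per-face UVs would need their vertices split first):

```python
def flatten_to_uvs(uvs, scale=1.0):
    """Move each vertex to its (u, v) coordinate on the Z=0 plane.

    uvs -- one (u, v) pair per vertex (hypothetical layout).
    Returns flattened (x, y, z) positions matching the UV layout.
    """
    return [(u * scale, v * scale, 0.0) for u, v in uvs]

# A quad whose UVs fill the 0-1 tile flattens to a unit square:
flat = flatten_to_uvs([(0, 0), (1, 0), (1, 1), (0, 1)])
```

Retopologising the flat version is easy because there are no overlaps or occlusions, and the morph carries the new topology back onto the simulated garment.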
One feature that I really like in Clarisse is shading layers. With them you can drive shader assignments based on naming conventions or the location of assets in the scene. With this method you can assign shaders to a very complex scene structure in no time. In this particular case I'll show you how to shade an entire army and create shading/texturing variations in just a few minutes.
I'll be using an alembic cache simulation exported from Maya using Golaem. Usually you get thousands of objects with different naming conventions, which makes shader assignment a bit laborious. With shading layer rules in Clarisse we can speed up this tedious process a lot.
- Import an alembic cache with the crowd simulation through file -> import -> scene
- In this scene I have 1518 different objects.
- I'm going to create an IBL rig with one of my HDRIs to get some decent lighting in the scene.
- I created a new context called geometry where I placed the army and also created a ground plane.
- I also created another context called shaders where I'm going to place all my shaders for the soldiers.
- In the shaders context I created a new material called dummy, just a lambertian grey shader.
- We are going to use shading layers to apply shaders globally based on context and naming convention. I created a shading layer called army (new -> shading layer).
- With the pass (image) selected, select the 3D layer and apply the shading layer.
- Using the shading layer editor, add a new rule to apply the dummy shader to everything in the scene.
- I'm going to add a rule for everything called heavyArmor.
- Then just configure the heavyArmor shader with metal properties and its corresponding textures.
- Create a new rule for the helmets and apply the shader that contains the proper textures for the helmets.
- I keep adding rules and shaders for the different parts of the soldiers.
- If I want random variation, I can create shading layers for specific part names or, even easier and faster, put a few items in a new context and create a new shading rule for them. For the bodies I want a mix of caucasian and black skin soldiers, so I grabbed a few bodies and placed them inside a new context called black, then created a new shading rule that applies a shader with different skin textures to all the bodies in that context.
- I repeated the same process for the shields and other elements.
- At the end of the process I can have a very populated army with a lot of random texture variations in just a few minutes.
- This is how my shading layers look at the end of the process.
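If you are curious how wildcard rules like these resolve, here is a plain Python sketch of the idea. The ordering is my assumption (later, more specific rules override the catch-all dummy, which matches the behavior above); Clarisse of course evaluates its own rules internally, and all the paths and shader names below are made up:

```python
from fnmatch import fnmatchcase

def resolve_shader(object_path, rules):
    """Return the shader for an object: the last matching rule wins,
    so the catch-all dummy goes first and specific rules override it.
    (Sketch only; Clarisse evaluates its own rules internally.)"""
    shader = None
    for pattern, candidate in rules:
        if fnmatchcase(object_path, pattern):
            shader = candidate
    return shader

# Hypothetical rules in the spirit of the army scene above:
rules = [
    ("*", "dummy"),
    ("*heavyArmor*", "metal_armor"),
    ("*helmet*", "helmet_textured"),
    ("*/black/*body*", "skin_dark"),
]
```

With 1518 objects, four or five rules like these replace what would otherwise be hundreds of manual assignments.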
Texture artists, matte painters and environment artists often have to deal with UDIMs in Nuke. This is a very basic template that hopefully can illustrate how we usually handle this situation.
Cons:
- Slower than using Mari. Each UDIM is treated individually.
- No virtual texturing, so a slower workflow. Yes, you can use Nuke's proxies, but they are not as good as virtual texturing.
Pros:
- Not dependent on a paint buffer. Always the best resolution available.
- Non destructive workflow, nodes!
- Save around £1,233 on Mari's license.
- I'll be using this simple footage as base for my matte.
- We need to project this in Nuke and bake it on to different UDIMs to use it later in a 3D package.
- As geometry support I'm using this plane with 5 UDIMs.
- In Nuke, import the geometry support and the footage.
- Create a camera.
- Connect the camera and footage using a Project 3D node.
- Disable the crop option of the Project 3D node. Otherwise the projections won't go any further than the 0-1 UV range.
- Use a UV Tile node to point to the UDIM that you need to work on.
- Connect the img input of the UV Tile node to the geometry support.
- Use a UV Project node to connect the camera and the geometry support.
- Set projection to off.
- Import the camera of the shot.
- Look through the camera in the 3D view and the matte should be projected on to the geometry support.
- Connect a Scanline Render to the UV Project.
- Set the projection model to UV.
- In the 2D view you should see the UDIM projection that we set previously.
- If you need to work with a different UDIM just change the UV Tile.
- So this is the basic setup. Do whatever you need in between like projections, painting and so on to finish your matte.
- Then export all your UDIMs individually as texture maps to be used in the 3D software.
- Here I just rendered the UDIMs extracted from Nuke in Maya/Arnold.
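For reference, the UDIM numbers that the UV Tile node points to follow the standard convention: tiles are numbered 1001, 1002 and so on from left to right, ten per row. In Python:

```python
def udim_from_tile(u_tile, v_tile):
    """UDIM number for a 0-based (u, v) tile: (0, 0) is 1001."""
    return 1001 + u_tile + 10 * v_tile

def tile_from_udim(udim):
    """0-based (u, v) tile offsets for a UDIM number."""
    index = udim - 1001
    return index % 10, index // 10

# The 5-UDIM plane used above covers the first row of tiles:
first_row = [udim_from_tile(u, 0) for u in range(5)]   # 1001..1005
```

Handy when naming the exported texture maps so the 3D package picks up each tile automatically.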
If you are new to Arnold you are probably looking for RAW lighting and albedo AOVs in the AOV editor. And yes, you are right, they are not there, at least when using AiStandard shaders.
The easiest and fastest solution is to use AlShaders, which include both RAW lighting and albedo AOVs. But if you need to stick with AiStandard shaders, you can create your own AOVs quite easily.
- In this capture you can see available AOVs for RAW lighting and albedo for the AlShaders.
- If you are using AiStandard shaders you won't see those AOVs.
- If you still want/need to use AiStandard shaders, you will have to render your beauty pass with the standard AOVs and utility passes, and create the albedo pass by hand. You can easily do this by replacing the AiStandard shaders with Surface shaders.
- If we have a look at them in Nuke they will look like this.
- If we divide the beauty pass by the albedo pass we will get the RAW lighting.
- We can now modify only the lighting without affecting the colour.
- We can also modify the colour component without modifying the lighting.
- In this case I'm color correcting and cloning some stuff in the color pass.
- With a multiply operation I can combine both elements again to obtain the beauty render.
- If I disable all the modifications to both lighting and color, I get exactly the same result as the original beauty pass.
- Finally I'm adding a ground using my shadow catcher information.
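The divide/recombine trick is easy to sanity check outside Nuke. A per-pixel sketch in plain Python, with a guard against black albedo pixels (you would want the same guard in Nuke), using made-up pixel values:

```python
def divide(beauty, albedo, eps=1e-6):
    """Per-channel beauty / albedo = RAW lighting. Black albedo
    pixels would divide by zero, so guard them."""
    return tuple(b / a if abs(a) > eps else 0.0
                 for b, a in zip(beauty, albedo))

def multiply(raw, albedo):
    """Per-channel RAW lighting * albedo rebuilds the beauty."""
    return tuple(r * a for r, a in zip(raw, albedo))

beauty = (0.18, 0.40, 0.06)   # a rendered pixel
albedo = (0.60, 0.80, 0.20)   # its flat colour
raw = divide(beauty, albedo)          # lighting only, roughly (0.3, 0.5, 0.3)
rebuilt = multiply(raw, albedo)       # untouched, it matches the beauty again
```

Grading `raw` changes only the lighting; grading `albedo` changes only the colour; the multiply puts them back together.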
As every Wednesday, two new words for our dictionary.
(1) The term sometimes used for the editor. (2) A large flag, long and narrow in shape, used for blocking light from the camera or an area of the set.
A 200 watt spotlight with a three-inch Fresnel lens.
This video was created for elephantvfx.com, which means it's only available in Spanish. But you can mute it and follow what I do visually :)
I basically explain how to render UV AOVs in a couple of different ways using Maya and Arnold.
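Conceptually a UV AOV just writes each sample's (u, v) coordinates into the red and green channels, which is also what Arnold's utility shader outputs when set to its uv color mode, if I remember correctly. A tiny sketch of that mapping:

```python
def uv_to_color(u, v):
    """What a UV AOV stores per sample: u in red, v in green, 0 in
    blue, wrapped so every UDIM tile shows the same 0-1 gradient."""
    return (u % 1.0, v % 1.0, 0.0)

# The centre of any tile reads half red, half green:
centre = uv_to_color(2.5, 1.5)   # (0.5, 0.5, 0.0)
```

That is why a UV pass looks like a red/green gradient, and why you can use it in comp to re-texture a render after the fact.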