Film dictionary by Xuan Prada

Like every Wednesday, a couple more cinematic words for my film dictionary.

Reaction shot
A shot of a character, generally a close-up, reacting to someone or something seen in the preceding shot. The shot is generally a cutaway from the main action.

Smoke pot
A small container that produces smoke for mechanical effects. The container holds some chemical, such as naphthalene or bitumen, which is fired either by electricity or a burning fuse.

Film dictionary by Xuan Prada

It's Wednesday again. Let's write another couple of cinematic words.

Closed set
A set, either in the studio or on location, that is not open to any visitors, including studio executives, and is open only to the director, performers and crew. Sets are closed if a particularly intimate or controversial scene is being photographed, if the subject or treatment is to be kept secret, or if there are problems in the production itself that must be worked out.

Stunt person
An individual who substitutes for an actor or actress to perform some difficult or dangerous action. This person must, of course, bear some resemblance to the original performer and be dressed in an identical manner. Shots of such action are taken so that the identity of the stunt person is hidden. Stunt people are especially adept at taking falls, surviving crashes or playing the piano. When a film requires a group of such people performing a number of these actions, a stunt coordinator is hired.

Film dictionary by Xuan Prada

A couple of cinematic words every Wednesday. I won't be following any theme or order in particular, just for the fun of learning new film-related stuff.

American Museum of the Moving Image
Founded in 1988, and located in Astoria, New York, the first museum in the United States devoted to the history of the production, distribution, and exhibition of film, television, and video art. The museum is concerned with all types of work employing the moving image - fictional, documentary, avant-garde, network television, commercials, etc. Abutting the old Astoria Studios, the museum features changing exhibitions while also presenting permanent displays relating to all aspects of the industry.
Especially impressive is its collection of cameras, projectors, television sets, and equipment from the entire history of both cinema and television. The museum also presents screenings of old and new films in two theatres, often featuring the director or someone involved with the production.

Website.

Dot
A small, circular gobo or scrim, from 10 to 20 cm in diameter, that blocks part of a luminaire's light from falling on a specific area of the set or on the lens of the camera. Also called a target.

More next week :)

Akromatic's workshop by Xuan Prada

Just a few photos from Akromatic's workshop, working on our spheres for VFX.

STmaps by Xuan Prada

One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different footage, or between footage and 3D renders, is also very common. When working across different software like 3DEqualizer, Nuke, Flame, etc., having a common, standard way to copy lens information is a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
  • Write the image as a 32-bit .exr.
  • This will output the UV information, which can be read in any software.
  • To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the UV map, as in the sketch below.
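For reference, this is a minimal Nuke Python sketch of that last step. File names are hypothetical, and the STMap input order is worth double-checking in your version of Nuke:

# Warp footage through a baked UV map.
import nuke

plate = nuke.nodes.Read(file="plate.####.exr")   # footage to distort/undistort
uv_map = nuke.nodes.Read(file="lens_uv.exr")     # the 32-bit EXR UV map

stmap = nuke.nodes.STMap()
stmap.setInput(0, plate)     # src: the image to be warped
stmap.setInput(1, uv_map)    # stmap: where each pixel samples from
stmap["uv"].setValue("rgb")  # channels holding the UV coordinates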

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better interaction between the environment and our 3D assets. Which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information I'd need later when working on the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time. Not just texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 shots of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • Every single shot is composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images (see the merging sketch after this list), then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for the photogrammetry process.
  • I also shot a few equirectangular HDRIs with an even higher dynamic range, and projected these in Mari using the environment projection feature. Once I completed the projections from the different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. With the 3D scan this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high-detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high-resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan together with the corresponding rectilinear HDR images, applied the same lens distortion to them, and projected them in Mari and/or Nuke through the cameras. Always keeping the dynamic range.
  • Finally I exported all the UDIMs (around 70) to Maya. All of them 16-bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
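By the way, the bracket-merging step can be reproduced with a few lines of OpenCV. This is just a rough sketch with hypothetical file names and shutter times, not the exact tool I used on this project:

# Merge 3 bracketed exposures, 3 stops apart, into a linear HDR image.
import cv2
import numpy as np

files = ["brk_under.jpg", "brk_mid.jpg", "brk_over.jpg"]
times = np.array([1/2000.0, 1/250.0, 1/30.0], dtype=np.float32)  # shutter times
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to linear HDR.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("brk_merged.hdr", hdr)  # Radiance HDR; write EXR if your build supports it

As for the mipmapping, Arnold ships with the maketx utility, so something like maketx texture.1001.exr -o texture.1001.tx per UDIM does the job.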

Bake from Nuke to UVs by Xuan Prada

  • Export your scene from Maya with the geometry and camera animation.
  • Import the geometry and camera in Nuke.
  • Import the footage that you want to project and connect it to a Project3D node.
  • Connect the cam input of the Project3D node to the previously imported camera.
  • Connect the img input of the ReadGeo node to the Project3D node.
  • Look through the camera and you will see the image projected on to the geometry through the camera.
  • Paint or tweak whatever you need.
  • Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
  • The projection option of the UVProject node should be set to off.
  • Use a ScanlineRender node and connect its obj/scene input to the UVProject.
  • Set the projection mode to UV.
  • If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
  • Finally use a write node to output your DMP work.
  • Render in Maya as expected.
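The whole graph can also be wired up with Nuke's Python API. A rough sketch, with hypothetical file names; the input indices may need checking in your version of Nuke:

# Build the bake setup described above.
import nuke

cam = nuke.nodes.Camera2(read_from_file=True, file="shot_cam.abc")  # camera from Maya
geo = nuke.nodes.ReadGeo2(file="shot_geo.abc")                      # geometry from Maya
plate = nuke.nodes.Read(file="plate.####.exr")                      # footage to project

proj = nuke.nodes.Project3D()
proj.setInput(0, plate)  # img input
proj.setInput(1, cam)    # cam input
geo.setInput(0, proj)    # feed the projection into the ReadGeo's img input

uvp = nuke.nodes.UVProject()
uvp.setInput(0, geo)     # geometry
uvp.setInput(1, cam)     # axis/cam input
uvp["projection"].setValue("off")

render = nuke.nodes.ScanlineRender()
render.setInput(1, uvp)  # obj/scene input
render["projection_mode"].setValue("uv")

out = nuke.nodes.Write(file="dmp_bake.####.exr")
out.setInput(0, render)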

Promote Control + 5D Mark III by Xuan Prada

Each camera works a little bit differently regarding the use of the Promote Control system for automated tasks. In this particular case I'm going to show you how to configure both the Canon EOS 5D Mark III and the Promote Control for use in VFX look-dev and lighting image acquisition.

  • You will need the following:
    • Canon EOS 5D Mark III
    • Promote Control
    • USB cable + adaptor
    • Shutter release CN3
  • Connect both cables to the camera and to the Promote Control.
  • Turn on the Promote Control and press the right and left buttons simultaneously to go to the menu.
  • In setup menu 2, "Use a separate cable for shutter release", select yes.
  • In setup menu 9, "Enable exposures below 1/4000", select yes. This is very important if you need more than 5 brackets for your HDRIs.
  • Press the central button to exit the menu.
  • Turn on your Canon EOS 5D Mark III and go to the menu.
  • Mirror lock-up should be off.
  • Long exposure noise reduction should be off as well. We don't want the noise level to vary between brackets.
  • Find your neutral exposure and pass the information on to the Promote Control.
  • Select the desired number of brackets and you are ready to go.



Arnold subdivision scripts by Xuan Prada

These were sent to me by my friend and ex-workmate Ramón López, and were programmed by Pilar Molina during the production of the short film Shift.
One of the scripts adds Arnold subdivision to all the objects in the scene, and another one adds the same property but only to the selected objects. Finally, there is another handy script that substitutes all the textures in your scene with their equivalent .tx textures; the idea behind it is sketched below.
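I haven't seen Pilar's code, but the idea behind that last script is something like this minimal Python sketch, using vanilla Maya node and attribute names:

# Repoint every file texture in the scene to its .tx equivalent, if it exists.
import os
import maya.cmds as cmds

for file_node in cmds.ls(type="file"):
    path = cmds.getAttr(file_node + ".fileTextureName")
    tx_path = os.path.splitext(path)[0] + ".tx"
    if os.path.exists(tx_path):
        cmds.setAttr(file_node + ".fileTextureName", tx_path, type="string")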

Download them here or here.
Thanks Pilar and Ramón.

Subdivide multiple objects in Arnold by Xuan Prada

As you probably know, Arnold manages subdivision individually per object. There is no way to subdivide multiple objects at once. Obviously, if you have a lot of different objects in a scene, going one by one adding Arnold's subdivision property doesn't sound like a good idea.

This is the easiest way that I found to solve this problem and subdivide tons of objects at once.
I have no idea at all about scripting, so if you have a better solution, please let me know :)

  • This is the character that I want to subdivide. As you can see it has a lot of small pieces. I'd like to keep them separate and subdivide every single one of them.

Model by SgtHK.

  • First of all, you need to select all the geometry shapes. To do this, select all the geometry objects in the outliner and run these lines in the script editor.

/* You have to select all the objects you want to subdivide; it doesn't work with groups or locators.
Once the shapes are selected, just change aiSubdivType and aiSubdivIterations in the attribute spread sheet. */

pickWalk -d down;

string $shapesSelected[] = `ls -sl`;

  • Once all the shapes are selected, go to the attribute spread sheet.
  • Filter by ai subd.
  • Just type the subdivision method and iterations.
  • That's it, the whole character is now subdivided. If you'd rather set everything from a script, see the Python sketch below.
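A minimal Python sketch of that idea, assuming the Arnold (MtoA) plug-in is loaded; the subdivision values are just examples:

# Set Arnold subdivision on the shapes of all selected objects.
import maya.cmds as cmds

sel = cmds.ls(selection=True)
shapes = cmds.listRelatives(sel, shapes=True, fullPath=True, type="mesh") or []
for shape in shapes:
    cmds.setAttr(shape + ".aiSubdivType", 1)        # 1 = catclark
    cmds.setAttr(shape + ".aiSubdivIterations", 2)  # iterations to taste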

A bit more photogrammetry by Xuan Prada

Just a few more screenshots and renders of the last photogrammetry stuff that I've been doing. All of these are part of some training that I'll be teaching soon. Get in touch if you want to know more about it.

Rendering OpenVDB in Clarisse by Xuan Prada

Clarisse is perfectly capable of rendering volumes while maintaining its flexible rendering options, like instances or scatterers. In this particular example I'm going to render a very simple smoke simulation.

Start by creating an IBL setup. Clarisse allows you to do it with just one click.

Using a couple of matte and chrome spheres will help to establish the desired lighting situation.

To import the volume simulation just go to import -> volume.

Clarisse will show you a basic representation of the volume in the viewport. Always real time.

To improve the visual representation of the volume in the viewport, just click on Progressive Rendering. Lighting will also affect the volume in the viewport.

Volumes are treated pretty much like geometry in Clarisse. You can render volumes with standard shaders if you wish.

The ideal situation, of course, would be using volume shaders for volume simulations.

In the material editor I'm going to use a utility -> extract property node to read any property embedded in the simulation. In this case I'm reading the temperature.

Finally I drive the temperature color with a gradient map.
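The remap itself is nothing Clarisse-specific. This little Python sketch, with a made-up black-to-yellow gradient, illustrates what the extract property + gradient combo is doing:

# Remap a normalised scalar field (temperature) to RGB through a gradient.
import numpy as np

positions = np.array([0.0, 0.5, 1.0])    # gradient key positions
colors = np.array([[0.0, 0.0, 0.0],      # black
                   [1.0, 0.1, 0.0],      # red-orange
                   [1.0, 0.9, 0.2]])     # yellow

def gradient_lookup(temperature):
    # Per-channel linear interpolation between the gradient keys.
    return np.stack([np.interp(temperature, positions, colors[:, c])
                     for c in range(3)], axis=-1)

print(gradient_lookup(np.array([0.0, 0.25, 1.0])))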

If you get a lot of noise in your renders, don't forget to increase the volume sampling of your light sources.

Final render.

More photogrammetry stuff by Xuan Prada

I'm generating content for a photogrammetry course that I'll be teaching soon. These are just a few images of that content. More to come soon, I'll be doing a lot of examples and exercises using photogrammetry for visual effects projects.

Rendering Maya particles in Clarisse by Xuan Prada

This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you check the tag "Clarisse" you will find all the previous posts. I hope to publish more soon.

In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons very fast and nicely.

  • Once your particle system has been simulated in Maya, export it via Alembic, one of the standard formats for exchanging 3D information in VFX (the Maya-side export is sketched after this list).
  • Create an IBL rig in Clarisse. In a previous post I explained how to do it; it is quite simple.
  • With Clarisse 2.0 it is so simple to do, just one click and you are ready to go.
  • Go to File -> Import -> Scene and select the Alembic file exported from Maya.
  • The scene comes with 2 types of particles, a grid acting as the ground, and the render camera.
  • Create a few contexts to keep everything tidy. Geo, particles, cameras and materials.
  • In the geo context I imported the toy_man and the toy_truck models (.obj) and moved the grid from the main context to the geo context.
  • I moved the 2 particle systems and the camera to their corresponding contexts.
  • In the materials context I created 2 materials and 2 color textures for the models. Very simple shaders and textures.
  • In the particles context I created a new scatterer called scatterer_typeA.
  • In the geometry support of the scatterer add the particles_typeA, and in the geometry section add the toy_man model.
  • I’m also adding some variation to the rotation.
  • If I move my timeline I will see the particle animation using the toy_man model.
  • Do not forget to assign the material created before.
  • Create another scatterer for the particles_typeB and configure the geometry support and the geometry to be used.
  • Add also some rotation and position variation.
  • As these models are quite big compared with the toy figurine, I’m offsetting the particle effect to reduce the presence of toy_trucks in the scene.
  • Before rendering, I’d like to add some motion blur to the scene. Go to raytracer -> Motion Blur -> 3D motion blur. Now you are ready to render the whole animation.
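For completeness, the Maya-side export from the first step looks roughly like this. Node names and paths are hypothetical, and the AbcExport plug-in must be available:

# Export the particle systems, the ground grid and the camera to one Alembic file.
import maya.cmds as cmds

cmds.loadPlugin("AbcExport", quiet=True)

job = ("-frameRange 1 120 "
       "-root |particles_typeA -root |particles_typeB "
       "-root |ground_grid -root |render_cam "
       "-file /path/to/particles.abc")
cmds.AbcExport(j=job)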