Clarisse, layers and passes by Xuan Prada

I will continue writing about my experiences working with Clarisse. This time I'm going to talk about working with layers and passes, a very common topic in the rendering world no matter what software you use.

Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition to that, we can composite all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.

  • For this tutorial I'm going to use a very simple scene. The goal is to create one render layer for each element of the scene. At the end of this article we will have the foreground, midground, background, floor and shadows isolated.
  • At this point I have an image with a 3DLayer containing all the elements of the scene.
  • I've created 3 different contexts for foreground, midground and background.
  • Inside each context I put the corresponding geometry.
  • Inside each context I created an empty image.
  • I created a 3DLayer for each image.
  • We need to indicate which camera and renderer should be used in each 3DLayer.
  • We also need to indicate which lights are going to be used in each layer.
  • At this point you have probably realized how powerful Clarisse can be for organization purposes.
  • In the background context I'm rendering both the sphere and the floor.
  • In the scene context I've created a new image. This image will be the recipient for all the other images created before.
  • In this case I'm not creating 3DLayers but Image Layers.
  • In the layers options select each one of the layers created before.
  • I put the background on the bottom and the foreground on the top.
  • We face the problem that only the sphere has working shadows. This is because there is no floor in the other contexts.
  • In order to fix this I moved the floor to another context called shadow_catcher.
  • I created a new 3DLayer where I selected the camera and renderer.
  • I created a group with the sphere, cube and cylinder.
  • I moved the group to the shadows parameter of the 3DLayer.
  • In the recipient image I place the shadows at the bottom. That's it, we have shadows working now.
  • Oh wait, not so fast. If you check the first image of this post you will realize that the cube is actually intersecting the floor. But in this render that is not happening at all. This is because the floor is not in the cube context acting as a matte object.
  • To fix this just create an instance of the floor in the cube context.
  • In the shading options of the floor I localize the parameters matte and alpha (RMB and click on localize).
  • Then I activated those options and set the alpha to 0%.
  • That's it, working perfectly.
  • At this point everything is working fine, but we have the floor and the shadows together. Maybe you would like to have them separated so you can tweak both of them independently.
  • To do this, I created a new context only with the floor.
  • In the shadows context I created a new "decal" material and assigned it to the floor.
  • In the decal material I activated receive illumination.
  • And finally I added the new image to the recipient image.
  • You can download the sample scene here.
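The stacking order in the recipient image follows the standard alpha-over compositing rule: each layer is placed over everything below it. A minimal per-pixel sketch of that operation (the RGBA values here are made up for illustration, not taken from the scene):

```python
# One premultiplied-alpha RGBA pixel per layer; values are hypothetical.
background = (0.2, 0.3, 0.8, 1.0)   # opaque sphere/floor
midground  = (0.0, 0.5, 0.0, 0.5)   # semi-transparent element
foreground = (0.9, 0.1, 0.1, 0.25)  # mostly transparent element

def over(fg, bg):
    """A over B with premultiplied alpha: C = A + B * (1 - alpha_A)."""
    return tuple(f + b * (1.0 - fg[3]) for f, b in zip(fg, bg))

# Stack bottom to top, exactly as the layers are ordered in the recipient image.
result = background
for layer in (midground, foreground):
    result = over(layer, result)
```

Because the background is opaque, the final alpha stays 1.0 no matter what is stacked on top of it.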

Image Based Lighting in Clarisse by Xuan Prada

I've been using Isotropix Clarisse in production for a little while now. Recently the VFX facility where I work announced the adoption of Clarisse as its primary look-dev and lighting tool, so I decided to start talking about this powerful raytracer on my blog.

Today I'm writing about how to set up Image Based Lighting.

  • We can start by creating a new context called ibl. We will put all the elements needed for ibl inside this context.
  • Now we need to create a sphere to use as "world" for the scene.
  • This sphere will be the support for the equirectangular HDRI texture.
  • I just increased the radius a lot. Keep in mind that this sphere will be covering all your assets inside of it.
  • In the image view tab we can see the render in real time.
  • Right now the sphere is lit by the default directional light.
  • Delete that light.
  • Create a new matte material. This material won't be affected by lighting.
  • Assign it to the sphere.
  • Once assigned the sphere will look black.
  • Create an image to load the HDRI texture.
  • Connect the texture to the color input of the matte shader.
  • Select the desired HDRI map in the texture path.
  • Change the projection type to "parametric".
  • HDRI textures are usually 32-bit linear images, so you need to indicate this in the texture properties.
  • I created two spheres to check the lighting. Just press "f" to fit them in the viewport.
  • I also created two standard materials, one for each sphere. I'm creating lighting checkers here.
  • And a plane, just to check the shadows.
  • If I go back to the image view, I can see that the HDRI is already affecting the spheres.
  • Right now, only the secondary rays are being affected, like the reflection.
  • In order to create proper lighting, we need to use a light called "gi_monte_carlo".
  • Right now the noise in the scene is insane. This is because of all the fine detail in the HDRI map.
  • The first thing to do to reduce noise is to change the interpolation of the texture to mipmapping.
  • To have a noise free image we will have to increase the sampling quality of the "gi_monte_carlo" light.
  • Noise reduction can also be managed with the anti-aliasing sampling of the raytracer.
  • The most common approach is to combine raytracer sampling, lighting sampling and shading sampling.
  • Around 8 raytracing samples and around 12 lighting samples are common settings in production.
  • There is another method to do IBL in Clarisse without the cost of GI.
  • Delete the "gi_monte_carlo" light.
  • Create an "ambient_occlusion" light.
  • Connect the HDRI texture to the color input.
  • In the render only the secondary rays are affected.
  • Select the environment sphere and deactivate the "cast shadows" option.
  • Now everything works fine.
  • To clean the noise increase the sampling of the "ambient_occlusion" light.
  • This is a cheaper IBL method.
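Both the "parametric" projection on the environment sphere and the equirectangular HDRI rely on the same latitude/longitude mapping from a direction vector to texture coordinates. A rough sketch of that mapping (the axis conventions here are an assumption; Clarisse's internal convention may differ):

```python
import math

def direction_to_latlong_uv(x, y, z):
    """Map a normalized direction (y up) to equirectangular (u, v) in [0, 1]."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude around the sphere
    v = 0.5 - math.asin(y) / math.pi               # latitude from pole to pole
    return u, v

# Looking straight ahead lands in the middle of the map;
# looking straight up lands on the top row.
front = direction_to_latlong_uv(0.0, 0.0, -1.0)
up = direction_to_latlong_uv(0.0, 1.0, 0.0)
```

This is why a single 2:1 panoramic image can wrap the whole sphere: every direction has exactly one (u, v).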

Colorway in VFX - chapter 2 by Xuan Prada

A few days ago I did my first tests in Colorway. My idea is to use Colorway as texturing and look-development tool for VFX projects.

I think it can be a really powerful and artist-friendly piece of software for working on different types of assets.
It is also a great tool to present individual assets, because you can do quick and simple post-processing tasks like color correction, lens effects, etc. And of course Colorway allows you to create different variations of the same asset in no time.

With this second test I wanted to create an entire asset for VFX, make different variations and put everything together in a dailies template or similar to showcase the work.

At the end of the day I'm quite happy with the result and the workflow combining Modo, Mari and Colorway. I found some limitations, but I truly believe that Colorway will soon fit my needs as a texture painter and look-dev artist.

Transferring textures

One of the limitations that I found as a texture painter is that Colorway doesn't handle UDIMs yet. I textured this character some time ago at home using Mari, following VFX standards and of course using UDIMs, actually something around 50 4k UDIMs.

I had to create a new UV mapping using the 1001 UDIM only. In order to keep enough texture resolution I divided the asset into different parts: head, both arms, both legs, pelvis and torso.
Then, using the great "transfer" tool in Mari, I baked the high resolution textures based on UDIMs onto the low resolution UVs in a single UV space. I created one 8k resolution texture for each part of the asset. I'm using only 3 texture channels: Color, Specular and Bump.
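Colorway's single-tile limitation is easier to reason about if you keep the UDIM numbering convention in mind: tile 1001 sits at the UV origin and each row of ten tiles steps up one unit in V. A quick sketch of the convention:

```python
def udim_to_offset(tile):
    """Return the integer (u, v) offset of a UDIM tile; 1001 is the origin."""
    index = tile - 1001
    return index % 10, index // 10

def offset_to_udim(u_off, v_off):
    """Inverse mapping: integer UV offset back to the UDIM tile number."""
    return 1001 + u_off + 10 * v_off
```

Baking everything down to 1001 simply means all geometry ends up at offset (0, 0), one texture per asset part instead of one per tile.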

Layer Transfer tool in Mari.

All the new textures already baked in to the default UV space 1001

My lighting setup in Modo couldn't be more simple. I'm just using an Equirectangular HDRI map of Beverly Hills. This image is actually shipped with Modo.
Image Based Lighting works great in Modo and it is also very easy to mix different IBLs in the same scene. It just works.

Shading-wise it is also quite simple: just one shading layer with Color, Specular and Bump maps connected. I'm using one shader for each part of the asset.

The render takes only around 3 minutes on my tiny MacBook Air.
Rendering for Colorway takes more than that but obviously you will save a lot of time later.
Once in Colorway I can easily play with colours and textures. I created a color texture variation in Mari and now in Colorway I can plug it and see the shading changes in no time.

All the different parts exported from Modo are on the left side toolbar.

On the right side all the lights will be available to play with. In this case I only have the IBL.

All the materials are listed on the right side. It is possible to change color, intensity and diffuse textures. This gives you a huge amount of freedom to create different variations of the same asset.

I really like the possibility of using post-processing effects like lens distortion or dispersion. You get quick visual feedback on very common lens effects used in VFX projects.

Finally I created a couple of color variations for this asset.

Notes

A couple of things that I noticed while working on this asset:

  • I had one part of the asset with the normals flipped. I didn't realize this, and when rendering for Colorway, Modo crashed. Once I inverted the normals of that part, it never crashed again.
  • It would be nice to store looks, or have the option to export looks from one project to another. Let's say that I'm working only on the upper part of the character, render for Colorway and create some nice looks (including effects like lens distortion, color corrections, etc.). It would be great to keep that for the next time I export the whole character to Colorway.

New akromatic lighting checkers by Xuan Prada

News from akromatic.

"Based on the feedback and requirements of some VFX Facilities, we decided to release a new flavour of our calibrated paint.

Some Look-Development Artists prefer to use grey balls with higher specular components and other Artists are more comfortable using less shiny spheres.
It is a matter of personal preference, so let us know which one is your flavour.

Original spheres: Gloss average around 30%
New spheres: Gloss average around 18%
Both of them are calibrated as Neutral Greys and hand painted."

New grey sphere, half hit by the sun, half in shade.

New grey flavour, close up. Soft lighting transition.

The mirror side remains the same. Carefully polished by hand.

Mirror side, close up.

All the information here.

Retopology tools in Modo by Xuan Prada

A few months ago I wrote a post about retopology tools in Maya. I'm not using those tools anymore, now I deal with retopology using Modo.
I'm doing a lot of retopo these days working with 3D scanners and decimated Zbrush models coming from the art department.

Pretty much all the 3D packages these days have similar retopology tools, but working with Modo I feel more freedom and I'm more comfortable doing this kind of task.

These are the tools that I usually use.

  • Before starting I like to set the 3D scan as a "static mesh". This hides all the item's components, making the process much easier.
  • Pen tool: I use this tool to draw my first polygon. That's it, after drawing the first poly face I drop the tool and don't use it anymore.
  • The type of geo should be polygon and make sure the option "make quads" is activated.
  • As I said, I draw just one face and drop the tool.

To carry on with retopology I use the "topology pen tool" which combines all the other retopology options. I use this tool to make 90% of the work.

These are some of its options.

  • LMB: Move vertex, edges and faces.
  • Shift+LMB & drag edges: Extrude edges.
  • Shift+LMB & drag points: Create faces.
  • Shift+RMB & drag edges: Extrude edge loops.
  • RMB & drag edges: Move edge loops.
  • CTRL+MMB: Delete components (faces, edges and vertices).
  • Inner snap: This option allows you to weld interior vertices.
  • Sculpt -> smooth: Allows you to relax the geometry, very useful to get a better distribution of the edge loops.
  • If you work with symmetry you probably want to align the middle points to the center of the world.
  • In order to do so, select all of them and go to vertex -> set position.
  • Then you can assign a common value to all of them.
  • To create the mirror press "shift+v".
  • To merge both parts just select both items, RMB and click on merge meshes.
  • Once merged just select the points and go to vertex -> merge to weld them.
  • Topology sketch tool: Allows you to draw polygons very quickly.
  • Contour tool: Allows you to draw curves that will be connected using the bridge option. Very useful for cylinder-like parts such as arms or legs.
  • Obviously if you draw more curves you will get more resolution to match the 3D scanner.
  • You can create a very quick geometry and then add resolution using the "topology pen tool".
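The symmetry steps above (set the middle vertices to the center, mirror, then merge and weld) come down to simple vertex operations. A sketch on plain vertex lists, with hypothetical point data:

```python
def snap_to_center(points, axis=0, tolerance=1e-3):
    """Set near-zero coordinates on the mirror axis exactly to 0
    (the effect of vertex -> set position on the middle points)."""
    return [tuple(0.0 if i == axis and abs(c) < tolerance else c
                  for i, c in enumerate(p)) for p in points]

def weld(points, tolerance=1e-6):
    """Collapse vertices that share a position (the effect of vertex -> merge)."""
    seen, welded = set(), []
    for p in points:
        key = tuple(round(c / tolerance) for c in p)
        if key not in seen:
            seen.add(key)
            welded.append(p)
    return welded

# Hypothetical half-mesh: two slightly-off center points and one side point.
mesh = [(0.0004, 1.0, 0.0), (-0.0002, 2.0, 0.0), (1.5, 1.0, 0.0)]
mesh = snap_to_center(mesh)
mirrored = mesh + [(-x, y, z) for x, y, z in mesh]   # shift+v style mirror
mesh = weld(mirrored)   # the center vertices collapse: 6 points -> 4
```

Snapping to exactly x = 0 first is what lets the weld find the duplicated center vertices at all.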

Colorway for Look-Development in VFX by Xuan Prada

A few days ago (or weeks) The Foundry released their latest cool product called "Colorway", and they did it for free.

Colorway is a product created to help designers with their workflow, especially when dealing with color changes, texture updates, lighting, etc. Looks in general.
This software allows us to change those small things once the render is done. We can do it in real time, without waiting long hours to render again, and we can change different things related to shading and lighting.

This is obviously quite an advantage when we are dealing with clients and they ask us for small changes related to color, saturation, brightness, etc. We don't need to render again, just use Colorway to make those changes live in no time.
Even the clients can change some stuff and send us back a file with their changes.

Great idea, great product.

I'm not a designer, I'm a VFX artist doing mainly textures and look-development, and even if Colorway wasn't designed for VFX, it can potentially be used in the VFX industry, at least for some tasks.

There are a few things that I'd like to have inside Colorway in order to be a more productive texturing&look-dev tool, but so far it can be used in some ways to create different versions of the same asset.

To test Colorway I used my model of War Machine.

  • Colorway allows us to render an asset using a base shader. Later we can apply different versions of the same textures, or just flat colors.
  • It all begins inside Modo (Cinema4D support is on its way).
  • It's very important how you organize your asset and shaders inside Modo. If you want to have a lot of control in Colorway you will have to split your scene in different parts.
  • In this example, I separated the head in different parts, so I can select them individually later on in Colorway.
  • Even if I'm using the same shader for the whole head, I made different copies so I can tweak them one by one if I want to have even more control in Colorway.
  • In Modo work on the look as you usually do. Once you are happy with the results export to Colorway.
  • In this case I'm using textures to create the look. Maybe you can do it without textures and apply them later in Colorway. You will also be able to remove all the textures in Colorway and start from scratch there. This is a matter of personal taste.
  • Once happy just click on the Colorway button.
  • You can export all the materials and lights used in the scene or only those selected.
  • Click on the render button and that's it.
  • Once the render is done, just open the file exported from Modo and Colorway should pop up.
  • The workspace is super simple and well organized. There are selection groups and looks on the right and shaders, lighting and effects on the left.
  • Just select one of the parts on your left, one of the shaders on your right, or simply select in the viewport.
  • Automatically the controls for the material will pop up.
  • In the material options you can change the textures used by the shaders, or remove them if you want to start with a flat color.
  • I'm changing here the textures for just one of the materials, and later for all of them, creating a new version of my asset.
  • As I said before we can remove all the textures and use only the base shaders plus flat colors in order to create a new version of the asset.
  • Finally the versions that I created for this post :)

A few things that I'd like to see in Colorway in future versions in order to have more control and power for look-dev tasks.

  • Right now we can only change RGB textures. It would be nice to have control over secondary maps. Blending textures with masks would be also great.
  • We can't control the shaders parameters. Having that control for look-dev would be amazing.
  • UDIM support is a must.
  • Not sure how Colorway manages IBLs. If you are using different lights it seems to be fine, but when using only an IBL it doesn't seem to work properly.
  • Transparency, glow and other shading options don't work in the current version.

Skateboard screenshots by Xuan Prada

Just a few screenshots of my process working on the skateboard images that I posted a few days ago.

  • Jeans modelling in Zbrush.
  • Decimated model exported from Zbrush ready to be reconstructed in Modo.
  • Retopology process in Modo using the topology workspace.
  • UV mapping in Modo.
  • Final models in Modo.
  • Texture work in Mari.
  • Look-dev in Maya and Arnold.
  • Lighting blocking.
  • Final renders.

Texture bleeding in Mari by Xuan Prada

Sometimes Mari seems to have small issues with texture bleeding.
I just realized that sometimes the bleeding doesn't happen at all. If you find yourself with this problem, the best solution is probably to force Mari to do the texture bleeding.
Only 2 steps are needed.

  • Click on "Toggles on/off whole patch project".
  • Now select the patch or patches, right click on top and select "Bleed patch edges".
  • This should be enough to fix pretty much all the texture bleeding issues inside Mari.

Upcoming VFX films by Xuan Prada

2014

  • Deliver Us From Evil  07/02/2014
  • Dawn of the Planet of the Apes  07/11/2014
  • I Origins  07/18/2014
  • Mood Indigo  07/18/2014
  • Hercules  07/25/2014
  • Lucy  07/25/2014
  • Guardians of the Galaxy  08/01/2014
  • Into the Storm  08/08/2014
  • Teenage Mutant Ninja Turtles  08/08/2014
  • James Cameron's Deepsea Challenge 3D  08/08/2014
  • The Giver  08/15/2014
  • As Above, So Below  08/15/2014
  • Ragnarok  08/15/2014
  • Sin City: A Dame to Kill For  08/22/2014
  • The Congress  08/29/2014
  • The Zero Theorem  09/19/2014
  • The Maze Runner  09/19/2014
  • The Boxtrolls  09/26/2014
  • Gone Girl  10/03/2014
  • Left Behind  10/03/2014
  • The Interview  10/10/2014
  • Birdman  10/17/2014
  • Dracula Untold  10/17/2014
  • Kingsman: The Secret Service  10/24/2014
  • Horns  10/31/2014
  • Interstellar  11/07/2014
  • Fury  11/14/2014
  • The Hunger Games: Mockingjay - Part I  11/21/2014
  • The Imitation Game  11/21/2014
  • The Pyramid  12/09/2014
  • Exodus: Gods and Kings  12/12/2014
  • The Hobbit: The Battle of the Five Armies  12/17/2014
  • Annie  12/19/2014
  • Night at the Museum: Secret of the Tomb  12/19/2014
  • Into the Woods  12/25/2014
  • Paddington  12/25/2014
  • Unbroken  12/25/2014
  • Harbinger Down  2014
  • Space Station 76  2014

2015

  • Kitchen Sink  01/09/2015
  • Inherent Vice  01/09/2015
  • The Man From U.N.C.L.E.  01/16/2015
  • Cyber  01/16/2015
  • Ex Machina  01/23/2015
  • Black Sea  01/23/2015
  • Seventh Son  02/06/2015
  • Jupiter Ascending  02/06/2015
  • Poltergeist  02/13/2015
  • Selfless  02/27/2015
  • Chappie  03/06/2015
  • Heart of the Sea  03/13/2015
  • Cinderella  03/15/2015
  • Insurgent  03/20/2015
  • Fast and Furious 7  04/10/2015
  • Avengers: Age of Ultron  05/01/2015
  • Mad Max: Fury Road  05/15/2015
  • Pixels  05/15/2015
  • Tomorrowland  05/22/2015
  • Untitled Cameron Crowe Project  05/29/2015
  • San Andreas  06/05/2015
  • Jurassic World  06/12/2015
  • The Fantastic Four  06/19/2015
  • Ted 2  06/26/2015
  • Terminator: Genesis  07/01/2015
  • Pan  07/17/2015
  • Ant-Man  07/17/2015
  • Victor Frankenstein  10/02/2015
  • The Walk  10/02/2015
  • The Jungle Book  10/09/2015
  • Crimson Peak  10/16/2015
  • The Hunger Games: Mockingjay - Part 2  11/20/2015
  • Star Wars: Episode VII  12/18/2015
  • The Lobster  2015

2016

  • Gods of Egypt  02/12/2016
  • Warcraft  03/11/2016
  • Batman v Superman: Dawn of Justice  05/06/2016
  • Captain America 3  05/06/2016
  • The Sinister Six  11/11/2016
  • Avatar 2  December 2016


Quick Lidar processing by Xuan Prada

Processing Lidar scans to be used in production is a very tedious task, especially when working on big environments, generating huge point clouds with millions of polygons that are very heavy to move around in any 3D viewport.

To clean those point clouds, the best tools are usually the ones that the 3D scanner manufacturers ship with their products. But sometimes they are quite complex and not artist friendly.
Also, most of the time we receive the Lidar from the on-set crew and we don't have access to those tools, so we have to use mainstream software to deal with this task.

If we are talking about a very complex Lidar, we will have to spend a good amount of time cleaning it. But if we are dealing with a simple Lidar of small environments, props or characters, we can clean it quite easily using MeshLab or Zbrush.

  • Import your Lidar in MeshLab. It can read the most common Lidar formats.
  • This Lidar has around 30 M polys. If we zoom in we can see how good it looks.
  • The best option to reduce the amount of geo is called Remeshing, Simplification and Reconstruction -> Quadric Edge Collapse Decimation.
  • We can play with Percentage reduction. If we use 0.5 the mesh will be reduced to 50% and so on.
  • After a few minutes (so fast) we will get the new geo reduced down to 3 M polys.
  • Then you can export it as .obj and open it in any other program, in this case Nuke.
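The Percentage reduction field is just the ratio of target to source polygon counts, and chained passes multiply together. A quick sketch using the counts from this example:

```python
source_polys = 30_000_000   # raw Lidar mesh from this example
target_polys = 3_000_000    # what we want to hand over to other departments

# Value to type into the percentage/ratio field for a single pass.
percentage = target_polys / source_polys

# Reductions compound when chained: two 0.5 passes keep only 25% of the polys.
after_two_half_passes = source_polys * 0.5 * 0.5
```

So one pass at 0.1 and two chained passes at roughly 0.316 each land on the same count; a single pass is simpler and avoids accumulating decimation artifacts.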

Another alternative to MeshLab is Zbrush. But the problem with Zbrush is its memory limitation: Lidar scans are very big point clouds and Zbrush doesn't manage memory very well.
But you can combine MeshLab and Zbrush to process your Lidars.

  • Try to import your Lidar in Zbrush. If you get an error, try this.
  • Open Zbrush as Administrator, and then increase the amount of memory used by the software.
  • I’m importing now a Lidar processed in MeshLab with 3 M polys.
  • Go to Zplugin -> Decimation Master to reduce the number of polys. Just introduce a value in the percentage field. This will generate a new model based on that value against the original mesh.
  • Then click on Pre-Process Current. This will take a while, depending on how complex the Lidar is and on your computer's capabilities.
  • Once finished click on Decimate Current.
  • Once finished you will get a new mesh with 10% of the original mesh's polygons.

Animated HDRI with Red Epic and GoPro by Xuan Prada

Not too long ago, we needed to create a light rig to light a very reflective character, something like a robot made of chrome. This robot is placed in a real environment with a lot of practical lights, and these lights are changing all the time.
The robot will be created in 3D and we need to integrate it into the real environment, and as I said, all the lights will be changing intensity and temperature, some of them flickering all the time and very quickly.

And we are talking about a long sequence without cuts, that means we can’t cheat as much as we’d like.
In this situation we can’t use standard equirectangular HDRIs. They won’t be good enough to light the character, as the lighting changes will not be covered by a single panoramic image.

Spheron

The best solution for this case is probably the Spheron. If you can afford it or rent it in time, this is your tool. You can get awesome HDRI animations to solve this problem.
But we couldn’t get it in time, so this was not an option for us.

Then we thought about shooting HDRIs as usual, one equirectangular panorama for each lighting condition. It worked for some shots, but in others where the lights were changing very fast and blinking, we needed to capture live action video. Tricks animating the transition between different HDRIs wouldn’t be good enough.
So the next step would be to capture HDRI video with different exposures to create our equirectangular maps.

The regular method


The fastest solution would be to use our regular rigs (Canon 5D Mark III and Nikon D800) mounted on a custom base supporting 3 cameras with 3 fisheye lenses. Their views have to overlap by around 33%.
With this rig we should be able to capture the whole environment while recording with a steady cam, just walking around the set.
But obviously those cameras can’t record true HDR. They always record h264 or some other compressed video format, and of course we can’t bracket video with those cameras.

Red Epic

To solve the .RAW video and the multi-bracketing we ended up using Red Epic cameras. But using 3 cameras plus 3 lenses is quite expensive for on-set survey work, and it is also quite a heavy rig to walk around a big set with.
Finally we used only one Red Epic with an 18mm lens mounted on a steadicam, and on the other side of the arm we placed a big akromatic chrome ball. With this ball we can get around 200-240 degrees of coverage, even more than using a fisheye lens.
Obviously we will get some distortion on the sides of the panorama, but honestly, have you ever seen a perfect equirectangular panorama for 3D lighting being used in a post house?

With the Epic we shot .RAW video at 5 brackets, recording the akromatic ball all the time while just walking around the set. The final resolution was 4k.
We imported the footage into Nuke and converted it using a simple spherical transform node to create true HDR equirectangular panoramas. Finally we combined all the exposures.
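Combining the 5 brackets into a single HDR value boils down to a weighted average of each exposure after dividing out its shutter time. A simplified single-pixel sketch (the hat-shaped weight function and the sample values are assumptions for illustration, not our actual Nuke setup):

```python
def merge_brackets(samples):
    """samples: list of (pixel_value in [0, 1], exposure_time in seconds).
    Returns an estimate of linear scene radiance."""
    def weight(v):
        # Trust mid-tones; distrust clipped highlights and noisy shadows.
        return max(0.0, 1.0 - abs(2.0 * v - 1.0))
    num = sum(weight(v) * (v / t) for v, t in samples)
    den = sum(weight(v) for v, t in samples)
    return num / den if den else 0.0

# Five hypothetical brackets of the same pixel, one stop apart.
radiance = merge_brackets([(0.04, 1 / 8), (0.08, 1 / 4), (0.16, 1 / 2),
                           (0.32, 1.0), (0.64, 2.0)])
```

When no bracket is clipped, every sample agrees on the radiance after normalization, which is exactly why bracketed footage can reconstruct the full dynamic range.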

With this simple setup we worked really fast and efficiently. Precision was accurate in reflections and lighting, and the render time was ridiculous.
Can’t show any of this footage now but I’ll do it soon.

GoPro

We had a few days to make tests while the set was being built. Some parts of the set were quite inaccessible for a tall person like me.
In the early days of set construction we didn’t have the full rig with us, but we wanted to make quick tests, capture footage and send it back to the studio, so lighting artists could build some Nuke templates to process all the information later on while shooting with the Epic.

We did a few tests with the GoPro hero 3 Black Edition.
This little camera is great, light and versatile. Of course we can’t shoot .RAW, but at least it has a flat colour profile and can shoot at 4k resolution. You can also control the white balance and the exposure. Good enough for our tests.

We used an akromatic chrome ball mounted on an akromatic base, and on the other side we mounted the GoPro using a Joby support.
We shot using the same methodology that we developed for the Epic. Everything worked like a charm, getting nice panoramas for previs and testing purposes.

It also was fun to shoot with quite an unusual rig, and it helped us get used to the set and to create all the Nuke templates.
We also did some render tests with the final panoramas and the results were not bad at all. Obviously these panoramas are not true HDR but for some indie projects or low budget projects this would be an option.

Footage captured using a GoPro and akromatic kit

In this case I’m reflected in the center of the ball, and this doesn’t help to get the best image. The key here is to use a steadicam to reduce this problem.

Nuke

Nuke work is very simple here, just use a spherical transform node to convert the footage to equirectangular panoramas.
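Under the hood, that conversion is just reflection math: each pixel of the mirror-ball image corresponds to a reflected view direction, which then indexes the equirectangular image. A sketch of the core step (the orthographic-ball and axis conventions are assumptions; the Nuke node handles these details internally):

```python
import math

def ball_pixel_to_direction(px, py):
    """px, py in [-1, 1] across the mirror-ball image.
    Returns the reflected world direction, or None outside the ball."""
    r2 = px * px + py * py
    if r2 > 1.0:
        return None
    nx, ny, nz = px, py, math.sqrt(1.0 - r2)   # ball surface normal
    # Reflect the incoming view ray v = (0, 0, -1) about the normal:
    # d = v - 2 (v . n) n, which simplifies to the expression below.
    d = 2.0 * nz
    return (d * nx, d * ny, d * nz - 1.0)
```

The center of the ball reflects straight back at the camera, while the rim reflects almost directly behind the ball, which is why a chrome sphere captures nearly the full environment in one frame.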

Final results using GoPro + akromatic kit

Few images of the kit

Nikon D800 bracketing without remote shutter by Xuan Prada

I don’t know how I came to this setting in my Nikon D800 but it’s just great and can save your life if you can’t use a remote shutter.

The thing is that a few days ago the connector where I plug my shutter release fell apart. And you know that shooting brackets or multiple exposures is almost impossible without a remote trigger. If you press the shutter button without a release trigger you will get vibration or movement between brackets, which will end up causing ghosting problems.

With my remote trigger connection broken I only had the option of taking my camera body to the Nikon repair centre, but my previous experiences there were too bad and I knew I would lose my camera for a month. The other option would be to buy the great CamRanger, but I couldn’t find it in London and couldn’t wait for it to be delivered.

On the other hand, I found on the internet that a lot of Nikon D800 users have the same problem with this connector, so maybe this is a problem related to the construction of the camera.

The good thing is that I found a way to bracket without using a remote shutter, just pushing the shutter button once, at the beginning of the multiple exposures. You need to activate one hidden option in your D800.

  • First of all, activate your brackets.
  • Turn on the automatic shutter option.
  • In the menu, go to the timer section, then to self timer. There go to self timer delay and set the time for the automatic shutter.

Just below the self timer option there is another setting called number of shots. This is the key setting: if you put a 2 there, the camera will shoot all the brackets after pressing the shutter release just once.
If you have activated the delay shutter option, you will get perfect exposures without any kind of vibration or movement.

Finally you can set the interval between shots, 0.5s is more than enough because you won’t be moving the camera/tripod between exposures.
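The exposure ladder the camera fires is easy to reason about: each bracket sits a fixed number of stops from the base shutter speed, and every stop doubles or halves the exposure time. A quick sketch (the base exposure and EV step here are assumptions, not prescribed settings):

```python
def bracket_times(base_shutter, ev_step, count):
    """Shutter times (seconds) for `count` brackets centred on
    base_shutter, spaced ev_step stops apart."""
    half = (count - 1) / 2.0
    return [base_shutter * 2.0 ** ((i - half) * ev_step) for i in range(count)]

# e.g. 5 brackets, 2 stops apart, centred on 1/60 s
times = bracket_times(1.0 / 60.0, 2.0, 5)
```

For a 5-frame, 2-stop bracket the longest exposure is 256 times the shortest, which is the dynamic-range spread the merge step later relies on.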

And that’s all that you need to capture multiple brackets with your Nikon D800 without a remote shutter.
This saved my life while shooting for akromatic.com the other day :)