Quick Lidar processing by Xuan Prada

Processing Lidar scans for production use is a very tedious task, especially when working on big environments that generate huge point clouds with millions of polygons, which are painful to move around in any 3D viewport.

To clean those point clouds, the best tools are usually the ones that the 3D scanner manufacturers ship with their products. But sometimes they are quite complex and not artist friendly.
Also, most of the time we receive the Lidar from on-set workers and we don't have access to those tools, so we have to use mainstream software to deal with this task.

If we are talking about a very complex Lidar, we will have to spend a good amount of time cleaning it. But if we are dealing with simple Lidar scans of small environments, props or characters, we can clean them quite easily using MeshLab or Zbrush.

  • Import your Lidar in MeshLab. It can read the most common Lidar formats.
  • This Lidar has around 30 M polys. If we zoom in we can see how good it looks.
  • The best option to reduce the amount of geo is Remeshing, Simplification and Reconstruction -> Quadric Edge Collapse Decimation.
  • We can play with the Percentage reduction. If we use 0.5 the mesh will be reduced to 50%, and so on.
  • After a few minutes (so fast) we will get the new geo reduced down to 3 M polys.
  • Then you can export it as .obj and open it in any other program, in this case Nuke.
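The steps above can also be scripted. A minimal sketch, assuming the pymeshlab Python bindings are installed (the filter name follows recent pymeshlab releases; paths and values are illustrative):

```python
def decimate_lidar(in_path, out_path, target_percentage=0.1):
    """Quadric Edge Collapse Decimation, as in the MeshLab walkthrough above."""
    # pymeshlab is assumed to be available (pip install pymeshlab).
    import pymeshlab
    ms = pymeshlab.MeshSet()
    ms.load_new_mesh(in_path)
    ms.meshing_decimation_quadric_edge_collapse(targetperc=target_percentage)
    ms.save_current_mesh(out_path)

def reduced_poly_count(original_polys, percentage):
    """What the Percentage reduction field does: 30 M polys at 0.1 -> 3 M polys."""
    return int(original_polys * percentage)
```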

Another alternative to MeshLab is Zbrush, but the problem with Zbrush is its memory limitation. Lidar scans are very big point clouds and Zbrush doesn't manage memory very well.
But you can combine MeshLab and Zbrush to process your Lidars.

  • Try to import your Lidar in Zbrush. If you get an error, try this:
  • Open Zbrush as Administrator, and then increase the amount of memory used by the software.
  • I’m importing now a Lidar processed in MeshLab with 3 M polys.
  • Go to Zplugin -> Decimation Master to reduce the number of polys. Just enter a value in the percentage field. This will generate a new model at that percentage of the original mesh.
  • Then click on Pre-Process Current. This will take a while depending on how complex the Lidar is and on your computer's capabilities.
  • Once finished, click on Decimate Current.
  • When it's done you will get a new mesh with 10% of the original mesh's polys.

Animated HDRI with Red Epic and GoPro by Xuan Prada

Not too long ago, we needed to create a light rig to light a very reflective character, something like a robot made of chrome. This robot is placed in a real environment with a lot of practical lights, and these lights are changing all the time.
The robot will be created in 3D and we need to integrate it into the real environment, and as I said, all the lights will be changing intensity and temperature, some of them flickering very quickly all the time.

And we are talking about a long sequence without cuts, which means we can't cheat as much as we'd like.
In this situation we can't use standard equirectangular HDRIs. They won't be good enough to light the character, as the lighting changes can't be covered by a single panoramic image.


The best solution for this case is probably the Spheron. If you can afford it, or rent it in time, this is your tool: you can get awesome HDRI animations to solve this problem.
But we couldn’t get it on time, so this is not an option for us.

Then we thought about shooting HDRIs as usual, one equirectangular panorama for each lighting condition. It worked for some shots, but in others, where the lights were changing very fast and blinking, we needed to capture live-action video. Tricks animating transitions between different HDRIs wouldn't be good enough.
So the next step would be to capture HDR video at different exposures to create our equirectangular maps.

The regular method


The fastest solution would be to use our regular rigs (Canon 5D Mark III and Nikon D800) mounted on a custom base supporting 3 cameras with 3 fisheye lenses, overlapped by around 33%.
With this rig we should be able to capture the whole environment while recording with a steadicam, just walking around the set.
But obviously those cameras can't record true HDR. They always record h264 or some other compressed video, and of course we can't bracket video with those cameras.

Red Epic

To solve the .RAW video and multi-bracketing we ended up using Red Epic cameras. But using 3 cameras plus 3 lenses is quite expensive for on-set survey work, and also quite a heavy rig to walk around a big set with.
Finally we used only one Red Epic with an 18mm lens mounted on a steadicam, and on the other side of the arm we placed a big akromatic chrome ball. With this ball we can cover around 200-240 degrees, even more than with a fisheye lens.
Obviously we will get some distortion on the sides of the panorama, but honestly, have you ever seen a perfect equirectangular panorama for 3D lighting being used in a post house?

With the Epic we shot .RAW video at 5 brackets, recording the akromatic ball all the time while just walking around the set. The final resolution was 4K.
We imported the footage into Nuke and converted it using a simple spherical transform node to create true HDR equirectangular panoramas. Finally we combined all the exposures.
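Combining the bracketed exposures follows the usual weighted radiance merge. A minimal NumPy sketch of the idea, where the hat-shaped weighting and the epsilon are my own illustrative choices, not the exact Nuke setup we used:

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Merge bracketed LDR frames (values in 0..1) into one HDR radiance image.

    images: list of float arrays of identical shape, one per bracket.
    exposure_times: shutter time of each bracket, in seconds.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    weights = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tones, distrust near-black and near-white pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)  # radiance estimate from this bracket
        weights += w
    return acc / np.maximum(weights, 1e-6)
```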

With this simple setup we worked really fast and efficiently. Reflections and lighting were accurate, and the render time was ridiculously low.
I can't show any of this footage now, but I'll do it soon.


We had a few days to make tests while the set was being built. Some parts of the set were quite inaccessible, even for a tall person like me.
In the early days of set construction we didn't have the full rig with us, but we wanted to make quick tests, capture footage and send it back to the studio, so the lighting artists could build Nuke templates to process all the information later on, while we were shooting with the Epic.

We did a few tests with the GoPro Hero 3 Black Edition.
This little camera is great, light and versatile. Of course we can't shoot .RAW, but at least it has a flat colour profile and can shoot at 4K resolution. You can also control the white balance and the exposure. Good enough for our tests.

We used an akromatic chrome ball mounted on an akromatic base, and on the other side we mounted the GoPro using a Joby support.
We shot using the same methodology that we developed for the Epic. Everything worked like a charm, and we got nice panoramas for previs and testing purposes.

It was also fun to shoot with quite an unusual rig, and it helped us get used to the set and create all the Nuke templates.
We also did some render tests with the final panoramas and the results were not bad at all. Obviously these panoramas are not true HDR but for some indie projects or low budget projects this would be an option.

Footage captured using a GoPro and akromatic kit

In this case I'm reflected in the centre of the ball, which doesn't help to get the best image. The key here is to use a steadicam to reduce this problem.


The Nuke work is very simple here: just use a spherical transform node to convert the footage to equirectangular panoramas.

Final results using GoPro + akromatic kit

A few images of the kit

Nikon D800 bracketing without remote shutter by Xuan Prada

I don't know how I came across this setting on my Nikon D800, but it's just great and can save your life if you can't use a remote shutter.

The thing is that a few days ago the connector where I plug in my shutter release fell apart. And you know that shooting brackets or multiple exposures is almost impossible without a remote trigger: if you press the shutter button without one, you will get vibration or movement between brackets, which ends up causing ghosting problems.

With my remote trigger connection broken, my only option was to take the camera body to the Nikon repair centre, but my previous experiences there were too bad and I knew I would lose my camera for a month. The other option would be to buy the great CamRanger, but I couldn't find it in London and couldn't wait for it to be delivered.

On the other hand, I found on the internet that a lot of Nikon D800 users have the same problem with this connector, so maybe it's an issue related to the construction of the camera.

The good thing is that I found a way to bracket without using a remote shutter, just pressing the shutter button once at the beginning of the multiple exposures. You need to activate one hidden option on your D800.

  • First of all, activate your brackets.
  • Turn on the automatic shutter option.
  • In the menu, go to the timer section, then to self timer. There go to self timer delay and set the time for the automatic shutter.

Just below the self-timer option there is another setting called number of shots. This is the key setting: if you put a 2 there, the camera will shoot all the brackets after pressing the shutter release just once.
If you have activated the delayed shutter option, you will get perfect exposures without any kind of vibration or movement.

Finally you can set the interval between shots, 0.5s is more than enough because you won’t be moving the camera/tripod between exposures.

And that’s all that you need to capture multiple brackets with your Nikon D800 without a remote shutter.
This saved my life while shooting for akromatic.com the other day :)

Mari to Modo with just one click by Xuan Prada

UDIM workflow has been around for the last 10 years or so. It became more popular when Mari came out and these days it’s being used by everyone in the vfx industry.

In this blog you can find different ways to setup UDIMs in different software and render engines.
With Modo 801 it has never been so easy, fast and great!
With just one click you are ready to go!

  • Export your textures from Mari. I always use the naming "component_UDIM.exr", e.g. "RGB_1001.exr".
  • Once in Modo, assign a new shader to your asset.
  • Add a new layer with a texture map, as usual. Add layer -> image map -> load udims.
  • Select the UDIM sequence that you exported from Mari.
  • Change the “effect” to point to the desired shader channel.
  • By default Modo enables the option “use clip udim”. You can check this in the “uv” properties. This means that you don’t need to do anything, Modo will handle the UDIM stuff by itself.
  • That’s it, all done :)
  • As an extra, you can go to the image manager, select one single map and check the UDIM coordinate.
  • Another cool thing is that you can select the whole UDIM sequence in the image manager and change the colour space with one single click! This is great if you are working with a linear workflow or another colour space.
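Under the hood, a UDIM number simply encodes a UV tile. A small sketch of the standard convention (UDIM 1001 is tile u=0, v=0, increasing by 1 per tile in U and by 10 per row in V):

```python
def udim_to_tile(udim):
    """Map a UDIM number to its (u, v) tile offsets: 1001 -> (0, 0), 1012 -> (1, 1)."""
    idx = udim - 1001
    return idx % 10, idx // 10

def tile_to_udim(u, v):
    """Inverse mapping, e.g. for building Mari-style 'component_UDIM.exr' names."""
    return 1001 + u + 10 * v
```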

Retopology tools in Maya 2014 by Xuan Prada

These days we use a lot of 3D scans in VFX productions.
They are very useful: they've got a lot of detail and we can use them for different purposes. 3D scans are great.

But obviously, a 3D scan needs to be processed in so many ways, depending on the use you are looking for. It can be used for modelling reference, for displacement extraction, for colour and surface properties references, etc.

One of the most common uses is as a base for modelling tasks.
If so, you need to retopologize the scan to convert it into a proper 3D model, ready to be mapped, textured and so on.

In Maya 2014 we have a few tools that are great and easy to use.
I’ve been using them for quite a while now while processing my 3D scans, so let me explain to you which tools I do use and how I use them.

  • In this 3D scan you can see the amount of nice detail. It's very useful for a lot of different tasks.
  • But if you check the actual topology you will realize it is quite useless at this point in time.
  • Create a new layer and put the 3D scan inside it.
  • Activate the reference option so we can't select the 3D scan in the viewport, which is quite handy.
  • In the snap options, select the 3D scan as Live Surface.
  • Enable the Modeling Toolkit.
  • Use live surface as transform constraints.
  • This will help us to stick the new geometry on top of the 3D scan with total precision.
  • Use the Quad Draw tool to draw polygons.
  • You will need 4 points to create a polygon face.
  • Once you have 4 points, press Shift to see (preview) the actual polygon.
  • Shift-click will create the polygon face.
  • Draw as many polygons as you need.
  • LMB to create points. MMB to move points/edges/polys. CTRL+LMB to delete points/edges/polys.
  • CTRL+MMB to move edge loops.
  • If you want to extrude edges, just select one and CTRL+SHIFT+LMB and drag to a desired direction.
  • To add edge loops SHIFT+LMB.
  • To add edge loops in the exact center SHIFT+MMB.
  • To draw polygons on the fly, click CTRL+SHIFT+LMB and draw in any direction.
  • To change the size of the polygons CTRL+SHIFT+MMB.
  • To fill an empty space with a new polygon click on SHIFT+LMB.
  • To weld points CTRL+MMB.
  • If you need to do retopology for cylindrical, tubular or similar surfaces, it is even easier and faster.
  • Just create a volume big enough to contain the reference model.
  • Then go to Modeling Toolkit, edit -> Shrinkwrap Selection.
  • The new geometry will stick on to the 3D scan.
  • The new topology will be clean, but maybe you were hoping for something tidier and more organized.
  • No problem, just select the Quad Draw tool. By default the relax tool is activated. Paint wherever needed and voila: clean and tidy geometry following the 3D scan.
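The setup steps (reference layer, live surface) can also be scripted in Maya. A minimal sketch assuming it runs inside Maya's Python interpreter, where `maya.cmds` is available; the layer name `scan_layer` is my own:

```python
DISPLAY_TYPE_REFERENCE = 2  # Maya display layer mode: visible but unselectable

def prepare_scan_for_retopo(scan_mesh):
    """Put the 3D scan on a reference layer and make it the live surface."""
    # maya.cmds only exists inside Maya, so the import stays local.
    import maya.cmds as cmds
    layer = cmds.createDisplayLayer(name="scan_layer", empty=True)
    cmds.editDisplayLayerMembers(layer, scan_mesh)
    cmds.setAttr(layer + ".displayType", DISPLAY_TYPE_REFERENCE)
    cmds.makeLive(scan_mesh)  # new Quad Draw geometry will snap onto the scan
    return layer
```

After this, Quad Draw strokes land on the scan exactly as in the manual steps above.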

Vector displacement in Modo by Xuan Prada

Another quick entry with my tips&tricks for Modo.
This time I’m going to write about setting up Mudbox’s vector displacements in Modo.

  • Check your displacement in Mudbox and clean your layer stack as much as you can. This will make the extraction process faster.
  • The extraction process is very simple. Just select your low and high resolution meshes.
  • Set the vector space to Absolute if your asset is a static element, like props or environments.
  • Set the vector space to Relative if your asset will be deformed, like characters.
  • Always use 32 bit images.
  • As I said export the maps using EXR 32 bits.
  • Before moving to Modo or any other 3D package, check your maps in Nuke.
  • Once in Modo, select your asset and go to the geometry options.
  • Check Linear UVs and set the render subdivision level.
  • Assign a new shader to your asset.
  • Add a new texture layer with your vector displacement map.
  • Set it up as the Displacement effect.
  • Set the low and high values to 0 and 100.
  • You will see a displacement preview in viewport.
  • Set the gamma to 1.0. Remember that 32-bit images shouldn't be gamma corrected in a linear workflow.
  • In the shader options set the Displacement Distance to 1m; this should give you the same result as Mudbox.
  • In the render options you can control the displacement rate, which is more or less your displacement quality.
  • 1.0 is fine; play with that. Lower values will give you sharper results but will need more time to render.
  • Finally render a quick test to see if everything looks as expected.
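What the Displacement effect does with an absolute vector map can be written down directly. A minimal NumPy sketch of the object-space case described above (the function name and sample data are my own):

```python
import numpy as np

def apply_vector_displacement(points, vectors, distance=1.0):
    """Absolute vector displacement: each vertex moves along its own sampled
    3D vector, scaled by the shader's Displacement Distance."""
    p = np.asarray(points, dtype=np.float64)
    v = np.asarray(vectors, dtype=np.float64)
    return p + distance * v
```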

Zbrush displacement in Modo by Xuan Prada

Another of those steps that I need to do when I’m working on any kind of vfx project and I consider “a must”.
This is how I set up my Zbrush displacements in Modo.

  • Once you have finished your sculpting work in Zbrush, with all the layers activated go to the lowest subdivision level.
  • Go to the morph target panel, click on StoreMT and import your base geometry. Omit this step if you started your model in Zbrush.
  • Once the morph target is created, you will see it in the viewport. Go back to your sculpted mesh by clicking on the switch button.
  • Export all the displacement maps using the Multi Map Exporter. I would recommend always using 32-bit maps.
  • Check my settings to export the maps. The most important parameters are scale and intensity: scale should be 1 and intensity will be calculated automatically.
  • Check the maps in Nuke and use the roto paint tool to fix small issues.
  • Once in Modo, import your original asset. Select your asset in the item list and check linear uvs and set the amount of subdivisions that you want to use.
  • Assign a new shader to your asset, add the displacement texture as texture layer and set the effect as displacement.
  • Low value and high value should be set to 0 and 100.
  • In the gamma texture options, set the value to 1.0.
  • We are working in a linear workflow, which means that scalar textures don't need to be gamma corrected.
  • In the shader options, go to the surface normal options and use 1m as the value for the displacement distance. If you are using 32-bit displacements this value should be the standard.
  • Finally in the render options, play with the displacement rate to increase the quality of your displacement maps.
  • Values between 0.5 and 1 are welcome. Lower values are great but take more time to render, so be careful.
  • Render a displacement checker to see if everything works fine.
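Unlike the vector case, a scalar 32-bit map simply pushes each vertex along its normal. A minimal NumPy sketch of that formula (the function name and sample data are my own; a height of 0 leaves the surface untouched, consistent with the scale-1 export above):

```python
import numpy as np

def apply_scalar_displacement(points, normals, heights, distance=1.0):
    """Scalar displacement: move each vertex along its normal by the sampled
    height times the shader's Displacement Distance (1m in the setup above)."""
    p = np.asarray(points, dtype=np.float64)
    n = np.asarray(normals, dtype=np.float64)
    h = np.asarray(heights, dtype=np.float64)[:, None]
    return p + distance * h * n
```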

Lego by Xuan Prada

I continue with my transition to Modo. I already know more or less the basics of the software and I have adapted my way of working to be productive in Modo. It's time to make my first image and put into practice all that I learnt last week.

I have chosen a simple theme. After watching The Lego Movie (and some great references that they used) I wanted to create something related to it. It's simple enough to allow me to finish the image in half a day or so. Say hello to all my mates who worked on the movie; we worked together on Happy Feet a while ago.

The model for the character is quite simple, perfect to try all the modeling tools that come with Modo. Great so far. The work plane is quite useful, love it.

Modo's UV mapping tools are great and very fast. I love atlas projection and unwrap; I'll be using them all the time. For this particular model I used only unwrap: just select a few edges and that's it, done. I didn't worry much about seams, I can fix that later in Mari.
I'm using only one UDIM; this model and its textures are simple enough for a single UV space.

I worked on the textures in Mari. I could have used Modo paint tools, but I’m used to paint in Mari, and it’s definitely faster and more powerful.

I only needed three texture channels: colour, specular and bump maps. I used two different bump maps, one with fine noise for the plastic, and another one with scratches and imperfections.
All the textures are 8K resolution, sRGB and scalar, 16-bit .tiff.

For the look-dev I created an image-based lighting rig with an overcast HDRI, perfect for creating atmospheric lighting without too much direct light coming from the sun. It gives me perfect reflections and nice contrast between light and shade.
Always working with a Linear Workflow.

I only used one single shader, with no layers: a simple shader with a bit of reflection driven by a specular map.

For the ground I used a simple grid sculpted in Zbrush. Just a few dunes and procedural noise to simulate sand.

I did a few tests to find the best way to setup Zbrush displacements in Modo.
I’ll be posting soon how to do it. It’s not that complicated :)

For lensing, I used a 50mm focal length camera. I created low-poly versions of my characters and ground, just to block out the camera angle and lighting.

Finally, I updated the proxy models with the final ones.
To light the scene I used a nice high-resolution panorama I shot myself. It gave me the perfect atmosphere and reflections for this shot, but I couldn't get the perfect shadows.
I wanted to light this like a miniature, so I wanted a very strong key light with a perfect, hard shadow. I just removed the sun from the HDRI and then added a 3D light just behind the characters.

I didn't need to render AOVs or render passes; I just rendered a quick ID matte to control the ground and the characters.

This is the final render.

And finally, the black and white image that I conceived from the very beginning.

Multi UDIM workflow in Modo by Xuan Prada

You know I’m migrating to Modo.
The multi-UDIM workflow is one of my daily tasks, so this is how I do the setup.

  • First of all, check textures and UDIMs in Mari and export all the textures.
  • Check the asset and UVs in Modo.
  • Load all your textures in the Modo’s image manager.
  • Create a new material for the asset.
  • Add all the UDIM textures as image layers for each required channel.
  • In the texture locator for each texture, change the horizontal repeat and vertical repeat to Reset, and change the UV offset. It works with negative values (unlike Softimage or Maya).
  • That’s it. Make a render check to see if everything works fine.
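The negative-offset convention mentioned above can be computed straight from the UDIM number. A small sketch, assuming (per the note above) that Modo's texture locator expects the negated tile offsets:

```python
def modo_uv_offset(udim):
    """UV offset for Modo's texture locator: negated tile offsets,
    e.g. UDIM 1002 (tile u=1, v=0) -> (-1.0, -0.0)."""
    idx = udim - 1001
    return -float(idx % 10), -float(idx // 10)
```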