lidar

Dealing with lidars by Xuan Prada

Hello patrons,

We haven't talked much about lidar scanning, so it is time to say a few words about it. Lidar scans are a fundamental part of the VFX pipeline, used by every single visual effects studio on the planet and counted by the dozens on every film or TV show.

Sooner or later you will have to deal with lidar scans, which is why I have recorded this video: more than 3 hours of professional VFX training on how to use lidar scans.

In this video we will learn:

- What lidar scanning is.
- How we use lidars in VFX.
- How lidar technology works.
- Basics of working on-set with lidars.
- Lidar hardware.
- Lidar software.
- How to process point clouds.
- How to generate meshes for 3D work.

All the information is on my Patreon feed.

Thanks!

Xuan.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap 3. I'm not saying they can fully replace Wrap 3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use Houdini now instead of Wrap 3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in Zbrush. These photogrammetry scans are usually around 13 – 18 million polygons, too dense for the wrapping process. You can just decimate the model and export it as .obj

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to match (roughly) the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap3 or Houdini in this case, but I think it is better to fix them beforehand; in a VFX pipeline you will be publishing assets from Maya anyway. Then export both models as .obj

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan in Houdini.

  • Create a topo transfer node (a Python sketch of this whole network setup is included after this list).

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I had issues in the past using Maya units (decimeters), so it is better to scale by 0.1 just in case.

  • Enable the topo transfer and press Enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then press Ctrl+G to switch to the scan mesh and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or in case you are using this technique for texturing, move into Zbrush, Mari and/or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.
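
For reference, here is a minimal Python sketch of the network described above. It assumes the Topo Transfer SOP's internal type name is topotransfer and that its first input is the source and its second the target (check the input labels in your Houdini version). File paths are examples; landmark placement and the solve are still done interactively in the viewport.

    import hou

    obj = hou.node('/obj')
    geo = obj.createNode('geo', 'wrap_setup')

    # File SOPs pointing at the .obj exports from Maya (example paths).
    scan = geo.createNode('file', 'scan')
    scan.parm('file').set('$HIP/geo/actor_scan_decimated.obj')
    base = geo.createNode('file', 'generic_human')
    base.parm('file').set('$HIP/geo/generic_human.obj')

    # Scale both meshes by 0.1 to avoid the unit issues mentioned above.
    scan_scale = geo.createNode('xform', 'scale_scan')
    scan_scale.setInput(0, scan)
    scan_scale.parm('scale').set(0.1)
    base_scale = geo.createNode('xform', 'scale_base')
    base_scale.setInput(0, base)
    base_scale.parm('scale').set(0.1)

    # Topo transfer: source = generic human, target = scan (assumed input order).
    topo = geo.createNode('topotransfer', 'wrap_to_scan')
    topo.setInput(0, base_scale)
    topo.setInput(1, scan_scale)

    topo.setDisplayFlag(True)
    topo.setRenderFlag(True)
    geo.layoutChildren()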

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign a classic shader to the scan with the photogrammetry texture connected to its emission color. Disable the diffuse component.

  • Create a Bake Texture ROP with the following settings (see the Python sketch after this list).

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Tick off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.
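
If you need to run this setup repeatedly, the ROP can also be created from Python. This is only a hedged sketch: the node type name and the parameter names below are assumptions that vary between Houdini versions, so confirm them with baketex.parms() before relying on them; paths are examples.

    import hou

    out = hou.node('/out')
    # Assumed type name for the Bake Texture ROP; it may differ in your build.
    baketex = out.createNode('baketexture::3.0', 'bake_scan_color')

    # UV object = wrapped model, high res object = scan (assumed parameter names).
    baketex.parm('vm_uvobject1').set('/obj/wrapped_model')
    baketex.parm('vm_uvhires1').set('/obj/scan')
    baketex.parm('vm_uvoutputpicture1').set('$HIP/tex/scan_color.%(UDIM)d.exr')

    # Print the remaining baking parameters (resolution, unwrap method,
    # emission export) so they can be set by name once confirmed.
    for parm in baketex.parms():
        if 'unwrap' in parm.name() or 'resolution' in parm.name():
            print(parm.name(), parm.eval())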

Meshlab polygon reduction by Xuan Prada

Meshlab is probably the only available solution (proprietary lidar software doesn't count) when you have to deal with very heavy poly counts. I'm working with some complex terrains, some of them up to 50 million polys, and Maya or Zbrush just can't handle that. I can reduce the poly count fairly quickly in Meshlab with its polygon reduction tools.

  • This terrain has more than 16 million polys. Maya can't handle this very well, and Zbrush can't manage the memory to even open it. Just import it in Meshlab.
  • You will be using the Quadric Edge Collapse Decimation tool a lot (see the PyMeshLab sketch after this list).
  • There are different strategies available; I like to use the percentage one, in this case 0.5
  • I'll be getting an 8 million poly terrain.
  • I just run the tool one more time to get a 4 million poly terrain. I can work in Maya with this :)
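
The same reduction can be scripted with PyMeshLab, which is handy when batching several terrains. A small sketch, assuming a recent PyMeshLab where the filter is called meshing_decimation_quadric_edge_collapse (older releases call it simplification_quadric_edge_collapse_decimation; pymeshlab.print_filter_list() shows which one you have). File names are examples.

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('terrain_16M.obj')

    # Quadric Edge Collapse Decimation, percentage strategy: keep 50% of the faces.
    ms.apply_filter('meshing_decimation_quadric_edge_collapse',
                    targetperc=0.5, preservenormal=True)

    # Run it a second time to go from roughly 16M to 4M polys, as described above.
    ms.apply_filter('meshing_decimation_quadric_edge_collapse',
                    targetperc=0.5, preservenormal=True)

    ms.save_current_mesh('terrain_4M.obj')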

On-set tips: The importance of high frequency detail by Xuan Prada

Quick tip here. Whenever possible, use some kind of high frequency detail to capture references for your assets. In this scenario I'm scanning this huge rock with photos: only 50 images and very bad conditions. Low light, shot hand-held with no tripod at all, very windy and raining.
Thanks to all the great high frequency detail on the surface of this rock, the output is good enough to use as a modeling reference, even to extract highly detailed displacement maps.

Notice in the image below that I'm using only 50 pictures. Not much, you might say, but thanks to all the tiny detail, the photogrammetry software does very well at reconstructing the point cloud and generating the 3D model. There is a lot of information to find common points between photos.

The shooting pattern couldn't be simpler: just one figure eight all around the subject. The alignment was completely successful in Photoscan.

As you can see here, even with a small number of photos and not the best lighting conditions, the output is quite good.

I did an automatic retopology in Zbrush. I don't care much about the topology; this asset is not going to be animated at all. I just need a manageable topology to create nice UV mapping, reproject all the fine detail in Zbrush and use it later as a displacement map.

A few render tests.

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better interaction between the environment and the 3D assets, which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information needed later when working on the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • Drawings and annotations would have been good enough for this environment, just because it's quite simple, but in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time. Not only texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • For the photogrammetry process I took around 500 shots, every single one composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for photogrammetry purposes (a small merge sketch is included after this list).
  • I also did a few equirectangular HDRIs with an even higher dynamic range, then projected these in Mari using the environment projection feature. Once I completed the projections from the different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. Having the 3D scan, this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images, applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally I exported all the UDIMs to Maya (around 70), all of them 16 bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them (a small batch conversion sketch of this step follows below), I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
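
As a side note, merging each 3-stop bracketed set into an HDR can also be scripted. Below is a hedged sketch using OpenCV's Debevec merge rather than any particular HDR tool; the file names and exposure times are made-up examples, and writing .exr may require OPENCV_IO_ENABLE_OPENEXR=1 depending on the OpenCV build.

    import cv2
    import numpy as np

    # One bracketed set: under, mid and over exposures (example file names).
    files = ['bracket_under.jpg', 'bracket_mid.jpg', 'bracket_over.jpg']
    images = [cv2.imread(f) for f in files]

    # Exposure times in seconds, 3 stops apart (example values).
    times = np.array([1 / 1000.0, 1 / 125.0, 1 / 15.0], dtype=np.float32)

    # Recover the camera response curve, then merge to a radiance map.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Float HDR for texturing, plus a tonemapped LDR for the photogrammetry solve.
    cv2.imwrite('frame_0001.exr', hdr)
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite('frame_0001_ldr.jpg', np.clip(ldr * 255, 0, 255).astype('uint8'))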

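And a hedged sketch of the mipmapping step: batch-converting the exported UDIM tiles to .tx with maketx (the tool that ships with Arnold / OpenImageIO) before rendering. The path pattern is an example; check maketx --help for the flags available in your install.

    import glob
    import subprocess

    # Convert every exported UDIM tile to a mipmapped .tx next to the source file.
    for exr in glob.glob('tex/env_hdr.*.exr'):
        tx = exr.replace('.exr', '.tx')
        subprocess.run(['maketx', exr, '-o', tx], check=True)
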
Meshlab align to ground by Xuan Prada

If you deal a lot with 3D scans, lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" piece of software is great at managing 75 million polygon lidars and other complex meshes. Experienced Photoscan users usually play with the align to ground tool to establish the correct axis for their resulting meshes.

If you look for this option in Meshlab you won't find it, at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.

  • Import your Lidar or photogrammetry, and also import a ground plane exported from Maya. This is going to be your floor, ground or base axis.
  • This is a very simple example. The goal is to align the sneaker to the ground. I wish I got to deal with such simple lidars at work :)
  • Click on the align icon.
  • In the align tool window, select the ground object and click on glue here mesh.
  • Notice the star that appears before the name of the object indicating that the mesh has been selected as base.
  • Select the lidar, photogrammetry or whatever geometry that needs to be aligned and click on point based glueing.
  • In this little window you can see both objects. Feel free to navigate around; it behaves like a normal viewport.
  • Select one point at the base of the lidar by double clicking on top of it. Then do the same on a point of the base geo.
  • Repeat the same process. You'll need at least 4 points (the math behind this alignment is sketched after the list).
  • Done :)
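
For context, this is essentially what point based glueing computes: given a handful of corresponding points picked on both meshes, a best-fit rigid transform (rotation plus translation) that moves one onto the other. Below is a small NumPy sketch of that math (the Kabsch method), with made-up example points.

    import numpy as np

    def rigid_transform(src_pts, dst_pts):
        """Least-squares rotation R and translation t so that R @ src + t ~ dst."""
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        # SVD of the covariance matrix gives the optimal rotation (Kabsch method).
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst.mean(axis=0) - r @ src.mean(axis=0)
        return r, t

    # Four points picked on the base of the sneaker and their targets on the ground plane.
    scan_points = [[0.02, 0.00, 0.11], [0.31, 0.01, 0.12], [0.30, 0.02, 0.38], [0.03, 0.01, 0.40]]
    ground_points = [[0.0, 0.0, 0.1], [0.3, 0.0, 0.1], [0.3, 0.0, 0.4], [0.0, 0.0, 0.4]]

    R, t = rigid_transform(scan_points, ground_points)
    print(R, t)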