Photogrammetry

Professional photogrammetry 04 by Xuan Prada

Hello!

This is the first of two videos about drone photogrammetry in my series "Professional photogrammetry".
In this video I will explain everything that I know about scanning for visual effects using commercial drones.

We will talk about shooting patterns, surveying extra data, equipment, flight missions, etc. And we will also start processing some of the data sets that I have available for you.
The video is around 3 hours long and it will continue in the next episode of this series.

Head over to my Patreon for more information.

Thanks!
Xuan.

Intro to gaussian splatting by Xuan Prada

Bibliography

  • https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

  • https://github.com/aras-p/UnityGaussianSplatting

  • https://www.youtube.com/watch?v=KFOy354zf9E

Hardware

- You need an Nvidia GPU with at least 24GB VRAM.

Software

  • Git https://git-scm.com/downloads

  • Once installed, open a command line and type git --version to check if it's working.

  • Anaconda https://www.anaconda.com/download

  • It will install all the packages and wrappers that you need.

  • CUDA toolkit 11.8 https://developer.nvidia.com/cuda-toolkit-archive

  • Once installed open a command line and type nvcc --version to check if it's working.

  • Visual Studio with C++ https://visualstudio.microsoft.com/vs/older-downloads/

  • Once installed, open Visual Studio Installer and install the "Desktop development with C++" workload.

  • Colmap https://github.com/colmap/colmap/releases

  • This tool is used to compute the camera positions (poses).

  • Add it to environment variables.

  • Edit the environment variables, double-click the "Path" variable, add a new entry and paste the path to the folder where Colmap is stored.

  • ImageMagick https://imagemagick.org/script/download.php

  • This tool is for resizing images.

  • Test it by typing these lines one by one in the command line.

  • magick logo: logo.gif

  • magick identify logo.gif

  • magick logo.gif win:

  • FFMPEG https://ffmpeg.org/download.html

  • Add it to environment variables.

  • Open a command line and type ffmpeg to check if it's working.

  • To convert a video to photos, go to the folder where ffmpeg is downloaded (or use the small Python sketch after this list).

  • Type ffmpeg.exe -i pathToVideo.mov -vf fps=2 out%04d.jpg

  • Finally restart your computer.
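
If you prefer to script the frame extraction instead of typing the ffmpeg line by hand, here is a minimal Python sketch that wraps the same command shown above. It assumes ffmpeg is already on your PATH; the video path and output folder are placeholders.

    import subprocess
    from pathlib import Path

    def video_to_frames(video_path, out_dir, fps=2):
        """Extract JPG frames from a video with ffmpeg (same command as above)."""
        out_dir = Path(out_dir)
        out_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-i", str(video_path), "-vf", f"fps={fps}",
             str(out_dir / "out%04d.jpg")],
            check=True)

    # Hypothetical usage:
    # video_to_frames("capture.mov", "dataset/input", fps=2)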

How to capture gaussian splats?

  • Same rules as photogrammetry, but fewer images are needed.

  • Do not move too fast; we don't want blurry frames.

  • Take between 200 - 1000 photos.

  • Use a fixed exposure, otherwise it will create flickering in the final model.

Processing

  • Create a folder called "dataset".

  • Inside create another folder called "input" and place all the photos.

  • Now we need to use Colmap to obtain the camera poses. You could use RealityCapture or Metashape to do the same thing.

  • We can do this from the command line, but for simplicity let's use the GUI (a minimal command-line sketch follows this list).

  • Open Colmap, File - New. Set the database to your "dataset" folder and call it database.db. Set the images to the "input" folder. Save.

  • Processing - Feature extraction. Enable "Shared for all images" if there is no change in zoom in your photos. Click on Extract. This will take a few minutes.

  • Processing - Feature matching. Sequential is faster, exhaustive is more precise. This will take a few minutes.

  • Save the Colmap scene in "dataset" - "colmap" (create the folder).

  • Reconstruction - Reconstruction options. Uncheck multiple_models as we are reconstructing a single scene.

  • Reconstruction - Start reconstruction. This is the longest step, potentially hours, depending on the number of photos.

  • Once Colmap has finished you will see the camera poses and the sparse pointcloud.

  • File - Export model and save it in "dataset" - "distorted" - "sparse" - "0" (create the directories).
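
For reference, the same Colmap steps can also be run from the command line. This is only a minimal sketch of the idea, not the exact settings used in the video; the option names come from Colmap's CLI, so check colmap -h if your version differs. The dataset path is a placeholder.

    import subprocess
    from pathlib import Path

    DATASET = Path("dataset")  # placeholder: the folder that contains "input"

    def run(args):
        print(" ".join(args))
        subprocess.run(args, check=True)

    # 1. Feature extraction ("shared for all images" = one camera model)
    run(["colmap", "feature_extractor",
         "--database_path", str(DATASET / "database.db"),
         "--image_path", str(DATASET / "input"),
         "--ImageReader.single_camera", "1"])

    # 2. Feature matching (exhaustive: slower but more precise)
    run(["colmap", "exhaustive_matcher",
         "--database_path", str(DATASET / "database.db")])

    # 3. Sparse reconstruction, written to distorted/sparse/0 (the long step)
    out = DATASET / "distorted" / "sparse"
    out.mkdir(parents=True, exist_ok=True)
    run(["colmap", "mapper",
         "--database_path", str(DATASET / "database.db"),
         "--image_path", str(DATASET / "input"),
         "--output_path", str(out)])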

Train the 3D gaussian splatting model

  • Open a command line and type git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive

  • This will be downloaded into your user folder as "gaussian-splatting".

  • Open an anaconda prompt and go to the directory where the gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • SET DISTUTILS_USE_SDK=1

  • conda env create --file environment.yml

  • conda activate gaussian_splatting

  • cd to the folder where gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • pip install plyfile tqdm

  • pip install submodules/diff-gaussian-rasterization

  • pip install submodules/simple-knn

  • Before training the model we need to undistort the images.

  • Type python convert.py -s $FOLDER_PATH --skip_matching

  • This is going to create a folder called sparse and another one called stereo, and also a couple of files.

  • Train the model. Both this step and the undistortion above are chained in the small Python sketch after this list.

  • python train.py -s $FOLDER_PATH -m $FOLDER_PATH/output

  • This will train the model and export two pointclouds, one at 7000 iterations and another one at 30000 iterations.
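
Both steps can be chained in a small Python sketch like the one below. It assumes you have already activated the gaussian_splatting conda environment and that you run it from inside the cloned gaussian-splatting folder; the dataset path is a placeholder.

    import subprocess
    import sys

    DATASET = r"C:\path\to\dataset"  # placeholder: your "dataset" folder

    # Undistort the images (camera poses already computed by Colmap)
    subprocess.run([sys.executable, "convert.py", "-s", DATASET, "--skip_matching"],
                   check=True)

    # Train the model; point clouds are exported at 7000 and 30000 iterations
    subprocess.run([sys.executable, "train.py", "-s", DATASET,
                    "-m", DATASET + "/output"],
                   check=True)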

Visualizing the model

  • Download the viewer here: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip

  • From a terminal: SIBR_gaussianViewer_app -m $FOLDER_PATH/output

  • Unity 2022.3.9f1

  • Load the project. https://github.com/aras-p/UnityGaussianSplatting

  • Tools - Gaussian splats - Create.

  • Select the pointcloud, create.

  • Select the gaussian splats game object and attach the pointcloud.

  • Do your thing!

Professional photogrammetry 03 by Xuan Prada

Hello patrons,

This is the third episode of "Professional photogrammetry".
Two videos, around 4 hours long in total, where we will talk about photometric stereo scanning. We will discuss the different setups that we have available for this kind of scanning. Then we will work on three different exercises, starting with the calibration of the setup, followed by some fabric and vegetation scanning.

Finally, we will take a look at how to use coded targets for variable reduction in our scans.
Hope you learn something new and have some fun.

All the info on my Patreon page.

Thanks!
Xuan.

Mix 04 by Xuan Prada

Hello patrons,

First video of 2022 will be a mix of topics.

The first part of the video will be dedicated to face building and face tracking in Nuke. Using these tools and techniques will allow us to generate 3D heads and faces using only a few photos with the help of AI. Once we have the 3D model, we should be able to track and matchmove a shot to do a full head replacement or to extend/enhance some facial features.

In the second part of the video I will show you a technique that I used while working on Happy Feet to generate footprints and foot trails. A pretty neat technique that relies on transferring information between surfaces instead of going full on with complex simulations.

This is a 3 and a half hour video, so grab yourself a cup of coffee and enjoy!
All the information on my Patreon channel.

As always, thanks for your support!

Xuan.

Scan based materials on a budget by Xuan Prada

Hello patrons,

Last post of the year!

In this two and a half hour video I will show you my workflow to create smart materials based on photogrammetry. A technique widely used in VFX and the game industry.

But we won't be using special hardware or very expensive photographic equipment; we are going to use only a cheap digital camera or even a smartphone.

In this video you will learn:

- How to shoot photogrammetry for material creation.
- How to process photogrammetry in Reality Capture.
- How to bake texture maps from high resolution geometry in Zbrush.
- How to create smart materials in Substance Designer for Substance Painter or for 3D applications.
- How to use photogrammetry based materials in real time engines.

Thanks for your support and see you in 2022!
Stay safe.

Xuan.

Detailing digi doubles using generic humans by Xuan Prada

This is probably the last video of the year, let's see about that.

This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid would have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.

This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.

In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. And once we have that, we will re-project back all the details from the sculpt and we will apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.

The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.

All the info on my Patreon site.

Thanks!

Xuan.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap 3. I'm not saying that I can fully replace Wrap 3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use Houdini now instead of Wrap 3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in Zbrush. These photogrammetry scans are usually around 13-18 million polygons. Too dense for the wrapping process. You can just decimate the model and export it as .obj

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to match (roughly) the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap 3, or Houdini in this case, but I think it is better to fix them beforehand; in a vfx pipeline you will be publishing assets from Maya anyway. Then export both models as .obj

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan in Houdini.

  • Create a topo transfer node (a Python sketch of this node setup follows this list).

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I had issues in the past using Maya units (decimeters) so better to scale by 0.1 just in case.

  • Enable the topo transfer, press enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then ctrl+g to switch to the scan mesh, and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or in case you are using this technique for texturing, move into Zbrush, Mari and/or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.
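
The node setup itself can also be scripted with Houdini's Python API. This is only a rough sketch under a couple of assumptions: the internal node type name of the Topo Transfer SOP is assumed to be "topotransfer", the input order (source first, target second) is assumed, and the file paths are placeholders. The landmark placement and the Solve step are still done interactively in the viewport as described above.

    # Run from a Houdini Python shell.
    import hou

    geo = hou.node("/obj").createNode("geo", "topo_transfer_setup")

    # Base mesh (generic human) and scan, both exported from Maya as .obj
    base = geo.createNode("file", "base_mesh")
    base.parm("file").set("$HIP/geo/generic_human.obj")  # placeholder path
    scan = geo.createNode("file", "scan")
    scan.parm("file").set("$HIP/geo/actor_scan.obj")     # placeholder path

    # Scale down by 0.1 to avoid the Maya unit issues mentioned above
    scale = geo.createNode("xform", "scale_down")
    scale.setInput(0, scan)
    scale.parm("scale").set(0.1)

    # Topo Transfer: source = base mesh, target = scan (input order assumed)
    topo = geo.createNode("topotransfer", "wrap_to_scan")
    topo.setInput(0, base)
    topo.setInput(1, scale)

    geo.layoutChildren()
    topo.setDisplayFlag(True)
    topo.setRenderFlag(True)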

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign a classic shader with the photogrammetry texture connected to its emission color to the scan. Disable the diffuse component.

  • Create a Bake Texture ROP with the following settings.

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Tick off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.

Introduction to Reality Capture by Xuan Prada

In this 3 hour tutorial I go through my photogrammetry workflow using Reality Capture in conjunction with Maya, Zbrush, Mari and UV Layout.

I will guide you through the entire process, from capturing footage on-set until asset completion. I will explain the most basic settings needed to process your images in Reality Capture, to create point clouds, high resolution meshes and placeholder textures.
Then I will continue to develop the asset in order to make it suitable for any visual effects production.

These are the topics included in this tutorial.

- Camera gear.
- Camera settings.
- Shooting patterns.
- Footage preparation.
- Photogrammetry software.
- Photogrammetry process in Reality Capture.
- Model clean up.
- Retopology.
- UV mapping.
- Texture re-projection, displacement and color maps.
- High resolution texturing in Mari.
- Render tests.

Check it out on my Patreon feed.

On-set tips: Creating high frequency detail by Xuan Prada

In a previous post I mentioned the importance of having high frequency details whilst scanning assets on-set. Sometimes if we don't have that detail we can just create it. Actually, sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.

If we are dealing with assets that are being used on set but won't appear in the final edit, those assets are probably not painted at all. There is no need to spend resources on it, right? But we might need to scan those assets to create a virtual asset that will be ultimately used on screen.

As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create high frequency detail our own way.

Let's say we need to create a virtual asset of this physical mask. It is completely plain and white; we don't see much detail on its surface. We can create high frequency detail just by painting some dots, or placing small stickers across the surface.

In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.

I created this support a while ago to scan fruits and other organic assets.

The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing against gray than white.

Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.

Once the material has been processed you should get a pretty decent scan. Probably an impossible task without creating all the high frequency detail first.

Meshlab polygon reduction by Xuan Prada

Meshlab is probably the only available solution (proprietary Lidar software doesn't count) when you have to deal with very heavy poly counts. I'm working with some complex terrains, some of them up to 50 million polys, and Maya or Zbrush just can't handle that. I'm reducing the poly count considerably fast in Meshlab with its polygon reduction tools.

  • This terrain has more than 16 million polys. Maya can't handle this very well, and Zbrush can't manage memory to even open it. Just import it in Meshlab.
  • You will be using the Quadric Edge Collapse Decimation tool a lot (a PyMeshLab sketch of the same operation follows this list).
  • There are different strategies available; I like to use the percentage one. In this case, 0.5.
  • I'll be getting an 8 million poly terrain.
  • I just run the tool one more time to get a 4 million poly terrain. I can work in Maya with this :)
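
The same reduction can also be scripted with PyMeshLab, which is handy when there are several terrains to process. A minimal sketch follows; the filter is called meshing_decimation_quadric_edge_collapse in recent PyMeshLab releases (older versions name it simplification_quadric_edge_collapse_decimation), and the file names are placeholders.

    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("terrain_16M.obj")  # placeholder path

    # Quadric Edge Collapse Decimation by percentage: keep 50% of the faces,
    # run twice to go from ~16M to ~4M polys, as in the Meshlab steps above
    ms.meshing_decimation_quadric_edge_collapse(targetperc=0.5)
    ms.meshing_decimation_quadric_edge_collapse(targetperc=0.5)

    ms.save_current_mesh("terrain_4M.obj")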

On-set tips: The importance of high frequency detail by Xuan Prada

Quick tip here. Whenever possible, use some kind of high frequency detail to capture references for your assets. In this scenario I'm scanning this huge rock with photos, using only 50 images and in very bad conditions: low lighting, shot hand-held with no tripod at all, very windy and raining.
Thanks to all the great high frequency detail on the surface of this rock, the output is quite good to use as a modeling reference, even to extract highly detailed displacement maps.

Notice in the image below that I'm using only 50 pictures. Not much, you might say. But thanks to all the tiny detail, the photogrammetry software does very well at reconstructing the point cloud to generate the 3D model. There is a lot of information to find common points between photos.

The shooting pattern couldn't be simpler. Just one figure-eight pattern all around the subject. The alignment was completely successful in Photoscan.

As you can see here, even with a small number of photos and not the best lighting conditions, the output is quite good.

I did an automatic retopology in Zbrush. I don't care much about the topology; this asset is not going to be animated at all. I just need a manageable topology to create nice UV mapping, reproject all the fine detail in Zbrush and use it later as a displacement map.

A few render tests.

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was recreating the environment in terms of shape and volume, and then projecting HDRIs on the geometry. Doing this we can get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better environment interaction with 3D assets, which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information needed later when working on the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time. Not only texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • For the photogrammetry process I took around 500 shots. Every single one composed of 3 bracketed exposures, 3 stops apart. This will give me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The EXR HDRs will be used for texturing and the JPG LDRs for photogrammetry purposes (a small merging sketch follows this list).
  • I also did a few equirectangular HDRIs with an even higher dynamic range. Then I projected these in Mari using the environment projection feature. Once I completed the projections from different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. Having the 3D scan, this is so simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images. I applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally, I exported all the UDIMs to Maya (around 70). All of them are 16 bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
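
For reference, here is one way to merge the 3 bracketed exposures into a single HDR image, sketched with Python and OpenCV. This is not necessarily the tool used for this project; the file names and exposure times are placeholders, and it assumes the three brackets are already aligned.

    import cv2
    import numpy as np

    # Three aligned brackets of the same view, 3 stops apart (placeholder files)
    files = ["view_minus3.jpg", "view_0.jpg", "view_plus3.jpg"]
    images = [cv2.imread(f) for f in files]

    # Exposure times in seconds (placeholders, use the real shutter speeds)
    times = np.array([1 / 1000, 1 / 125, 1 / 15], dtype=np.float32)

    # Recover the camera response curve and merge into a linear HDR image
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    cv2.imwrite("view_merged.hdr", hdr)  # .exr also works if OpenCV has OpenEXR support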

A bit more photogrammetry by Xuan Prada

Just a few more screenshots and renders of the last photogrammetry stuff that I've been doing. All of these are part of some training that I'll be teaching soon. Get in touch if you want to know more about it.

More photogrammetry stuff by Xuan Prada

I'm generating content for a photogrammetry course that I'll be teaching soon. These are just a few images of that content. More to come soon, I'll be doing a lot of examples and exercises using photogrammetry for visual effects projects.