I built a platform to 3D scan fruits and food in general. This dragon fruit is my first test. More to come.
And this is the platform that I built to scan food, vegetables and other assets.
One of the first treatments you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different footage, or between footage and 3D renders, is also very common. When working across different software like 3DEqualizer, Nuke, Flame, etc., having a common, standard way to copy lens information seems like a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.
- Using lens grids is always the easiest, fastest and most accurate way of delensing.
- Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
- Write the image as a 32-bit .exr.
- This will output the uv information and can be read in any software.
- To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the UV map.
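The lookup an STMap performs is simple: each pixel of the UV map stores the normalized position at which to sample the source image. A minimal sketch of the idea in plain NumPy (outside Nuke, nearest-neighbour sampling only, all names hypothetical; real tools filter and interpolate):

```python
import numpy as np

def apply_stmap(src, uv):
    """Remap src using an STMap: uv[..., 0] is U, uv[..., 1] is V, both in [0, 1].
    Nearest-neighbour lookup for clarity only."""
    h, w = src.shape[:2]
    # Convert normalized UVs to pixel indices. V is measured from the bottom,
    # as in Nuke's coordinate system, hence the 1.0 - v flip.
    x = np.clip((uv[..., 0] * w).astype(int), 0, w - 1)
    y = np.clip(((1.0 - uv[..., 1]) * h).astype(int), 0, h - 1)
    return src[y, x]

# An identity STMap (each pixel samples itself) leaves the image unchanged.
h, w = 4, 4
src = np.arange(h * w, dtype=np.float32).reshape(h, w)
uu, vv = np.meshgrid((np.arange(w) + 0.5) / w, 1.0 - (np.arange(h) + 0.5) / h)
uv = np.dstack([uu, vv])
out = apply_stmap(src, uv)
```

A delens UV map is just a non-identity version of this: the baked distortion field, stored losslessly in a 32-bit EXR.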
I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. This way we get more accurate lighting contribution, occlusion, reflections and colour bleeding, and much better interaction between the environment and 3D assets, which basically means better integration for our VFX shots.
I tried to make it as simple as possible, spending just a couple of hours on location.
- The first thing I did was draw some diagrams of the environment, then cover the whole place with a laser measurer, writing down all the information I would need later when working on the virtual reconstruction.
- Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
- The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not just texture placeholders, but true HDR textures that I can use later for projections.
- I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
- Each of those roughly 500 shots is composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
- I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The EXR HDRs will be used for texturing and the JPG LDRs for photogrammetry purposes.
- I also shot a few equirectangular HDRIs with an even higher dynamic range, then projected these in Mari using the environment projection feature. Once I had completed the projections from different tripod positions, I covered the remaining areas with the rectilinear HDRs.
- These are the five different HDRI positions and some render tests.
- The next step is to create a proxy version of the environment. Having the 3D scan makes this very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high-detail model, but in this case the proxy version was good enough for what I needed.
- Then, high-resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
- After that, I imported into Mari a few cameras exported from Photoscan, along with the corresponding rectilinear HDR images. I applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
- Finally, I exported all the UDIMs (around 70) to Maya, all of them 16-bit images with the original dynamic range required for 3D lighting.
- After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
- Export your scene from Maya with the geometry and camera animation.
- Import the geometry and camera in Nuke.
- Import the footage that you want to project and connect it to a Project 3D node.
- Connect the cam input of the Project 3D node to the previously imported camera.
- Connect the img input of the ReadGeo node to the Project 3D node.
- Look through the camera and you will see the image projected onto the geometry through the camera.
- Paint or tweak whatever you need.
- Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
- The projection option of the UVProject node should be set to off.
- Use a ScanlineRender node and connect its obj/scene input to the UVProject.
- Set the projection mode to UV.
- If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
- Finally, use a Write node to output your DMP work.
- Render in Maya as expected.
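The node graph from the steps above can also be built with Nuke's Python API. This is a rough sketch only: the file paths are placeholders, and input indices and knob names can differ between Nuke versions, so verify each connection against your own build.

```python
import nuke

# Geometry and camera exported from Maya (paths are placeholders).
read_geo = nuke.nodes.ReadGeo2(file='/path/to/scene.abc')
camera = nuke.nodes.Camera2(file='/path/to/scene.abc', read_from_file=True)
paint = nuke.nodes.Read(file='/path/to/dmp_paint.exr')

# Project the paint through the shot camera onto the geometry.
project = nuke.nodes.Project3D()
project.setInput(0, paint)     # img input
project.setInput(1, camera)    # cam input
read_geo.setInput(0, project)  # img input of the ReadGeo

# Re-assign UVs from the camera, then bake with ScanlineRender in UV mode.
uv_project = nuke.nodes.UVProject(projection='off')
uv_project.setInput(0, read_geo)
uv_project.setInput(1, camera)  # axis/cam input

scanline = nuke.nodes.ScanlineRender(projection_mode='uv')
scanline.setInput(1, uv_project)  # obj/scn input

write = nuke.nodes.Write(file='/path/to/baked_dmp.%04d.exr')
write.setInput(0, scanline)
```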
Concept for an installation. More to come.
Each camera works a little differently regarding the use of the Promote Control system for automated tasks. In this particular case I'm going to show you how to configure both the Canon EOS 5D Mark III and the Promote Control for use in VFX look-dev and lighting image acquisition.
- You will need the following:
- Canon EOS 5D Mark III
- Promote Control
- USB cable + adaptor
- Shutter release CN3
- Connect both cables to the camera and to the Promote Control.
- Turn on the Promote Control and press simultaneously right and left buttons to go to the menu.
- In setup menu 2, "Use a separate cable for shutter release", select yes.
- In setup menu 9, "Enable exposures below 1/4000", select yes. This is very important if you need more than 5 brackets for your HDRIs.
- Press the central button to exit the menu.
- Turn on your Canon EOS 5D Mark III and go to the menu.
- Mirror lock-up should be off.
- Long exposure noise reduction should be off as well. We don't want to vary noise level between brackets.
- Find your neutral exposure and pass the information on to the Promote Control.
- Select the desired number of brackets and you are ready to go.
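To see why the "exposures below 1/4000" option matters, you can work out the shutter times a bracket set implies: exposure time doubles per stop, so with a neutral exposure of, say, 1/125 s, seven brackets 3 stops apart already demand speeds faster than 1/4000 s. A quick sketch (function name hypothetical):

```python
def bracket_times(neutral, count, step=3):
    """Shutter times (seconds) for `count` brackets centred on `neutral`,
    spaced `step` stops apart. Exposure time doubles with each stop."""
    half = count // 2
    return [neutral * 2 ** (step * i) for i in range(-half, half + 1)]

times = bracket_times(1 / 125, 7)  # e.g. a neutral exposure of 1/125 s
# Any bracket faster than 1/4000 s needs setup menu 9 enabled.
fast = [t for t in times if t < 1 / 4000]
```

With these numbers the two darkest brackets land at 1/8000 s and 1/64000 s, which the camera alone cannot expose without that setting.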
These were sent to me by my friend and ex-workmate Ramón López and programmed by Pilar Molina during the production of the short film Shift.
One of the scripts adds Arnold subdivision to all the objects in the scene, and another adds the same property but only to the selected objects. Finally, there is another handy script that replaces all the textures in your scene with the equivalent .tx textures.
Download them here or here.
Thanks Pilar and Ramón.
As you probably know, Arnold manages subdivision individually per object; there is no way to subdivide multiple objects at once. Obviously, if you have a lot of different objects in a scene, going one by one adding Arnold's subdivision property doesn't sound like a good idea.
This is the easiest way that I found to solve this problem and subdivide tons of objects at once.
I have no idea at all about scripting, if you have a better solution, please let me know :)
- This is the character that I want to subdivide. As you can see it has a lot of small pieces. I'd like to keep them separate and subdivide every single one of them.
- First of all, you need to select all the geometry shapes. To do this, select all the geometry objects in the outliner and paste these lines in the script editor.
/* You have to select all the objects you want to subdivide; it doesn't work with groups or locators.
   Once the shapes are selected, just change aiSubdivType and aiSubdivIterations in the attribute spread sheet. */
pickWalk -d down;
string $shapesSelected = `ls -sl`;
- Once all the shapes are selected, go to the Attribute Spread Sheet.
- Filter by ai subd.
- Just type the subdivision method and iterations.
- This is it, the whole character is now subdivided.
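The same operation can be done entirely from script with Python (maya.cmds), skipping the spread sheet step. A sketch, assuming the MtoA plug-in is loaded (it is what adds the aiSubdivType/aiSubdivIterations attributes to mesh shapes), with the method and iteration count picked arbitrarily:

```python
import maya.cmds as cmds

# Gather the mesh shapes under the current selection (groups are walked down,
# unlike the MEL pickWalk approach).
shapes = cmds.ls(selection=True, dagObjects=True, type='mesh', long=True) or []

for shape in shapes:
    cmds.setAttr(shape + '.aiSubdivType', 1)        # 1 = catclark
    cmds.setAttr(shape + '.aiSubdivIterations', 2)  # iterations to taste
```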
Just a few more screenshots and renders of the last photogrammetry stuff that I've been doing. All of these are part of some training that I'll be teaching soon. Get in touch if you want to know more about it.
Clarisse is perfectly capable of rendering volumes while maintaining its flexible rendering options, like instances or scatterers. In this particular example I'm going to render a very simple smoke simulation.
Start by creating an IBL setup. Clarisse allows you to do it with just one click.
Using a couple of matte and chrome spheres will help to establish the desired lighting situation.
To import the volume simulation just go to import -> volume.
Clarisse will show you a basic representation of the volume in the viewport, always in real time.
To improve the visual representation of the volume in the viewport, just click on Progressive Rendering. Lighting will also affect the volume in the viewport.
Volumes are treated pretty much like geometry in Clarisse. You can render volumes with standard shaders if you wish.
The ideal situation, of course, would be to use volume shaders for volume simulations.
In the material editor I'm going to use a utility -> extract property node to read any embedded property in the simulation. In this case I'm reading the temperature.
Finally I drive the temperature color with a gradient map.
If you get a lot of noise in your renders, don't forget to increase the volume sampling of your light sources.
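Driving colour from a scalar property such as temperature is just a gradient lookup: normalize the property, then interpolate between colour stops. A small plain-Python/NumPy sketch of the idea (outside Clarisse, all names hypothetical):

```python
import numpy as np

def gradient_lookup(t, stops, colors):
    """Map normalized property values t (0-1) to colours by linearly
    interpolating each channel between gradient stops, like a ramp node."""
    t = np.clip(t, 0.0, 1.0)
    # Interpolate R, G and B independently over the stop positions.
    return np.stack([np.interp(t, stops, c) for c in np.array(colors).T], -1)

# A black -> red -> yellow ramp driven by "temperature".
stops = [0.0, 0.5, 1.0]
colors = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]
temp = np.array([0.0, 0.25, 0.5, 1.0])
rgb = gradient_lookup(temp, stops, colors)
```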
I'm generating content for a photogrammetry course that I'll be teaching soon. These are just a few images of that content. More to come soon, I'll be doing a lot of examples and exercises using photogrammetry for visual effects projects.
This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you browse the "Clarisse" tag you will find all the previous posts. I hope to publish more soon.
In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons quickly and nicely.
- Once your particle system has been simulated in Maya, export it via Alembic, one of the standard 3D formats for exchanging information in VFX.
- Create an IBL rig in Clarisse. In a previous post I explain how to do it, it is quite simple.
- With Clarisse 2.0 it is so simple to do, just one click and you are ready to go.
- Go to File -> Import -> Scene and select the Alembic file exported from Maya.
- It comes with 2 types of particles, a grid acting as ground and the render camera.
- Create a few contexts to keep everything tidy. Geo, particles, cameras and materials.
- In the geo context I imported the toy_man and the toy_truck models (.obj) and moved the grid from the main context to the geo context.
- Moved the 2 particle systems and the camera to their corresponding contexts.
- In the materials context I created 2 materials and 2 color textures for the models. Very simple shaders and textures.
- In the particles context I created a new scatterer called scatterer_typeA.
- In the geometry support of the scatterer, add the particles_typeA, and in the geometry section add the toy_man model.
- I’m also adding some variation to the rotation.
- If I move my timeline I will see the particle animation using the toy_man model.
- Do not forget to assign the material created before.
- Create another scatterer for the particles_typeB and configure the geometry support and the geometry to be used.
- Add also some rotation and position variation.
- As these models are quite big compared with the toy figurine, I’m offsetting the particle effect to reduce the presence of toy_trucks in the scene.
- Before rendering, I’d like to add some motion blur to the scene. Go to raytracer -> Motion Blur -> 3D motion blur. Now you are ready to render the whole animation.
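For reference, the Alembic export in the first step can also be run from Maya's Script Editor. A sketch, assuming the AbcExport plug-in is available; the frame range, node name (|nParticle1) and output path are all placeholders you would swap for your own:

```python
import maya.cmds as cmds

# Make sure the Alembic export plug-in is loaded.
cmds.loadPlugin('AbcExport', quiet=True)

# Export frames 1-120 of the particle system to an .abc file.
cmds.AbcExport(j='-frameRange 1 120 -root |nParticle1 -file /tmp/particles.abc')
```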
On behalf of akromatic.
We are shipping our new 3/8 adaptors that can fit all of our Lighting Checker handles. This is the best way to attach any of our Lighting Checkers individually to any standard 3/8 professional tripod.
This adaptor is included when purchasing Lighting Checker "Mono" from our online store.
If you need to buy additional adaptors for other kits or other purposes, you can buy them as well in our store.
These 3/8 adaptors are made of high quality aluminium.
I've been doing a lot of photogrammetry stuff recently, can't show much yet but I will soon.
These are just a few tests that I did to get comfortable scanning small props.
In order to improve our custom plate solutions for attaching akromatic spheres to your tripod, we came up with the akromatic adaptor, which will allow you to attach all of our spheres and carbon fibre handles to any tripod with a standard 3/8 attachment.
We'll be sending this adaptor with our akromatic kits very soon.
See it in action.
Visit akromatic.com for more information about this product.
If you deal a lot with 3D scans, Lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" software is great at managing 75-million-polygon Lidars and other complex meshes. Experienced Photoscan users usually play with the align-to-ground tool to establish the correct axis for their resulting meshes.
If you look for this option in Meshlab you won't find it; at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.
- Import your Lidar or photogrammetry, and also import a ground plane exported from Maya. This is going to be your floor, ground or base axis.
- This is a very simple example. The goal is to align the sneaker to the ground. I wish I dealt with such simple lidars at work :)
- Click on the align icon.
- In the align tool window, select the ground object and click on glue here mesh.
- Notice the star that appears before the name of the object indicating that the mesh has been selected as base.
- Select the lidar, photogrammetry or whatever geometry needs to be aligned and click on point based glueing.
- In this little window you can see both objects. Feel free to navigate around; it behaves like a normal viewport.
- Select one point at the base of the lidar by double-clicking on it. Then do the same on one point of the base geo.
- Repeat the same process. You'll need at least 4 points.
- Done :)
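Under the hood, point-based glueing solves for the rigid transform (rotation plus translation) that best maps your picked points onto the base mesh's points, which is why at least 3-4 well-spread correspondences are needed. A minimal sketch of that math, the Kabsch algorithm, in NumPy (not Meshlab's actual code):

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch algorithm); needs at least 3 non-collinear correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Four picked points on the "lidar" and their targets on the ground mesh:
# the targets are the same points rotated 90 degrees about Z and shifted.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = src @ Rz.T + np.array([1., 2., 3.])
R, t = rigid_transform(src, dst)
```

The solver recovers exactly the rotation and translation that separate the two point sets, which is what snaps the scan onto the ground plane.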
What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?
As you probably know, Zbrush doesn't support Ptex. I'm not a super fan of Ptex (but I will be soon), but sometimes I don't have time, or simply don't want, to do proper UV mapping. So, if Zbrush doesn't export Ptex and my assets don't have any sort of UV coordinates, can't I use Ptex at all for my displacement information?
Yes, you can use Ptex.
- In this image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy amount of polygons.
- Now I have imported the model into Zbrush via .obj. Only 500,000 polys, but it looks great.
- We are going to be using Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production ready model.
- Using the Zremesher tool, which is great for some types of retopology tasks, we get this low-res model. Good enough for our purpose here.
- The next step is exporting both models, high and low resolution, as .obj.
- We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
- Once imported keep both of them visible.
- Export displacement maps. Have a look in the image below at the options you need to tweak.
- Basically you need to activate Ptex displacement, 32 bits, the texel resolution, etc.
- To set up your displacement in Maya and Vray, just follow the 32-bit displacement rule.
- And that's it. You should be able to render your Zbrush details using Ptex now.