I built a platform to 3D scan fruits and food in general. This dragon fruit is my first test. More to come.
And this is the platform that I built to scan food, vegetables and other assets.
One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. When working across different software like 3DEqualizer, Nuke, Flame, etc., having a common, standard way to copy lens information is a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.
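As a sketch of the idea (not tied to any particular package): an undistorted "identity" ST/UV map simply stores each pixel's own normalized coordinate in the red and green channels, and a lens-distortion map is the same image with the lens model baked into those values. A minimal pure-Python version of the identity map, before it would be written out as a 32-bit EXR:

```python
def identity_st_map(width, height):
    """Build an identity ST/UV map as a height x width grid of (u, v) pairs.

    Pixel (x, y) stores its own normalized coordinate, sampled at the
    pixel center: u = (x + 0.5) / width, v = (y + 0.5) / height.
    A lens-distortion map is this same image with the distortion baked
    into the stored coordinates; any package that reads 32-bit EXRs can
    then use it to warp or unwarp footage.
    """
    return [[((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)]
            for y in range(height)]

st = identity_st_map(4, 2)
print(st[0][0])  # first pixel center: (0.125, 0.25)
```

In production you would write these values into the R and G channels of an EXR (with a library such as OpenEXR's Python bindings) rather than keeping them as Python lists; the grid above is just to show what the map encodes.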
I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was recreating the environment in terms of shape and volume, and then project HDRIs on the geometry. Doing this we can get more accurate lighting contribution, occlusion, reflections and color bleeding. Much better environment interaction between 3D assets. Which basically means better integrations for our VFX shots.
I tried to make it as simple as possible, spending just a couple of hours on location.
A couple of HDRI maps that I shot at Tate Modern last week.
You can download them for free in the akromatic website.
Concept for an installation. More to come.
Each camera works a little bit differently regarding the use of the Promote Control System for automatic tasks. In this particular case I'm going to show you how to configure both the Canon EOS 5D Mark III and the Promote Control for use in VFX look-dev and lighting image acquisition.
These were sent to me by my friend and former workmate Ramón López, and programmed by Pilar Molina during the production of the short film Shift.
One of the scripts adds Arnold subdivision to all the objects in the scene, and another adds the same property only to the selected objects. Finally, there is another handy script that substitutes all the textures in your scene with their equivalent .tx textures.
Download them here or here.
Thanks Pilar and Ramón.
As you probably know, Arnold manages subdivision individually per object; there is no way to subdivide multiple objects at once. Obviously, if you have a lot of different objects in a scene, going one by one adding Arnold's subdivision property doesn't sound like a good idea.
This is the easiest way that I found to solve this problem and subdivide tons of objects at once.
I have no idea at all about scripting, if you have a better solution, please let me know :)
/* Select all the objects you want to subdivide (this doesn't work with
   groups or locators). Once the shapes are selected, just change
   aiSubdivType and aiSubdivIterations in the Attribute Spread Sheet. */
pickWalk -d down;                      // walk from the transforms down to their shape nodes
string $shapesSelected[] = `ls -sl`;   // store the selected shapes
Just a few more screenshots and renders of the last photogrammetry stuff that I've been doing. All of these are part of some training that I'll be teaching soon. Get in touch if you want to know more about it.
Clarisse is perfectly capable of rendering volumes while maintaining its flexible rendering options, like instances or scatterers. In this particular example I'm going to render a very simple smoke simulation.
Start by creating an IBL setup. Clarisse allows you to do this with just one click.
Using a couple of matte and chrome spheres will help to establish the desired lighting situation.
To import the volume simulation just go to import -> volume.
Clarisse will show you a basic representation of the volume in the viewport. Always real time.
To improve the visual representation of the volume in the viewport, just click on Progressive Rendering. Lighting will also affect the volume in the viewport.
Volumes are treated pretty much like geometry in Clarisse. You can render volumes with standard shaders if you wish.
The ideal situation, of course, would be to use volume shaders for volume simulations.
In the material editor I'm going to use a utility -> extract property node to read any embedded property in the simulation. In this case I'm reading the temperature.
Finally I drive the temperature color with a gradient map.
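The remap that the gradient map performs can be sketched in plain Python (names here are illustrative, not Clarisse's API): normalize the temperature into [0, 1], then linearly interpolate between two gradient stops.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b by t in [0, 1]."""
    return a + (b - a) * t

def temperature_to_color(temp, t_min, t_max,
                         cold=(0.0, 0.0, 0.1), hot=(1.0, 0.6, 0.1)):
    """Map a temperature value to an RGB color via a two-stop gradient.

    This mirrors what a gradient (ramp) does when driven by the
    extracted 'temperature' property of a volume: normalize the
    scalar into [0, 1], clamp, and blend between the two stops.
    """
    t = (temp - t_min) / (t_max - t_min)
    t = max(0.0, min(1.0, t))           # clamp out-of-range temperatures
    return tuple(lerp(c, h, t) for c, h in zip(cold, hot))

print(temperature_to_color(0.0, 0.0, 1000.0))     # coldest -> first stop
print(temperature_to_color(2000.0, 0.0, 1000.0))  # clamped -> second stop
```

A real gradient node would support many stops and per-channel curves, but the principle is the same: the extracted property drives the position along the ramp.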
If you get a lot of noise in your renders, don't forget to increase the volume sampling of your light sources.
Final render.
I'm generating content for a photogrammetry course that I'll be teaching soon. These are just a few images of that content. More to come soon, I'll be doing a lot of examples and exercises using photogrammetry for visual effects projects.
This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you browse the "Clarisse" tag you will find all the previous posts. I hope to publish more soon.
In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons very fast and nicely.
On behalf of akromatic.
We are shipping our new 3/8 adaptors that can fit all of our Lighting Checker handles. This is the best way to attach any of our Lighting Checkers individually to any standard 3/8 professional tripod.
This adaptor is included when purchasing Lighting Checker "Mono" from our online store.
If you need to buy additional adaptors for other kits or other purposes, you can buy them as well in our store.
These 3/8 adaptors are made of high quality aluminium.
I've been doing a lot of photogrammetry stuff recently, can't show much yet but I will soon.
These are just a few tests that I did to get comfortable scanning small props.
In order to improve our custom plate solutions for attaching akromatic spheres to your tripod, we came up with the akromatic adaptor, which allows you to attach all of our spheres and carbon fibre handles to any tripod with a standard 3/8 mount.
We'll be sending this adaptor with our akromatic kits very soon.
See it in action.
Visit akromatic.com for more information about this product.
If you deal a lot with 3D scans, Lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" piece of software is great at managing 75-million-polygon Lidars and other complex meshes. Experienced Photoscan users usually rely on the align-to-ground tool to establish the correct axis for their resulting meshes.
If you look for this option in Meshlab you won't find it; at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.
What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?
As you probably know, Zbrush doesn't support Ptex. I'm not a huge fan of Ptex (though I may become one), but sometimes I don't have the time, or simply don't want, to do proper UV mapping. So if Zbrush doesn't export Ptex and my assets don't have any UV coordinates, can't I use Ptex at all for my displacement information?
Yes, you can use Ptex.
This is a very quick demo of how to install (on Mac) and use the gizmo mmColorTarget, or at least how I use it in my texturing, reference and lighting process. The gizmo itself was created by Marco Meyer.