


Grand Tour: Project Overview. Tags: RMS 19, physically plausible shaders, image-based lighting, gpsurface, glass, volumes, character, hair, fur, primvars, PTEX, vector displacements. In this eight-chapter "How To," guest contributor Leif Pedersen breaks down a production pipeline in RenderMan Studio and RenderMan Pro Server. We will focus specifically on the advantages of RMS and how it can help us decrease setup time without compromising efficient render times. The training is divided into two distinct approaches to rendering, via RenderMan Studio's RIS and REYES rendering architectures, and covers the interactions between RMS and Maya for general rendering workflows, Mudbox and ZBrush for importing texture and displacement maps, and Fusion and Nuke for compositing; previous experience with these applications is therefore helpful. Before we start: to fully understand the training, it is very important to understand color management in RMS.

Spaces: Definitions. From Odwiki. Spaces in VEX: first of all, we need to make a subtle but important distinction. VEX, unlike languages such as RenderMan's Shading Language (RSL), is not just a shading language; it is a generic, multi-context "vector expression" language. Spaces therefore fall into the category of concepts which, while meaningful in some contexts, are either ambiguous or simply not applicable in others. We'll get to what they all mean in a moment, but for now it's important to get used to thinking of these names as simply labels -- they won't necessarily match your idea of what "world space" or "texture space" should mean. Spaces in the shading contexts: there are four predefined spaces for our use inside shaders, plus an unlimited number of user-defined spaces that can also be accessed as arbitrary "named spaces" (or at least that's what I like to call them; it's not an official, SESI-approved name ;-) ). The sections that follow cover World Space, Object Space, NDC Space, and Named Spaces.
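To make the "spaces are just labels" point concrete, here is a minimal Python sketch (not VEX) of what a space transform actually is under the hood: a named 4x4 matrix applied to a point. The `obj_to_world` matrix is a made-up example; in real VEX you look spaces up by name instead of holding matrices yourself.

```python
def xform(p, m):
    """Apply a 4x4 transform (row-vector convention) to a 3D point."""
    x, y, z = p
    h = [x * m[0][i] + y * m[1][i] + z * m[2][i] + m[3][i] for i in range(4)]
    return tuple(c / h[3] for c in h[:3])  # homogeneous divide

# Hypothetical object-to-world transform: translate by (1, 0, 0).
obj_to_world = [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1]]

p_world = xform((0, 2, 0), obj_to_world)   # the same point, new label
```

In VEX the equivalent lookup is by space name, along the lines of `ptransform("space:object", "space:world", P)`; the names are just keys into the renderer's transform table.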

HOUDINI OTL: normals on curve! Hi everyone! Yes, that's true, we are talking about Houdini. I have been a long-time Maya user; I started back in 2006! Here is a quick demo: HOUDINI OTL: normals on curve (with parallel frame transportation) from Marco Giordano on Vimeo. The OTL can be downloaded from here. C&C appreciated. If you want to stay up to date on my tutorials, Maya plugins, Houdini OTLs, and much more, subscribe to the newsletter!
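The "parallel frame transportation" mentioned above can be sketched as follows. This is a generic minimal-twist transport in Python, my own illustration rather than Marco's OTL: at each step, the previous normal is rotated by the same rotation that carries the previous tangent onto the next one (Rodrigues' formula), so the frame never flips or twists arbitrarily.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def unit(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def parallel_transport(points, n0):
    """Carry an initial normal n0 along a polyline with minimal twist."""
    tangents = [unit(tuple(b - a for a, b in zip(p, q)))
                for p, q in zip(points, points[1:])]
    normals = [tuple(n0)]
    for t0, t1 in zip(tangents, tangents[1:]):
        axis = cross(t0, t1)
        s = math.sqrt(dot(axis, axis))
        c = dot(t0, t1)
        n = normals[-1]
        if s < 1e-9:                    # straight segment: frame unchanged
            normals.append(n)
            continue
        axis = tuple(x / s for x in axis)
        ang = math.atan2(s, c)
        kxn = cross(axis, n)
        kdn = dot(axis, n)
        # Rodrigues rotation of the previous normal about the bend axis
        normals.append(tuple(n[i] * math.cos(ang)
                             + kxn[i] * math.sin(ang)
                             + axis[i] * kdn * (1 - math.cos(ang))
                             for i in range(3)))
    return normals
```

On a straight run the normal is simply copied forward; around a 90-degree bend it rotates with the curve, staying perpendicular to the tangent.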

eetu's lab - od[forum] - Page 5. diula, on Feb 17 2009, 10:13 PM, said: "Thanks for the info! Still it's a bit unclear: what do you mean by resimulation? Are you moving the new particles according to the velocity in the previous frame?" The basics are in the original post; quote: "The method I ended up with was simple; the new particles get their velocity from nearby particles and move according to that." The transfer recipients are the new particles, and the sources are the original Houdini-simulated particles in the neighborhood. eetu.
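eetu's description can be sketched roughly like this in Python; the function names and the fixed averaging radius are my own illustration, not eetu's actual setup. Each new particle samples the velocity of nearby simulated particles and then moves along that velocity.

```python
import math

def transfer_velocity(new_pts, sim_pts, sim_vels, radius):
    """Each new particle takes the average velocity of simulated
    particles within `radius`; isolated particles get zero velocity."""
    out = []
    for p in new_pts:
        near = [v for q, v in zip(sim_pts, sim_vels)
                if math.dist(p, q) < radius]
        if near:
            out.append(tuple(sum(c) / len(near) for c in zip(*near)))
        else:
            out.append((0.0, 0.0, 0.0))
    return out

def advect(points, vels, dt):
    """Move each particle along its transferred velocity."""
    return [tuple(c + v * dt for c, v in zip(p, vel))
            for p, vel in zip(points, vels)]
```

In Houdini this is essentially what an AttribTransfer of `v` followed by a per-frame position update does.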

March 2014 VHUG | FX-TD.COM. I presented some FLIP and point-advection material at the March 2014 Vancouver Houdini User Group. The files are here for anybody who was there, or anybody interested: the VHUG file (200-something MB; this has the hip file, the QuickTimes, and one frame of geo from the rest-field setup), the Flip Tricks hip file, and the Pyro Point Advect / Color Volume file. There are two hip files; the second is a setup that uses Gas Advect to advect points along with a high-velocity pyro sim. Flip Tricks (VHUG - March 2014) from Ian Farnsworth on Vimeo.
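The Gas Advect idea, stepping points through a velocity field every timestep, can be sketched like this in Python; the field function and substep count are illustrative, not taken from the hip file above.

```python
def advect_points(points, vel_at, dt, substeps=4):
    """Step particles through a velocity field with forward-Euler
    substeps, the same basic operation Gas Advect performs per timestep."""
    h = dt / substeps
    pts = [tuple(p) for p in points]
    for _ in range(substeps):
        pts = [tuple(c + v * h for c, v in zip(p, vel_at(p)))
               for p in pts]
    return pts

# A stand-in velocity field: constant rise with a slight swirl,
# loosely imitating what you would sample from a pyro sim's vel volume.
def rising(p):
    return (-0.1 * p[2], 1.0, 0.1 * p[0])
```

In the real setup the `vel_at` lookup would be a trilinear sample of the pyro sim's `vel` volume rather than an analytic function.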

Deforming RBDs. A simple setup that uses a SOP Solver and deforms the geo at the points of impact. The impact data is already available in the SOP Solver: the normal is the impact direction, and a pscale attribute carries the magnitude of the impact. In this setup I just used an AttribTransfer to get the impact data onto the mesh, then a VOP SOP to deform the mesh. There is a minimum impact threshold so that resting objects don't get deformed. I also divide the impact magnitude by the mass... the heavy box was denting itself just from resting there. Houdini - Impact deformation for RBDs from Sam Hancock on Vimeo. I'm also going to try getting some volume preservation going!
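Roughly, the deformation step described above might look like this in Python; the names and the linear falloff are my own sketch, not the actual VOP network.

```python
import math

def dent(points, hit_pos, hit_dir, impulse, mass, radius, min_impact):
    """Push points near an impact along the impact direction, scaled by
    impulse / mass, with a linear falloff and a minimum threshold so
    resting contacts don't deform the mesh."""
    strength = impulse / mass
    if strength < min_impact:
        return [tuple(p) for p in points]   # too weak: no deformation
    out = []
    for p in points:
        d = math.dist(p, hit_pos)
        w = max(0.0, 1.0 - d / radius) * strength
        out.append(tuple(c + n * w for c, n in zip(p, hit_dir)))
    return out
```

Dividing by mass is what keeps the heavy box from denting itself: a large resting mass drives `strength` below the threshold even when contact impulses are nonzero.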

Igor Kharitonov | Houdini FX TD | rnd_textureflow_pt1. On our latest feature film I developed a lot of procedural animated water surfaces, from small puddles to huge ocean surfaces; all of it was created using custom procedural noises, the Houdini Ocean Toolkit, and a 2D ripple solver. For a few close-up shots I used FLIP simulations stitched into animated plates. To enhance spatial detail I used texture displacement projected with a simple planar projection. Next I faced the task of creating a large-scale water simulation. After some tests I came to the conclusion that I could not avoid using texture displacement, but simple planar projection mapping with FLIP has a severe drawback: the texture doesn't follow the underlying movement. Methods of creating advected flows are widely discussed; in Houdini the idea is the same as when creating dual rest fields in a smoke simulation. With these improvements I managed to enhance the spatial detail of the liquid flow without increasing the resolution of the FLIP simulation.
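The dual-rest-field trick mentioned above advects two sets of texture coordinates with the flow and resets them half a cycle apart, cross-fading between them so neither reset is ever visible. A minimal sketch of the blend weights in Python; this is my own illustration of the general technique, not Igor's setup.

```python
def dual_rest_weights(t, period):
    """Blend weights for two rest fields reset half a `period` apart.
    A field's weight ramps to zero just before its own reset, so the
    pop back to undistorted coordinates is hidden by the other field."""
    phase = (t % period) / period        # position within rest A's cycle
    w_b = abs(2.0 * phase - 1.0)         # triangle wave for rest B
    return 1.0 - w_b, w_b                # (weight of A, weight of B)
```

The displaced texture is then `w_a * tex(restA) + w_b * tex(restB)`, with each rest field advected through the FLIP velocity every frame.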

Mantra VS Mantra VS Etc... Recently I have begun to evaluate and update our rendering pipeline to work with H11, in particular to look at perhaps starting to use PBR and photon maps for indirect lighting. A bit of background: our current method has been to use the gather raytracing mechanism to look for specific exports as opposed to Cf (for control and optimization purposes). This has been successful so far in matching (H10) PBR renders and speed with the regular Raytrace renderer, and was faster when optimizations were in use (max distance, object scopes, not gathering expensive stuff, etc.). It's amazing that the flexibility to do this stuff exists in Houdini, and although I have a great deal of fun doing it, I'd rather Mantra had V-Ray- or Modo-like lighting accelerators to begin with. My test environment started as a Cornell box... apologies if it isn't the most pleasing model; it was totally random. There is no hope for using them directly as a sort of light bake.

Additive and Subtractive Color Mixing. The world is full of colors; some researchers report that humans can distinguish about 16 million different ones. What's more interesting is that most of the colors we see around us, and all the colors we see on a TV or computer monitor, can be created from just three different colored lights. How are all the colors made from just three? Simply by combining the light in different ratios. Colored lights mix according to additive color properties; the additive primary colors are red, green, and blue (RGB). By varying the brightness of each of the three primaries, you can make a wide range of colors, and computer monitors and televisions are an application of additive color. Subtractive color mixing is another way to make colors: the subtractive primary colors are cyan, magenta, and yellow (CMY). But how do we print something that's red?
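Both mixing models reduce to very simple arithmetic; a quick Python sketch:

```python
def add_lights(*rgbs):
    """Additive mixing: channel-wise sum of light, clamped to 1."""
    return tuple(min(1.0, sum(c)) for c in zip(*rgbs))

def rgb_to_cmy(rgb):
    """The subtractive primaries are the complements of the additive ones:
    each ink absorbs exactly one of the RGB light components."""
    return tuple(1.0 - c for c in rgb)

add_lights((1, 0, 0), (0, 1, 0))   # red + green light gives yellow
rgb_to_cmy((1, 0, 0))              # printing red takes magenta + yellow ink
```

The second call answers the question above: red on paper means zero cyan, full magenta, full yellow, so the inks together absorb green and blue and reflect only red.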

Volume Workflows. The multiScatter parameter is used as a hint as to whether the volume should compute indirect illumination inside the volume (also known as multiple scattering, because light will scatter more than once inside the volume). If multiScatter is set to 0 and the integrator respects this hint, PxrVolume will only perform single scattering: points inside the volume will only be lit directly by light sources. If set to 1, points inside the volume will be lit by indirect illumination as well. For very dense volumes with high anisotropy, light will often scatter many times inside the volume before reaching the eye, and multiple scattering is the only way to achieve the correct look. It is also often the only way to correctly render certain effects such as volume caustics.
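As a sketch of where this hint lives in a scene description, a hypothetical RIB fragment; the handle name "smoke" and the anisotropy value are illustrative, not from the source:

```
Bxdf "PxrVolume" "smoke"
    "int multiScatter" [1]
    "float anisotropy" [0.8]
```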

portfolio - Fancy 3D Houdini Tutorials | geneome. All the video tutorials were originally created using the XviD codec at varying resolutions, but I have since moved them to Vimeo. Please keep in mind that these videos are not intended to be professional; I just hit the record button and start talking... no scripts, just showing things I think would be of interest. Naming scheme: Tutorial Name (key topics, if any) (Time [Minutes:Seconds]) - Version Used. Compositing: Image Planes: Image Plane Video Series Files. Particles: Emitting Particles Using A Bounding Object And Its Normals (the post). Dynamics: Using dophassubdata To Change Colors Dynamically (7:42) - 9.5.146 (the post). General Tips: Tips on Organizing Operators (04:01) - 9.1.179 (the post).

BW Design Houdini Video Tutorials — Arthur Yidi Updated: January 15, 2015 First time learning? Start Here and here. Houdini Quick Start Houdini User Guide Houdini First Steps Houdini 14 Features Houdini Projects (New Series) Houdini Procedural Animation Houdini Procedural Modeling Houdini Light Shade and Render Houdini Masterclass Houdini Digital Assets Houdini Off the Shelf Houdini Engine for Maya Houdini Engine for Unity Houdini Extras Houdini HScript Expressions Houdini Pyro FX User Created Rohan Dalvi Yancy Lindquist Andrew Lowell (Lost Boys) Kim Goossens Peter Quint (PQ) PQ Basics PQ Animation PQ CHOPs PQ Compositing PQ Dynamics PQ Particles PQ Rendering PQ SOPs