Substance Designer - Physics Simulator

General / 23 June 2025

Substance Designer is my favourite tool to work with; that won't come as a surprise to anyone, me being a Material Artist. I've crafted some powerful tools in the past to improve workflows and get cool results quickly, but if you have worked with Designer in a more advanced way, you already know that the Pixel Processor and the FX Map are "the" nodes.

The Pixel Processor allows you to run simple arithmetic on each pixel of an image and output the result. There are some wizards out there doing a lot of crazy things with it. Andrei Zelenco is a good example: he has a YouTube channel full of incredible tutorials and introductions to this world.

Before starting my career as a 3D artist, and later as a Material Artist, I began my journey as a programmer, so I have a special place in my heart for fun yet useless coding challenges.

This started just as a proof of concept of what is possible to do with the Pixel Processor and data management. To complete it, I had to come up with clever solutions to many different problems, and it has been the most instructive exercise I've ever done in Substance Designer. I encourage anyone with a basic knowledge of vectors to do their own implementation of this topic; you will surprise yourself with what you can achieve!

But that's enough of an introduction. I decided to share my process for developing this simple simulator, not only to show my work but also as an introduction for anyone willing to learn more about the Pixel Processor.

I won't be giving any introduction to the laws of physics behind this, since they are quite basic, and I assume anyone reading this post has at least a basic knowledge of kinematics and mid-level math. This post will focus only on the Substance Designer implementation.

This is the complete overview of the main graph. We will go through each part of this to unravel the magic behind the beautiful colors.

The first thing to discuss here is how we are going to manage data and variables. For this to work, there are at least 3 basic components we need to keep track of: Position, Velocity, and Acceleration. Substance Designer isn't great at sharing data between nodes, and the best practice is to use images for it. The problem is: how do we encode variables in an (X, Y) format into images? The answer is simple but tricky. We need to encode those values into the RGBA channels.

A very critical thing to remember here is that I keep all my variables as 1x1 images with an output format of HDR High Precision (32F), so my data isn't limited by the 256 values (8 bits) of each RGBA channel, and, most importantly, Substance will be able to work with negative values inside the Pixel Processor and output them for visualization and debugging.

We will start with the simplest of the variables, the Position. For simplification purposes, I will treat this variable as an int2, but keep in mind that we will still encode it as a float2; we just won't use the decimal part of the variable.

For this project, I'm assuming a 1024x1024 grid. Since a single 0-255 channel can't hold that range, we need to encode each coordinate using a high and low byte encoder: the RG channels will store the X value and BA will store Y. We can encode our coordinates into the colors using the following formula:

X' = X - 1
Y' = Y - 1

Since we want to make this as user-friendly as possible, we want the user to pass coordinates with values from 1 to 1024, but the encoder works much better with values from 0 to 1023, so we first need to subtract 1 from each value. This can be avoided if we pass zero-based coordinates from the beginning, but that wouldn't be as clean.

Now the fun part:

R = X' % 256
G = floor(X'/256)

B = Y' % 256
A = floor(Y'/256)

We are just using G to store how many whole multiples of 256 fit into X' (the high byte), and R to store the remainder from 0 to 255 we need to add to that to match X' (the low byte), and I do the same for the Y' values in BA. If this is tricky, try searching for "high and low byte", and it should give you all the information you need!
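
To make the formulas concrete, here is a minimal Python sketch of the same encoder and its inverse (the function names are mine, not nodes in the graph):

# Minimal sketch of the high/low byte position encoding described above.
# Assumes user coordinates in the 1..1024 range; channel values stay in 0..255.
def encode_position(x, y):
    xp, yp = x - 1, y - 1        # shift 1..1024 into 0..1023
    r, g = xp % 256, xp // 256   # low byte and high byte of X'
    b, a = yp % 256, yp // 256   # low byte and high byte of Y'
    return r, g, b, a

def decode_position(r, g, b, a):
    # high byte * 256 + low byte rebuilds the coordinate; then undo the -1 shift
    return g * 256 + r + 1, a * 256 + b + 1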


This is the Pixel Processor implementation.

For the Velocity and Acceleration this trick doesn't work anymore: we need floating-point numbers for decimal precision, and we need negative values, since a velocity and an acceleration can both be negative.

In that case, we are using a variation of the same approach, but this time with 16 bits of data.

valueNormalized = (value + maxValue) / (2 * maxValue)
value16bits = round(valueNormalized * 65535)

value is our input, and maxValue is the biggest magnitude expected. In my implementation I decided to use 32 as the max value; at first this was dictated by the 256 values (8 bits) available in each RGBA channel, but I later fixed that limitation by changing the Output Format of the Pixel Processor to HDR High Precision (32F). 65535 may seem like an arbitrary number, but it's just the maximum value representable in 16 bits.

After this, we can do:

lowByte = value16bits % 256
highByte = floor(value16bits / 256)

This is exactly what we did with the position, so lowByte will be R and highByte G, and we have our first component encoded. We can do the same for BA, and that way we can encode X -> RG and Y -> BA.
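
In the same sketch style as before (MAX_VALUE and the function name are my own choices for illustration):

MAX_VALUE = 32.0  # biggest magnitude expected for velocity/acceleration

def encode_component(value):
    # map [-MAX_VALUE, MAX_VALUE] into [0, 1], then into a 16-bit integer
    value16 = round((value + MAX_VALUE) / (2 * MAX_VALUE) * 65535)
    low, high = value16 % 256, value16 // 256  # -> R and G (or B and A for Y)
    return low, high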

If you pay attention to my implementation, you can see that I multiply the Y axis by -1 before encoding, which is once again for a user-friendly experience. In Substance Designer, as in most image processing software, increasing the Y value moves things down the image, because the origin of the image is in the top-left corner. Once again, this step is not needed if you prefer to avoid it. But I wanted my gravity to be -9.8, just like Earth's.

Now we have our values encoded in a pixel image! But in order to read those values, we have to sample that image back and decode it. We just need to apply our math formulas in reverse, and we will get our original values back.
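
For the 16-bit scheme, the reverse step looks roughly like this (again a sketch, matching the encoder above):

def decode_component(low, high):
    # rebuild the 16-bit integer, then undo the 0..1 normalization and offset
    value16 = high * 256 + low
    return (value16 / 65535.0) * (2 * MAX_VALUE) - MAX_VALUE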

This is the Pixel Processor that updates the velocity based on the acceleration. As you can see on the left, I decode the image values, and once they are back to simple float numbers I can just add them normally. For a more physically accurate result, we could multiply each axis by 0.033, since each frame of a 30 FPS video takes 33 ms, but I wanted this simulation to be faster, so I skipped that here. That said, I do keep the 0.033 multiplication for the position update, since it gives a smoother result and is also better for collision detection, because each frame takes a smaller step.
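
Put together, one frame of the update amounts to something like this sketch (in the actual graph, the decode/encode steps wrap around it):

DT = 0.033  # roughly one frame at 30 FPS; applied only to the position here

def step(pos, vel, acc):
    # velocity update: no DT factor, so the simulation runs faster
    vel = (vel[0] + acc[0], vel[1] + acc[1])
    # position update keeps DT for smoother motion and smaller collision steps
    pos = (pos[0] + vel[0] * DT, pos[1] + vel[1] * DT)
    return pos, vel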


I think it's time to finally render something on the screen. In this case, I used the FX Map for that purpose. This node is known for its ability to iterate, subdivide, and apply different algorithms to an image or pattern, but here I will use it to just render a circle. For that, I feed my position (after updating it with the new velocity) as a single input into my FX Map. I will use a Disc pattern and a size of 0.025, but any configuration here is valid, since it is just a visualization and will not affect any calculations.

The tricky part with the renderer comes with the position. We can use the Pattern Offset here, but it uses a value of -0.5 to 0.5 on each axis, and our position is encoded on a 1024x1024 grid. We can solve this very easily, though! Just use an empty function as the Pattern Offset, normalize our position to a value from 0 to 1, and then subtract 0.5 to properly adjust the coordinate system to the FX Map offset.
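
In code form, the offset function reduces to this sketch (assuming the 1024x1024 grid):

GRID = 1024.0

def pattern_offset(x, y):
    # normalize the grid position into 0..1, then shift it into the
    # -0.5..0.5 range that the FX Map Pattern Offset expects
    return x / GRID - 0.5, y / GRID - 0.5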

With this, the only movable part of our simulation has been rendered; we can just add it to the collision visuals using a Blend node and output the rendered frame.

Now if we input an acceleration of (0, -9.8) we have a super cool falling ball!

This is cool enough, but a little work on a collision system would be even cooler. The first thing we can do is quite simple, actually: just input a collision map with the same dimensions as our render (1024x1024) and decide that any black color is air and white is a collision. This is cool because we can now sample the collision texture at our position: if the pixel at that position is black, there is no collision, but if it's white, we collided with something! And this is a fancy boolean.
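
As a sketch, this "fancy boolean" is nothing more than a lookup and a threshold (collision_map here stands in for the sampled image, as a 2D array of 0/1 values):

def is_collision(collision_map, x, y):
    # sample the mask at the (integer) pixel position; treat > 0.5 as true,
    # since the map is black (0 = air) and white (1 = solid)
    return collision_map[int(y)][int(x)] > 0.5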

Just as a test, we can add a new Pixel Processor before the one that adds the acceleration to the velocity to check this: with the "boolean" pixel image as input, if the pixel's R value is greater than 0.5, then we have a collision, and we can multiply the Y velocity by -0.8 for a cool bouncing effect! I won't be showing these nodes here because, as you may have guessed, that's really bad collision detection: it only works for a perfectly vertical bouncing ball.

In a classic physics simulator we would be able to read the normal direction of the vertex/face we are colliding with, or, if the collider is a pure mathematical object, it would be made out of vectors and we could therefore extract a perpendicular vector from it. But since we only have an image full of pixels to work with, we have to come up with a clever solution.

My solution may not be perfect, but I decided to use the derivative of the collision image to map the edge of the collider and get the "normal vector". The derivative of any function allows us to study the change in its values, and since our collision map is black and white, we have a sharp change of values to work with. For this implementation, I used an algorithm by Andrei Zelenco that I learned while watching his advanced simulation tutorial video. It looks like this:

It's important to mention that it is crucial to have this output format configured as HDR High Precision (32F), so that it is able to compute negative numbers here, even if the output image looks plain black; all negative values render as black.
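
The exact network is Zelenco's, so I won't reproduce it here, but the core idea boils down to a finite-difference gradient of the mask, something like this sketch (image borders ignored for brevity):

def edge_gradient(mask, x, y):
    # central differences on the black/white mask; the gradient points from
    # black (air) toward white (solid). The negative components that show up
    # here are exactly why the 32F output format matters.
    dx = mask[y][x + 1] - mask[y][x - 1]
    dy = mask[y + 1][x] - mask[y - 1][x]
    return dx, dy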

This is the result. As you can see, we now have a vector map along the edge. This is great, and we can blur it a bit just to fill some extra space in case our simulation extends past the boundary a little.

Now, with this vector map, we can take the current position and add the velocity to see if the ball will collide with anything this frame, and check the derivative color of that pixel to get the "normal vector".


I will also do this with the black-and-white collision map, so I can get a "boolean" to check if there is a collision; with that, collision detection is done.

And now, for our final node, we just need to check if the boolean value is true (again, we check if the R component is greater than 0.5). If so, we get the normal vector, normalize it to get the direction, invert the direction by multiplying it by -1, and, if we want some kind of energy loss, multiply the X and Y components by a number lower than 1 (or bigger than 1 if we want to gain energy).
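
Read literally, that response looks something like the sketch below (DAMPING is my name for the energy-loss factor; a full reflection, v' = v - 2(v.n)n, would be the more physical alternative):

import math

DAMPING = 0.8  # < 1 loses energy on each bounce; > 1 would gain energy

def respond(vel, grad):
    # normalize the sampled derivative to get a direction, then invert it so
    # it points away from the surface (assumes a non-zero gradient was sampled)
    length = math.hypot(grad[0], grad[1])
    nx, ny = -grad[0] / length, -grad[1] / length
    # send the ball back along that direction, reusing its current speed
    speed = math.hypot(vel[0], vel[1])
    return nx * speed * DAMPING, ny * speed * DAMPING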

With this done, we are now ready to get some collisions!


Now you may be wondering: how am I exporting those videos? Well, we can encapsulate everything we have done in a new graph with custom inputs, and then chain many copies of it to output each frame.

And with this done, now we can just have fun!


Thank you for reading (or for scrolling to see the images and skipping all the text), and I hope you have learned something!

You can get the FREE version of this as a learning resource at my store: https://www.artstation.com/a/48288816



