Art Directing Signals

Signals, as a general concept, are seen all the time in computer graphics. A signal is a function that conveys information about a phenomenon. Information and function are the key words here: information being data, and a function being something that maps an input to an output.

When computers are used to generate 3D art, information is everything. We manipulate data in order to get an intended visual result. This manifests in the form of motion, rendered images, physics simulations, attributes used to procedurally manipulate and model geometry, etc. Anything you can think of in 3D is data-driven at its core, and as a technical, shading, or VFX artist, being able to manipulate signals is one of the most important toolkits at your disposal. Once you open this toolbox, you will begin to realize the endless amount of customization you have, and that, in and of itself, is a superpower. In my humble opinion, of course.

What are some examples of signals that you’ve worked with, sometimes even on a regular basis?

Animation curves! They are functions of time, and they can represent a variety of meaningful transformations through different outputs. For example, you can directly animate the position of an object, or, if you are working with simulations, you can also affect velocities, accelerations, etc. It’s nice to work directly with art-directable curves, but you can also drive motion procedurally by using equations or by working with simulations/custom solvers.
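
To make the “function of time” idea concrete, here is a minimal sketch of evaluating a keyframed curve. The keyframe data is made up, and real DCCs interpolate with Bezier handles rather than the plain linear interpolation used here:

```python
# A keyframed animation curve is just a function of time:
# given t, find the surrounding keys and interpolate between them.
keys = [(0.0, 0.0), (1.0, 2.5), (2.0, 1.0)]  # (time, value) pairs -- made-up data

def evaluate_curve(keys, t):
    # Hold the first/last value outside the keyed range.
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    # Find the segment containing t and linearly interpolate.
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 * (1.0 - u) + v1 * u

print(evaluate_curve(keys, 0.5))  # 1.25
```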

Textures! Images are signals: functions of UV coordinates (or any arbitrary coordinate system) that output a color per pixel. When a computer renders this in 3D, it applies a lot of filtering so that the resolution of the geometry, of the texture, and of the output rendered image all work together properly. This comes up everywhere, from shader graphs/code in a game engine or 3D software, to touching up photos in Photoshop, Procreate, etc. Textures in general do not strictly have to be tied to the surface of a 3D model/object; they are simply something you can sample using spatial coordinates.
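
As a sketch of “a texture is a function of coordinates,” here is a tiny bilinear sampler in Python/NumPy, using a made-up grayscale image. Real renderers layer mipmapping and anisotropic filtering on top of this basic idea:

```python
import numpy as np

# A texture is a function: (u, v) in [0, 1]^2 -> value.
tex = np.array([[0.0, 1.0],
                [1.0, 0.0]])  # a made-up 2x2 grayscale image

def sample_bilinear(tex, u, v):
    h, w = tex.shape
    # Map UVs to continuous pixel coordinates (pixel centers at +0.5).
    x = np.clip(u * w - 0.5, 0.0, w - 1.0)
    y = np.clip(v * h - 0.5, 0.0, h - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels.
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

print(sample_bilinear(tex, 0.5, 0.5))  # 0.5, halfway between the four texels
```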

How do we get from the texture on a model all the way to the final color of a pixel on a screen? A lot happens! From a texture, to sampling that texture based on the UV coordinates of a model, to applying the relevant shading models. Physically based, stylized, a mixture of both: a lot of creativity can happen here!

https://www.pokitec.com/research/DynamicTextureAtlasAllocation.html


A more general, fundamental idea is that as a 3D artist, you work with attributes. Attributes are data stored on geometry, and they are involved in almost everything you do as a 3D artist, even if you are not directly aware of it. For example, skin weights are painted onto the vertices of a mesh; they correspond to the bones of a rig, which guide the deformation of the mesh for the animator. If you are a modeler, you will want to work with attributes on geometry to construct a set of rules for mesh generators (i.e. procedural houses, foliage, etc.). As an environment artist, you may scatter instanced geometry across a scene and give each instance a variety of different attributes, for example a uniform scale or rotation, in which case the attribute is stored for the entire instance as a whole.

A more obvious example of an essential attribute is the position of the points that define a geometry. Each vertex on a mesh has a defined position that gives it visual meaning. Similarly, normals are defined on the vertices so that light can shade the model properly.
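
In code, attributes are often just parallel arrays with one entry per element. A minimal sketch (NumPy, made-up data) of per-point positions plus a custom per-point scalar driving a deformation:

```python
import numpy as np

# Geometry as attribute arrays: one entry per point.
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])   # point positions (the essential attribute)
weight = np.array([0.0, 0.5, 1.0])        # a custom per-point scalar attribute

# Attributes are signals over the geometry: here the weight
# attribute drives a per-point displacement along +y.
positions[:, 1] += 0.1 * weight
print(positions)
```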

There is so much to explain in terms of attributes and what a 3D artist can do with them that it warrants a whole separate discussion and set of resources. But today we will focus on processing these attributes and thinking of them as an input signal.

In general, the goal is to modify signals, the various attributes you define in a 3D scene, via a huge toolkit of filtering methods. Filters modify a signal to get a desired visual result; we accomplish this by designing algorithms that modify signals. An easy way of thinking about this is an image filter: for example, you can take the negative of an image by inverting every pixel.
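
For the image-negative example, the filter is a single operation over a normalized image (a NumPy sketch with made-up data):

```python
import numpy as np

image = np.random.rand(4, 4, 3)  # a made-up normalized RGB image
negative = 1.0 - image           # the "negative" filter: invert every channel
```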

You may modify movement of an object via animation curves and splines.

You can also use functions/mathematical equations to filter an input signal. Here is one extremely simple example. An object is moving along the x direction according to some function f(x). But… it’s too far to the left in our composition. If we define the right to be the positive x direction, then we can simply define a function g(x) = x + c to shift the animation to the right. So, if we run f(x) through g, we have successfully filtered the animation to behave how we want it to: g(f(x)) = f(x) + c.
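
That composition is literally one line of code; a sketch with a made-up motion f:

```python
import math

f = lambda t: math.sin(t)      # some made-up motion along x
c = 2.0                        # how far to shift to the right
g = lambda x: x + c            # the filter
shifted = lambda t: g(f(t))    # g(f(t)) = f(t) + c

print(f(0.0), shifted(0.0))    # 0.0 2.0
```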

You might think, duh, this is obvious. I’m being didactic for a reason: you should get into the habit of thinking about signals in terms of general functions. Later down the line I will explore a whole toolkit of elegant functions that you can mix, match, and combine in order to build up incredibly complex behavior.

But why bother doing it in functions if you can just use animation curves, you ask?

There are pros and cons to both approaches.

Animation curves:

Pros: Maximum artist control for heavily nuanced motion. Almost required as the major element of control for things that are the focal point of an animation.

Cons: The more manual adjustments you make, the harder it is to make broad adjustments further down in production.

Mathematically defined curves and filtered curves through functions:

Pros: Once you get good at it, you can construct complex curves that are extremely flexible. Quick to iterate.

Cons: Steep learning curve; can be unintuitive, and it is hard to construct curves for extremely specific motion.

This is pretty much the same argument as non-destructive (procedural) vs. destructive workflows. After scouring the internet, I think there are simply not a lot of people talking about signal processing from an artist’s perspective. So here we are: I am trying to be an extra resource that provides a bit more technical know-how, without the rigorous math people would otherwise have to learn to work with this concept from scratch.

Okay, time to jump into the good stuff!


Going back to the definition of a signal: a function that conveys information about a phenomenon. In computer graphics, numbers are supreme. So we want to process and filter signals so that they behave to our liking.

One important idea to understand with signals is that some are discrete, others are continuous.

A discrete signal is one that is defined by a countable number of samples; a continuous signal is defined at every point in its domain. A rendered image is discrete (a finite grid of pixels), while the mathematical function behind a procedural texture is continuous.

What makes the most sense to me, and what I think is extremely important for a technical artist, is understanding the range and distribution of a signal.

Range describes the difference between the minimum and maximum of a signal. Range, along with the minimum and maximum themselves, is incredibly important to understand: when working with a signal in CG, only certain ranges of values are relevant depending on your context.

Distribution describes how likely the signal is to output certain values given its inputs. You can visualize the distribution as the shape of the output values, and the easiest way to examine it is via a histogram.
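
A quick way to inspect range and distribution in practice (a NumPy sketch with a made-up signal):

```python
import numpy as np

signal = np.random.normal(loc=0.5, scale=0.15, size=10_000)  # made-up signal

print("min:", signal.min(), "max:", signal.max(),
      "range:", signal.max() - signal.min())

# The histogram is the distribution: how often each output value occurs.
counts, bin_edges = np.histogram(signal, bins=10, range=(0.0, 1.0))
print(counts)  # counts bulge in the middle bins for this bell-shaped signal
```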

Frequency in this context represents how often an event happens, or rather, how often an “output” occurs given the respective input along the x axis. For a theoretically perfect random variable X where all events are equally likely, we should expect a flat distribution, where the frequency of each event is the same no matter the input x.

You don’t always have to think of frequency in terms of an event, but rather an output in general. And models don’t have to be based on a single scalar input x; they can take a vector of multiple inputs x = (x_1, x_2, …, x_n). For example, using a noise texture that depends on position coordinates (which can span multiple dimensions across different axes) to drive, say, the size of scattered objects.

Let me give some context with an example: an image texture. There are a million ways we can deconstruct the signal of an image and the information it is trying to represent. Let me use all the terminology I have defined thus far.

In general, colors are broken down into normalized components. Color is a vector, most commonly represented with red, green, and blue components, and these are typically normalized, meaning each component has a range of 0-1, where 0 means zero intensity (the channel is black) and 1 means full intensity. This describes the range of a color’s definition.

Suppose you take two pictures of the same outdoor scene, but at different times of day: one at midday, the other at sunset. The picture taken around noon has a bright blue sky, with tons of green and a tad of yellow. Concrete in shadow is lower in intensity, but hue-shifted towards blue because the blue sky is lighting those areas. Areas in light that are not overly reflective are yellow-tinted; the cooler wavelengths of light (blue, violet) were scattered through the air, so the light that reaches the surface is more yellow. Compare this to the scene at sunset: the overall intensity of the image is lower, the light source is fading. We still have a blue sky, but it shifts towards warmer, redder colors as the sun’s light is scattered over a longer path through the air. I’m rambling, but through this vocabulary we can pinpoint the distribution of the different visual elements related to color.

An image is a reconstruction of reality defined on a screen. The real world has effectively infinite resolution through our eyes, while an image is composed of a discrete grid of pixels. We can only capture as much of reality as the resolution of that grid allows.

For example, one can represent a circle of infinite resolution via a formula: r^2 = x^2 + y^2 (assuming we are working in two dimensions). Say that we color every pixel inside this circle black, and every other pixel white. Our discrete image will never be able to perfectly represent the circle, but it can get pretty darn close. In fact, there are ways to quantify how we sample one signal to create a new signal or representation in order to ensure we do not lose any information. I will go further into this later.
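
A sketch of that discrete reconstruction in NumPy: test each pixel center against x^2 + y^2 <= r^2, and watch the stair-stepping shrink as the resolution grows:

```python
import numpy as np

def rasterize_circle(resolution, r=0.8):
    # Pixel centers over [-1, 1]^2; inside the circle -> 0 (black), else 1 (white).
    coords = (np.arange(resolution) + 0.5) / resolution * 2.0 - 1.0
    x, y = np.meshgrid(coords, coords)
    return np.where(x**2 + y**2 <= r**2, 0, 1)

# Higher resolution approximates the ideal circle better, but never perfectly.
print(rasterize_circle(8))
```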


There are some bread and butter tools when it comes to adjusting the range and distribution of a signal. Both matter: range describes the breadth of whatever you are conveying with your signal, and distribution describes its variety and contrast. At least, that’s how I would describe them in qualitative, artistic terms.

In some cases you may want a small or large range. Maybe an image only covers a small spectrum of grayscale values, and you want to extend it to much darker or lighter values by adjusting the range. Or you may want the darks of the image to become much darker without affecting the lights, in which case you would change the distribution via levels.

There are extremely simple manipulations you can do to adjust the range. For example, take the function of a given signal and add a constant (like we did before); this raises the min and max of that signal by the constant. You can multiply by a constant to stretch or compress the range. Or combine multiplication and addition, which can be packaged as a special function that I will mention in a moment.

There are two basic functions which I think are essential.

Ramp is one of them. A ramp controls the distribution of a signal without affecting the range. This separation is super nice, and ramps are extremely artist friendly because they use art-directable interpolated curves (i.e. Bezier curves, Catmull-Rom, etc.).
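
As a sketch, a piecewise-linear ramp is just a lookup curve from [0, 1] to [0, 1]; DCC ramps use smoother interpolation (Bezier, Catmull-Rom), but the idea is the same. Control points here are made up:

```python
import numpy as np

# Ramp control points: input positions and the values they map to.
# Both stay in [0, 1], so the range is preserved while the
# distribution is reshaped (here: a contrast-boosting S-ish curve).
ramp_pos = np.array([0.0, 0.25, 0.75, 1.0])
ramp_val = np.array([0.0, 0.10, 0.90, 1.0])

signal = np.random.rand(10_000)                 # made-up normalized signal
ramped = np.interp(signal, ramp_pos, ramp_val)  # evaluate the ramp per sample

print(ramped.min(), ramped.max())  # still ~0 to ~1: range preserved
```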

Fit is the other. In some 3D programs, fit may be named range (aiRange in Maya Arnold, or Map Range in Blender’s shader nodes). Fit controls the range of a signal without affecting the distribution. What do I mean by that?

We take the domain of the input and stretch or compress it, and we can also choose to change its center. For example, given a signal with a normalized range of 0 to 1, we may want to give it a different range, say 0-100. Or maybe we want the signal to be zero-centered, changing it from 0-1 to -c to +c. One case where you would want this is an add operation: say we want the signal to add variance on top of another signal using a noise texture.
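
Here is a minimal fit sketch in plain Python. The clamp flag and its default are my assumption; as the caveats below discuss, different software handles out-of-range values differently:

```python
def fit(x, in_min, in_max, out_min, out_max, clamp=True):
    # Normalize into 0-1 over the input range, then scale into the output range.
    t = (x - in_min) / (in_max - in_min)
    if clamp:
        t = min(max(t, 0.0), 1.0)
    return out_min + t * (out_max - out_min)

print(fit(0.5, 0.0, 1.0, 0.0, 100.0))  # 50.0
print(fit(0.5, 0.0, 1.0, -2.0, 2.0))   # 0.0 -- zero-centered, c = 2
```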

Fit is easily one of the most versatile tools, because the outputs of a signal can fit a variety of different contexts depending on its range. Compare using a signal to drive an image texture, which usually expects normalized values, with driving the scale of scattered instances, which may want values much larger than 1. Fit is nice because you can use the same signal, with the same distribution, to drive many different elements in a scene, which is also efficient.

Here are a few caveats:

One key thing to note is that the distribution is maintained because fit stretches space equally: it linearly interpolates from the old range to the new range. Blender’s Map Range node also allows you to interpolate nonlinearly, using an ease-in/ease-out curve. In that case, if the original distribution of the signal was uniform, more samples will be concentrated towards the endpoints of the range. In a way, you can think of this as operating like a ramp, except the transformation is applied to the input domain instead.
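
A sketch of that nonlinear variant: the same remap, but with a smoothstep ease on the normalized parameter (similar in spirit to Blender’s Smooth Step mode). Since smoothstep flattens out near 0 and 1, a uniform input piles up near the output endpoints:

```python
def fit_smooth(x, in_min, in_max, out_min, out_max):
    t = (x - in_min) / (in_max - in_min)
    t = min(max(t, 0.0), 1.0)
    t = t * t * (3.0 - 2.0 * t)  # smoothstep ease-in/ease-out
    return out_min + t * (out_max - out_min)

print(fit_smooth(0.25, 0.0, 1.0, 0.0, 1.0))  # ~0.156: pulled toward 0
```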

A fit function takes an input range and maps it to an output range. Different software can behave differently for values OUTSIDE of the input range (i.e. you feed in an input range of 0-1, but the signal contains values outside it). Often, values outside the input range are automatically clamped to it.

One common mistake I ran into when playing around with fit: I would procedurally set the input range to exactly the minimum and maximum of the signal. But say we are working with a strange signal, such as a discrete signal with outliers (e.g. a min and max of 0 to 100, where the typical values range from 0-10 but one data point happens to sit at 100). Often we don’t really care about the clamping behavior for outliers, so we may as well shrink the maximum of the input range down to the typical values. That way the remapped signal has appropriate contrast, rather than wasting its range accounting for outlier behavior.
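
One practical fix (a sketch, not the only approach): derive the input range from percentiles instead of the raw min/max, so a single outlier can’t flatten the contrast of everything else:

```python
import numpy as np

signal = np.concatenate([np.random.uniform(0, 10, 999), [100.0]])  # one outlier

# Raw min/max wastes most of the output range on the outlier...
naive_lo, naive_hi = signal.min(), signal.max()
# ...so take robust percentiles as the input range and clamp the stragglers.
lo, hi = np.percentile(signal, [1, 99])
remapped = np.clip((signal - lo) / (hi - lo), 0.0, 1.0)
print(naive_hi, hi)  # 100.0 vs. roughly 10: far better contrast after the fit
```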


Work in progress:

Signal processing using geometric properties

E.g., edge detection mask

Masking as texture or attribute!

Normalized values to use as a mask

Power, spherical vector interpolation rather than element wise interpolation

Convolutions, kernels

Shaping functions:

Machine learning bbox blog

Scattering rocks: Noise texture, frequency, curvature filtering, cinching the range of alignment with ground normals (more relevant for grass), range and distribution of rock scale and rotation

Filtering using geometric properties, for example curvature for scattering or dot product for material filtering.

Acerola

Mention series and Taylor series and Taylor polynomials (using polynomials as an approximation for other functions that are more computationally costly)

Sources:

https://www.youtube.com/watch?v=YJB1QnEmlTs (SimonDev Lerp, Smoothstep, Shaping Functions)

https://iquilezles.org/articles/functions/

https://easings.net/

https://mynameismjp.wordpress.com/2012/10/21/applying-sampling-theory-to-real-time-graphics/

https://thebookofshaders.com/05/

http://www.flong.com/archive/texts/code/shapers_poly/

http://www.flong.com/archive/texts/code/shapers_exp/
