It seems that this series of articles has struck a chord with the Poser/Studio artists. I’m glad, and flattered by your interest.

The goals of these articles are to:

  • Question the status quo: the nodal system or similar “computer-friendly/artist-unfriendly” models
  • Open the discussion on what alternatives we can have to the old models
  • Establish the idea that the common user, and not just the super-hacker, should be able to customize materials
  • Show how you can master the materials in your scenes

Establishing the idea that a 3D artist with an intermediate knowledge of the program can customize materials is “heresy” in the world of 3D. And yet, that is what needs to happen if we really want to make 3D art available to everyone.

As we saw in Part 3, we need two fundamental conditions to make materials approachable by anyone:

  • Materials need to be recognizable as being of a certain type: metal, glass, skin, water, etc.
  • Materials need to only show the relevant properties and show them in a consistent manner

As I mentioned to Paul Bussey in his interview for Digital Art Live, when I was designing Reality 3 I started with the idea of adding a nodal system. Given that Reality 3 introduced a sophisticated texturing system with nested textures, the choice seemed logical. Logical, that is, based on the point of view that “this is how it has always been done” for most 3D programs.

The more I thought about it, the less I liked it. I just could not see the nodal system as a good User Interface. In fact I realized that it was quite the opposite. The solution provided by Studio, the “property bucket,” was not much better. The UI of Reality 1.0 had been designed exactly to avoid the way Studio organizes the Surfaces tab and a lot of you told me how much you liked that approach.

Unfortunately the Reality 1.0 UI could not scale up to more complex materials, so we needed to find a compromise: something like Reality 1.0, but expandable. It took months to arrive at a working solution, but I think that the current implementation has many advantages. Let’s see it in action.

Materials in action

Reality partitions the material data so that what is most relevant is in the foreground and the less used options are available with a simple click. Whenever I design a UI I measure how many clicks of the mouse are needed to find the information needed. One click is optimal, two clicks are acceptable, and three clicks start to present a problem and should be analyzed for better solutions.

So, when we click on a material name we see the main material data:

Material default presentation

The Opacity controls are not shown but are one click away. The bump map, displacement map, subdivision, and emission controls are in the Modifiers tab, one click away, but out of sight. What is important to remember is that this is the same presentation that we get every time we click on a material. It’s consistent, it shows only the most used properties of the material, and it gives us a simple view on what can be edited.

Each material in Reality has one or more channels that include textures. A texture is a graphic pattern that is painted on the surface of the material. The simplest texture possible is a solid color: a texture where all pixels have the exact same value. The image map texture is a texture that paints an image stored in a file, such as a JPEG or PNG. Reality has 14 types of textures and they are presented on the screen using a “Texture Preview” widget. The widget has six components:

Texture Preview Widget

  1. This is the function of the texture. In this case it affects the Diffuse channel of the material. Diffuse is the base color of the material.
  2. This is the type of the texture. As mentioned, Reality has 14 types of texture. In this case we are looking at an Image Map, a texture that paints the surface of the material using an image file on disk, such as a JPG or PNG.
  3. This is the encoding of the texture. The color wheel tells us that the texture uses RGB data. Reality supports two texture encodings: RGB and numeric. A numeric texture provides a numeric value for each point of the texture. For example, a bump map is a numeric texture where each point is set to a value between -1.0 and 1.0 that indicates the elevation of the surface: -1.0 is a “dip,” 1.0 is a “bump,” and a value like 0.25 will be rendered as a raised area, although not at the highest point.
  4. This is the name of the texture. Each texture has a unique name that identifies it within the material. We are really not concerned about this; it’s just a piece of data that sometimes comes in handy.
  5. This is the preview of the texture. Since we have 14 different types, Reality shows a “summary” of the texture, and the summary varies with the texture type. For image maps the program shows the file name. For colors it shows the color and RGB value. See below for more examples of texture previews.
  6. When clicking on the gear menu Reality will show a menu of actions that can be performed on the texture.
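To make the two encodings concrete, here is a minimal sketch in Python. This is a hypothetical illustration, not Reality’s actual code: the `SolidColor` and `ConstantBump` classes and their `sample` method are invented for this example, modeling an RGB texture that returns a color per point and a numeric texture that returns an elevation value in the -1.0 to 1.0 range.

```python
# Hypothetical sketch (not Reality's actual code) of the two texture
# encodings: RGB textures yield a color per point; numeric textures
# yield a single value, e.g. -1.0 (dip) to 1.0 (bump) for a bump map.
from dataclasses import dataclass

@dataclass
class SolidColor:
    """Simplest RGB texture: every point has the exact same color."""
    rgb: tuple  # (r, g, b), each 0-255

    def sample(self, u, v):
        # The coordinates are ignored: the color is uniform everywhere.
        return self.rgb

@dataclass
class ConstantBump:
    """Simplest numeric texture: a uniform elevation value."""
    value: float  # -1.0 = dip, 1.0 = bump

    def sample(self, u, v):
        # Clamp to the valid bump range.
        return max(-1.0, min(1.0, self.value))

diffuse = SolidColor((200, 180, 160))   # a skin-like base color
bump = ConstantBump(0.25)               # a raised area, not the highest point

print(diffuse.sample(0.5, 0.5))  # (200, 180, 160)
print(bump.sample(0.5, 0.5))     # 0.25
```

The point of the split is that the UI can then present each channel appropriately: a color swatch for RGB data, a numeric slider for elevation data.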

Here are some of the other texture types in action:

Color Texture widget | Texture Combine preview


The Color texture shows us the color and RGB value, while the Texture Combine texture shows us that it’s combining two Image Map textures together via multiplication (the “M” between the two previews). Lastly, if we hover the mouse over an Image Map texture, Reality will show us a tooltip with the full path of the image, which can come in handy in several situations.
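A multiply-style combine like the one shown above can be sketched as follows. This is a hypothetical illustration, not Reality’s code: the `multiply_combine` function is invented for this example, and it treats each texture as a grid of grayscale values normalized to 0.0–1.0 so that the per-point product stays in range.

```python
# Hypothetical sketch of a multiply-style Texture Combine (not Reality's
# code): each point of the result is the product of the two input
# textures' values, with values normalized to the 0.0-1.0 range.
def multiply_combine(tex_a, tex_b):
    """Combine two same-sized grayscale textures by per-point multiplication."""
    return [
        [a * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(tex_a, tex_b)
    ]

# A 2x2 base pattern modulated by a 2x2 mask: white (1.0) leaves the
# base unchanged, gray (0.5) darkens it, black (0.0) removes it entirely.
base = [[1.0, 0.8],
        [0.6, 0.4]]
mask = [[1.0, 0.5],
        [0.5, 0.0]]

print(multiply_combine(base, mask))  # [[1.0, 0.4], [0.3, 0.0]]
```

This is why multiplication is such a common combine mode: it acts like a mask, letting one texture selectively darken or cut out parts of another.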

The Texture Preview widget provides a succinct way of looking at the essential texture information without needing to click around.

Next week we will explore more of the texture functions and how we can use them to our advantage.
