barre logo Eclat Digital

Accurate material simulation with Ocean™: combining texture capture and spectral data

Captured texture mixed with measured spectral data, simulated in an Ocean™ material.

Introduction to surface texture simulation with Ocean™

To simulate surface texture or a bump effect on a 3D object, a bump, normal, or height map can be used in Ocean™ and applied to the object. A normal or height map can be procedural, with a repeating pattern, or can be captured from a real sample to reproduce it fully in the simulation.

Common tools can be used to capture the texture of a given sample. However, these tools generally use RGB albedo data to render the object’s color (in addition to the elevation data). In the case of spectral rendering with Ocean™, this RGB information is not sufficient, especially for rendering objects under different lighting conditions: see for example our article on metamerism.

In this article we present a method that combines captured texture with measured spectral values to obtain a spectral textured material for Ocean™. [fig 1]


Figure 1: Basic concept of mixing input data: a captured normal map of the material’s surface elevation (left), combined with measured spectral data imported into Ocean™ (right).

One way to create high-quality textures is to capture real-world textures using photography and then process them with specialized software. We will show the process of capturing elevation data using a photographic method within texture-mapping software, then simulating the material’s optical behavior with Ocean™.


Basic concepts of texture mapping: normal map data, or how it bends the light rays

At first sight, normal maps look a bit strange, not because of their dazzling, flashy colors, but because those colors encode information that can only be understood with a basic knowledge of how 3D graphics represent surfaces. Fortunately, it’s not as complex as it seems.

The role of normals in 3D graphics:

To represent the surface behavior of any object in modern 3D graphics, each polygon has a normal: a vector perpendicular to the polygon’s surface. The angle between this vector and the direction of the light determines how the surface should be lit or shaded. Normals are therefore essential for calculating how light interacts with the surfaces it encounters.
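As a minimal illustration of this relationship, here is a short Python sketch of Lambert’s cosine law (the function name is ours, not part of any particular engine): the diffuse intensity at a point is the clamped dot product between the normalized surface normal and the light direction.

```python
import math

def lambert_shading(normal, light_dir):
    """Diffuse intensity from the angle between surface normal and light.

    Lambert's cosine law: intensity is the clamped dot product of the
    normalized surface normal and light direction. Head-on light gives 1.0;
    grazing or back-facing light gives 0.0.
    """
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    n = normalize(normal)
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# Light straight above a flat surface: full intensity.
print(lambert_shading((0, 0, 1), (0, 0, 1)))  # 1.0
# Light 60 degrees off the normal: cos(60°) = 0.5.
angle = math.radians(60)
print(round(lambert_shading((0, 0, 1), (math.sin(angle), 0, math.cos(angle))), 3))  # 0.5
```

Tilting the normal away from the light therefore darkens the surface, which is exactly the effect a normal map exploits per pixel.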

There are also normal vectors on each vertex of each polygon. Interpolating their angles across the object’s polygons is used to display a smooth surface, even without many polygons; this is the basis of Phong shading.

Function of normal maps

Normal maps are “just” 2D images that encode the elevation behavior of a surface using only three color channels (red, green, blue).

As you may have guessed, we are using technical tools to trick the viewer, taking advantage of the 3D software’s rendering method while limiting modifications to the 3D model. This saves time and avoids complex operations, without having to modify or create any polygons.

This 3D information, stored in the RGB channels of a normal map image, corresponds to the X, Y, and Z coordinates of the surface normal. Each pixel of the normal map corresponds to a specific point on the 3D model, thanks to the UV coordinate system.
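To make the encoding concrete, here is a small Python sketch (the function name is ours) that decodes one 8-bit RGB normal-map pixel back into a unit normal vector, using the common mapping of each channel from [0, 255] to [-1, 1]:

```python
def decode_normal(r, g, b):
    """Map an 8-bit RGB normal-map pixel back to a unit tangent-space normal.

    Each channel [0, 255] encodes one component in [-1, 1]: R -> X, G -> Y,
    B -> Z. The typical 'flat' pixel (128, 128, 255) decodes to roughly
    (0, 0, 1), i.e. a normal pointing straight out of the surface.
    """
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = b / 255.0 * 2.0 - 1.0
    # Renormalize to undo the 8-bit quantization error.
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

nx, ny, nz = decode_normal(128, 128, 255)
print(round(nx, 2), round(ny, 2), round(nz, 2))  # 0.0 0.0 1.0
```

This also explains the characteristic bluish tint of normal maps: a mostly flat surface is dominated by pixels near (128, 128, 255).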

Limits of normal maps

Normal map intensity can be increased or decreased by pushing the colors of the three channels (red, green, blue) toward their respective maximum saturation. In short, the more saturated a color channel is, the more visible the relief effect along that axis will be. Over-saturation greatly increases the risk of visual aberrations: successful use of this technique lies in subtle application and intensity, as there are limits beyond which the desired effect is ruined. [fig 3]


Figure 3: Three levels of normal map intensity, from left to right: low / medium / extreme. Extreme color saturation ruins the surface elevation simulation and makes the effect hard to read, because the normals are at extreme angles.
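The intensity adjustment described above can be sketched in a few lines of Python (an illustrative helper, not a tool from any specific package): the in-plane X/Y components are scaled, the vector is renormalized, and the result is re-encoded to 8 bits.

```python
def scale_normal_intensity(pixel, strength):
    """Scale a normal map pixel's apparent bump intensity.

    Decodes the 8-bit pixel, multiplies the X/Y (in-plane) components by
    `strength`, renormalizes, and re-encodes. strength > 1 exaggerates the
    relief; pushing it too far drives channels toward saturation and causes
    the visual aberrations described above.
    """
    x, y, z = (c / 255.0 * 2.0 - 1.0 for c in pixel)
    x, y = x * strength, y * strength
    length = (x * x + y * y + z * z) ** 0.5
    x, y, z = x / length, y / length, z / length
    return tuple(round((c + 1.0) * 0.5 * 255.0) for c in (x, y, z))

tilted = (180, 128, 230)                    # a slightly tilted normal
print(scale_normal_intensity(tilted, 2.0))  # stronger tilt, blue channel drops
print(scale_normal_intensity(tilted, 0.5))  # flattened toward (128, 128, 255)
```

Because the blue (Z) channel shrinks as strength grows, repeated exaggeration pushes pixels toward the saturated, hard-to-read colors seen in the rightmost image of figure 3.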

This technical “trick” has its limits: as explained previously, it is only a way to fake light behavior. It is not suited to representing deep indentations in the surface; for those, we would have to model each gap, adding complexity and production time. Normal maps are useful for dust, scratches, and small imperfections. Since a normal map is an image, keep in mind that it is resolution dependent: the higher the resolution, the better the surface simulation. [fig 4]


Figure 4: (left) Normal map with RGB channel data and (right) the result in real-time rendering software, showing how light is calculated. Elevation values for this material are about 0.7 millimeters high. The bluish color seen in the normal map is the value for zero elevation.


Achieving real-world accuracy in the simulation of textures with Ocean™:

Step 1: Capturing real-world textures:

a) Equipment and set-up

The first step in creating a texture is to capture the real-world texture using photographs. For this, we need a large texture sample, a high-resolution camera, and a lighting set-up.

As shown in [fig 5 & 6], the light is placed at eight different angles around the sample to illuminate all the elevation variations, which helps capture the details of the texture. How does it work? Simply through the basic interplay of light and shadow: when you illuminate an object from at least four different angles (eight is better), you can perceive the object’s topography through shifting cast shadows and illuminated areas. Naturally, this operation is done with no lamp other than the one used to illuminate the sample.


Figure 5: Rig for texture capture: a camera and a soft-cone D65 light

Figure 6: Conceptual representation of how D65 light is moved around the sample to capture elevation at eight different angles.

b) Image processing

The pictures are captured in RAW format, which stores the raw sensor pixel values at a high bit depth, allowing us to handle a wide range of color and tonal information. RAW is also free of compression artifacts and offers great flexibility for post-processing: adjusting parameters such as white balance, contrast, color temperature, sharpening, and geometric corrections helps minimize differences between images. In addition, we measure the color temperature at the time of shooting and use it for RAW adjustment, to keep the process as consistent as possible. Once all RAW images are equalized, they are rasterized into a format that is lighter and easier to handle than RAW (uncompressed PNG, for example). [fig 7 & 8]

 

Figure 7: One of the eight RAW images from the camera set-up, before being sent to the texture-editing software

Figure 8: Spectral light meter used to measure color temperature and values
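The equalization step can be illustrated with a deliberately simplified Python sketch (our own stand-in, not the actual RAW converter): photographing a neutral reference under the measured light yields per-channel gains that neutralize the light’s tint, so all eight captures match.

```python
def white_balance_gains(neutral_patch_rgb):
    """Per-channel gains that map a known neutral patch to gray.

    A crude stand-in for the RAW equalization step: the spectral light meter
    gives the shot's color temperature, and a photographed neutral reference
    lets us compute multipliers that remove the light's tint. Gains are
    normalized to the green channel, as in most RAW converters.
    """
    r, g, b = neutral_patch_rgb
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply the gains to one RGB pixel, clipping to the 8-bit ceiling."""
    return tuple(min(255.0, c * k) for c, k in zip(pixel, gains))

# A warm (tungsten-tinted) capture of a gray card:
gains = white_balance_gains((200.0, 160.0, 110.0))
print(apply_gains((200.0, 160.0, 110.0), gains))  # all three channels ≈ 160
```

Real converters work in the camera’s linear sensor space and use the measured color temperature directly, but the principle is the same: per-channel scaling so that neutral stays neutral across all eight shots.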

The eight texture sample photos are then imported and combined to generate the normal map. The images are combined by interpolating the luminance values collected for each image during the capture process described above. The normal map is then modified to be tileable over large surfaces: some parts of the captured sample are removed to improve tiling. [fig 9]

Figure 9: Combination and resulting normal map from eight PNG images containing elevation information, highlighted by variations in luminance due to the different light positions
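The combination step can be sketched with classical Lambertian photometric stereo. We do not claim this is the exact algorithm used by the texture software, but it conveys the idea: each pixel’s intensity under light i is albedo × dot(normal, light_i), so stacking the eight measurements gives an overdetermined linear system per pixel.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel normals from images lit at known angles.

    Assuming Lambertian shading, intensity_i = albedo * dot(normal, light_i).
    `intensities` is (n_lights, H, W); `light_dirs` is (n_lights, 3).
    Solves the stacked system per pixel in the least-squares sense.
    """
    n_lights, h, w = intensities.shape
    flat = intensities.reshape(n_lights, -1)               # (n_lights, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, flat, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / albedo, 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat surface with normal (0, 0, 1) and albedo 0.8,
# lit from four tilted directions (eight are used in the real capture).
L = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866], [0.0, -0.5, 0.866]])
true_n = np.array([0.0, 0.0, 1.0])
I = (0.8 * (L @ true_n)).reshape(4, 1, 1) * np.ones((4, 2, 2))
normals, albedo = photometric_stereo(I, L)
print(np.round(normals[:, 0, 0], 3), round(float(albedo[0, 0]), 3))
```

With more light directions the system becomes better conditioned, which is why eight angles recover finer topography than the four-angle minimum.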

Step 2: Ocean™’s innovative approach for material creation: combining spectral data and captured textures:

Creating an opaque material in Ocean™ is quite simple, and mixing measured inputs with captured data is made easy by the multiple import methods: Ocean™ can read various measured data formats (X-Rite files, tabulated data, etc.).

In Ocean™, generic materials are made of four basic input categories: bulk, bump/height/normal, emitter, and BSDF. [fig 10]

Figure 10: The four basic input nodes in an Ocean™ generic material

  • Bulk defines the volume properties of the material, i.e. how light rays pass through it: the refractive index and, since Ocean™ is a spectral simulation engine, the absorbance value for each wavelength. This section of the Ocean™ material is used to simulate transparent and diffuse volume materials, with or without scattering.

Bump/height/normal is used to manage the data for elevation:

  • Bump data is fairly “old-school”: it holds only black-and-white values with 256 gradient levels, where black means no elevation and white means higher elevation. It only changes the polygon normal values along one direction, the Z axis (white is Z up, black is Z down).
  • A height map also holds black-and-white data, but it is mostly used to displace the geometry, moving the topology of the 3D mesh along the Z axis. Ocean™ does not displace or deform 3D meshes; displacement is managed in the CAD software, and it requires the 3D data to have enough polygons to be properly displaced, or the mesh must be tessellated. A height map can carry more depth precision, i.e. more than 8 bits per pixel (256 gradient levels); it then contains more information and can displace the geometry more precisely, avoiding the aliasing generated by a low-bit-depth map. [fig 11 & 12]
Figure 11: Image bit depths: low depth generates aliasing because of a lack of precision.

Figure 12: Depths of images, from low to high, as used for mesh displacement. Low depth induces a hard displacement of the mesh and creates artifacts due to the exaggerated stretching of polygons (the right part of the image shows the depth variation). Adding depth creates a smooth, natural displacement.

  • The normal map is the one detailed previously; it uses colored information to reproduce the surface’s visible features.

Finally, the part that describes surface and light interaction:

  • BSDF represents the surface behavior of a material: how light interacts with the object’s surface (glossy, rough, metallic, etc.). [fig 13 & 14]
Figures 13 & 14: BSDF with different roughness values, from mirror to rough (left to right)
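The bit-depth limitation illustrated in figures 11 and 12 can be demonstrated numerically with a short Python sketch (illustrative, not Ocean™ code): quantizing a smooth height ramp to 8 bits collapses it to at most 256 distinct levels, which is what produces staircase artifacts during displacement, while 16 bits preserves far finer steps.

```python
def quantize_height(h, bits):
    """Quantize a height value in [0, 1] to the given bit depth.

    Illustrates why displacement from an 8-bit height map can produce
    staircase artifacts: with 256 levels over a 0.7 mm relief, the smallest
    representable step is ~2.7 micrometers, while 16 bits shrinks the step
    by a factor of 256.
    """
    levels = (1 << bits) - 1
    return round(h * levels) / levels

ramp = [i / 1000.0 for i in range(1001)]   # a smooth 0..1 slope, 1001 samples
steps_8 = len({quantize_height(h, 8) for h in ramp})
steps_16 = len({quantize_height(h, 16) for h in ramp})
print(steps_8, steps_16)  # 256 1001
```

The 8-bit ramp collapses to 256 distinct plateaus, whereas the 16-bit version keeps every one of the 1001 input heights distinct, matching the smooth displacement shown on the right of figure 12.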


Step 3: Ocean™ material creation is fast and easy!

Once the measurements and elevations are captured, we can create a mixed input material into Ocean™ in less than a minute!

In the Ocean™ objects panel, create a new material using the Add button:

  • Add a Bump node to manage the captured material elevation data and load the captured normal map image.
  • Choose a BSDF node (here a Phong one) with a Uniform Filter Shader for the Diffuse Color node.
  • Then, add the tabulated data containing the material’s surface wavelength measurements to a Tabulated Spectrum node.
  • The material is now created and ready for simulation rendering in Ocean™! [fig 15]

Figure 15: Mixed data Ocean™ material: captured normal map + measured tabulated data
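Conceptually, importing tabulated spectral data means resampling measured wavelength/value pairs onto the renderer’s own wavelength grid. Here is a hedged, pure-Python sketch of that interpolation step (our own helper with illustrative reflectance values, not Ocean™ internals):

```python
def resample_spectrum(wavelengths, values, target_wavelengths):
    """Linearly interpolate tabulated spectral measurements onto a new grid.

    Tabulated data is a list of (wavelength, value) samples, e.g. from a
    spectrophotometer; a spectral engine needs values at its own wavelength
    samples, so measured points are interpolated. Assumes `wavelengths` is
    sorted ascending; values outside the measured range are clamped.
    """
    out = []
    for t in target_wavelengths:
        if t <= wavelengths[0]:
            out.append(values[0])
        elif t >= wavelengths[-1]:
            out.append(values[-1])
        else:
            # find the bracketing measured samples
            for i in range(1, len(wavelengths)):
                if wavelengths[i] >= t:
                    w0, w1 = wavelengths[i - 1], wavelengths[i]
                    v0, v1 = values[i - 1], values[i]
                    f = (t - w0) / (w1 - w0)
                    out.append(v0 + f * (v1 - v0))
                    break
    return out

# Illustrative reflectance of a dark grey plastic at three wavelengths (nm):
wl = [400.0, 550.0, 700.0]
refl = [0.04, 0.06, 0.05]
print(resample_spectrum(wl, refl, [400.0, 475.0, 550.0, 625.0, 700.0]))
```

A real import keeps the full measured resolution; the point is simply that spectral materials carry a value per wavelength rather than three RGB numbers.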


Conclusion - Ocean™ simulation process enables accurate digital material visualization

In this article, a rough dark grey plastic with an embossed pattern was studied. The albedo surface properties of this material were measured (using the spectro D8 process) and mixed with elevation data captured in-house at Eclat Digital.

By making normal maps an integral part of the material simulation process, we can achieve fidelity without the design effort of complex geometry and without distorting or modifying the topology of the 3D objects provided; replicating the fine organic details of a surface through 3D modeling would be far more time-consuming.

Using normal maps together with other data in the 3D simulation domain is a powerful process that fits perfectly into the Ocean™ pipeline. It enables highly accurate digital material visualization suitable for precise performance analyses, such as aesthetic validation or lighting studies. See an example of material iterations for an automotive interior below:

Figure: automotive interior material iteration, rendered without a normal map.

Let's talk about your project

Read about our services & solutions or contact us for customized support:
