The Visual Asset Creation Pipeline defines the workflow to produce Visual Assets usable for 3D real-time visualization of highly configurable products.
The pipeline consists of different steps whose existence and scope highly depend on the input of the pipeline (quality and completeness) and the desired output (intended use case and corresponding constraints and goals).
(Image of a general Asset Creation Pipeline)
Input for the pipeline is typically the 3D CAD model/data of the desired product to be visualized, either in the native CAD file format or a general interchange format like STEP or JT. The challenge is to gather the right data and information to build suitable visual assets.
The first mandatory step is to prepare the data for use in real-time visualization. This includes the actual tessellation of the CAD data as well as simplifying and repairing it if necessary. Removing parts with intellectual-property concerns is also part of this step.
Depending on whether parts have been removed, are broken, were missing from the initial input delivery or are simply too complex for real-time visualization, new 3D models need to be created to complete the product visualization. This is typically done based on reference images of the specific parts. This second step also ensures that the parametric features of the product are present and can be driven correctly by the configurator afterward.
The third step deals with the visual appearance of the product. Materials and corresponding textures need to be created and assigned to the individual parts of the product or be prepared for later configuration changes. Additional input necessary to achieve correct results are reference images or samples of the materials and colors. Setting the presentation stage of the product (environment, lighting, interaction possibilities) is also part of this step.
The last step combines the individual assets to assemble the final product and verifies that the assets can visualize the different configuration possibilities correctly.
Before the visual assets are used in a live system, the individual files are automatically optimized for the channels, platforms and devices on which they are going to be used.
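The steps above can be sketched as a linear pipeline. The following Python sketch is purely illustrative; the stage names and data shapes are assumptions, not an actual pipeline API:

```python
# Illustrative sketch of the asset-creation pipeline; the stage names mirror
# the four steps described above and are NOT a real API.

def prepare(parts):
    """Step 1: drop IP-sensitive parts, repair and tessellate the rest."""
    return [dict(p, tessellated=True) for p in parts if not p.get("ip_sensitive")]

def remodel(parts, required_ids):
    """Step 2: recreate parts that were removed, broken, or missing."""
    present = {p["id"] for p in parts}
    rebuilt = [{"id": i, "tessellated": True, "rebuilt": True}
               for i in required_ids if i not in present]
    return parts + rebuilt

def apply_materials(parts, materials):
    """Step 3: assign a material to every part."""
    return [dict(p, material=materials.get(p["id"], "default")) for p in parts]

def assemble(parts):
    """Step 4: combine the parts into one configurable asset."""
    return {"parts": parts, "configurable": True}

cad_input = [{"id": "body"}, {"id": "engine", "ip_sensitive": True}]
asset = assemble(apply_materials(remodel(prepare(cad_input), ["body", "engine"]),
                                 {"body": "red_paint"}))
print(len(asset["parts"]))  # the removed engine is rebuilt as a placeholder
```

The per-channel optimization of the final step would then run once per target platform on the assembled asset.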
Optimizing how visual assets are created is as important as it has ever been for manufacturers. Creating top-of-the-line visualizations for customers takes your sales pitch from good to great.
Optimization refers to the process of finding a trade-off between visual fidelity, download speed and rendering performance.
In modern engineering design, nearly all 3D geometries are created by computer-aided design (CAD) systems. And those geometries are the natural starting point or input for creating Visual Assets for a 3D real-time visualization.
B-Reps and NURBS
Boundary representations (B-Reps) are the primary method of representing modeled objects in those CAD systems. The mathematical description of curve and surface elements can vary, but they are usually given in parametric forms represented by non-uniform rational B-splines (NURBS).
(Image of teapot represented by a B-Rep)
The main advantage of this representation is the ability to describe a surface of almost any shape compactly and store it efficiently. Additionally, the underlying math yields an accurate definition of the surface shape independent of the distance from which the surface is examined. NURBS surfaces have no pre-defined "resolution".
The CNC machine tools that create the tooling for final products work from these accurate, smooth NURBS data.
Although NURBS are ubiquitous in the CAD industry, there is currently no built-in hardware support for displaying NURBS surfaces. To be displayed in a 3D application, NURBS surfaces need to be translated into meshes (polygons, edges, vertices), the native language of modern graphics cards. Graphics Processing Unit (GPU) pipelines are very efficient at processing triangles, but they cannot work directly with parametric surfaces.
(Image of a teapot represented by a triangle model)
A Mesh Surface
A mesh is composed of many connected polygons, typically triangles, forming a surface that a GPU can understand and render in a 3D application. The number of triangles in the polygonal representation depends on the accuracy used when approximating the original precise B-Rep representation. This process of taking the continuous, mathematical equation of a surface and approximating it with polygons is called meshing, triangulation or tessellation.
Since the direct evaluation of NURBS surfaces on the GPU is a highly complex and computationally intensive task, they are usually converted into simpler surface descriptions and tessellated on the CPU (Central Processing Unit) as a pre-processing step. Afterward, the set of generated triangles is sent to the GPU.
The resource demands (CPU, GPU, memory) of a dynamic re-tessellation at every frame, on top of all the other tasks necessary for an interactive real-time visualization, are simply too high for an average consumer device. Therefore, tessellation is not done on the fly while the 3D real-time visualization is running; it is done as a pre-process upfront.
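As a minimal sketch of this pre-processing step, the snippet below samples a parametric surface (a sphere standing in for a NURBS patch) on a fixed (u, v) grid and emits two triangles per grid cell. Production tessellators adapt the sampling density to curvature and a chord-error tolerance; everything here is illustrative:

```python
import math

def sphere(u, v, r=1.0):
    """Parametric sphere as a stand-in for a NURBS patch, u and v in [0, 1]."""
    theta, phi = u * math.pi, v * 2 * math.pi
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def tessellate(surface, n=8):
    """Sample the surface on an (n+1) x (n+1) grid, two triangles per cell."""
    verts = [surface(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
    tris = []
    for i in range(n):
        for j in range(n):
            a = i * (n + 1) + j                # top-left corner of the cell
            b, c, d = a + 1, a + n + 1, a + n + 2
            tris += [(a, c, b), (b, c, d)]     # split the quad into 2 triangles
    return verts, tris

verts, tris = tessellate(sphere, n=8)
print(len(verts), len(tris))  # 81 vertices, 2 * 8 * 8 = 128 triangles
```

The resulting vertex and triangle lists are exactly the kind of data that is uploaded to the GPU after the pre-process.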
It is important to know that triangles approximating smooth edges and surfaces can never achieve the perfect smoothness of a shape originally defined in NURBS, unless a very high number of triangles is used, which in turn causes performance issues. Removing, combining or simplifying non-visual elements from CAD files is therefore crucial for generating high-performance, high-quality visual assets. Optimizing visual asset creation is a key measure of success for any visual configuration project.
(Images of a teapot with different tessellation versions)
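The trade-off between smoothness and triangle count can be quantified for a simple case: approximating a circle of radius r with n straight segments leaves a maximum deviation (the sagitta) of r·(1 − cos(π/n)). A quick sketch:

```python
import math

def sagitta(r, n):
    """Maximum deviation of an n-segment polygon from a circle of radius r."""
    return r * (1 - math.cos(math.pi / n))

# Error shrinks quickly at first, then ever more segments buy ever less accuracy.
for n in (8, 16, 64, 256):
    print(n, round(sagitta(100.0, n), 4))
```

For a radius of 100 units, 8 segments deviate by about 7.6 units while 64 segments still deviate by about 0.12, which is why "just add more triangles" quickly collides with the performance constraints discussed below.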
A note about the author: This week's author is Marco Lang, Tacton's Senior Visualization Product Manager. Marco has more than 10 years' experience working with visual asset creation and configuration at Lumo Graphics and now Tacton.
Showcasing your manufacturing product to buyers is an important aspect of the sales pitch for your company. Many companies lose opportunities because they rely heavily on text-based descriptions of products. That’s why it’s as important as ever to add visual configuration to your sales pitch. (Read the basics of visual configuration)
Visual representations of your products, services and brand are key drivers of online engagement. Digital assets link your customers and company by giving a way to accurately interact with your products.
In this blog, we’re going to take a deep dive into the visual asset creation process and how putting in the time on the back end will ultimately benefit not only your sales team but most importantly, your customers.
A Digital Twin is a digital model of a physical counterpart (product, system or process), describing its characteristics, attributes and behaviors. To be more precise, a Digital Twin is the link between a Digital Master (master data or template) and a Digital Shadow representing a unique relationship to a specific real-world counterpart.
This virtual representation dynamically collects and combines data from the field as the counterpart matures along the different life-cycle stages: data from business systems in engineering, manufacturing, marketing and sales, as well as sensor data from operation, service and maintenance. This connection allows new ways of analysis, simulation, optimization, prediction, monitoring, documentation and communication for increased operational efficiency.
A Digital Thread
Learning and growing by capturing data from past and current behavior creates a Digital Thread representing the birth-to-retirement record of the product as it moves through its lifecycle.
Depending on the use case or the goal of a Digital Twin (e.g. manufacturing, operation or service), different kinds of data are collected on a Digital Twin.
Visual representations (3D Models), as part of a Digital Twin, enable the end-user to evaluate the product by looking at its shape, form or fit and analyzing and understanding its features, options and constraints.
(Image showing form, fit and function use cases of visual representations)
Configuration-compatible 3D real-time models form the core of Tacton's Visual Configuration solution.
(Image of some configuration possibilities of a truck)
All data and files necessary for a configuration-compatible 3D real-time visualization are referred to under the umbrella term "Visual Asset".
Important Visual Assets are:
3D Scene with its environment
Materials and Textures
3D Model (Mesh/Geometry)
A polygon-based model representing the product's shape and components in 3D.
(Image of a polygon-based model of a teapot)
The virtual 3D scene defines the scenery in the 3-dimensional world the product is put in. This includes the environment/surrounding, lighting and cameras to look at the product.
The environment can be a simple Background image or a complete 3D surrounding adding additional 3D models to the visualization. Check out how we do it with our Tructon or Parker Lift Demos.
Lighting in a virtual 3D world is as essential as it is in real life. Without any lighting it is dark and the product is not visible at all. Therefore, lighting is a crucial part of setting up a virtual 3D scene.
There are two ways of lighting a scene: gathering the information from a 360° high-dynamic-range image (HDRI), or setting lights with specific light types and their individual properties, as in a real photo studio.
Cameras define the different viewpoints on a product in the scene, often along with interaction possibilities and constraints like degrees of freedom and distance/zoom restrictions.
(Image of different scene elements like light, camera and environment)
While 3D models define the shape of an object, materials define the surface properties and therefore the look and feel of the object in the scene. Depending on the underlying calculation models and their parameters, the visualization can range from simple colors to realistic-looking surfaces that approximate real-life lighting behavior, for example by using Physically Based Rendering.
(Image of a teapot with two different materials)
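As a minimal illustration of such a calculation model, the snippet below evaluates Lambertian diffuse reflection, the simplest physically motivated shading term: brightness falls off with the cosine of the angle between the surface normal and the light direction. Full PBR materials layer Fresnel and microfacet specular terms on top, which are omitted here:

```python
def lambert(albedo, normal, light_dir):
    """Lambertian diffuse term: albedo scaled by max(N . L, 0).
    Both direction vectors are assumed to be unit length."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return tuple(a * max(n_dot_l, 0.0) for a in albedo)

red_paint = (0.8, 0.1, 0.1)                      # RGB base color of the material
print(lambert(red_paint, (0, 0, 1), (0, 0, 1)))  # light head-on: full albedo
print(lambert(red_paint, (0, 0, 1), (0, 1, 0)))  # light at 90 degrees: black
```

In a real engine this evaluation runs per pixel in a shader on the GPU, with the albedo typically sampled from a texture rather than a constant.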
Textures are images used by materials to help create realistic surfaces. They add detail and real-life variation of materials over a surface. These images can be created from scratch in dedicated applications or derived from real-life photographs.
In many cases, it can be helpful to simulate heights or small surface details via textures to keep the overall performance of the visualization as high as possible.
(Image of materials using additional textures for diverse looks)
End-User Constraints to Consider:
Since the 3D visualization is executed on the customer's device (think cell phone or tablet), the performance of that system is critical to the experience.
Although there have been huge improvements in consumer hardware (graphics cards) and platform capabilities (native and browser applications) over the last few years, the technology landscape out there is still very fragmented.
Less capable systems prevent the use of optimized functionality. And if a device doesn't support the amount of data necessary for the visualization, the visualization will be incomplete or will not show anything at all. Therefore, experiences are typically designed to work well on the lowest common denominator.
Since all necessary visual assets need to be transferred to the user's device to be displayed, the size of the visual assets and the speed and stability of the internet connection affect the experience as well, especially the initial loading time until the visualization is visible for the first time. The bigger the visual assets or the slower the connection, the longer it takes to download the content and start the experience.
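A back-of-the-envelope estimate makes this concrete: download time is roughly total asset size divided by effective connection speed. The asset sizes and speeds below are made-up examples:

```python
def load_time_seconds(asset_mb, mbit_per_s):
    """Estimated download time: convert megabytes to megabits, divide by speed."""
    return asset_mb * 8 / mbit_per_s

# Hypothetical asset bundle for one product visualization.
assets_mb = {"meshes": 12.0, "textures": 30.0, "environment": 8.0}
total = sum(assets_mb.values())  # 50 MB

print(load_time_seconds(total, 20))   # 20 Mbit/s mobile: 20.0 seconds
print(load_time_seconds(total, 100))  # 100 Mbit/s broadband: 4.0 seconds
```

The same 50 MB bundle that loads in a few seconds on broadband keeps a mobile user waiting many times longer, which is why per-channel asset optimization matters.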
Apart from the capabilities of the underlying system, the rendering performance depends on other criteria as well, like visual asset weight and the resulting visual fidelity (realism).
Visual asset weight refers to the "complexity" of the asset. The key performance metric for 3D models is the number of polygons they are composed of: the higher the polygon count, the slower that model renders. Lightweight assets are 3D models with a low polygon count.
Besides the individual weight, the actual number of different objects in the scene can be a crucial factor as well. Many different objects with different materials typically result in a higher number of draw calls on the graphics card, impacting rendering performance since more work needs to be done.
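A toy sketch of the effect: a naive renderer issues one draw call per object, while batching objects that share a material reduces the count to one call per material. The scene data is invented for illustration:

```python
from collections import defaultdict

# Hypothetical scene: (object, material) pairs for a configured truck.
scene = [("wheel_fl", "rubber"), ("wheel_fr", "rubber"),
         ("wheel_rl", "rubber"), ("wheel_rr", "rubber"),
         ("body", "paint"), ("bumper", "paint"), ("glass", "window")]

naive_draw_calls = len(scene)          # one draw call per object

batches = defaultdict(list)
for obj, material in scene:
    batches[material].append(obj)      # group objects sharing a material
batched_draw_calls = len(batches)      # one draw call per material batch

print(naive_draw_calls, batched_draw_calls)  # 7 vs 3
```

Merging parts that share a material is therefore a standard optimization when preparing visual assets.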
For textures, the weight is defined by the dimensions and the texel variety within the texture. For materials, the weight is defined by the complexity of the underlying shader, referring to the number of textures combined and the complexity of the underlying algorithms.
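Texture weight can be estimated with simple arithmetic: an uncompressed RGBA texture occupies width × height × 4 bytes, plus roughly a third more for its mipmap chain. Real engines use compressed GPU formats (e.g. BC or ASTC) that shrink this considerably; the sketch below only covers the uncompressed case:

```python
def texture_bytes(width, height, bytes_per_texel=4, mipmaps=True):
    """Rough memory footprint of an uncompressed texture.
    The mipmap chain adds roughly one third on top of the base level."""
    base = width * height * bytes_per_texel
    return int(base * 4 / 3) if mipmaps else base

MIB = 1024 * 1024
print(texture_bytes(1024, 1024) // MIB)  # ~5 MiB for a 1K RGBA texture
print(texture_bytes(4096, 4096) // MIB)  # ~85 MiB for a 4K RGBA texture
```

A single 4K texture can thus outweigh an entire mesh, which is why texture dimensions are a primary optimization target.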
Most CPQ systems merely encode a procedure for generating a quote as quickly as possible. For simple high-volume products, that might be enough, but for products with a high degree of variability and many technical interdependencies, the navigation must be flexible, as we noted in Part 1: Navigating Manufacturing Product Variability.
CPQ needs to be a smart product navigator, empowering different stakeholders to explore the product variability in their own ways, and ensure that it is correctly encoded.
So, let us look at what is required under the hood.
The greatest challenge of navigating the variability of physical B2B products, such as machinery or transportation equipment, is that the variability is constrained by thousands of compatibility restrictions. Those interdependencies interact. A valid combination for one constraint might violate another one. And there are often thousands or more. The combined consequence for the product variability is beyond human comprehension.
In order to support flexible navigation of the product variability, two things are essential:
A smart engine that can search the vast maze of restricted product variability in many ways, plus tools to present the variability intuitively to different audiences.
A smart definition of product variability: interdependencies are defined intuitively, clearly corresponding to the real interdependencies. The definition also needs to be concise, so that the same interdependency does not have to be defined multiple times. And it should be easy to inspect, understand and validate for the people involved in deciding about variability.
Most configurators on the market rely on sequential encoding of rules (even if they support some form of constraints). They encode a procedure, a sequence of decisions for configuring the product. The interdependencies are in the head of the person encoding the decision sequence. The same interdependency must be implicitly considered in multiple places for different decisions in the procedure.
Looked at one by one, each decision rule might seem simple, but their combined effect is not.
With a visual flow-chart, the procedure might even look intuitive. But with many interdependencies, the procedure quickly becomes a mess that is a nightmare to maintain. The procedure is not flexible for different uses. And there is no way to validate that the rules are correct. Millions of combinations cannot be tested.
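A toy example shows why interacting constraints defeat intuition: each rule below is trivial in isolation, yet the size of the valid configuration space only becomes clear by searching it. The options and rules are invented for illustration; real products have thousands of each:

```python
from itertools import product

# Hypothetical options for a configurable truck.
options = {
    "engine":  ["diesel", "electric"],
    "gearbox": ["manual", "automatic"],
    "cabin":   ["day", "sleeper"],
}

# Two simple-looking compatibility constraints.
constraints = [
    lambda c: not (c["engine"] == "electric" and c["gearbox"] == "manual"),
    lambda c: not (c["cabin"] == "sleeper" and c["engine"] == "diesel"),
]

keys = list(options)
valid = [dict(zip(keys, combo))
         for combo in product(*options.values())
         if all(rule(dict(zip(keys, combo))) for rule in constraints)]

print(len(valid), "of", 2 * 2 * 2, "combinations are valid")  # 4 of 8
```

Even here, half of the nominal combinations are invalid, and which half is only visible after checking every rule against every combination. A constraint engine does this search declaratively instead of burying it in a decision procedure.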
Messy Variability = Messy Business
Almost everything related to products in the whole organization depends on variability, which changes frequently when the products evolve. When the product variability remains a mess, everything else remains messy. This includes pricing, quoting, order processing, e-commerce, product planning, engineering, and much more. With a transparent product variability as the foundation, everything else related to products can be simplified and truly digitalized.
When more touchpoints in the product life cycle digitally access the product variability, more stakeholders need to be considered, which increases the demand for flexible navigation. Conversely, the easier it is to navigate the product variability, the more useful it will be for different stakeholders in the life cycle. The greatest challenge will be faced by manufacturers that transform into selling their products as a service since they will need to minimize the product life-cycle cost of the variability they offer.
Tacton Smart Commerce addresses all the needs of the product life cycle and product variability with CPQ. So, what is your business waiting for?
These challenges and so much more can be addressed with Tacton CPQ. No need to worry about messy variability anymore. Check out how Tacton CPQ can connect all your stakeholders and so much more in our Manufacturers’ Guide to Smart Commerce.
Tech with Tacton is a series that takes a deep dive into the technical topics that impact the manufacturing industry.
In B2B manufacturing, where the buyer of the equipment is an organization, the variation in customer needs is often large, and the order volumes are relatively low. This makes efficient mass production of predefined products infeasible. Customization is necessary.
However, customization introduces complexity, since it becomes unclear what the company can offer and what can be profitably and accurately delivered. With billions of potential combinations, the product can quickly become confusing.
As a consequence, almost every activity related to products in the organization becomes complex. The most obvious pains are lost sales opportunities, inefficient quoting processes and very high costs of quality assurance, but managing the product lines and the after-market is equally challenging.
Most B2B manufacturing companies strive to systematize these processes by designing their products configurable to meet a vast range of customer needs. They define what product variability they offer. The core challenge then is to empower stakeholders to navigate the product variability for their needs.
Customers: want to find out which solutions match their needs without costing too much.
Sales teams: want to quickly optimize their proposals for the customer's needs, to win the deal without losing profits and without getting bogged down in technical details.
Sales engineers: want to find out how to assess and overcome technical limitations for the customer without unforeseen consequences.
Engineers: those encoding the product variability need to quickly revise and validate interdependencies when they make changes to the product line over time, ensuring correctness and deliverability.
Product managers: need to optimize the variability for profitability. Which customer needs are compatible? Which component variants are rarely used? What happens to the overall variability if we replace or remove this component variant?
Pricing managers: need to balance margins with customer value and cost.
Each of these tasks is daunting when we look closely, and they are all very different.
Interdependencies always make it necessary for all parts to be compatible. Configure, Price, Quote (CPQ) technology can be your versatile autopilot when it comes to navigating your product variability.
Stay tuned for Part 2: Staying Flexible with Your Product Variability.