DIPARTIMENTO DI INGEGNERIA DELL'ENERGIA, DEI SISTEMI, DEL TERRITORIO E DELLE COSTRUZIONI

THESIS FOR THE ATTAINMENT OF THE MASTER OF SCIENCE DEGREE (LAUREA MAGISTRALE) IN MANAGEMENT ENGINEERING (INGEGNERIA GESTIONALE)

Creation and Validation of a New Metric for Measuring the Robustness of Products and Mechanisms

SUPERVISORS
Prof. Antonella Martini, Dipartimento di ingegneria dell'energia, dei sistemi, del territorio e delle costruzioni
Ing. Francesco Paolo Appio, Dipartimento di ingegneria dell'energia, dei sistemi, del territorio e delle costruzioni
Ing. Thomas J. Howard, Dipartimento di ingegneria meccanica

CANDIDATE
Ilenia Gaglioti (ileniagaglioti@libero.it)

Graduation session of 29/04/2015, Academic Year 2014/2015. Consultation NOT permitted.

To my Father and my Mother, whose presence fills my days with joy and love.

To my Grandparents, who have always been my happy island in moments of sadness.

Abstract

This thesis focuses on the search for new metrics for robust design, which make products and processes more robust and less sensitive to changes caused by factors outside our control. It deals with the construction and validation of these metrics using data mining techniques and statistical analysis, which are the real innovation for robust design. The metrics defined are then applied to a real case study in order to obtain a formal method that provides an indicator aimed at understanding whether or not it is necessary to act on the product design, through substantial changes, to make it more robust. To obtain this information we found a method able to transform the relative indices obtained into absolute indicators. Finally, we created a program that, by examining all combinations of the process parameters, offers a utility to validate the methods obtained.

TABLE OF CONTENTS

1. INTRODUCTION

2. LITERATURE REVIEW
   On the meaning and aim of Robust design
   Achieving robustness
   Taguchi and the robust design
   State of the art

3. RESEARCH PROBLEM AND CONCEPTUAL FRAMEWORK
   The Quality
   Applying quality
   Reliability
   A scientific approach to quality
   Why use robust design techniques

4. PROJECT APPROACH
   The system
   Signal to noise ratio
   Factorial planes
   Partial derivatives

5. CASE STUDY
   Toyota gas pedal
   Signal To Noise Ratio
   Factorial planes
   Partial derivatives

6. AN ABSOLUTE METHOD

7. METHODS VALIDATION

8. DISCUSSION
   Introduction
   The theory contribution
   Future research directions

9. APPENDIX A

10. REFERENCES

1. Introduction

The life cycle of a product is characterized by different phases, summarized in Figure 1 below.

Figure 1: The life cycle of a product

The most common definition of quality refers both to the characteristics of the product and to the industrial activities developed to produce and use it.

From these definitions it is possible to imagine the existence of:

• a form of "intrinsic" quality of the product, which depends on how it was designed;

• a form of "achieved" or "realized" quality, which depends on how the activities subsequent to the design were carried out.

For these reasons, the maximum level of product quality is determined in the first two phases of its life cycle, when we identify the characteristics that it has to possess. In the remaining stages all efforts can only be aimed at maintaining the level of intrinsic quality, not improving it.

Theoretically, the product should pass from one stage to the next without coming back to previous phases. In practice, at each phase changes are requested because the product does not fit all customer needs. This usually happens because the customer requirements were not sufficiently taken into account during design. Other problems relate to the costs of the changes made to the project at the various stages of the product life cycle. Changes made in the product design phase are the cheapest ones, since they do not involve major organizational changes in production or in the setting of machines. More expensive interventions are those made in preproduction and production.

A question then arises: "how do we take into account, in the design phase, all the requirements related to the life cycle of the product?"

An answer to this question comes from the "Design for X" methodologies developed in recent decades. These methodologies include design rules that are nothing more than the requirements expressed by the various customers: production, procurement and the end user.

Of particular interest are the applications and the methods developed to improve quality in design. In this work we will refer in particular to:

• Taguchi Methods (Robust Design);
• Design Of Experiments (DOE);
• local sensitivity analysis.

In the following chapters the techniques mentioned above will be analyzed in detail. They will be used to reach the goal of this work, that is, the definition of a sensitivity index that tells us whether or not a system is robust. By robust system we mean a system that maintains high performance despite the intervening action of the noise factors. It will be fundamental to try to incorporate these studies in the very first stage of design, in order to get to market as fast as possible (Galgano, 1992). The work is divided into 4 parts.

The first part provides a research and review of articles and works published by several authors on the specific topics of our work. This part aims at finding as much information as possible on the subject, so as to create the knowledge base necessary to perform the study. Also relevant is the definition of the main metrics used in the field, necessary to understand the importance of their use.

The second part is the heart of the work, in which we develop three methods that yield the sensitivity indices we were looking for:

• the first method starts from the study of the signal to noise ratio and, through its processing, allows us to obtain the first indicator of sensitivity;

• the second method uses factorial planes, an approach based on the logic of the DOE, which leads to the definition of another sensitivity index;

• the last method is based on partial derivatives. It follows an approach quite different from the previous ones, as it is based on a mathematical model to obtain the sensitivity index we wanted.

The third part provides the application of the methods developed to a real case study. For this we chose the Toyota gas pedal, a faulty design that created many problems for the company. We started from the description of the pedal and its operation, then described its critical aspects. Finally, we applied the three methods in order to understand where we would have to act to change the design of the product correctly.

The fourth part provides the definition of a method that tells us, in absolute terms, the threshold value of the sensitivity index beyond which a given design needs to be changed. After that, we built a program in C++ that resulted in a tool for the validation of the models obtained. From an analysis of the results obtained with the program, we may say that the techniques applied to the selected case gave us the expected results.

In the last part we identify the advantages and disadvantages of each method, so as to indicate future research directions for the completion and integration of the three methods.

2. Literature Review

On the meaning and aim of Robust design

Since the beginning of the 1980s, the methodologies of "Design Of Experiments" and "Robust Design" have become the subject of growing interest in the industrial world, as their applications have provided results of growing importance, mainly thanks to the refinement of their application strategies.

The experience gained so far, thanks to strong industrial development, has shown that the economic success of industrial initiatives is conditioned by proper economic planning, construction and production. Hence, the aim is to minimize the still inevitable design errors, leading to a strong improvement in the result of any initiative. When we talk about design, it must be understood in a broad sense. It cannot be considered as a sequence of static phases leading to the making of the final product; rather, it has to be considered as a coordinated set of phases, each heavily dependent on the previous one and of which it is difficult to identify both beginning and end.

Design, therefore, cannot be considered closed when the production phase begins, but should be thought of as a continuous journey, as it may be necessary to go back to that stage even when you are far along in the development process; this is often due to the constant need to implement changes and improvements to the original project. Basically, the design phase lasts throughout the development cycle of the product itself (Yuin et al., 2000).

The techniques of Robust Design fit right into the context of design, focusing on the problems that may arise from the use of the product in different environmental conditions and during its entire life cycle. They tell us how the performance of a product/process remains predictable despite variations in some of the parameters that characterize it. Having a robust design is synonymous with quality for the process, because it allows us to meet customer requirements, approaching as much as possible the specifications indicated by them. Its main purpose is to find the best combination of the project parameters, so that the dispersion of the global response of the system around the requested value is minimal, regardless of the conditions of use.

It should be noted, however, that robust design, being oriented only to finding the best values of the factors studied, deals only with removing the unwanted effects, without dealing with the removal of their causes (Yuin et al., 2000).

Achieving robustness

Achieving the robustness of a system means ensuring that its performance will remain stable regardless of the effect of the influential, yet not controllable, variables. A system is a set of elements and relationships that interact with each other in order to perform one or more functions. The elements and relationships that allow the system to perform its intended functions are connected by a network of processes. Thanks to a decrease in the input variance of the individual processes we get an overall improvement of the average response of the system, which then increasingly moves towards the values quoted in the project specification and, as a consequence, closer to what the customer required.

One of the tools used by today's companies to obtain a robust system is off-line process control. As is known (Kackar, 1985; Ryan, 1989), from the 1920s to the present there has been a shift in the field of quality control from the operational phase of inspection, during actual production, to the phase of product or production process design. In other words, while in the 1920s quality control was carried out primarily on-line, i.e. during the processing cycle, with the passage of time the main role has been assumed by off-line control, in particular in the product design phase.

Off-line quality control is a methodology that leverages the design of experiments, aimed at optimizing the quality performance of products and production processes; it improves the intrinsic quality of the products, or their robustness. The evolution of the concept of quality in Japanese companies is well represented in Figure 2, where you can see that already in the 1980s about 50% of quality was ensured through the design of experiments.

Figure 2: Quality contributions

In relation to what has been said, how should quality be understood? The company sees it as a way to minimize waste and produce parts within specification, while the customer understands it as the product's ability to maintain its functional characteristics even when subjected to harsh operating conditions. Such products are said to be robust; this means that traditional quality control of the product and of the process is no longer enough, and that quality is primarily a virtue of the design.

So the question is how designers behave. They often tighten tolerances unjustifiably, in the belief of improving the performance of the product, thereby imposing excessively precise machining and creating difficulties in the production process. In many cases a production process inadequate to maintain such tight tolerances will often produce non-compliant features. In this way the cost of quality rises unnecessarily. On the contrary, in order to achieve quality the designers should:

• specify the widest possible tolerances;

• search for the elements of the product most susceptible to process variability;

• modify these elements to make the product more stable.

This allows obtaining a robust product, that is, one able to tolerate the effects of some sources of variation while maintaining consistent performance. Off-line process control is the tool to improve the quality of the product, and it does this through statistical experiments. It differs from the traditional on-line approach, which takes for granted the presence and influence of variability in the production process and merely aims to remain within specifications.

In particular, off-line quality control methods are "intervention techniques" for cost and quality control applied during the early stages of product or production process design. The benefits of the application of these methods are:

• improved manufacturability of the product;
• improved product reliability;
• reduced development and maintenance costs throughout the life of the product (Kackar, 1985).

The merit of having given impetus to off-line Quality Control in the early 1980s basically goes to Genichi Taguchi, a Japanese engineer. He introduced a study methodology aimed both at the design of products with reduced variability and at the achievement of more reliable and stable production processes. He introduced these methods in the off-line phase precisely to minimize the subsequent on-line intervention techniques.

The American Society for Quality Control defines quality as "The totality of the aspects and characteristics of a product or service that affect its ability to meet certain demands."

In this sense, the product must obviously meet the material and psychological demands of the consumer, and it is therefore in the interest of the manufacturer that the product be as responsive as possible to these requests. The more the product moves away from the desired characteristics, the greater the monetary loss it causes, a loss that for Taguchi affects society as a whole.

Taguchi and the robust design

Taguchi methodology is considered today one of the best techniques for robust design. Taguchi said that "the quality of a product is the loss imparted to the society from when the product leaves the factory" (Taguchi, 2005).

In this sense, Taguchi associates a loss with every product that reaches the consumer, whose cost includes customer dissatisfaction, agency costs for the manufacturer, and loss of reputation and market share for the manufacturing company. In other words, the quality of a product is inversely proportional to the loss that it imparts to society.

So let us now analyze the meaning of loss (of quality) for Taguchi. First of all, for Taguchi quality is not a value, since the evaluation of the value of a product is a subjective evaluation, which belongs to the fields of marketing and/or sales planning, whereas quality is essentially a technical problem. Secondly, the loss in quality can be reduced to:

1) loss due to variability in operation;
2) loss due to harmful side effects.

For Taguchi, an object of good quality must have small harmful side effects and no variability in its technical performance. Quality control must intervene precisely in the reduction of these two kinds of losses, so that the product, once placed on the market, causes the smallest possible losses to society.

This new way of thinking about quality is opposed to the previous vision called "zero defects" (Dean, 1991), according to which a product is defective only when the value of its characteristic falls outside the set tolerance ranges.

The real problem with this view is that it does not conceive quality from the point of view of the customer; moreover, in a highly competitive market like the current one, the opinion of the customer cannot be ignored, and his voice must be brought within the company from the earliest stages of product design (Simatupang et al., 2004). Actually, what the customer expects from our product is that its performance will remain stable over time, regardless of the action of humidity, high temperatures, accidental drops, wear of materials, etc.; in brief, regardless of the action of what we have called noise factors, whose actual occurrence over time is difficult to predict.

Taguchi's techniques operate through experiments that lead to the realization of a robust product, i.e. one that manages to maintain its performance even under the action of uncontrollable factors.

Taguchi's thought revolves around two fundamental concepts. The first is that a product can be defined as robust if it is insensitive to the conditions of its end use; similarly, a robust process is one that continues to produce good parts even if there are disturbances in the parameters involved.

Robust design, therefore, aims at designing a process so that it is able to operate at a high level of quality and reliability, with little supervision, despite a normal level of disturbances.

The overall goal of the techniques of Robust Design, therefore, is to find the right combination of the values of the controllable factors which allows minimizing the variation in the output induced by the noise factors. The more the variation in the performance of the system is reduced, the greater the quality offered to the customer. The methodology of the techniques of robust design is to act in the design phase of the process, operating on the parameters that most influence the variability of the product. Through the methods of robust design, we can make observations on the variability of the parameters, assuming alternative scenarios and assessing the influence of disturbing factors. Robustness therefore means "small variations in performance of a product as a result of a change of its parameters" (Phadke, 2013). This also means that the parameters have a low level of sensitivity, which represents an index of relative importance for identifying the critical factors on which action is needed. The goal of the study will be to find a method that makes our sensitivity index absolute, in order to define the conditions in which the design of the product needs to be changed to increase quality.

The second concept on which the Taguchi method is based is the quality assessment of a product through what he calls the "quality loss function". The goal must be to achieve improvements in cost and quality, optimizing the design and production phases (Poole, 2008).

Taguchi is not convinced that the distinction between good and bad products can be made by checking whether the values of the specifications fall within the defined tolerance range. In fact, between a product that is just inside the specifications and one that is just outside, there is very little difference from the point of view of the quality perceived by the customer. However, one is considered good (and will therefore be accepted), while the other will be discarded.

Taguchi states (Taguchi, 1990):

"Some market surveys indicate that consumers prefer images having a particular color density: let us say that the nominal (optimal) value of color density is 10; when the color density deviates from 10, the vision becomes increasingly unsatisfactory, so that Sony set a functional tolerance range of between 7 and 13.

The televisions were manufactured in Tokyo and San Diego. In San Diego there existed the practice of not delivering to the customer any device with color density outside the permissible tolerance (zero defects): the color density appeared evenly distributed within that tolerance. TVs in Tokyo were instead concentrated around the nominal value, although, out of about 100 devices, about 3 fell outside the tolerance range: Tokyo delivered all the equipment it produced.

What is the cheapest policy? Suppose you buy a device with a color density of 12.9 while our neighbor bought one with a color density of 13.1. Obviously we are not able to distinguish the difference by looking at the pictures of the two devices. But suppose that both customers see pictures of a device with a density of 10: the next day both will call the service center or ask for a replacement."

According to Taguchi (Taguchi, 2005), we must insist that every small deviation from our goal (the target value defined for the parameter) is a cost of quality; in other words, the more we move away from the target value, the more the loss of quality for the customer increases.

One method to clearly identify all the costs related to variance is the "quality loss function". This parabolic, U-shaped curve describes the economic loss suffered by a producer due to the variability around the target value of one or more parameters of its production processes (see Figure 3).

Figure 3: Quality loss function


As can be seen from Figure 3, the costs related to quality are minimal when the dispersion of the parameters of a product is close to the nominal value.

The formulation of the loss function depends upon the type of characteristic to be measured (Taguchi, 2005); the standard mathematical forms are summarized after the list below:

• Lower is better: suitable for quantities such as noise, consumption, losses, etc., that is, for all quantities that need to be minimized to achieve maximum product quality (see Figure 4).

Figure 4: lower is better

• Higher is better: suitable for quantities such as speed, duration, etc., that is, variables which need to be maximized in order to obtain a high quality product (see Figure 5).

Figure 5: higher is better

• Nominal is best: in this case we have a target value, defined for the product, which must be reached, together with an upper and a lower limit. Quality is, in this case, defined in terms of deviation from the nominal value (see Figure 6).
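For reference, a compact summary of the three cases follows. These are the standard Taguchi textbook formulations (they are not spelled out in the text above), where y is the measured characteristic, m its target value and k a cost coefficient:

```latex
% Standard Taguchi loss-function forms (assumed textbook formulations):
%   y = measured characteristic, m = target value, k = cost coefficient
\begin{align}
  \text{Lower is better:}  \quad & L(y) = k\,y^{2} \\
  \text{Higher is better:} \quad & L(y) = \frac{k}{y^{2}} \\
  \text{Nominal is best:}  \quad & L(y) = k\,(y - m)^{2}
\end{align}
```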

Hence, the economic loss can be reduced by continuously reducing the variability of the response of the system. The control of variability can occur at different stages of the development cycle of a product:

• product design;
• process design;
• manufacturing.

As we can see in Table 1, the phase in which it is best possible to intervene to limit variability is the product design phase, since it allows acting on all three sources of variation, while the other two stages only give the possibility of limiting unit-to-unit (production) variation.

Stage of product development | Environmental variation    | Product deterioration      | Production variation
Product design               | intervention possible      | intervention possible      | intervention possible
Process design               | intervention not possible  | intervention not possible  | intervention possible
Manufacturing                | intervention not possible  | intervention not possible  | intervention possible

Table 1: Phases in which variability can be limited

Taguchi (Taguchi, 2005) also argues that: "the study of the quality in the stage of product design is particularly important because, while the variability can be reduced even during the stages of production, this cannot occur for the perishability of the product or the inadequacy of the environment. All these quality problems can only be addressed in the design stages".

The variability is caused in general by three types of noise:

• external;
• internal;
• noise;

which are characterized by specific factors, dependent on the production process under consideration.

External noises (outer noise factors for Taguchi) are environmental variables or conditions of use that alter the functionality of the product. Two typical examples are temperature and humidity.

Internal disturbances (inner noise factors for Taguchi) are disturbances due to deterioration from the use of the product or from long storage times.

Finally, the so-called noise (noise factors) constitutes the source of accidental error that causes differences between individual products, reducing reliability and uniformity.

The good functional quality of a product is thus achieved when it is possible to reduce the three above-mentioned types of noise, so that the product is able to provide smooth operation under a wide range of environmental conditions of use and for the duration of its useful life. In this sense, functional quality is synthesized as a deviation from the desired nominal target. To reduce the three types of disturbance Taguchi proposes three phases of intervention, which may be introduced in each of the three stages of the development cycle of a product, but which are obviously of fundamental interest in the product design stage. The three phases are (Wysk et al., 2000):

• system design;
• parameter design;
• tolerance design.

System design

For the design of the system, you can take as a reference the P of the Plan-Do-Check-Act (PDCA) cycle developed by Deming (Sokovic et al., 2010).

• Plan: the design phase, which starts from the definition of the objectives in input to the process. These objectives result from the deployment of the objectives defined by top management at the higher levels of the organization. In this phase, the processes needed to achieve the planned objectives, and the relevant interactions between processes, are identified. The process is also studied according to the customer requirements and, if necessary, broken down into sub-processes. The decomposition of the process continues up to a level of detail such that it is possible to define the method to achieve the aim.

• Do: for each process, the method of "doing" that allows reaching the identified objectives is defined.

• Check: the control phase of the process, in which all the methods necessary for its control, in order to obtain the desired performance, are scheduled.

• Act: the phase that allows establishing, on the basis of the check results, whether action is needed to make changes to the process in order to obtain the desired objectives.

Still in the planning stage, a graphical representation of the system can support us. It is made through the p-diagram (Phadke, 2013), which allows us to highlight the various elements that constitute the system (see Figure 7):

• input values necessary to get the desired response (signal);
• output values that represent the objective sought (response);
• control factors;
• noise factors.

Figure 7: p-diagram

As shown in Figure 7, the inputs and outputs of the system are strongly correlated with each other: that is why, by manipulating the value of the input parameters of each process, you can improve the overall output of the system. For the study of the system it is very important to identify the relationship between the input parameters and the output. This relation is called the transfer function (Taylor, 2004).

Figure 8: relationship between input and output

Understanding how the inputs affect the output is crucial to finding the values of the input parameters that allow us to obtain the response we have defined.

However, other parameters can also act on the system to modify the response, and these do not always affect the system in a predictable manner. In the making of a system, not all variables affecting its performance are controllable, for both technological and economic reasons. The response of the system is therefore subject to a random dispersion, much more pronounced when the input deviates from the nominal value in an unpredictable manner.

The objective of robust design is therefore to identify, through appropriate techniques, those factors (called noise factors) which make the system less predictable and therefore reduce its robustness, taking action to reduce the variability caused by these parameters.

The causes of the deviation of the parameters of a product from the target value are two:

• Control factors: also known as design factors; designers can act on them very easily to change the values of the output, since they are both measurable and controllable. They represent the variables, internal to the manufacturing process, that affect the optimal calibration of the product in the design stage. They are indeed of high interest, since their optimal combination allows for the definition of the best experimental result in the design phase, regarding both the achievement of the target level and the minimization of variability.

• Noise factors: factors on which it is not possible to act, because they cannot be measured or controlled by the designers. The impossibility of measuring such factors may be linked to the excessive cost of their measurement, or they may be out of control because their variation is not easily predictable. However, taking their effect into account is essential to calibrate the product on customer requests (Yuin & Alan, Taguchi Methods for Robust Design, 2000).

Parameter design

In the parameter design phase we define the target values of the two types of factors that, acting on the process, alter the response and therefore may influence customer satisfaction.

This phase is at the heart of robust design techniques: tests and calculations are performed to find the best combination of control parameters and noise parameters. Reducing process variability is possible by acting both on the controllable factors and on the noise factors. However, acting on noise factors is very difficult for designers, since they do not know well how these parameters change. For this reason, it is generally preferable to act on the controllable factors, so as to bring the system's response to the desired values despite the action of the noise parameters.

For Taguchi, this pursuit of the optimal combination of levels generally leads to improved quality and reduced costs. The more the factor levels deviate from the optimum situation, the more the output (response variable y) deviates from the nominal value set, producing a loss expressed as:

L(y) = k (y - γ)²     (1)

So the combination of factor levels of the design is optimal when it minimizes the expected loss, or the risk.

Parameter design thus aims at the creation of a production system that ensures the stability of quality with respect to external variations (e.g. environmental characteristics) and internal ones. The importance of this step is due not only to the improvement of quality but also to the reduction of costs, because inexpensive components can be used. Starting from inexpensive components and reducing the variability around the target value of the characteristic y, without increasing costs, constitutes a design technique also known as the exploitation of non-linearity (Taguchi, 1991).
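The link between variability reduction and loss reduction can be made explicit with a standard expansion of the expected quadratic loss (a textbook identity, not derived in the thesis), where μ and σ² denote the mean and variance of the response Y and γ its target:

```latex
% Expected quadratic loss decomposed into a variance term and an off-target term
% (standard identity; \mu = E[Y], \sigma^2 = Var(Y), \gamma = target value)
\begin{align}
  E[L(Y)] &= k\,E\!\left[(Y-\gamma)^{2}\right]
           = k\,\left[\sigma^{2} + (\mu-\gamma)^{2}\right]
\end{align}
```

The expected loss therefore decreases both by centering the mean response on the target and by reducing its variance, which is exactly the double aim of parameter design.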

Parameter design is based on the study of the factors involved in the system. We thus have a response (output) of the process and several factors involved in the system. The choice of the factors to be considered is essentially due to the prior knowledge of the investigator. Obviously, not all factors can be considered, both because some are not known and because the inclusion of a large number of factors can create problems in the interpretation of the results, given the limited number of experimental trials.

The fundamental distinction among the design factors is between control factors and noise factors. The control factors are the variables whose specifications (levels) are to be determined by the manufacturer. Each combination of the levels of the control factors determines a precise output. The more a generic combination deviates from the optimum, the more the expected value of Y, E(Y), deviates from γ.

The noise factors can be divided into external and internal, a distinction linked to the one already described above between internal and external disturbances. We must observe that the performance of a product may differ from the nominal value because the control factors are affected by external noise; in this case the deviations from the nominal value are due to external influences. If a product is stable with respect to internal effects, it will be stable also with respect to the external disturbances acting on it indirectly.

The parameter design phase can be articulated in the following points (Kackar, 1985):

1) initial identification of both the design factors and the noise factors, and determination of their ranges of variation;

2) construction of the experimental design, articulated into a design matrix (inner) and a noise matrix (outer);

3) execution of the experiment and evaluation of the Signal-to-Noise Ratio (SN) for each experimental test; the SN may be defined as a measure of performance which differs, as we shall see, according to the experimental situation analyzed;

4) use of the SN values to determine the optimum combination of the design factors;

5) verification of the actual improvements provided by the result of the previous phase.
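As an illustration of points 2) and 3), the following C++ sketch crosses a small inner (design) matrix with an outer (noise) matrix and computes, for each control-factor combination, the standard nominal-the-best SN ratio, 10 log10(mean² / variance), in decibels. The factor levels and the response function are invented placeholders, not data or models from the case study of this thesis.

```cpp
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// Placeholder response y = f(control factors, noise factors); a real
// application would use the transfer function of the system under study.
double response(double a, double b, double n1, double n2) {
    return a * b + 0.5 * a * n1 - 0.3 * b * n2;
}

int main() {
    // Inner (design) matrix: illustrative levels of two control factors A, B.
    const std::vector<std::pair<double, double>> design = {
        {1.0, 10.0}, {1.0, 20.0}, {2.0, 10.0}, {2.0, 20.0}};
    // Outer (noise) matrix: illustrative levels of two noise factors.
    const std::vector<std::pair<double, double>> noise = {
        {-1.0, -1.0}, {-1.0, 1.0}, {1.0, -1.0}, {1.0, 1.0}};

    for (const auto& d : design) {
        // Run the same design point against every noise combination.
        std::vector<double> y;
        for (const auto& n : noise)
            y.push_back(response(d.first, d.second, n.first, n.second));

        double mean = 0.0;
        for (double v : y) mean += v;
        mean /= y.size();

        double var = 0.0;
        for (double v : y) var += (v - mean) * (v - mean);
        var /= (y.size() - 1);  // sample variance over the noise runs

        // Nominal-the-best SN ratio in decibels.
        const double sn = 10.0 * std::log10(mean * mean / var);
        std::cout << "A=" << d.first << "  B=" << d.second
                  << "  SN=" << sn << " dB\n";
    }
    return 0;
}
```

Under this invented model, the design point with the highest SN ratio is the most robust combination of control-factor levels.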

Tolerance design

The last step involves the definition of the tolerance ranges, which represent the range of variation of the values of the process parameters. Industrial development is based on the concept of "machining tolerance": no production will ever yield two identical objects, for a variety of reasons such as:

• machinery inaccuracies;
• human inaccuracies;
• machinery wear.

Moreover, the finished product is itself an aggregate of parts in tolerance. Product tolerances express the different levels of performance that a customer can expect by purchasing an asset.

This is a very delicate phase because the tolerance ranges should be neither too large, since the part is likely to be too inaccurate, nor too tight, since pieces that might be fine are likely to be discarded. In the previous phase, the most sensitive parameters were identified, that is, those whose change most influences the change in the results; it is on them that we will focus our attention in the definition of the tolerance ranges.

State of the art

Nowadays, the use of robust design in industry is still limited because of the difficulty of applying it in the early stages of the development of a new product.

The challenge today is to do well on the first try, going to market with the right product and shortening the time to market. The difficulty of applying robust design is due to the difficulty of getting, in the early stages, all the information needed to implement its techniques, making it possible to obtain sensitivity values so as to know immediately whether any changes to the design should be implemented. The metrics that we will develop in this work will be suitable for use in the design phase, because they do not require the tolerance ranges (which are generally defined in the later stages).

3. Research Problem and Conceptual framework

"Quality First", has been the mantra many studies done in the field of quality. The main reason is that the studies have been carried out on the customer, today the consumer; however today the costumer no longer considers only the price but also the quality of the product. And this is because the increase of the wealth status and the specific skills acquired, put him/her in a position to consciously choose what he/she is buying. Consequently, he/she is willing to spend even more to get the quality that he/she expects. So the new target for the companies, to measure in a highly competitive market, must be to fully satisfy the customer. (Evans et al., 2012)

The first key concept for customer satisfaction is filling the gap between the actual quality and the expected quality.

The former is the quality actually perceived by the customer when using a certain product, whereas the latter is the quality that the customer expects when buying that product. It is the main reason why he/she chooses one product rather than another on the same shelf.

Quality is related to many factors, including (among others) the quality guarantee given by the brand and thus the history of the company. The degree of customer satisfaction depends on the gap between the expected and the actual quality.

W. Edwards Deming said:

"... it is not enough to have quite satisfied customers. Satisfied customers go away for no good reason, just to try something new. Why not?

The profit and growth of a company come from customers who are delighted by your product or your service, loyal customers. These customers do not need and do not need advertising incentives: they come to you to buy and carry with them a friend. "

Considering that:

• technologies are increasingly standardized and widespread;
• we are witnessing an almost total saturation of the economies of scale (given the tendency towards large oligopolistic markets);

it is very difficult to further reduce costs.

Therefore, to increase competitiveness we must act on the quality variable (also expressed in terms of a proper quality policy), linked to the control of production processes, reducing the enormous cost of non-quality. It is possible to lower the prices of products regardless of the technology used or of economies of scale, through the pursuit of quality itself. From this point of view, price and quality can be thought of as two related quantities.

They are also related through another element, that is, the utility function of the customer. Utility is the value that indicates the level of satisfaction that a consumer gets from a good, or from a basket of goods, with given characteristics. The utility function is then the function that assigns a level of utility to every good or basket of goods.

U = f(C, P)     (2)

where:
U = utility level;
C = characteristics of the good or basket of goods;
P = price of the product.

This function grows with the expected quality and decreases with the price, as both are characteristics of the product. The company can therefore act both on the price and on the quality characteristics to allow the user to maximize his/her utility by purchasing the products of the company, making it at the same time more competitive.
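In formal terms, the sentence above simply states the following sign conditions on the partial derivatives of the utility function (a compact restatement of the text, not a formula taken from the thesis):

```latex
% Monotonicity of the utility function U = f(C, P):
% utility increases with the quality characteristics C and decreases with the price P
\begin{align}
  \frac{\partial U}{\partial C} > 0, \qquad \frac{\partial U}{\partial P} < 0
\end{align}
```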

Of course, to do this, we must also try to estimate the utility function of the customer by carrying out appropriate market research. In this context, the customer can be defined as "a judge of quality", being the starting point for making quality; as Feigenbaum (1991) puts it, "Quality is what the customer says, not what the company says." Without customers the company would no longer have any reason to exist, and if it does not conform to their demands it risks falling behind and being overtaken by the competition. That is why when Deming (1986) describes production as a system, he puts the customer first. At this point we must find a way to chase the customer's needs.

The Quality

We have so far talked about quality, but to achieve quality we must first define what it is and how it should be measured. In the literature we find various definitions on the subject:

• quality is the degree to which a specific product meets the needs of a specific consumer (Gilmore, 1974);

• compliance with specifications (Crosby, 1979);

• loss generated by the product from the moment in which it is shipped to the customer (Taguchi, 2005);

• predictable degree of uniformity and reliability at low cost and suitable to the market (Deming, 1986);

• doing the right things the first time (Price, 1985);

• suitability for use (Juran, 1974);

• the complex of commercial, design, production and maintenance characteristics that allow a product/service to meet customer expectations (Feigenbaum, 1991);

• keeping deviations within the established tolerances (Toyota).

Among all these definitions, the one that today seems most complete, and above all centered on the basic concept of the spirit of action of a company, is that argued by Ulrich and Eppinger (Ulrich & Eppinger, 2005): "Quality is the set of characteristics of a system designed to meet the needs of the customer, the User and the Company"; basically, the characteristics of systems that can meet the needs of the customer/user/company. It is obvious that we cannot define a list of features that works well for any system in any situation; Galetto (1989) has proposed a list of 10 variables that may work for industrial products; they are placed at the vertices of a tetrahedron (see Figure 9):

Figure 9: quality tetrahedron adapted from (Galetto, 1989)

It is obvious that the weight of these 10 features varies depending on the system that is considered.

According to Eppinger and Ulrich (2005), many books dealing with quality today fail to treat it according to the characteristics mentioned above, that is, they do not highlight the "needs to be covered"; rather they focus only on "customer satisfaction". Meeting the needs of the customer, the User and the Company is much more worthy than merely satisfying the customer. This means that quality has to be measured not by "how many are happy", but by "meeting the needs" of the three actors aforementioned. It is still very important, when designing a system, to determine, by means of appropriate market research, the characteristics that best meet the needs of the customer (from now on we will use only the term customer to refer also to users and to society), starting from those that obtain the maximum satisfaction.

Applying quality

What does it mean to do quality? It means achieving the quality of products, processes and of any necessary system, in order to implement preventive actions able to predict and avoid potential problems in the future. In this view, then, Quality requires that you prevent mistakes.

To do this, it is essential to educate people to think while they work, because otherwise they will never be able to critically analyze situations and make decisions that avoid potential problems.

Crosby (1986) understood quality as prevention, to the extent that he included it among what he called the "absolute truths" of Quality, on the grounds that:

1. quality is conformance to specifications;
2. prevention ensures quality;
3. the standard must be Zero Defects;
4. the evaluation criterion of quality is the cost of non-compliance.

It is not easy to fully share these "absolute truths", because compliance is just one of many aspects of quality, just as the concept of "Zero Defects" has not always translated into the total elimination of defects. In this view, how can we expect to prevent failures?

The second "absolute truth" instead brings out the importance of preventing which, according to Crosby (1986) is to eliminate all occasions of error on a large scale. Crosby (1986), however, seems not to consider that is not always possible to prevent errors: when that happens, you have to recognize them with intellectual honesty and scientific spirit and implement appropriate corrective actions.

We can then take inspiration from Crosby (1986) to say that prevention and corrective actions ensure quality.

Reliability

It is clear that to prevent, one must consider all stages of the life cycle of the product, from the identification of the needs of the market, through the use of the product by the customer, until a new product is launched on the market. The phases of the development process of a new product are the following (see Figure 10) (Cooper, 2008):

Figure 10: phases of development process of a product

The process of setting and achieving goals for a company always starts from the identification of customer needs, based on which design and engineering will intervene, turning ideas into new products/processes that will meet those needs. This can be expressed graphically with the development cycle proposed in Figure 11:

Figure 11: Development Cycle

Let us dwell on the design phase that precedes product engineering.

It is the first stage in which the product idea becomes what will be the actual product, namely the phase in which the product takes shape. Consequently, it is a very delicate phase on which the subsequent development phases depend.

When we make a new product it is good to keep in mind that the early stages of design are the most important ones, in which to concentrate the maximum effort, because the further we go forward in the development, the more difficult and costly it becomes to make changes to the product (see Figure 12).

Figure 12: Diagram of the costs of changes to the product

Many companies often believe that it is more useful to concentrate maximum energy in the later stages of product development, in order to have a clear view of all the problems that can come up and try to remedy them.

This approach, however, leads to higher design costs, because making changes to a product that is often already prototyped, or at least in the last stages of the development cycle, involves high costs, implying the need to go back quite far in the design process to solve problems. Because of this approach, it may even be necessary to build new prototypes, run new tests, etc. In contrast, prevention is much more effective and less expensive, since it allows anticipating problems that may occur and remedying them in advance. This makes it possible to obtain a product that is what the customer wants, as quickly as possible and more reliable. To do this, however, we must have full knowledge of the system that we are implementing, in order to be able to "predict" the behavior of the system when, for various reasons, some geometric parameters vary. During the design phase, testing prototypes of the design solutions identified is a very important activity: it allows studying the behavior of the product and highlighting areas of potential weakness, which helps to prevent future problems. However, to be really useful, the trial has to be conducted with the "Scientific Method".

Only in this way does it allow a thorough appraisal of the goals set on Quality. But what is the scientific method based on?

A scientific approach to quality

Montgomery (2009) said:

"Quality Improvement is the reduction of variability in processes and products" ... "Note That this definition implies That if variability decreases, the quality of the product increases."

Variability in the response of the system results in a loss of quality, as it gives rise to unexpected effects that can drive the system response away from the targets that the designer has set, also according to the customer requirements.

In this context the concept of "Robust Design" fits (Wysk et al., 2000). The techniques of robust design fit precisely in the (most critical) design phase, focusing on the problems that may arise from the use of the product in different environmental conditions and during its entire life cycle. They tell us how the performance of a product/process remains predictable given changes in certain parameters which characterize the process.

Having a robust design is synonymous with quality, i.e. reliability, for the product/process; precisely, it tells us that, varying the parameters of the designed system, the response remains predictable and allows us to meet customer requirements, approaching as much as possible the specifications indicated by the customer. Its main purpose is to find the best combination of design parameters that minimizes the dispersion of the global response of the system around the required value, regardless of the conditions of use.

It should be noted, however, that robust design works to eliminate only the unwanted effects, without dealing with the removal of their causes, being oriented only towards finding the best values of the factors studied. The real strength of these metrics is certainly linked to the fact that they can be applied in the early stages of the development cycle of a product/process, so that the critical parameters of the system can be identified immediately, allowing intervention for corrections and, as a consequence, containing the costs of changes. The greatest contribution of this approach has been to connect engineering with statistical methods, in order to obtain improvements in terms of cost, time and deliveries.

This is the main reason why such techniques have been widely adopted in the industrial field and especially in that of the "quality engineering".

The purpose of this work will be to go in search of the scientific methods to make quality.

By the term scientific method we mean the typical mode with which science proceeds to reach a knowledge of reality that is objective, reliable and verifiable. It deals with the collection of empirical data and subsequently with their rigorous mathematical analysis. Through the metrics of robust design, we will try to identify the methodologies that lead to the definition of sensitivity indicators for the design parameters that affect the response (output) of the system. Through these sensitivity indicators it will be possible to identify the parameters that make the system more subject to variability in the response, and hence may reduce the quality of the product. On them, in the later stages, preventive actions will be planned that will help to prevent future problems impacting the quality of the final product.

Why use robust design techniques

Robust Design techniques are applied mainly in the field of Quality Engineering. They represent a variety of approaches that allow predicting and preventing the problems associated with the introduction of a product into the market, after it has been sold and used by customers. Traditional engineering, which does not use the metrics of Robust Design, relies on troubleshooting, failure analysis, and a process of designing and running tests which allow predicting, with a high degree of certainty, the problems that may be encountered. This process is very expensive and especially long, because of the difficulty of obtaining and analyzing data, and it cannot always give the desired results. The techniques of robust design improve the quality of existing products and processes and at the same time manage to reduce costs with the minimum use of resources and man hours. This is the main reason why these techniques have been widely used in the industrial field. Such techniques act in the design phase of the process, that is, in the early stages of the development cycle of the product, examining the most sensitive parameters, namely those that have the most influence on the variability of the response. This allows, if necessary, making changes at lower cost, because later we enter phases where introducing changes to the product is much more costly (Yuin et al., 2000).

4. Project approach

In the previous chapters we have highlighted how important it is to have a robust design, which allows us to come to market in the shortest possible time and with a product that meets customer specifications.

The objective of this study will therefore be to start from a system that already exists and apply methodologies that allow us to build reliable systems and therefore obtain quality products. With the help of the metrics of robust design, we want to find new indicators that describe the sensitivity of the design parameters of the system itself. By the term "sensitivity" we denote the concept opposite to the robustness of a parameter: a parameter with a high level of sensitivity tells us that, by varying that parameter, the response of the system will vary greatly and unpredictably. This indicator is of great importance because it is closely linked to the concept of system variability.

Each system is characterized by design parameters that affect the behavior of the response variable of the system, the latter being the dependent variable and the output.

For every designer it is very important to check the response of the system, because the more predictable it is, the easier it will be to make products that meet the specifications required by the customer. With this goal, then, we will try to find ways to identify, among all the system parameters, the ones that most need control, mainly because they are responsible for the excessive variability of the system.

For the determination of the sensitivity index we will use the classical methods of robust design. They are based on statistical methods for the design of experiments (Design Of Experiments, DOE) (Yuin et al., 2000); such methods were conceived in the 1920s and are oriented to efficiently obtaining information on the correlation between the different variables that characterize a phenomenon under experimentation.

Taguchi developed an original DOE method and integrated it with a particular formulation of the concept of quality, arriving at the definition of a design "philosophy" known precisely as Robust Design. The basic objective of this approach is to maximize the quality of a product or a process by minimizing its sensitivity to all the factors that tend, in various ways, to deviate its performance from the one deemed optimal. It is therefore an optimization method which, when applied to industrial products or processes, takes into account their intrinsic variability by means of statistical methods.

A product, to be robust, must therefore have a small variation of performance even in the presence of noise factors such as variations in the conditions of use, variations in the production process, deterioration, etc. This means that a robust product must be designed going beyond simple technical feasibility: it must ensure its full functionality over time and in any situation or environment of use. Therefore, in line with the basic concept of the metrics of robust design, we will make use of statistical and mathematical methods to derive the sensitivity indices that indicate the degree of robustness of each parameter of the system.

The system

First we give a clear definition of the system, which will be the main object of our study.

A system is defined as an entity, both physical and functional, consisting of several parts (system variables) interacting with each other (and with other systems), forming a whole in which every part contributes to the purpose of the whole system (i.e. to the output response of the system). The characteristic of an engineering system is that it is always well organized, with a specific denomination and a certain purpose.

In systems theory, a method of representing systems is the "black box model", in which a system, similar to a black box, is described essentially in its external behavior, looking at how the output reacts to a given stress in input. Its internal operation is not visible, or is unknown. This definition comes from the consideration that, in the analysis of the system, what is really important (at the macro level or for practical purposes) is the external behavior, especially in a context of multiple interconnected systems, rather than the internal operation, whose result is precisely that external behavior (Backlund, 2000).

Although black box models are unknown in their operation or behavior, it is still possible to trace the dynamics of their internal characteristics, in the test phase or a posteriori: in fact, what characterizes the dynamic behavior of the system is its transfer function Y = f(x), an empirical model that links the performance of a product, and therefore the output of a system, with the design parameters which represent the inputs of the system (see Figure 13).

Figure 13: black box model

On each system also act some process parameters that influence the response. These represent the design parameters that the designer can control to adjust the response to the levels that the customer requires. However, the system is not isolated: it is immersed in an external environment that somehow influences it, and it does so through the noise parameters of the system. These are parameters that the designer cannot control and that may affect the response unpredictably, driving it away from the target values.

It is therefore very important to study the variability of the system caused by these parameters in order to reduce it to a minimum, thus obtaining the closest possible adherence to the specified values.


System analysis starts from the definition of the response value that we want to achieve, defined on the basis of the customer's requests and translated by the designers into product specifications. This is followed by the identification of all the factors that may influence the response, that is, the controllable parameters; for each of them a range of variation is selected, allowing us to study the variation that each one causes in the response. These parameters are critical, because they are the only ones that the designer can easily check and adjust, and that is why our study will focus on their analysis. We will try to find the combination of values of the controllable parameters that makes the system response as little variable as possible, despite the action of the noise factors.
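As an illustration only, the black box view can be sketched with a hypothetical transfer function: the model, the parameter names and the noise term below are assumptions made for the example, not the relations of the case study.

```python
import random

# Hypothetical transfer function Y = f(x) of a black box system:
# x1 and x2 are controllable design parameters; the additive noise term
# stands in for all uncontrollable factors (wear, operator, environment...).
def transfer_function(x1, x2, noise_sd=0.0):
    nominal = 3.0 * x1 + 0.5 * x2 ** 2         # assumed empirical model
    disturbance = random.gauss(0.0, noise_sd)  # uncontrollable noise
    return nominal + disturbance

print(transfer_function(2.0, 4.0))                # ideal response, no noise
print(transfer_function(2.0, 4.0, noise_sd=0.2))  # response perturbed by noise
```

In this sketch the designer can only set x1 and x2; the noise term models the part of the response that remains outside his control and that drives it away from the target.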

Signal to noise ratio

The first method that we analyze is based on the Signal-to-Noise Ratio (SNR). This is an index that measures the performance of the product/process by calculating how much the noise influences the average of the input signal. Although it originates in the field of electronics, it is now widely used in design. This index is inversely proportional to the loss of quality caused by moving away from the target value defined in the design phase. This allows us to say that it corresponds to what we have defined as robustness, and its inverse is therefore what we call "sensitivity" (Yuin et al., 2000). The SNR is a number that expresses how much the signal part, which in our case is represented by the controllable parameters of the system, is influenced by the part of the parameters that cannot be controlled (noise factors). Its value is calculated for each parameter and indicates its robustness, i.e., how predictably that parameter affects the response of the system. The SNR is computed as follows (equation 3):

SNR = 10 · log10( P_signal / P_noise )    (3)

where P_signal is the power of the useful signal and P_noise is the power of the noise.


To see how a single parameter influences the response of the system, we consider:

 the nominal value of each parameter Xn;

 a range of variation for each of them (Xn - 1%, Xn + 1%).

With a program (for example Excel), we generate a sample of 100 random values in this range and, for each value of the input parameter, we calculate the value of the response. In this way we can see how the system response changes as the parameter varies within that range. Finally, for each parameter, we calculate the arithmetic mean of the responses obtained and their standard deviation with respect to the nominal value.

To obtain the value of the SNR we use equation (3) written in terms of the variance:

SNR = 10 · log10( ȳ² / s² )    (4)

where ȳ is the mean of the responses obtained and s² is their variance.

This ratio is an elaboration of the SNR in which the average of the signal is represented by the mean value of the response as the parameter changes. When the value of this index is high, the denominator is small; the denominator is the variance, i.e., the part of the variability of the parameter that cannot be controlled (noise). Therefore, if this index is high, the parameter is robust and influences the system response in a fairly predictable way. This ratio can then be considered an example of a sensitivity index: the lower the SNR, the higher the sensitivity of that parameter.
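The whole procedure (sampling within ±1% of the nominal value, computing the responses, and forming the ratio of equation (4)) can be sketched as follows; the transfer function and the nominal values are the same hypothetical ones used in the previous sketch and are assumptions made for the example only.

```python
import math
import random
import statistics

# Hypothetical transfer function of the system (same assumed model as above).
def response(x1, x2):
    return 3.0 * x1 + 0.5 * x2 ** 2

def snr_of_parameter(nominal, f, rel_range=0.01, n_samples=100):
    """Vary one parameter uniformly in (nominal - 1%, nominal + 1%),
    collect the responses and return SNR = 10*log10(mean^2 / variance),
    i.e. the ratio of equation (4)."""
    ys = []
    for _ in range(n_samples):
        x = random.uniform(nominal * (1 - rel_range), nominal * (1 + rel_range))
        ys.append(f(x))
    mean_y = statistics.mean(ys)
    var_y = statistics.variance(ys)
    return 10 * math.log10(mean_y ** 2 / var_y)

# SNR of x1 (x2 held at its nominal value 4.0) and of x2 (x1 held at 2.0).
snr_x1 = snr_of_parameter(2.0, lambda x: response(x, 4.0))
snr_x2 = snr_of_parameter(4.0, lambda x: response(2.0, x))
print(f"SNR(x1) = {snr_x1:.1f} dB   SNR(x2) = {snr_x2:.1f} dB")
```

The parameter with the lower SNR is the more sensitive one: its uncontrolled variation within the same relative range disturbs the response more.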

This method of analysis can be very useful because it provides a good way to simulate the behavior of a process that produces a series of products. In fact, even though two products come out of the same process, it is impossible for them to be perfectly identical; they will certainly show small differences due precisely to those disturbance factors that cannot be predicted (e.g., wear of the machine, inaccuracy of the operator, environmental conditions, etc.). By identifying which parameters are the most sensitive, this ratio can therefore help to orient the controls in the later stages of the product development process.

Factorial planes

The method described above gives us the information we need in a relatively short time, but it uses the classic way of proceeding, which is to change one parameter at a time in order to see how the response changes. We now consider another method that still uses a statistical approach to obtain the result we want, but varies several parameters simultaneously.

The simplest and most frequently used scheme in the industrial field consists of performing one or more tests for each value of the parameter assumed as independent, keeping all other conditions constant; to evaluate the effects of the other parameters, the procedure is repeated for each one. The extreme simplicity of this way of proceeding does not allow any estimate of the extent of the interactions between the various parameters; in other words, one cannot predict at all the effect of varying two or more parameters simultaneously.

The use of factorial planes is instead based on a multivariate approach, where more than one parameter is changed simultaneously from one experiment to another.

We have:

 factors, i.e., the variables whose effects on the response we want to examine;

 levels, i.e., the different values that the factors can take in different circumstances;

 treatments, i.e., particular combinations of the factors, each at a given level.

Once the levels of the system parameters have been set, the maximum achievable information is obtained through the construction of a full factorial plane, which involves performing tests for all possible combinations of the levels of the various parameters and allows the estimation of models containing all the interaction terms between the parameters involved.

Full factorial planes are the simplest type of experimental plane covered by multivariate DOE techniques. When the number of parameters considered is high, however, the problem that arises with full factorial planes is the number of tests to be performed, as it grows geometrically with the number of parameters (y^k, with y = number of levels and k = number of factors). Considering also that the contributions of higher-order interactions are usually decreasing (think of quantitative variables, where the effects correspond to the terms of a Taylor series expansion), it is reasonable to reduce the number of tests by giving up information on the effects of higher-order interactions.
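As an illustration only, a full factorial plane can be enumerated as the Cartesian product of the levels of every factor; the factor names and levels below are hypothetical, and the number of treatments is exactly y^k.

```python
from itertools import product

# Hypothetical factors, each with y = 2 levels, so k = 3 factors.
levels = {
    "factor_A": [0.9, 1.1],
    "factor_B": [45.0, 55.0],
    "factor_C": [0.10, 0.20],
}

# Full factorial plane: every combination of levels is one treatment.
treatments = list(product(*levels.values()))
print(f"{len(treatments)} treatments (y^k = 2^3 = 8)")
for run, treatment in enumerate(treatments, start=1):
    print(run, dict(zip(levels.keys(), treatment)))
```

With 2 levels and, say, 7 factors the same enumeration would already require 2^7 = 128 tests, which motivates the fractional planes discussed next.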

"Fractional factorial planes" are then used allowing us to exploit knowledge known or "working hypothesis" about the experimental context in which it operates, estimating the parameters "not insignificant" of the model without resorting to expensive complete plans.

The construction of a fractional factorial plane respects two properties:

 orthogonality, which allows each term of the model to be estimated statistically uncorrelated with, or "independent" of, the others;

 symmetry of the plan, i.e., all the factors have the same number of levels.

The plan is normally denoted by the number of levels raised to the power (k−p), where k is the number of factors and p is the fractionation index. In this case the number of tests will be equal to y^(k−p), where y is the number of levels (see Table 2).
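As a purely illustrative count: with y = 2 levels, k = 5 factors and a fractionation index p = 1, the half-fraction plane 2^(5−1) requires 16 tests instead of the 2^5 = 32 tests of the full plane, at the cost of confounding some of the higher-order interaction effects.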
