Available online at www.sciencedirect.com

Procedia CIRP 72 (2018) 635–640
www.elsevier.com/locate/procedia

© 2018 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the scientific committee of the 51st CIRP Conference on Manufacturing Systems.
doi: 10.1016/j.procir.2018.03.163

51st CIRP Conference on Manufacturing Systems

Correlation analysis methods in multi-stage production systems for reaching zero-defect manufacturing

Florian Eger a,*, Colin Reiff a, Bernd Brantl a, Marcello Colledani b, Alexander Verl a

a Institute for Control Engineering of Machine Tools and Manufacturing Units, Seidenstr. 36, 70174 Stuttgart, Germany
b Politecnico di Milano, Department of Mechanical Engineering, Via la Masa, 1, 20156 Milan, Italy

* Corresponding author. Tel.: +49-711-685-82470; fax: +49-711-685-72470. E-mail address: Florian.Eger@isw.uni-stuttgart.de

Abstract

Owing to the number of production steps and the related complexity, multi-stage production systems are very error-prone. In order to compensate for this disadvantage and to achieve zero-defect manufacturing, a data-driven approach is needed. The increasing availability of sensor and machine data provides a high informational content about the individual processes, which can be evaluated with appropriate methods. The literature offers various data analysis methods for examining the correlations of data sets. These methods and strategies are analyzed, hierarchically structured and extended by four newly developed algorithms. Finally, the data-driven analysis tool is presented and validated using two industrial use cases.

© 2018 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of the scientific committee of the 51st CIRP Conference on Manufacturing Systems.

Keywords: Zero-defect Manufacturing; Multi-stage Production System; Correlation Analysis

1. Introduction

Product quality and the associated customer satisfaction are key factors for success in the highly competitive manufacturing industries. To keep up with customer demand and increased mass personalization, new strategies meeting the ever-changing requirements of the market have to be developed. One such strategy is Zero-Defect Manufacturing (ZDM), which is based on compensating defects from a preceding process step in the current or a following step [1].

Modern-day optimization efforts in multi-stage production systems focus on the optimization of a single process step independent of preceding or succeeding steps. However, there is no requirement to compensate defects in the current process stage, for they may also be corrected in another process stage. In the case of such a production strategy, speaking of preceding or succeeding process steps may no longer accurately describe the process layout but merely a product's average process path. If a defect is to be compensated in another process stage, there are mainly two pieces of information needed: what type of defect is detected and which process stage can be used to compensate said defect. Ideally, a defect introduced in an earlier stage of production is detected and the gathered data are then used in a later process stage to compensate the defect using a possibly even better suited machine.

The basis for this is the constantly growing availability of sensor data, which provides a high level of information about the individual processes as well as the multi-stage production process as a whole. It must nevertheless always be considered that having more data does not necessarily imply gaining better insights into the correlation between defects and the results of a single process step. This is especially true for multi-stage production systems, where it is necessary to check the distribution type of the data basis before performing any correlation analysis. On the other hand, some significant process data may not be recorded yet and require the installation of additional sensors, considering also codified human feedback as an important input. All in all, the shift to a ZDM line implies a highly connected production environment, reflected in detailed recording and analysis of the data. The gathered information may then be used to identify complex inter-stage correlations by means of analytical and statistical methods. With these data, the basis for the adaptation of process parameters is laid to avoid the occurrence and propagation of defects by means of downstream compensation [2].

Based on correlation analysis of single-step processes, this paper develops a methodology for the identification of correlations within a multi-stage production system. The paper is structured as follows: Section 2 briefly presents the fundamentals of correlation analysis, while Section 3 presents the newly developed multi-stage correlation analyzer method. In Section 4, validation results of the tool are given based on both artificially generated data and data from an extrusion process. Section 5 concludes this contribution with open tasks and future work.


Table 1: Levels of measurement (based on [5], [6] and [7])

Level    Scale      Classification     Distribution           Frequency   Rank order   Distance   Absolute zero
I        Ratio      Quantitative       Continuous/Discrete    x           x            x          x
I        Interval   Quantitative       Continuous/Discrete    x           x            x
II       Ordinal    Qualitative (*)    Discrete               x           x
III/IV   Nominal    Qualitative        Discrete               x

(*) underlying quantitative variable

2. Fundamentals

A correlation is a statistical measure of the relationship between two or more variables [3].

It is worth noting that a high correlation value does not inevitably imply causality, which is why correlation does not provide information about the cause-and-effect relationship of the data. In general, the classification of the characteristics to be examined forms the basis of correlation analysis; a differentiation by level of measurement is important for processing. Not all variables contain the same amount of information and therefore do not permit a uniform comparison.

According to Stanley Smith Stevens [4], data can be separated into four levels of measurement: nominal, ordinal, interval and ratio scales. Table 1 shows the scale types based on [5], [6] and [7], in addition to the possible distribution and the properties: frequency, rank order, distance and the absolute zero point.

The nominal scale classifies data without a meaningful rank order, for example material groups. The nominal scale type has a discrete distribution. Discrete scales can be differentiated into dichotomous (III) or polytomous (IV) categories. A variable is dichotomous if only two characteristics are possible. In contrast, polytomous variables have more than two distinct categories. Compared to the other levels, nominal variables have the lowest information content. Ordinal variables (II) additionally have ranked categories. Every category has a definite relationship to each of the other categories, but no defined distance. To differentiate the attributes, an underlying quantitative measure is necessary. An example is the categorization of stored material into the classes A, B and C. In this case, the current storage time is the underlying quantitative measure to differentiate the single attributes. Quantitative variables, such as the interval and the ratio scale, are also called metric scales (I). The interval scale describes the distance between ranked values, such as the temperature measurement in Celsius. In contrast to the interval scale, the ratio scale also has an absolute zero point, for example the temperature in Kelvin. With ratio scales, all arithmetic operations are possible.

Especially in multi-stage production systems, many different data types can be recorded (e.g. binary, categorical, binomial, real-valued, etc.). In general, it is necessary to check the distribution form of the data basis before starting the correlation analysis. The literature offers the Chi-Squared Test [8], the

Kolmogorov-Smirnov Test [9] or the Shapiro-Wilk Test [10]. The distribution of the data sets determines the applicable methods for calculating the correlation coefficient. In the following, different approaches for examining data sets to identify correlations are presented.
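As an illustration of this pre-check, the following minimal Python sketch (not part of the paper's MATLAB tool; the 5% threshold and the Pearson/Spearman pairing are assumptions) uses a Shapiro-Wilk test to choose between a parametric and a rank-based coefficient:

# Illustrative sketch: select a correlation method after a normality pre-check.
import numpy as np
from scipy import stats

def correlation_with_precheck(x, y, alpha=0.05):
    # Use Pearson if both samples pass the Shapiro-Wilk normality test,
    # otherwise fall back to the rank-based Spearman coefficient.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    normal = stats.shapiro(x)[1] > alpha and stats.shapiro(y)[1] > alpha
    if normal:
        return "Bravais-Pearson", stats.pearsonr(x, y)[0]
    return "Spearman's Rho", stats.spearmanr(x, y)[0]

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(correlation_with_precheck(x, 2.0 * x + rng.normal(size=200)))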

The most frequently used method for investigating dependencies is the Bravais-Pearson correlation [11]. In the literature it can be found under different names, ranging from Bravais-Pearson correlation and Pearson product-moment correlation [12] to cross correlation [13]. The method describes the deviation of the data sets X and Y from a linear expected value E(x) and standardizes it with the standard deviations σx and σy according to:

r = \frac{\operatorname{cov}(X, Y)}{\sigma_x \sigma_y}
  = \frac{E[(X - \bar{X})(Y - \bar{Y})]}{\sigma_x \sigma_y}
  = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n} (X_i - \bar{X})^2 \, \sum_{i=1}^{n} (Y_i - \bar{Y})^2}}    (1)

The result is a correlation coefficient r with the value range [−1; 1]. The value stands for the degree of the correlation: 0 for no correlation and |1| for a perfect linear correlation. The sign describes the type of dependency: − stands for a counteracting and + for a co-rotating behavior. One benefit is the simple computation; a drawback is the fact that only linear correlations are recognized.
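Eq. (1) can be transcribed almost literally; the following short Python sketch (illustrative only, not the authors' implementation) computes the sample coefficient:

# Direct NumPy transcription of Eq. (1): sample Bravais-Pearson correlation.
import numpy as np

def pearson_r(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(pearson_r(x, 0.5 * x + 4))   # perfect linear relation -> 1.0
print(pearson_r(x, -x))            # counteracting behavior  -> -1.0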

Rank correlation coefficients are based on non-parametric methods and capture monotonic relationships in data pairs [11]. In practical terms, this means that such methods can also be used to identify non-linear correlations. Another advantage is that no demands are placed on the data distribution. In turn, the information loss during the rank transformation has to be accepted; the rank transformation describes the replacement of a value by its rank (Eq. (2)). The rank describes the position of the value within the value range of the variable to be compared. The most commonly used methods are Spearman's Rho and Kendall's Tau [14]. The method according to Spearman's Rho is presented in the following (Eq. (3)). Basically, the procedure is analogous to Bravais-Pearson. Instead of the value of the second variable, its rank R is considered, which is why a rank transformation of these variables is needed in a preparatory step [15]. In case two variables have an identical value, they are assigned their mean rank \bar{R}.

\bar{R}(x/y) = \frac{1}{n} \sum_{i=1}^{n} R(x_i/y_i)    (2)

r_{Sp} = \frac{\sum_{i=1}^{n} (R(x_i) - \bar{R}(x))(R(y_i) - \bar{R}(y))}{\sqrt{\sum_{i=1}^{n} (R(x_i) - \bar{R}(x))^2 \, \sum_{i=1}^{n} (R(y_i) - \bar{R}(y))^2}}    (3)
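A minimal Python sketch of Eqs. (2) and (3), assuming mean ranks for tied values as described above (illustrative, not the tool's code):

# Sketch of Eqs. (2)-(3): rank transformation, then the Pearson formula on ranks.
import numpy as np
from scipy.stats import rankdata  # assigns mean ranks to tied values by default

def spearman_rho(x, y):
    rx, ry = rankdata(x), rankdata(y)
    dx, dy = rx - rx.mean(), ry - ry.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

x = np.linspace(1, 10, 50)
print(spearman_rho(x, np.exp(x)))  # monotonic but non-linear relation -> 1.0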

There are different variants of the Chi-Square Test. One of them serves as an independence test for variables [16]. A requirement for applying this statistical method is the existence of at least nominally scaled variables, whereas only frequencies are used for the computation. The characteristics of the variables


Table 2: Cross table according to [8]

Feature X \ Feature Y    1      2      ...    k      ...    r      Sum
1                        n11    n12    ...    n1k    ...    n1r    n1.
2                        n21    n22    ...    n2k    ...    n2r    n2.
j                        nj1    nj2    ...    njk    ...    njr    nj.
m                        nm1    nm2    ...    nmk    ...    nmr    nm.
Sum                      n.1    n.2    ...    n.k    ...    n.r    n..

to be examined, which have to be categorized, are shown in a cross table (Table 2), where the last column and row contain the sums of the features.

The test value of the Chi-Square Test, χ², is computed according to Eq. (4):

\chi^2 = \sum_{j=1}^{m} \sum_{k=1}^{r} \frac{(n_{jk} - \hat{n}_{jk})^2}{\hat{n}_{jk}}    (4)

with the expected frequencies \hat{n}_{jk}:

\hat{n}_{jk} = \frac{n_{j.} \, n_{.k}}{n}    (5)

The degrees of freedom df of the system are calculated as follows:

df = (m - 1)(r - 1)    (6)

In the last calculation step, the probability P of receiving a value greater than or equal to the value from Eq. (4) is computed. The cumulative distribution function F(x) is used:

P = 1 - F_{\chi^2, df}(\chi^2)    (7)

The cumulative distribution function, accumulated on the right, can be taken from Eq. (8), including the mean value \bar{M}:

F(x) = \int_{0}^{x} \frac{t^{\bar{M}/2 - 1} \, e^{-t/2}}{2^{\bar{M}/2} \, \Gamma(\bar{M}/2)} \, dt    (8)

Usually a significance level of 5% is determined. If the probability determined from Eq. (7) is lower than the significance level, the test becomes significant and a variable dependency is assumed [8]. In addition to the presented methods, there are many other methods that are not discussed here.
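The full test of Eqs. (4) to (7) can be sketched in a few lines; the following Python example (an illustration, not the paper's implementation) uses SciPy's chi-square distribution in place of evaluating the integral of Eq. (8):

# Sketch of the chi-square independence test of Eqs. (4)-(7) on a cross table.
import numpy as np
from scipy.stats import chi2

def chi_square_independence(table, alpha=0.05):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n   # Eq. (5)
    stat = np.sum((table - expected) ** 2 / expected)               # Eq. (4)
    df = (table.shape[0] - 1) * (table.shape[1] - 1)                # Eq. (6)
    p = 1.0 - chi2.cdf(stat, df)                                    # Eq. (7)
    return stat, p, p < alpha  # True -> variable dependency is assumed

observed = [[20, 30], [40, 10]]   # hypothetical 2x2 frequency table
print(chi_square_independence(observed))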

3. ForZDM-based Methods

In addition to the approaches from the literature, own approaches are implemented as algorithms. The key ideas are explained in the following.

Fig. 1: Swapping of the axes for the correlation analysis. (a) Rotated parabola; (b) Parabola.

3.1. Modified Analysis Methods

The procedures of the Modified Spearman's Rho (abbreviated MSR-I) and Modified Kendall's Tau (MKT-II) are based on the analysis of the monotony behavior of the data sets. In a first step, the values of the first variable are examined for coherent areas and divided into appropriate subsections. Thus, the identification of recurring patterns becomes feasible. Then, the second variable is also divided into monotonous subsections. Since this means that an examination of both features takes place, the methods belong to the double-sided tests.

This procedure is shown in Eq. (9a) and Eq. (9b). The variables x and y stand for the two features:

x_i < x_{i+1} \;\mid\; x_i > x_{i+1} \;\mid\; x_i = x_{i+1}    (9a)

y_i < y_{i+1} \;\mid\; y_i > y_{i+1} \;\mid\; y_i = y_{i+1}    (9b)

For a better illustration of the functional principle, Fig. 1a and Fig. 1b show the same two features of one data set plotted with the axes swapped. A parabolic correlation with a dispersion is depicted.

An analysis of this data set by means of a rank correlation results in no relation for the constellation in Fig. 1a, since the measured values are assigned to the upper and lower parts in no particular order. The application of the same rank correlation to the constellation shown in Fig. 1b, however, does provide a relation. A mirroring takes place within the application, so that the analysis is independent of the random order during the reading of the data sets. In addition to the filtering of random patterns, the first value after each monotony change is removed from the computation. This procedure is of benefit in case of complex correlations with several monotony changes, because this way random short monotony sections lose influence on the result or can even be neglected. In the next step, the selected data sections are examined using MSR-I or MKT-II. The third method, Modified Taylor (MT-III), is also based on the analysis of the monotony behavior, since this procedure has proved to be effective in the methods MSR-I and MKT-II. Like most methods, the presented ones have the drawback of being prone to outliers or larger variances within the data to be examined. In order to still be able to analyze these data sets reasonably, method MT-III was developed. In a first step, a smoothing of the data structure is performed. This adaptation of the initial data is done by setting up a compensating curve in the form of a polynomial. Basically, the idea of the Taylor series is pursued. Eq. (10) shows the procedure. The variable p stands for the coefficients and n for the degree of the polynomial.
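The following Python sketch illustrates the two underlying ideas, splitting a series into monotonic subsections as in Eqs. (9a)/(9b) and smoothing scattered data with a compensating polynomial; it is a simplified illustration with assumed parameters, not the authors' MSR-I/MKT-II/MT-III algorithms:

# Illustrative sketch: (a) monotonic segmentation, (b) polynomial smoothing.
import numpy as np

def monotonic_sections(v):
    # Return index ranges over which v is monotonically rising or falling.
    sign = np.sign(np.diff(v))
    sections, start = [], 0
    for i in range(1, len(sign)):
        if sign[i] != sign[i - 1]:       # monotony change
            sections.append((start, i))
            start = i                    # first value after the change
    sections.append((start, len(v) - 1))
    return sections

def polynomial_smooth(x, y, degree=4):
    # Fit a compensating polynomial of the given degree (assumed choice).
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x)

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
y = 0.1 * (x - 1) ** 2 + rng.normal(scale=0.3, size=x.size)  # scattered parabola
y_smooth = polynomial_smooth(x, y)
print(len(monotonic_sections(y)), "sections before smoothing,",
      len(monotonic_sections(y_smooth)), "after")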


This approach permits a simplification of the complex sequence of states to a manageable function during the examination of data clouds. After this treatment step, the analysis of the monotony features of the substitute function follows. All further steps are analogous to MKT-II. Method IV serves the retrieval of constants within data sets. Many of the methods presented so far are not suitable for capturing this type of correlation because of the underlying validity range. The procedure of this analysis begins, in parallel to the other presented methods, with a division of the data sets into sub-areas, so that an identification of recurring patterns is possible. These sections are then analyzed with regard to their variance. Based on the variance, the consistency as well as the independence can be determined in a first step, whereby an integrated threshold allows the behavior towards outliers and deviations to be modeled. With the results found, the constants can be determined manually. The causality in the form of the temporal dependency of the data, which is available in the developed program after modeling the process sequence, permits a clear solution. Thus, the automatic determination of constants regardless of the data set reading order becomes possible.
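A minimal sketch of the variance-threshold idea behind Method IV; the window length and threshold are assumed values, not those of the tool:

# Sketch of constant detection via low-variance windows (Method IV idea).
import numpy as np

def constant_sections(values, window=20, var_threshold=1e-3):
    # Return (start, stop, mean) for windows whose variance stays below the threshold.
    values = np.asarray(values, dtype=float)
    found = []
    for start in range(0, len(values) - window + 1, window):
        chunk = values[start:start + window]
        if chunk.var() < var_threshold:   # low variance -> treated as constant
            found.append((start, start + window, chunk.mean()))
    return found

signal = np.concatenate([np.full(40, 3.0), np.linspace(3, 8, 60), np.full(40, 8.0)])
print(constant_sections(signal))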

When the recorded variables contain differing numbers of data points, the methods presented so far reach their limits. A simple replenishment of the missing data or deletion of surplus data distorts the results or leaves the gathered information unused. For this reason, an analysis method is developed for such applications. A comparison of data sets is mostly done towards the end of a processing step. This means, for example, that at the end of a processing step an energy curve is compared with a diameter. The easiest way is the determination of the average and scattering of the larger data set with the subsequent formation of a substitute value, so that the most important features are taken into account. Based on this reduction of the dimension, the analysis can be conducted as usual. Here, however, a considerable information loss has to be accepted, which can be avoided by using the new approach. The objective of the new method is also the reduction of the larger data set to the scope of the smaller one. The number of entries in the larger data set is counted until the smaller one contains another value. The basic idea is the comparison of these vectors with the already developed methods.

Therefore, in a first step, a substitute reference curve is plotted. For this purpose, the mean value of all vectors of the faster sensor is computed. Afterwards, the already presented correlation methods are used to calculate the similarity of all vectors with the averaged curve. The resulting indicators for the similarity of each section, which express their deviations in quantitative terms, can then be used in a second analysis for the computation of the correlation coefficient for the result mask. Since the compensatory property of the larger data set has a limited value range, the methods according to Bravais-Pearson and Spearman's Rho are used in the second analysis. All in all, the determined correlation coefficient is larger than with the methods presented earlier. This means that a higher significance level has to be chosen.
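The alignment idea can be sketched as follows; the fixed segment length, the use of plain Pearson similarity and the synthetic signals are assumptions for illustration, not the tool's actual procedure:

# Sketch: align a fast-sampled signal with a slow-sampled one via per-segment
# similarity against an averaged substitute reference curve.
import numpy as np

def segment_similarities(fast, n_slow):
    # Split the fast signal into one segment per slow-sensor value and score
    # each segment against the averaged reference curve.
    fast = np.asarray(fast, dtype=float)
    seg_len = len(fast) // n_slow
    segments = fast[:seg_len * n_slow].reshape(n_slow, seg_len)
    reference = segments.mean(axis=0)                 # substitute reference curve
    return np.array([np.corrcoef(seg, reference)[0, 1] for seg in segments])

rng = np.random.default_rng(2)
slow = rng.uniform(1.0, 5.0, size=30)                 # e.g. one diameter value per part
ramp = np.linspace(0.0, 1.0, 50)
fast = np.concatenate([s * ramp + rng.normal(scale=0.05, size=50) for s in slow])
similarities = segment_similarities(fast, len(slow))
print(np.corrcoef(similarities, slow)[0, 1])          # second analysis step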

3.2. Implementation of Correlation Analyzer

The analysis tool developed for the identification of correlations is presented below. The development is done in MATLAB®. To enable a user-friendly handling of the application, a graphical user interface (GUI) is implemented.

The tool is intended for generic use and pursues the goal of simplicity. For this reason, the user is guided directly to the main control panel after starting the program, which is the interface between the program and the user during the entire operating time. The executable operations range from the load function for the sensor data to storing the results. The methods described in Section 2 and some more are integrated into the tool. An extract of the used methods and their areas of application, depending on the scale types of the data sets, is presented in Table 3.

The automatic mode, which checks the scale type of the loaded data sets in a first step, selects the appropriate analysis method for each comparison depending on the result.

Fig. 2: Screenshot of the tool implemented in MATLAB®


Table 3: Areas of application of the methods, depending on the scale types of the two data sets

                                I (metric: ratio/interval)   II (ordinal)             III (nominal: dichotomous)          IV (nominal: polytomous)
I (metric: ratio/interval)      Bravais-Pearson,             Spearman's Rho,          Chi-square test,                    Cramer's V
                                Covariance                   Kendall's Tau,           Yates correction, ...,
                                                             MSR-I, MKT-II, MT-III    Point-biserial corr., Cramer's V
II (ordinal)                                                 Spearman's Rho,          Chi-square test,                    Cramer's V
                                                             Kendall's Tau,           Yates correction, ...,
                                                             MSR-I, MKT-II, MT-III    Cramer's V, Biserial rank corr.
III (nominal: dichotomous)                                                            Tetrachoric corr.,                  Cramer's V
                                                                                      Phi-coefficient
IV (nominal: polytomous)                                                                                                  Cramer's V

The methods used can be found in an information window of the GUI. The result of the comparison can be taken from the correlation matrix (Fig. 2). The entries in the cells are the correlation values of the analysis. The respective rows and columns represent the data records to be compared. The matrix is always only half-filled in order to provide a quick and clear indication of the correlation between two types of data sets. Since the value range is from 0 to |1|, the results are colored from green to yellow in steps of 0.2 to make their relevance easier for the user to identify. After clicking on any result in the correlation matrix, not only the used analysis method is visualized at the top left, but also the exact value of the correlation coefficient, the number of data points used for the computation as well as the terms stored for the compared data are shown.

If the process sequence is modeled beforehand in the modeling GUI, the affiliation of the variables to the respective processes is shown additionally. This leads to a quick identification of the dependency between two processes among numerous available variables. By selecting a value in the correlation matrix, a scatter plot of the analyzed data appears in the top right of the window, in parallel to the detailed information. This enables the user to visually check the determined results for simple connections like linearity.
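A strongly simplified sketch of such an automatic mode is shown below; the scale-type heuristic and the chosen method pairings are assumptions and do not reproduce the tool's logic:

# Sketch: infer a rough scale type per data set and dispatch to a coefficient.
import numpy as np
from scipy import stats

def scale_type(values):
    # Crude heuristic: non-numeric data is treated as nominal, numeric data
    # with few distinct values as ordinal, everything else as metric.
    values = np.asarray(values)
    if not np.issubdtype(values.dtype, np.number):
        return "nominal"
    return "metric" if len(np.unique(values)) > 10 else "ordinal"

def auto_correlation(x, y):
    tx, ty = scale_type(x), scale_type(y)
    if tx == ty == "metric":
        return "Bravais-Pearson", stats.pearsonr(x, y)[0]
    if "nominal" not in (tx, ty):
        return "Spearman's Rho", stats.spearmanr(x, y)[0]
    # Nominal data involved: build a frequency cross table and run chi-square.
    xi = np.unique(x, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    table = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(table, (xi, yi), 1)
    return "Chi-square", stats.chi2_contingency(table)[1]   # p-value

rng = np.random.default_rng(3)
x = rng.normal(size=100)
print(auto_correlation(x, 0.5 * x + 4 + rng.normal(scale=0.1, size=100)))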

4. Validation Results

The validation of the developed tool and the implemented methods is separated into two sections. First, a use case based on correlations modeled with mathematical functions is used to prove the functionality of the methods. The second use case demonstrates the applicability of the tool by means of an extrusion process.

4.1. Use-case I: Theoretical Validation

The theoretical validation of the developed tool is based on mathematical functions. These functions are designed to model different possible dependencies which can occur in industrial processes. The main objective of the theoretical validation is to prove the usability and recognition quality of the implemented non-nominal methods. In addition to linear relations, parabolic, polynomial, trigonometric and exponential functions are tested. The functions and the exemplarily used parameters are listed in Table 4. The values are generated from random data between 1 and 100 to ensure that no unwanted correlations occur. A scattering of 0%, 25%, 50% and 75% is applied to simulate the inaccuracy of a measurement. The checked data set includes 300 values for each function. Further tests have shown that at least 30 values per data set are necessary for meaningful outcomes, as already proven by Melzer [17].

The validation is carried out using the automatic mode of the developed tool. This mode provides the possibility to check data sets without knowledge of the scale type of the analyzed characteristics. For this purpose, the scale type is automatically checked by the tool and then a compatible method is selected and executed. The calculated results are shown in Table 5. The calculated correlation coefficient approaches zero with increasing scattering. Exponential correlations are best recognized, even with increasing dispersion. Trigonometric functions deliver the weakest results. The correlation coefficients between the random data sets are between 0.01 and 0.09. This means that no false correlations are displayed. In summary, all of the tested mathematical functions are reliably detected.

Table 4: Function types used for validation

Function        Formula                            Parameters
Linear          f(x) = mx + t                      m = 0.5; t = 4
Parabolic       f(x) = a(x − b)² + c               a = 0.1; b = 4; c = 7
Polynomial      f(x) = (x + a)(x + b)(x − c)/d     a = 3; b = 1; c = −5; d = 3
Trigonometric   f(x) = sin(ax) + t                 a = 3; t = 4
Exponential     f(x) = a + exp(b(x + c)) + t       a = 1; b = 1; c = 1; t = 0
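A sketch of this validation setup follows; the paper does not state the exact noise model, so Gaussian scattering proportional to the signal spread is assumed here, and plain Spearman's Rho stands in for the tool's modified methods, so the values will not match Table 5:

# Sketch: generate the Table 4 test functions with random x values and added
# scattering, then compute a rank correlation for each configuration.
import numpy as np
from scipy.stats import spearmanr

functions = {
    "linear":        lambda x: 0.5 * x + 4,
    "parabolic":     lambda x: 0.1 * (x - 4) ** 2 + 7,
    "polynomial":    lambda x: (x + 3) * (x + 1) * (x + 5) / 3,
    "trigonometric": lambda x: np.sin(3 * x) + 4,
    "exponential":   lambda x: 1 + np.exp(x + 1),
}

rng = np.random.default_rng(4)
x = rng.uniform(1, 100, size=300)
for name, f in functions.items():
    y = f(x)
    for scatter in (0.0, 0.25, 0.5, 0.75):
        noisy = y + rng.normal(scale=scatter * y.std(), size=y.size)
        print(f"{name:14s} scatter={scatter:.0%}  |rho|={abs(spearmanr(x, noisy)[0]):.3f}")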

4.2. Use-case II: Validation in an Extrusion Process

In addition to the theoretical validation of the developed tool, an application case from the project is used to validate the tool in a practical way. The extrusion process of micro catheters is a complex procedure that takes a lot of time in the ramp-up phase. The operators have to set up the parameters manually in order to achieve a perfect result without defect parts. In order to reduce this time and to optimize the process, it is necessary to identify dependencies between parameters. A simplified process flow can be seen in Fig. 3. The aim is to identify how the conditions of the raw material store, or more precisely the temperature and humidity, are related to the extrusion process.

Table 5: Correlation coefficients for different scatterings

Function        0%      25%     50%     75%
Linear          1       0.964   0.846   0.731
Parabolic       0.999   0.991   0.975   0.926
Polynomial      0.999   0.994   0.984   0.955
Trigonometric   0.675   0.575   0.424   0.385
Exponential     1       0.999   0.999   0.999


Fig. 3: Simplified extrusion process P_E with subprocesses P_Ex, indicating the sensor-measured quantities y_Exi for the number of sensors i

Sensor data can be taken from three locations. In the raw material storage (P_MS), the room temperature and humidity are permanently recorded, and when the material is removed, the material-dependent information is stored. In the extrusion process, a process control system is used to regulate the heating cartridge voltages depending on the mold temperature. Finally, with the aid of a simplified quality control system, the outside diameter of the outgoing product is detected by two sensors, which are offset by 90 degrees.

The correlation matrix in Fig. 2 contains the results of the analysis. The Bravais-Pearson method identifies the linear correlations. The Correlation Analyzer recognizes the connections that were physically expected. It shows a strong dependence between the heating cartridges and the temperatures, which in turn are strongly related to the quality (wall thickness) of the micro catheter. The correlation between the wall thickness and the condition of the raw material store can be seen, but it is not very strong.

5. Conclusions and Future Work

Zero-defect manufacturing strategies for multi-stage production systems require knowledge of the existing complex correlations between different parameters. Thus, it is possible to derive cause-and-effect relationships independently of the level of measurement of the gathered data in highly connected processes. For this purpose, a software application was developed to process the gathered data efficiently and to enable users without special knowledge to analyze big data in a short time. Therefore, a set of various methods is implemented, supporting the user by automatically smoothing or rotating data sets. The graphical representation allows a user-friendly evaluation of the determined correlations. The knowledge gained can be used to predict defects and to enable decision-making strategies regarding the process sequence. As a result, process parameters can be adapted and defects can be corrected by downstream compensation.

In the future, the application will be enlarged with the consideration of probability distributions and an approach to evaluate three-dimensional data sets. Lastly, the applied methods will be integrated into a higher-level framework to provide a powerful tool. The aim is to develop strategies and policies that automatically allow corrections within the production process with the knowledge of the occurrence of defects, thus ideally eliminating the need for rework. For this conversion, the system must be able to determine concrete intervention parameters for the corrective measures. In addition, it is conceivable to evaluate large data sets in real time using machine learning algorithms and neural networks. Correlation analysis will thus form the backbone of smart production for the manufacturing of defect-free products with a flexible design, such as in the context of mass personalization.

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 723698. This paper reflects only the author's views and the Commission is not responsible for any use that may be made of the information contained therein.

References

[1] Eger, F., Coupek, D., Caputo, D., Colledani, M., Penalva, M., Ortiz, J.A., et al. Zero defect manufacturing strategies for reduction of scrap and inspection effort in multi-stage production systems. Procedia CIRP 2017.
[2] Lee, J., Kao, H.A., Yang, S. Service Innovation and Smart Analytics for Industry 4.0 and Big Data Environment. Procedia CIRP 2014;16:3–8. doi:10.1016/j.procir.2014.02.001.
[3] Murdoch, J., Barnes, J.A. Statistics: Problems and Solutions. London: Palgrave Macmillan UK; 1973. ISBN 978-0-333-12017-0. doi:10.1007/978-1-349-01063-9.
[4] Stevens, S.S. On the theory of scales of measurement. Science 1946;103(2684):677–680.
[5] Roberts, F.S. Measurement theory: With applications to decisionmaking, utility and the social sciences; vol. 7 of Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge Univ. Press; 1985. ISBN 978-0-521-10243-8.
[6] Wallgren, A. Graphing statistics and data: Creating better charts. Newbury Park, California: Sage Publications; 1996. ISBN 0761905995.
[7] Afifi, A., May, S., Clark, V.A. Practical multivariate analysis; vol. 93. 5th ed. Boca Raton, Fla.: CRC Press; 2012. ISBN 978-1-439-81680-6.
[8] Plackett, R.L. Karl Pearson and the Chi-Squared Test. International Statistical Review / Revue Internationale de Statistique 1983;51(1):59–72. doi:10.2307/1402731.
[9] Massey Jr., F.J. The Kolmogorov-Smirnov Test for Goodness of Fit. Journal of the American Statistical Association 1951;46(253):68–78. doi:10.2307/2280095.
[10] Shapiro, S.S., Wilk, M.B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 1965;52(3/4):591–611. doi:10.2307/2333709.
[11] Artusi, R., Verderio, P., Marubini, E. Bravais-Pearson and Spearman correlation coefficients: Meaning, test of hypothesis and confidence interval. The International Journal of Biological Markers 2002;17(2):148–151. doi:10.5301/JBM.2008.2127.
[12] Kowalski, C.J. On the Effects of Non-Normality on the Distribution of the Sample Product-Moment Correlation Coefficient. Applied Statistics 1972;21(1):1. doi:10.2307/2346598.
[13] Coupek, D., Verl, A., Aichele, J., Colledani, M. Proactive quality control system for defect reduction in the production of electric drives. In: 2013 3rd International Electric Drives Production Conference (EDPC). 2013, p. 1–6. doi:10.1109/EDPC.2013.6689762.
[14] Hauke, J., Kossowski, T. Comparison of Values of Pearson's and Spearman's Correlation Coefficients on the Same Sets of Data. Quaestiones Geographicae 2011;30(2):20. doi:10.2478/v10117-011-0021-1.
[15] Fieller, E.C., Hartley, H.O., Pearson, E.S. Tests for Rank Correlation Coefficients. I. Biometrika 1957;44(3/4):470. doi:10.2307/2332878.
[16] Pearson, K. X. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 1900;50(302):157–175. doi:10.1080/14786440009463897.
[17] Melzer, A. Six Sigma - Kompakt und praxisnah: Prozessverbesserung effizient und erfolgreich implementieren. Wiesbaden: Springer Gabler; 2015. ISBN 978-3-658-09853-7.
