Analyzing Your Data
Analyzing RC survey data is a multi-step process that can serve several purposes, depending on an organization's needs. At a basic level, it provides the basis for testing the validity of your RC index. Depending on why the survey was administered, the data can also be used to analyze patterns of relational coordination within and between functional groups, to test for differences between sites or between intervention and non-intervention groups, to test the performance effects of relational coordination, and to test its predictors.
For the paper survey, analyses can be conducted in a standard statistical package such as SAS or Stata and require some statistical expertise. The RC online survey tool is designed to validate the RC index, confirm the significance of reported relational coordination ties at the p<.05 and p<.01 levels, and analyze within- and across-site differences in relational coordination, although this built-in functionality is not yet available. In the meantime, raw survey data is provided in Excel format to allow additional analysis, and the RCRC team is available to provide technical assistance on projects that require more detailed analysis. As RCRC grows, additional functionality will be incorporated into the online survey tool.
Determining Index Validity
Cronbach's alpha and exploratory factor analysis are used to test the validity of aggregating the seven dimensions of RC into a single index. Using individual survey responses as the unit of observation, Cronbach's alpha is computed across the seven dimensions of RC to assess whether they constitute a reliable index. For index validity, Cronbach's alpha should be greater than 0.70 for an exploratory study and greater than 0.80 for a non-exploratory study. Once index validity is established, exploratory factor analysis is used to test whether relational coordination behaves as a single factor or instead separates into multiple factors.
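The index check described above can be sketched in a few lines of Python. The data below is simulated, and this is a rough illustration of the statistics involved rather than the online tool's implementation; the 0.70 cutoff is the exploratory threshold mentioned above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (here, 7 RC dimensions)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 200 respondents rating the 7 RC dimensions on a 1-5 scale,
# each driven by a common underlying score plus noise
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1)).astype(float)
responses = np.clip(base + rng.normal(0, 0.7, size=(200, 7)), 1, 5)

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # > 0.70 suggests an internally consistent index

# A quick single-factor check: if one factor underlies the items, the first
# eigenvalue of their correlation matrix dominates the others
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(f"share of variance on first factor = {eigvals[0] / eigvals.sum():.2f}")
```

A full exploratory factor analysis would typically be run in a statistical package (e.g., Stata's `factor` command); the eigenvalue check here is only a first-pass diagnostic.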
Together, Cronbach's alpha and factor analysis confirm the reliability of the relational coordination index and the convergent validity of its survey items.
Analyzing Patterns of Relational Coordination Between Functional Groups
Once the dimensions of relational coordination measured in your survey have been shown to constitute a reliable index, patterns of relational coordination within and between functional groups can be analyzed. Your RC survey data can be used to build a matrix diagram that visualizes relational coordination between the functional groups in the focal work process. This type of diagram, shown above, is also known as a "Dependency Structure Matrix" and is traditionally used to capture complex engineering and design processes. The matrix allows comparison of the strength of ties between each pair of functional groups in the focal work process, as well as the strength of ties within each functional group.
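Such a matrix can be built directly from tidy survey data. The sketch below assumes each row records one respondent's mean RC rating of one functional group; the column names and functional groups are illustrative, not the survey tool's export format.

```python
import pandas as pd

# Illustrative tidy survey data: one row per (respondent's function, rated
# function) rating; names and values are hypothetical
ratings = pd.DataFrame({
    "respondent_function": ["Nurse", "Nurse", "Physician", "Physician",
                            "Therapist", "Therapist"],
    "rated_function":      ["Physician", "Therapist", "Nurse", "Therapist",
                            "Nurse", "Physician"],
    "rc_score":            [3.8, 3.2, 4.1, 2.9, 3.5, 3.0],
})

# Rows = rating group, columns = rated group; each cell is the mean strength
# of the tie from the row group to the column group
rc_matrix = ratings.pivot_table(index="respondent_function",
                                columns="rated_function",
                                values="rc_score", aggfunc="mean")
print(rc_matrix.round(2))
```

With within-group ratings included in the data, the diagonal of the matrix would show the strength of ties inside each functional group.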
Testing for Differences Between Sites or Between Intervention and Non-Intervention Groups
To assess differences in the strength of ties between sites, or between intervention and non-intervention groups at the same site, analysis of variance (ANOVA) is conducted to determine whether the differences between the units of analysis are significant.
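A one-way ANOVA of this kind reduces to comparing between-group and within-group mean squares. The sketch below, with hypothetical site-level RC scores, shows the computation; in practice the resulting F statistic would be compared against an F distribution with (k-1, n-k) degrees of freedom (e.g., via `scipy.stats.f.sf`) to obtain a p-value.

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across groups (e.g., sites)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)          # between-group mean square
    ms_within = ss_within / (n_total - k)      # within-group mean square
    return ms_between / ms_within

# Hypothetical RC scores from two sites
site_a = [3.9, 4.1, 4.0, 4.2]
site_b = [3.1, 3.3, 3.0, 3.2]
f_stat = one_way_anova_f([site_a, site_b])
print(f"F = {f_stat:.1f}")
```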
To further assess treating relational coordination as a site-level construct, the intra-class correlations ICC(1) and ICC(2) can also be computed. ICC(1) is the proportion of total variance explained by site membership; values range from -1 to +1, with values between 0.05 and 0.30 most typical. It estimates the reliability of a single respondent's assessment of the site mean. ICC(2) provides an overall estimate of the reliability of the site means, with values of 0.70 or above considered acceptable.
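Both ICCs can be computed from the same ANOVA mean squares, using the standard one-way estimators (as in Bliese's work on aggregation). The sketch below uses hypothetical data from three sites; the balanced-design shortcut of using the average group size for k is an assumption of this illustration.

```python
import numpy as np

def iccs(groups):
    """ICC(1) and ICC(2) from one-way ANOVA mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_groups = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    msb = sum(len(g) * (g.mean() - grand_mean) ** 2
              for g in groups) / (n_groups - 1)
    msw = sum(((g - g.mean()) ** 2).sum()
              for g in groups) / (n_total - n_groups)
    k = n_total / n_groups  # average group size (balanced-design shortcut)
    icc1 = (msb - msw) / (msb + (k - 1) * msw)  # reliability of one respondent
    icc2 = (msb - msw) / msb                    # reliability of the site means
    return icc1, icc2

# Hypothetical RC scores from three sites
icc1, icc2 = iccs([[3.2, 3.8, 3.5, 4.1],
                   [3.6, 4.2, 3.9, 4.5],
                   [2.9, 3.5, 3.3, 3.7]])
print(f"ICC(1) = {icc1:.2f}, ICC(2) = {icc2:.2f}")
```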
Creating a Site-Level Relational Coordination Index
If significant site-level differences and significant intra-class correlations are found, a site-level relational coordination construct can be created. To aggregate to the site level, use a weighted mean in which individual responses are weighted according to the size of their function at that site, so that the site-level measure of relational coordination reflects the functional composition of the site. An unweighted mean, by contrast, is subject to being skewed by the relative response rates of the different functional groups.
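The contrast between the two means can be seen in a small example. Here nurses are over-represented among respondents (4 of 5 responses) relative to their actual share of the site's headcount; the function names and headcounts are hypothetical.

```python
import pandas as pd

# Illustrative individual RC scores with each respondent's function
responses = pd.DataFrame({
    "function": ["Nurse"] * 4 + ["Physician"],
    "rc":       [3.6, 3.8, 3.4, 3.7, 4.4],
})
# Actual functional composition of the site (hypothetical headcounts)
headcount = {"Nurse": 20, "Physician": 10}

# Unweighted mean reflects response rates, not the site's composition
unweighted = responses["rc"].mean()

# Weighted mean: each function's mean score, weighted by its headcount share
func_means = responses.groupby("function")["rc"].mean()
weights = pd.Series(headcount) / sum(headcount.values())
weighted = (func_means * weights).sum()

print(f"unweighted = {unweighted:.2f}, weighted = {weighted:.2f}")
```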
For further details, see Weighting Data When Aggregating to the Site-Level.
Performance Effects of Relational Coordination
Relational coordination is expected to improve both the quality and the efficiency of a given work process, particularly when that work process is characterized by high levels of task interdependence, uncertainty, and time constraints. To measure performance effects, relational coordination is treated as the independent variable, measured at the site level. It is also important to identify and measure the other factors that affect those performance outcomes; industry practitioners can be a vital source of information for identifying the relevant covariates, which are also treated as independent variables in the model. Quality and efficiency outcomes are treated as the dependent variables, with a separate regression model used to predict each measure of performance.
Multi-level regression analysis, most commonly a random effects model, is used to adjust coefficients and standard errors for the multi-level nature of the data. The random effects model produces both a within-site and a between-site value, interpreted as the percent of within-site variation and the percent of between-site variation, respectively, that is explained by the variables in the model.
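In practice, a random effects model would typically be fit with a dedicated routine (e.g., statsmodels' `MixedLM` in Python, or `xtreg`/`mixed` in Stata). The simplified NumPy sketch below illustrates only the within/between interpretation mentioned above, by fitting separate regressions to site-demeaned data and to site means; the data, variable names, and effect sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 5 sites x 20 respondents; quality depends on RC plus a
# site-level random intercept and individual noise
n_sites, n_per = 5, 20
site = np.repeat(np.arange(n_sites), n_per)
rc = rng.normal(3.5, 0.5, n_sites * n_per)
site_effect = rng.normal(0, 0.3, n_sites)[site]
quality = 1.0 + 0.8 * rc + site_effect + rng.normal(0, 0.4, n_sites * n_per)

def r2(y, x):
    """In-sample R-squared from a least-squares fit of y on x with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Between-site value: regression on the site means
site_y = np.array([quality[site == s].mean() for s in range(n_sites)])
site_x = np.array([rc[site == s].mean() for s in range(n_sites)])
r2_between = r2(site_y, site_x)

# Within-site value: regression on site-demeaned data
r2_within = r2(quality - site_y[site], rc - site_x[site])

print(f"within-site R^2 = {r2_within:.2f}, between-site R^2 = {r2_between:.2f}")
```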
For further details, see Analyzing Performance Effects of Relational Coordination.
Analyzing Predictors of Relational Coordination as well as Mediation and Moderation Effects
In addition to the analyses described above, the relational coordination metric can also be used to assess the predictors of relational coordination as well as mediation and moderation effects.
To analyze the predictors of relational coordination, relational coordination is treated as the dependent variable and organizational practice(s) are treated as the independent variable(s).
For further details, see Analyzing the Predictors of Relational Coordination.
Mediation analysis tests the hypothesis that the effects of organizational practices on performance outcomes are mediated by relational coordination, using the three-equation approach developed by Baron and Kenny (1986).
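The three Baron and Kenny equations can be sketched with ordinary least squares on simulated data. Everything below is illustrative: the variable names and coefficients are assumptions, and a real analysis would also report standard errors and significance tests for each path.

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_coefs(y, X):
    """Least-squares coefficients of y on X (first column should be ones)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Simulated data: an organizational practice (X) works largely through
# relational coordination (M) to influence performance (Y)
n = 500
practice = rng.normal(0, 1, n)
rc = 0.6 * practice + rng.normal(0, 1, n)                # M depends on X
perf = 0.7 * rc + 0.1 * practice + rng.normal(0, 1, n)   # Y depends mostly on M

ones = np.ones(n)
# Equation 1: X predicts Y (total effect c)
c_total = ols_coefs(perf, np.column_stack([ones, practice]))[1]
# Equation 2: X predicts M (path a)
a_path = ols_coefs(rc, np.column_stack([ones, practice]))[1]
# Equation 3: M predicts Y controlling for X (path b and direct effect c')
b_path, c_direct = ols_coefs(perf, np.column_stack([ones, rc, practice]))[1:]

# Mediation is indicated when the direct effect c' shrinks relative to c
print(f"total c = {c_total:.2f}, direct c' = {c_direct:.2f}, "
      f"a = {a_path:.2f}, b = {b_path:.2f}")
```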
For further details, see Testing Mediation Effects.
Moderation analysis is used to assess the impact of factors such as interdependence, uncertainty, and time constraints on the strength of relational coordination’s performance effects.
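Moderation is typically tested by adding an interaction term between relational coordination and the candidate moderator to the performance regression. The sketch below simulates a case where uncertainty strengthens RC's effect on performance; all names and coefficients are assumptions, and in practice predictors are often mean-centered before forming the interaction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: RC's effect on performance is stronger under high
# uncertainty (the moderator); the 0.3 interaction effect is illustrative
n = 500
rc = rng.normal(0, 1, n)
uncertainty = rng.normal(0, 1, n)
perf = (0.4 * rc + 0.2 * uncertainty
        + 0.3 * rc * uncertainty + rng.normal(0, 1, n))

# Moderation test: include the RC x moderator interaction term;
# a significantly non-zero interaction coefficient indicates moderation
X = np.column_stack([np.ones(n), rc, uncertainty, rc * uncertainty])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(f"interaction coefficient = {beta[3]:.2f}")
```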
For further details, see Testing Moderation Effects.
Whether or not it is necessary to run these additional analyses is determined by the nature of the research questions being asked. For example, to learn about the organizational factors that support relational coordination, you can analyze the predictors of relational coordination. To learn about the conditions under which relational coordination matters most for performance, you can test for potential moderators.
For more detailed information on Relational Coordination, see our Relational Coordination Manual.