
2. Literature Review

2.6 Assessing the Quality of Internationalization: From Theory to Practice

international accreditations, external pressure from alumni, and the need to produce international managers and to compete with private companies and corporate educational structures that provide business education. At the same time, the four traditional clusters of rationales (academic, social/cultural, political, and economic) or their combinations have remained relevant for university business schools too. The rationales of internationalization for business schools may also vary depending on the region or market. For this reason, it is worth taking into account the distinctive rationales of a business school's internationalization, since these directly affect the strategies chosen in business education units and the specific characteristics of their positioning in comparison with other university academic units.

This section first traces the planning of internationalization assessment, from the statement of mission and vision of an institution to the creation of goals, indicators, and an internationalization map. The section continues with a description of the models for assessing internationalization and of the dimensions for measuring its quality, and outlines a range of global methods suggested for this purpose. It ends by presenting UrFU's approach to assessing internationalization and provides some conclusions on its alignment with international good practices and on its impact on UrFU's strategy.

It is important to start by underlining that the assessment of internationalization can be conducted at different levels. Hudzik (2015, pp. 92-93) outlines four levels of analysis: global and supranational (where the goals of internationalization are justice, world peace, the global economy, the environment, etc.), national (related to the national higher education system), the individual higher education institution, and institutional stakeholders (students and other clientele).

Hudzik also classifies the levels of analysis according to their impacts and outcomes. The assessment of outcomes (or impacts/ends), according to de Wit (2009b, p. 6), evaluates “the achievement of goals and results” (Hudzik, 2015, p. 92), which includes identifying “the ends or goals” and then measuring the results. The researchers also state that outcomes and impacts need to be evaluated within the context of the rationales and objectives: discussing the quality assurance of internationalization, Aerden (2015, p. 8) writes that “the imperative for higher education to internationalise is currently evident, but the reasons and challenges differ”. De Wit (2009a, p. 126) also links the quality assessment of outcomes with the context of rationales and objectives.

In the present work, the analysis was carried out at the institutional level and, thus, this section further focuses on the assessment of internationalization in higher education institutions and their academic units. At the same time, the study includes data collection and analyses at the level of institutional stakeholders, since Hudzik (2015, p. 93) also emphasizes that assessment at the institutional level “[…] would be unacceptably narrow […] if not connected to the institution’s goals for outcomes related to Level 1”, namely, students and clientele.

Another lens for assessing internationalization results has been indicated by Green (2012, p. 2), who noted two related and overlapping concepts: a) the performance of institutions and their subunits and b) student learning outcomes. In both cases, internationalization assessment is part of a large multidisciplinary conversation among researchers and practitioners; for example, student learning assessment can be traced back to the 1930s and 1940s, when the assessment of student learning in colleges was considered part of a broader psychological theory (Learned & Wood, 1938).

In this section, for the purposes of my study, I focus on performance assessment for institutions and their subunits. When discussing the planning part of this type of assessment, Green emphasized how essential it is to articulate the institution’s reasons for developing an internationalization strategy and to show how this contributes to the greater goals of the university or academic unit. Identifying the reasons to internationalize, and shifting the focus from specific instrumentalities to outcomes, helps reposition internationalization as a dimension that supports institutional goals and is meaningful for a range of stakeholders, not only for a small group of “internationalization advocates” (Green, 2012, p. 4). The next stage, according to Green (2012), is that of creating the goals of internationalization, which is a challenging exercise because the goals should be simultaneously conceptual and measurable. This is achieved in the following stage, which requires the definition of a series of indicators that sharpen the understanding of what achieving the goals means. Finally, the planning part should be completed with the stage of “mapping the institutional landscape of international programs, policies, and strategies” (my emphasis) before an institution is able to assess its own international dimension (Green, 2012, p. 5).

Similar stages of internationalization quality assessment, which characterize most existing approaches, are provided by Beerkens et al. (2010, p. 9): (1) mapping – identifying the level of internationalization in an institution; (2) evaluating – determining the contribution of efforts towards internationalization; and (3) profiling – defining institutional identity and illustrating the advantages and ambitions of an organization in terms of its international dimension. The stages of internationalization assessment suggested by Green and Beerkens et al. raise the question of which indicators should reflect the goals, since measurement can be applied to an assortment of indicators. Hudzik and Stohl (2009, p. 9) also noted that the assessment of the impact and outcomes of internationalization should be linked to the overall missions and goals of the institution. Since there is a wide range of institutional types, missions and rationales of internationalization, assessment focus points and criteria vary accordingly. Hudzik (2015, p. 100) listed four models for assessing internationalization:

• the program evaluation model

• the program learning and results model

• the systems model

• a variant of the third option labeled “the logic model”

The program evaluation model (Hudzik, 2015, pp. 100-101) was originally suggested 50 years ago for social programs. Typical components of this model are:

• efforts: funds spent and work or activity undertaken;

• efficiency: analysis of the program costs;

• effectiveness: the degree to which program objectives or goals have been met;

• process: determining whether the program was responsible for achieving goals.

The program learning and results model (Hudzik, 2015, pp. 101-102) was elaborated by Kirkpatrick (1977) and measures four dimensions:

• reactions: how participants reacted to international programs, e.g., surveys or interviews on students’ satisfaction with study-abroad programs;

• learning: what was learned by the participants (principles, facts, techniques and/or skills). It can be measured subjectively (open questions) or more objectively (pre-/post-tests);

• behavior changes: to what extent changes in participants following a program or a course are observed (is there a difference in their behaviors, reactions, abilities or skills connected with participation in international programs?);

• results: what participation in the program contributes to organizational or societal results, e.g., how returning study-abroad students change the learning environment in the classroom.

The origins of the systems model indicated by Hudzik (2015, pp. 102-103) lie in systems engineering and analysis. The model includes the following elements:

• inputs: resources that exist to support international activities, e.g., funding, people, policies and others;

• outputs: activities assumed to support internationalization efforts;

• outcomes: impacts and results that are usually associated with assessing university achievements and missions.

Finally, Deardorff et al. (2009, pp. 23-25) suggested an expanded approach to assess the outcomes of internationalization and used the fourth model – the Program Logic Model (also called the logic model):

• inputs: resources needed to achieve the stated goals, e.g., people, time, money, partners;

• activities: components of internationalization, e.g., learning opportunities across the curriculum, experience of learning abroad, doing research;

• outputs of internationalization: the participants (numbers), e.g., the number of students in education-abroad programs;

• outcomes of internationalization: the results behind the numbers, e.g., measured intercultural competence; includes both short-term and medium-term outcomes;

• impact: long-term impact (consequences/results).

After this overview of the theoretical models illustrating how quality assurance works, it is important to consider the practical approaches which prescribe what concrete aspects should be assessed. As there is no unified way to assess the performance of a university in the area of internationalization (Aerden, 2017, p. 42; Aerden et al., 2013), below I review a number of practical approaches to the quality assurance of specific aspects of internationalization.

In an early volume, Knight and de Wit (1999) provided the guidelines and instruments for assessing internationalization strategies on the basis of institutional pilot reviews in different countries. The authors indicated four dimensions:

• context (higher education system, profile of an institution, analysis of the national and/or international context);

• internationalization policies and strategies at institutional level;

• organizational and support structures (organization, planning and evaluation, financial support and resource assignment, support services and facilities);

• academic programs and students (internationalization of curriculum, domestic and international students, study abroad, and student mobility programs).

As regards the practical approaches to measuring internationalization a decade after the publication mentioned above, Knight (2008, p. 43) emphasizes the importance of measuring progress rather than output only and concludes that both qualitative and quantitative measures should be used in quality assessment processes, and that their identification is a challenge:

They need to be relevant, clear, reliable, consistent, accessible, and easy to use. […] Such measures need to be pertinent to the desired objective and limited to the most relevant […] (and they) need to stand the test of time, as they should be used over a period to get a true picture of progress toward reaching the objective and whether there is any improvement. (Knight, 2008, pp. 48-49)

In terms of existing approaches to internationalization quality assessment, it is important to name the key ones that emerged in previous years. Beerkens et al. (2010) provide a broad overview of such approaches that partly overlaps with another extensive overview by de Wit (2010). Both studies detail approaches developed by national and global associations (ACE, NAFSA, Nuffic, CHE, the Japanese Indicator List, DAAD, SIU, NVAO, IMQT and others) and suggested by individual practitioners and researchers (Chin and Ching, 2009; Ellingboe, 1998; Mestenhauser, as cited in Joris, 2008; and others). On the whole, Beerkens et al. (2010) review 33 tools and indicator sets for assessing internationalization. All the approaches are based on one of the following tools or their combination: self-evaluation, benchmarking, or classification/ranking. The authors conclude that there is no single standard for such assessment that would fit any institution and that some of the listed approaches are based on already existing ones. The literature and case studies considered in the publication highlight that the method for measuring internationalization chosen by an institution has to address at least five points: the goal of the toolbox, the type of indices to be measured, the dimensions of measurement, the structure to be applied, and the method of validating indicators.

After discussing the existing levels, dimensions and approaches to the quality assurance of internationalization, I further outline the way internationalization was measured at Ural Federal University during the implementation of the Russian excellence initiative.

UrFU’s development strategy for 2013-2020 was based on Project 5-100 criteria (as detailed in Chapter 4) and included a road map illustrating the four stages of the excellence scheme. The declared strategic goal was to form a research, educational and innovative international center in the Ural Federal District, with UrFU becoming its core (Ural Federal University, 2013b, p. 4). The key mechanisms involved cooperation with leading research institutions and corporations and entering global academic rankings.

Therefore, the list of first-level indicators (Ural Federal University, 2013b, pp. 5-6) primarily included positions in the QS, THE and ARWU world university rankings and in subject ranking tables. Most of the other indicators directly influenced positions in the above-mentioned rankings: the number of publications in the Scopus and Web of Science databases, the average number of citations per faculty member, and the numbers of international students and faculty. In addition, there was a list of “additional indicators”, some of which belonged to the international dimension. They included the number of publications with international co-authors, the number of international students from non-CIS countries (which, in fact, means the number of non-Russian-speaking students), and the number of master’s and PhD students studying in English. The first-level indicators were split into second-level indicators for academic units and third-level metrics for research centers.

The list of indicators shows how strongly the internationalization assessment of the university depended on Project 5-100 goals, which were oriented towards gaining positions in international rankings. This work will further explain that two out of the three studied academic units followed the internationalization measurement suggested by UrFU. The only exception was the Graduate School of Economics and Management, which, in addition to the university system of metrics, followed its own approach to the quality assurance of internationalization as a result of pursuing international accreditations of business education.

Several conclusions emerge from this section. Firstly, the quality assurance of internationalization can be implemented at various levels and, due to the characteristics of my study, this section mainly focused on the level of the individual institution. Secondly, upon reviewing a broad list of models and approaches, it can be concluded that a universal approach to evaluating the performance and quality of internationalization does not exist yet. Each institution may choose an approach based on its own context before developing a list of indicators, and it can do so by defining the rationales for internationalization, the goals of the international dimension within the larger goals of the institution, and the mapping of the international landscape. Thirdly, the section showed that UrFU had developed its own list of indicators to measure the performance of internationalization, and that this was influenced by the requirements of Project 5-100; the measurement focused mostly on quantitative outcomes.

Finally, the approaches to quality assessment considered in this section were also taken into account for constructing the research design and, in particular, the interview plans which are described in Chapter 3.