
Developing a digital competence scale for teachers: validity and reliability study.

Muhammed Murat Gümüş, Volkan Kukul

Abstract

Teachers' digital competence is very significant in terms of integrating digital technologies into the education process. This study aims to develop an up-to-date scale that can determine the digital competencies required for teachers to acquire new skills that arise with the change and development of technology and use them in educational environments. A total of 695 teachers participated in the study. Exploratory and confirmatory factor analyses were used to examine the construct validity of the scale. To assess the discrimination index of the items, the lower 27% and upper 27% groups were determined, and the differences between the groups were examined. Internal consistency coefficients were calculated for the reliability analysis. According to the results of the analysis, the developed scale consists of six factors and 46 items, and the Cronbach Alpha coefficient of the entire scale is 0.975. The factors were identified as "Safety," "Data Literacy," "Problem Solving," "Digital Content Creation," "Communication and Collaboration," and "Ethics," respectively, according to the content of the items. When compared with the DigComp 2.1 framework developed by the European Union, it was determined that the ethical factor emerged differently in this study. As a result, it can be said that the Digital Competency Scale for Teachers is a valid and reliable scale that can be used to measure teachers' digital competencies.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


Keywords:  Digital Competencies; Digital literacy; Teachers’ Digital competence

Year:  2022        PMID: 36065300      PMCID: PMC9433515          DOI: 10.1007/s10639-022-11213-2

Source DB:  PubMed          Journal:  Educ Inf Technol (Dordr)        ISSN: 1360-2357


Introduction

Today, a significant part of society uses digital tools and media in almost all areas of life. The audience, which we can call digital users or the digital population, is growing more rapidly with each passing day. Studies show that more than 60% of the world's population uses digital media and tools (WeAreSocial [WAS], 2021). There are 4.66 billion active internet users and 4.32 billion active mobile internet users (Statista, 2021). According to the social media analyses of WAS (2021), the number of social media users increased by 6% compared to 2020 and reached 4.48 billion people in a year, despite the COVID-19 pandemic, and the time spent on social media increased even more compared to previous years (WAS, 2021). Therefore, the growth of the digital population shows that the tendency to use digital tools has increased. Researchers state that the increasing rate of technology use accompanying the development of technology affects the daily lifestyles of society (Cihan, 2021; Reeves, Ram, Robinson, Cummings, Giles, Pan, & Yeykelis, 2021; Gilleard & Higgs, 2021). Accordingly, studies have emphasized that society should have digital competence for a better quality of life (Bejakovic & Mrnjavac, 2020; Pangrazio et al., 2020). In this regard, the ability of society to acquire digital competencies plays a significant role in the future of institutions and even countries (Castro-Granados & Artavia-Diaz, 2020; Matli & Ngoepe, 2020). National and international institutions and organizations around the world publish digital competence reports so that citizens can improve their digital competence (Carretero et al., 2017; International Society for Technology in Education [ISTE], 2017; Koehler & Mishra, 2009; United Nations Educational, Scientific and Cultural Organization [UNESCO], 2018). These reports specify the competence areas and competencies required for 21st-century citizenship profiles.
This study aims to design and validate an up-to-date measurement tool that takes these reports and studies into account. In line with this main purpose, the following objectives were determined: to analyze the validity and reliability of the Digital Competence Scale for Teachers, and to examine its exploratory and confirmatory factor structure.

Digital competence

In the digital environment, we need tools with which we can perform many operations, from accessing information to changing, editing, saving, and sharing it. The term information and communication technologies is used to describe all these operations performed with these tools, also called telecommunication devices (UNESCO, 2002). In addition, the Turkish Language Institution (TLI, 2021) defines information technology as "the whole of technologies that provide the collection, processing and storage of information, its transmission to any place, and access to it from any place, electronically etc." Information and communication technologies are a necessary tool for using digital media; therefore, they are also needed to ensure digital competence. In general, the concept of digital competence is based on the effective use of technological devices, the basic skills of information and communication technologies, to obtain, process, store, evaluate, produce, and share information, and to communicate and cooperate via the internet (Ilomaki et al., 2011). Digital competence is a concept that expresses the skills required for technology use and that develops, renews, and changes as technology develops (Ilomaki et al., 2011). According to Ala-Mutka et al. (2008), as technology develops, the content and definition of the concept of digital competence will continue to change. Different terms such as information and communication technology skills, digital skills, 21st-century skills, and information and digital literacy are also used to express digital competence (Adeyemon, 2009; Krumsvik, 2008). Various studies are carried out by national and international institutions to define the concept of digital competence and to determine digital competence standards. In the Digital Competence Framework (DigComp 2.1) developed by the European Commission, digital competencies consist of five dimensions: information and data literacy, communication and collaboration, digital content creation, problem solving, and safety. According to this framework, individuals who possess all dimensions and competencies can be defined as digitally competent. Research reports have been created by many institutions, organizations, and researchers using different frameworks such as the Digital Competence Framework (DigComp 2.1) by the European Commission, the ICT Competency Framework for Teachers by UNESCO, the ISTE Educator Standards by the International Society for Technology in Education, and the TPACK framework by Mishra & Koehler (2006). These reports include information on digital competence and on determining the digital competencies of society.

Digital competence in Education

Numerous studies have been carried out on the concept of digital competence in education in order to use digital technologies effectively and efficiently. The European Commission (2017), which conducted one of these studies, defined the skills that all individuals in the field of education should have and competence items for acquiring these skills (Carretero et al., 2017). The International Society for Technology in Education (ISTE) has presented standards that can enable educators to make education more efficient and of higher quality, and it aims to determine digital competencies in education in line with these standards (ISTE, 2017). On the other hand, it can be argued that the UNESCO ICT Competency Framework for Teachers (UNESCO, 2018), which offers a framework for teachers' competence in using information and communication technologies, provides a different perspective on education policies. Also, the Organization for Economic Co-operation and Development [OECD] (2019) has provided guidance reports and frameworks for digital competence skills directly for the field of education, so that both teachers and students can organize their knowledge and skills in line with developing technology. Considering all these reports and studies, it can be stated that the acquisition of digital competencies in education plays a significant role in improving the knowledge and skills of teachers and students around the world. In addition, these skills are stated to be among the essential competencies deemed necessary for economic and social development as a society (Hatlevik & Christophersen, 2013). Therefore, to develop digital competencies in education with government support, new policies are being developed all over the world, and existing policies continue to be updated as technology develops (Carretero et al., 2017; Tomte, 2013).

Digital Competencies for Teachers

Within the scope of updated education programs, teachers should have digital competence to use digital technologies more efficiently in the education process and to improve students' digital skills (Instefjord & Munthe, 2017). Previous studies have shown that teachers' digital experiences during their undergraduate education affect their ability to use their digital competence effectively and efficiently in their professional life (Tomte et al., 2015). Having digital skills is essential for teachers in terms of being aware of the development of technology and integrating these technologies into teaching (Hatlevik & Christophersen, 2013). Teachers' lack of these skills can negatively affect many aspects, from students' academic success to the general outcomes of the education system (Yazar & Keskin, 2016). It has also been stated in previous studies that deficiencies in the education system stem from low levels of teacher digital competence (Hanell, 2018). Teachers' high digital competence will not only facilitate students' digital learning and increase their motivation, but will also improve their learning quality and support faster and more enjoyable learning (Caena & Redecker, 2019; Redecker, 2017). Therefore, as technology develops and new technologies are integrated into schools, teachers need to develop their digital skills accordingly (Starkey, 2020). There are many reports and frameworks that provide guidance for teachers to develop their digital competence (Carretero et al., 2017; ISTE, 2017; OECD, 2019; UNESCO, 2018). According to these reports and frameworks, teachers should have digital skills and constantly improve themselves in information and data literacy, communication and collaboration, digital content creation, safety, problem solving, and similar competencies.
Also, it is considered that designing valid and reliable measurement tools that can determine the digital competencies of teachers and eliminate the deficiencies by revealing the digital profiles of teachers within the scope of these reports and frameworks will play a significant role in the development of teachers’ digital competence.

Method

Study Group

Teachers working at primary, secondary, and high school levels in Amasya in the 2020–2021 academic year constitute the study group. A total of 695 teachers participated in the study: 343 male and 352 female. Table 1 shows the distribution of teachers participating in the study by gender and field of study.
Table 1

Distribution of Participants by Field of Study and Gender

Field of Study                          Female  Male  Total
Information Technologies                  45     28     73
Other Fields                              30     15     45
Religious Culture and Moral Knowledge     23     36     59
Science Group                             38     25     63
Fine Arts Group                            8     23     31
Mathematics                               25     28     53
Vocational High School Group              10     27     37
Primary School                            88     72    160
Social Sciences Group                     11     38     49
Turkish Language                          33     29     62
Foreign Language Group                    41     22     63
Total                                    352    343    695

Scale Development Process

DeVellis's (2014) scale development principles were followed while developing the Teacher Digital Competence Scale. To create an item pool, a literature review was conducted, and frameworks and studies related to digital competence were examined. The study of Mannila et al. (2018) and other related studies based on the DigComp 2.1 framework developed by the European Commission were also examined. In addition, digital competence concepts, definitions, and sub-dimensions presented in published frameworks on digital competence, such as those of the OECD, UNESCO, and ISTE, were taken into account. As a result, an item pool of 69 items was created. Lawshe (1975) and McKenzie et al. (1999) stated that consulting at least five experts is sufficient when obtaining expert opinion in scale content validity studies. The opinions of eight experts were taken to determine whether the items were clear. These experts consisted of three faculty members working in the Computer Education and Instructional Technologies department, an assessment specialist, a language expert, and three teachers working in the Ministry of National Education. In line with the experts' opinions, the items were re-examined and the necessary adjustments were made: items deemed unsuitable were removed from the scale, appropriate suggested items were added, and items that needed adjustment were updated. As a result, a draft scale consisting of 69 items with a 5-point Likert response format (1 = "Strongly Disagree", 2 = "Disagree", 3 = "Undecided", 4 = "Agree", 5 = "Strongly Agree") was prepared.
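The Lawshe (1975) criterion mentioned above is usually quantified with the content validity ratio (CVR), computed per item from the number of experts rating it "essential". The sketch below is illustrative only; the study reports the expert review qualitatively, so the vote counts here are hypothetical:

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's (1975) CVR: (n_e - N/2) / (N/2), where n_e is the number
    of experts rating the item "essential" and N is the panel size."""
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical vote on one item by a panel of 8 experts (the panel size
# used in this study): 7 of 8 rate the item "essential".
print(content_validity_ratio(7, 8))  # 0.75
```

A CVR of 1.0 means unanimous agreement that the item is essential, while 0.0 means only half the panel agrees; Lawshe's tables give the minimum CVR required for a given panel size.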

Data Analysis

The 69-item draft scale was administered to 740 teachers in total. The obtained data were analyzed using the SPSS software. In the analysis phase, since the data of 45 participants violated the assumption of normality, they were excluded from the data set; thus, data from 695 participants were analyzed. KMO and Bartlett tests were performed to assess the suitability of the data for factor analysis, and exploratory factor analysis was performed to reveal the structures underlying the items (Şencan, 2005: 355; Büyüköztürk, 2002). The factor loadings of the scale items and the number of factors were determined using principal component analysis. Factor loadings were examined using the Varimax rotation method. Items with factor loadings less than 0.40 were removed from the scale and the analyses were repeated. Confirmatory factor analysis was performed to determine to what extent the results obtained in the exploratory factor analysis fit the structure intended to be measured. The values obtained as a result of the confirmatory factor analysis, and the intervals interpreted according to Schermelleh-Engel et al. (2003), are summarized in Table 5. The 27% upper and 27% lower groups were determined, and the difference between the groups was examined to determine item discrimination regarding the validity of the scale. The Cronbach's alpha reliability coefficient was calculated to determine the level of internal consistency.
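The 0.40 loading cut-off described above can be sketched as follows; the loading matrix here is made up for illustration and is not the study's actual output:

```python
import numpy as np

def assign_items(loadings: np.ndarray, threshold: float = 0.40):
    """Keep items whose largest absolute loading meets the threshold,
    and assign each kept item to the factor on which it loads highest.
    Returns {item_index: factor_index}; dropped items are omitted."""
    assignment = {}
    for i, row in enumerate(np.abs(loadings)):
        if row.max() >= threshold:
            assignment[i] = int(row.argmax())
    return assignment

# Illustrative 3-item x 2-factor loading matrix (values invented):
L = np.array([[0.75, 0.10],
              [0.20, 0.35],   # max loading 0.35 < 0.40 -> item removed
              [0.15, 0.62]])
print(assign_items(L))  # {0: 0, 2: 1}
```

In the study this filtering was iterated: after removal the analysis was repeated until all remaining items met the threshold.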
Table 2

Factor Loads of the Scale Items

Factor 1: I 52 (0.756), I 49 (0.754), I 50 (0.742), I 51 (0.742), I 53 (0.739), I 54 (0.735), I 47 (0.700), I 46 (0.670), I 48 (0.660), I 44 (0.635)
Factor 2: I 2 (0.803), I 3 (0.737), I 7 (0.719), I 1 (0.704), I 5 (0.700), I 4 (0.688), I 6 (0.615), I 10 (0.577), I 11 (0.540)
Factor 3: I 66 (0.738), I 68 (0.724), I 65 (0.719), I 64 (0.718), I 69 (0.692), I 67 (0.671), I 60 (0.597), I 62 (0.589), I 61 (0.581)
Factor 4: I 40 (0.794), I 41 (0.781), I 39 (0.762), I 38 (0.738), I 43 (0.728), I 42 (0.727)
Factor 5: I 22 (0.695), I 19 (0.685), I 21 (0.651), I 24 (0.649), I 20 (0.632), I 16 (0.630), I 23 (0.618)
Factor 6: I 26 (0.874), I 25 (0.866), I 27 (0.835), I 17 (0.733), I 29 (0.664)

Eigenvalue (Factors 1-6): 22.060, 3.770, 2.521, 1.767, 1.716, 1.271
Variance (%) (Factors 1-6): 15.916, 12.664, 12.348, 12.117, 9.934, 8.988
Total Variance (%) = 71.967

Findings and Interpretations

A) Findings regarding the validity of the scale

The exploratory and confirmatory factor analyses for the validity and reliability of the "Teacher Digital Competence Scale," item discrimination levels, and internal consistency coefficients were examined. Findings related to these analyses are explained below.

Construct validity

The construct validity results of the developed scale were obtained by the factor analysis method. Before the factor analysis, the Kaiser-Meyer-Olkin (KMO) and Bartlett's sphericity tests were performed to test whether the data were suitable for factor analysis and whether the sample size was adequate. The KMO value was calculated as 0.969; a KMO value greater than 0.8 indicates sampling adequacy (Field, 2013; Worthington & Whittaker, 2006). The significance value (p) of this test is less than 0.05, which indicates that meaningful factors may be obtained (Büyüköztürk, 2002). The Bartlett test was likewise significant at p < .05, with an approximate chi-square value of 30863.467 and 1035 degrees of freedom (df). In the literature, these Bartlett values are considered suitable for the factor analysis phase (Kalaycı, 2010: 322). Factor analysis in scale development studies aims to reveal the structures underlying the items by classifying them according to the item correlation coefficients (Şencan, 2005; Büyüköztürk, 2002). Accordingly, factor analysis was applied to the 46 items of the teacher digital competence scale. In the literature, factor eigenvalues are expected to be greater than one (Çokluk et al., 2014). In addition, it is stated that if an item loads on more than one factor and the difference between its loadings is less than 0.10, the item should be removed (DeVellis, 2014: 128). In this analysis, the results were obtained using principal component analysis and the Varimax rotation method (Büyüköztürk, 2002). Six factors were formed according to these results. Figure 1 shows the scree plot used to determine the final number of factors.
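As a rough illustration of how a KMO value like the reported 0.969 is derived, the following sketch computes the measure from a correlation matrix via partial correlations. The toy 3-variable matrix is invented for the example, not the study's data:

```python
import numpy as np

def kmo(R: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation
    matrix R: the sum of squared off-diagonal correlations divided by that
    sum plus the sum of squared off-diagonal partial correlations."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # partial correlation matrix
    off = ~np.eye(R.shape[0], dtype=bool)    # off-diagonal mask
    r2 = (R[off] ** 2).sum()
    q2 = (partial[off] ** 2).sum()
    return r2 / (r2 + q2)

# Toy correlation matrix for three hypothetical items
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(round(kmo(R), 3))
```

When the shared variance among items dominates the partial (unique pairwise) correlations, the ratio approaches 1, which is why a KMO near 0.97 signals excellent factorability.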
Fig. 1

Scree Plot Graph

As can be seen in Fig. 1, the curve flattens after the sixth factor, where the eigenvalue decreases below one. Hence, according to the scree plot, the number of factors of the scale was determined to be six. The eigenvalues, explained variance, and cumulative variance of the six factors with eigenvalues above one are given in Table 3, both as initial eigenvalues and as rotation sums of squared loadings. According to the analysis, the total variance explained by the 46 items was 71.967%.
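The eigenvalue-greater-than-one rule used to read the scree plot can be sketched as follows; the trailing sub-one eigenvalue is hypothetical, appended only to show where the cut-off falls:

```python
import numpy as np

# The six eigenvalues reported in this study (Table 3), plus one
# hypothetical sub-one eigenvalue to illustrate the cut-off
eigenvalues = np.array([22.060, 3.770, 2.521, 1.767, 1.716, 1.271, 0.95])

# Kaiser criterion: retain factors whose eigenvalue exceeds one,
# i.e. the components above the scree plot's flattening point
n_factors = int((eigenvalues > 1.0).sum())
print(n_factors)  # 6
```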
Table 3

Eigenvalue and Explained Variance Table

        Initial Eigenvalues                              Rotation Sums of Squared Loadings
Factor  Eigenvalue  Explained Var. (%)  Cumulative (%)   Eigenvalue  Explained Var. (%)  Cumulative (%)
1       22.060      47.956              47.956           7.321       15.916              15.916
2        3.770       8.195              56.151           5.825       12.664              28.580
3        2.521       5.482              61.633           5.680       12.348              40.928
4        1.767       3.841              65.473           5.574       12.117              53.045
5        1.716       3.731              69.204           4.570        9.934              62.979
6        1.271       2.763              71.967           4.134        8.988              71.967
As can be seen in Table 3, each factor's eigenvalue is greater than one, and the values parallel the scree plot. Therefore, it was decided that the scale developed in the study should have six factors. Values analyzed using the Varimax rotation method were filtered to exclude loadings less than 0.3 (Büyüköztürk, 2002; Çokluk et al., 2014; Pallant, 2001). This method was chosen for better interpretation of the data and for further observing the appropriateness of the items for the factors (DeVellis, 2014). This value is also expressed as adequate for multi-factor scales by Büyüköztürk et al. (2019); in the field of social sciences, values between 0.3 and 0.4 are considered adequate (Büyüköztürk, 2002). When the eigenvalues were examined, it was observed that the scale had six factors; the factor loadings of the items are given in Table 2. As can be seen in Table 2, the 46 items remaining after removal constitute six factors. According to the results obtained, the factors were formed as follows: factor 1: I 52, I 49, I 50, I 51, I 53, I 54, I 47, I 46, I 48, and I 44; factor 2: I 2, I 3, I 7, I 1, I 5, I 4, I 6, I 10, and I 11; factor 3: I 66, I 68, I 65, I 64, I 69, I 67, I 60, I 62, and I 61; factor 4: I 40, I 41, I 39, I 38, I 43, and I 42; factor 5: I 22, I 19, I 21, I 24, I 20, I 16, and I 23; factor 6: I 26, I 25, I 27, I 17, and I 29. Accordingly, item loadings varied between 0.635 and 0.756 for factor 1, between 0.540 and 0.803 for factor 2, between 0.581 and 0.738 for factor 3, between 0.727 and 0.794 for factor 4, between 0.618 and 0.695 for factor 5, and between 0.664 and 0.874 for factor 6. The rate of total variance explained was found to be 71.967%. This value is considered adequate when it is more than 40% (Büyüköztürk, Kılıç Çakmak, Akgün, Karadeniz, & Demirel, 2019). Table 4 shows the sample items and factor titles of the scale.
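The explained-variance percentages in Table 3 follow directly from each eigenvalue divided by the number of items (46). A minimal check using the initial eigenvalues reported above:

```python
import numpy as np

# Initial eigenvalues of the six retained factors (Table 3)
n_items = 46
eigenvalues = np.array([22.060, 3.770, 2.521, 1.767, 1.716, 1.271])

explained = eigenvalues / n_items * 100   # % of total variance per factor
cumulative = np.cumsum(explained)         # running total across factors

print(np.round(explained, 3))
print(round(float(cumulative[-1]), 3))    # close to the reported 71.967%
```

Small third-decimal differences from the published table arise because the table's eigenvalues are themselves rounded.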
Table 4

Distribution of Items by Factors and Sample Items

Safety
  Items: 44, 46, 47, 48, 49, 50, 51, 52, 53, 54
  Sample item: I can take precautions against risks and threats in digital environments.

Data Literacy
  Items: 1, 2, 3, 4, 5, 6, 7, 10, 11
  Sample item: I can find data, information and content by searching digital media.

Problem Solving
  Items: 60, 61, 62, 64, 65, 66, 67, 68, 69
  Sample item: I can find solutions to the problems I will experience in developing my digital competence.

Digital Content Creation
  Items: 38, 39, 40, 41, 42, 43
  Sample item: I can design the most appropriate program in the digital environment to solve a specific problem or perform a specific task.

Communication and Collaboration
  Items: 16, 19, 20, 21, 22, 23, 24
  Sample item: I can use appropriate tools and technologies for collaborative processes in the digital environment.

Ethics
  Items: 17, 25, 26, 27, 29
  Sample item: I follow ethical principles when using or disseminating content belonging to others in the digital environment.
As can be seen in Table 4, the items gathered under the first factor were related to expressions such as risk, threat, privacy, and danger, and this factor was therefore titled "Safety"; it consists of ten items. The items collected under the second factor were related to expressions such as filtering, implementation, interpretation, and decision making, so this factor was titled "Data Literacy"; it consists of nine items. The items collected under the third factor were related to expressions such as problem, solution, improvement, and evaluation, so this factor was titled "Problem Solving"; it consists of nine items. The items collected under the fourth factor were related to expressions such as copyright, license, design, and development, so this factor was titled "Digital Content Creation"; it consists of six items. The items collected under the fifth factor were related to expressions such as sharing, digital citizenship, and cooperation, so this factor was titled "Communication and Collaboration"; it consists of seven items. Finally, the items collected under the sixth factor were related to expressions such as ethics, principles, and social convention, so this factor was titled "Ethics"; it consists of five items. On the whole, the factor-item distribution estimated before the factor analysis matched the distribution obtained afterward. However, beyond the five factors anticipated at the outset, a new sixth factor, the ethics factor, emerged from the analysis.
While the items under the ethics factor were initially expected to fall under the safety factor, the factor analysis showed that they belong under a separate ethics factor. According to the exploratory factor analysis, the Teacher Digital Competence Scale consists of six factors and 46 items in total. Confirmatory factor analysis is performed to determine to what extent the results obtained in the exploratory factor analysis fit the structure intended to be measured (DeVellis, 2014: 151). Table 5 shows the data obtained as a result of the confirmatory factor analysis, the names of these indices, and the acceptable values in the literature.
Table 5

Confirmatory Factor Analysis Fit Index Values

Index     Teacher Digital Competence Scale Value   Acceptable Value   Interpretation
CMIN/DF   3.264                                    CMIN/DF < 5        Acceptable
GFI       0.827                                    GFI > 0.9          -
AGFI      0.807                                    AGFI > 0.9         -
NFI       0.900                                    NFI > 0.8          Acceptable
IFI       0.928                                    IFI > 0.8          Acceptable
CFI       0.928                                    CFI > 0.9          Acceptable
RMSEA     0.057                                    RMSEA < 0.08       Acceptable
RMR       0.042                                    RMR < 0.08         Acceptable
SRMR      0.0483                                   SRMR < 0.05        Acceptable
Confirmatory factor analysis fit index values and their interpretations can be observed in Table 5. It is stated in the literature that it is appropriate to interpret these values by considering them as a whole (Tabachnick & Fidell, 2013; Yaşlıoğlu, 2017; Çapık, 2014). Accordingly, the chi-square goodness-of-fit ratio (CMIN/DF), which reflects the fit of the model given the sample size, was observed as 3.264 and interpreted as "Acceptable". The goodness-of-fit index (GFI), a value less sensitive to sample size, was observed as 0.827. The adjusted goodness-of-fit index (AGFI), obtained by adjusting the GFI for the degrees of freedom, was observed as 0.807. The normed fit index (NFI), which is not affected by the parameters in the model, was observed as 0.900 and interpreted as "Acceptable". The incremental fit index (IFI), which is less sensitive to sample size, was observed as 0.928 and interpreted as "Acceptable". The comparative fit index (CFI), which examines the inconsistencies between the items, was observed as 0.928 and interpreted as "Acceptable". The root mean square residual (RMR), which expresses the discrepancy between the model and the sample, was observed as 0.042 and interpreted as "Acceptable". Its standardized counterpart, the SRMR, was observed as 0.0483 and interpreted as "Acceptable". The root mean square error of approximation (RMSEA), which assesses the discrepancy between the model and the population covariance, was observed as 0.057 and interpreted as "Acceptable". Figure 2 shows the factor model produced as a result of the analysis and the findings regarding the relationship between the factors and their items.
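The threshold comparisons in Table 5 can be expressed as a small checker. The cut-offs below are the ones cited in this study, with the conventional RMSEA < 0.08 criterion; they are not a general-purpose standard, and different sources use stricter bounds:

```python
def check_fit(indices: dict) -> dict:
    """Compare CFA fit indices to their cut-offs. Each entry maps an
    index name to (value, cutoff, smaller_is_better)."""
    verdict = {}
    for name, (value, cutoff, smaller_is_better) in indices.items():
        ok = value < cutoff if smaller_is_better else value > cutoff
        verdict[name] = "Acceptable" if ok else "-"
    return verdict

# Values reported for the Teacher Digital Competence Scale (Table 5)
fit = {
    "CMIN/DF": (3.264,  5.0,  True),
    "GFI":     (0.827,  0.9,  False),
    "AGFI":    (0.807,  0.9,  False),
    "NFI":     (0.900,  0.8,  False),
    "IFI":     (0.928,  0.8,  False),
    "CFI":     (0.928,  0.9,  False),
    "RMSEA":   (0.057,  0.08, True),
    "RMR":     (0.042,  0.08, True),
    "SRMR":    (0.0483, 0.05, True),
}
print(check_fit(fit))
```

This reproduces the Table 5 pattern: only GFI and AGFI fall short of their thresholds, which is why the authors interpret the model as acceptable overall.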
Fig. 2

Diagram of Confirmatory Factor Analysis

The standardized correlation values of the confirmatory factor analysis results can be observed in Fig. 2. How well each item fits or expresses the factor to which it belongs is determined according to the results of this correlation. Accordingly, correlation values were observed between 0.78 and 0.89 in the Safety (S) factor, between 0.61 and 0.83 in the Data Literacy (DL) factor, between 0.63 and 0.88 in the Problem Solving (PS) factor, between 0.79 and 0.86 in the Communication and Collaboration (CC) factor, between 0.76 and 0.95 in the Digital Content Creation (DCC) factor, and between 0.63 and 0.93 in the Ethics (E) factor. Also, to improve the model, covariances can be added between error terms according to the observed relations between items (Schreiber, Nora, Stage, Barlow, & King, 2006). Accordingly, covariances were drawn between the error terms e4-e5, e8-e9, e18-e19, e21-e22, e36-e37, and e40-e41, corresponding to the relationships between the items I 49 - I 50, I 53 - I 54, I 10 - I 11, I 61 - I 62, I 38 - I 39, and I 42 - I 43, respectively. As a result, the fact that the standardized values are 0.61 and above, and each is below one, shows that the items represent the variables well (Aytaç & Öngen, 2012).

Item discrimination

To determine the discrimination ability of the Teacher Digital Competence Scale in terms of the features it measures, the discrimination between the highest and lowest scorers was calculated over the total scores of the answers given to the scale items (Büyüköztürk, 2002). For the item analysis based on the difference between group means, the participants were divided into two groups, the lower 27% and the upper 27%, calculated over the total scores. An independent samples t-test was performed to determine whether the difference between the scores of the lower group and the upper group was significant for all items (Büyüköztürk et al., 2019: 187). Table 6 shows the t-test analysis results.
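The 27% group procedure can be sketched as follows with synthetic total scores (the real data are not available here); only the group sizes and degrees of freedom mirror the study:

```python
import numpy as np

def independent_t(x: np.ndarray, y: np.ndarray):
    """Pooled-variance independent-samples t statistic and its df."""
    n1, n2 = len(x), len(y)
    s1, s2 = x.var(ddof=1), y.var(ddof=1)
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    t = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

rng = np.random.default_rng(0)
scores = rng.normal(170, 25, size=695)   # synthetic total scores, illustration only
k = round(len(scores) * 0.27)            # 27% of 695 = 188, as in the study
ordered = np.sort(scores)
t, df = independent_t(ordered[-k:], ordered[:k])  # upper vs lower group
print(df)  # 374, matching the reported degrees of freedom
```

Because the groups are formed from the extremes of the score distribution, the resulting t value is necessarily large; the substantive question is whether each individual item shows the same pattern, which Table 7 reports.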
Table 6

t-Test Analysis Result for the Lower Group and Upper Group Mean Scores

Groups           N    Mean       SD        df   t       p
Lower 27% group  188  135.0372   14.05504  374  50.314  .000*
Upper 27% group  188  207.3777   13.82360

*p < .001
As can be seen in Table 6, a significant difference was found between the lower 27% and upper 27% groups (t(374) = 50.314, p < .05). Table 7 shows the results of the item analysis of the lower 27% and upper 27% group differences, carried out to determine the proficiency of all the items in the scale in terms of the features they measure.
Table 7

Teacher Digital Competence Scale Lower 27% and Upper 27% Intergroup t-Test Results

Item No  t        Item No  t        Item No  t
1        11.816   17       25.249   33       21.574
2        16.109   18       13.324   34       21.906
3        18.200   19       13.600   35       25.186
4        22.518   20       14.460   36       22.629
5        16.132   21       10.194   37       27.142
6        19.071   22       22.531   38       28.816
7        17.438   23       22.849   39       27.202
8        16.199   24       22.535   40       27.156
9        19.238   25       26.069   41       12.646
10       20.543   26       22.562   42       19.337
11       11.197   27       19.726   43       18.541
12       19.901   28       23.298   44       27.675
13       21.405   29       24.384   45       24.644
14       19.888   30       20.679   46       20.823
15       20.731   31       21.061
16       21.734   32       23.650

Total = 50.314; p < .001
As can be seen in Table 7, the difference between the lower 27% and upper 27% groups of the Teacher Digital Competence Scale was significant for every item, with t values ranging between 10.194 and 28.816. The t value for the total scale score was 50.314. These results are significant (p < .001). Therefore, it can be stated that the differences for each factor, each item, and the whole scale are significant and that discrimination is high.

B) Findings regarding scale reliability

After the validity study, a scale consisting of 46 items under six factors was obtained as a result of the factor analysis. Reliability analysis is performed to measure the consistency of the scale items with each other and to determine how well the items measure the intended features (Büyüköztürk, 2002; Kalaycı, 2010; Yiğit, Bütüner & Dertlioğlu, 2008). Therefore, it can be stated that the reliability study supports the factor analysis.

Internal consistency level

In the literature, it is stated that methods such as test-retest, parallel forms, split-half, and Cronbach's alpha are used to measure the reliability of a scale (Büyüköztürk, 2002). In this study, the Cronbach Alpha values recommended for Likert-type scales were calculated to determine internal consistency after the factor analysis (Sönmez, 2005). As a result of the analysis, the alpha value was found to be 0.975. According to the literature, a scale is considered reliable when this value is over 0.60 (George & Mallery, 2003; Büyüköztürk, 2002; Pedersen & Lui, 2003; Field, 2013). Moreover, according to Çokluk et al. (2014), a value between 0.80 and 1.00 is considered highly reliable; the value found in this study, 0.975, falls in this range. Therefore, it can be stated that the internal consistency coefficient obtained in this study is very good. Table 8 shows the internal consistency coefficients calculated for each of the factors in the scale.
Table 8

Teacher Digital Competence Scale Reliability Coefficients

Dimensions                         Number of Items   Internal Consistency Coefficient (Cronbach Alpha)   Level of Reliability
Teacher Digital Competence Scale   46                0.975                                               High
Safety                             10                0.957                                               High
Data Literacy                      9                 0.916                                               High
Problem Solving                    9                 0.947                                               High
Digital Content Creation           6                 0.933                                               High
Communication and Collaboration    7                 0.951                                               High
Ethics                             5                 0.908                                               High
As can be seen in Table 8, the reliability coefficient for the entire 46-item scale was 0.975. For the sub-dimensions, the coefficients were 0.957 for Safety (ten items), 0.916 for Data Literacy (nine items), 0.947 for Problem Solving (nine items), 0.933 for Digital Content Creation (six items), 0.951 for Communication and Collaboration (seven items), and 0.908 for Ethics (five items).
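The Cronbach's alpha coefficients reported in Table 8 are computed from the item variances and the variance of the total score. A minimal sketch, assuming a respondents-by-items score matrix (the function name and data are hypothetical, not from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of row totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When the items covary strongly, the total-score variance dominates the summed item variances and alpha approaches 1, which is the pattern behind the high coefficients in Table 8.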

Discussion

In this study, the Teacher Digital Competence Scale was developed to monitor the digital profiles of teachers. The five-point Likert scale consists of six factors and a total of 46 items. The items under each factor were examined one by one. Accordingly, the first factor, consisting of ten items, was titled "Safety"; the second factor, consisting of nine items, was titled "Data Literacy"; the third factor, consisting of nine items, was titled "Problem Solving"; the fourth factor, consisting of six items, was titled "Digital Content Creation"; the fifth factor, consisting of seven items, was titled "Communication and Collaboration"; and the sixth factor, consisting of five items, was titled "Ethics". When these results are compared with the DigComp 2.1 framework developed by the European Union, the ethics factor that emerged in this study differs from the conceptual framework. This difference may be because the ethics and safety factors measure different skills, or it may be due to social and cultural differences. The Turkish Language Association (TLA) defines ethics as "the set of behaviors that should be followed or avoided by the parties among various professions" and states that it is related to moral values. Safety, on the other hand, is defined as "the uninterrupted legal order in social life, the situation in which people can live without fear" (TLA, 2021). Although the two concepts are intertwined and associated with each other in the digital sense, it can be stated that safety and ethics are expressed as two different concepts.
Accordingly, individuals who can ensure their own safety in the digital sense do not necessarily comply with ethical values; in the same way, individuals who exhibit ethical behavior in the digital sense, or who attach importance to ethical values, cannot necessarily ensure their own digital safety. There are many scale studies in the literature with results both similar to and different from those of this study. One of them is the scale developed to measure the indicators of information and communication technologies for teacher candidates in Turkey (Akbulut, Kesim & Odabaşı, 2007). This scale consists of 41 items and ten dimensions, including technology aptitude, learning-teaching methods, ethics, special education requirements, infrastructure, professional development, access, health, safety and information technologies, and content knowledge. Another measurement tool developed for teacher candidates is the educational technology standards self-efficacy scale designed by Çoklar (2008). This 41-item measurement tool consists of six factors: knowledge of technological processes and concepts; planning and designing learning environments and learning experiences; assessment and evaluation; productivity and professional practices; social, ethical, legal, and humanitarian issues; and planning instruction according to individual differences and special needs. Bayraktar (2015) designed a scale to determine teachers' level of use of educational technologies. It is a 38-item scale consisting of four dimensions: technology literacy, technology integration into the course, social ethics and legal provisions, and communication. In another scale development study, Mannila et al. (2018) developed a 27-item scale consisting of five dimensions: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving.
Kong et al. (2019) designed a 16-item scale that can follow the development of students' digital skills and consists of four dimensions: meaningfulness, impact, creativity belief, and competence belief. In another scale study aimed at students, Bayrakcı (2020) developed a 29-item measurement tool for digital competencies and digital literacy consisting of six dimensions: ethics and responsibility, general knowledge and functional skills, daily use, professional production, privacy and safety, and the social dimension. The literature thus contains many measurement tools with structures both similar to and different from the one designed in this study. While some of these studies (Mannila et al., 2018) are similar to this study in terms of dimensions, subject, and target audience, others (Akbulut et al., 2007; Bayrakcı, 2020; Bayraktar, 2015; Çoklar, 2008; Kong et al., 2019) are similar in terms of subject or target audience but differ in their dimensions. This situation may vary depending on the target audience to be assessed and evaluated. It may also differ due to differences in the conceptual frameworks on which the scales are based, the technological developments of the period in which each scale was designed, or social and cultural differences. The data obtained from the 69-item form were tested with exploratory factor analysis, confirmatory factor analysis, and the comparison of the lower 27% and upper 27% group means. Beforehand, the Bartlett test was applied to check that the data were suitable for factor analysis, and the resulting values were significant. In the factor analysis, principal component analysis, which is widely used in the social sciences, and the varimax orthogonal rotation method were used so that the factors could be interpreted more easily. Expert opinions were obtained for the validity of the scale, and the items were found to be at the desired level for what was to be measured.
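The extraction-and-rotation step described above (principal component analysis followed by varimax rotation) can be sketched as follows. This is an illustrative reconstruction under standard formulas (Kaiser-style varimax via SVD), not the authors' actual analysis code; all names are hypothetical.

```python
import numpy as np

def pca_loadings(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Unrotated principal-component loadings from the correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    idx = np.argsort(eigvals)[::-1][:n_factors]      # largest components first
    return eigvecs[:, idx] * np.sqrt(eigvals[idx])   # scale vectors to loadings

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD step of the classic varimax criterion (simplicity of loadings)
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (1.0 / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < var * (1 + tol):  # converged: criterion stopped improving
            break
        var = s.sum()
    return loadings @ R
```

Because the rotation is orthogonal, each item's communality (the sum of its squared loadings) is unchanged; only the distribution of loadings across factors becomes simpler to interpret.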
In addition, factor analysis determined that the construct of the scale consists of 46 items and six factors. Cronbach's alpha was computed to measure how consistent the items of this scale were with one another, and the values were found to be reliable. The exploratory factor analysis results were confirmed by confirmatory factor analysis, in which several fit indices were computed and compared with the acceptable values reported in the literature. All of these values were within the acceptable ranges, so the structure to be measured was found to be appropriate. Unlike other scales, the Teacher Digital Competence Scale includes up-to-date expressions that can capture the digital competencies required to acquire the new skills that the change and development of technology bring into our lives and to use these skills in education. Thus, with this scale, teachers will be able to determine their digital competence with respect to current technology skills and to address any deficiencies. As a result, it can be said that the Teacher Digital Competence Scale is a valid and reliable scale that can be used to measure teachers' digital competencies.

Conclusions

In this study, a valid and up-to-date measurement tool was developed to determine teachers' digital competencies. The DigComp 2.1 framework developed by the European Commission was used for this purpose. A measurement tool consisting of six factors and a total of 46 items was developed: "Safety", "Data Literacy", "Problem Solving", "Digital Content Creation", "Communication and Collaboration", and "Ethics". This measurement tool is thought to be important in determining teachers' digital competencies, because these competencies play an important role in teachers' effective and efficient use of technology inside and outside the classroom and in guiding students in the use of technology. They can also directly or indirectly affect the interaction between student and teacher, the student's interest in the lesson, and the student's academic success in general. Keeping teachers' digital competencies up to date means, in other words, that teachers are digitally ready for education and can respond to students' needs and problems in this sense. The role of measurement tools is therefore important for monitoring teachers' digital competencies and finding out at what level these skills are. As technology develops, different competencies may be needed and new skills may have to be developed. For this reason, in this study a scale was designed with which teachers can test their current digital competencies with respect to current technologies.

Limitations and Recommendations

Although the sample size was sufficient in this study, teachers' digital competencies could be examined with a larger group of teacher participants. In addition, different research studies that aim to model teachers' competencies can be planned using the scale developed in this study. In this study, exploratory and confirmatory factor analyses were performed during the scale development process. Then, a reliability analysis was performed using the Cronbach Alpha internal consistency coefficient. Due to the pandemic, stability studies (such as test-retest) could not be conducted for reliability. The analysis results of the study are limited to the data obtained from the specified analyses.