| Literature DB >> 24959600 |
Peng Lu, Xiao Cong, Dongdai Zhou.
Abstract
Computerized evaluation is now one of the most important methods of diagnosing learning, and with the application of artificial intelligence techniques to the field of evaluation, computerized adaptive testing has gradually become one of the most important evaluation methods. In such a test, the computer dynamically updates its estimate of the learner's ability level and selects tailored items from the item pool; to meet the needs of the test, the system must therefore execute with relatively high efficiency. To address this problem, we propose a novel web-based testing environment based on a simulated annealing algorithm. During development of the system, we ran a series of experiments comparing the efficiency and efficacy of the simulated annealing method against other methods. The experimental results show that this method selects nearly optimal items from the item bank for each learner, meets a variety of assessment needs, is reliable, and judges learners' ability validly. In addition, using the simulated annealing algorithm to handle the computational complexity of item selection greatly improves the system's efficiency and yields near-optimal solutions.
Year: 2014 PMID: 24959600 PMCID: PMC4052093 DOI: 10.1155/2014/167124
Source DB: PubMed Journal: ScientificWorldJournal ISSN: 1537-744X
Figure 1. Item generation flow chart.
Algorithm 1. Java code description of the algorithm.
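The Java listing itself is not reproduced in this database record. As an illustrative sketch only (the class and method names, the 3PL item-information formula with scaling constant D = 1.7, and the geometric cooling schedule are assumptions, not the authors' code), simulated-annealing selection of a near-optimal item subset from a pool might look like:

```java
import java.util.*;

/** Hypothetical sketch: simulated-annealing selection of k items whose
 *  total 3PL Fisher information at ability theta is near-maximal. */
public class SaItemSelector {

    // 3PL item information at ability theta
    // (a = discrimination, b = difficulty, c = guessing factor).
    static double info(double a, double b, double c, double theta) {
        double p = c + (1 - c) / (1 + Math.exp(-1.7 * a * (theta - b)));
        double q = 1 - p;
        return Math.pow(1.7 * a, 2) * (q / p) * Math.pow((p - c) / (1 - c), 2);
    }

    // Sum of information over the selected item indices.
    static double totalInfo(int[] pick, double[][] pool, double theta) {
        double s = 0;
        for (int i : pick) s += info(pool[i][0], pool[i][1], pool[i][2], theta);
        return s;
    }

    static int[] select(double[][] pool, int k, double theta, long seed) {
        Random rnd = new Random(seed);
        // Start from a random subset of k distinct items.
        List<Integer> idx = new ArrayList<>();
        for (int i = 0; i < pool.length; i++) idx.add(i);
        Collections.shuffle(idx, rnd);
        int[] cur = new int[k];
        for (int i = 0; i < k; i++) cur[i] = idx.get(i);
        double curInfo = totalInfo(cur, pool, theta);

        for (double temp = 1.0; temp > 1e-3; temp *= 0.95) { // geometric cooling
            // Neighbour move: swap one selected item for one unselected item.
            int out = rnd.nextInt(k);
            int in = rnd.nextInt(pool.length);
            boolean alreadyUsed = false;
            for (int i : cur) if (i == in) alreadyUsed = true;
            if (alreadyUsed) continue;

            int[] cand = cur.clone();
            cand[out] = in;
            double candInfo = totalInfo(cand, pool, theta);
            double delta = candInfo - curInfo;
            // Metropolis criterion: always accept improvements; accept
            // worse moves with probability exp(delta / temp).
            if (delta > 0 || rnd.nextDouble() < Math.exp(delta / temp)) {
                cur = cand;
                curInfo = candInfo;
            }
        }
        return cur;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        double[][] pool = new double[100][3];
        for (double[] item : pool) {
            item[0] = 0.5 + rnd.nextDouble();     // discrimination
            item[1] = -2 + 4 * rnd.nextDouble();  // difficulty
            item[2] = 0.25;                       // guessing factor
        }
        int[] picked = select(pool, 10, 0.0, 42L);
        System.out.println("selected " + picked.length + " items, info = "
                + totalInfo(picked, pool, 0.0));
    }
}
```

The swap-one-item neighbourhood keeps every candidate a valid k-item subset, so no repair step is needed; the cooling rate and stopping temperature would be tuned against the pool sizes reported in the tables below.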
Figure 2. The architecture of the web-based testing environment.
Figure 3. Adaptive testing flow chart.
Figure 4. Log home.
Figure 5. Web-based testing environment, student's side.
Item pool scale and parameter descriptions.
| Item pool | Item numbers | Average discrimination | Average difficulty | Average guess factor |
|---|---|---|---|---|
| 1 | 100 | 0.78 | 0.56 | 0.25 |
| 2 | 250 | 0.69 | 0.53 | 0.25 |
| 3 | 500 | 0.71 | 0.59 | 0.25 |
| 4 | 691 | 0.74 | 0.55 | 0.25 |
The average execution time of the algorithm (in seconds).
| Item numbers | Exhaustive search | Random search | SA search: 5 | SA search: 10 | SA search: 15 | SA search: 20 |
|---|---|---|---|---|---|---|
| 100 | 0.857 | 0.421 | 0.433 | 0.524 | 0.682 | 0.826 |
| 250 | 1.648 | 0.596 | 0.602 | 0.717 | 0.921 | 1.012 |
| 500 | 2.515 | 0.826 | 1.096 | 1.251 | 1.367 | 1.578 |
| 691 | 2.973 | 1.022 | 1.463 | 1.752 | 1.958 | 2.147 |
Figure 6. Algorithm execution time comparison.
Figure 7. Content balance comparison.
Comparison of item exposure.
| | Exhaustive search | Random search | SA search |
|---|---|---|---|
| Mean | 0.13 | 0.02 | 0.08 |
| Maximum | 0.90 | 0.20 | 0.60 |
| Minimum | 0.00 | 0.00 | 0.00 |
| Overexposure percentage (%) | 48.00 | 6.00 | 24.00 |
| Never-exposed percentage (%) | 48.30 | 10.10 | 30.20 |
| Maximum number of exposures in each test | 19.00 | 2.00 | 3.00 |
Figure 8. Comparison of the number of items selected.