Towards an adaptive delivery of evaluation tools

Abstract

  • Currently, online tests, quizzes and other forms of computer-assisted evaluation tend to be delivered over a number of channels, such as the web, SMS, MMS, podcasts and streaming video feeds, on an increasing variety of ubiquitous devices. In a number of Intelligent Tutoring Systems (ITS), such as ActiveMath, tests are assembled by the system from a pool of questions using student and domain models that are manipulated with automated reasoning engines such as Elvira or Jess. It may be argued that such forms of evaluation are adaptive on a per-question basis. Other, more commercial products, such as the online book resource for Physics by Wilson and Buffa (Prentice-Hall) and the Brownstone software, do not offer selective assembly of quizzes and tests, resorting instead to large exam sets from which the user may encounter a variety of practice options. In the case of courses in mathematics and physics, it would be desirable for the questions themselves to be adaptive; that is, the values and variables used in formulas within the exam should adapt based on student and domain models. This way, the size of question databases would be greatly reduced, as the questions become learning objects in their own right. We have produced the first prototype of such a system. In this paper, we present its internal mechanics and show some preliminary results, as well as a list of objectives for future work. © 2007 IEEE.
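
The idea of questions whose numeric values adapt to a student model can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the function name, the single ability score in [0, 1], and the kinematics template are all assumptions introduced for clarity.

```python
import random

def generate_question(ability, rng=None):
    """Instantiate a parameterized kinematics question whose numbers
    scale with a (hypothetical) student-model ability score in [0, 1]."""
    rng = rng or random.Random()
    if ability < 0.5:
        # Weaker students get round, easy-to-multiply values.
        v = rng.choice([10, 20, 30])   # initial velocity, m/s
        t = rng.choice([2, 5, 10])     # elapsed time, s
    else:
        # Stronger students get awkward decimal values.
        v = round(rng.uniform(7.3, 42.9), 1)
        t = round(rng.uniform(1.7, 9.4), 1)
    answer = v * t  # distance = velocity * time (uniform motion)
    text = f"A body moves at {v} m/s for {t} s. What distance does it cover?"
    return text, answer
```

A single template like this replaces many fixed question entries: the same learning object yields easy or hard instances depending on the student model, which is the database-size reduction the abstract describes.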

Publication date

  • December 1, 2007