An approach to evaluating an online educational technology application

This week in my educational technology class we are studying methods for evaluating online educational technologies. Our assignment asks us to review our recent proposals for the development of an educational technology application and to reflect on how we would go about evaluating the tool.

Dabbagh and Bannan-Ritland suggest that instructional designers adopt a four-step evaluation process:

“1. Clearly determine the purpose, desired results, and methods of evaluation.

2. Formatively evaluate the design and development prior to launching the online course

3. Revise the online materials according to the results of the formative evaluation

4. Implement the online learning experience and evaluate the results according to the identified goals.” (Dabbagh & Bannan-Ritland, 2005, p. 235)

In my experience, effective multimedia development of small-scale learning tools, such as the interactive physics lab, can be accomplished using informal evaluation at various stages of production. In most cases the purpose of an interactive multimedia application is to engage the learner while fulfilling instructional objectives.

For the interactive physics lab I described in my last post, I would begin development by creating a design profile based on specifics provided by the client/instructor. From the profile, a basic prototype would be designed. This prototype might be implemented using pen and paper to simulate the basic interactions a student would encounter. In collaboration with the instructor, we would review this basic model to determine strengths and weaknesses in the design. If we identified opportunities to improve the student-simulation interaction, we would document a list of desired characteristics and design patterns.

As development of the module continued, we might have peer instructors review the prototype and provide feedback and suggestions on how to improve the project. After another phase of revisions, a preliminary electronic interactive version would be created. This prototype would be demonstrated to and used by real students. The developers and instructors would observe the students using the tool to determine their level of engagement with the simulation. The development team could pose informal questions to the learners to gauge whether the prototype supported acquisition of the learning objectives. Depending on the results of this review, development of the prototype would continue or go ‘back to the drawing board’. Features suggested by the student reviewers might be included to increase the potential for higher engagement when the tool is used in a real classroom setting.
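To give a sense of what a preliminary electronic version might look like, here is a minimal sketch of one possible simulation activity. The projectile-motion scenario, the function name, and the parameter values are my own illustrative assumptions, not part of the actual lab design.

    import math

    def simulate_projectile(speed_m_s, angle_deg, dt=0.01, g=9.81):
        """Return the (x, y) points of a projectile's flight until it lands."""
        vx = speed_m_s * math.cos(math.radians(angle_deg))
        vy = speed_m_s * math.sin(math.radians(angle_deg))
        x, y = 0.0, 0.0
        points = [(x, y)]
        while y >= 0.0:
            x += vx * dt
            vy -= g * dt
            y += vy * dt
            points.append((x, max(y, 0.0)))
        return points

    # The student adjusts the launch parameters and immediately sees the result,
    # the same interaction we would first mock up on paper.
    trajectory = simulate_projectile(speed_m_s=20.0, angle_deg=45.0)
    print("Range: %.1f m over %d time steps" % (trajectory[-1][0], len(trajectory)))

Even something this small would give student testers a concrete interaction to react to during the informal review sessions.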

This type of evaluation cycle would continue until the learning tool was ready for final release. If continued development after the final release were possible, it would be important to include a feedback mechanism that lets the instructors and students who use the tool in their studies communicate with the development team. An open communication channel with the end users would provide an opportunity to measure the ongoing effectiveness of the tool and to consider necessary adjustments.
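As one illustration of such a feedback mechanism, the sketch below records structured comments from instructors and students in a simple log the development team could review. The field names and file format are assumptions made for the example, not a prescribed design.

    import csv
    from datetime import date

    def record_feedback(path, role, rating, comment):
        """Append one piece of end-user feedback to a CSV log for the team to review."""
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), role, rating, comment])

    # Example: a student reports that part of the simulation was hard to use.
    record_feedback("physics_lab_feedback.csv", role="student", rating=2,
                    comment="The velocity slider was hard to find.")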
