Description
Summary: Bias is a very real issue in most monitoring and evaluation work on information and communication technologies (ICT) in education. Such biases are often introduced at the monitoring and evaluation design stage, and include a lack of relevant and appropriate control groups, biases on the part of 'independent evaluators' (who often have a stake in seeing positive outcomes), and biases on the part of those evaluated (who may understandably seek to show that they have made good use of investments in ICTs to benefit education). The opportunity for such biases (which are usually positive) is especially acute where there is great reliance on self-reported data. There appears to be a lack of institutional and human resource capacity among local organizations in least developed countries (LDCs) to carry out independent evaluations of ICT in education initiatives, which increases the cost of such activities and potentially decreases the likelihood that results will be fed back into program design locally. A general lack of formal monitoring and evaluation activities inhibits the collection and dissemination of lessons learned from pilot projects and the formation of the feedback loops needed for such lessons to become an input into educational policy. Where such activities have occurred, they focus largely on program delivery and are often specific to the project itself.