School of Education
Permanent URI for this community: http://10.10.97.169:4000/handle/123456789/4209
Browsing School of Education by Author "Mayeka, James George"
Now showing 1 - 2 of 2
Item Massification in Universities: Are assessment tools still reliable? A reflection from Sokoine University of Agriculture, Tanzania (IISTE, 2021) Mayeka, James George; Kira, Ernest Simon

1. Introduction
A tremendous increase in the number of students in universities has been experienced by almost every country in the world. While global university enrolment rose from 13.8% in 1990 to 29% in 2010, Sub-Saharan Africa experienced a doubling of gross enrolment ratios, from 3% in 1990 to 7% in 2010 (Hornsby & Osman, 2014). In Tanzania, the situation has become more evident in the recent past (Kapinga & Amani, 2016). According to Memba & Feng (2016), student enrolment in Tanzanian universities increased from 98,915 in the 2008/2009 academic year to 354,430 in 2015/2016. Sokoine University of Agriculture, one of the public universities in Tanzania, was established on 1 July 1984 (Sokoine University of Agriculture, 2007). Since its establishment, the university has also experienced a massive increase in the number of students, just like other universities in the country. For example, the number of students rose almost fourfold, from 2,729 in the 2008/2009 academic year to 8,296 in 2016/2017.

Following this increase in the number of university students, the instructor-student ratio has been greatly affected, leading to ineffective provision of quality teaching and student assessment (Ntim, 2016). Large classes in educational institutions strongly affect interaction between instructors and students. Increases in student numbers lead to poor communication between instructors and their students and undermine the general practice of designing and using appropriate assessment tools (Alomari & Akour, 2014). Large classes also hinder instructors from organizing quizzes and regular class tests, resulting in inefficient assessment of the teaching and learning process (Yelkpieri, Namale, Esia-Donkoh & Ofosu-Dwamena, 2012). The increase in the number of students has thus changed the normal way of conducting student assessment in universities. Regardless of the increasing numbers, universities wish to maintain the quality of the programmes they offer. One means of maintaining the quality of training is effective evaluation of the teaching and learning process, and effective evaluation requires valid and reliable assessment tools. Therefore, checking the internal consistency of the assessment tools used for teaching and learning in Tanzanian universities is an important aspect of effective assessment.

Item Massification in universities: are assessment tools still reliable? a reflection from Sokoine University of Agriculture, Tanzania (Journal of Education and Practice, 2021-08-31) Mayeka, James George; Kira, Ernest Simon

A tremendous increase in the number of students in universities has been experienced by almost every country in the world, including Tanzania. The increasing number of students has greatly affected instructors' workload and the general practice of student assessment and evaluation. This study aimed at determining the reliability of the assessment tools at Sokoine University of Agriculture. A retrospective record review was done on undergraduate education students who sat for EDP 100 in the 2014/2015, 2015/2016 and 2017/2018 academic years; the course was selected through random procedures. A total of 214 scripts were systematically randomly sampled from each cohort.
The results revealed a drop in the internal consistency of the scores obtained from the EDP 100 course across the three cohorts. Although the majority of the EDP 100 questions were of moderate difficulty, their discrimination power was poor. However, the variation in the difficulty and discrimination indices across the three cohorts was not statistically significant (p > 0.05 for MCQ and MIQ), except for the discrimination index of MIQ, which showed significant variation (p < 0.05). It is therefore recommended that similar studies be conducted to determine both the validity and reliability of the assessment tools for other subjects at the University.
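For readers unfamiliar with the item statistics mentioned in the abstract, the sketch below illustrates how item difficulty, the upper/lower-group discrimination index, and an internal-consistency estimate (KR-20) are commonly computed for dichotomously scored exam items. This is a minimal illustration of the standard classical test theory formulas, not the authors' actual analysis; the simulated data, 27% group cut-off, and variable names are assumptions for demonstration only.

```python
import numpy as np


def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Proportion of examinees answering each item correctly (rows = scripts, columns = items)."""
    return scores.mean(axis=0)


def discrimination_index(scores: np.ndarray, group_frac: float = 0.27) -> np.ndarray:
    """Difference in item difficulty between the top- and bottom-scoring groups."""
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    n_group = max(1, int(round(group_frac * len(totals))))
    low, high = scores[order[:n_group]], scores[order[-n_group:]]
    return high.mean(axis=0) - low.mean(axis=0)


def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20 estimate of internal consistency for 0/1 items."""
    k = scores.shape[1]
    p = scores.mean(axis=0)          # item difficulties
    q = 1 - p
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)


if __name__ == "__main__":
    # Simulated 214 scripts x 40 dichotomous items (illustrative only; not study data).
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(214, 1))
    difficulty = rng.normal(size=(1, 40))
    scores = (ability - difficulty + rng.normal(size=(214, 40)) > 0).astype(int)

    print("difficulty (first 5 items):", np.round(item_difficulty(scores)[:5], 2))
    print("discrimination (first 5 items):", np.round(discrimination_index(scores)[:5], 2))
    print("KR-20:", round(kr20(scores), 2))
```

Under these conventions, difficulty values near 0.5 indicate moderately difficult items, discrimination indices below roughly 0.2 are considered poor, and a drop in KR-20 across cohorts would correspond to the decline in internal consistency the study reports.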