Evaluating and validating ESP testing in a specific context: stakeholders' perspectives / Fahima Mohamed Bannur

The purpose of this study is to evaluate the validity of an existing English test in order to examine its potential and shortcomings in assessing engineering students' English ability. The test was built mainly to measure grammar and reading ability using recognition testing techniques. The focus on validating the ESP reading test arose from the urgent need of the University of Tripoli, Libya, as well as students' appeals for an improved English test: thousands of students from different departments at the Faculty of Engineering study English (ESL) as a compulsory course and must take the ESP test to continue their academic study at the faculty. The current method of providing the English test to these students presents the university with problems of test design, construction, content, efficiency, reliability and validity. These are significant aspects of any validation process, and to date they have not been addressed formally at the university. To address them, a framework for validating a reading test (Weir, 2005) was adopted throughout the study. The framework is instructive and comprehensive in nature: it has five components and various parameters that support a meaningful validation process at all stages of the test event, a priori, during, and a posteriori. The framework was operationalized so that data collection and analysis followed its validity elements, and all findings were reported systematically. The study involved three phases: a validation study of the Existing English Test (T1); the development, administration and validation of a Sample Proposed ESP Test (T2); and a comparative analysis of the two tests. Data gathered from the main validation study point to deficiencies in the existing test concerning test specifications, test format and content, test construction, the rating process, and other administrative and evaluative issues. These issues were addressed in the sample proposed test (T2). The comparative validity report on the two ESP tests addressed the question of whether the alternative test fulfills, to some extent, the requirements of a valid test as well as students' needs for academic study and their future careers. Recommendations were made for using systematic frameworks that incorporate validity parameters, such as that proposed by Weir (2005), to validate and improve language tests, with further validation to be conducted subsequently.


Bibliographic Details
Main Author: Mohamed Bannur, Fahima
Format: Thesis (PhD)
Language: English
Published: Universiti Teknologi MARA, 2016
Online Access: http://ir.uitm.edu.my/id/eprint/18559/
http://ir.uitm.edu.my/id/eprint/18559/1/TP_FAHIMA%20MOHAMED%20BANNUR%20APB%2016_5.pdf