Automated Essay Evaluation, popularly known as Automated Essay Scoring (AES), is a form of educational assessment. It uses specialized computer programs, typically built on Natural Language Processing (NLP), to assign grades to essays written in educational settings or for other evaluation processes.
The basic idea behind this approach is to classify a large set of textual entities into a small number of discrete categories corresponding to the possible grades.
There are numerous factors behind the growing popularity of AES, including cost, accountability, quality, accuracy, and consistency, along with the availability of advanced technology. Rising education costs and expectations of high standards in the evaluation process have created pressure to hold the educational system accountable for results. AES is one such measure: it aims to raise educational standards at reduced cost and higher speed, with no compromise on quality.
The efficiency of automated essay scoring (AES) holds a strong relevance to higher education institutions that are considering using standardized writing tests graded by AES for selection or exit assessment.
The quality of an essay is affected by the following primary dimensions:
- Topic relevance
- Grammar
- Mechanics – including spelling, punctuation, capitalization, etc.
- Word usage and writing style
- Sentence complexity
- Organization and coherence – essay structure and appropriate transition
- Ideas and examples
- Tone and persuasiveness
- Thesis clarity
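Several of these dimensions can be approximated with simple, computable proxies. The sketch below is a minimal, stdlib-only illustration; the feature names and the mapping from features to dimensions are illustrative assumptions, not part of any particular AES product:

```python
import re
from statistics import mean, pstdev

def essay_features(text):
    """Compute rough proxies for a few essay-quality dimensions."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    sent_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "word_count": len(words),                          # length / development
        "type_token_ratio": len(set(words)) / len(words),  # word-usage variety
        "avg_sentence_length": mean(sent_lengths),         # sentence complexity
        "sentence_length_sd": pstdev(sent_lengths),        # sentence variety
    }

feats = essay_features("Dogs bark. Cats, by contrast, usually meow softly at night.")
```

Real systems extract hundreds of such features, including parser-based measures of grammar and coherence; the point here is only that each dimension must ultimately be reduced to something measurable.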
Why switch to AES?
Examination and evaluation are key areas of any educational setup. Given the number of applicants applying for a course, manual scoring has become an outdated approach, and the manual evaluation process has many shortcomings and challenges. Spelling, grammar, and punctuation are relatively straightforward for teachers to score because errors can be easily tracked and counted, but content, organization, coherence, relevance, and sophistication are more difficult to score impartially in a manual process. Different evaluators might score the same essay differently because of subjective impressions of quality and varying levels of concentration and fatigue.
Key challenges of manual essay scoring
- Large requirement for experienced and trained staff – Manual essay scoring has a huge dependency on the quality and experience of evaluators.
- Time-consuming and slow – Manually scoring each essay is a very time-consuming process. Applicants have to wait a considerable amount of time to see the outcome, which makes the entire evaluation and scoring cycle slow and long.
- Inconsistent outcome and prone to errors – Human or manual evaluation is bound to be inconsistent in terms of the outcome and also has a high probability of error.
- Administrative and logistic issues – The manual process requires physical handling of essays and answer sheets, thereby making it very tedious. Carried out manually, the evaluation process also leads to many administrative and logistic issues, such as transporting exam papers, masking names, data entry, storage, and record-keeping.
- High cost – Manual essay scoring is cost-intensive, as it involves the high cost of professional, qualified evaluators along with other administrative costs.
Keeping the above challenges in mind, and for the sake of rationality, fairness, and reliability many institutions have switched to Automated Essay Scoring Solutions.
Outcomes of AES solutions are unbiased, error-free, and repeatable, even when irrelevant external factors change.
Automated Essay Scoring using NLP
NLP technology is the foundation for the Automated Scoring Applications, also known as Automated Essay Scoring Applications. NLP applies principles of linguistics and computer science to create computer applications that interact with human language.
These applications are developed to address the increasing demand for open-ended or constructed-response test questions, which prompt responses such as extended writing (e.g., essays and long-format answers), shorter written responses to subject-matter items, and spontaneous speech.
Methodology
An AES solution uses a computer program that builds a scoring model by extracting linguistic features from constructed responses that have been pre-scored by human raters. A machine learning algorithm then maps those linguistic features to the human scores, so that the resulting model can classify (i.e., score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement.
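As a toy illustration of the mapping step, one could fit an ordinary least-squares line from a single extracted feature (word count) to the human scores. The training pairs below are made up, and real systems use many features and stronger learners, so treat this purely as a sketch of the idea:

```python
from statistics import mean

# Hypothetical pre-scored training essays: (word_count, human_score on a 1-5 rubric)
train = [(120, 2), (250, 3), (400, 4), (520, 5)]

xs = [x for x, _ in train]
ys = [y for _, y in train]
x_bar, y_bar = mean(xs), mean(ys)

# Closed-form OLS for one feature: slope = cov(x, y) / var(x)
slope = sum((x - x_bar) * (y - y_bar) for x, y in train) / sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar

def predict(word_count):
    """Score a new essay from its word count; clamp to the 1-5 rubric."""
    return max(1, min(5, round(intercept + slope * word_count)))
```

Once the coefficients are learned, scoring a new response is just a feature extraction followed by this cheap prediction, which is why AES scales so well.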
Results
Automated essay scoring offers a method for scoring constructed-response tests that complements the current use of selected-response testing in the assessment. The method can support evaluators by providing the summative scores required for high-stakes testing. It can also support students by providing them with detailed feedback as part of a formative assessment process.
Key points to consider for Automated Scoring Systems –
- Automated scores are consistent with the scores from expert human evaluators/graders
- The way automated scores are produced is understandable and substantively meaningful
- Automated scores are fair
- Automated scores have been validated against external measures in the same way as is done with human scoring
- The impact of automated scoring on reported scores is understood
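Consistency with expert human scores is commonly reported using quadratic weighted kappa (QWK), which penalizes disagreements by the squared distance between the two ratings. A compact stdlib implementation (the sample ratings are made up):

```python
def quadratic_weighted_kappa(human, machine):
    """QWK between two raters; 1.0 is perfect agreement, 0 is chance-level."""
    n = len(human)
    # Observed disagreement, weighted by squared score distance
    observed = sum((h - m) ** 2 for h, m in zip(human, machine)) / n
    # Expected disagreement if the two raters' scores were paired at random
    expected = sum((h - m) ** 2 for h in human for m in machine) / (n * n)
    return 1 - observed / expected

human_scores   = [1, 2, 3, 4, 5, 3]
machine_scores = [1, 2, 3, 4, 5, 3]
qwk = quadratic_weighted_kappa(human_scores, machine_scores)
```

A QWK near the human-human agreement level is the usual acceptance bar for deploying an automated scorer.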
Factors being considered for scoring –
- Syntax Analysis (including grammar check, capitalization, punctuation, etc.)
- Sentiment Analysis
- Entity Analysis
- Entity Sentiment Analysis
- Text Classification
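Several of these analyses can be prototyped with simple lexicon or keyword techniques before reaching for full NLP pipelines. The toy sentiment-analysis sketch below uses a tiny hand-made word list, which is an illustrative assumption, not a real sentiment resource:

```python
import re

# Tiny hand-made lexicon; production systems use large curated resources or models
POSITIVE = {"good", "clear", "strong", "persuasive", "excellent"}
NEGATIVE = {"bad", "weak", "unclear", "confusing", "poor"}

def sentiment_score(text):
    """Return (positive - negative) word counts: a crude polarity signal."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
```

The same counting pattern extends to entity mentions or topic keywords, which is how simple relevance and classification signals can be bootstrapped.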
How Automated Essay Scoring works
An AES system works by extracting features such as word count, vocabulary choice, error density, sentiment strength, sentence length variance, and paragraph structure from high-scoring essays to create a statistical model of essay quality. Comparing a student's essay to that statistical model allows the system to estimate a score in two seconds or less.
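One way to read that comparison step: compute each feature's mean and spread over the high-scoring essays, then measure how far a new essay's features fall from that profile. A stdlib sketch with made-up feature values:

```python
from statistics import mean, pstdev

# Hypothetical (word_count, avg_sentence_length) vectors from high-scoring essays
high_scoring = [(400, 18.0), (450, 20.0), (500, 22.0)]

# The "statistical model": per-feature mean and standard deviation
means = [mean(col) for col in zip(*high_scoring)]
sds = [pstdev(col) for col in zip(*high_scoring)]

def distance_from_model(features):
    """Mean absolute z-score: 0.0 means the essay matches the profile exactly."""
    return mean(abs(f - m) / s for f, m, s in zip(features, means, sds))
```

A lower distance maps to a higher estimated score; production systems replace this with regression or classification over many more features, but the compare-against-a-fitted-profile idea is the same.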
Benefits of Automated Essay Scoring (AES)
- High Productivity – AES can score far more essays per unit time than human evaluators, and with a higher degree of reproducibility.
- Cost-Efficient – Efficiency gains include cost reduction and faster results. An AES system can evaluate a large number of essays more effectively than human raters.
- Steady and Unbiased – The AES system is steadier than human raters, who can be affected by tiredness, distraction, prejudice, etc., while scoring.
- Consistent Scoring – The AES system will always have consistent criteria in the essay evaluation.
- Reliable and High-Quality Result – If the AES system is trained on high-quality training material (essays and their scores), it will maintain a reliably high quality of output.
- Handles Large Volumes Easily – The larger the volume of essays, the more cost-efficient the AES system becomes, because its initial cost is fixed.
- Scoring Consistency – AES grades each essay based on its own merits, and similar papers will receive the same grade. Computer-scored essays are not subject to human bias and subjectivity.
- Instant Feedback – AES provides instant feedback to students during writing exercises. Some AES systems provide a Web interface where students write the essay and receive immediate feedback in different areas. More practice and more feedback help improve writing skills.
Conclusion
We cannot discount the valuable acceleration of feedback and reduction of workload for instructors and raters with the use of AES. Every profession in every domain embraces some degree of automation; essay scoring is no different.
Get in touch with experts at AgreeYa if you are looking for a solution around Automated Essay Scoring. Feel free to write an email to marketing@staging.agreeya.com for any queries you might have.