Automated feedback can improve hypothesis quality
Publication year
2019
Number of pages
17 p.
Source
Frontiers in Education, 3, (2019), article 116
ISSN
Publication type
Article / Letter to editor

Organization
SW OZ BSI OLO
Journal title
Frontiers in Education
Volume
vol. 3
Languages used
English (eng)
Subject
Learning and Plasticity
Abstract
Stating a hypothesis is one of the central processes in inquiry learning, and often forms the starting point of the inquiry process. We designed, implemented and evaluated an automated parsing and feedback system that informed students about the quality of hypotheses they had created in an online tool, the hypothesis scratchpad. In two pilot studies in different domains ('supply and demand' from economics and 'electrical circuits' from physics) we determined the parser's accuracy by comparing its judgments with those of human experts. A satisfactory to high accuracy was reached. In the main study (in the 'electrical circuits' domain), students were assigned to one of two conditions: no feedback (control) and automated feedback. We found that the subset of students in the experimental condition who asked for automated feedback on their hypotheses were much more likely to create a syntactically correct hypothesis than students in either condition who did not ask for feedback.