- Research Article
- 10.33902/jpsp.202629580
- Mar 23, 2026
- Journal of Pedagogical Sociology and Psychology
- Marina Sounoglou + 2 more
- Research Article
- 10.33902/jpsp.202641639
- Mar 22, 2026
- Journal of Pedagogical Sociology and Psychology
- John Mark R Asio
The landscape of language education is changing rapidly as technologies give learners greater leverage. This investigation examines the relationships among educators' digital competence and students' academic integrity, academic self-efficacy, and academic commitment, and tests the mediating roles of academic self-efficacy and academic commitment. Using a quantitative, cross-sectional design with parallel mediation analysis (PROCESS Model 4), the study surveyed 334 voluntary participants, recruited through purposive sampling from a higher education institution in the Philippines, with a structured research instrument. Data were gathered during the first semester of the 2025-2026 academic year. Statistical analysis comprised descriptive and inferential statistics in SPSS 23, including the PROCESS macro mediation models. Results show that educators have high digital competence, while students report very high academic integrity together with high academic self-efficacy and academic commitment. Moderate to high correlations among these variables suggest interconnectedness within the educational context. The study also finds significant indirect effects of digital competence on academic integrity via students' self-efficacy and commitment, underscoring the pivotal roles of students' beliefs and dedication in shaping ethical behavior. These results emphasize the importance of fostering educators' digital skills and nurturing students' self-belief and commitment to uphold academic integrity in language education, promoting a learning environment conducive to academic success and ethical conduct.
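The parallel mediation design the abstract describes (PROCESS Model 4) decomposes the effect of a predictor X on an outcome Y into a direct path and indirect paths through two mediators. A minimal sketch of that decomposition, using ordinary least squares on synthetic illustrative data (not the study's data; the coefficient values and variable names are assumptions for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 334  # sample size matching the study

# Synthetic stand-ins (illustrative only):
# X  = educators' digital competence
# M1 = students' academic self-efficacy
# M2 = students' academic commitment
# Y  = students' academic integrity
X = rng.normal(size=n)
M1 = 0.5 * X + rng.normal(size=n)                    # a1 path: X -> M1
M2 = 0.4 * X + rng.normal(size=n)                    # a2 path: X -> M2
Y = 0.2 * X + 0.3 * M1 + 0.3 * M2 + rng.normal(size=n)

def ols(y, *preds):
    """OLS coefficients (intercept first) via least squares."""
    design = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

a1 = ols(M1, X)[1]                # X -> M1
a2 = ols(M2, X)[1]                # X -> M2
coefs = ols(Y, X, M1, M2)         # Y on X, M1, M2 jointly
c_prime, b1, b2 = coefs[1], coefs[2], coefs[3]

indirect1 = a1 * b1               # indirect effect via self-efficacy
indirect2 = a2 * b2               # indirect effect via commitment
total = c_prime + indirect1 + indirect2
```

In practice PROCESS also bootstraps confidence intervals for the indirect effects; this sketch only shows the point-estimate decomposition (total effect = direct effect c' plus the two a×b products).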
- Research Article
- 10.33902/jpsp.202641122
- Feb 27, 2026
- Journal of Pedagogical Sociology and Psychology
- Kadir Kaplan + 2 more
- Research Article
- 10.33902/jpsp.202641097
- Feb 27, 2026
- Journal of Pedagogical Sociology and Psychology
- Ergün Yurtbakan + 2 more
- Research Article
- 10.33902/jpsp.202636830
- Jan 4, 2026
- Journal of Pedagogical Sociology and Psychology
- Cem Kurdal + 1 more
- Research Article
- 10.33902/jpsp.202638243
- Jan 4, 2026
- Journal of Pedagogical Sociology and Psychology
- Samet Türer + 1 more
- Research Article
- 10.33902/jpsp.202533190
- Dec 4, 2025
- Journal of Pedagogical Sociology and Psychology
- Ibrahim Ndelale + 3 more
- Research Article
- 10.33902/jpsp.202536789
- Oct 25, 2025
- Journal of Pedagogical Sociology and Psychology
- Tosin Adewumi + 7 more
We introduce a novel writing method called Probing Chain-of-Thought, which potentially prevents students from cheating with a large language model (LLM) while enhancing their critical thinking. LLMs have disrupted education and many other fields, and for fear of students cheating, many educators have resorted to banning their use. We conducted studies in two different courses with 65 students, using a primarily qualitative (phenomenological) research design alongside quantitative methods. The students in each course were asked to prompt an LLM of their choice with one question randomly drawn from a set of four, and were required to affirm or refute statements in the LLM's output using peer-reviewed references as evidence. In addition, the rubric for assessing the students' writing included five further criteria: focus, logic, content, style, and correctness. The average success rate of the students' writing on these criteria across the two cases is 79.49% (±12.82%). The rubric assessment shows two things: (1) Probing Chain-of-Thought stimulates students' critical thinking and writing through engagement with LLMs, when the LLM-only output is compared to the Probing Chain-of-Thought output; and (2) Probing Chain-of-Thought may prevent cheating because of clear limitations in the LLMs concerned, when students' Probing Chain-of-Thought output is compared to the LLMs' Probing Chain-of-Thought output. In the quantitative analysis, we also find that most students prefer to answer in fewer words than LLMs, which are typically verbose. The average word counts for students, ChatGPT 3.5, and Phind (v8) in the first course are 208, 391, and 383, respectively, while they are 405, 356, and 315 for students, ChatGPT 3.5, and BingAI in the second course, where we enforced a minimum word count of 300 for the students.
We provide access to the outputs for possible assessments (available after review).
- Research Article
- 10.33902/jpsp.202538505
- Oct 25, 2025
- Journal of Pedagogical Sociology and Psychology
- Ben Morris + 1 more
- Research Article
- 10.33902/jpsp.202535864
- Oct 16, 2025
- Journal of Pedagogical Sociology and Psychology
- Gibran A Garcia Mendoza + 2 more