
Journal Update | SSCI Journal Assessing Writing, Volumes 52-54 (2022)


Assessing Writing

Volumes 52-54, 2022

Assessing Writing (SSCI Q1, 2021 Impact Factor: 3.164) published 38 papers in Volumes 52-54 (2022): 31 research articles, 4 book reviews, and 3 Tools and Tech contributions. The research articles cover a wide range of writing task types, for example the trajectory of syntactic complexity development in primary school children's L1 Chinese narrative writing, and also address L2 writing, with strong implications for the teaching of writing. You are welcome to share this digest! (Coverage of the 2022 volumes is now complete.)

Previously recommended:

Journal Update | SSCI Journal Assessing Writing, Volume 51 (2022)

Contents


ARTICLES

■Individualized feedback to raters in language assessment: Impacts on rater effects, by Jing Huang, Gaowei Chen.

■The trajectory of syntactic complexity development in L1 Chinese narrative writings of primary school children: A systematic 5-year longitudinal study, by Yunlong Huang, Juan Wu, Meng Li, Yang Yang.

■ The mediating effects of student beliefs on engagement with written feedback in preparation for high-stakes English writing assessment, by William S. Pearson.

■ Exploring the impact of teacher feedback modes and features on students' text revisions in writing, by Mohammed Abdullah Alharbi.

■ Validating a rubric for assessing integrated writing in an EAP context, by Pakize Uludag, Kim McDonough.

■Automated writing evaluation: Does spelling and grammar feedback support high-quality writing and revision? by Kathryn S. McCarthy, Rod D. Roscoe, Laura K. Allen, Aaron D. Likens, Danielle S. McNamara.

■Dynamic assessment of argumentative writing: Mediating task response, by Ali Kushki, Mohammad Rahimi, Kristin J. Davin.

■Assessing linguistic complexity features in L2 writing: Understanding effects of topic familiarity and strategic planning within the realm of task readiness, by Mahmoud Abdi Tabari, Yizhou Wang.

■It takes two to tango: Investigating teacher-student interactions related to written corrective feedback with Activity Theory, by Yiming Liu, Neomy Storch, Janne Morton.

■How feedback conditions broaden or constrain knowledge and perceptions about improvement in L2 writing: A 12-week exploratory study, by Ali Rastgou.

■Implementing continuous assessment in an academic English writing course: An exploratory study, by Maggie Ma, Gavin Bui.

■Reconceptualizing the impact of feedback in second language writing: A multidimensional perspective, by Chunhong Liu, Shulin Yu.

■Assessing L2 student writing feedback literacy: A scale development and validation study, by Shulin Yu, Emily Di Zhang, Chunhong Liu.

■Explicit strategy-based instruction in L2 writing contexts: A perspective of self-regulated learning and formative assessment, by Lin Sophie Teng.

■Diagnosing EFL undergraduates’ discourse competence in academic writing, by Yumin Wang, Qin Xie.

■Unpacking the contribution of linguistic features to graph writing quality: An analytic scoring approach, by Seyyed Ehsan Golparvar, Hamideh Abolhasani.

■Developing and validating an analytic rating scale for a paraphrase task, by Sen Liu, Dunlai Lin.

■Perceptions of authorial voice: Why discrepancies exist, by Cecilia Guanfang Zhao, Jincheng Wu.

■Investigating whether a flemma count is a more distinctive measurement of lexical diversity, by Thwin Myint Myint Maw, Jon Clenton, George Higginbotham.

■Diversity of Advanced Sentence Structures (DASS) in writing predicts argumentative writing quality and receptive academic language skills of fifth-to-eighth grade students, by Ziyun Deng, Paola Uccelli, Catherine Snow.

■Integrated writing across conflicting L2 texts: The effects of prior beliefs on perspectivism and propositional content, by Mohammad N. Karimi, Tahereh Mousavi-Moghadam.

■Examining the social consequences of a locally-developed placement test using test takers’ attitudes, by Elizabeth Lee.

■Integrated writing and its correlates: A meta-analysis, by Sathena Chan, Junko Yamashita.

■Exploring Chinese EFL undergraduates’ writing from sources: Self-efficacy and performance, by Qin Zhou, Liang Chen, Xianrong Hou.

■Examining evaluative language used in assessment feedback on business students’ academic writing, by Prithvi N. Shrestha.

■Assessing L2 integrated writing self-efficacy: Development and validation of a scale, by Peixin Zhang, Jason Fan, Wenfeng Jia.

■Assessing pragmatic performance in advanced L2 academic writing through the lens of local grammars: A case study of ‘exemplification’, by Hang Su, Xiaofei Lu.

■Validity evidences for scoring procedures of a writing assessment task. A case study on consistency, reliability, unidimensionality and prediction accuracy, by Paula Elosua.

■Using unfolding models to identify targeted feedback strategies for student writing, by Ye Yuan, George Engelhard, Kevin Raczynski, Jue Wang.

■(In)Equities in directed self-placement, by Theresa Tinkle, Jason Godfrey, Anil Ramachandran Menon, Andrew Moos, Laura Romaine, Michelle Sprouse.

■Transfer of ideal L1 and L2 writing selves and their impacts on L2 writing enjoyment and integrated writing performance, by Xinhua Zhu, Yi Guan, Yuan Yao.

■Complexity, accuracy, and fluency in L2 writing across proficiency levels: A matter of L1 background? by Vo Dinh Phuoc, Jessie S. Barrot.

■Structure and coherence as challenges in composition: A study of assessing less proficient EFL writers’ text quality, by Lea Siekmann, Judy M. Parr, Vera Busse.


BOOK REVIEWS

■Academic Writing for International Students of Science, Jane Bottomley. Routledge, New York, NY (2022), 220 pp., GBP 120.00, by Barbara Wing Yee Siu, Muhammad Afzaal.

■Understanding Development and Proficiency in Writing: Quantitative Corpus Linguistic Approaches, Philip Durrant, Mark Brenchley, Lee McCallum. Cambridge University Press, United Kingdom (2021), ix + 244 pp., by Hua Shang, Jessie S. Barrot.

■Fairness, Justice, and Language Assessment, T. McNamara, U. Knoch, J. Fan (Eds.). Oxford University Press (2019), 215 pp., £46.10, ISBN 978‐0‐19‐401708‐4, by Wen Xiong, Hongcheng Ma, Xun Yan.

■Assessing EFL Writing in the 21st Century Arab World: Revealing the Unknown, Abdelhamid Ahmed, Hassan Abouabdelkader (Eds.). Palgrave Macmillan, Switzerland (2018), xxiii + 315 pp., ISBN: 978‐3‐319‐64103‐4, by Xinhua Yuan, Liangping Wu.


TOOLS AND TECH

■Constructs of argumentative writing in assessment tools, by Kelly Hartwell, Laura Aull.

■Using chatbots to scaffold EFL students’ argumentative writing, by Kai Guo, Jian Wang, Samuel Kai Wah Chu.

■The persuasive essays for rating, selecting, and understanding argumentative and discourse elements (PERSUADE) corpus 1.0, by Scott A. Crossley, Perpetual Baffour, Yu Tian, Aigner Picou, Ulrich Boser.


Abstracts

Individualized feedback to raters in language assessment: Impacts on rater effects

Jing Huang, Department of Psychology, The Education University of Hong Kong, Hong Kong Special Administrative Region; Faculty of Education, The University of Hong Kong, Hong Kong Special Administrative Region

Gaowei Chen, Faculty of Education, The University of Hong Kong, Hong Kong Special Administrative Region

Abstract This study examined the impacts of individualized feedback on rater effects in performance-based language assessment. The raters were 93 native Chinese speakers without previous rating experience, and they were randomly assigned to one of the three treatment groups. The three groups of raters differed in the way that they received individualized feedback in a given period: (a) a control group receiving no feedback, (b) a single-feedback group receiving the feedback once, and (c) a double-feedback group receiving the feedback twice. The results indicated that individualized feedback significantly reduced rater severity/leniency and rater misfit. Specifically, raters from both the double-feedback and single-feedback groups exhibited lower levels of rater severity/leniency, compared with those from the control group. Raters from the double-feedback group showed lower levels of rater misfit than those from the single-feedback group. With regard to the retention of reduction in rater effects, individualized feedback was found to assist raters from the double-feedback group to maintain the reduction in rater severity/leniency. It also helped raters from the single-feedback group to maintain the reduction in rater misfit. These findings may shed light on the applications of individualized feedback in the designs of face-to-face and online rater training programs.


Key words Individualized feedback, Rater severity/leniency, Rater misfit, Rater bias, Writing assessment, Many-facet Rasch measurement
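
A quick note on the measurement model behind these rater-effect terms: many-facet Rasch measurement, named in the keywords, is commonly written in its rating-scale form as

$$\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k$$

where $P_{nijk}$ is the probability that examinee $n$ receives category $k$ rather than $k-1$ on task $i$ from rater $j$; $\theta_n$ is examinee ability, $\delta_i$ task difficulty, $\alpha_j$ rater severity (the effect the individualized feedback targets), and $\tau_k$ the threshold between adjacent categories. This is the standard textbook formulation, not necessarily the exact parameterization used in the study; rater misfit is typically diagnosed with infit/outfit mean-square statistics comparing observed ratings against model expectations.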


The trajectory of syntactic complexity development in L1 Chinese narrative writings of primary school children: A systematic 5-year longitudinal study

Yunlong Huang, Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China 

Juan Wu, Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China 

Meng Li, Institute of Education, Tsinghua University, Beijing, China

Yang Yang, Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China

Abstract The present study contributes a systematic 5-year longitudinal investigation of the syntactic complexity development patterns in the L1 narrative writings of mainland Chinese primary school students. By adopting the longitudinal systematic approach, we aim to: (1) depict the specific trajectories of syntactic complexity development in L1 Chinese writings, (2) investigate whether, how and to what extent students’ proficiency levels influence syntactic complexity development in terms of different indices, and (3) study the effectiveness of indices and possible relationships between them. Hierarchical linear and nonlinear models and pairwise comparisons were performed to depict both the overall developmental trajectory and the detailed statistical differences between grades. All the indices have demonstrated a general growth of syntactic complexity but with different growth patterns. Similar subordination development but different coordination development trajectories were found when comparing the present findings with those previously reported in English. Evidence suggests that students’ proficiency level significantly influenced four out of the five indices in different ways. Unexpected inconsistencies between the length- and density-based indices and a possible topic limitation effect were also reported. Implications for L1 and L2 Chinese writing assessment, acquisition and instruction were provided.


Key words Syntactic complexity, Developmental trajectory, L1 Chinese, Proficiency levels, Primary students


The mediating effects of student beliefs on engagement with written feedback in preparation for high-stakes English writing assessment

William S. Pearson, University of Exeter, St Luke's Campus, Heavitree Road, Exeter, UK

Abstract Research in L2 writing contexts has shown developing writers’ beliefs exert a powerful mediating effect on how they respond to written feedback. The mediating role of beliefs is magnified in preparation for high-stakes English writing assessment contexts, where tangible outcomes pivot on successful test performance. The present qualitative case study utilises data from semi-structured interviews to investigate how the beliefs of three self-directed IELTS preparation candidates mediated their affective, behavioural, and cognitive engagement with electronic teacher written feedback across three multi-draft Task 2 rehearsal essays. Utilising a metacognitive conceptual approach (Wenden, 1998), the study identified seven themes: 1) self-concept beliefs regulated engagement, 2) reliance on the expertise of a quality teacher, 3) engagement was mediated by individuals’ learning-to-write beliefs, 4) belief in comprehensive, critical written feedback, 5) feedback deemed transferable was more comprehensively engaged with, 6) entrenched test-taking strategy beliefs hindered engagement, and 7) supplementary self-directed learning activities were considered of limited value. The implications for practitioners of IELTS Writing preparation and the IELTS co-owners are discussed.


Key words Written feedback, Student beliefs, Writing for test preparation purposes, IELTS


Exploring the impact of teacher feedback modes and features on students' text revisions in writing

Mohammed Abdullah Alharbi, Department of English, College of Education, Majmaah University, Majmaah, Saudi Arabia

Abstract Whether and how feedback when provided in different modes affects students' text revisions continue to be important questions for research. Therefore, the present study investigates the quantity and quality of students' integration of teacher feedback in relation to its modes and features. The feedback was given through the oral/spoken mode in a face-to-face (FTF) classroom environment and in three digital modes: text, recorded audio and audio-visual modes. Feedback was classified into imperative, correction, question, suggestion, statement and combination. Data collected from the instructor's four modes of feedback and essays of 30 learners in a Saudi university were coded and analyzed. Findings show that students took up 83.52% of the teacher feedback provided to them with quality of integration measured at 68.46%. The quantity and quality of students' integration of teacher feedback varied across the four modes with the audio-visual feedback being most integrated in text revisions while the text feedback was least integrated. Results also show feedback features, including suggestions, questions and imperatives, were more integrated by students than others. Based on the findings, useful pedagogical and research implications are offered.


Key words Feedback modes, Feedback features, Students' integration of feedback, Computer mediated


Validating a rubric for assessing integrated writing in an EAP context

Pakize Uludag, Concordia University, Canada

Kim McDonough, Concordia University, Canada

Abstract Although researchers have argued for a mixed-method approach to rubric design and validation, such research is sparse in the area of L2 integrated writing. This article reports on the validation of an analytic rubric for assessing a classroom-based integrated writing test. Argumentative integrated essays (N = 48) written by EAP students at an English-medium Canadian university were rated by instructors (N = 10) with prior EAP teaching experience. Employing a mixed methods design, the quality of the rubric was established through many facet Rasch measurement and perceptions from the instructors elicited during semi-structured interviews. To further explore the rubric’s ability to differentiate among students, essays from three performance levels (low, average, high) were compared in terms of fluency, syntactic and lexical complexity, cohesion, and lexical diversity measures. Results have suggested the rubric can capture variation in student performance. Implications are discussed in terms of validation of assessment rubrics in localized assessment contexts.


Key words Rubric validation, EAP, Integrated writing assessment, Many-facet Rasch measurement


Automated writing evaluation: Does spelling and grammar feedback support high-quality writing and revision?

Kathryn S. McCarthy, Georgia State University, Atlanta, GA, USA

Rod D. Roscoe, Arizona State University, Tempe, AZ, USA

Laura K. Allen, University of New Hampshire, Durham, NH, USA

Aaron D. Likens, University of Nebraska at Omaha, Omaha, NE, USA

Danielle S. McNamara, Arizona State University, Tempe, AZ, USA

Abstract The benefits of writing strategy feedback are well established. This study examined the extent to which adding spelling and grammar checkers support writing and revision in comparison to providing writing strategy feedback alone. High school students (n = 119) wrote and revised six persuasive essays in Writing Pal, an automated writing evaluation and tutoring system. All participants received automated strategy feedback after writing the first draft of their essays. Half of the participants were also given access to spelling and grammar checkers while writing. Spelling and grammar feedback on its own had no effect on the quality of students’ first draft. Linear mixed effects models revealed improvements from initial draft to revision on most subscales. The addition of spelling and grammar feedback contributed small but significant gains after revision on five subscales (i.e., mechanics, word choice, voice, conclusion, and organization) but no other aspects of the students’ essays. Qualitative exploration of exemplar students’ revision moves revealed how students incorporated both strategy and spelling and grammar feedback into their revisions. Findings from this study demonstrate that strategy feedback with an opportunity to revise contributed to improved essay quality, but that spelling and grammar feedback provided modest, complementary benefits.


Key words Writing, Revision, AWE feedback, Writing strategies, Feedback uptake, Mechanics
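
For readers unfamiliar with the "linear mixed effects models" mentioned above, here is a minimal sketch of the model class (repeated drafts nested within students, with a random intercept per student) on simulated data. It illustrates the general approach only; the variable names and effect sizes are invented, and the authors' actual specification (six essays, multiple subscales, a feedback condition factor) is richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for student in range(40):
    ability = rng.normal(0, 0.5)          # student-level random intercept
    for essay in range(6):
        for draft in (0, 1):              # 0 = first draft, 1 = revision
            score = 3.0 + 0.4 * draft + ability + rng.normal(0, 0.3)
            rows.append({"student": student, "essay": essay,
                         "draft": draft, "score": score})
df = pd.DataFrame(rows)

# Random-intercept mixed model: does quality improve from draft to revision?
result = smf.mixedlm("score ~ draft", df, groups=df["student"]).fit()
print(result.summary())
```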


Dynamic assessment of argumentative writing: Mediating task response

Ali Kushki, Loyola University Chicago, School of Education, 820 N. Michigan Ave, Chicago, IL 60611, USA

Mohammad Rahimi, Shiraz University, Department of Foreign Languages and Linguistics, Eram Square, Eram Campus, Shiraz, Fars, 71348-14336, Iran

Kristin J. Davin, University of North Carolina at Charlotte, Cato College of Education, 9201 University City Blvd, Charlotte, NC, 28223, USA

Abstract This study used Vygotskian Sociocultural Theory and Dynamic Assessment to provide written commentary on essays by three English-as-a-foreign-language applicants for the International English Language Testing System, or IELTS. More specifically, it examined how asynchronous mediation tailored to the participants’ specific needs provided over multiple drafts of three essay prompts influenced their ability to formulate a relevant and well-supported argument, known as Task Response. Mediation consisted of three rounds of written commentary with increasing specificity about the location and nature of problematic areas. In total, 36 drafts – 12 per participant – were analyzed for evidence of qualitative and quantitative improvement. Qualitatively, participants responded in varying degrees to mediation across each essay, but all showed improvement. Quantitatively, all participants obtained higher scores on Task Response on the final essay as compared to the first. The study suggests that graduated written commentary is an effective mediation tool for instructors to assist L2 learners in improving argumentative writing.


Key words Sociocultural theory, Argumentative writing, Zone of proximal development, IELTS, Vygotsky


Assessing linguistic complexity features in L2 writing: Understanding effects of topic familiarity and strategic planning within the realm of task readiness

Mahmoud Abdi Tabari, Santa Clara University, Department of English, Vari Hall, Santa Clara, CA 95053, USA

Yizhou Wang, School of Languages and Linguistics, The University of Melbourne, Babel (Building 139), VIC 3010, Australia

Abstract Using Bui’s (2014) task-readiness framework, this study investigates the interface between topic familiarity as a task-internal factor and strategic planning as a task-external factor in the TBLT field and explores how the two forms of task preparedness affect L2 learners’ writing complexity in assessment tasks. To this end, 64 L2 learners enrolled in a two-month intensive EAP program at an American university were randomly divided into planning and no planning groups and asked to perform a familiar and an unfamiliar task determined via a survey at the onset of the study. The sequence of the tasks was counterbalanced. The participants’ writing performances were analyzed using 11 finer-grained measures of linguistic complexity gauging L2 written syntactic and lexical complexity. Results revealed that topic familiarity had positive and substantial effects on syntactic complexity and some effect on lexical complexity. Strategic planning also promoted syntactic complexity at the phrasal level, but it had limited effects on lexical complexity. Furthermore, interaction effects between topic familiarity and strategic planning showed increases in general complexity, syntactic structure similarity, verb variations, and semantically similar words. These results are discussed in terms of the task-readiness framework, and pedagogical implications of the findings are provided for L2 writing assessment and complexity measurement in the EAP classrooms.


Key words Task preparedness, Topic familiarity, Strategic planning, L2 complexity, EAP writing, Task-based assessment


It takes two to tango: Investigating teacher-student interactions related to written corrective feedback with Activity Theory

Yiming Liu, The University of Melbourne, Australia 

Neomy Storch, The University of Melbourne, Australia

Janne Morton, The University of Melbourne, Australia

Abstract While research on written corrective feedback (WCF) has flourished in the past two decades, not many studies have probed the contextual factors impacting on the effectiveness of WCF in-situ. Even fewer studies have focused in on how the teacher-student relationship may shape WCF and what factors in turn could shape that relationship. This study addresses this gap in research by investigating teacher-student interactions related to WCF over time in a private institute. The study used a case study design. Thematic analysis of interviews conducted with two teachers and four of their students as well as a careful analysis of students’ essays with teacher feedback revealed that the teacher-student relationship not only influenced the amount of teacher WCF, but also students’ tendency to seek clarification about feedback and the accuracy of their revision. Through the lens of Activity Theory, three factors were identified shaping the teacher-student relationship: goals alignment, expectation fulfillment and power relationships. Pedagogical implications are proposed based on these findings.


Key words Writing, Written corrective feedback, Teacher-student relationship, Activity Theory

How feedback conditions broaden or constrain knowledge and perceptions about improvement in L2 writing: A 12-week exploratory study

Ali Rastgou, Faculty of Humanities, Islamic Azad University, Shahrood Branch, Shahrood, Iran

Abstract The current study reports the qualitative findings of a mixed methods longitudinal study into the effectiveness of sustained teacher feedback on L2 writing development. The qualitative phase explored reasons for marked variations in insights and knowledge of EFL students regarding improvement in accuracy, content, organization and fluency in writing. Participants (n = 88) were divided into four groups who received corrective feedback (CF), rhetorical feedback, multilateral feedback, and no feedback during (re)writing several expository compositions. A substantial data corpus was elicited through open-ended questionnaires. While all groups highlighted CF benefits, only the CF recipients elaborated on various levels of accuracy gain. Likewise, although the majority in all groups perceived improvement in content and organization, the detailed, multifaceted explanations given only by the recipients of rhetorical feedback indicated their in-depth understanding of the two constructs. Engagement with rhetorical feedback also broadened recipients’ insights regarding fluency, as they differentiated writing pace from smooth flow of ideas, while their counterparts equated fluency with pace. The study showcases that rhetorical feedback reinforces power of elaboration, broadens metacognition and insights about the constructs addressed and allows recipients to develop valuable pedagogical writing strategies. The study highlights implications for L2 writing practitioners and researchers.


Key words Feedback literacy, Student perceptions, Written CF, Rhetorical and multilateral feedback, Content and organization, Writing fluency


Implementing continuous assessment in an academic English writing course: An exploratory study

Maggie Ma, Department of English, The Hang Seng University of Hong Kong, Hang Shin Link, Siu Lek Yuen, Shatin, NT, Hong Kong

Gavin Bui, Department of English, The Hang Seng University of Hong Kong, Hang Shin Link, Siu Lek Yuen, Shatin, NT, Hong Kong

Abstract Although continuous assessment is often used in different disciplines, its implementation in L2 writing is underexplored. Adopting the framework of learning-oriented assessment (Carless, 2015) and the concept of affordances (Gibson, 1986), this exploratory study examined a case study of how one teacher and her students perceived and shaped/utilized the motivational and learning affordances of regular low-stakes continuous assessment (CA) in a tertiary academic English writing course. Data were collected from classroom observation, teacher and student interviews, student writing, and pre- and post-course surveys. The findings revealed a complex picture of the differing degrees to which the teacher harmonized the assessment-of-learning (AoL) and assessment-for-learning (AfL) aspects of CA across the three dimensions of learning-oriented assessment based on the various ways she perceived and shaped its classroom affordances. The students’ perceptions and utilization of the learning affordances of CA largely converged with those that were perceived and shaped by the teacher. However, difference was found in how the motivational affordances of CA were perceived and shaped/utilized. A model of learning-oriented continuous assessment in L2 writing is proposed and pedagogical implications are discussed.


Key words Continuous assessment, Learning-oriented assessment, Affordances, L2 writing


Explicit strategy-based instruction in L2 writing contexts: A perspective of self-regulated learning and formative assessment

Lin Sophie Teng, Department of Linguistics, Zhejiang University, Hangzhou 310058, China

Abstract Recent studies have pinpointed the critical and optimal role of self-regulated learning (SRL) in strategy instruction in second/foreign language (L2) research. To provide empirical grounding for developing self-regulated, strategic learners, this study developed an integrated model of SRL strategy instruction which was integrated with formative assessment and a process-genre approach and then examined its effects on L2 writing quality and motivational beliefs. This was a quasi-experimental research design, in which the experimental group received a 16-week SRL strategy instruction while the comparison group received a regular writing course over the same time period. Pretest, posttest, and delayed posttest of writing performance were conducted to examine students’ changes in writing quality in terms of content, organization, vocabulary, language, and mechanics. A self-report questionnaire was administered to both groups in the pretest and the posttest to investigate the development of motivational beliefs (i.e., task value, intrinsic goal orientation, extrinsic goal orientation, control of learning beliefs, and self-efficacy of learning and performance). Results of mixed factorial ANOVAs revealed the experimental group outperformed the comparison group in the overall writing scores and the subscores (e.g., content, organization, vocabulary, and language) in the posttest and the delayed posttest. In addition, the intervention helped students develop a higher level of task value, extrinsic goal orientation, control of learning beliefs, and self-efficacy of learning and performance. Data collected from reflective journals showed that EFL students reported increased levels of self-regulated, strategic behavior as a result of engaging in classroom formative assessment activities.


Key words Self-regulated learning, Strategy instruction, Formative assessment, L2 writing, Motivational beliefs, Writing quality


Diagnosing EFL undergraduates’ discourse competence in academic writing

Yumin Wang, Department of Linguistics and Modern Language Studies, Faculty of Humanities, The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong, China 

Qin Xie, International Education College, Henan University, 85 Minglun Street, Kaifeng, Henan, China


Abstract Discourse competence is a critical component of academic writing, as it influences the coherence, cohesion, and overall structure of a text. This article reports on the development of a set of fine-grained diagnostic rubrics to assess the construct of discourse competence, which is operationalized as consisting of 10 features under five components, namely, topic building, global coherence, local coherence, logical connectives, and reader–writer interactions. The rubrics were applied to a sample of 108 essays written by EFL undergraduates majoring in business studies in a Sino-Australia joint degree programme in China. The study found that the rubrics could be used to generate detailed profiles of undergraduates’ relative strengths and weaknesses regarding the features of their discourse competence. The participants were stronger in using logical connectives but much weaker in topic building, global coherence, and reader–writer interactions. This study represents perhaps one of the first attempts to systematically specify the construct of discourse competence and to assess the construct at an appropriate grain size. As such, its findings can enrich our understanding of discourse competence. The study also contributes a useful and innovative instrument to the research into L2 writing assessment.


Key words Discourse competence, EFL writers, Academic writing, Diagnostic assessment


Unpacking the contribution of linguistic features to graph writing quality: An analytic scoring approach

Seyyed Ehsan Golparvar, University of Bojnord, Iran

Hamideh Abolhasani, University of Bojnord, Iran

Abstract Previous research on L2 writing quality has mostly focused on independent writing, and there is little empirical work on integrated writing quality. Furthermore, the majority of studies on L2 writing quality have adopted holistic scoring for writing assessment. Therefore, the purpose of this exploratory research was to investigate the contribution of linguistic features, i.e. measures of syntactic complexity, lexical complexity, cohesion, and linguistic accuracy, to scores of content, organization, and language use in graph writing, which is far less researched. Two hundred and forty-three undergraduate Iranian students described a graph with few visual chunks and a familiar topic. The findings of stepwise regression revealed that content nominals per clause and lexical diversity contributed to content and language use scores, linguistic accuracy could predict the scores of organization and language use, and contrastive and temporal connectives and temporal cohesion were predictive of the scores in all the three subscales. Moreover, some important discrepancies were observed between the raters’ scores to a subscale and its linguistic predictors. Implications for writing research, pedagogy, and assessment are offered.


Key words Writing quality, Graph writing, Syntactic complexity, Lexical diversity, Cohesion, Accuracy, Analytic scoring


Developing and validating an analytic rating scale for a paraphrase task

Sen Liu, Department of Linguistics, Zhejiang University, Hangzhou 310058, China 

Dunlai Lin, School of Foreign Languages and Literature, Beijing Normal University, Beijing 100875, China

Abstract Paraphrasing is used as a language activity to train students’ comprehensive language use. Although its positive role has been commonly recognized, there is a lack of research on the construct and rating scale for paraphrase tasks. This study developed a five-level analytic rating scale and applied the scale to score one hundred and forty-three examinees’ paraphrase responses. To validate the scale using an argument-based approach, the scores were analyzed by generalizability analysis and many-facets Rasch analysis, and the responses were coded by construct-related linguistic features. The results show that: (1) The analytic scores explained 35.5% of the variance, and the relative Coef_G met the confidence requirements for a rating scale validation study; (2) The task distinguished examinees’ performance into different levels, the rater severity was consistent, and the rating dimensions were relatively independent with significant differences; (3) Construct components are reflected in the examinees’ responses; (4) The rating scale is appropriate for score reporting and decision making, and has a positive effect on teaching and learning. The study will facilitate language teachers’ assessment of students’ performance and provision of effective feedback on paraphrase tasks.


Key words Paraphrase task, Analytic rating scale, Generalizability theory, Many-facets Rasch measurement, Argument-based approach to validation
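
As background on the "relative Coef_G" reported above: in generalizability theory, the relative generalizability coefficient for the simplest persons-by-raters ($p \times r$) design, a simpler design than this study's, shown here only to fix ideas, is

$$E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta}, \qquad \sigma^2_\delta = \frac{\sigma^2_{pr,e}}{n_r}$$

where $\sigma^2_p$ is the variance due to true differences among examinees, $\sigma^2_{pr,e}$ the person-by-rater interaction variance (confounded with residual error), and $n_r$ the number of raters averaged over. Values near 1 mean that relative (rank-ordering) decisions depend little on which raters happened to score.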


Perceptions of authorial voice: Why discrepancies exist

Cecilia Guanfang Zhao, Department of English, University of Macau, Macau

Jincheng Wu, Department of English, University of Macau, Macau

Abstract Although theoretical conceptions of voice vary, researchers now generally agree on its amalgamated and dialogical nature, highlighting the interplay among the reader, the writer, and the text (Canagarajah, 2015; Matsuda, 2015). While much research has investigated the elements of the writer and the text in voice construction, far less has examined voice reconstruction from readers’ perspectives. The current study therefore explores reader reconstruction of writer voice, focusing particularly on understanding the phenomenon of discrepant voice perceptions by different readers. Two raters double-rated 65 EFL essays, simulating the conventional writing assessment practice. Independent-samples t tests on various linguistic indices across essays that received consistent vs. inconsistent voice ratings were carried out to identify linguistic elements that might be sources of inconsistency in raters’ voice perceptions. Semi-structured rater interview was conducted to both triangulate quantitative findings and explore other potential sources of inconsistency. Results showed that most of the language features did not seem to be associated with discrepant voice perceptions, but raters’ differing perceptions of the effectiveness of certain language elements, essay structure, and idiosyncratic interpretations of certain evaluative criteria might lead to divergent reconstructions of voice. Implications were discussed to inform L2 writing assessment, pedagogy, and future research.


Key words Voice, Writing assessment, Second language writing, Rater differences


Investigating whether a flemma count is a more distinctive measurement of lexical diversity

Thwin Myint Myint Maw, Graduate School of Humanities and Social Sciences, Hiroshima University, Japan

Jon Clenton, Graduate School of Humanities and Social Sciences, Hiroshima University, Japan

George Higginbotham, Faculty of Social System Design, Eikei University of Hiroshima, Japan

Abstract Lexical diversity (LD) measures (i.e., vocabulary range deployed in a written or spoken sample) have been shown to predict L2 language proficiency. Treffers-Daller et al. (2018), however, suggest that the analysis unit influences LD measures’ predictability of language proficiency and highlight the greater impact of lemma count on LD measurement. Despite evidence of lemma count usefulness, no single LD study has empirically examined a flemma count. We therefore partially replicate Treffers-Daller et al. to explore potential flemma count influences on LD measure writing predictability, compared to simple and lemma counts. We analyzed 105 Chinese L2 learner IELTS essays, completed at a UK university. We computed LD scores for non-lemmatized, lemmatized and flemmatized texts using three basic LD measures (Types, TTR, Guiraud’s Index), and three sophisticated measures (D, MTLD, HD-D). Results suggest that both flemmatization and lemmatization influenced LD scores and measures. LD measure predictability is dependent on the analysis unit. All three basic measures and D were reliable writing indicators, based on flemma and lemma counts, whereas HD-D was a better writing predictor once simple and lemma counts were applied. However, MTLD failed to predict any writing level. We conclude that different analysis units have different influences on LD measures.


Key words Lexical diversity measure, Flemma, Lemma, L2 writing
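
To make the basic measures concrete, here is a minimal Python sketch of Types, TTR, and Guiraud’s Index under different counting units. The flemma table is a hypothetical toy: a flemma groups a base form with its inflections regardless of part of speech, whereas a lemma keeps parts of speech apart (noun "run" and verb "run" are two lemmas but one flemma). Real analyses use full lemmatizers and flemma lists, not a four-entry dictionary.

```python
import math

def ld_scores(tokens):
    """Basic lexical diversity measures over a token list:
    Types = V (distinct forms), TTR = V / N, Guiraud = V / sqrt(N)."""
    n, v = len(tokens), len(set(tokens))
    return {"types": v, "ttr": v / n, "guiraud": v / math.sqrt(n)}

# Hypothetical flemma table (illustration only): inflected forms map
# to one base form with no part-of-speech distinction.
FLEMMAS = {"runs": "run", "ran": "run", "running": "run", "walked": "walk"}

tokens = "she runs he ran they keep running and she walked".split()
print(ld_scores(tokens))                               # simple (form) count
print(ld_scores([FLEMMAS.get(t, t) for t in tokens]))  # flemma count
```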


Diversity of Advanced Sentence Structures (DASS) in writing predicts argumentative writing quality and receptive academic language skills of fifth-to-eighth grade students

Ziyun Deng, Harvard Graduate School of Education, 305 Larsen Hall, 14 Appian Way, Cambridge, MA 02138, USA 

Paola Uccelli, Harvard Graduate School of Education, 320 Larsen Hall, 14 Appian Way, Cambridge, MA 02138, USA

Catherine Snow, Harvard Graduate School of Education, 313 Larsen Hall, 14 Appian Way, Cambridge, MA 02138, USA

Abstract Prior adolescent writing research typically used omnibus length-based measures, such as Mean Length of Clauses (MLC), to describe and evaluate students’ performance at constructing complex sentences. This study undertook to: (1) develop a novel approach, Diversity of Advanced Sentence Structures (DASS), with a more detailed inventory of students’ complex sentence structures; and (2) provide evidence of DASS’ validity. To develop DASS, seven types of sentence structures (adverbial clause, clausal complement, clausal prepositional complement, relative clause as modifier, clausal subject, noun as modifier, and passive voice) that characterize school-based texts in adolescence, were identified. Students’ essays were coded for the presence or absence of each structure; the total types of structures present in an essay determined the 0-to-7-point DASS score. A cross-sectional sample of fifth-to-eighth graders (N = 512) wrote argumentative essays responding to a school policy controversy. DASS scores in seventh or eighth grade were significantly higher than those in fifth grade. DASS significantly and positively predicted both students’ writing quality and their receptive academic language, while MLC did not show significant associations with either, controlling for students’ grade, gender, socio-economic status. This study suggests that the DASS offers a promising novel approach to capturing a detailed picture of how emerging academic writers construct complex sentences.


Key words Sentence structures, Syntax, Grammar, Argumentative writing, Adolescence, Academic language
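
The abstract states the scoring rule explicitly: an essay's DASS score is the number of distinct advanced structure types present, out of seven. The sketch below implements that rule directly; in practice the presence/absence flags come from syntactic coding of each essay, so the flags here are a hypothetical example.

```python
# The seven advanced structure types named in the abstract.
DASS_STRUCTURES = (
    "adverbial_clause", "clausal_complement",
    "clausal_prepositional_complement", "relative_clause_modifier",
    "clausal_subject", "noun_as_modifier", "passive_voice",
)

def dass_score(present):
    """DASS = count of distinct structure types present in an essay (0-7),
    following the presence/absence coding described in the abstract."""
    return sum(1 for s in DASS_STRUCTURES if present.get(s, False))

# Hypothetical coding of one essay: four structure types detected.
essay = {"adverbial_clause": True, "relative_clause_modifier": True,
         "noun_as_modifier": True, "passive_voice": True}
print(dass_score(essay))  # -> 4
```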


Integrated writing across conflicting L2 texts: The effects of prior beliefs on perspectivism and propositional content

Mohammad N. Karimi, Department of Foreign Languages, Kharazmi University, Tehran, Iran

Tahereh Mousavi-Moghadam, Ilam University, Ilam, Iran

Abstract In light of the ever-increasing predominance of sources that present conflicting information on socio-scientific issues, constructing integrated written representations across such sources has become a routine practice for information users, particularly in academic contexts. Evidence accumulates that readers’ pre-existing beliefs affect source comprehension. Since source comprehension has been documented to significantly affect integrated writing task performance, the present study examined how participants’ prior beliefs interact with their integrated written representations of documents that present divergent standpoints on a controversy. The study, more specifically, examined perspectivism and propositional content in integrated summaries written by English as a Foreign Language (EFL) participants across documents that presented perspectives contradicting or aligning with their pre-existing beliefs. The results revealed that perspectives adopted by participants in their integrated summaries were largely biased towards their pre-existing beliefs. The results further indicated a strong bias towards the participants’ pre-existing stances on the controversy in the propositional content of the integrated writing task. Additionally, there was a significant difference in the propositional content of integrated summaries that adopted different perspectives. Furthermore, perceived plausibility judgements of the information were found to moderate the propositional content of the integrated summaries.


Key words Prior beliefs, Integrated writing, Propositional content, Source comprehension, Perspectivism


Examining the social consequences of a locally-developed placement test using test takers’ attitudes

Elizabeth Lee, Iowa State University, Department of English, 527 Farmhouse Ln, Ames, IA 50011, United States

Abstract Drawing on test takers’ attitudes to evaluate social consequences of testing is a widely recognized practice in the language testing literature. Yet studies using such approaches at the local placement level are few. To address this gap, 92 students from a large midwestern university were surveyed and 26 of these students participated in a follow-up interview. While survey results showed that students were more negative than positive towards the test, students’ accounts revealed a more complex picture. First, regardless of one’s attitudes toward the test, students experienced some lack of test preparation. Second, test-taking experiences were divergent even among students sharing similar perceptions about the test. Third, students sharing more positive views of the test tended to focus on ways that the test tasks represented the target domain, whereas students sharing less favorable views focused on the opposite. While there was no strong evidence of test invalidity, a need to review current test administration practices and ways of communicating test information and preparation materials is recommended. This study demonstrates that studying the attitudes of test takers and the factors contributing to such attitudes is useful not only for test validation purposes but also for establishing positive test quality.


Key words Test takers' attitudes, Placement test, Social consequences, Appraisal


Integrated writing and its correlates: A meta-analysis

Sathena Chan, Centre for Research in English Language Learning and Assessment, University of Bedfordshire, Putteridge Bury Campus, LU2 8LE, United Kingdom

Junko Yamashita, Graduate School of Humanities, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan

Abstract Integrated tasks are increasing in popularity, either replacing or complementing writing-only independent tasks in writing assessments. This shift has generated many research interests to investigate the underlying construct and features of integrated writing (IW) performances. However, due to the complexity of the IW construct, there are conflicting findings about whether and the extent to which various language skills and IW text features correlate to IW scores. To understand the construct of IW, we conducted a meta-analysis to synthesize correlation coefficients between scores of IW performances and (1) other language skills and (2) text quality features of IW. We also examined factors that may moderate the correlation of IW scores with these two groups of correlates. Consequently, (1) reading and writing skills showed stronger correlations than listening to IW scores; and (2) text length had a strongest correlation, followed by source integration, organization and syntactic complexity, with a smallest correlation of lexical complexity. Several IW task features affected the magnitude of correlations. The results supported the view that IW is an independent construct, albeit related, from other language skills and IW task features may affect the construct of IW.


Key words Meta-analysis, Integrated writing, Language skills, Text features, Task characteristics
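
For readers curious how correlations are synthesized in a meta-analysis like this, a common approach is to average Fisher-z-transformed coefficients weighted by inverse variance and back-transform the result. The sketch below shows the fixed-effect version with made-up numbers; the authors' actual model (likely random-effects, with moderator analyses) may differ.

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooling of Pearson correlations via Fisher's z.
    `studies` is a list of (r, n) pairs: correlation and sample size.
    z = atanh(r) has approximate variance 1 / (n - 3), so weight = n - 3."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher transform: 0.5 * ln((1 + r) / (1 - r))
        w = n - 3           # inverse-variance weight
        num += w * z
        den += w
    return math.tanh(num / den)  # back-transform pooled z to r

# Hypothetical correlations between a language skill and IW scores.
print(round(pooled_correlation([(0.45, 120), (0.38, 80), (0.52, 200)]), 3))
```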


Exploring Chinese EFL undergraduates’ writing from sources: Self-efficacy and performance

Qin Zhou, Central South University, Hunan, China 

Liang Chen, University of Georgia, Athens, USA 

Xianrong Hou, Central South University, Hunan, China

Abstract Little research has been done on whether instructional efforts focusing on self-efficacy will improve writing performance and change the dynamic relationship between self-efficacy and writing performance. The present study aimed to address this gap by utilizing a quasi-experimental pretest-posttest design to examine 50 Chinese undergraduates’ self-efficacy for writing from sources and performance over a 10-week long integrated course in EFL context. Working within the framework of social cognitive theory, a self-report questionnaire was constructed to measure the dimensions of self-efficacy for writing from sources, with defining elements of cognitive process (transformation and language control), metacognitive process (self-regulatory strategy (SRS)) and motivation. Results showed that participants’ performance, transformation and language control improved significantly but SRS and motivation remained stable. In addition, there were consistent correlations among motivation, SRS, transformation and language control over the course, and performance score was moderately correlated with self-efficacy at the posttest. These results suggested that the key transforming process provided EFL learners with mastery experiences in an effective observable modelling setting, which facilitated their performance and beliefs in the transforming ability of writing from sources, impacted their language learning positively, and regulated their writing and reading concurrently.


Key words Self-efficacy, Writing from sources, Chinese EFL undergraduates


Examining evaluative language used in assessment feedback on business students’ academic writing

Prithvi N. Shrestha, School of Languages & Applied Linguistics, The Open University, Milton Keynes, UK

Abstract Written assessment feedback in higher education has been examined from different perspectives. However, there is limited empirical evidence of how tutors use language to provide assessment feedback on students’ assessed academic writing. By deploying the rarely used Appraisal framework in Systemic Functional Linguistics, this innovative study examined the use of evaluative language by tutors in feedback on undergraduate business students’ academic writing in two assignments at a distance university. The data consisted of 16 tutor assessment feedback summaries on eight students’ written assignments and interviews with those students. The Appraisal system of Attitude (Judgement, Appreciation and Affect) was used to analyse the evaluative language of the summaries. The analysis of student interviews provided insights into their perceptions of tutor feedback, complementing the linguistic analysis. The findings suggest that tutors’ evaluative language was primarily used to judge students rather than to appreciate the assignment, and show their emotional reactions, potentially owing to the distance learning context. Additionally, while most of the feedback was perceived positively, students found certain types of tutor feedback less helpful. The paper has implications for moving assessment feedback research forward through applying the Appraisal framework, improving assessment strategies and tutor formative feedback practices in writing assessment.


Key words HE (Higher Education), SFL (Systemic Functional Linguistics), TMA (Tutor Marked Assignment)


Assessing L2 integrated writing self-efficacy: Development and validation of a scale

Peixin Zhang, College of Foreign Languages and Cultures, Xiamen University, Xiamen, China

Jason Fan, Language Testing Research Centre, University of Melbourne, Parkville, VIC, Australia

Wenfeng Jia, School of Translation Studies, Shandong University, Weihai, China

Abstract Despite the increasing use of integrated tasks in English as a Foreign language (EFL) academic contexts, few attempts have been made to develop reliable and valid instruments to measure integrated writing self-efficacy. To address this research gap, this study developed and validated a scale of self-efficacy for L2 reading-to-write tasks in an EFL academic context. Three dimensions were posited to underlie the construct of L2 reading-to-write self-efficacy (L2RWSE), including self-regulation efficacy, discourse synthesis self-efficacy, and writing conventions self-efficacy. This theoretical underpinning informed the development of the L2RWSE scale. A series of three studies were conducted to interrogate the validity of the scale, involving a total of 1414 participants. Findings from the first two studies, based on exploratory and confirmatory factor analysis respectively, supported our hypothesis that the scale consisted of the three dimensions. Results of multigroup CFA indicated that the scale functioned equivalently on two groups of participants from key and non-key universities. Findings from Study 3 demonstrated that students’ L2RWSE had significant predictive effects on their performance on an integrated reading-to-write task. While shedding light on the construct of L2 integrated writing self-efficacy, the study has important implications for integrated writing assessment and pedagogy.


Key words L2RWSE (second language reading-to-write self-efficacy)


Assessing pragmatic performance in advanced L2 academic writing through the lens of local grammars: A case study of ‘exemplification’

Hang Su, Centre for Foreign Languages and Literature, Sichuan International Studies University, China

Xiaofei Lu, Department of Applied Linguistics, The Pennsylvania State University, USA


Abstract This study explores the relationship between pragmatic performance of discourse acts and second language (L2) English writing proficiency. To this end, the study focuses on one specific discourse act, i.e., exemplification, and adopts a local grammar approach to investigate the performance of exemplification in three corpora of academic writing by Chinese undergraduate English as a foreign language (EFL) learners, Chinese postgraduate EFL learners, and expert writers. The results show that both the frequency of exemplificatory markers and, more notably, the quantity, strategic use, and diversity of local grammar patterns largely increase with proficiency level, suggesting that pragmatic performance of discourse acts could be a useful indicator of L2 writing proficiency. The implications of our findings for L2 writing research and assessment are discussed.


Key words Pragmatic performance, Discourse act, Local grammar, Exemplification, L2 writing assessment


Validity evidences for scoring procedures of a writing assessment task. A case study on consistency, reliability, unidimensionality and prediction accuracy

Paula Elosua, University of the Basque Country, San Sebastian, Spain

Abstract Scoring is a fundamental step in the assessment of writing performance. The choice of the scoring procedure as well as the adoption of a discrepancy resolution method can impact the psychometric properties of the scores and therefore the final pass/fail decision. In a comprehensive framework which considers scoring as part of the validation process of the scores, the aim of this paper is to evaluate the impact of rater mean, parity and tertium quid procedures on score properties. Using data from a writing assessment task applied in a professional context, the paper analyses score reliability, dependability, unidimensionality and decision accuracy on two sets of data; complete data and subsample of discrepant data. The results show better performance of the tertium quid procedure in terms of reliability indicators but a lower quality in defining construct unidimensionality.


Key words Assessing writing, Scoring procedures, Validity, Rater mediated assessment


Using unfolding models to identify targeted feedback strategies for student writing

Ye Yuan, The University of Georgia, Terrell Hall, Athens, GA 30602, USA

George Engelhard Jr., The University of Georgia, Terrell Hall, Athens, GA 30602, USA

Kevin Raczynski, The University of Georgia, Terrell Hall, Athens, GA 30602, USA; Kevin Raczynski & Associates, Atlanta, GA, 30301, USA

Jue Wang, University of Science and Technology of China, Hefei, Anhui 230052, China

Abstract Formative assessments are used to identify student strengths and weaknesses, but they frequently do not identify targeted feedback strategies for improving student achievement. Unfolding models offer an alternative measurement paradigm for connecting recommended feedback strategies with different levels of student achievement. In this study, a focus group of English teachers identified feedback strategies for the writing of middle school students. A questionnaire was constructed based on their identification of 10 feedback strategies related to organization, development and language usage. Next, classroom teachers responded to the feedback questionnaire based on a set of 20 essays selected to vary in writing quality. An unfolding model was used to analyze teacher responses and to identify recommended feedback strategies for each essay. Findings suggest that recommended feedback strategies can be used to define zones of targeted feedback for identifying optimal feedback strategies that can be used for improving student writing at different levels of achievement.


Key words Formative assessment, Unfolding models, Feedback strategies, Writing


(In)Equities in directed self-placement

Theresa Tinkle, Jason Godfrey, Anil Ramachandran Menon, Andrew Moos, Laura Romaine, Michelle Sprouse, University of Michigan, 105 S State St, Ann Arbor, MI 48109, USA

Abstract Directed Self-Placement (DSP) refers to an increasingly utilized approach to writing placement in both two- and four-year institutions that both informs students of their writing course options and expectations at their institution and allows them to decide for themselves what course best fits their needs. One reason for its popularity as a method is its purported ability, when locally designed and maintained, to serve as a more equitable and antiracist form of writing placement than some alternatives. In order to test these claims, this study examines five years of placement, enrollment, and GPA data from first-year writing (FYW) classes. Descriptive data analysis reveals that DSP as locally administered has different social consequences for domestic under-represented minoritized (URM) students and women than for non-URM students and men. The data also reveal that student non-conformity to DSP placement recommendations does not result in significant underperformance in FYW. It would appear that at this institution “an admitted student is a qualified student.”


Key words Directed self-placement, DSP, Writing placement, Disparate impact analysis, Writing assessment, Equity


Transfer of ideal L1 and L2 writing selves and their impacts on L2 writing enjoyment and integrated writing performance

Xinhua Zhu, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China

Yi Guan, College of Education, Zhejiang University, Hangzhou, Zhejiang, China

Yuan Yao, College of Foreign Languages, Wulingshan K-12 Educational Research Center, Huaihua University, Huaihua, Hunan, China

Abstract Second language (L2) writers’ motivation and emotion is a flourishing area in L2 writing education research; however, there is a dearth of literature on L2 writers’ motivational and emotional responses to integrated writing (IW) tasks, particularly in a bilingual context. With a sample of 239 first-year English major students at a Chinese university, the present study investigated the relationship between ideal L1 and L2 writing selves and how they may contribute to L2 IW enjoyment and task performance. Path analysis results report a relationship of transfer between ideal L1 and L2 writing selves, as well as their positive associations with L2 writing enjoyment and L2 IW performance. Further, the indirect path from ideal L1 writing self to L2 IW performance via ideal L2 writing self and L2 writing enjoyment was evident. This study contributes to the literature in two ways. First, we included IW tasks in the existing research on ideal self and L2 writing. Second, we affirm that the transfer between L1 and L2 may concern not only linguistic capabilities and learning strategies, but also students’ personal beliefs about language learning. Pedagogical suggestions are made based on the findings.


Key words Ideal L1 writing self, Ideal L2 writing self, L2 writing enjoyment, L2 integrated writing performance


Complexity, accuracy, and fluency in L2 writing across proficiency levels: A matter of L1 background?

Vo Dinh Phuoc, Jessie S. Barrot, School of Foreign Languages, University of Economics Ho Chi Minh City, Vietnam

Abstract While there have been studies that explored complexity, accuracy, and fluency (CAF) as an index of language proficiency, studies that comprehensively examined the inter-variations in the CAF of different L1 backgrounds across proficiency levels remain very limited. Thus, we investigated how CAF measures in L2 writing differentiate proficiency levels in six Asian L1 backgrounds. We also examined how these measures vary from one L1 background to another. A total of 3606 argumentative essays from the International Corpus Network of Asian Learners of English (ICNALE) were analyzed for 18 CAF measures. Overall results confirmed that differences exist among these measures as an index of proficiency when different L1 backgrounds are compared. The findings also provided some evidence that L2 writers who belong to typologically similar languages exhibit striking similarities in several CAF measures compared to typologically dissimilar languages. Nonetheless, the data showed that MLC, CN/T, WCR, and W/Tx are the most consistent among CAF measures in differentiating proficiency levels in each of the different L1 backgrounds. Implications for L2 writing assessment and future studies are discussed.


Key words Complexity, Accuracy, Fluency, L1 background, L2 writing, Language proficiency


Structure and coherence as challenges in composition: A study of assessing less proficient EFL writers’ text quality

Lea Siekmann, Department of Education, University of Münster, Westfälische Wilhelms-Universität Münster, Schlossplatz 2, 48149 Münster, Germany

Judy M. Parr, Faculty of Education and Social Work, The University of Auckland, Private Bag 92019, Auckland Mail Centre, Auckland 1142, New Zealand

Vera Busse, Department of Education, University of Münster, Westfälische Wilhelms-Universität Münster, Schlossplatz 2, 48149 Münster, Germany

Abstract Students are usually expected to write full texts in English as a foreign language (EFL) at the end of secondary education. However, research on EFL writing at school is scarce, especially regarding less proficient writers, and seldom focuses on deep-level text features such as structure and coherence. Based on a sample of 166 EFL students in Year 9 attending German middle and lower performance track schools, this study examined 326 narrative and argumentative texts. First, we assessed structure and coherence via analytic ratings using detailed rubrics to gain insights into possible challenges for students. Our analysis showed that relevant text parts (such as the conclusion) were mostly missing and that students struggled to establish a broad common thread with argumentative texts being overall less structured and coherent than narrative texts. Second, we used the software Comproved® to conduct holistic ratings of overall text quality and compared them with our analytic ratings. Large correlations between both ratings suggest that structure and coherence are important aspects of text quality. We discuss how our rubrics can serve as a useful tool for assessment for learning and assist less proficient writers in establishing deep-level features in their texts.


Key words Secondary school, Writing competence, L2, Text organization, Low-achieving students



About the Journal

Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional ('direct' and standardised forms of) testing of writing, alternative performance assessments (such as portfolios), workplace sampling and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, and validation, and test development; it aims to value all perspectives on writing assessment as process, product and politics (test takers and raters; test developers and agencies; educational administrations; and political motivations). The journal is interested in review essays of key issues in the theory and practice of writing assessment.



The scope of the journal is wide, and embraces all work in the field at all age levels, in large-scale (international, national and state) as well as classroom, educational and non-educational institutional contexts, writing and programme evaluation, writing and critical literacy, and the role of technology in the assessment of writing. Through this scholarly exchange, Assessing Writing contributes to the development of excellence in the assessment of writing in all contexts, and, in so doing, to the teaching and appreciation of writing.




Official website:

https://www.sciencedirect.com/journal/assessing-writing

Source: the official website of Assessing Writing




