Despite consistent demand for excellence from departments and other stakeholders, doctoral programs have rarely, if ever, been assessed in terms of the quality of the dissertations they produce. Yet dissertations provide the most powerful, objective measure of the success of a department’s doctoral program. Indeed, assessment, when done properly, can help departments achieve excellence by providing insight into a program’s strengths and weaknesses.

This book and the groundbreaking study on which it is based are about making explicit to doctoral students the tacit “rules” for assessing the final of all educational products: the dissertation. The purpose of defining performance expectations is to make them more transparent to graduate students while they are in the researching and writing phases, and thus to help them achieve higher levels of accomplishment. Lovitts proposes using rubrics to clarify performance expectations: not to rate dissertations or their individual components and produce a summary score, but to facilitate formative assessment that supports, rather than substitutes for, the advising process. She presents the results of a study in which over 270 faculty from ten major disciplines, spanning the sciences, social sciences, and humanities, were asked to make explicit their implicit standards and criteria for evaluating dissertations.

The book concludes with a summary of the practical and research implications for different stakeholders: faculty, departments, universities, disciplinary associations, accrediting organizations, and doctoral students themselves. The methods described can easily be adapted for the formative assessment of capstone courses, senior and master’s theses, comprehensive exams, papers, and journal articles.
#references
Title: Making the Implicit Explicit: Creating Performance Expectations for the Dissertation
One of only two exemplary, empirically grounded studies of doctoral students’ successes and challenges at synthesis that operate at the artifact level (other studies examine these questions through interviews or surveys); the other is @holbrookInvestigatingPhDThesis2004
#participants were N = 276 “PhD-productive” faculty (each advising tens of dissertations on average, and serving on the committees of more) across N = 74 departments in N = 10 disciplines spanning the sciences, social sciences, and humanities, at N = 9 different universities
Their #method: analyzing faculty comments on a large number of actual dissertations across a wide range of disciplines spanning the humanities, social sciences, and traditional STEM fields: biology, physics, electrical and computer engineering, mathematics, economics, psychology, sociology, English, history, and philosophy