In 2005 St. John's University held a Presidential Summit, "How Do You Know if Your Students Are Learning?" The summit was led by Dr. Barbara E. Walvoord, a nationally recognized scholar on assessment.* At that time Walvoord discussed the degree to which assessment had become a national reform movement, fueled in part by calls for higher education to be more accountable for its learning standards, as well as increased scrutiny by college students and their families when selecting colleges. In the years since her visit to St. John's, the assessment movement has only increased.
Because of the growing importance of assessment in higher education, we find it necessary to provide chairs, faculty, and administrators with an introductory overview of this admittedly weighty and sometimes overwhelming term. We realize that the emphasis on assessment in education has surged in the last two decades, and that faculty and administrators who have not had the time to fully explore assessment theory or practice might benefit from some contextualization. As you go through these pages, please keep two things in mind:
- We at St. John’s are committed to progressive assessment practices informed by best practices in the field. In accordance with those best practices, we strongly believe that assessment needs to be local and “homegrown.” This means that faculty and students, within their local disciplines, need to work together to continually imagine, develop, and act upon their assessment initiatives. Assessment, first and foremost, is about reflecting upon one’s learning. We also strongly believe that it is the administration’s job to ensure that assessment is ongoing, measurable, and informed by best practices—and that the hard work faculty put into their ongoing assessment is recognized and acted upon, and not just archived. Most of all, we recognize that the heart of assessment lies with the faculty and students who know best about their disciplinary and programmatic needs.
- The Office of the Provost is committed to working with all faculty and departments to help them with their assessment initiatives. We also want to make sure that all of these assessment initiatives are showcased and highlighted every semester through WEAVE, electronic portfolios, and our University web pages. Most importantly, we are committed to working with departments to act upon their ongoing assessment activities—the reports that get submitted online and placed into WEAVE should not simply be warehoused, but explored in order to identify further plans for action.
These “Unpacking Assessment” pages are intended as a living document. Though they were originally posted by the Office of the Provost, we invite faculty, administrators, and students to submit suggestions for revision as well as new information. Ultimately these pages should reflect the collaborative spirit of assessment. Please direct any suggestions, questions, or additional text to be considered to Elizabeth Ciabocchi, Ed.D.
* We highly recommend Walvoord's succinct and very accessible book, Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education (2nd ed., Jossey-Bass 2010).
A paradigm shift has occurred in higher education. A generation ago such terms as “assessment,” “rubric,” and “learning outcomes” were not in the working vocabularies of many faculty and departments. Today these terms are unavoidable and omnipresent, and they have profoundly changed the landscape of higher education.
Nor can academic institutions, when their various accreditation agencies demand greater evidence of assessment, simply “push back” (as some colleagues have been heard to suggest). There is no resisting the assessment wave.
But there is no reason we should want to push back on assessment. After all, faculty and academic institutions have always been assessing their students and themselves. What's important is that we keep abreast of best practices in assessment, and conduct ongoing assessment in ways that privilege faculty expertise while taking student feedback seriously. At its heart assessment is about learning: reflecting upon what one has learned and acting upon those reflections, continually.
The trick is to ensure that assessment is understood as something that develops locally and organically, originating in conversations between faculty and students within their own disciplines. Faculty know their disciplines better than anyone else. And because they are in such sustained contact with their students, faculty can learn a great deal about what students think of their courses, syllabi, and assignments. Together, faculty and students need to be learning from each other, all the time. All good academic assessment has faculty-student dialogue and mutual learning at its core.
Administrations also play a key role in assessment—not by telling faculty or departments how they must assess, but by creating a supportive, nurturing culture of assessment for faculty and students. Administrations help initiate, orchestrate, study (and, yes, assess) a University’s assessment practices. They help provide faculty with the necessary tools and direction for conducting an array of ongoing assessment activities and methods. In turn the administration is continually learning from faculty and students about the learning that takes place throughout the institution.
This is the three-way conversation that drives good assessment: students, faculty, and administration in continual dialogue, reflecting upon their ongoing learning.
What Is the Value of a St. John’s Education? is the question at the core of our University's recent “Repositioning Document.” This is the question that will drive many of our assessment initiatives for the foreseeable future. (It is, in fact, the kind of question all educational institutions are wrestling with, especially as the cost of tuition continues to rise and students are increasingly selective about where they attend college.) This question cannot be answered without in-depth, ongoing assessment.
- Cultivate a culture of assessment that is local and “homegrown”—where assessment methods and initiatives are designed by faculty, and unique to each department.
- Foster a lively sense of collegiality among faculty within their disciplines, as well as between those faculty and their students.
- Create a truly collaborative and mutually instructive three-way dialogue of assessment between students, faculty, and administrators.
- Showcase our collective assessment efforts every semester as a means of demonstrating to prospective students and our fellow colleagues just how creative and engaged we are when it comes to reflecting upon the learning that is at the heart of our work at St. John’s.
- Follow up on departmental and program assessment findings in ways that enrich our programs and student success.
Those who are new to the culture of assessment—and this includes students, faculty, and administrators alike—sometimes respond a bit skeptically to all this talk about assessment. We should be honest and up front about such initial skepticism. We need to explore this resistance and work through it so that any resultant assessment approaches are not perceived as mandates arbitrarily imposed upon faculty and students, but legitimate components of learning valued by all parties.
Students, for example, when asked to write self-reflection responses in which they evaluate their own work and progress (just one of many different forms of assessment used in high school and college courses today), don’t always see the value in such an activity. “Why are you making us do all this extra work?” they might wish to ask their professor. “Why can’t you just give us our grade and be done with it?”
Faculty can also respond to this culture of assessment with a few raised eyebrows. This is understandable and logical. If faculty have been teaching for several decades quite comfortably without having to actively engage in assessment activities, and then are suddenly expected to articulate measurable learning objectives, it is only natural that they demand some evidence and justification for such a major shift. Faculty are independent thinkers who take their pedagogical and intellectual autonomy quite seriously, as well they should. So it’s only normal that this paradigm shift in assessment (and any pedagogical paradigm shift for that matter) is not immediately embraced by every faculty member. This is why it is the responsibility of both administration and faculty, working together, to explore best practices in assessment.
Just one example: faculty sometimes respond to this new culture of assessment by asking, “What’s wrong with the way I—not to mention universities for more than a century—have been assessing students? The University pays professors like me to teach students, judge their work, and assess them with a grade. I’ve been doing that from day one, so why all this emphasis on measuring learning outcomes?” This is a fair response, and it is important that we address it head-on as we recognize why traditional grading practices are, by themselves, no longer enough. (Yes, grading is a method of assessment. But it is limited in what it tells us about student learning. Assessment involves a variety of means of collecting ideas and data—from students and one’s colleagues—that in turn help enhance faculty teaching methods, which in turn enhance student performance. The mere existence of grades doesn’t necessarily mean there is much significant learning taking place on either side.)
And administrators too—even when placed in the position of being advocates and promoters of assessment—are themselves sometimes overwhelmed (if not occasionally exhausted) by the new expectations that come with this culture of assessment. They too, with their faculty colleagues, sometimes look back on their jobs several decades prior, fondly reminiscing about all of that extra time they must have had before the vocabulary of assessment entered their daily work.
As we enter this new culture of assessment, it’s important to understand that as educators we have always been assessing. Reflecting upon the effectiveness of our teaching—our ability to meet our goals while maintaining a flexible and open-minded approach to our pedagogies—is integral to any good teaching and learning experience. Faculty develop courses, assign projects, and require tasks of their students. Then information is collected from students—not just test grades, but such things as portfolios, focus groups and student conferences, writing samples, surveys, and other forms of feedback. This data helps us learn where our programs and courses are working and where things can be improved. Then, faculty revisit assignments, curricula, and programs for further enhancement. This is the assessment feedback loop, and it is ongoing. Faculty never reach a point where there is nothing new to learn about their students’ performances.
- Understand that initial skepticism and resistance to calls for assessment among faculty are understandable and logical, and not inherently signs of apathy or indifference.
- Develop contexts where faculty, students, and administrators can explore the intrinsic value in reflecting upon our learning.
- Design means by which faculty can see that assessment creates ongoing opportunities to better understand their students, their colleagues, their institution, and their own (always evolving) disciplines.
- Ensure that our assessment policies are informed by all cohorts (students, faculty, administration), are integrated into the daily activities of the departments and institution, and do not manifest in little more than busywork for faculty.
Like all grand concepts, “assessment” is really an umbrella for a broad range of different methodologies and approaches. These include, but are not limited to, “classroom assessment,” “direct assessment,” “embedded assessment,” “formative assessment,” “indirect assessment,” “norm-referenced assessment,” “qualitative assessment,” “summative assessment,” and others. For those interested in exploring the rich vocabulary of assessment methods, there are plenty of guides and websites. But to be good at assessment we need not be overly preoccupied with such detailed schema. For now, we might focus on four distinct realms or genres of assessment.
I. Classroom Assessment
This is when individual faculty reflect on what and how their students learn in specific courses. Classroom assessment is focused on course improvement and is less preoccupied with giving grades. Obviously this doesn’t mean that faculty concerned with classroom assessment don’t give grades; it just means that faculty value dialogue with their students, exploring with them ways of enhancing the course. This can include focus groups, one-on-one and small group conferences, surveys, collecting student evaluations, and assigning and reflecting upon student writing. It also means faculty being in dialogue with their peers, sharing teaching methods and best practices with colleagues in meetings and retreats while keeping abreast of pedagogically relevant literature in the field. Certainly many faculty already engage in such activities—which illustrates the degree to which assessment has always been a part of our professional work. What’s different now is the need for faculty to document such assessment and how it translates into enhanced teaching and learning. Here at St. John’s, this means uploading data into WEAVE and assessing students’ accomplishments via their electronic portfolios and other means (more on these below).
II. Program Assessment
Just as individual faculty assess their own courses, so too do departments need to annually reflect upon their programs. Every year representatives from every academic program need to upload ongoing assessment findings from their department or unit into WEAVE, and also document what they are doing next in response to that assessment. In this manner a program is continually reflecting upon its strengths as well as areas it has identified for development and review.
III. Licensure and Examination Passing Rates
For some professions, a major form of assessment takes place in licensure and exam passing rates. A law school, for example, is evaluated to a large extent on how well its students perform on the bar exam. All departments that seek to prepare students for certain licensing or certification exams—and, thus, base their own effectiveness in part on how well their students do on such exams—need to continually compare their students’ success with benchmark institutions, establish targets, and develop action plans to increase or sustain performance. Summaries of this work are archived annually in WEAVE.
IV. Job Placement and Further Education
Are our students getting jobs related to their areas of study? Are they getting into graduate programs? Are they winning awards, or demonstrating other evidence of success? Departments are often the first places that learn of their students’ ability to land jobs or get into graduate programs. Departments need to keep records of where their majors, minors, and graduate students are establishing their careers or gaining admittance into other graduate programs. Such information should be regularly posted within the Department’s website.
- Understand some of the various ways assessment happens in response to the local needs of departments and programs.
- Appreciate the need to continually document assessment findings, as well as resultant action plans.
Chairs should not be the only people responsible for leading departmental assessment—all department faculty need to be regularly involved in conversations about assessment. These conversations can take place during regular department meetings, annual or semi-annual department retreats, and additional workshops throughout the year. To this end, we urge every Department to assign one or more faculty members to serve as Department Assessment Coordinators in order to address the following:
- Work with the chair and department to schedule and conduct meetings where departmental assessment activities are generated, implemented, and reviewed
- Work with the chair and Institutional Research to update information in WEAVE at the end of every semester, keeping the department abreast of ongoing assessment projects
- Update departmental assessment initiatives on a Departmental Assessment webpage every semester
- Work with the department to ensure that webpages are up to date (including faculty bios) and that necessary new content is being created; submit changes to the department secretary (who will have already been trained in publishing to the website)
- Annually facilitate departmental review of language in the bulletin (hard copy and online) pertaining to the department and its courses, consulting colleagues to ensure accuracy and working with the department secretary to post changes to the website when necessary
Likewise, the administration values honest feedback in departmental reporting. We recognize that the purpose of assessment is not for departments and programs to sing their own praises, but rather to accurately and candidly reflect upon where things are working and where they can be improved. It is better for a program to report that its learning outcomes are below its targets and that it is consequently pursuing an action plan aimed at reaching those targets, than for a program to report that all is well and that no action plans are in effect. Also, the administration recognizes that the hard work faculty put into assessment cannot be ignored. Annual assessment reports need to be responded to and acted upon, not left forgotten in an online database.
- Impress upon faculty the need to regularly document their department’s various assessment initiatives online for everyone to see—colleagues, students, accreditation agencies, and other stakeholders.
- Involve faculty directly in the ongoing maintenance and design of their department’s web pages—which are nothing less than the most public face of their department.
- Make assessment engaging and inviting—to help faculty realize that many conversations about assessment are inherently conversations about values, knowledge, critical thinking, and many of the very attributes that led faculty into the profession of teaching in the first place.
- Ensure that the administration values honest reporting, and supports departments in following through on action plans when possible.
In 2006 St. John’s acquired WEAVE, a web-based assessment and planning management system designed to help faculty and administrators write goals, objectives, and criteria for tracking, assessing, and developing action plans. This tool currently serves as our University’s primary repository for student learning assessment—a living database where all departments and offices regularly record and monitor their various assessment activities.
The WEAVE management system has five main sections: 1) Assessment, 2) Action Plans, 3) Achievement Summaries, 4) Annual Reporting, and 5) Document Repository. The first section, “Assessment,” is the most detailed. It requires all programs and units to a) enter and edit their mission/purpose; b) establish goals for achieving that mission; c) develop outcomes and objectives; d) identify measures; e) specify achievement targets; f) enter findings; and g) develop action plans in response to those findings.
All Departments will need to submit summaries of their ongoing assessment activities into WEAVE every year, and ideally at the end of every semester. The importance of submitting accurate, detailed, and honest reports to WEAVE cannot be overstated. When outside accreditation agencies explore our commitment to assessment, one of their primary concerns is to see that assessment is taking place, and is ongoing. The Associate Director of University Assessment, housed in Institutional Research, is available to assist chairs and their Department Assessment Coordinators in uploading and updating their assessment summaries, plans, and reports.
Our goal: For faculty to regularly discuss within their departments their ongoing assessment initiatives, and document those activities (and their results, and their follow-up plans) on WEAVE.
For more information on WEAVE, please contact the Office of Institutional Research at (718) 990-1869.
Beginning in September 2011, St. John’s University partnered with Digication, a company that provides electronic portfolios for students.
An ePortfolio is a collection of academic material and accomplishments that a student gathers and places online. ePortfolios have become increasingly common at all levels of education, especially in high schools and colleges. Students use ePortfolios to showcase their intellectual achievements, as well as present themselves academically and professionally to outside audiences. In many areas the ePortfolio is fast replacing the conventional resume as the means by which students apply for jobs or graduate programs.
Faculty and administrators use ePortfolios to better understand and evaluate the work their students are doing. For example, ePortfolios offer a means by which faculty can engage in both classroom assessment and program assessment.
Just as WEAVE offers an essential means by which an institution continually documents its claims for assessment—a living archive and database of mission statements, goals, findings, and action plans—ePortfolios provide the evidence that supports the findings presented in WEAVE. Together, WEAVE and ePortfolios are the mechanisms St. John’s will use to further document our ongoing assessment activities.
Our Goal: For faculty to seriously imagine ways that their students can demonstrate expertise in their courses on ePortfolios—via written statements, papers, research, video, photography, audio, reflective journals, etc.
*Support information on ePortfolios can be found here.
It’s crucial that our assessment measures are specific to departments and disciplines, and are owned by faculty and students. One of the most important first steps any department can take in order to approach assessment logically and efficiently is to spend time taking an inventory of the shared values of that department. This can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.
Such a preliminary inventory is important. Many faculty within a department, when speaking casually to one another at meetings or during hallway conversations, might initially seem to uphold the same shared values when it comes to their expectations for student work. Take student writing, for example—a perpetual hot topic among faculty in nearly every discipline. When faculty discuss student writing, their shared values are not infrequently expressed in the negative as faculty complain and sigh over what they have identified as their students’ shortcomings. A common complaint many faculty have heard if not uttered: “My students just can’t write,” a sentiment that might well elicit mutual nods of affirmation from one’s other colleagues.
But while this complaint might have value in that it points to a generalized frustration with the student writing they see in their courses, the statement remains too vague to be of any pedagogical value. Obviously, students can write—if they were truly illiterate, numerous red flags would have been raised and they would presumably never have been admitted to college. A statement like “my students just can’t write” is really shorthand for a faculty member’s frustration with her students’ inability to meet certain literacy demands specific to her course and her discipline. If those faculty having that hallway conversation about lousy student writing were to sit down together over a series of meetings and carefully articulate what exactly they mean by that statement, they would begin to identify concerns that they share, as well as areas where their own personal values are not in sync. Faculty #1 might be primarily concerned with preventing students from adopting a first-person voice in their essays; faculty #2 might be more concerned with eliminating a finite list of common errors; faculty #3’s main concern might be getting students to revise their work before submitting it. Faculty #4 might value all of these things, but be even more concerned with students’ ability to analyze assigned texts in their writing. And faculty #5 might be mostly concerned with her students’ ability to marshal evidence and propose an argument.
While all of these faculty express shared frustration over “student writing,” their specific concerns are distinct. Until a department begins to untangle these many values, and work towards identifying (and perhaps prioritizing or ranking) those values, that department will not be able to effectively assess the writing of its students—for the department has not yet done the necessary work to identify what, exactly, it means when it refers to “good” and “bad” writing. Any resultant assessment methods might well be problematic if not invalid as a result.
And so, departments need to take the time to fully explore, and argue about, and debate, and ultimately define and articulate their shared values. Not only is this an ideal way of initiating serious assessment within a department, but it can be a good way for a department to—perhaps for the very first time—create a detailed portrait of what it stands for. Such an activity, taken seriously, is nothing less than a rigorous exercise in departmental self-identity.
There are scores of books, articles, and websites offering case studies that use simple, sound assessment methods. Three popular resources that we have found useful are Linda Suskie’s Assessing Student Learning: A Common Sense Guide (Jossey-Bass, 2009); Thomas A. Angelo and K. Patricia Cross’s Classroom Assessment Techniques: A Handbook for College Teachers (Jossey-Bass, 1993); and Barbara E. Walvoord’s Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed. (Jossey-Bass, 2010). The menu below (by no means comprehensive) references passages in these books where faculty can find additional information.
Listening to Students
Collecting short, anonymous feedback at different points in the semester can be a great means of finding out from students not only what they think of a course, but where they would prefer greater clarity, more time, or a chance to review certain information. Faculty can ask students to submit a page of typed, anonymous feedback at mid-semester, and then, if necessary, modify their teaching in response for the second half of the semester. Faculty could then ask students to submit another page of feedback at the end of the semester, reflecting upon whether or not they felt their concerns had been addressed. Even if a faculty member is unable to fully address every concern that students have, the simple act of asking for students’ opinions, and making an honest effort to address some of them, speaks volumes. Students appreciate such efforts.
Faculty can ask students to take a minute at the beginning or at the end of class to offer feedback, summarize an issue under discussion in the course, or answer a problem. Faculty can then quickly skim through these responses in order to identify any outstanding questions, issues, or problems that students identify.
If the size of one’s class permits, faculty can conduct one-on-one or small group conferences with students, asking them questions related to the course or soliciting their feedback. Many faculty find that students who are silent observers in class tend to open up more freely in such encounters outside of the classroom.
Using an online discussion board, through Blackboard or any other course management system, is an excellent way to gather student comments and to assess their ongoing understanding of material. Many students who would not speak up in class, nor feel comfortable in one-on-one conferences, will freely write feedback in an online format.
See Angelo and Cross’s Classroom Assessment Techniques for more information on such related assessment techniques as focused listing, memory matrices, analytic memos, one-sentence summaries, concept maps, directed paraphrasing, student-generated test questions, diagnostic learning logs, electronic mail feedback, assignment assessments, etc.
An ideal way for departments to gather information from their majors and minors is to hold focus groups, where a manageable group of students is brought together to answer a series of questions that the department has identified. In focus groups, it is important that the questions asked in each group remain the same, and that the people asking the questions remain neutral and, as much as possible, do not contribute their own opinions to the conversation. A good incentive to bring students together for focus groups is to offer food (free pizza is an important assessment tool that all departments should have in their arsenals). It can help to have two people conduct these focus groups: one to ask questions, gather input from all students assembled, and pose follow-up questions to further clarify students’ input; and a second to record those responses. After a series of focus groups, faculty can report their findings back to their departments and discuss what they should change or maintain as a result of that student feedback.
Focus groups have the added benefit of sending a message to students that a department values their input, and regularly considers their opinions as it conducts its business of designing curricula and evaluating the program.
For more on focus groups, see Suskie, pp. 195–196.
These “before and after” measures show how students have and haven’t progressed from one period to another. A professor might collect writing samples from her students early in a semester, then compare one or two variables later in the semester to see if they have changed. For example, how are students using evidence to back up their written claims? As with much assessment, the key here is to stay focused on the variable one has chosen to assess: if one is looking at how students are using evidence, one does not (for the purposes of this assessment) focus on other issues as well, such as, say, the presence of lively description. The more that faculty keep their assessment question simple and direct, the greater their success in following through on the project.
A Department can also conduct pre-post assessment across courses. A variable in the writing of students in a sophomore course for majors can be compared to that of students in a senior capstone course in order to obtain evidence on the degree to which students are demonstrating progress. If a department discovers that its senior students are not responding to that variable as successfully as it would like, this can lead to changes in how the sophomore or senior course is taught. (Such changes in teaching are the all-important “feedback loop” that needs to happen if assessment is to be successful.) Here it is important to note that it is not a black mark on the department to identify that its students are not performing up to certain standards in their senior year; in fact, the department’s diagnosis of this problem via assessment—and its resultant efforts at addressing the problem—would be examples of an engaged faculty conducting good assessment.
For more on getting students to reflect at the beginning and end of courses, see Suskie, pp. 190–191. Pre-post or before-and-after approaches to assessment are implicit in many of the methods described on this page.
A popular means of gathering information from students on a range of issues is the rating scale. The First-Year Writing program and the Writing Center have worked together to give all first-year writing students a Likert rating scale at the beginning and again at the end of the semester during which they take ENG 1000c. The purpose of this scale is not to measure students’ writing ability, but rather to gauge their engagement and self-assessment as writers.
For more on rating scales, see Suskie, pp. 195-198.
Many faculty use rubrics as a means of clarifying their values and defining the characteristics of what they might describe as successful, moderate, and less successful performance. Many faculty value rubrics because they make evaluation seem less mysterious and arbitrary.
See Suskie, pp. 137-153, for more on scoring guides and rubrics. See also Barbara Walvoord’s appendix on “Sample Rubrics” in Assessment Clear and Simple (107-14). Rubrics designed for our University core courses can be found here.
Some faculty and departments, however, prefer methods other than rubrics. Bob Broad’s concept of Dynamic Criteria Mapping, for example, originated in part from a critique of rubrics as too narrow and not descriptive enough to respond to the rich and complex range of variables that can exist in student work. As a result, some faculty and programs come up with their own “homegrown” maps or other forms of capturing and defining the characteristics of student work they wish to promote.
For faculty interested in examples of these alternatives to rubrics, we recommend Bob Broad (ed.), Organic Writing Assessment: Dynamic Criteria Mapping in Action (Utah State University Press, 2009).
Similar in spirit to focus groups, surveys allow faculty to solicit feedback from students in response to a variety of questions drawn up by a faculty member or department. Surveys can be distributed online, or passed out and collected (anonymously) in class. One advantage of surveys over focus groups is that a department can get input from a greater number of students. One possible drawback, however (especially with online surveys), is that students sometimes suffer from “survey fatigue,” as they are continually asked to fill out surveys and evaluation forms for so many other programs and events.
There is an art to writing good survey questions. For more information on surveys, see Suskie, pp. 198-200.
One of the most important early steps any department can take to approach assessment logically and efficiently is to spend time identifying the shared values within that department. The goal is to develop an inventory of a department’s shared values that will then serve as the foundation for the learning goals to be established for a course or program. Many departments would do well to consider this approach before embarking on other, more detailed assessment projects. It can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.
Individually, faculty can spend time writing down all of the learning outcomes they want their students to achieve—either at the end of a particular course, or at the end of their program. Then faculty would come together to share what each had written down privately in order to see where their opinions are in sync. They might rank their desired outcomes, then see where different members of the department prioritize certain skills or qualities. Such a departmental values inventory quickly becomes an exercise in vocabulary analysis: for example, if one faculty member says she wants her students to write “strong claims,” whereas another privileges “rhetorical risk-taking,” faculty would then spend time trying to articulate and carefully define what, exactly, these terms mean to them. Such clarification of terms—and the inevitable intradepartmental faculty debates that result from such conversations—is an essential component of good assessment. Until faculty can carefully and accurately define their own vocabulary of learning-outcome terms, any assessment of students’ ability to meet those outcomes could lack validity. Much of assessment is a matter of moving beyond the general to the specific: fine-tuning, clarifying, and articulating with precision what, exactly, faculty are looking for in student performance.
See “Developing Learning Goals” in Suskie (115-133) and Walvoord (81-85).
Portfolios offer a rich means of collecting artifacts students have created in their courses, as well as their own self-assessment. Portfolios can include both early and final drafts, and all manner of artifacts. Beginning in Fall 2011, the University will implement electronic portfolios for certain students. For more on our electronic portfolios, go here.
For more on portfolios see Suskie (202-213) and Walvoord (50-54).
Many assessment methods do not require permission from an Institutional Review Board (IRB). Federal policy exempts the following:
(1) Research conducted in established or commonly accepted educational settings, involving normal educational practices, such as (i) research on regular and special education instructional strategies, or (ii) research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.

(2) Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior.
Faculty and departments must be very careful, however, that individual students can be neither identified nor harmed by the disclosure of their opinions or feedback outside of the assessment research. For more, see U.S. Department of Health and Human Services, Title 45, Part 46, “Protection of Human Subjects,” Section 101, “To What Does This Policy Apply,” found here. Finally, consult our University’s IRB here for further information.
All of the above assessment methods are legitimate approaches, and can be summarized and entered into WEAVE online as examples of a department’s ongoing efforts to assess its courses and programs.