University Assessment Committee (UAC)

The University Assessment Committee (UAC), composed of faculty and staff representing each school/college within the university, provides advice, recommendations, and strategies to university administration and academic units regarding all activities associated with the assessment of student learning. The UAC is charged by the Provost to re-examine extant assessment practices, recommend new and different strategies where change may be warranted, and provide counsel aimed at improving and enhancing the effectiveness of all student assessment practices in undergraduate, graduate, professional and online education at St. John’s University.

UAC Membership List

University Assessment Committee Membership

Fall 2021

Academic Center for Equity and Inclusion
Dr. Manouchkathe Cassagnol
Pronouns: she|her|hers

Center for Teaching and Learning
Dr. Cynthia Phillips

College of Pharmacy & Health Sciences 
Dr. Marc Gillespie (Co-Chair)
Dr. Nicole M. Maisch

Collins College of Professional Studies 
Dr. Emese Ivan
Prof. James Croft

Institutional Research
Dr. Christine Goodwin (Co-Chair)

Law School
Prof. Rosa Castello

School of Education 
Dr. Edwin Tjoe

St. John’s College 
Dr. Phyllis Conn
Dr. Brittany Dotson
Dr. Srividhya Swaminathan 
 

The Peter J. Tobin College of Business
Dr. William Reisel
Dr. Victoria Shoaf

University Core Curriculum Council 
Dr. Olga Hilas
Dr. Joseph Serafin (and SJC)

University Libraries
Prof. Cynthia Chambers
Prof. Benjamin Turner

University Mission and The Vincentian Institute for Social Action
Ms. Lucy A. Pesce

Assessment Materials

The pages assembled here under the title of “Assessment Materials” fall into two categories. The first category is “Unpacking Assessment,” an extended overview for chairs and faculty. It is broken into a series of sections to ease navigation through the text.

The second category is “Assessment Tools,” which includes various assessment resources, many of which have existed previously in different places on our website and are now housed here in one place. This includes information on rubrics, links to related organizations and archives, and bibliographies.

Unpacking Assessment

In 2005 St. John's University held a Presidential Summit, "How Do You Know if Your Students Are Learning?" The summit was led by Dr. Barbara E. Walvoord, a nationally recognized scholar on assessment.* At that time Walvoord discussed the degree to which assessment had become a national reform movement, fueled in part by calls for higher education to be more accountable for its learning standards, as well as increased scrutiny by college students and their families when selecting colleges. In the years since her visit to St. John's, the assessment movement has only increased.

Because of the growing importance of assessment in higher education, we find it necessary to provide chairs, faculty, and administrators with an introductory overview about this admittedly weighty and sometimes overwhelming term. We realize that the emphasis on assessment in education has surged in the last two decades, and that faculty and administrators who have not had the time to fully explore assessment theory or practice might benefit from some contextualization. As you go through these pages, please keep two things in mind:

We at St. John’s are committed to progressive assessment practices informed by best practices in the field. In accordance with those best practices, we strongly believe that assessment needs to be local and “homegrown.” This means that faculty and students, within their local disciplines, need to work together to continually imagine, develop, and act upon their assessment initiatives. Assessment, first and foremost, is about reflecting upon one’s learning. We also strongly believe that it is the administration’s job to ensure that assessment is ongoing, measurable, and informed by best practices—and that the hard work faculty put into their ongoing assessment is recognized and acted upon, and not just archived. Most of all, we recognize that the heart of assessment has its origins with faculty and students who know best about their disciplinary and programmatic needs.

The Office of the Provost is committed to working with all faculty and departments to help them with their assessment initiatives. We also want to make sure that all of these assessment initiatives are showcased and highlighted every semester through WEAVE, electronic portfolios, and our University web pages. Most importantly, we are committed to working with departments to act upon their ongoing assessment activities—the reports that get submitted online and placed into WEAVE should not simply be warehoused, but explored in order to identify further plans for action.
These “Unpacking Assessment” pages are intended as a living document. Although they were originally posted by the Office of the Provost, we invite faculty, administration, and students to submit suggestions for revision as well as new information. Ultimately these pages should reflect the collaborative spirit of assessment. Please direct any suggestions, questions, or additional text to be considered to Elizabeth Ciabocchi, Ed.D.

* We highly recommend Walvoord's succinct and very accessible book, Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education (2nd ed., Jossey-Bass 2010).

A paradigm shift has occurred in higher education. A generation ago such terms as “assessment,” “rubric,” and “learning outcomes” were not in the working vocabularies of many faculty and departments. Today these terms are unavoidable and omnipresent, and they have profoundly changed the landscape of higher education. 

Nor can academic institutions, when their various accreditation agencies demand greater evidence of assessment, simply “push back” (as some colleagues have been heard to suggest). There is no resisting the assessment wave.

But there is no reason we should want to push back on assessment. After all, faculty and academic institutions have always been assessing their students and themselves. What's important is that we keep abreast of best practices in assessment, and conduct ongoing assessment in ways that privilege faculty expertise while taking student feedback seriously. At its heart assessment is about learning: reflecting upon what one has learned and acting upon those reflections, continually.

The trick is to ensure that assessment is understood as something that develops locally and organically, originating in conversations between faculty and students within their own disciplines. Faculty know their disciplines better than anyone else. And because they are in such sustained contact with their students, faculty can learn a great deal about what students think of their courses, syllabi, and assignments. Together, faculty and students need to be learning from each other, all the time. All good academic assessment has faculty-student dialogue and mutual learning at its core.

Administrations also play a key role in assessment—not by telling faculty or departments how they must assess, but by creating a supportive, nurturing culture of assessment for faculty and students. Administrations help initiate, orchestrate, study (and, yes, assess) a University’s assessment practices. They help provide faculty with the necessary tools and direction for conducting an array of ongoing assessment activities and methods. In turn the administration is continually learning from faculty and students about the learning that takes place throughout the institution.

This is the three-way conversation that drives good assessment: students, faculty, and administration in continual dialogue, reflecting upon their ongoing learning.

What Is the Value of a St. John’s Education? is the question at the core of our University's recent “Repositioning Document.” This is the question that will drive much of our assessment initiatives for the foreseeable future. (It is, in fact, the kind of question all educational institutions are wrestling with, especially as the cost of tuition continues to rise and students are increasingly selective about where they attend college.) This question cannot be answered without in-depth, ongoing assessment.

Our goals:

  • Cultivate a culture of assessment that is local and “homegrown”—where assessment methods and initiatives are designed by faculty, and unique to each department.
  • Foster a lively sense of collegiality among faculty within their disciplines, as well as between those faculty and their students.
  • Create a truly collaborative and mutually instructive three-way dialogue of assessment between students, faculty, and administrators.
  • Showcase our collective assessment efforts every semester as a means of demonstrating to prospective students and our fellow colleagues just how creative and engaged we are when it comes to reflecting upon the learning that is at the heart of our work at St. John’s.
  • Follow up on departmental and program assessment findings in ways that enrich our programs and student success.

Those who are new to the culture of assessment—and this includes students, faculty, and administrators alike—sometimes respond a bit skeptically to all this talk about assessment. We should be honest and up front about such initial skepticism. We need to explore this resistance and work through it so that any resultant assessment approaches are not perceived as mandates arbitrarily imposed upon faculty and students, but legitimate components of learning valued by all parties.

Students, for example, when asked to write self-reflection responses in which they evaluate their own work and progress (just one of many different forms of assessment used in high school and college courses today), don’t always see the value in such an activity. “Why are you making us do all this extra work?” they might wish to ask their professor; “why can’t you just give us our grade and be done with it?”

Faculty can also respond to this culture of assessment with a few raised eyebrows. This is understandable and logical. If faculty have been teaching for several decades quite comfortably without having to actively engage in assessment activities, and then are suddenly expected to articulate measurable learning objectives, it is only natural that they demand some evidence and justification for such a major shift. Faculty are independent thinkers who take their pedagogical and intellectual autonomy quite seriously, as well they should. So it’s only normal that this paradigm shift in assessment (and any pedagogical paradigm shift for that matter) is not immediately embraced by every faculty member. This is why it is the responsibility of both administration and faculty, working together, to explore best practices in assessment.

Just one example: sometimes faculty tend to respond to this new culture of assessment by asking, “What’s wrong with the way I—not to mention Universities for more than a century—have been assessing students? The University pays professors like me to teach them and judge their work and assess them with a grade. I’ve been doing that from day one, so why all this emphasis on measuring learning outcomes?” This is a fair response, and it is important that we address it head-on as we recognize why traditional grading practices are, by themselves, no longer enough. (Yes, grading is a method of assessment. But it is limited in what it tells us about student learning. Assessment involves a variety of means of collecting ideas and data—from students and one’s colleagues—that in turn help enhance faculty teaching methods, which in turn enhance student performance. The mere existence of grades doesn’t necessarily mean there is much significant learning taking place on either side.)

And administrators too—even when placed in the position of being advocates and promoters of assessment—are themselves sometimes overwhelmed (if not occasionally exhausted) by the new expectations that come with this culture of assessment. They too, with their faculty colleagues, sometimes look back on their jobs several decades prior, fondly reminiscing about all of that extra time they must have had when the language of assessment had not yet entered their daily working vocabularies.

As we enter this new culture of assessment, it’s important to understand that as educators we have always been assessing. Reflecting upon the effectiveness of our teaching (our ability to meet our goals while maintaining a flexible and open-minded approach to our pedagogies) is integral to any good teaching and learning experience. Faculty develop courses, assign projects, and require tasks of their students. Then information is collected from students—not just test grades, but such things as portfolios, focus groups and student conferences, writing samples, surveys, and other forms of feedback. This data helps us learn where our programs and courses are working and where things can be improved. Then, faculty revisit assignments, curricula, and programs for further enhancement. This is the assessment feedback loop, and it is ongoing. Faculty never reach a point where there is nothing new to learn about their students’ performances.

Our goals:

  • Understand that initial skepticism and resistance to calls for assessment among faculty are understandable and logical, and not inherently signs of apathy or disinterest.
  • Develop contexts where faculty, students, and administrators can explore the intrinsic value in reflecting upon our learning.
  • Design means by which faculty can see that assessment creates ongoing opportunities to better understand their students, their colleagues, their institution, and their own (always evolving) disciplines.
  • Ensure that our assessment policies are informed by all cohorts (students, faculty, administration), are integrated into the daily activities of the departments and institution, and do not manifest in little more than busywork for faculty.

Like all grand concepts, “assessment” is really an umbrella for a broad range of different methodologies and approaches. These include, but are not limited to, “classroom assessment,” “direct assessment,” “embedded assessment,” “formative assessment,” “indirect assessment,” “norm-referenced assessment,” “qualitative assessment,” “summative assessment,” and others. For those interested in exploring the rich vocabulary of assessment methods, there are plenty of guides and websites. But to be good at assessment we need not be overly preoccupied with such detailed schema. For now, we might focus on four distinct realms or genres of assessment.

I. Classroom Assessment

This is when individual faculty reflect on what and how their students learn in specific courses. Classroom assessment is focused on course improvement and is less preoccupied with giving grades. Obviously this doesn’t mean that faculty concerned with classroom assessment don’t give grades; it just means that faculty value dialogue with their students, exploring with them ways of enhancing the course. This can include focus groups, one-on-one and small group conferences, surveys, collecting student evaluations, and assigning and reflecting upon student writing. It also means faculty being in dialogue with their peers, sharing teaching methods and best practices with colleagues in meetings and retreats while keeping abreast of pedagogically relevant literature in the field. Certainly many faculty already engage in such activities—which illustrates the degree to which assessment has always been a part of our professional work. What’s different now is the need for faculty to document such assessment and how it translates into enhanced teaching and learning. Here at St. John’s, this means uploading data into WEAVE and assessing students’ accomplishments via their electronic portfolios and other means (more on these below).

II. Program Assessment

Just as individual faculty assess their own courses, so too do departments need to annually reflect upon their programs. Every year representatives from every academic program need to upload ongoing assessment findings from their department or unit into WEAVE, and also document what they are doing next in response to that assessment. In this manner a program is continually reflecting upon its strengths as well as areas it has identified for development and review.

III. Licensure and Examination Passing Rates

For some professions, a major form of assessment takes place in licensure and exam passing rates. A law school, for example, is evaluated to a large extent on how well its students perform on the bar exam. All departments that seek to prepare students for certain licensing or certification exams (and, thus, base their own effectiveness in part on how well their students do on such exams) need to continually compare their students’ success with benchmark institutions, establish targets, and develop action plans to increase or sustain performance. Summaries of this work are archived annually in WEAVE.

IV. Job Placement and Further Education

Are our students getting jobs related to their areas of study? Are they getting into graduate programs? Are they winning awards, or demonstrating other evidence of success? Departments are often the first to learn of their students’ ability to land jobs or get into graduate programs. Departments need to keep records of where their majors, minors, and graduate students are establishing their careers or gaining admittance into other graduate programs. Such information should be regularly posted on the department’s website.

Our goals:

  • Understand some of the various ways assessment happens in response to the local needs of departments and programs.
  •  Appreciate the need to continually document assessment findings, as well as resultant action plans.

Chairs should not be the only people responsible for leading departmental assessment; all department faculty need to be regularly involved in conversations about assessment. These conversations can take place during regular department meetings, annual or semi-annual department retreats, and additional workshops throughout the year. To this end, we urge every department to assign one or more faculty members to serve as Department Assessment Coordinators in order to address the following:

  • Work with the chair and department to schedule and conduct meetings where departmental assessment activities are generated, implemented, and reviewed
  • Work with the chair and Institutional Research to update information in WEAVE at the end of every semester, keeping the department abreast of ongoing assessment projects
  • Update departmental assessment initiatives on a Departmental Assessment webpage every semester
  • Work with the department to ensure that webpages are up to date (including faculty bios) and that necessary new content is being created; submit changes to the department secretary (who will have already been trained in publishing to the website)
  • Annually facilitate departmental review of the language in the bulletin (hard copy and online) pertaining to the department and its courses, consulting department colleagues to ensure accuracy and working with the department secretary to post changes to the website if and when necessary.

Likewise, the administration values honest feedback in departmental reporting. We recognize that the purpose of assessment is not for departments and programs to sing their own praises, but rather to accurately and candidly reflect upon where things are working and where they can be improved. It is better for a program to report that its learning outcomes fall below its targets and that it is pursuing an action plan aimed at closing that gap than for a program to report that all is well with no subsequent action plans in effect. Also, the administration recognizes that the hard work faculty put into assessment cannot go ignored. Annual assessment reports need to be responded to and acted upon, not left forgotten in an online database.

Our goals:

  • Impress upon faculty the need to regularly document their department’s various assessment initiatives online for everyone to see—colleagues, students, accreditation agencies, and other stakeholders.
  • Involve faculty directly in the ongoing maintenance and design of their department’s web pages—which are nothing less than the most public face of their department.
  • Make assessment engaging and inviting—to help faculty realize that many conversations about assessment are inherently conversations about values, knowledge, critical thinking, and many of the very attributes that led faculty into the profession of teaching in the first place.
  • Ensure that the administration values honest reporting, and supports departments in following through on action plans when possible.

In 2006 St. John’s acquired WEAVE, a web-based assessment and planning management system designed to help faculty and administrators write goals, objectives, and criteria for tracking, assessing, and developing action plans. This tool currently serves as our University’s primary repository for student learning assessment—a living database where all departments and offices regularly record and monitor their various assessment activities.

The WEAVE management system has five main sections: 1) Assessment, 2) Action Plans, 3) Achievement Summaries, 4) Annual Reporting, and 5) Document Repository. The first section, “Assessment,” is the most detailed. It requires all programs and units to a) enter and edit their mission/purpose; b) establish goals for achieving that mission; c) develop outcomes and objectives; d) identify measures; e) specify achievement targets; f) enter findings; and g) develop action plans in response to those findings.
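For departments that prefer to draft their material before typing it into the online system, the sketch below is a minimal, purely illustrative way of organizing one assessment entry around the fields listed above. The sample mission, goal, outcome, measure, target, findings, and action plan are hypothetical, and the structure is simply a local worksheet; it does not reproduce WEAVE’s actual data format or interface.

    # Illustrative sketch only: a local worksheet mirroring the fields listed above.
    # The sample entries are hypothetical; WEAVE's actual interface and format may differ.
    assessment_entry = {
        "mission": "Prepare majors to communicate disciplinary knowledge effectively.",
        "goals": ["Students write effectively within the discipline."],
        "outcomes": ["Seniors produce a research paper with a clear, well-supported claim."],
        "measures": ["Capstone papers scored with the department writing rubric."],
        "targets": ["At least 80% of capstone papers score 3 or higher on a 4-point rubric."],
        "findings": ["72% of capstone papers scored 3 or higher this spring."],
        "action_plans": ["Add a required revision workshop to the junior seminar."],
    }

    # A quick check a coordinator might run before reporting: is every field filled in?
    missing = [field for field, value in assessment_entry.items() if not value]
    print("Ready to enter into WEAVE." if not missing else f"Still needed: {missing}")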

All departments will need to submit summaries of their ongoing assessment activities into WEAVE every year, and ideally at the end of every semester. The importance of submitting accurate, detailed, and honest reports to WEAVE cannot be overstated. When outside accreditation agencies explore our commitment to assessment, one of their primary concerns is to see that assessment is taking place, and is ongoing. The Associate Director of University Assessment, housed in Institutional Research, is available to assist chairs and their Department Assessment Coordinators in uploading and updating their assessment summaries, plans, and reports.

Our goal: For faculty to regularly discuss within their departments their ongoing assessment initiatives, and document those activities (and their results, and their follow-up plans) on WEAVE.

For more information on WEAVE, please contact the Office of Institutional Research at (718) 990-1869.

Beginning in September 2011, St. John’s University partnered with Digication, a company that provides electronic portfolios for students. 

An ePortfolio is a collection of academic material and accomplishments that a student gathers and places online. ePortfolios have become increasingly common at all levels of education, especially in high schools and colleges. Students use ePortfolios to showcase their intellectual achievements, as well as present themselves academically and professionally to outside audiences. In many areas the ePortfolio is fast replacing the conventional resume as the means by which students apply for jobs or graduate programs.

Faculty and administrators use ePortfolios to better understand and evaluate the work their students are doing. For example, ePortfolios offer a means by which faculty can engage in both classroom assessment and program assessment.

Just as WEAVE offers an essential means by which an institution continually documents its claims for assessment—a living archive and database of mission statements, goals, findings, and action plans—ePortfolios provide the evidence that supports the findings presented in WEAVE. Together, WEAVE and ePortfolios will be the mechanisms that St. John’s will use to further document our ongoing assessment activities.

Our Goal: For faculty to seriously imagine ways that their students can demonstrate expertise in their courses on ePortfolios—via written statements, papers, research, video, photography, audio, reflective journals, etc.

Supporting information on ePortfolios can be found here.

It’s crucial that our assessment measures are specific to departments and disciplines, and are owned by faculty and students. One of the most important first steps any department can take in order to approach assessment logically and efficiently is to spend time taking an inventory of the shared values of that department. This can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.

Such a preliminary inventory is important. Many faculty within a department, when speaking casually to one another at meetings or during hallway conversations, might initially seem to uphold the same shared values when it comes to their expectations for student work. Take student writing, for example—a perpetual hot topic among faculty in nearly every discipline. When faculty discuss student writing, their shared values are not infrequently expressed in the negative as faculty complain and sigh over what they have identified as their students’ shortcomings. A common complaint many faculty have heard if not uttered: “My students just can’t write,” a sentiment that might well elicit mutual nods of affirmation from one’s other colleagues.

But while this complaint might have value in that it points to a generalized frustration with the student writing they see in their courses, the statement remains too vague to be of any pedagogical value. Obviously, students can write—for if they were truly functionally illiterate, numerous red flags would have been raised and they would presumably never have been admitted to college. A statement like “my students just can’t write” is really shorthand for a faculty member’s frustration with her students’ inability to meet certain literacy demands specific to her course and her discipline. If those faculty having that hallway conversation about lousy student writing were to sit down together over a series of meetings and carefully articulate what exactly they mean by that statement, they would begin to identify concerns that they share, as well as areas where their own personal values are not in sync. Faculty #1 might be primarily concerned with preventing students from adopting a first-person voice in their essays; Faculty #2 might be more concerned with eliminating a finite list of common errors; Faculty #3’s main concern might be with getting students to revise their work before submitting it. Faculty #4 might value all of these things, but be even more concerned with students’ ability to analyze assigned texts in their writing. And Faculty #5 might be mostly concerned with her students’ ability to marshal evidence and propose an argument.

While all of these faculty express shared frustration over “student writing,” their specific concerns are distinct. Until a department begins to untangle these many values, and work towards identifying (and perhaps prioritizing or ranking) those values, that department will not be able to effectively assess the writing of its students—for the department has not yet done the necessary work to identify what, exactly, it means when it refers to “good” and “bad” writing. Any resultant assessment methods might well be problematic if not invalid as a result.

And so, departments need to take the time to fully explore, and argue about, and debate, and ultimately define and articulate their shared values. Not only is this an ideal way of initiating serious assessment within a department, but it can be a good way for a department to create, perhaps for the very first time, a detailed portrait of what it stands for. Such an activity, taken seriously, is nothing less than a rigorous exercise in departmental self-identity.

There are scores of books, articles, and websites that offer examples of case studies which use easy, sound assessment methods. Three popular resources that we have found useful are Linda Suskie’s Assessing Student Learning: A Common Sense Guide (Jossey-Bass 2009); Thomas A. Angelo and K. Patricia Cross’s Classroom Assessment Techniques: A Handbook for College Teachers (Jossey-Bass 1993); and Barbara E. Walvoord’s Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed. (Jossey-Bass 2010). The menu below (by no means comprehensive) references passages in these books where faculty can find additional information.

Listening to Students

Anonymous Feedback

Collecting short, anonymous feedback at different points in the semester can be a great means of finding out from students not only what they think of a course, but where they would prefer greater clarity, more time, or a chance to review certain information. Faculty can ask students to submit a page of typed, anonymous feedback at mid-semester, and then if necessary modify their teaching in response for the second half of the semester. Then, faculty could ask students to submit another one-page feedback at the end of the semester, reflecting upon whether or not they felt their concerns had been addressed in the second half of the semester. Even if a faculty member is unable to fully address every concern that students have, the simple act of asking for students’ opinions, and making an honest effort to address some of them, speaks volumes. Students appreciate such efforts.

One-minute Paper

Faculty can ask students to take a minute at the beginning or at the end of class to offer feedback, summarize an issue under discussion in the course, or answer a problem. Faculty can then quickly skim through these responses in order to identify any outstanding questions, issues, or problems that students identify.

Conferences

If the size of one’s class permits, faculty can conduct one-on-one or small group conferences with students, asking them questions related to the course or soliciting their feedback. Many faculty find that students who are silent observers in class tend to open up more freely in such encounters outside of the classroom.

Online Feedback

Using an online discussion board, through Blackboard or any other course management system, is an excellent way to gather student comments and to assess their ongoing understanding of material. Many students who would not speak up in class, nor feel comfortable in one-on-one conferences, will freely write feedback in an online format.

See Angelo and Cross’s Classroom Assessment Techniques for more information on such related assessment techniques as focused listing, memory matrices, analytic memos, one-sentence summaries, concept maps, directed paraphrasing, student-generated test questions, diagnostic learning logs, electronic mail feedback, assignment assessments, etc.

Focus Groups

An ideal way for departments to gather information from their majors and minors is to hold focus groups, where a manageable group of students is brought together to answer a series of questions that the department has identified. In focus groups, it is important that the questions asked in each group remain the same, and that the people asking the questions remain neutral and, as much as possible, do not contribute their own opinions to the conversation. A good incentive to bring students together for focus groups is to offer food (free pizza is an important assessment tool that all departments should have in their arsenals). It can help to have two people conduct these focus groups: one to ask questions, gather input from all students assembled, and ask follow-up questions in order to further clarify students’ input; and a second person to record those responses. After a series of focus groups, faculty can report their findings back to their departments and discuss what they should change or maintain as a result of that student feedback.

Focus groups have the added benefit of sending a message to students that a department values their input, and regularly considers their opinions as it conducts its business of designing curricula and evaluating the program.

For more on focus groups, see Suskie, pp. 195-196.

Pre-Post

These “before and after” measures show how students have and haven’t progressed from one period to another. A professor might collect writing samples from her students early in a semester, then compare one or two variables later in the semester to see if they have changed. For example, how are students using evidence to back up their written claims? As with much assessment, the key here is to stay focused on the variable one has chosen to assess: if one is looking at how students are using evidence, one does not (for the purposes of this assessment) focus on other issues as well, such as, say, the presence of lively description. The more that faculty keep their assessment question simple and direct, the greater their success in following through on the project.

A department can also conduct pre-post assessment across courses. A variable in the writing of students in a sophomore course for majors can be compared to that of students in a senior capstone course in order to obtain evidence on the degree to which students are demonstrating progress. If a department discovers that its senior students are not responding to that variable as successfully as it would like, this can lead to changes in how those sophomore and senior courses are taught. (Such changes in teaching are the all-important “feedback loop” that needs to happen if assessment is to be successful.) Here it is important to note that it is not a black mark on the department to identify that its students are not performing up to certain standards in their senior year; in fact, the department’s diagnosis of this problem via assessment, and its resultant efforts at addressing the problem, would be examples of an engaged faculty conducting good assessment.
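As a purely illustrative sketch of the comparison described above (the course labels, rubric scale, scores, and target are hypothetical, not a prescribed format), a department might tabulate rubric scores for a single variable such as “use of evidence” and compare the two groups:

    # Illustrative sketch only: scores, scale, course labels, and target are hypothetical.
    # Each list holds rubric scores (1-4) for one variable, "use of evidence",
    # gathered from writing samples in two courses.
    from statistics import mean

    sophomore_scores = [2, 2, 3, 1, 2, 3, 2]   # e.g., a sophomore course for majors
    capstone_scores  = [3, 4, 3, 2, 4, 3, 3]   # e.g., a senior capstone course

    def summarize(label, scores):
        """Print the average rubric score and how many samples met a target of 3 or higher."""
        met_target = sum(1 for s in scores if s >= 3)
        print(f"{label}: mean {mean(scores):.2f}, "
              f"{met_target}/{len(scores)} samples at or above target (3)")

    summarize("Sophomore", sophomore_scores)
    summarize("Capstone ", capstone_scores)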

For more on getting students to reflect at the beginning and end of courses, see Suskie 190-191. Pre-Post or Before-and-After approaches to assessment are implicit in many of the methods described on this page.

Rating Scales

A popular means of gathering information from students on a range of issues is via rating scales. The First-Year Writing program and the Writing Center have worked together to give all first-year writing students a Likert Rating Scale at the beginning and again at the end of the semester during which they take ENG 1000c. The purpose of this scale is not to measure student writing ability, but rather their engagement and self-assessment as writers.
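The following is a minimal, hypothetical illustration of how responses to a single Likert item might be tallied at the start and end of the semester; the item wording, scale labels, and responses are invented for the example and do not reproduce the actual ENG 1000C instrument.

    # Hypothetical illustration: tally responses to one Likert item
    # ("I see myself as a writer") at the start and end of the semester.
    from collections import Counter

    scale = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
    start_responses = ["Disagree", "Neutral", "Neutral", "Agree", "Disagree", "Neutral"]
    end_responses   = ["Neutral", "Agree", "Agree", "Strongly agree", "Neutral", "Agree"]

    def distribution(responses):
        """Return counts for each point on the scale, in scale order."""
        counts = Counter(responses)
        return {option: counts.get(option, 0) for option in scale}

    print("Start of semester:", distribution(start_responses))
    print("End of semester:  ", distribution(end_responses))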

For more on rating scales, see Suskie, pp. 195-198.

Rubrics

Many faculty use rubrics as a means of clarifying their values, and defining the characteristics of what they might describe as successful, moderate, and less successful performance. Many value rubrics because they make evaluation less mysterious and seemingly arbitrary.
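As an illustrative sketch only (the criteria, level descriptions, and point values below are hypothetical, not an official St. John’s rubric), a simple analytic rubric can be thought of as a grid of criteria and performance levels, which also makes scoring straightforward to record:

    # Hypothetical example rubric: criteria, level descriptions, and points are illustrative.
    rubric = {
        "Thesis / claim": {
            3: "Clear, arguable claim sustained throughout",
            2: "Claim present but inconsistently developed",
            1: "Claim missing or unclear",
        },
        "Use of evidence": {
            3: "Relevant evidence, accurately cited, clearly tied to the claim",
            2: "Some evidence, loosely connected to the claim",
            1: "Little or no evidence",
        },
    }

    def score_essay(ratings):
        """Sum the level chosen for each criterion; ratings maps criterion -> level."""
        return sum(ratings[criterion] for criterion in rubric)

    # One reader's ratings for a sample essay (hypothetical):
    print(score_essay({"Thesis / claim": 2, "Use of evidence": 3}))  # 5 of a possible 6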

See Suskie, pp. 137-153, for more on scoring guides and rubrics. See Barbara Walvoord’s appendix on “Sample Rubrics” in Assessment Clear and Simple (107-114). Also, one can find rubrics designed for our University core courses here.

Some faculty and departments, however, prefer methods other than rubrics. Bob Broad’s concept of Dynamic Criteria Mapping, for example, originated in part from a critique of rubrics that found them too narrow and not descriptive enough to respond to the rich and complex range of variables that can exist in student work. As a result, some faculty and programs come up with their own “homegrown” maps or other ways of capturing and defining the characteristics of student work they wish to promote.

For faculty interested in examples of these alternatives to rubrics, we recommend Bob Broad (ed.), Organic Writing Assessment: Dynamic Criteria Mapping in Action (Utah State University Press, 2009).

Surveys

Similar in spirit to focus groups, surveys allow faculty to solicit feedback from students in response to a variety of questions drawn up by a faculty member or department. Surveys can be distributed online, or passed out and collected (anonymously) in class. One advantage of surveys over focus groups is that a department can get input from a greater number of students. One possible drawback however (especially with online surveys) is that students sometimes suffer from “survey fatigue,” as they are continually asked to fill out surveys and evaluation forms for so many other programs and events.

There is an art to writing good survey questions. For more information on surveys, see Suskie, pp. 198-200.

Values Inventory

One of the most important early steps any department can take to approach assessment logically and efficiently is to spend time identifying the shared values within that department. The goal is to develop an inventory of a department’s shared values that will then serve as the foundation for the learning goals to be established for a course or program. Many departments would do well to consider this approach before embarking on other, more detailed assessment projects. This can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.

Individually, faculty can spend time writing down all of the learning outcomes they want their students to achieve—either at the end of a particular course, or at the end of their program. Then faculty would come together to share what each had written down privately in order to see where their opinions are in sync. They might rank their desired outcomes, then see where different members of a department prioritize certain skills or qualities. Such a departmental values inventory quickly becomes an exercise in vocabulary analysis: for example, if one faculty member says she wants her students to write “strong claims,” whereas another privileges “rhetorical risk-taking,” faculty would then spend time trying to articulate and carefully define what, exactly, these terms mean to them. Such clarification of terms—and the inevitable intradepartmental faculty debates that result from such conversations—is an essential component of good assessment. Until faculty can carefully and accurately define their own vocabulary of learning-outcome terms, any assessment of students’ ability to meet those outcomes could lack validity. Much of assessment is a matter of moving beyond the general to the specific: fine-tuning, clarifying, and articulating with precision what, exactly, faculty are looking for in student performance.

See “Developing Learning Goals” in Suskie (115-133) and Walvoord (81-85).

Portfolios

Portfolios offer a rich means of collecting artifacts students have created in their courses, as well as their own self-assessment. Portfolios can include both early and final drafts, and all manner of artifacts. Beginning in Fall 2011, the University will implement electronic portfolios for certain students. For more on our electronic portfolios, go here.

For more on portfolios see Suskie (202-213) and Walvoord (50-54).

IRB

Many assessment methods do not require permission from an Institutional Review Board. Federal policy exempts the following:

(1) Research conducted in established or commonly accepted educational settings, involving normal educational practices, such as (i) research on regular and special education instructional strategies, or (ii) research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.

(2) Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior.

Faculty and departments must be very careful, however, to ensure that individual students cannot be identified, or harmed by the disclosure of their opinions or feedback, outside of the assessment research. For more, see U.S. Department of Health and Human Services, Title 45, Part 46, “Protection of Human Subjects,” Section 101, “To What Does This Policy Apply?”, found here. Finally, consult our University’s IRB here for further information.

WEAVE

All of the above assessment methods are legitimate approaches, and can be summarized and entered into WEAVE online as examples of a department’s ongoing efforts to assess its courses and programs.

Assessment Tools

These resources are intended to supplement the overviews found in the “Unpacking Assessment” pages. Faculty and chairs will find information here on ways of constructing rubrics, links to relevant organizations and archives, and bibliographies. As with the “Unpacking Assessment” pages, we invite others to submit additional information or suggestions for these pages. (For example, while rubrics are commonplace in assessment, there are also recent theories of assessment that argue for alternatives to rubrics. We want these pages to reflect and point to the multifaceted and diverse world of assessment theories and approaches that exist, in hopes of enabling faculty to determine which assessment models and approaches best fit their needs.)

Demonstrate the ability to think critically
Identify issue(s) clearly and effectively
Identify the key components and perspective(s) of subject matter, both stated and unstated, and their relations to each other and to the issue
Integrate multiple perspectives and information to produce a new or original whole
Assess the value and/or usefulness of information or ideas
Use information and relevant knowledge to construct a valid argument or solution
 
Demonstrate proficiency in information literacy
Design a research objective appropriate to assignment
Locate needed information from a variety of sources
Critically evaluate information and sources
Situate primary sources in their historical context and articulate their continued relevance
Integrate information to accomplish the planned objective
Use information ethically and legally
 
Demonstrate the ability to write skillfully
Address assignment appropriately
Present well-defined claims
Compose well-organized work with logical flow
Use multiple, reliable sources, with researched support correctly cited, to support claim
Use effective word choice, sentence variety and standard written English competently
Reflect upon, evaluate, and revise one’s written work
 
Demonstrate skill in oral presentation
Use evidence and research appropriate to topic and organize the materials for presentation
Engage the audience, using language appropriate to audience and assignment
Use appropriate and effective supplementary materials
Utilize appropriate volume, rate, and length relevant to topic
 
Demonstrate the ability to use quantitative reasoning in a variety of contexts
Differentiate among interpretations of quantitative information, including causality and correlation
Interpret quantitative measures, including statistical significance and descriptive statistics (mean, median, mode)
Utilize quantitative measures (electronic, graphical, tabular or numerical) to make informed decisions in a variety of contexts

These curriculum maps of the core courses indicate the competencies that have been approved by the University Core Curriculum Committee as “essential” or “suggested” for the common core and distributed core courses in the degrees offered by your College or School. All core curriculum course syllabi should include the designated competencies as learning outcomes.

College of Professional Studies
BA, BS: Common Core Competencies (PDF)
BA, BS: Distributed Core Competencies (PDF)

The School of Education
BA, BS: Common Core Competencies (PDF)
BA, BS: Distributed Core Competencies (PDF)

Pharmacy and Health Sciences
Pharm D: Common Core Competencies (PDF)
Pharm D: Distributed Core Competencies (PDF)
BS: Physician Assistant Common Core Competencies (PDF)
BS: Physician Assistant Distributed Core Competencies (PDF)
BS: MedTech, Toxicology, Pathology Assistant Common Core Competencies (PDF)
BS: Pathology Assistant Distributed Core Competencies (PDF)

St. John's College of Liberal Arts and Sciences
BA, BFA: Common Core Competencies (PDF)
BS: Common Core Competencies (PDF)
BA: Distributed Core Competencies (PDF)
BFA: Distributed Core Competencies (PDF)
BS: Distributed Core Competencies (PDF)

Peter J. Tobin College of Business
BS: Common Core Competencies (PDF)
BS: Distributed Core Competencies (PDF)

The Catholic and Vincentian mission of St. John’s University
The Catholic identity of St. John’s University and its dialogue with other religious traditions
The Vincentian value of respect for the dignity of all human persons
The responsibility to address poverty and structures of injustice through research, education, service and advocacy
The commitment, through research, education, service and advocacy, to foster peace and justice in metropolitan communities and global society
 
Philosophical traditions and concepts
Conceptions of human nature and of ultimate reality
Conceptions of the ultimate foundations and principles of morality and virtue and their pragmatic implications
Philosophical and/or religious implications of modern science
 
Christian traditions and contemporary issues
Historical development of Christianity as reflected in biblical, doctrinal and theological sources
Relationships between the church and the contemporary world
Perennial questions and issues in Christianity
 
Processes of scientific inquiry
Scientific methods of thinking and their limits
Historical development of scientific concepts
Scientific thinking in relationship to societal issues
 
Social and psychological dimensions of human behavior
Individual behaviors in social, economic, geographic, psychological and/or political contexts
Interactions of individuals and groups and their effects on society
 
Emergence of global society
Chronology of key events in the emergence of global society
Factors shaping cross-cultural relationships between Western and non-Western societies
 
Cultural, literary and aesthetic components of global traditions
Literary and aesthetic perspectives across cultures
Literary works within cultural contexts

Interactions between languages and contemporary cultures

 
Diversity and richness of New York City
Cultural and educational resources of New York City
New York as a dynamic, global city
Basic history of New York City

Below are curriculum maps of the core courses. These indicate the knowledge bases that have been approved by the University Core Curriculum Committee as “essential” or “suggested” for the common core and distributed core courses. All core curriculum course syllabi should include the designated knowledge bases as learning outcomes.

Common Core Knowledge Bases by Course

DNY 1000C (PDF)
English 1000C & 1100C (PDF)
History 1000C (PDF)
Philosophy 1000C & 3000C (PDF)
Science (BS, Pharm D) (PDF)
Scientific Inquiry 1000C (PDF)
Speech 1000C (PDF)
Theology 1000C (PDF)

Common Core Knowledge Bases:  Pharmacy & Health Sciences Equivalents

History (Physician Assistant and PharmD) (PDF)
Speech (Toxicology, Physician Assistant and PharmD) (PDF)

Distributed Core Knowledge Bases by Course

Creativity & Arts (PDF)
Language (SJC, SOE, TCB, CPS) (PDF)
Language and Culture (PDF)
Mathematics (PDF)
Philosophy (PDF)
Social Science (PDF)
Social Science (SOE) (PDF)
Social Science (TCB) (PDF)
Theology (PDF)

Distributed Core Knowledge Bases:  Pharmacy & Health Sciences Equivalents

Creativity & Arts, Language & Culture, Math, Social Science (Phys Asst) (PDF)
Creativity & Arts, Language & Culture, Math, Social Science (Pharm D) (PDF)

Additional Distributed Core Course Requirements by College/School

SJC Electives (PDF)
SOE Science (PDF)
TCB English (PDF)

Faculty who have used rubrics have found that a description of what constitutes a core competency is an exceptional aid to both the faculty member and the student for improvement of learning.

  • Rubrics articulate in writing the various criteria and standards used to evaluate student work.
  • Given to the student before an assignment, a rubric clarifies expectations.
  • Used to evaluate that assignment and/or other assignments, a rubric improves the clarity of feedback and the specification of areas needing improvement, both for the student and the faculty member.

Each of the competency rubrics below is “read-only” and may be downloaded for use in classes.

  • Critical Thinking Rubric (newly revised: 2/08) (PDF)
  • Writing Rubric (newly revised: 2/08) (PDF)
  • Oral Presentation Rubric (newly revised: 2/08) (PDF)
  • Quantitative Reasoning Rubric (PDF)
  • Information Literacy Rubric (newly revised: 2/08) (PDF)

The Association for the Assessment of Learning in Higher Education offers numerous rubrics from institutions here.

Course Goals: the general aims or purposes of the course. Effective goals are broadly stated, meaningful, achievable and assessable. Goals should provide a framework for determining the related learning outcomes of a course.

Example:
Students will (or will begin to, depending on level):

I. Understand and apply fundamental concepts of the discipline (which are part of the course).
II. Communicate effectively, both orally and in writing.
III. Conduct sound research, demonstrating proficiency in information literacy.
IV. Address issues critically and reflectively.
V. Create solutions to problems.
VI. Work effectively in a team.
VII. Gain realistic ideas of how to implement their knowledge, skills, and values in occupational pursuits in a variety of settings.

Learning Outcomes: the knowledge, skills, abilities, capacities, attitudes, or dispositions you expect students to acquire in your course. Learning outcomes articulate suggested strategies for how the goals can be demonstrated. Clearly state each outcome you are seeking: What will the student be able to do?

Example:  
Goal IV. Address issues critically and reflectively.

Possible Learning Outcomes

    1.    Apply concepts and/or viewpoints to a new question or issue.
    2.    Describe ______________.
    3.    Classify________________.
    4.    Distinguish_____________.
    5.    Give examples of _______.
    6.    Interpret_______________.
    7.    Etc.

Goal VI.  Work effectively in a team.

Learning Outcomes

    1.    Demonstrate the ability to contribute, listen and cooperate with teammates
    2.    Define research needed for topic, aid in collection and share information
    3.    Identify and fulfill team role duties on time
    4.    Demonstrate that assigned work was shared equally

Bloom’s Taxonomy of Educational Objectives (below) identifies verbs that faculty might use in writing learning goals:

Knowledge: define, describe, recall, state, list, summarize, identify, point to, match

Comprehension: annotate, explain, give examples, predict, infer, interpret, generalize, calculate, convert

Application: apply, demonstrate, illustrate, solve, manipulate, interview, construct, draw, perform

Analysis: subdivide, compare, contrast, identify, infer, distinguish, diagram, illustrate, categorize

Synthesis: write, create, compose, formulate, outline, plan, conceive, hypothesize, predict

Evaluation: evaluate, assess, critique, prioritize, defend, judge, recommend, select

Select methods or instruments for gathering evidence to show whether students have achieved the expected learning outcomes related to program or course goals. Methods of assessment will vary depending on the learning outcome(s) to be measured.

Following is a partial list of examples:

Direct Measures
(Students demonstrate an expected learning outcome)

Scoring Rubrics: can be used to holistically score any product or performance such as essays, portfolios, recitals, oral exams, research reports, etc.  A detailed scoring rubric that delineates criteria used to discriminate among levels is developed and used for scoring.

Capstone Courses: could be a senior seminar or designated assessment course. Program learning outcomes can be integrated into assignments. Performance expectations should be made explicit prior to obtaining results.

Case Studies: involve a systematic inquiry into a specific phenomenon, e.g. individual, event, program, or process.  Data are collected via multiple methods often utilizing both qualitative and quantitative approaches.

Embedded Questions to Assignments: Questions related to program learning outcomes are embedded within course exams. For example, all sections of “research methods” could include a question or set of questions relating to your program learning outcomes. Faculty score and grade the exams as usual and then copy the exam questions and scores that are linked to the program learning outcomes for analysis. The findings are reported in the aggregate (a brief illustrative sketch of this kind of aggregation appears after the list of direct measures below).

Standardized Achievement Tests: Select standardized tests that are aligned to your specific program learning outcomes.  Score, compile, and analyze data.  Develop local norms to track achievement across time and use national norms to see how your students compare to those on other campuses.

Locally developed exams with objective questions: Faculty create an objective exam that is aligned with program learning outcomes.  Performance expectations should be made explicit prior to obtaining results.

Locally developed essay questions: Faculty develop essay questions that align with program learning outcomes. Performance expectations should be made explicit prior to obtaining results.

Reflective Essays: generally are brief (five to ten minutes) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue. Content analysis is used to analyze results. Performance expectations should be made explicit prior to obtaining results.

Collective Portfolios: Faculty assemble samples of student work from various classes and use the “collective” to assess specific program learning outcomes.  Portfolios can be assessed by using scoring rubrics; expectations should be clarified before portfolios are examined.

Observations: can be of any social phenomenon, such as student presentations, students working in the library, or interactions at student help desks.  Observations can be recorded as a narrative or in a highly structured format, such as a checklist, and they should be focused on specific program objectives.
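Returning to the “Embedded Questions to Assignments” entry above, the sketch below is a purely hypothetical illustration of reporting embedded-question results in the aggregate; the section names, outcome labels, and scores are invented for the example.

    # Hypothetical illustration: aggregate embedded-question scores by program learning outcome.
    # Section names, outcome labels, and scores are invented for the example.
    from collections import defaultdict

    # (section, learning outcome, score on the embedded question out of 10)
    records = [
        ("Research Methods 01", "LO2: interpret data", 8),
        ("Research Methods 01", "LO2: interpret data", 6),
        ("Research Methods 02", "LO2: interpret data", 9),
        ("Research Methods 02", "LO3: ethical reasoning", 7),
        ("Research Methods 01", "LO3: ethical reasoning", 5),
    ]

    totals = defaultdict(list)
    for _section, outcome, score in records:
        totals[outcome].append(score)

    # Report in the aggregate, across all sections, rather than by individual student.
    for outcome, scores in totals.items():
        print(f"{outcome}: average {sum(scores) / len(scores):.1f} across {len(scores)} responses")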

Indirect Measures of Student Learning
(Students or others report their perception of how well a given learning outcome has been achieved)

Standardized Self-Report Surveys: Select standardized self-report surveys that are aligned to your specific program learning outcomes. Score, compile, and analyze the data. Develop local norms to track results across time and use national norms to see how your students compare to those on other campuses.

Focus Groups: are a series of carefully planned discussions among homogeneous groups of 6-10 respondents who are asked a carefully constructed series of open-ended questions about their beliefs, attitudes, and experiences. The session is typically recorded, and the recording is later transcribed for analysis. The data are studied for major issues and recurring themes, along with representative comments.

Exit Interviews: Students leaving the University, generally graduating students, are interviewed or surveyed to obtain feedback. The data obtained can address strengths and weaknesses of an institution or program and/or assess relevant concepts, theories, or skills.

Interviews: are conversations or direct questioning with an individual or group of people.  The interviews can be conducted in person or on the telephone.  The length of an interview can vary from 20 minutes to over an hour.  Interviewers should be trained to follow agreed-upon procedures (protocols).

Surveys: are commonly used with open-ended and closed-ended questions. Closed-ended questions require respondents to answer the question from a provided list of responses. Typically, the list is a progressive scale, ranging from low to high or strongly agree to strongly disagree.

Classroom Assessment: is often designed for individual faculty who wish to improve their teaching of a specific course.  Data collected can be analyzed to assess student learning outcomes for a program.


Adapted from work by Allen, Mary; Noel, Richard C.; Rienzi, Beth M.; and McMillin, Daniel J. (2002). Outcomes Assessment Handbook. California State University, Institute for Teaching and Learning, Long Beach, CA; and from the APA Task Force on Undergraduate Psychology Major Competencies.

Below are some resources that might be helpful in thinking about assessment and in planning assessment at a variety of levels.

General

  • For a start, here is a list of Nine Principles of Good Practice for Assessing Student Learning (PDF).
  • Online Resources for Higher Education Assessment 
  • The Association of American Colleges and Universities has a good website with links to a host of solid assessment resources.
  • Jon Mueller has a great site called the Authentic Assessment Toolbox with information on a host of assessment types.
  • Noel Entwistle has a good article on the relationship between assessment and deep versus superficial learning.

Assessment of Majors