The University Assessment Committee (UAC), comprised of faculty and staff representing each school/college within the university, provides advice, recommendations, and strategies to university administration and academic units regarding all activities associated with the assessment of student learning. The UAC is charged by the Provost to re-examine extant assessment practices, recommend new and different strategies where change may be warranted, and provide counsel aimed at improving and enhancing the effectiveness of all student assessment practices in undergraduate, graduate, professional and online education at St. John’s University.
University Assessment Committee Membership, Fall 2018
Dr. Elizabeth Ciabocchi (Co-Chair)
College of Pharmacy and Health Sciences
Dr. Marc Gillespie (Co-Chair)
Dr. Nicole M. Maisch
College of Professional Studies
Prof. James Croft
Dr. Glenn Gerstner
Dean Larry Cunningham
Ms. Christine Goodwin
The School of Education
Dr. Aliya Holmes
Dr. Edwin Tjoe
St. John’s College of Liberal Arts and Sciences
Dr. Bryan Hall
Dr. Laura Schramm
The Peter J. Tobin College of Business
Dr. Sylwia Gornik-Tomaszewski
Dr. William Reisel
University Core Curriculum Council
Dr. Olga Hilas
Dr. Joseph Serafin
Prof. Cynthia Chambers
Prof. Benjamin Turner
The pages assembled here under the title of “Assessment Materials” fall into two categories. The first section is “Unpacking Assessment,” which is an extended overview for chairs and faculty. It is broken up into a series of sections in hopes of easing navigation through the text.
The second category is “Assessment Tools,” which includes various assessment resources, many of which have existed previously in different places on our website and are now housed here in one place. This includes information on rubrics, links to related organizations and archives, and bibliographies.
In 2005 St. John's University held a Presidential Summit, "How Do You Know if Your Students Are Learning?" The summit was led by Dr. Barbara E. Walvoord, a nationally recognized scholar on assessment.* At that time Walvoord discussed the degree to which assessment had become a national reform movement, fueled in part by calls for higher education to be more accountable for its learning standards, as well as increased scrutiny by college students and their families when selecting colleges. In the years since her visit to St. John's, the assessment movement has only increased.
Because of the growing importance of assessment in higher education, we find it necessary to provide chairs, faculty, and administrators with an introductory overview about this admittedly weighty and sometimes overwhelming term. We realize that the emphasis on assessment in education has surged in the last two decades, and that faculty and administrators who have not had the time to fully explore assessment theory or practice might benefit from some contextualization. As you go through these pages, please keep two things in mind:
We at St. John’s are committed to progressive assessment practices informed by the best practices in the field. In accordance with those best practices, we strongly believe that assessment needs to be local and “homegrown”: faculty and students, within their own disciplines, need to work together to continually imagine, develop, and act upon their assessment initiatives. Assessment, first and foremost, is about reflecting upon one’s learning. We also strongly believe that it is the administration’s job to ensure that assessment is ongoing, measurable, and informed by best practices—and that the hard work faculty put into their ongoing assessment is recognized and acted upon, not just archived. Most of all, we recognize that assessment has its origins with faculty and students, who know best their disciplinary and programmatic needs.
The Office of the Provost is committed to working with all faculty and departments to help them with their assessment initiatives. We also want to make sure that all of these assessment initiatives are showcased and highlighted every semester through WEAVE, electronic portfolios, and our University web pages. Most importantly, we are committed to working with departments to act upon their ongoing assessment activities—the reports that get submitted online and placed into WEAVE should not simply be warehoused, but explored in order to identify further plans for action.
These “Unpacking Assessment” pages are intended as a living document. While originally posted by the Office of the Provost, we invite faculty, administrators, and students to submit suggestions for revision as well as new information. Ultimately these pages should reflect the collaborative spirit of assessment. Please direct any suggestions, questions, or additional text to be considered to Elizabeth Ciabocchi, Ed.D.
* We highly recommend Walvoord's succinct and very accessible book, Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education (2nd ed., Jossey-Bass 2010).
A paradigm shift has occurred in higher education. A generation ago such terms as “assessment,” “rubric,” and “learning outcomes” were not in the working vocabularies of many faculty and departments. Today these terms are unavoidable and omnipresent, and they have profoundly changed the landscape of higher education.
Nor can academic institutions simply “push back” (as some colleagues have been heard to suggest) when their accreditation agencies demand greater evidence of assessment. There is no resisting the assessment wave.
But there is no reason we should want to push back on assessment. After all, faculty and academic institutions have always been assessing their students and themselves. What's important is that we keep abreast of best practices in assessment, and conduct ongoing assessment in ways that privilege faculty expertise while taking student feedback seriously. At its heart assessment is about learning: reflecting upon what one has learned and acting upon those reflections, continually.
The trick is to ensure that assessment is understood as something that develops locally and organically, originating in conversations between faculty and students within their own disciplines. Faculty know their disciplines better than anyone else. And because they are in such sustained contact with their students, faculty can learn a great deal about what students think of their courses, syllabi, and assignments. Together, faculty and students need to be learning from each other, all the time. All good academic assessment has faculty-student dialogue and mutual learning at its core.
Administrations also play a key role in assessment—not by telling faculty or departments how they must assess, but by creating a supportive, nurturing culture of assessment for faculty and students. Administrations help initiate, orchestrate, study (and, yes, assess) a University’s assessment practices. They help provide faculty with the necessary tools and direction for conducting an array of ongoing assessment activities and methods. In turn the administration is continually learning from faculty and students about the learning that takes place throughout the institution.
This is the three-way conversation that drives good assessment: students, faculty, and administration in continual dialogue, reflecting upon their ongoing learning.
What Is the Value of a St. John’s Education? is the question at the core of our University's recent “Repositioning Document.” This is the question that will drive much of our assessment initiatives for the foreseeable future. (It is, in fact, the kind of question all educational institutions are wrestling with, especially as the cost of tuition continues to rise and students are increasingly selective about where they attend college.) This question cannot be answered without in-depth, ongoing assessment.
Those who are new to the culture of assessment—and this includes students, faculty, and administrators alike—sometimes respond a bit skeptically to all this talk about assessment. We should be honest and up front about such initial skepticism. We need to explore this resistance and work through it so that any resultant assessment approaches are not perceived as mandates arbitrarily imposed upon faculty and students, but legitimate components of learning valued by all parties.
Students, for example, when asked to write self-reflection responses in which they evaluate their own work and progress (just one of many forms of assessment used in high school and college courses today), don’t always see the value in such an activity. “Why are you making us do all this extra work?” they might wish to ask their professor. “Why can’t you just give us our grade and be done with it?”
Faculty can also respond to this culture of assessment with a few raised eyebrows. This is understandable and logical. If faculty have been teaching for several decades quite comfortably without having to actively engage in assessment activities, and then are suddenly expected to articulate measurable learning objectives, it is only natural that they demand some evidence and justification for such a major shift. Faculty are independent thinkers who take their pedagogical and intellectual autonomy quite seriously, as well they should. So it’s only normal that this paradigm shift in assessment (and any pedagogical paradigm shift for that matter) is not immediately embraced by every faculty member. This is why it is the responsibility of both administration and faculty, working together, to explore best practices in assessment.
Just one example: sometimes faculty tend to respond to this new culture of assessment by asking, “What’s wrong with the way I—not to mention Universities for more than a century—have been assessing students? The University pays professors like me to teach them and judge their work and assess them with a grade. I’ve been doing that from day one, so why all this emphasis on measuring learning outcomes?” This is a fair response, and it is important that we address it head-on as we recognize why traditional grading practices are, by themselves, no longer enough. (Yes, grading is a method of assessment. But it is limited in what it tells us about student learning. Assessment involves a variety of means of collecting ideas and data—from students and one’s colleagues—that in turn help enhance faculty teaching methods, which in turn enhance student performance. The mere existence of grades doesn’t necessarily mean there is much significant learning taking place on either side.)
And administrators too—even when placed in the position of being advocates and promoters of assessment—are themselves sometimes overwhelmed (if not occasionally exhausted) by the new expectations that come with this culture of assessment. They too, like their faculty colleagues, sometimes look back on their jobs of several decades prior, fondly reminiscing about all of the extra time they must have had before the language of assessment entered their daily working vocabularies.
As we enter this new culture of assessment, it’s important to understand that as educators we have always been assessing. Reflecting upon the effectiveness of our teaching—our ability to meet our goals while maintaining a flexible and open-minded approach to our pedagogies—is integral to any good teaching and learning experience. Faculty develop courses, assign projects, and require tasks of their students. Then information is collected from students—not just test grades, but portfolios, focus groups and student conferences, writing samples, surveys, and other forms of feedback. This data helps us learn where our programs and courses are working and where things can be improved. Then faculty revisit assignments, curricula, and programs for further enhancement. This is the assessment feedback loop, and it is ongoing. Faculty never reach a point where there is nothing new to learn about their students’ performances.
Like all grand concepts, “assessment” is really an umbrella for a broad range of different methodologies and approaches. These include, but are not limited to, “classroom assessment,” “direct assessment,” “embedded assessment,” “formative assessment,” “indirect assessment,” “non-referenced assessment,” “qualitative assessment,” “summative assessment,” and others. For those interested in exploring the rich vocabulary of assessment methods, there are plenty of guides and websites. But to be good at assessment we need not be overly preoccupied with such detailed schema. For now, we might focus on four distinct realms or genres of assessment.
I. Classroom Assessment
This is when individual faculty reflect on what and how their students learn in specific courses. Classroom assessment is focused on course improvement and is less preoccupied with giving grades. Obviously this doesn’t mean that faculty concerned with classroom assessment don’t give grades; it just means that faculty value dialogue with their students, exploring with them ways of enhancing the course. This can include focus groups, one-on-one and small group conferences, surveys, collecting student evaluations, and assigning and reflecting upon student writing. It also means faculty being in dialogue with their peers, sharing teaching methods and best practices with colleagues in meetings and retreats while keeping abreast of pedagogically relevant literature in the field. Certainly many faculty already engage in such activities—which illustrates the degree to which assessment has always been a part of our professional work. What’s different now is the need for faculty to document such assessment and how it translates into enhanced teaching and learning. Here at St. John’s, this means uploading data into WEAVE and assessing students’ accomplishments via their electronic portfolios and other means (more on these below).
II. Program Assessment
Just as individual faculty assess their own courses, so too do departments need to annually reflect upon their programs. Every year representatives from every academic program need to upload ongoing assessment findings from their department or unit into WEAVE, and also document what they are doing next in response to that assessment. In this manner a program is continually reflecting upon its strengths as well as areas it has identified for development and review.
III. Licensure and Examination Passing Rates
For some professions, a major form of assessment takes place in licensure and exam passing rates. A law school, for example, is evaluated to a large extent on how well its students perform on the bar exam. All departments that seek to prepare students for licensing or certification exams—and thus base their own effectiveness in part on how well their students do on those exams—need to continually compare their students’ success with that of benchmark institutions, establish targets, and develop action plans to increase or sustain performance. Summaries of this work are archived annually in WEAVE.
IV. Job Placement and Further Education
Are our students getting jobs related to their areas of study? Are they getting into graduate programs? Are they winning awards, or demonstrating other evidence of success? Departments are often the first places that learn of their students’ ability to land jobs or get into graduate programs. Departments need to keep records of where their majors, minors, and graduate students are establishing their careers or gaining admittance into other graduate programs. Such information should be regularly posted within the Department’s website.
Chairs are not solely responsible for leading departmental assessment; all department faculty need to be regularly involved in conversations about assessment. These conversations can take place during regular department meetings, annual or semi-annual department retreats, and additional workshops throughout the year. To this end, we urge every department to assign one or more faculty members to serve as Department Assessment Coordinators in order to address the following:
Likewise, the administration values honest feedback in departmental reporting. We recognize that the purpose of assessment is not for departments and programs to sing their own praises, but rather to reflect accurately and candidly upon where things are working and where they can be improved. It is better for a program to report that its learning outcomes are below its targets and that it is pursuing an action plan aimed at meeting them, than to report that all is well and no action plans are in effect. The administration also recognizes that the hard work faculty put into assessment cannot be ignored: annual assessment reports need to be responded to and acted upon, not left forgotten in an online database.
In 2006 St. John’s acquired WEAVE, a web-based assessment and planning management system designed to help faculty and administrators write goals, objectives, and criteria for tracking, assessing, and developing action plans. This tool currently serves as our University’s primary repository for student learning assessment—a living database where all departments and offices regularly record and monitor their various assessment activities.
The WEAVE management system has five main sections: 1) Assessment, 2) Action Plans, 3) Achievement Summaries, 4) Annual Reporting, and 5) Document Repository. The first section, “Assessment,” is the most detailed. It requires all programs and units to a) enter and edit their mission/purpose; b) establish goals for achieving that mission; c) develop outcomes and objectives; d) identify measures; e) specify achievement targets; f) enter findings; and g) develop action plans in response to those findings.
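As a rough mental model, items (a) through (g) above can be thought of as one structured record per program. The following sketch is purely illustrative; it is not WEAVE's actual data format, and every field name and entry below is invented:

```python
# Purely illustrative sketch of the kind of record the "Assessment"
# section of WEAVE asks a program to maintain (items a-g in the text).
# This is NOT WEAVE's actual data format; all field names and content
# are invented for illustration only.
assessment_entry = {
    "mission": "Prepare majors to write, reason, and research in the discipline.",
    "goals": ["Graduates communicate effectively in professional genres."],
    "outcomes": ["Students support written claims with discipline-appropriate evidence."],
    "measures": ["Rubric-scored writing samples from the senior capstone."],
    "targets": ["80% of sampled papers score 3 or higher on the evidence criterion."],
    "findings": ["68% of sampled papers met the target this spring."],
    "action_plans": ["Add a scaffolded evidence-use assignment to the sophomore course."],
}

# The feedback loop in miniature: a finding below target should always
# be paired with an action plan, never left as a blank entry.
assert assessment_entry["action_plans"], "every finding needs a follow-up plan"
```

The point of the sketch is simply that findings and action plans travel together; a report that stops at findings has not closed the feedback loop.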
All departments will need to submit summaries of their ongoing assessment activities to WEAVE every year, and ideally at the end of every semester. The importance of submitting accurate, detailed, and honest reports to WEAVE cannot be overstated. When outside accreditation agencies examine our commitment to assessment, one of their primary concerns is to see that assessment is taking place and is ongoing. The Associate Director of University Assessment, housed in Institutional Research, is available to assist chairs and their Department Assessment Coordinators in uploading and updating their assessment summaries, plans, and reports.
Our goal: For faculty to regularly discuss within their departments their ongoing assessment initiatives, and document those activities (and their results, and their follow-up plans) on WEAVE.
For more information on WEAVE please contact the Office of Institutional Research at (718) 990-1869.
Beginning in September 2011, St. John’s University partnered with Digication, a company that provides electronic portfolios for students.
An ePortfolio is a collection of academic material and accomplishments that a student gathers and places online. ePortfolios have become increasingly common at all levels of education, especially in high schools and colleges. Students use ePortfolios to showcase their intellectual achievements, as well as present themselves academically and professionally to outside audiences. In many areas the ePortfolio is fast replacing the conventional resume as the means by which students apply for jobs or graduate programs.
Faculty use ePortfolios to better understand and evaluate the work their students are doing. For example, ePortfolios offer a means by which faculty and administrators can engage in both classroom assessment and program assessment.
Just as WEAVE offers an essential means by which an institution continually documents its claims for assessment—a living archive and database of mission statements, goals, findings, and action plans—ePortfolios provide the evidence that supports the findings presented in WEAVE. Together, WEAVE and ePortfolios will be the mechanisms that St. John’s uses to further document our ongoing assessment activities.
Our Goal: For faculty to seriously imagine ways that their students can demonstrate expertise in their courses on ePortfolios—via written statements, papers, research, video, photography, audio, reflective journals, etc.
*Support information on ePortfolios can be found here.
It’s crucial that our assessment measures are specific to departments and disciplines, and are owned by faculty and students. One of the most important first steps any department can take toward approaching assessment logically and efficiently is to spend time taking an inventory of that department’s shared values. This can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.
Such a preliminary inventory is important. Many faculty within a department, when speaking casually to one another at meetings or during hallway conversations, might initially seem to uphold the same shared values when it comes to their expectations for student work. Take student writing, for example—a perpetual hot topic among faculty in nearly every discipline. When faculty discuss student writing, their shared values are not infrequently expressed in the negative as faculty complain and sigh over what they have identified as their students’ shortcomings. A common complaint many faculty have heard if not uttered: “My students just can’t write,” a sentiment that might well elicit mutual nods of affirmation from one’s other colleagues.
But while this complaint has value in that it points to a generalized frustration with the student writing faculty see in their courses, the statement remains too vague to be of much pedagogical use. Obviously, students can write—if they were functionally illiterate, numerous red flags would have been raised and they would presumably never have been admitted to college. A statement like “my students just can’t write” is really shorthand for a faculty member’s frustration with her students’ inability to meet certain literacy demands specific to her course and her discipline. Indeed, if the faculty having that hallway conversation about lousy student writing were to sit down together over a series of meetings and carefully articulate what exactly they mean by that statement, they would begin to identify concerns they share, as well as areas where their personal values are not in sync. Faculty #1 might be primarily concerned with preventing students from adopting a first-person voice in their essays; faculty #2 might be more concerned with eliminating a finite list of common errors; faculty #3’s main concern might be getting students to revise their work before submitting it. Faculty #4 might value all of these things, but be even more concerned with students’ ability to analyze assigned texts in their writing. And faculty #5 might be mostly concerned with her students’ ability to marshal evidence and propose an argument.
While all of these faculty express shared frustration over “student writing,” their specific concerns are distinct. Until a department begins to untangle these many values, and works toward identifying (and perhaps prioritizing or ranking) them, that department will not be able to effectively assess the writing of its students—for it has not yet done the necessary work of identifying what, exactly, it means when it refers to “good” and “bad” writing. Any resultant assessment methods might well be problematic, if not invalid.
And so, departments need to take the time to fully explore, argue about, debate, and ultimately define and articulate their shared values. Not only is this an ideal way of initiating serious assessment within a department, but it can be a good way for a department to—perhaps for the very first time—create a detailed portrait of what it stands for. Such an activity, taken seriously, is nothing less than a rigorous exercise in departmental self-identity.
There are scores of books, articles, and websites that offer case studies using simple, sound assessment methods. Three popular resources that we have found useful are Linda Suskie’s Assessing Student Learning: A Common Sense Guide (Jossey-Bass, 2009); Thomas A. Angelo and K. Patricia Cross’s Classroom Assessment Techniques: A Handbook for College Teachers (Jossey-Bass, 1993); and Barbara E. Walvoord’s Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed. (Jossey-Bass, 2010). The menu below (by no means comprehensive) references passages in these books where faculty can find additional information.
Listening to Students
Collecting short, anonymous feedback at different points in the semester can be a great means of finding out from students not only what they think of a course, but where they would prefer greater clarity, more time, or a chance to review certain information. Faculty can ask students to submit a page of typed, anonymous feedback at mid-semester, and then if necessary modify their teaching in response for the second half of the semester. Then, faculty could ask students to submit another one-page feedback at the end of the semester, reflecting upon whether or not they felt their concerns had been addressed in the second half of the semester. Even if a faculty member is unable to fully address every concern that students have, the simple act of asking for students’ opinions, and making an honest effort to address some of them, speaks volumes. Students appreciate such efforts.
Faculty can ask students to take a minute at the beginning or at the end of class to offer feedback, summarize an issue under discussion in the course, or answer a problem. Faculty can then quickly skim through these responses in order to identify any outstanding questions, issues, or problems that students identify.
If the size of one’s class permits, faculty can conduct one-on-one or small group conferences with students, asking them questions related to the course or soliciting their feedback. Many faculty find that students who are silent observers in class tend to open up more freely in such encounters outside of the classroom.
Using an online discussion board, through Blackboard or any other course management system, is an excellent way to gather student comments and to assess their ongoing understanding of material. Many students who would not speak up in class, nor feel comfortable in one-on-one conferences, will freely write feedback in an online format.
See Angelo and Cross’s Classroom Assessment Techniques for more information on such related assessment techniques as focused listing, memory matrices, analytic memos, one-sentence summaries, concept maps, directed paraphrasing, student-generated test questions, diagnostic learning logs, electronic mail feedback, assignment assessments, etc.
Focus Groups

An ideal way for departments to gather information from their majors and minors is to hold focus groups, in which a manageable group of students is brought together to answer a series of questions the department has identified. It is important that the questions asked in each group remain the same, and that the people asking them remain neutral, contributing as little of their own opinion to the conversation as possible. A good incentive for bringing students together is food (free pizza is an important assessment tool that all departments should have in their arsenals). It can help to have two people conduct these focus groups: one to ask questions, gather input from all of the students assembled, and pose follow-up questions to clarify that input; and a second person to record the responses. After a series of focus groups, faculty can report their findings back to their departments and discuss what to change or maintain as a result of that student feedback.
Focus groups have the added benefit of sending a message to students that a department values their input, and regularly considers their opinions as it conducts its business of designing curricula and evaluating the program.
For more on focus groups, see Suskie, pp. 195–196.
Pre-Post Assessment

These “before and after” measures show how students have and haven’t progressed from one period to another. A professor might collect writing samples from her students early in a semester, then compare one or two variables later in the semester to see if they have changed. For example, how are students using evidence to back up their written claims? As with much assessment, the key here is to stay focused on the variable one has chosen to assess: if one is looking at how students are using evidence, one does not (for the purposes of this assessment) focus on other issues as well, such as, say, the presence of lively description. The more that faculty keep their assessment question simple and direct, the greater their success in following through on the project.
A department can also conduct pre-post assessment across courses. A variable in the writing of students in a sophomore course for majors can be compared to that of students in a senior capstone course in order to obtain evidence of the degree to which students are progressing. If a department discovers that its senior students are not handling that variable as successfully as it would like, this can lead to changes in how those sophomore and senior courses are taught. (Such changes in teaching are the all-important “feedback loop” that needs to happen if assessment is to be successful.) It is important to note that identifying that students are not performing up to certain standards in their senior year is not a black mark on the department; in fact, the department’s diagnosis of this problem via assessment—and its resultant efforts to address it—would be an example of an engaged faculty conducting good assessment.
For more on getting students to reflect at the beginning and end of courses, see Suskie, pp. 190–191. Pre-post or before-and-after approaches to assessment are implicit in many of the methods described on this page.
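As a minimal illustration of the arithmetic behind such a pre-post comparison, the sketch below compares the mean score on a single rubric variable (use of evidence) between a sophomore course and a senior capstone. All scores, scales, and course details here are invented for illustration:

```python
# Hypothetical sketch of a cross-course pre-post comparison: the same
# rubric criterion (use of evidence, scored 1-4) applied to writing
# samples from a sophomore course and a senior capstone.
# All numbers are invented; real samples would of course be larger.
from statistics import mean

sophomore_scores = [2, 2, 3, 1, 2, 3, 2]   # evidence-use ratings, sophomore course
senior_scores    = [3, 4, 3, 3, 2, 4, 3]   # same rubric, senior capstone

gain = mean(senior_scores) - mean(sophomore_scores)
print(f"Mean sophomore score: {mean(sophomore_scores):.2f}")
print(f"Mean senior score:    {mean(senior_scores):.2f}")
print(f"Average gain:         {gain:.2f}")
```

A small or negative gain is not a verdict but a prompt: it tells the department which conversation to have next about how those two courses are taught.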
Rating Scales

A popular means of gathering information from students on a range of issues is the rating scale. The First-Year Writing program and the Writing Center have worked together to give all first-year writing students a Likert rating scale at the beginning and again at the end of the semester in which they take ENG 1000c. The purpose of this scale is not to measure student writing ability, but rather students’ engagement and self-assessment as writers.
For more on rating scales, see Suskie, pp. 195–198.
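As a small, hypothetical illustration of how responses to one Likert item might be summarized at the start and end of a semester (the item wording and all numbers below are invented, not taken from the ENG 1000c instrument):

```python
# Hypothetical sketch: summarizing responses to a single Likert item
# ("I see myself as a writer", 1 = strongly disagree ... 5 = strongly agree)
# collected at the beginning and end of a semester. All data invented.
from collections import Counter
from statistics import mean

start_responses = [2, 3, 3, 2, 4, 3, 2, 3]
end_responses   = [3, 4, 3, 3, 4, 4, 3, 5]

# Distributions show where students cluster; the mean shift gives a
# single rough indicator of change in self-reported engagement.
print("Start distribution:", dict(sorted(Counter(start_responses).items())))
print("End distribution:  ", dict(sorted(Counter(end_responses).items())))
print(f"Shift in mean agreement: {mean(end_responses) - mean(start_responses):+.2f}")
```

Reporting the full distribution alongside the mean matters: two groups with the same average can have very different spreads of engagement.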
Rubrics

Many faculty use rubrics as a means of clarifying their values and defining the characteristics of what they might describe as successful, moderate, and less successful performance. Many value rubrics because they make evaluation less mysterious and seemingly arbitrary.
See Suskie, pp. 137–153, for more on scoring guides and rubrics. See also Barbara Walvoord’s appendix of sample rubrics in Assessment Clear and Simple (pp. 107–114). Rubrics designed for our University core courses can be found here.
Some faculty and departments, however, prefer methods other than rubrics. Bob Broad’s concept of Dynamic Criteria Mapping, for example, originated in part from a critique of rubrics that found them too narrow and not descriptive enough to respond to the rich and complex range of variables that can exist in student work. As a result, some faculty and programs devise their own “homegrown” maps or other forms of capturing and defining the characteristics of the student work they wish to promote.
For faculty interested in examples of these alternatives to rubrics, we recommend Bob Broad (ed.), Organic Writing Assessment: Dynamic Criteria Mapping in Action (Utah State University Press, 2009).
Similar in spirit to focus groups, surveys allow faculty to solicit feedback from students in response to a variety of questions drawn up by a faculty member or department. Surveys can be distributed online, or passed out and collected (anonymously) in class. One advantage of surveys over focus groups is that a department can get input from a greater number of students. One possible drawback, however (especially with online surveys), is that students sometimes suffer from “survey fatigue,” as they are continually asked to fill out surveys and evaluation forms for many other programs and events.
There is an art to writing good survey questions. For more information on surveys, see Suskie pp 198-200.
One of the most important early steps any department can take to approach assessment logically and efficiently is to spend time identifying the shared values within that department. The goal is to develop an inventory of a department’s shared values that will then serve as the foundation for the learning goals to be established for a course or program. Many departments would do well to consider this approach before embarking on other, more detailed assessment projects. It can be an ideal way to gather faculty together in a spirit of mutual inquiry, discussion, and exploration.
Individually, faculty can spend time writing down all of the learning outcomes they want their students to achieve—either at the end of a particular course, or at the end of their program. Then faculty come together to share what each had written down privately in order to see where their opinions are in sync. They might rank their desired outcomes, then see where different members of the department prioritize certain skills or qualities. Such a departmental values inventory quickly becomes an exercise in vocabulary analysis: for example, if one faculty member says she wants her students to write “strong claims,” whereas another privileges “rhetorical risk-taking,” faculty would then spend time trying to articulate and carefully define what, exactly, these terms mean to them. Such clarification of terms—and the inevitable intradepartmental faculty debates that result from such conversations—is an essential component of good assessment. Until faculty can carefully and accurately define their own vocabulary of learning outcomes terms, any assessment of students’ ability to meet those outcomes could lack validity. Much of assessment is a matter of moving beyond the general to the specific: fine-tuning, clarifying, and articulating with precision what, exactly, faculty are looking for in student performance.
See “Developing Learning Goals” in Suskie (115-133) and Walvoord (81-85).
Portfolios offer a rich means of collecting artifacts students have created in their courses, along with the students’ own self-assessments. Portfolios can include both early and final drafts, and all manner of artifacts. Beginning in Fall 2011, the University will implement electronic portfolios for certain students. For more on our electronic portfolios, go here.
For more on portfolios see Suskie (202-213) and Walvoord (50-54).
Many assessment methods do not require permission from an Institutional Review Board. Federal policy exempts the following:
(1) Research conducted in established or commonly accepted educational settings, involving normal educational practices, such as (i) research on regular and special education instructional strategies, or (ii) research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.

(2) Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior.
Faculty and departments must be very careful, however, that individual students cannot be identified or harmed by the disclosure of their opinions or feedback outside of the assessment research. For more, see U.S. Department of Health and Human Services, Title 45, Part 46, “Protection of Human Subjects,” Section 101, “To What Does This Policy Apply,” found here. Finally, consult our University’s IRB here for further information.
All of the above assessment methods are legitimate approaches, and can be summarized and entered into WEAVE online as examples of a department’s ongoing efforts to assess its courses and programs.
These resources are intended to supplement the overviews found in the “Unpacking Assessment” pages. Faculty and chairs will find information here on ways of constructing rubrics, links to relevant organizations and archives, and bibliographies. As with the “Unpacking Assessment” pages, we invite others to submit additional information or suggestions for these pages. (For example, while rubrics are commonplace in assessment, there are also recent theories of assessment that argue for alternatives to rubrics. We want these pages to reflect and point to the multifaceted and diverse world of assessment theories and approaches in hopes of enabling faculty to determine which assessment models and approaches best fit their needs.)
These curriculum maps of the core courses indicate the competencies that have been approved by the University Core Curriculum Committee as “essential” or “suggested” for the common core and distributed core courses in the degrees offered by your College or School. All core curriculum course syllabi should include the designated competencies as learning outcomes.
College of Professional Studies
BA, BS: Common Core Competencies
BA, BS: Distributed Core Competencies

The School of Education
BA, BS: Common Core Competencies
BA, BS: Distributed Core Competencies

Pharmacy and Health Sciences
Pharm D: Common Core Competencies
Pharm D: Distributed Core Competencies
BS: Physician Assistant Common Core Competencies
BS: Physician Assistant Distributed Core Competencies
BS: MedTech, Toxicology, Pathology Assistant Common Core Competencies
BS: Pathology Assistant Distributed Core Competencies

St. John's College of Liberal Arts and Sciences
BA, BFA: Common Core Competencies
BS: Common Core Competencies
BA: Distributed Core Competencies
BFA: Distributed Core Competencies
BS: Distributed Core Competencies

Peter J. Tobin College of Business
BS: Common Core Competencies
BS: Distributed Core Competencies
Interactions between languages and contemporary cultures
Below are curriculum maps of the core courses. These indicate the knowledge bases that have been approved by the University Core Curriculum Committee as “essential” or “suggested” for the common core and distributed core courses. All core curriculum course syllabi should include the designated knowledge bases as learning outcomes.
Common Core Knowledge Bases by Course
DNY 1000C
English 1000C & 1100C
History 1000C
Philosophy 1000C & 3000C
Science (BS, Pharm D)
Scientific Inquiry 1000C
Speech 1000C
Theology 1000C
Common Core Knowledge Bases: Pharmacy & Health Sciences Equivalents
History (Physician Assistant and PharmD)
Speech (Toxicology, Physician Assistant and PharmD)
Distributed Core Knowledge Bases by Course
Creativity & Arts
Language (SJC, SOE, TCB, CPS)
Language and Culture
Mathematics
Philosophy
Social Science
Social Science (SOE)
Social Science (TCB)
Theology
Distributed Core Knowledge Bases: Pharmacy & Health Sciences Equivalents
Creativity & Arts, Language & Culture, Math, Social Science (Phys Asst)
Creativity & Arts, Language & Culture, Math, Social Science (Pharm D)
Additional Distributed Core Course Requirements by College/School
SJC Electives
SOE Science
TCB English
Faculty who have used rubrics have found that a description of what constitutes a core competency is an exceptional aid to both faculty and students in improving learning.
Each of the competency rubrics below is “read-only” and may be downloaded for use in classes.
The Association for the Assessment of Learning in Higher Education offers numerous rubrics from institutions here.
Course Goals: the general aims or purposes of the course. Effective goals are broadly stated, meaningful, achievable and assessable. Goals should provide a framework for determining the related learning outcomes of a course.
Students will (or will begin to, depending on level):
I. Understand and apply fundamental concepts of the discipline
(which are part of the course)
II. Communicate effectively, both orally and in writing.
III. Conduct sound research, demonstrating proficiency in information literacy.
IV. Address issues critically and reflectively.
V. Create solutions to problems.
VI. Work effectively in a team.
VII. Gain realistic ideas of how to implement their knowledge, skills and values in occupational pursuits in a variety of settings.
Learning Outcomes: the knowledge, skills, abilities, capacities, attitudes or dispositions you expect students to acquire in your course. Learning outcomes articulate suggested strategies for how the goals can be demonstrated. Clearly state each outcome you are seeking: What will the student be able to do?
Goal IV. Address issues critically and reflectively.
Possible Learning Outcomes
1. Apply concepts and/or viewpoints to a new question or issue.
2. Describe ______________.
5. Give examples of _______.
Goal VI. Work effectively in a team.
1. Demonstrate the ability to contribute, listen, and cooperate with teammates.
2. Define the research needed for the topic, aid in collection, and share information.
3. Identify and fulfill team role duties on time.
4. Demonstrate that assigned work was shared equally.
Bloom’s Taxonomy of Educational Objectives (below) identifies verbs that faculty might use in writing learning goals:
Select methods or instruments for gathering evidence to show whether students have achieved the expected learning outcomes related to program or course goals. Methods of assessment will vary depending on the learning outcome(s) to be measured.
Following is a partial list of examples:
Direct Measures of Student Learning
(Students demonstrate an expected learning outcome)
Scoring Rubrics: can be used to holistically score any product or performance such as essays, portfolios, recitals, oral exams, research reports, etc. A detailed scoring rubric that delineates criteria used to discriminate among levels is developed and used for scoring.
Capstone Courses: could be a senior seminar or designated assessment course. Program learning outcomes can be integrated into assignments. Performance expectations should be made explicit prior to obtaining results.
Case Studies: involve a systematic inquiry into a specific phenomenon, e.g. individual, event, program, or process. Data are collected via multiple methods often utilizing both qualitative and quantitative approaches.
Embedded Questions to Assignments: Questions related to program learning outcomes are embedded within course exams. For example, all sections of “research methods” could include a question or set of questions relating to your program learning outcomes. Faculty score and grade the exams as usual and then copy exam questions and scores that are linked to the program learning outcomes for analysis. The findings are reported in the aggregate.
Standardized Achievement Tests: Select standardized tests that are aligned to your specific program learning outcomes. Score, compile, and analyze data. Develop local norms to track achievement across time and use national norms to see how your students compare to those on other campuses.
Locally developed exams with objective questions: Faculty create an objective exam that is aligned with program learning outcomes. Performance expectations should be made explicit prior to obtaining results.
Locally developed essay questions: Faculty develop essay questions that align with program learning outcomes. Performance expectations should be made explicit prior to obtaining results.
Reflective Essays: generally are brief (five to ten minutes) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue. Content analysis is used to analyze the results. Performance expectations should be made explicit prior to obtaining results.
Collective Portfolios: Faculty assemble samples of student work from various classes and use the “collective” to assess specific program learning outcomes. Portfolios can be assessed by using scoring rubrics; expectations should be clarified before portfolios are examined.
Observations: can be of any social phenomenon, such as student presentations, students working in the library, or interactions at student help desks. Observations can be recorded as a narrative or in a highly structured format, such as a checklist, and they should be focused on specific program objectives.
Indirect Measures of Student Learning
(Students or others report their perception of how well a given learning outcome has been achieved)
Standardized Self-Report Surveys: Select standardized surveys that are aligned with your specific program learning outcomes. Score, compile, and analyze the data. Develop local norms to track responses across time and use national norms to see how your students compare to those on other campuses.
Focus Groups: are a series of carefully planned discussions among homogeneous groups of 6-10 respondents who are asked a carefully constructed series of open-ended questions about their beliefs, attitudes, and experiences. The session is typically recorded, and the recording is later transcribed for analysis. The data are studied for major issues and recurring themes, along with representative comments.
Exit Interviews: Students leaving the University, generally graduating students, are interviewed or surveyed to obtain feedback. The data obtained can address strengths and weaknesses of an institution or program and/or assess relevant concepts, theories or skills.
Interviews: are conversations or direct questioning with an individual or group of people. The interviews can be conducted in person or on the telephone. The length of an interview can vary from 20 minutes to over an hour. Interviewers should be trained to follow agreed-upon procedures (protocols).
Surveys: commonly combine open-ended and closed-ended questions. Closed-ended questions require respondents to answer from a provided list of responses. Typically, the list is a progressive scale, ranging from low to high or from strongly agree to strongly disagree.
Classroom Assessment: is often designed for individual faculty who wish to improve their teaching of a specific course. Data collected can be analyzed to assess student learning outcomes for a program.
Adapted from work done by Allen, Mary; Noel, Richard C.; Rienzi, Beth M.; and McMillin, Daniel J. (2002). Outcomes Assessment Handbook. California State University, Institute for Teaching and Learning, Long Beach, CA, and from the APA Task Force on Undergraduate Psychology Major Competencies.
Below are some resources that may be helpful in thinking about and planning assessment at a variety of levels.
Assessment of Majors