Excellence in Teaching & Learning

Defining and Measuring Student Learning

by Dr. Pamela Speaks

The value of assessment lies in defining the purpose of the course, which is based on the course goals and outcomes established and stated in the course syllabus. These goals and outcomes are developed in collaboration with experts in the respective fields and contribute to meeting the expectations of the Higher Learning Commission. Section 4B of the Criteria for Accreditation requires that:

The institution demonstrates a commitment to educational achievement and improvement through ongoing assessment of student learning.

    1. The institution has clearly stated goals for student learning and effective processes for assessment of student learning and achievement of learning goals.
    2. The institution assesses achievement of the learning outcomes that it claims for its curricular and co-curricular programs.
    3. The institution uses the information gained from assessment to improve student learning.
    4. The institution’s processes and methodologies to assess student learning reflect good practice, including the substantial participation of faculty and other instructional staff members.

Assessment refers to a systematic, continuous process of collecting and analyzing information. That information informs instructional strategies targeting what we want students to know and be able to do upon completing a course or program, and how we determine mastery (that they really “got” it).

“Assessment is essential not only to guide the development of individual students but also to monitor and continuously improve the quality of programs, inform prospective students and their parents, and provide evidence of accountability to those who pay our way” (Gardiner, 1994, p. 109).

Understanding goals, objectives, and outcomes is key to making a course relevant. Following these guidelines is foundational to online courses; it provides accountability for faculty and ownership for students. According to Gikandi, Morrow, & Davis (2011), “As online and blended learning has become common place educational strategy in higher education, educators need to reconceptualise fundamental issues of teaching, learning and assessment in non traditional spaces.” Their study reveals key themes and findings in the field of online formative assessment. Instructional feedback plays a critical role and differs from feedback in a traditional setting.

Blackboard tools useful for assessing and evaluating student comprehension include Collaborate, Discussion Board forums, wikis, self-test quizzes, groups, e-portfolios, and the virtual classroom, to name a few. Each of these tools requires practice and/or training for effective implementation. In addition, many online students, especially those new to the online/distance/remote learning environment, may require more tutorial guidance than instructors are accustomed to providing. Time spent mastering these assessment tools up front allows the instructional process to focus on content, maintaining integrity and limiting compromises that detract from the intent and purpose of course goals and outcomes.

Given the heightened priority of validity and reliability in online course assessment, attention to detail and to best (or innovative) practice is required for successful online formative assessment of student learning.

References

Gardiner, Lion F. (1994). Redesigning higher education: Producing dramatic gains in student learning. ASHE-ERIC Higher Education Report No. 7.

Gikandi, J.S., Morrow, D., & Davis, N.E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), 2333-2351.

Higher Learning Commission. (2015). Criteria for accreditation, section 4B.

University of Connecticut

Adventures in Measuring First Year Student Learning

by Susan Woitte & Kathleen McCay

Anyone who has worked with first-year students is familiar with the barriers to defining and measuring their learning. The library has had considerable experience with this through promoting our services and teaching research resources and information literacy skills within the University Strategies course. After years of unsuccessful assessment, a new way to measure student learning from our instruction came from an unexpected place this year.

For most of the time we have worked with University Strategies, we relied on observation and limited instructor feedback to decide what students needed to learn and whether they were learning from us. Often we updated the information literacy instruction for our University Strategies students based on this and on research done by other institutions, not our own.

In the last couple of years, we began the diligent work of moving from guesswork to using assessment as evidence of success and as an indicator of where we need to make adjustments. We started by including pre- and post-test assessments in our online instruction module, with the goal of measuring student learning and finding out whether we were meeting our learning objectives.

Immediately we ran into problems. More students took the post-test than the pre-test. One of the questions did not actually have a correct answer listed. We could not track specific students' learning because the assessment tool was a Blackboard survey, in which students are anonymous. Even with these issues, we were surprised to discover evidence of student learning from our new measurement tool.

Although it was not our original intent, one open-ended question prompt we placed at the end of the post-test produced rich results. The question asks, “Please share your comments about this library assignment with us. For example, did you learn anything? Was it a relatively fun way to complete an activity for University Strategies? Do you have any suggestions for improving the assignment for future students?”

University Strategies students predominantly wrote positive comments about the assignment. We could anecdotally see success from reading the list of answers, but how could we analyze it in a way we could share with others?

In reviewing the student comments, one of the team members noticed a pattern: many students specifically mentioned that they had learned at least one of our objectives. She collected all of the student comments and created a simple rubric through which the team could quantify the qualitative data, and then applied an intercoder reliability measurement to demonstrate whether our qualitative rubric was consistent across coders.

Every member of our team applied the rubric, coding each student's comments as to whether at least one of the learning objectives had been stated. The current University Strategies information literacy assignment learning objectives state that, upon successful completion of this course, students will be able to:

  • use at least 5 factors when evaluating a website,
  • recognize that the NSU library offers multiple reliable resources,
  • recall one or more features of Google Scholar,
  • recall one or more features of advanced Google search,
  • indicate willingness to ask library employees for assistance in person or online when needed.

We had minor differences in our coding, which we readily resolved to reach a 100% intercoder reliability coefficient. The result of measuring student learning from our open-ended question demonstrated that, out of 756 responses, 605 students (80%) self-identified as having learned one or more of the learning objectives.

[Pie chart showing the data in the previous paragraph]
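As a sketch of the approach described above, simple percent agreement across coders (one basic intercoder reliability measure, shown here in place of whatever specific coefficient the team computed) could be calculated like this. The coder names and codings are hypothetical, not the library's actual data:

```python
from itertools import combinations

def percent_agreement(codings):
    """Pairwise percent agreement across coders.

    codings: a list of equal-length lists, one per coder, where each
    entry is 1 if a comment mentions a learning objective, else 0.
    """
    agreements = 0
    comparisons = 0
    for i, j in combinations(range(len(codings)), 2):
        for a, b in zip(codings[i], codings[j]):
            comparisons += 1
            if a == b:
                agreements += 1
    return agreements / comparisons

# Hypothetical codings from three coders over five student comments.
coder_a = [1, 0, 1, 1, 0]
coder_b = [1, 0, 1, 1, 0]
coder_c = [1, 0, 1, 1, 0]

agreement = percent_agreement([coder_a, coder_b, coder_c])
print(f"Intercoder agreement: {agreement:.0%}")  # identical codings give 100%

# Share of comments coded as mentioning at least one objective
# (using coder_a's codings, since all coders agree here).
share = sum(coder_a) / len(coder_a)
print(f"Self-identified learning: {share:.0%}")
```

With the team's real data, the second figure corresponds to the reported 605 of 756 responses (80%).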

It took a progression of attempts to successfully measure first year student learning from our information literacy assignment. It also took the team work of our Library Information Literacy Community and a little bit of luck. If you want to hear more about our measurement, future plans and continued learning please join us at Community and Collaboration Day or consider joining our learning community. We welcome the opportunity for collaboration!

Measuring Learning in Student Writing

by Dr. Christopher Malone

How do we define and measure learning in student writing? When it comes to good writing, we might be tempted to agree with Plato (and Robert Pirsig after him), when Socrates asks his student: "And what is good, Phaedrus, and what is not good--need we ask anyone to tell us these things?" It's difficult to come up with criteria for assessing an effectively written essay. We just know it when we read it.

This is probably because what constitutes good writing is less quantifiable and more rhetorical, involving an awareness of audience, purpose, and context. Matters of tone and word choice, tuning into what others have said before, understanding the conventions of certain discourse and how to meet the needs of specific readers--our rhetorical understanding of these various elements gives rise to writing strategies that may be more or less effective. And as much as it is possible to develop these rhetorical skills in a college writing classroom, we also all have an intuitive understanding of them to the extent that we are communal, social, and rhetorical creatures. Our job as teachers who ask students to write is to articulate our shared sense of what makes some strategies effective more than others, in an effort to involve students more self-reflectively in academic discourse.

Central to what we mean by academic discourse is research; and how research relates to our expectations about student writing is a good place to begin thinking about what we want our students to learn and how to measure it. Essay assignments that involve a research component may be evaluated according to several criteria: introducing quotations carefully and discussing them afterward; paraphrasing source material clearly with a sense of the larger argument in mind. All of these concerns involve stylistic choices and matters of professionalism important for our students to consider when it comes to bringing other points of view into dialogue with their own.

When coming up with assignments, we might reflect on the assumptions underlying our approach to having students write the "research paper." Some research-based assignments can discourage students from creatively engaging with issues, rendering their voice and perspective superfluous rather than encouraging them to enter into and have some stake in a scholarly conversation. Evaluative criteria may involve concerns such as the number of sources used, or correct formatting, but should also get at the larger rhetorical purpose of having students do research in the first place. This has to do with helping them realize a sense of their own investment in the material we ask them to study.

We want our students to think about research, and the decisions they make about what to do with it in their papers, as matters of rhetorical effectiveness more than correctness. The grading criteria and learning outcomes that we formulate should foster in students this type of understanding. Developing a rhetorical attitude towards research depends on emphasizing the writing process as much as the final product. The criteria that we use to measure research elements in an essay should emphasize that process, and doing so will ultimately position students to write more effectively. This just means breaking down and presenting what it means to do research as a process involving steps, and making those steps part of the larger grade: for instance, asking students to submit abstracts or summaries of sources before they begin drafting, or having them generate early on a list of "research questions" concerning the direction they want to take in a paper, questions they need to consult sources to help answer. (Such questions can give focus to an otherwise meandering search for information related to a writing topic.) Emphasizing that writing and research involve a process is not reserved for freshman composition courses. Nor does requiring these steps necessarily entail more time spent grading; a simple completion grade for each step can suffice.

Asking students to write often in all of their coursework develops in them critical thinking and research skills, fosters creative approaches to problem solving, and makes them better communicators. Defining these benefits as learning outcomes--in ways that persuade students that these skills are necessary to their academic success in general, beyond just a particular course--is crucial. Plato would agree that we need not ask anyone to tell us why, for our students, these things are all good.

How can I tell they are getting it? Getting the most out of your Student Learning Assessment

by Dr. Allyson Watson

As a faculty member at Northeastern State University, I can attest to the fact that we have a keen ability to teach creatively and innovatively. I have listened to several colleagues across the campus share new ideas and teaching strategies that are brilliant and bring their classrooms to life. Like many of you, I find it exciting to glean ideas from the faculty presentations during the “Delicious Dialog Faculty Spotlight” series held monthly on our campuses. In all of our pedagogical capacity and new teaching innovation, assessment is more than likely at the forefront of your planning. If it isn't, it should be; but it doesn't have to resemble what is often perceived as negative about assessment. As faculty members in education, we are constantly battling the over-assessment of students in K-12 school settings. We look for ways to teach our teacher candidates how to gain an understanding of student learning outcomes using a variety of measures. What I know for sure is that effective assessment can happen in your daily course routine regardless of the delivery mode, be it face-to-face, hybrid, or online.

To determine the extent to which your students are learning what you intend for them to learn, the key strategy is curricular alignment. You may be the sole faculty member in your discipline, but the materials you teach can be the catalyst that enhances your students' career marketability. By determining whether your students have grasped a concept well enough to explain it or teach it to someone else, you can also assess their level of learning.

To help you maintain academic rigor by aligning your curricular standards with your student learning assessments, I have gathered five tips from the recent research literature.

  1. Take a moment to review your course syllabi and activities with fresh eyes, and align each student learning outcome with the student learning assessments for the semester. You can also identify a colleague or team of colleagues with whom to strategize about how to create the best opportunities for learning and assessment in similar courses (Dickson & Treml, 2013).
  2. Use active student engagement strategies within your course to reinforce student learning outcomes and course goals through weekly summative assessments (Holmes, 2015).
  3. Provide meaningful and timely feedback to students on formal assessments and in-class discussions. Students feel more engaged when they are aware of their strengths and weaknesses based on assignment criteria (Evans, Zeun, & Stanier, 2014).
  4. Maintain objectivity by creating rubrics that assess based on the desired student learning outcome (Gezie, Khaja, Chang, Adamek, & Johnsen, 2012). Our own Center for Teaching and Learning can assist with building rubrics into your curricular design.
  5. Engage your students in the assessment process to make it meaningful to their learning (Lauer & Korin, 2014).
I have been interested in student assessment for quite some time, and I would never provide tips I hadn't tried myself. A recent paper piqued my interest because it spoke specifically about engaging students in the assessment process. In Assessment Update: Progress, Trends and Practices in Higher Education (2014), researchers highlighted the use of undergraduate students as resources for building course assessment. In the example the authors, Lauer and Korin, provided, faculty and students were immersed in a four-week, grant-funded assessment training where they learned to align course outcomes with effective assessment resources. After the training, the student team worked across disciplines with faculty to create surveys and analytical instruments for course use. Both the faculty and the student assessors reported benefits from the collaboration. One effective feature was including the student voice and perspective in the assessment process.
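Tip 1, mapping each learning outcome to the assessments meant to measure it, can be sketched as a simple data structure. The outcome and assessment names below are hypothetical examples, not any actual course's curriculum:

```python
# A minimal sketch of a curricular alignment check: map each stated
# learning outcome to the assessments intended to measure it, then
# flag any outcome that no assessment covers.
alignment = {
    "evaluate website credibility": ["post-test", "website rubric"],
    "use library databases": ["annotated bibliography"],
    "cite sources correctly": [],  # no assessment yet: a gap to fix
}

unassessed = [outcome for outcome, assessments in alignment.items()
              if not assessments]
print("Outcomes lacking an assessment:", unassessed)
```

Even a table like this, kept alongside the syllabus, makes gaps between stated outcomes and actual assessments visible at a glance.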

We have the talent and dedication in our faculty team to explore this and other options that allow our students to gain the most from course outcomes using student learning assessments. With these five tips and the variety of student assessment strategies you are probably already using, we will continue to gauge our students' learning and truly assess that they are getting it!

References

Dickson, K. L., & Treml, M. M. (2013). Using assessment and SoTL to enhance student learning. New Directions for Teaching & Learning, 2013(136), 7-16. doi:10.1002/tl.20072

Eisenhauer-Ebersole, T. (2014). Relating students' grades and measures of specific outcomes. Assessment Update, 26(1), 7-15. doi:10.1002/au

Gezie, A., Khaja, K., Chang, V. N., Adamek, M. E., & Johnsen, M. B. (2012). Rubrics as a tool for learning and assessment: What do baccalaureate students think? Journal of Teaching in Social Work, 32(4), 421-437. doi:10.1080/08841233.2012.705240

Holmes, N. (2015). Student perceptions of their learning and engagement in response to the use of a continuous e-assessment in an undergraduate module. Assessment & Evaluation in Higher Education, 40(1), 1-14. doi:10.1080/02602938.2014.881978

Lauer, A. J., & Korin, J. R. (2014). Expanding assessment perspectives: The importance of student leadership in student learning outcomes assessment. (Cover story). Assessment Update, 26(1), 1-16. doi:10.1002/au.20003

Lowood, J. (2013). Restructuring the writing program at Berkeley City College: Or how we learned to love assessment and use it to improve student learning. Assessment Update, 25(6), 6-14. doi:10.1002/au

Souza, J. M. (2014). Empowering faculty and students with assessment data. Assessment Update, 26(1), 3-15. doi:10.1002/au

Fall 2015 CCD Call for Proposals - EXTENDED

The Center for Teaching & Learning is pleased to announce the Call for Proposals for the Fall 2015 Community & Collaboration Day, which will be held on August 12. Proposals are now due by April 13 at 8 a.m. You can submit a proposal online.
