Blog



Events

On July 20, I'll be leading the Faculty Conclave at the World Education Congress of Meeting Professionals International in Las Vegas. This is for faculty teaching courses on meeting planning in hospitality and other programs. We'll begin with a two-hour session on effective teaching, learning, and assessment and then move into a freeform conversation on whatever participants are interested in. For more information, click on the Events link of my website.

On September 17, I'll be leading a one-hour webinar on "Six Steps to Preparing for Accreditation Review." No matter who your accreditor is, you'll likely pick up valuable, practical tips. Click on the Events link of my website for more information.

As I announced in another blog entry today, the current (March-April 2013) issue of Assessment Update features an Editor's Notes column by...me! I am so honored that Trudy asked me to guest-author this column. It's on "Helping Faculty Members Learn," and my other blog entry today (click on the Ideas category link above) gives you some background on how the column came to be.

 

Also... I'll be speaking at the following events, and I hope to see you! For more information, click on the Events link.

  • June 3: Third Annual AALHE Assessment Conference in Lexington, Kentucky
  • July 23: SACSCOC Summer Institute on Quality Enhancement and Accreditation in Daytona Beach, Florida
  • October 25-26: Using Grading Strategies to Understand and Improve Student Learning, a two-day workshop in Atlanta

 

I also have some webinars coming up, but the details are not completely nailed down yet. Check back for more info!


Clearing the Fog

There have been a number of surveys of employers on what they most value in new hires. Putting them together, I see the following skills crop up most often, in roughly the following order:

  • Teamwork and collaboration, including listening
  • Written and oral communication, especially the ability to articulate ideas clearly and effectively
  • Real-world problem solving, especially complex problems, under pressure or “on the fly”
  • Critical thinking and analysis, especially in evaluating information and conclusions
  • Flexibility and adaptability to change, including the capacity to continue learning
  • Creativity and innovation
  • Intercultural knowledge and skills, especially the ability to work with people from diverse cultural backgrounds
  • Ethical judgment
  • Quantitative and computer skills, especially the ability to understand numbers and statistics
 

Consider this: according to surveys by AAC&U and the Higher Education Research Institute at UCLA, 78% of employers want to know that graduates have used real-life examples in coursework, but only 55% of faculty report doing so, only 42% of college seniors believe their professors have given them chances to apply classroom learning to real-life situations, and only 29% of seniors say they are satisfied with the relevance of their coursework to real life.

How is your college doing?

Institutional effectiveness is typically defined as how well an institution achieves its mission and goals. But if we look at institutional effectiveness through the lens of accountability, a broader definition is more appropriate. My definition of institutional effectiveness is how well the institution meets its stewardship responsibilities, which include not only achieving its mission and goals but also:

  • Meeting the needs of its students and other constituents (for example, employers in the region it serves)
  • Serving the public good
  • Ensuring the institution’s health and well-being
  • Deploying resources effectively and prudently
  • Demonstrating publicly that the institution is meeting these stewardship responsibilities

This question--does the American system of accreditation really work?--was recently posed in a LinkedIn group discussion, and it’s not one that can be answered simply or quickly.

 

First, for those of you outside the U.S.: Unlike the vast majority of countries, the United States does not have a government agency directly responsible for assuring the quality of higher education. This responsibility is assumed by several dozen accreditation agencies, each serving the needs of a different sector of America’s incredibly diverse higher education enterprise. They are all membership organizations in which the member institutions set their own rules and enforce them. It’s a bit like a private club: If one member turns out to be an embarrassment or liability, the reputation of the organization and its other members can suffer. So the other members first try to have quiet conversations with the problematic member and help it improve. If that fails (which is pretty rare, because most colleges want to succeed), the organization takes steps to remove the institution’s membership (accreditation).

 

Another distinctive characteristic of the U.S. model of quality assurance is that the Federal government does have an oversight role, because Federal financial aid money can only go to students attending colleges accredited by a Federally-recognized accreditor. The Federal government has laws and regulations about accreditors and reviews them on a regular basis. 

 

So what are the pros and cons of this system of quality assurance? 

 

Pros 

 

1. Low cost: Accreditation work is often conducted by volunteers from member institutions. This is far less expensive than paying Federal employees to do this work.

 

2. Relevance: A theology seminary and a technology institute should not have to operate under exactly the same standards; there are things vital to one that are irrelevant to the other. To keep accreditation relevant to and appropriate for each institution, accreditation rules are generally designed and approved by member institutions and can vary from one accreditor to another. This promotes the tremendous diversity of American higher education, which is one of its greatest strengths and makes it unique in the world. No matter what you want to study and how you want to learn it, you will find a college here offering what you want, and that’s thanks in large part to an accreditation system that lets that happen.

 

3. Comprehensiveness: Most colleges aim to accomplish a lot more than simply prepare students for entry-level jobs at decent salaries. The U.S. system can provide a more comprehensive, balanced approach that examines the full extent of the college’s mission and goals.

 

4. Value to the institution: Because of #2, the accreditation process can be valuable to the institution itself, generating useful introspection and ideas as well as helpful recommendations from the external evaluation team.

 

5. Pass/fail report card: With most accreditors, an institution is either accredited or not accredited. An institution may be sanctioned or placed on probation, but it remains accredited. This pass-fail system is easy for the public to understand, even if it doesn’t always understand the underlying details. 

 

Cons 

 

1. Time: Institutions find accreditation work incredibly time-consuming; some have full-time staff members who work year-round solely on compiling documentation needed for accreditation. Many have multiple accreditations for specific programs they offer, and those accreditors sometimes have diverse and conflicting demands.

 

2. Complexity: Many American colleges may offer dozens if not hundreds of academic programs of study, and there is no cost-effective way to thoroughly review and validate every single one. (Many colleges monitor their own programs through a system of regular program review, but these vary widely in quality and effectiveness.)

 

3. Communication: Because of the complexity of what most American colleges do, their quality and effectiveness cannot be summed up in a simple report card or checklist. This makes the accreditation process and its results, beyond the pass/fail status, hard to communicate to those outside academe.

 

4. Inconsistency: Because accreditors rely on volunteers, the process is complex, and education concerns human beings rather than widgets, the accreditation process is inherently subjective and therefore inconsistent.

 

Bottom line: The American system of accreditation does work: It weeds out the “deadwood” and forces many colleges to make necessary improvements. As a result, American higher education generally has a strong reputation across the globe. Is there room for improvement? Absolutely. 

Lately I've been finding myself explaining that the reports accreditors are looking for are akin to the scholarly research reports familiar to many faculty:

  • Both start by addressing clearly articulated goals (in the research report, the purpose of the study; in the accreditation report, institutional and program/unit-level goals).
  • Both articulate targets for those goals (in the research report, the hypothesis; in the accreditation report, targets or benchmarks for key performance indicators or metrics).
  • Both present summaries of evidence related to those goals and targets.
  • Both draw conclusions from that evidence.
  • Both analyze the evidence and conclusions.
  • Both use the evidence and conclusions to develop action steps (in the research report, recommendations for further study; in the accreditation report, implemented improvements).
  • Both require documentation of processes and results so the work can be replicated and so subsequent changes/improvements (in the research report, changes in subsequent trials) can be identified.

I've been working with several institutions lately on assessing institutional effectiveness as well as student learning. A repeated question has been how to link planning and budgeting in these extraordinarily difficult times, when the conversation is not about where we can increase funding but about where and how funding must be cut.

 

I like to draw an analogy to a family budget when a family member experiences a pay cut. An across-the-board cut in family expenses isn't practical: the rent or mortgage and utility bills must be paid in full; you usually can't cut them by, say, 5%, unless you downsize or can refinance your mortgage. And there are other basic things a family will continue to pay for unless the bottom falls out: everyone must be fed and clothed.

 

But then there are the more discretionary expenses, and this is often where a family makes the big decisions about spending: eating out, entertainment, vacations, a new car. And this is where a family's values and priorities come into play. My family loves to travel, and we're willing to drive basic cars and furnish our house inexpensively so we have the money to travel. We have friends, however, for whom a new car is more important. They travel more rarely but buy a new car every couple of years.

 

These decisions are, of course, based on what each family considers most important: its mission, values, and goals, if you will. And that's exactly how a college should be approaching budget cuts: What are the things we want to keep doing, no matter what? What are the things that, while good programs or initiatives, aren't as central to our mission, values, and priorities? These are the questions that need to be asked in an era of funding constraints.

Several times during my workshops and consultations over the past few months, someone's said something disparaging about "subjective" assessment, such as "That rubric is just a subjective assessment. What we need are objective assessments that will be more accurate."

 

Well, what are objective and subjective assessments anyway? An objective assessment is one for which there's only one correct answer, while a subjective assessment has more than one correct answer. For example, a simple addition problem would be an objective assessment, while asking students to explain a concept in their own words would be subjective. My own definition is that an objective assessment can be scored by an 8-year-old armed with an answer key, while a subjective assessment requires professional judgment in scoring.

 

But professional judgment goes not just into scoring but also into developing or choosing the assessment tool. You can write very easy test questions or very hard test questions. You can write a rubric so that all your students do very well or one so that they mostly get pretty low scores. Your students will likely "do better" on some published instruments than others, and you and your colleagues can let this affect your choice of instrument.

 

So, in a way, ALL assessments are subjective, in that they all require professional judgment. And objective assessments are not necessarily "better" than subjective ones.

Stuart Wasilowski at South Piedmont Community College recently shared an intriguing thought with me: "What prevents institutions from doing what you prescribe? I think, leadership and fear."

 

Stuart is spot on! For years I've been sharing what Marilee Bresciani says are the two major barriers to assessment: not understanding the value and importance of assessment and insufficient resources to engage in assessment. Then I've added a third barrier of my own: fear of change and risk-taking. But Stuart has taken Marilee's two barriers and identified the cause and solution for them: leadership. Institutional leaders are the ones who are empowered to take proactive steps so that everyone understands the value and importance of assessment and has the resources to get it done.

 

We know that, when institutional leaders "get" assessment, it gets done, and it gets done well. When they give it lip service or, worse yet, disparage or ignore it, there will be pockets of good practice but not a pervasive culture of assessment. Thank you, Stuart, for putting this so succinctly!

For many years, the answer has seemed simple to me: Students of a good teacher learn what they're supposed to be learning...and hence the value of including learning assessment evidence when teaching is reviewed and evaluated.

 

But this answer is a little too simplistic. Teaching and learning is a partnership, with students bearing a good chunk of the responsibility for learning, and even the greatest teacher can't guarantee that every student will learn what he or she is supposed to learn.

 

In a recent workshop I did, one participant defined a good teacher as someone who's always trying to improve his or her teaching. What a great definition! It still brings assessment into the mix, because improvements should be based on evidence of what students have and haven't learned--in other words, assessment results. I would add just one more caveat: Good teachers try to improve their teaching informed not only by their students' assessment evidence but also by research on practices that promote deep, lasting learning. There's a table listing these practices in my book Assessing Student Learning: A Common Sense Guide (Jossey-Bass). 

How much is enough?
04/02/2012

It's one of the most common questions I get: How much does our accreditor want us to do--and report--on assessment?

 

And it's the wrong question. What regional accreditors want to see, ideally, is simply a compilation of assessment reports you share and use among yourselves, because you all understand how important it is to understand whether students have learned what you want them to...and how important it is to use evidence and not just gut instinct when making important decisions.

 

So the right questions are, "What do we need to see to make sure our students are learning what we think is important?" and "Are our decisions addressing important issues and based on enough information and evidence that we're reasonably confident of them?"

In the last few weeks I've worked with faculty at several colleges and universities to help them move forward with assessment. In almost every case, we ended up talking not about assessment per se but about what they want students to learn and how they help students learn those things. It inevitably became clear that the problem was not assessment but that they didn't have a clear sense of their key expected learning outcomes and/or the curriculum wasn't designed to give students enough time to learn and practice those things.

 

If you're teaching it, and you're grading students on it, you're already assessing it. If you're struggling to figure out how to assess something, I'll bet it's because either you don't yet have a clear sense of what it is or you aren't really teaching it.

 

For tips on articulating expected student learning outcomes, see Chapter 6 of my book Assessing Student Learning: A Common Sense Guide. Chapter 7 talks about curriculum mapping, a powerful tool to evaluate how well your curriculum addresses a particular goal.

I've been getting a lot of questions lately about assessing institutional effectiveness. This is not hard, people! Fundamentally, it's tracking your institution's progress toward achieving its mission and goals.

 

Let's say, for example, that one of your institution's strategic goals is to increase student graduation rates. Some folks identify some initiatives to improve student graduation rates, and "assess" the goal by checking off the initiatives that have been implemented. Uh, no...despite your best efforts, those initiatives might not have any impact on graduation rates.

 

Instead, set a target graduation rate. You're not going to achieve it overnight; even if you implement initiatives today, you'll probably need to wait years to see if they impact graduation, especially if they're designed for first-year students.

So set your target graduation rate for a few years from now, then develop some interim targets to assess progress toward that goal. You might, for example, set target first-year-to-sophomore retention rates.

 

And no one is looking for an elaborate report here. Think of your board members. They should be tracking progress toward achievement of the institution's strategic goals, but they don't have time to wade through a huge report. What would they want to see that would be succinct yet informative? That's the kind of report that institutional leaders should be monitoring on a regular basis...and that accreditors want to see.

How many faculty do you know who've said, "Assessment is a violation of my academic freedom"? Now Gary Rhoades, general secretary of the AAUP, has put that to bed once and for all. Here are some quotes from him in "What Faculty Unions Say About Student Learning Outcomes Assessment," a paper released by the National Institute for Learning Outcomes Assessment in May 2011:

 

"To some observers as well as some faculty, the AAUP's principles and policies might suggest that the association encourages its members to resist the assessment of student learning outcomes, including acting on that data to reform curriculum and instruction. That is a fundamental misreading and a misapplication of the association's basic principles and they pertain to assessment and institutional improvement."

 

"Assessment of student learning and reform of teaching and academic programs are core academic activities. As such, the AAUP sees them as being the primary responsibility of faculty--individually and collectively."

 

"...the AAUP emphasizes the collective responsibility of the faculty as a whole for academic programs, suggesting that an academic department, for instance, can adopt pedagogical or curricular standars that colleagues teaching the course(s) need to adopt."

 

"There is no reason that a faculty cannot collectively take on the task of identifying student learning outcomes, conducting those assessments, and revising curriculum accordingly."

About a year ago, I posted here some examples of course, program, and institutional goals. I'm still getting a lot of questions about this, so here's some more on this topic.

 

Student-level assessment is assessing learning of individual students, generally on course learning goals. This is the kind of assessment that faculty have done for literally thousands of years. Its primary purposes are to grade students and give them feedback on their learning.

 

Course-level assessment is assessing learning of an entire class of students, again on course learning goals. Here assessment results are summarized for all students in a class or course, to get an overall picture of students' collective strengths and weaknesses. The primary purposes are to reflect on and improve teaching practice.

 

Program-level assessment is assessing learning of all students in a program on program-level learning goals. Program learning goals are generally addressed over multiple courses and are broader than course learning goals. Course-level goals contribute to program-level learning goals. For example, several courses in a program may each address some specific technological skills expected of graduates. Those courses collectively contribute to an overall program-level goal that graduates use technologies appropriately and effectively in their careers.

 

There are many ways to assess program-level learning goals, but often the easiest way is to identify some major project or assignment that students complete just before they graduate. If the assignment is well-designed, it should require students to apply much of what they've learned throughout the entire program and thus demonstrate achievement of a number of the key program-level learning goals.

I've gotten a lot of questions about what I mean by "externally-informed standards and benchmarks," so let's deconstruct this phrase a bit.

 

First, I define "standards and benchmarks" quite broadly (and I use the two words synonymously)--not just seeing how you compare against peers or a "brightline," but perhaps also how much your results have changed over time, how much improvement you're seeing, or how your results compare against some standard or benchmark that you and your colleagues have established. For example, you and your colleagues might decide that you are aiming for at least 80% of your students to write well enough to meet the "very good" criteria on a rubric that you've developed.

 

My question is, "Why did you decide on 80%? Why not 75%, or 90%, or 100%?" I've seen faculty too often simply pull a number out of a hat, perhaps under the guise of their "professional judgment." Folks, this kind of navel-gazing isn't going to cut it in the 21st century. We need to justify the standards or benchmarks we set by consulting with appropriate external sources when we're setting them. For example:

  • Consult with faculty colleagues in peer programs at peer institutions.
  • If your students go on to graduate school or 4-year study from a 2-year institution, consult with those faculty about what level of writing competence they expect in an entering student.
  • Consult with your disciplinary association to see if it's doing any work on articulating appropriate standards.
  • Convene a panel of employers of your graduates and ask them what level of writing competence they expect in a new employee.
  • Keep tabs on the Lumina Foundation's Tuning USA project, in which it's trying to articulate competencies at a variety of levels in specific disciplines. Right now it's a pilot project, but it's bound to provide food for thought.

I'm getting a lot of questions lately about the relations among student learning outcomes--and assessments of them--at the institutional, program, general education, and course levels.

 

Basically, goals should relate to one another as appropriate. Here's an example from my book Assessing Student Learning: A Common Sense Guide (Table 8.4, page 125):

 

A college may have a goal that students will "use and analyze a variety of resources to make decisions and solve problems." It might assess this by implementing a graduation requirement that all students, regardless of major, complete a senior capstone research project that's assessed using a rubric.

 

The general education curriculum may have this same goal. (Often gen ed goals are institutional goals and vice versa.) This goal might be taught and assessed in the social science requirement of the gen ed curriculum by having students complete group research projects that are assessed using a rubric.

 

The English department may have a goal that students will "conduct research on issues in the study of English literature." This is a bit more specific than the institutional goal. This goal might be assessed by having students complete the college's required senior capstone research project on an issue in the study of English literature. (And that's the key to all of this: Find assessments that can do double duty!)

 

The English department's course on Shakespeare's tragedies may have a goal that students will "analyze scholarly views of the motivations of one of Shakespeare's characters." This is a bit more specific than the English program's goal. The idea is that this course, among other English courses, should help students achieve the English program goal. This course-level goal can be taught and assessed by having students complete a research paper, again assessed using a rubric.

Why aren't grades sufficient assessment evidence of student learning? Let's count the ways...

  1. Grades alone do not usually provide meaningful information on exactly what students have and have not learned. So it's hard to use grades alone to decide how to improve teaching and learning.
  2. Grading and assessment criteria may (appropriately) differ. Some components of grades reflect classroom management strategies (attendance, timely submission of assignments) rather than achievement of key learning goals.
  3. Grading standards are sometimes vague or inconsistent. They may weight relatively unimportant goals more heavily than some major (but harder to assess) goals.
  4. Grades do not reflect all learning experiences. They provide information on student performance in individual courses or assignments, but not student progress in achieving program-wide or institution-wide goals.

Direct evidence of student learning is tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned. Imagine a critic of your college or your program--someone who thinks it's a complete joke, a total waste. Direct evidence is the kind that even that critic couldn't argue with.

 

If one of your goals is effective writing, for example, a critic would say that self-ratings, logs of time spent, and even grades aren't truly convincing evidence that your students can write effectively. They can be very helpful, but they're indirect evidence.

 

But it would be hard for that critic to argue with papers, tests, and portfolios--accompanied by rubrics or other evaluation criteria that expect appropriate rigor.


Ideas

The current issue of Assessment Update features an Editor's Notes column by...me! I am so flattered and honored that Trudy Banta asked me to serve as a guest editor for this issue.

 

I really struggled trying to figure out what to say, however. I ended up falling back on that old adage to writers, "Write what you know." It made sense to share the advice I've been giving to colleges over the last year and a half. 

 

It also made sense to practice what I preach--in other words, base my article on systematic evidence of the advice I've been most frequently offering. So I did a content analysis of the consulting reports I've written and, boy, I was surprised by the results. My memory was that I was telling colleges, over and over, to clarify their learning outcomes and to make sure their assessments aligned with those outcomes. Wrong! The most frequent advice I gave--in virtually every report I wrote--was this: Faculty are hungry to learn. Help them!

 

So that's what I wrote for Assessment Update: "Helping Faculty Members Learn," with a framework and some practical tips on what they need to learn and how to help them.

 

Faculty often tell me, "You know, I'm doing this assessment stuff. I just need to organize and document it better." Here's my new response, based on this experience: If you're not documenting it, you're not really doing it.

On the face of it, the best article I’ve read in the last month had nothing to do with assessment or accreditation, but everything to do with teaching and helping students learn. It’s a Hubspot blog post on “7 Lessons from the World’s Most Captivating Presenters.”   The “lesson” that got me excited was from Steve Jobs: Before you start talking about what, first talk about why and how:

  • Why should I (the listener) care?
  • How will your idea make my life better?
  • What do I need to do?

This is a proposed structure for a presentation, specifically a sales pitch, and not a lecture or a class lesson. But Steve Jobs’ approach reinforces research that students learn more effectively when they see relevance in what they’re being taught. If they understand the relevance of World War I or Shakespeare or East Asian art or chemistry to their lives and their world today, for example, they’ll engage more with the subject and their learning will be deeper and more lasting.

 

I’ve always tried to incorporate the why and how into my teaching, discussing why I’m making my students complete each assignment and how what they learn will benefit them. But now I see the short shrift my teaching gives to the why and how. The article suggests that you address these three questions not only in this order but as three “acts,” giving each question roughly the same amount of time. This is where I’ve fallen short—I’ll often cover the why and how in just a few moments. Jobs’ approach suggests that, if I spend more time on the why and how, my students will be more engaged and get more out of my what, even if spending more time on the former means less time on the latter. After all, I’d rather have my students understand 15 minutes of “what” well than get only bits and pieces out of a longer presentation.

 

There are lessons here for when we work with colleagues to help them understand assessment or accreditation. Too often we plunge right into the “what” without spending much time if any on the why and how. I’m going to take a fresh look at my presentations and workshops with an eye to spending more time on the why and how. But, unlike Jobs, I’ll aim to make the discussion interactive and collaborative. We know, after all, that lectures are among the worst ways for students to learn. Jobs’ brilliant presentations are the exception that proves the rule.

I recently had the pleasure of visiting Mount Royal University in Calgary, Alberta...whose government agencies do not require student learning assessment evidence. So the faculty at Mount Royal are doing assessment not because anyone is making them do it but simply because it's a good idea! (What an amazing concept!)

 

The conversations during my visit focused on the classroom: articulating class-level learning outcomes, learning opportunities, and assessments. Only at the end did one faculty member ask, "Should we be sharing this with one another and talking about the major goals we have for our entire program?"

 

Of course. But the lesson was clear to me: If I could go back to the 1980s and rethink how we in the United States launched the higher education learning assessment movement, I would have started at the classroom level, not the program level. Getting faculty to think about two new ideas at once--doing a better job assessing student learning AND looking on an academic program as an integrated, synthesized learning experience--was probably overwhelming. I understand why we started at the program level; too many faculty feel threatened by the potential negative repercussions of disclosing what their students have learned. But starting at the class level would have created a more natural process, and faculty would likely have seen value in it much earlier.

I'm coming off a round of frustrating experiences. A noted assessment scholar published an article on accreditation with some factual inaccuracies; a president of a research university wrote a letter on accreditation with some factual inaccuracies; and the board of a professional organization that I sit on developed a strategic plan without what planners call a SWOT analysis: a systematic review of the organization's strengths and weaknesses plus environmental opportunities and threats.

 

There's a common theme here. This assessment stuff is all about doing an even better job than we're doing now, in part by bringing more systematic information to bear on plans and decisions--what I call a culture of evidence-informed planning and decision-making. Yet here we have three instances of people who should know better--who should get the facts and review and analyze them before making decisions--not practicing what we preach.

 

Part of my frustration is that these folks all have doctorates. They all know how to do research, and how to do it well, but it apparently doesn't occur to them to actually use those skills in their day-to-day work.

 

If key voices in higher education aren't doing this, what hope is there for the kind of culture change we want to promote through assessment?

As I look at accreditation actions taken by the regional accreditors this spring, I'm struck by how many colleges still haven't done much with assessment or have major misconceptions about assessment, such as grades being sufficient evidence of student learning. And I still hear from so many faculty who understand and value assessment but are stymied by a lack of support from institutional leaders...and, conversely, from administrators working with foot-dragging faculty.

 

The commonality I see among many of the people who don't "get it"--see and internalize the value and importance of assessment, beyond jumping through some accreditation hoop--is that they are often isolated from the higher ed community. Many of these colleges are specialized or serve a narrow market, and they identify not with higher education but with their niche. Some colleges affiliated with particular church denominations, for example, identify more with their church. Some career-oriented colleges identify more with the professions they educate students for. Some should be part of the mainstream higher ed community but aren't, often because institutional leaders have other priorities or values. I know of some community colleges, private liberal arts colleges, and research universities that fall into this category.

 

What I see is that people at isolated colleges don't read the Chronicle or Inside Higher Ed. While faculty may attend conferences in their discipline, people at the colleges aren't encouraged to go to higher ed conferences, such as those sponsored by AAC&U, or read publications like Change. Their board members either don't belong to AGB or don't read its publications.

 

People at these isolated institutions are often oblivious to higher ed conversations over the last two decades. They've never heard of research on student retention or practices that promote lasting learning... and this is far more important than what they don't know about assessment.

 

I wish I had an answer on how to reach these folks. Accreditors can, but often that's too late for these institutions to come up to speed before their next accreditation review. Let me know if you have any ideas! 

Academically Adrift is provoking plenty of discussion throughout American higher education, and with good reason. While there are valid concerns about the methodology, instrumentation and overreaching inferences of Richard Arum’s and Josipa Roksa’s research study, many of their conclusions are important ones that have been confirmed by others.  

We know—and should not even need a research study to confirm—that students learn more when they spend more time studying, practicing, reading, writing, and working on complex “messy” problems without ready solutions.   But, as Arum and Roksa point out, today’s students do less studying and homework than students did a generation ago. What’s sad is the number of faculty who expect and even encourage this, saying “My students have jobs and families, so I really can’t expect them to do much homework.” This line of reasoning is shortchanging our students. No wonder they’re not learning as much as they should! Should students who don’t have time to devote to homework and studying be taking the course?   

 

Arum and Roksa confirm a number of other conclusions drawn from earlier research. Students learn more effectively when they understand course and program goals and the characteristics of excellent work. They learn more effectively when they are academically challenged and given high but attainable expectations. And they learn more effectively when they engage in multidimensional “real world” tasks in which they explore, analyze, justify, evaluate, use other thinking skills, and arrive at multiple solutions. Chapter 18 of my book Assessing Student Learning: A Common Sense Guide, 2nd ed. (Jossey-Bass, 2009) includes a list of more practices that we know through research promote deep, lasting learning…and that are used too little in today’s college curricula.

In one of my favorite workshop exercises, I present a scenario of great assessment results for clearly defined goals, carefully and rigorously obtained. I then ask participants what action should be taken with the results. Their responses fall into two distinct groups. One group recognizes that great assessment results should be celebrated and publicized. The other tries to pick holes in the results. Something must be wrong if the results are this good! The assessment must have been too easy, or the evaluators not properly trained, or there must be a problem with the very small number of students who didn't do well.

 

The thinking of the second group is why, I think, higher education is under fire these days. As I said in my remarks in Indianapolis last fall, we in the higher education community are so bright, so driven, that we view anything less than perfection as failure, and perfection as unobtainable. Too rarely do we look at our assessment results and say, "These are pretty darned good."

 

If we ourselves don't accept how effective we are, we're certainly not going to be able to make a convincing case for our effectiveness to our public audiences. No wonder higher education is under fire! We haven't told the public the story of how good we are, because too often we don't see our successes ourselves.

These last two weeks have been amazing. On Monday, October 25, I had the honor of opening Trudy Banta's Assessment Institute in Indianapolis by serving as the "provocateur" (Trudy's word) for the opening plenary panel. The following morning, Inside Higher Ed published my remarks, "Why Are We Assessing?" It quickly became the publication's most frequently e-mailed article of the past month.

 

This was an amazing opportunity (thank you, Trudy, Doug, and Scott!) to share some of my ideas with a wide audience, and the comments I've received have been terrific. If you want to see any of the ideas fleshed out, some of them are covered in my book Assessing Student Learning: A Common Sense Guide, some are addressed in earlier blog posts here, and I'll be talking about the rest here in future posts. Let me know your thoughts and ideas, and stay tuned for more!

Over the last decades, we’ve consistently identified two purposes of assessment: improvement and accountability. The thinking has been that improvement means using assessment to identify problems—things that need improvement—while accountability means using assessment to show that we’re already doing a great job and need no improvement. A great deal has been written about the need to reconcile these two seemingly disparate purposes.

 

Framing assessment’s purpose as this dichotomy has always troubled me. It divides us, and it confuses a lot of our colleagues. We need to start viewing assessment as having a common purpose that everyone—faculty, administrators, accreditors, government policymakers, and others—can agree on. Actually I see not one but three common purposes that we all can and should focus on.

 

The most important purpose of assessment should be not improvement or accountability but their common aim: Everyone wants students to get the best possible education. Everyone wants them to learn what’s most important. A college’s mission statement and goals are essentially promises that the college is making to its students, their families, employers, and society. Today’s world needs people with the attributes we promise. We need skilled writers, thinkers, problem-solvers and leaders. We need people who are prepared to act ethically, to help those in need, and to participate meaningfully in an increasingly diverse and global society. Imagine what the world would be like if every one of our graduates achieved the goals we promise them! We need people with those traits, and we need them now. Assessment is simply a vital tool to help us make sure we fulfill the crucial promises we make to our students and society.

 

Too many people don’t seem to understand that simple truth. As a result, today we seem to be devoting more time, money, thought, and effort to assessment than to helping students learn as effectively as possible.

One of the most frequent questions I've gotten over the last decade is when the assessment "fad" will end. Well, the assessment "fad" is about 25 years old now! I don't know how long something can last and still be called a fad, but I'd guess it's less than 25 years!

But assessment--and accountability--won't be at the top of higher education's radar screen forever. So what will rise to the top? For the past five years or so, I've been prognosticating that the next big push will be access...and I think that's now coming to pass, with major focuses on community colleges and financial aid reform. So what will be next? My guess is that there will soon be increasing scrutiny of accelerated programs: calls for solid evidence that the depth, breadth, and rigor of learning in accelerated programs are equivalent to those of their traditional counterparts.

Why are so many of us resistant to--if not afraid of--accountability? I think it's because we fear the worst--that whatever we provide will be twisted in a negative way or somehow otherwise make us look bad.

 

Is this reasonable? Or does it make more sense to look on accountability as not a threat but an opportunity--to tell the world how good we are?

 

Virtually every college and university I've worked with has some instinct for what it does really well. Why not get systematic evidence to verify our instincts...and then use it to tell the story of our successes?

 

I think it's because we in higher education are so self-critical. We set incredibly high standards for ourselves, then view ourselves as abject failures if we don't surpass those standards. Can we accept how good we are? And, if we can't, how can we possibly tell others how good we are?

The November 2 issue of Newsweek has an excerpt from Jonathan Alter's forthcoming book on President Obama's first year. In it, Alter explains that President Obama and Secretary of Education Arne Duncan evaluated education policy by asking one question: Is it good for kids?

 

 This struck me as a great question to ask about everything we do with assessment, both for improvement and for accountability: Is what we're doing good for our students?

 

 Are our assessment methods good for our students?

 

Are our uses of assessment results good for our students?

I've had some great opportunities over the last week to talk with a number of higher education leaders and scholars about assessment and accountability. I've come away with one overriding thought:

 

We don't know what information our constituents (stakeholders, audiences, whatever) want and need from us.

 

And to make things even fuzzier,

 

We don't have a clear common sense of who those constituents are.

 

We have a few clues here and there, from surveys of opinions of higher ed, of accreditors, and the like, but I haven't seen any systematic research to identify our constituents or their needs.

 

If we don't know who they are or what they want, how can we respond to their needs?

Trudy Banta posed an interesting question to me: What are the one or two biggest issues or ideas on my mind these days about assessment? There are so many that it's hard to pare them down to just a few, but let me try:

  • So many faculty (and administrators) are oblivious to the remarkable body of research over the last 25-30 years on strategies that promote student success and deep, lasting learning...or the role that assessment plays as part of these strategies.
  • So many faculty and administrators are reasonably content with the status quo and see no need to rethink their approaches to teaching and learning...and using assessment to do so. One big obstacle is the annual US News & World Report rankings, which are sadly based almost entirely on inputs rather than outcomes. Why should an institution at the top of those lists engage in assessment when it's already gotten a gold seal of quality from an entity that many consumers respect?
  • Since the death of AAHE, assessment practitioners have had no organization that can provide a national voice for speaking out on the above and other assessment issues.
  • Assessment practitioners also have no place where assessment "newbies" can go for comprehensive information on learning about assessment. There are great resources out there, such as Ephraim Schechter's terrific website at NC State, Trudy Banta's Assessment Institute in Indianapolis, and the ASSESS listserv sponsored by the University of Kentucky, but how can people out of the loop find out about them?

Let me know if you have any other ideas!

My vote is for information literacy:

  • Recognizing the need for information to answer a question or solve a problem;
  • Identifying what information is needed to answer the question or solve the problem;
  • Finding that information from appropriate resources (which may be traditional library resources, online sources, or other sources);
  • Evaluating the information critically for credibility and relevance;
  • Using the information to answer the question or solve the problem;
  • Using the information legally and ethically, citing and acknowledging the work of others accurately.

Can you imagine how different the health care debate this summer would have been if everyone involved had practiced these essential skills?

Assessment is so full of jargon (objectives, institutional effectiveness, validity, direct evidence...and that's just for starters). Should we all use the same vocabulary? Should we have a glossary of definitions?

I think not. Instead, let's simply use words that people are comfortable with and understand intuitively. For example, "learning goals" may be less off-putting to some than "learning outcomes" or "learning competencies." Just make sure everyone understands what a good and appropriate learning goal is. Effectively-stated learning goals describe outcomes rather than processes, are clearly worded so everyone grasps their intent, use action words ("describe" rather than "understand"), and are important.


Practical Tips

I’ve seen some accreditation reports lately where the results seem too good to be true. In program after program, faculty report that they deem student learning assessment results satisfactory and see no areas for improvement. Sometimes this conclusion is based on a relatively low bar, say that 70% of students will pass the final exam. Sometimes I can’t tell how the faculty drew this conclusion—the rubric they used isn’t provided with the report.  

While I’m convinced that the vast majority of colleges are blessed with dedicated faculty who often work miracles with their students, I can’t help but be skeptical when results are uniformly positive, with no areas for improvement. What’s going on here?  

I suspect that, at these colleges, there’s no tangible incentive or reward for trying to improve one’s teaching and curriculum. If there’s no incentive to change, the thinking goes something like: Hey, change is hard work! It takes time! I just want to get this assessment stuff off my back. I don’t in any way mean to imply that anyone’s doing anything unethical here. But results can be viewed through rose-colored glasses or an analytical eye. Which do you encourage on your campus, with tangible incentives and rewards?

I've been getting a lot of questions lately about how to assess "fuzzy" goals like preparation for lifelong learning, teamwork, and ethical behavior.

 

A great resource for these and many other broad competencies is the set of VALUE rubrics developed and published by the Association of American Colleges and Universities. The challenge with many of these goals is that they're ill-defined: What must students be able to DO in order to be prepared for lifelong learning? The VALUE rubrics offer some good, concrete explications of these terms.

 

Here's the one caution I offer about using these rubrics: I see too many colleges and programs adopting them wholesale, even though they're not a perfect match with local conceptions of these outcomes. AAC&U intends them to be a starting point for discussion, with colleges adapting them rather than using them as is. They will give you good ideas. Find them at www.aacu.org or simply search for "VALUE rubrics."

Some of the colleges I’ve been working with recently have asked me to offer feedback on the processes and templates they’ve developed for faculty to use to report on learning outcomes at various levels. My main reaction has been a question: Why are you doing this? In other words, why are you asking faculty to report on each of these things in the report?

  

The immediate answer, of course, is to get through the next accreditation review. But that won’t work. Accreditors today are looking for sustained processes, ones that will persist after the accreditation review is completed. Requests for long, apparently pointless reports are the antithesis of the meaningful, useful, simple processes that endure. 

So look at the assessment reports you’re requesting across your campus with a critical eye: 

  • Who are the reports’ intended audiences? Who (beyond the accreditation reviewers) will see these reports? Why do each of those intended audiences need to see the reports?
  • What will they do with the reports, especially once the accreditation review is complete?
  • What decisions will the reports inform? Will faculty receive any constructive feedback on what’s in their reports: kudos for the things they’re doing well, constructive advice on possible improvements? 

Don’t get me wrong: Even if accreditation were out of the picture, I see potential value in annual reports on assessment efforts and results. The accountability of submitting an annual report is a bit of motivation to some to continue to move on assessment; the reports collectively paint a potentially useful picture of where the institution is regarding assessment—what’s going well, how faculty most need help; the reports can be the basis for budget requests; and individual programs can get valuable feedback, help, and support from those reviewing the reports. (I prefer a team of peers doing the review rather than an administrator.) But these benefits happen only when the report format is designed to achieve these ends.

I recently worked with a university president on identifying metrics for his institution's strategic goals. When I mentioned that he should set targets for each metric (for example, the student retention rate), he asked, "Why? Targets strike me as artificial and arbitrary. They're not worth the time."

 

As with many of the good points I've heard over the years, he got me thinking. I realized that we may need not one but two targets. One represents the red flag: the point at which you know you're in trouble. For example, what is the student retention rate that he would find unacceptably low? What would be the point at which he'd say to his team, "This is a real problem; we've got to put more effort into this."

 

The other target represents the ideal: the point at which things are acceptable. Very few if any institutions are going to have a 100% retention rate. At what point can you say, "This is, realistically, very good for us, and it's not worth investing more to improve it any further?"

 

So think about setting not one but two targets for your assessment results.
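If a concrete illustration helps, here's a minimal sketch of the two-target idea (in Python, with invented metric names and threshold numbers; nothing here reflects a real institution or a required format): a simple check of each metric against both its red-flag floor and its ideal target.

```python
# Hypothetical illustration: each metric gets a red-flag floor and an ideal target.
# The metric names and numbers below are invented for this example.
metrics = {
    "first-year retention rate": {"value": 0.78, "red_flag": 0.70, "ideal": 0.85},
    "six-year graduation rate": {"value": 0.52, "red_flag": 0.45, "ideal": 0.60},
}

for name, m in metrics.items():
    if m["value"] < m["red_flag"]:
        status = "RED FLAG: needs attention and resources"
    elif m["value"] >= m["ideal"]:
        status = "at or above the ideal: good enough; invest elsewhere"
    else:
        status = "between the red flag and the ideal: keep monitoring"
    print(f'{name}: {m["value"]:.0%} -- {status}')
```

The point is simply that the two thresholds answer two different questions: "When must we act?" and "When can we stop investing further?"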

How do you know?
07/03/2012

A recent Time cover story on Bibi Netanyahu noted that he is always asking his colleagues, "How do you know?" This group thinks this, or that country will react in that way: How do you know?

 

This is exactly the same question that accreditors ask--directly or indirectly--when they evaluate your institution. The faculty are dedicated? How do you know? Students graduate well prepared for careers? How do you know? The annual budget supports the strategic plan? How do you know?

 

As a colleague once said to me, answers without supporting evidence aren't really answers to accreditors' questions. Make sure that every statement you make to an accreditor is accompanied by information explaining "how you know."

Over the past few weeks, I've had three very different experiences, but all with a common theme.

 

#1: I've been working with an institution on the Periodic Review Report it will be submitting to Middle States later this year. The PRR is an abbreviated review of the institution that focuses on planning, assessment, and resources. I've been going back and forth with them on the PRR's organization. Their tendency is to report on assessment by saying things like, "We did this survey and here's what we found." I've been pushing them to organize the assessment section around their goals, e.g., "One of our key institutional learning goals is that students communicate effectively in writing. We assess this through papers they write in Course X and through Survey Y, and here's what we found." So the survey results would not be presented in one lump but referred to as they relate to key goals.

 

#2: I'm starting to mention more frequently in presentations and workshops something that I've mentioned in a prior blog post: my fantasy of requiring all syllabi to include a three-column chart, headed "This is what you'll learn" (i.e., the learning outcomes), "This is how you'll learn it" (i.e., the learning activities/experiences/projects), and "This is how you'll show me that you've learned it" (the submitted paper or presentation, test, etc.). I now show a slide with a mock-up of this chart. The reaction has been phenomenal--I hear repeatedly that the chart is the most useful idea participants have gotten from my sessions.

 

#3: I'm working with Elizabeth Jones at Holy Family University in Philadelphia to design a 12-credit post-master's certificate program in higher education assessment. I've designed courses before but never a whole program. I kept struggling until I hit on the following process: first list the learning outcomes we want students to achieve in the program (a pretty long list, not just key learning outcomes). Then list the assignment(s) (papers, projects, etc.) that will help students achieve each goal. Then place the assignments in courses so that the assignments and courses build on one another as students progress through the program.

 

The underlying theme of these very different experiences? It's all about goals. As I've been saying for many years, everything we do regarding teaching, assessment, and reporting on assessment should flow from and be framed by the big things we want our students to learn: our goals for student learning. The key to good teaching, good assessment, and good reports on assessment is thus having clear, effective statements of what we most want students to learn. If you're struggling with any of this, go back and take a hard look at your goals--that's probably the source of your problems.

My fantasy syllabus
02/02/2012

Today course syllabi are often required to include a list of the course's key learning outcomes. If I were in charge of setting accreditation standards, I'd be tempted to require that this be expanded into a three-column chart:

 

The first column would be titled "This is what you'll learn" and list the course's key learning outcomes.

 

The second column would be titled "This is how you'll learn this" and list the homework, classwork, papers, projects, etc., that will help students achieve that particular learning outcome.

 

The third column would be titled "This is how you'll show that you've learned this" and state the (graded) test, project, presentation, etc., in which students will demonstrate and be graded on their achievement of the learning outcome.

 

This is, in essence, a very simple version of what's often called a curriculum map. It's a great way to make sure that (1) faculty do indeed teach and assess the learning outcomes claimed on the syllabus and (2) students see the purpose of--the big things they're supposed to learn from--each assignment, which can help them achieve deeper, lasting learning.
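
If it helps to picture the chart, here is a minimal mock-up, sketched in Python purely for illustration. The three column headings are the ones described above; the outcomes, activities, and graded work in the rows are entirely hypothetical, not content from any real course.

```python
# A rough mock-up of the three-column syllabus chart described above.
# The column headings come from this post; the rows are hypothetical
# examples, not content from any real course.

headings = ("This is what you'll learn",
            "This is how you'll learn this",
            "This is how you'll show that you've learned this")

rows = [
    ("Write a clear, well-organized argument",
     "Weekly response papers; peer-review workshops",
     "Graded final research paper"),
    ("Interpret basic descriptive statistics",
     "Problem sets; in-class data exercises",
     "Midterm exam, questions 5-10"),
]

# Print the chart so each learning outcome lines up with the activities
# that teach it and the graded work that demonstrates it.
for outcome, activities, evidence in [headings] + rows:
    print(f"{outcome:<42} | {activities:<46} | {evidence}")
```

The same chart works just as well in a word processor or spreadsheet; the point is simply that every key learning outcome gets a row with all three columns filled in.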

My last blog post noted that one way to tick off an accreditor is to drown him or her by throwing everything onto your accreditation report's "wall" and seeing what sticks. As reporting moves from paper to electronic submissions, this problem is only growing. I've seen some reports whose appendices run into thousands of pages. These kinds of submissions raise a red flag to reviewers, indicating that the institution doesn't really understand what the accreditor is looking for, so it submits anything that seems even remotely related to what the accreditor has requested.

 

Jackson Kytle, one of the Middle States Commissioners, has a wonderful statement about this: Accreditors are concerned with an institution's really big goals, not what's happening "in every tiny corner." Yes, a regional accreditor needs to see that assessment is happening in each academic program and general education requirement, but most regional accreditors don't want reports on every single course and every single institutional activity. If a program has 12 expected learning outcomes, focus on only the most important ones (perhaps 3 to 6). If your strategic plan has 376 initiatives to achieve its six strategic goals (and, yes, I've seen this), don't report on the 376 initiatives but instead on your overall progress toward those six goals.

 

This makes not only the reviewer's life easier but yours as well. Someone said to me a good 20 years ago that, if you have more than half a dozen strategic goals, you are spreading yourself too thin and you won't accomplish any of them very well. So if you find yourself reporting on hundreds of goals, step back and ask yourself if you and your institution would benefit from a clearer, tighter focus with fewer goals.

This is my last post as a vice president at the Middle States Commission on Higher Education. I'm retiring at the end of December and, effective January 1, 2012, I'll be offering consulting services on accreditation as well as on assessment. As I make this transition, it seems a good time to share five ways to tick off a volunteer peer evaluator reviewing your institution.

 

1. Fill your report with sweeping generalizations ("faculty are dedicated to teaching," "students thrive here both academically and in terms of personal development") and provide no supporting evidence.

 

2. Create your report by pasting together pieces from various individuals and groups. Don't worry about inconsistent information scattered throughout the report, such as one set of graduation rates at the beginning and different figures later on.

 

3. Make it as hard as possible for the reviewer to find evidence of compliance with the accreditor's standards. Never mind that the reviewer is doing this as a volunteer and has a day job. Throw into the appendices everything but the kitchen sink--anything that remotely looks like it's related to, say, assessment. Provide only raw evidence with no summaries or analyses. Attach every faculty member's resume, for example, and leave it to the reviewer to read them all and decide if the faculty are appropriately qualified.

 

4. Only answer the accreditor's questions or requests that you want to answer. Just ignore the rest.

 

5. If you think an accreditor's standard or request is unnecessary or inappropriate, or if you disagree with a peer evaluator's conclusions, keep your tone as arrogant and condescending as possible. Better yet, be downright snarky!

I had a real treat (for me) a couple of weeks ago...a rare request to conduct a workshop on writing and improving multiple choice tests. Multiple choice tests have really fallen out of favor among assessment enthusiasts over the last couple of decades, to the point that no one seems to want to even talk about them. But they can have a place in the assessment world; they can give us a broad picture of student learning and can be scored and evaluated quickly. And, yes, they can assess application and analysis skills as well as memory and comprehension.

 

The key is to write questions that could be answered in an open-book, open-note format...ones that require students to think and apply their knowledge rather than just remember. (The way I put it is students can bring anything except a friend or the means to communicate with one.)

 

My favorite way to do this is with what I call "interpretive exercises" and others call "vignettes," "context-dependent items" or "enhanced multiple choice." You've seen these on published tests. Students are given material they haven't seen before: a chart, a description of a scenario, a diagram, a literature excerpt. The multiple choice questions that follow ask students to interpret this new material.

So many institutions are sitting on piles of assessment information and not sharing or using it in a meaningful way. How can we get people talking about assessment results? One key is to find ways to share results in simple, succinct forms, so busy people and math-phobes can quickly and easily digest them. Some tips for doing this (with a small illustrative sketch after the list):

  • Don't feel obliged to share all results. Pull out just those results that relate to key learning outcomes or other key goals.
  • Don't feel obliged to present the results in the order in which they appeared on the original rubric or survey. Sort them from highest to lowest, so readers can quickly identify areas of relative strength and weakness.
  • Round numbers, both to limit the number of digits you present and to keep readers from focusing on trivial differences.
  • Use charts more than narrative text, and use graphs more than charts. Most people can absorb them more quickly.
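
For anyone who assembles these summaries with a script rather than by hand, here is a minimal sketch of the sorting-and-rounding tips above. The outcome names and percentages are made up purely for illustration.

```python
# Hypothetical rubric results: percent of students meeting the standard
# on each key learning outcome.
results = {
    "Written communication": 83.4,
    "Critical thinking": 67.9,
    "Quantitative reasoning": 91.2,
    "Teamwork": 74.6,
}

# Sort from highest to lowest so areas of relative strength and weakness
# jump out, and round to whole percentages so readers don't fixate on
# trivial differences.
for outcome, pct in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{outcome}: {round(pct)}% met the standard")
```

From there, a simple bar graph of those same few numbers will usually communicate even faster than the table itself.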


The latest paper from the National Institute for Learning Outcomes Assessment is "Exploring the Landscape: What Institutional Websites Reveal About Student Learning Outcomes Assessment Activities." Unlike NILOA's earlier papers, this one got me questioning more than learning. The study examined what information--if any--colleges and universities publish on their websites about learning outcomes assessment. It found that much assessment information isn't publicly posted, that survey results are the most commonly posted assessment information, and that the target audience is largely internal. The paper concludes that institutions should post more information "to meet transparency obligations and responsibilities" but doesn't really discuss what those transparency obligations and responsibilities are. That's a critical question that shouldn't be brushed under the carpet.

 

I'm a great believer that form should follow function and that there's no point in posting assessment information publicly without first having clear answers to the following questions:

 

Who wants or needs to see this learning outcomes information?

 

Why do they want or need to see it?

 

What decisions are they making that these assessment results should inform?

Keeping it simple
05/14/2010

After a horrendous year of financial cutbacks, understaffing, and general stress, everyone I'm seeing these days is plain wiped out, if not feeling a bit beat up into the bargain. Keeping up the assessment momentum is always a challenge, but never more so than after a year like we've all had.

 

This is a great time to make a pledge to do everything we can to simplify assessment next year. Here are some questions you can ask yourselves:

  • Are we keeping the goals we assess to a manageable number? Are we focusing our assessments on just our most important learning goals? (See my earlier blog for more suggestions on identifying goals to assess.)
  • Are we keeping our assessment tools as simple as possible? Are we asking students to answer 30 survey questions when six or eight might do?
  • Are we being realistic in our expectations for quality? Are validity studies and reliability indices really necessary, for example? Are we evaluating 200 portfolios when 50 might give us enough insight? (See the sampling sketch after this list.)
  • Are we trying to "sync" various accountability and accreditation processes such as regional or national accreditation, specialized accreditation, and program review?
  • Most important: Are we conducting only assessments that are useful to us? If you've found that an assessment you've been conducting isn't helpful, stop doing it and start doing something else!
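
On the portfolio question in the checklist above, a straightforward random sample is often all you need. Here is a minimal sketch, assuming a hypothetical set of 200 portfolios identified only by number.

```python
import random

# Hypothetical: 200 student portfolios, identified here only by number.
all_portfolios = [f"portfolio_{n:03d}" for n in range(1, 201)]

# Score a random sample of 50 instead of reading all 200. Setting a seed
# keeps the sample reproducible if colleagues want to re-check the draw.
random.seed(2011)
sample = random.sample(all_portfolios, 50)

print(f"Scoring {len(sample)} of {len(all_portfolios)} portfolios")
```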

So many surveys end up sitting on a (perhaps virtual) shelf! Yes, some people may look at the results, but nothing really happens as a result. Why? One of the main reasons is that the surveys don't have a clear purpose. Before you send out a survey, look at every single question and ask yourself, "What are we going to do if the answers are X? What are we going to do if the answers are Y?" If you can't come up with a clear sense of the kinds of decisions that the results will inform, there's probably not much point in asking the questions!

There are just two precepts for writing good multiple choice items.

First, remove all barriers that will keep a knowledgeable student from getting the item right.

Second, remove all clues that will help a less-than-knowledgeable student get the item right.

Five specific suggestions:

  1. Don’t make vocabulary unnecessarily difficult.
  2. Make sure the “stem” (question) asks a complete question.
  3. Don’t ask questions about trivia.
  4. Avoid grammatical clues to the right answer.
  5. Use common misconceptions or stereotypes as incorrect options.

 

If you're dealing with faculty who are highly resistant to assessment (and who isn't?), approach them with a spirit of value and respect. Recognize that they are already doing good things and find ways to value and respect their views, their approaches to teaching, and their goals for their students. Look for assessment strategies that build on what they're doing already rather than asking them to develop a new assessment strategy, such as a rubric, that may be foreign and therefore off-putting to them.

A question on today's ASSESS listserv asked how many learning goals a program should assess, and I'd suggest being flexible. Some programs are required by specialized accreditors to have dozens of goals, and I once saw a finance program with just one (great) goal. But as a general rule, I suggest assessing just 3-6 goals to start.


Readings

The current issue of Assessment Update features an Editor's Notes column by...me! I'm so honored that Trudy Banta asked me to guest-write this column. The topic was unexpected...another blog entry today (under the Ideas category link above) provides some background on how it came to be.


With staffing and resources stretched more than ever these days, there's never been a better time to take stock not only of how useful our assessments have been, but also of how much time and money has been invested in them, and whether their value has been worth that investment.

 

Just in time, Randy Swing and Chris Coogan at the Association of Institutional Research have written a great paper on understanding the costs and benefits of assessment:

 

Swing, R.L. & Coogan, C.S. (2010, May). Valuing assessment: Cost-benefit considerations. (NILOA Occasional Paper No. 5). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.


If you or someone you know would benefit from a great overview of the current state of student learning assessment in American higher education, look no farther than the first two occasional papers from the new National Institute for Learning Outcomes Assessment, now headed by Peter Ewell and George Kuh. The first one, Peter Ewell's "Assessment, Accountability, and Improvement: Revisiting the Tension," is a solid review of the history of the assessment movement and some of the major issues facing us today. The second one is "Three Promising Alternatives for Assessing College Students' Knowledge and Skills" by Trudy Banta, Merilee Griffin, Teresa Flateby, and Susan Kahn. It's must reading for anyone who thinks published tests are the only valid way to assess student learning.

The best assessment book on the market (after my own, of course!) is the terrific second edition of Effective Grading by Barbara Walvoord and Ginny Anderson. It's perfect for faculty who don't understand the difference between--or the relationship between--the grading processes they've been using for a lifetime and this new-fangled assessment stuff. Barbara and Ginny get faculty to think about grading and assessment as ways to think about what and how they are teaching--the true purpose of assessment. Plus the book is written in such a simple, straightforward manner that you can literally curl up with it when you go to bed and learn so much.

I'm going to be shamelessly self-promotional here...

 

I'm really excited about the second edition of my book Assessing Student Learning: A Common Sense Guide, which was published by Jossey-Bass in March 2009. Jossey-Bass calls it a "landmark book" and "the standard reference for college faculty and administrators."

 

The idea for this came to me when I was at the old American Association for Higher Education. People were always asking me for recommendations for a good soup-to-nuts primer on assessment...so I wrote one! While there are now a number of wonderful books on assessment, this is still the only one that covers everything from organizing an assessment effort to rubrics, multiple choice tests, published instruments, setting standards, and communicating results.

 

People tell me one of the things they like the best about the book is its plainspoken style. I've really tried to strip out all the jargon and "academese" and make assessment approachable, doable...and maybe even fun!

 

This new edition is a major reorganization and rewrite of the first edition, with a lot of new material and several new chapters. Here's the table of contents:

  1. What is assessment?
  2. How can student learning be assessed?
  3. What is good assessment?
  4. Why are you assessing student learning?
  5. The keys to a culture of assessment: Tangible value and respect
  6. Supporting assessment with time, infrastructure, and resources
  7. Organizing an assessment process
  8. Developing learning goals
  9. Using a scoring guide or rubric to plan and evaluate an assignment
  10. Creating an effective assignment
  11. Writing a traditional test
  12. Assessing attitudes, values, dispositions, and habits of mind
  13. Assembling assessment information into portfolios
  14. Selecting a published test or survey
  15. Setting benchmarks or standards
  16. Summarizing and analyzing assessment results
  17. Sharing assessment results with internal and external audiences
  18. Using assessment results effectively and appropriately
  19. Keeping the momentum going

The book can be purchased through www.josseybass.com or any major retailer like www.amazon.com or www.barnesandnoble.com.


***