
What Is Learner-Centered?

Originally posted by Rochelle Diogenes on Acrobatiq.

While there doesn’t seem to be a single definition of the student- or learner-centered approach in higher education, Barbara McCombs, author of two books on learner-centered teaching, provides a comprehensive definition that includes three features, each quoted and discussed below:

The core of the LCM [Learner-Centered Model] is that all instructional decisions begin with knowing who the learners are – individually and collectively.

Instructors need to take into account who they are teaching. Each student comes to class with their own past—academically and experientially. They also come with their own goals. Not everyone will succeed in the same way and with the same type of instruction. Personalized learning data is key to understanding and supporting this aspect of the student-centered approach. Instructors can obtain this data by analyzing each student’s work and engaging with them.

Courseware that incorporates personalized learning (see previous post, One Size Fits All…Not) makes this process easier and more productive. The data instructors obtain from courseware helps them reach individual students and the class as a whole in real time. This allows instructors to use their time in a more focused way to move the whole class forward.

This [the first tenet] is followed by thoroughly understanding learning and how best to support learning for all people in the system.

Approaches to student-centered learning are innovative and varied. They usually fall into these categories:

  • Activity-based learning such as discovery exercises, exchange of ideas (in person or online), simulations, problem-based learning, and project-based learning
  • Choice such as students choosing assignments, when and where they study, how they want to approach a topic, and deadlines
  • Collaboration such as team-based learning and peer exchanges
  • Real-world challenges such as problem-solving and community outreach
  • Metacognition such as transparency of progress and learning pathways, reflection on learning, and self-motivation

Quality courseware includes most if not all of these types of support for learner-centered programs.

Decisions about what practices should be in place at the school and classroom levels depend upon what we want learners to know and be able to do.

Learning outcomes based on instructor-determined teaching goals are integral to the success of student-centered learning. The student-centered approach changes but doesn’t eliminate the role of the instructor in the learning equation. While the instructor’s role is no longer mainly about transferring knowledge, it’s still about determining what students should learn and how they learn it.

At the institutional level, faculty coming together on how to implement the student-centered approach strengthens its potential for success. Creating learning outcomes across departments and connecting them to institutional outcomes is important. Faculty have also begun to value personalized courseware that works across subject areas so that students engage with a consistent method of learning.

What we see as innovative for instructors is also innovative for students, particularly those in higher education today who are used to more traditional methods of learning. The more practice learners get at student-centered learning, the more impactful the approach will be. And that applies to those implementing it as well.


Thumbs Up for Blended Learning

Originally posted by Rochelle Diogenes on Acrobatiq.

Blended or hybrid learning has come a long way from its original concept of brick (classroom) and click (e-learning) in 1999. Just using some media with students doesn’t make it a blended approach anymore.

Now, blended learning is usually described as the integration of adaptive courseware that yields learning analytics with face-to-face learning situations such as class lectures, tutoring, or discussion groups, all to advance learning. Penn State professor Ike Shibley advocates for blended learning:

“When you see how well blended learning fits with established pedagogical paradigms, creating a synergistic blend of what works best in face-to-face and online, the question becomes why wouldn’t you want to at least try it?”

Maybe because we still have to dispel some myths about blended learning:

Myth #1 Blended learning isn’t as good as traditional approaches.

On the contrary, research confirms that blended learning is more effective than online learning alone or classroom learning without technology. A 2010 US Department of Education meta-analysis of 84 studies (79 with higher education or adult learners) concluded that blended learning is much more effective in achieving learning outcomes than face-to-face instruction alone.

Myth #2 Blended learning requires fewer faculty.

Not true. The 2010 study cited above found that students using courseware received more “learning time and instructional elements” than those who didn’t use courseware.

When instructors use courseware learning analytics on individual and group progress to inform teaching, they spend less time in front of the classroom, but they spend more time in targeted communication with students.

This aspect of quality blended learning became clear in a recent pilot program with courseware in math at New Jersey’s Essex County College.

In the one-year pilot, fewer students passed in the blended course than in the traditional course. Lack of genuine faculty involvement was cited as one of the major contributing factors to the pilot’s shortcomings. Essex County College thought it could simply use graduate students to teach segments of the blended course. According to Douglas Walcerz, a program consultant, “We underestimated the skill that you would need as a teacher to deliver that content.”

Myth #3 Blended learning creates more ongoing work for instructors.

As happens with most changes, startup takes time. Once instructors take the plunge, however, teaching a blended course is no more time-consuming than teaching a traditional course.

How much time it takes to make the transformation also depends on which approach you take. Instructors who create all of their online materials will do the most work.

That’s why in her insightful post, Blended Learning on the Ground: Advice from College Educators, Jennifer Spohrer advises against starting from scratch. She suggests instructors new to blended learning “stand on the shoulders of giants” and use pretested online products from education technology companies as the foundation for their courses.

Finally, there are added benefits to blended learning (see previous blog, What’s a Seventeen-Year-Old to Do?), including those articulated by learning and development professionals in a 2013 survey:

“…it’s critical to foster lasting learning. It helps ideas stick and creates an air of accountability that is critical to learner success.”

“Blended solutions deliver customization and focus on individual needs which traditional methods just can’t match.”

Formative Assessment in a World of Learning Outcomes

Consider this scenario: You’re teaching language arts to a middle school special ed class. The learning objective is to write a story about making something. While you go through the provided writing sample about children building a clubhouse, your students get more excited about the clubhouse than writing a story. They ask to build a clubhouse. Do you make them write the story or do you let them build a clubhouse first?

If you go with the clubhouse, you’re delaying writing the story, and you may not have time to fulfill all the learning objectives embedded in your curriculum. On the other hand, if you decide, as I did, to build your lesson on your students’ spontaneous enthusiasm, you are choosing to write in additional learning objectives involving commitment, collaboration, and problem-solving before writing the story. And you must alter your teaching plans to achieve them.

My decision was based on formative assessment, or assessment for learning. Paul Black and Dylan Wiliam wrote the classic definition of formative assessment in 1998:

…the term ‘assessment’ refers to all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.

This definition holds true for higher education even though Wiliam’s continuing work is with teachers in K-12. He emphasizes that many strategies can be successful as long as we remember that “the big idea is to use evidence-based learning to adapt instruction to meet student needs.” I encourage you to watch his exceptional talk, Assessment for Learning.

Education technology offers us valuable tools for assessment. Evidence-based programs can quickly adapt instruction based on feedback from student learning. These programs also help instructors adjust their class instruction because aggregate data is available in real time (see my earlier blog, What’s a Seventeen-Year-Old to Do?).

But there is a downside. Since, like all effective formative assessment, adaptive learning programs tie instruction and feedback to learning outcomes, the learning outcomes in adaptive programs are predetermined. Formative assessment means changing student learning pathways: more material for a struggling student, less for an excelling one. But all pathways lead to the same goal.
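To make that concrete, here is a minimal sketch of the kind of routing logic involved. It is written in Python purely for illustration; the function name next_activity, the 0.8 mastery threshold, and the activity labels are hypothetical, not drawn from any particular adaptive courseware.

  # Purely illustrative sketch: an adaptive program adjusts the pathway,
  # but the learning outcome itself stays fixed in advance.

  def next_activity(score, mastery_threshold=0.8):
      """Route a student after a formative check.

      score: proportion correct on the latest formative assessment (0.0 to 1.0).
      The outcome never changes; only the path toward it does.
      """
      if score < mastery_threshold:
          # Struggling student: more material on the same outcome.
          return "additional practice and worked examples"
      # Excelling student: less material, straight on toward the same outcome.
      return "advance to the next unit"

  # Two students, two pathways, one predetermined goal.
  for student, score in [("Ana", 0.55), ("Ben", 0.92)]:
      print(student, "->", next_activity(score))

Whichever branch a student takes, the goal in this sketch is fixed in advance; building in the flexibility to revise the outcome itself is the point taken up below.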

The movement for student competencies and consistency in higher education also rests on predetermined learning outcomes. While these trends have merit, we need to be careful not to let them entrench us in rigid practices that deter instructors from going “off-script” and tapping into students’ enthusiasm and innovative ideas; these, too, are worthwhile in the learning environment. (When you look back, isn’t it the off-script instructors who influenced you the most?)

As we develop and use technology to get more precise evidence-based snapshots of student progress, we need to build in flexibility so that formative assessment based on student feedback can modify learning outcomes as well as learning pathways.

Multiple Choice Questions

A multiple choice question begins with a stem, or lead-in, which is answered by choosing the correct response from a list of alternatives. Writing a good multiple choice question that elicits an answer based on knowledge, not guessing or misunderstanding, is an art. For example:

Who was the twentieth president of the United States?

  1. Rutherford B. Hayes
  2. James A. Garfield
  3. Chester A. Arthur
  4. Grover Cleveland

This question tests recall of the twentieth president. The stem is parsimonious, including only the ideas and words necessary to answer the question. The “distractors” are parallel, plausible answers: all presidents from around the same time. Compare it to this question:

Choosing the first president of the United States was a tremendous responsibility. He would set precedents for subsequent office holders. The Electoral College unanimously elected George Washington, who had led the colonies to victory against the British. James Madison, who was married to Dolley, was the fourth president. Who was the twentieth?

  1. James Brown
  2. LeBron James
  3. James Garfield
  4. James Bond

In this question, the stem is overwritten with information you don’t need to answer the question correctly. The irrelevant information may be testing your reading comprehension more than your knowledge of the twentieth president. Even if you know the correct answer, you may get it wrong because you can’t get through the reading.

The distractors are implausible. If the correct answer is embedded in a group of possibilities that are totally outlandish, you will get the right answer not because you’ve learned it, but because you can use general knowledge to eliminate the others. That’s a bad question.

If written correctly, a multiple choice question can be very effective at proving mastery in Bloom’s elementary cognitive categories of remembering and understanding, and to a lesser extent in the third category, applying (see previous blog, Learning Objectives in Higher Education).

According to Cathy Davidson, educator Frederick J. Kelly introduced multiple choice tests in 1914. They were intended to make grading more equitable: teacher bias, as well as individual differences such as wealth or poverty, would not prevent a student from being graded fairly. Multiple choice questions also made grading less time-consuming for teachers, freeing them to do more instruction. Incorporated in standardized tests, multiple choice questions allowed us to compare student proficiency in different areas of the country. Good goals, right?

Don’t we share these goals today: to evaluate students without bias, to give them equal opportunity regardless of where they live or learn, and to free instructors to spend more time teaching and interacting with students? So why are multiple choice questions criticized so much?

Davidson says it’s because we try to use multiple choice questions in areas where they don’t work, such as

…problem solving, collaborative thinking, interdisciplinary thinking, complex analysis, the ability to apply learning to other problems, complexity…creativity, imagination, originality…

Demonstrating these types of learning (Bloom’s applying, analyzing, evaluating, and creating) requires more than picking out the right answer, if there even is a “right” answer. Kelly created multiple choice questions to measure basic skills important to twentieth-century American work and citizenship. He admitted that they only tested “lower-order” thinking.

Extending the multiple choice format to measure higher-order thinking results in many flawed questions. Piled one on top of the other in repetitive quizzes or long tests, these ill-conceived items become anxiety-provoking, deadening experiences for students. In this context, they are weak indicators of student learning achievement.

Through digital programming we have the potential to create robust profiles of students showing how they process, retain, and apply information. This gives us the opportunity to approach the challenge of assessing student performance from a fresh perspective, one that may even use testing rarely. Let’s start by identifying the problem we want to solve: How do we make sure that students have learned what they need to learn to be successful in the world?

Now to test your understanding:

Which statement best describes this blog writer’s point of view?

  1. Multiple choice questions are easy to write.
  2. Multiple choice questions test critical thinking.
  3. We should rethink how we assess learning.
  4. We should never use multiple choice questions.