Tag Archives: big picture

One Size Fits All…Not

Originally posted by Rochelle Diogenes on Acrobatiq.

Adaptive learning is a key strategy in higher education today (see previous blog, What’s A Seventeen-Year-Old to Do?). Research shows that online courseware based on personal learning data has increased success for diverse students. It’s clear that in education one “size” does not fit all.

While this research and practice have made an impression on me, others continue to debate the pros and cons of tailoring programs to individual learner needs. Looking for more confirmation, I found a study from a source outside of education that underscores the need for a non-uniform approach.

A recent BuzzFeed article, This Is What “One Size Fits All” Actually Looks Like on All Body Types, describes a test of consumer reaction to the trend of replacing delineated sizing such as 10, 12, and 18 with clothing in one size that companies advertise will fit everyone or, as one company says, “most.”

In BuzzFeed’s experiment, they asked five young women, sizes 0 through 18, to try on samples of the same outfit produced as “one size” and compared how they fit. BuzzFeed showed the results through photos and the participants’ comments.

The physical outcomes of the fittings could be anticipated. A skirt fit only one leg on half the women. A shirt that fit one woman looked like a dress on others. Clearly, to fit physically, the clothes had to be altered to individual characteristics.

What was surprising were the women’s comments on how the experience affected them psychologically. It wasn’t just about how they looked; they all talked about how the experience made them feel. I took the liberty of substituting education phrases in a representative response [original wording appears in brackets]:

Allison [size 0]: “There’s clearly no such thing as one size fits all! Everyone has a different way of learning [shape], and higher education [clothing stores] should embrace that instead of making people feel shitty for not being able to succeed [fit] following what they deem to be a universal learning pathway [size]. ‘One size fits all’ sends a message that if you don’t learn successfully in their programs [fit into the clothing], whether it’s too advanced [big] or too slow-paced [small], you’re not ‘normal,’ and leads to all sorts of feelings of [body] dissatisfaction with how smart you are and how successful you can be.”

Kind of eerie that the message for clothing and education can be the same. Yes, education is more complicated; you can’t look in a mirror to see how a course fits you. But over time you will feel the psychological effects of the right or wrong fit.

Which brings us back to why we should continue to move towards adaptive and personalized learning, online and in the classroom: these strategies put learning in a context that supports all students without stigmatizing them for starting at different levels or coming from diverse backgrounds. And a positive environment motivates learning.

As Lara [size 4/6] says: “We’re all different, so the idea of ‘one size’ for all of us is just absurd. Different minds [bodies], unite!”

Formative Assessment in a World of Learning Outcomes

Consider this scenario: You’re teaching language arts to a middle school special ed class. The learning objective is to write a story about making something. While you go through the provided writing sample about children building a clubhouse, your students get more excited about the clubhouse than writing a story. They ask to build a clubhouse. Do you make them write the story or do you let them build a clubhouse first?

If you go with the clubhouse, you’re delaying writing the story, and you may not have time to fulfill all the learning objectives embedded in your curriculum. On the other hand, if you decide, as I did, to build your lesson on your students’ spontaneous enthusiasm, you are choosing to write in additional learning objectives involving commitment, collaboration, and problem-solving before writing the story. And you must alter your teaching plans to achieve them.

My decision was based on formative assessment, or assessment for learning. Paul Black and Dylan Wiliam wrote the classic definition of formative assessment in 1998:

…the term ‘assessment’ refers to all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.

This definition holds true for higher education even though Wiliam’s continuing work is with teachers in K-12. He emphasizes that many strategies can be successful as long as we remember “the big idea is to use evidence-based learning to adapt instruction to meet student needs.” I encourage you to watch his exceptional talk, Assessment for Learning.

Education technology offers us valuable tools for assessment. Evidence-based programs can quickly adapt instruction based on feedback from student learning. These programs also help instructors alter their class instruction because aggregate data is available in real time (see my earlier blog, What’s a Seventeen-Year-Old to Do?).

But there is a downside. Since, like all effective formative assessment, adaptive learning programs tie instruction and feedback to learning outcomes, the learning outcomes in adaptive programs are predetermined. Formative assessment means changing student learning pathways: more material for a struggling student, less for an excelling student. But all pathways lead to the same goal.
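To make that concrete, here is a minimal sketch of the pathway logic such programs typically follow. The function name, branching rules, and mastery threshold are hypothetical, for illustration only, not any particular vendor’s API:

```python
# Minimal sketch of adaptive pathway selection (hypothetical names and
# threshold). Note: the pathway adapts, but the outcome does not.

MASTERY_THRESHOLD = 0.8  # assumed cut-off for "mastered"

def next_activity(outcome: str, mastery_estimate: float) -> str:
    """Choose the next activity for one predetermined learning outcome."""
    if mastery_estimate >= MASTERY_THRESHOLD:
        return f"advance beyond '{outcome}'"        # excelling: less material
    if mastery_estimate >= 0.5:
        return f"targeted practice on '{outcome}'"  # on track: keep practicing
    return f"remedial material for '{outcome}'"     # struggling: more material

# Three different students, three different pathways, one fixed goal.
for score in (0.3, 0.6, 0.9):
    print(next_activity("write a story about making something", score))
```

However the scores vary, every branch terminates at the same predetermined outcome, which is exactly the constraint at issue here.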

The movement for student competencies and consistency in higher education also rests on predetermined learning outcomes. While these trends have merit, we need to be careful not to let them entrench us in rigid practices that deter instructors from going “off-script” and tapping into students’ enthusiasm and innovative ideas; these, too, are worthwhile in the learning environment. (When you look back, isn’t it the off-script instructors who influenced you the most?)

As we develop and use technology to get more precise evidence-based snapshots of student progress, we need to build in flexibility so that formative assessment based on student feedback can modify learning outcomes as well as learning pathways.

The Last 20%

Originally posted by Rochelle Diogenes on Acrobatiq.

When Pittsburgh Steelers’ James Harrison wrote on Instagram (#harrisonfamilyvalues) that he was returning his sons’ participation trophies because they were awarded “for nothing,” he probably wasn’t aware that his values about the feedback his sons got resonate with the views of an educator halfway around the world.

Australian professor John Hattie found teacher feedback to be one of the top factors helping students bridge the gap between trying and achievement. His findings are based on meta-analyses of 50,000 studies involving over 200 million students.

What does high-quality feedback look like? It’s clear, dynamic, and specific so that students can address their weaknesses to attain their goals. A trophy for participation doesn’t do that. According to Hattie, worthwhile feedback answers these questions:

Where am I going? Students need to have a clear understanding of what the goal is, how to achieve it, and its benefits. For Harrison’s sons, participation was a means of reaching the goal of excelling or winning in athletics. Getting a trophy before you reach your goal could actually undermine working towards achievement.

How am I going? Feedback should give students a realistic picture of their progress, what they have accomplished, and what they need to work on.

If Harrison’s sons had gotten productive feedback, it would have included acknowledgment of the skills they acquired and evaluation of specific skills they need to improve. Not having that kind of feedback robbed Harrison of the opportunity to discuss and practice skills with his sons. This type of progress report is extremely successful in moving students forward.

Where to next? This feedback illuminates learning pathways for students. When teachers outline specific steps such as engaging in new activities, working with peers, or just plain practice, they are showing faith in the student to do better. In this context, “I am not good at math” doesn’t hold. Instead, it’s “I didn’t understand this problem today.” This approach leads students to forget they “failed” and focus on how to do better.

Notice that there is no mention of raising student self-esteem. It’s all about the task. According to Hattie, confidence and pride grow from achievement. Productive feedback is not personal; it’s individualized.

If, as Woody Allen says, “eighty percent of success is showing up,” then it’s the last 20% that gets you significant achievement. Hattie reveals that any program or method of teaching can show some success: students will show some improvement from the beginning to the end of the year. But that doesn’t mean the program is the best one for your students.

Technology can offer the types of feedback Hattie advocates to help students conquer the challenges in that last 20%. To help your students reach the trophy level in their endeavors, here are some questions about feedback to keep in mind when you evaluate a digital learning program:

  • Does it include pedagogically sound learning objectives?
  • Is there targeted feedback specific to skills throughout the program?
  • Is the program adaptive, providing new varied content pathways tailored to each student?
  • Does it share learning data with students and instructors in real time?

Multiple Choice Questions

A multiple choice question begins with a stem or lead-in that is addressed by a correct response chosen from a list of alternatives. Writing a good multiple choice question that elicits an answer based on knowledge, not guessing or misunderstanding, is an art. For example:

Who was the twentieth president of the United States?

  1. Rutherford B. Hayes
  2. James A. Garfield
  3. Chester A. Arthur
  4. Grover Cleveland

This question tests recall of the twentieth president. The stem is parsimonious, including only the ideas and words necessary to answer the question. The “distractors” are parallel, plausible answers: all presidents from around the same time. Compare it to this question:

Choosing the first president of the United States was a tremendous responsibility. He would set precedents for subsequent office holders. The Electoral College unanimously elected George Washington, who had led the colonies to victory against the British. James Madison, who was married to Dolley, was the fourth president. Who was the twentieth?

  1. James Brown
  2. LeBron James
  3. James Garfield
  4. James Bond

In this question, the stem is overwritten with information you don’t need to answer the question correctly. Irrelevant information may test your reading comprehension more than your knowledge of the twentieth president. Even if you know the correct answer, you may get it wrong because you can’t get through the reading.

The distractors are implausible. If the correct answer is embedded in a group of possibilities that are totally outlandish, you will get the right answer not because you’ve learned it, but because you can use general knowledge to eliminate the others. That’s a bad question.
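To sum up the anatomy in one place, here is a minimal sketch of a well-formed item represented as a simple data structure; the class and field names are hypothetical, for illustration only, not any testing platform’s format:

```python
# Minimal sketch of a multiple choice item's anatomy (hypothetical structure).
from dataclasses import dataclass

@dataclass
class MultipleChoiceItem:
    stem: str                # parsimonious lead-in: only what's needed
    alternatives: list[str]  # the correct response plus parallel distractors
    answer_index: int        # position of the correct response

item = MultipleChoiceItem(
    stem="Who was the twentieth president of the United States?",
    alternatives=[
        "Rutherford B. Hayes",  # plausible: 19th president
        "James A. Garfield",    # correct: 20th president
        "Chester A. Arthur",    # plausible: 21st president
        "Grover Cleveland",     # plausible: 22nd and 24th president
    ],
    answer_index=1,
)
```

No data structure can supply the plausibility of the distractors; that judgment remains the question writer’s art.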

If written correctly, a multiple choice question can be very effective at demonstrating mastery in Bloom’s elementary cognitive categories of remembering and understanding, and to a lesser extent in the third category, applying (see previous blog, Learning Objectives in Higher Education).

According to Cathy Davidson, educator Frederick J. Kelly introduced multiple choice tests in 1914. They were intended to make grading more equitable: teacher bias and individual differences such as wealth or poverty would not prevent a student from being graded fairly. Multiple choice questions also made grading less time-consuming for teachers, freeing them to do more instruction. Incorporated in standardized tests, they allowed us to compare student proficiency in different areas of the country. Good goals, right?

Don’t we share these goals today: to evaluate students without bias, to give them equal opportunity no matter where they live or learn, and to free instructors to spend more time teaching and interacting with students? So why are multiple choice questions criticized so much?

Davidson says it’s because we try to use multiple choice questions in areas where they don’t work, such as

…problem solving, collaborative thinking, interdisciplinary thinking, complex analysis, the ability to apply learning to other problems, complexity…creativity, imagination, originality…

Demonstrating these types of learning (Bloom’s applying, analyzing, evaluating, and creating) requires more than picking out the right answer, if there even is a “right” answer. Kelly created multiple choice questions to measure basic skills important to twentieth-century American work and citizenship. He admitted that they only tested “lower-order” thinking.

Extending the multiple choice format to measure higher-order thinking results in many flawed questions. Piled one on top of the other in repetitive quizzes or long tests, these ill-conceived items become anxiety-provoking, deadening experiences for students. In this context, they are weak indicators of student learning achievement.

Through digital programming we have the potential to create robust profiles of students showing how they process, retain, and apply information. This gives us the opportunity to approach the challenge of assessing student performance from a fresh perspective, one that may even use testing rarely. Let’s start by identifying the problem we want to solve: How do we make sure that students have learned what they need to learn to be successful in the world?

Now to test your understanding:

Which statement best describes this blog writer’s point of view?

  1. Multiple choice questions are easy to write.
  2. Multiple choice questions test critical thinking.
  3. We should rethink how we assess learning.
  4. We should never use multiple choice questions.

Do You Inter-Mind?

Before the ubiquity of the Internet, getting the answer to a question such as “When was the computer invented?” could take a long time and some serious effort. You might call a friend, read about it in a book, or even go to the library. Now answers can be as close as your nearest digital device.

In a Scientific American article, “The Internet Has Become the External Hard Drive for Our Memories,” psychologists Daniel Wegner and Adrian Ward discuss what using the Internet can mean for human cognitive abilities. They asked students to research trivia online and then tested them on recall. They found that students who used the Internet believed they were smarter when they gave the right answers than students who answered without it. The researchers’ conclusion:

These results hint that increases in cognitive self-esteem after using Google are not just from immediate positive feedback that comes from providing the right answers. Rather, using Google gives people the sense that the Internet has become part of their own cognitive tool set. A search result was recalled not as a date or name lifted from a Web page but as a product of what resided inside the study participants’ own memories, allowing them to effectively take credit for knowing things that were a product of Google’s search algorithms.

Wegner and Ward suggest that the more we rely on technology to answer trivial questions, the greater the possibility of a true merger between the human brain and technology, resulting in an “Inter-mind.” They see this possibility very positively:

As we are freed from the necessity of remembering facts, we may be able as individuals to use our newly available mental resources for ambitious undertakings. And perhaps the evolving Inter-mind can bring together the creativity of the individual human mind with the Internet’s breadth of knowledge to create a better world—and fix some of the messes we have made so far.

The hopefulness of these researchers is refreshing at a time when others argue strongly that computers make us dumb (see my post, Is Smart Technology Making Us Dumb?).

Still, we cannot assume that freeing brain space previously used to remember facts, such as the name of that actor on the screen or the date of the March on Washington, will necessarily lead to cleaning up the “messes” of the world. The latter involves keen social abilities and complicated thought processes such as making connections, logical thinking, critical thinking, and problem solving. These abilities are not somewhere in our brains simply waiting to move into vacated space. They have to be cultivated and practiced.

Fortunately, educators are working to do just that in many ways: advocating that everyone learn computer science because it embodies new ways to evaluate and solve problems, teaching critical thinking, and incorporating active learning in curricula. Let’s hope that these efforts ensure that our Inter-minds use the new room in our brains for the kind of thinking that will make the world a better place.