Category Archives: Technology

MOOCs and Innovation

Originally posted by Rochelle Diogenes on Acrobatiq.

Last week, MIT announced it is making Massive Open Online Courses (MOOCs) part of its admissions process: candidates for its one-year Supply Chain Management master’s program will improve their chances of acceptance if they successfully complete relevant MOOCs before applying. If they are accepted into the program, they will get credit for their online work and only have to complete one semester on campus to earn their degree.

Why is this exciting? Like adaptive courseware, MOOCs are fairly new. The University of Manitoba’s Stephen Downes and George Siemens taught the first course to be called a MOOC in 2008, an experiment in just how learning might be accomplished using the internet. Since then there’s been a lot of speculation about how MOOCs fit into the broad spectrum of education.

Initially, many hailed MOOCs as the wave of the future. Others saw them as a disruptive, revolutionary force that would replace colleges. When early reports showed that retention or course completion in MOOCs was less than 10%, many education stakeholders breathed a sigh of relief—MOOCs would not replace or even really compete with institutions of higher education.

Others, like Koller, Ng, Do, and Chen in “Retention and Intention in Massive Open Online Courses: In Depth,” aren’t ready to write off MOOCs. Retention should be understood in context. MOOCs have large enrollments, many with over 100,000 students. While 10% would be a ridiculously low retention rate for a college class of 100, for a MOOC of 100,000 it means 10,000 completers, more students than some faculty reach in a decade. Koller et al. suggest we change the way we evaluate MOOCs:

…one can relate the act of enrolling in a free online class to that of checking out a book from a public library…Some people might read a few chapters of a nonfiction book and stop after getting enough information to suit their needs. Others might read more deliberately and renew the book a few times before finishing. In both cases, few would consider the lack of completion or the extra time taken to be a waste or a failure of the book.

MIT’s new policy is also asking us to look at MOOCs in a different way: as a “test” for admission. The shortcomings of standardized tests such as the SAT and ACT are well known. Wouldn’t seeing how a student learns in an online program give a more accurate picture of their abilities? And completing relevant learning activities would give students a taste of what’s to come, helping them make more informed decisions about how they want to further their education.

MIT’s policy, which could be the first step toward unseating decades-old admissions practices in all areas, is a reminder that we are far from harnessing the full power of digital learning programs. Because the major stakeholders (students, faculty, institutions, and education technology companies) are learning by doing, we can’t even foresee all of the possibilities.

If we’re open-minded rather than judgmental about innovations, if we’re willing to take some risks and even fail at points in the process without ditching the whole framework, we’ll make great strides in education.

Formative Assessment in a World of Learning Outcomes

Consider this scenario: You’re teaching language arts to a middle school special education class. The learning objective is to write a story about making something. While you go through the provided writing sample about children building a clubhouse, your students get more excited about the clubhouse than about writing a story. They ask to build a clubhouse. Do you make them write the story, or do you let them build a clubhouse first?

If you go with the clubhouse, you’re delaying the story and you may not have time to fulfill all the learning objectives embedded in your curriculum. On the other hand, if you decide, as I did, to build your lesson on your students’ spontaneous enthusiasm, you are choosing to write in additional learning objectives involving commitment, collaboration, and problem-solving before writing the story. And you must alter your teaching plans to achieve them.

My decision was based on formative assessment, or assessment for learning. Paul Black and Dylan Wiliam wrote the classic definition of formative assessment in 1998:

…the term ‘assessment’ refers to all those activities undertaken by teachers, and by their students in assessing themselves, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged. Such assessment becomes ‘formative assessment’ when the evidence is actually used to adapt the teaching work to meet the needs.

This definition holds true for higher education even though Wiliam’s continuing work is with teachers in K-12. He emphasizes that many strategies can be successful as long as we remember that “the big idea is to use evidence-based learning to adapt instruction to meet student needs.” I encourage you to watch his exceptional talk, Assessment for Learning.

Education technology offers us valuable tools for assessment. Evidence-based programs can quickly adapt instruction based on feedback from student learning. These programs also help instructors alter their class instruction because aggregate data is available in real time (see my earlier post, What’s a Seventeen-Year-Old to Do?).

But there is a downside. Since, like all effective formative assessment, adaptive learning programs tie instruction and feedback to learning outcomes, the learning outcomes in adaptive programs are predetermined. Formative assessment means changing student learning pathways: more material for a struggling student, less for an excelling one. But all pathways lead to the same goal.
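To make the point concrete, here’s a minimal sketch of that pathway logic in Python. It isn’t any vendor’s actual algorithm; the function name, mastery thresholds, and activity labels are all hypothetical.

```python
# A hypothetical sketch of adaptive pathway routing; the thresholds and
# activity names are invented for illustration.

def next_activity(mastery_estimate: float) -> str:
    """Pick the next step toward one predetermined learning outcome."""
    if mastery_estimate < 0.4:       # struggling: add more material
        return "worked_example_and_practice"
    elif mastery_estimate < 0.8:     # progressing: standard practice
        return "practice_set"
    else:                            # excelling: move ahead sooner
        return "advance_to_next_outcome"
```

Notice that every branch serves the same fixed outcome: the program varies the route, never the destination.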

The movement for student competencies and consistency in higher education also rests on predetermined learning outcomes. While these trends have merit, we need to be cautious and not let them entrench us in rigid practices that deter instructors from going “off-script” and tapping into students’ enthusiasm and innovative ideas; these, too, are worthwhile in the learning environment. (When you look back, isn’t it the off-script instructors who influenced you the most?)

As we develop and use technology to get more precise evidence-based snapshots of student progress, we need to build in flexibility so that formative assessment based on student feedback can modify learning outcomes as well as learning pathways.

If You Give a Student a Cell Phone…

Originally posted by Rochelle Diogenes on Acrobatiq.

With the increase in digital distractions, interest in how we pay attention has grown. Although researchers continue to refine the definition, most agree with the early psychologist William James:

Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others.

Attention is really selective attention. We consciously or automatically choose which things to ignore and which to focus on. You are more likely to pay attention to something that affects you, interests you, or has deep meaning.

What we pay attention to is contextual and subjective. At a play, we think it’s important to focus on what’s happening on the stage without distraction. If an 8-year-old points out that there’s a man behaving oddly in the next row, he will probably get shushed. But these days, if he makes the same observation as his mother rushes him to catch a train or plane, Mom will probably pay attention and report it to security personnel.

Attention is the gateway to learning, to remembering and processing information. Instructors competing for student attention isn’t new. Remember when we thought all students were taking notes, but many were doodling, or writing love letters, or passing notes to other students? Remember when daydreaming was a common class distraction?

Cell phones may just be a more efficient way of channeling wandering attention. Researchers have shown that students who texted or posted on their cell phones while watching a video lecture scored more than a grade level below their phoneless counterparts. The researchers suggested that instructors discuss cell phone use policies with their students. That’s a start, but it doesn’t address the contextual factors that may contribute to cell phone distraction.

If, as the Pew Research Center reports, 93% of 18- to 29-year-old smartphone owners use their phones to avoid being bored, maybe we should consider that having students listen to long lectures is not the best way to hold their attention. Even I’ve been known to check my cell phone during the most inspirational TED Talk.

Distraction can work in the opposite way as well. A student who tunes out biology to check Instagram may also avoid the boredom of waiting in line at Chipotle by accessing their course online. And, with Acrobatiq, the professor standing behind them in line can check on their students’ progress.

While helping students think about how and when they use cell phones, educators need to expand opportunities for students to use mobile devices to accomplish a wide variety of goals, from communication to graduation. Formats such as blended or hybrid classes built on digital learning platforms can lessen student distraction. Some instructors are already incorporating education apps into class time as part of the curriculum.

Mobile device programs will not replace all forms of teaching. They are meant as an active way to promote student learning by using the technology around us.

What’s a Seventeen-Year-Old to Do?

Originally posted on Acrobatiq and cross-posted with permission.

When co-founder Larry Page recently announced Google’s reorganization, he referred to the company as “still a teenager.” Incorporated in 1998, Google is 17 years old, making Page and co-founder Sergey Brin 17-year-old businessmen.

Like 17-year-olds, they are tired of routine and want the flexibility to do things they like doing. Restructuring Google frees them from the mundane tasks of running a large company. As Page writes, it allows them “to do things other people think are crazy but we are super excited about.”

While the Google founders are getting what they want, most 17-year-olds getting ready for college will not. According to Remediation: Higher Education’s Bridge to Nowhere, Complete College America’s study of public institutions in 33 states including Florida, Illinois, Massachusetts, and Texas, over 50% of students starting college or community college test into remedial courses. In California, 68% of students entering the state college system test into remediation.

These courses rehash high school material. To adapt a University of Chicago motto, this is where excitement and freedom come to die. The proof is in how many people don’t go on to graduate. According to the study, about 40% of community college students don’t even complete their remedial courses. Fewer than 10% of community college students who finish remedial classes graduate within three years, and only about 35% of students who start in remedial courses at four-year colleges obtain a degree within six years.

Factor in the cost of remedial courses, courses that do not count towards a degree, and you have a formula for failure. To make things even worse, placement in remediation is often based on one test that researchers say is a poor indicator of student potential.

To address this situation, educators are calling for reforms. Some states are giving remedial students the choice of taking remedial courses or enrolling directly in regular college courses. Others are eliminating remedial courses as prerequisites for regular courses, allowing students to take them as co-requisites. Technology offers other ways to support students with weak skills in regular college courses.

Digital adaptive learning programs can be adopted for all students in college courses. They address individual student abilities at every level, in effect containing what might be called “embedded co-requisites”: academic support wherever it’s needed.

As students learn, these self-paced digital programs assess their abilities, tailoring the material so that they get the most out of their time in the course. For example, if a student shows non-mastery in identifying evidence for a historical argument after reading a text section, the program might give them more explanation, activities, or video on that subject. On the other hand, if a student shows mastery after one text section, they can immediately move forward.
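In code, that branching might look something like the sketch below. It’s a simplified illustration rather than any platform’s real logic; the mastery threshold and the list of supports are assumptions.

```python
# A simplified illustration of mastery-based branching; the 0.8 threshold
# and the support list are hypothetical, not any platform's real values.

SUPPORTS = ["extra_explanation", "practice_activity", "video"]

def route_student(objective: str, score: float, attempts: int) -> str:
    """Decide what one student sees next after a formative check."""
    if score >= 0.8:  # mastery shown: move forward immediately
        return f"advance past '{objective}'"
    # Non-mastery: offer a progressively different kind of support.
    support = SUPPORTS[min(attempts, len(SUPPORTS) - 1)]
    return f"show {support} for '{objective}'"

print(route_student("identifying evidence for a historical argument", 0.55, 0))
# -> show extra_explanation for 'identifying evidence for a historical argument'
```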

Adaptive programs also give instructors data on how each student is progressing in real time. Based on this data, instructors can reach out to students while they are learning instead of after they have failed a test.
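The instructor-facing side can be sketched just as simply: roll per-student events up into live mastery rates and flag the outcomes the class is struggling with. Again, the event format and the 50% alert threshold are assumptions for illustration, not any real dashboard’s design.

```python
# A sketch of real-time aggregation for an instructor view; the event
# shape and the alert threshold are illustrative assumptions.

from collections import defaultdict

def mastery_rates(events):
    """events: iterable of (student_id, objective, mastered) tuples."""
    attempted = defaultdict(set)
    mastered = defaultdict(set)
    for student, objective, ok in events:
        attempted[objective].add(student)
        if ok:
            mastered[objective].add(student)
    return {obj: len(mastered[obj]) / len(attempted[obj]) for obj in attempted}

def needs_attention(events, threshold=0.5):
    """List objectives an instructor might revisit before the next class."""
    return [obj for obj, rate in mastery_rates(events).items() if rate < threshold]
```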

As students succeed at their own pace in courses that count, they will have greater motivation to complete their degrees. They might even enjoy the experience.

Do You Inter-Mind?

Before the ubiquity of the Internet, getting the answer to a question such as “When was the computer invented?” could take a long time and some serious effort. You might call a friend, read about it in a book, or even go to the library. Now answers can be as close as your nearest digital device.

In a Scientific American article, “The Internet Has Become the External Hard Drive for Our Memories,” psychologists Daniel Wegner and Adrian Ward discuss what using the Internet can mean for human cognitive abilities. They asked students to research trivia online and then tested them on recall. They found that those who used the Internet and gave the right answers came away believing they were smarter than those who did not use the Internet. The researchers’ conclusion:

These results hint that increases in cognitive self-esteem after using Google are not just from immediate positive feedback that comes from providing the right answers. Rather, using Google gives people the sense that the Internet has become part of their own cognitive tool set. A search result was recalled not as a date or name lifted from a Web page but as a product of what resided inside the study participants’ own memories, allowing them to effectively take credit for knowing things that were a product of Google’s search algorithms.

Wegner and Ward suggest that the more we rely on technology for answers to trivial questions, the greater the possibility of a true merger between the human brain and technology, resulting in an “Inter-mind.” They see this possibility very positively:

As we are freed from the necessity of remembering facts, we may be able as individuals to use our newly available mental resources for ambitious undertakings. And perhaps the evolving Inter-mind can bring together the creativity of the individual human mind with the Internet’s breadth of knowledge to create a better world—and fix some of the messes we have made so far.

The hopefulness of these researchers is refreshing at a time when others argue strongly that computers make us dumb (see my post, Is Smart Technology Making Us Dumb?).

Still, we cannot assume that freeing brain space previously used to remember facts, such as the name of that actor on the screen or the date of the March on Washington, will necessarily lead to cleaning up the “messes” of the world. The latter requires keen social abilities and complicated thought processes: making connections, logical thinking, critical thinking, and problem solving. These abilities are not somewhere in our brains simply waiting to move over into vacated space. They have to be cultivated and practiced.

Fortunately, educators are working to do just that in many ways: from advocating that everyone learn computer science because it embodies new ways to evaluate and solve problems, to teaching critical thinking, to incorporating active learning into curricula. Let’s hope these efforts ensure that our Inter-minds use the new room in our brains for the kind of thinking that will make the world a better place.