I really appreciated learning about the resources and reading the articles on open access education. I had never used Open@CUNY prior to this week and I was amazed at how thorough it is. I will definitely share it with students. While reading and going through this week's materials, I kept thinking about their relation to Freire's Pedagogy of the Oppressed, and particularly about how Open Access affects individuals in different roles within academia as well as people outside of academia. Certainly for students, Open Access textbooks are a "godsend." In theory, the same can be said for professors who want their students to read certain books or articles without restriction, since OA/OER lifts the financial strain that serves as an obstacle to learning. However, as Leslie Chan points out, economic conversations (e.g., about cost models) are overshadowing the conversation about OA's core principle of free access to knowledge. The costs of production are not something that can be totally ignored, even if authors and universities are happy to make their research and books free to access. Chan states that "the irony is that while Open Access was supposed to improve the two-way models of knowledge access and dissemination, what we are seeing is that some models are creating new divides. For instance, these days, many OA models are now requiring that the authors themselves pay the (often exorbitant) fee to make their work openly accessible. The problem with this model is that it is simply re-solidifying the status quo, even more restrictively." Having authors pay exorbitant fees is certainly problematic for the democratic production of knowledge; it is likely to alienate anyone who cannot afford to share their knowledge. However, according to Peter Suber, "most peer-reviewed open access journals charge no fees at all."
There are a few that may charge publication fees, but they seem to be in the minority, and based on the informative PowerPoint from Jill Cirasella, when publication fees are charged by OA journals, they normally are not paid by the authors themselves. Cirasella mentions that a benefit of OA is that it is not exploitative. Indeed, if authors have to pay hefty fees, this will limit access to and production of knowledge, as we do see happening with traditional journal publishers. I was shocked to learn that when a paper is published by a typical well-established journal, the authors essentially forfeit copyright. I published a few things when I did research, and my PI never mentioned this fact, which is apparently common practice. I actually tried to find some of the research I worked on after I ditched research for a career in teaching, but I had to pay to access it, so those articles were "forever lost," or so I thought. This week's readings made me try again, and I am happy to report that most of my old research work (i.e., papers I co-authored) is now OA, freely available, probably more because it's old than for any other reason, as my PI was definitely the type who might shy away from OA due to career concerns. Chan describes the issue that arises of "whose knowledge is considered most 'legitimate.'… encouraging a definition of openness as a process rather than a set of conditions that need to be met. It is an adaptive and dynamic process, and one that is always changing…people have their own careers to worry about. And it is often their own careers that take precedence over their principles. And because they do well in their careers, they also advise their students to follow the same trajectories and not to take risks. So, it's a cycle." Those who fear change, or fear being seen as less rigorous as a result of publishing in OA journals rather than traditional journals, are misguided in their anxiety.
As noted by Suber, "open access journals can be first-rate: the quality of a scholarly journal is a function of its authors, editors, and referees, not its business model or access policy." Those who worry that their careers will suffer by going OA are simply misinformed and need to get with the times. I suspect they are clinging to the old ways because they came up under the "banking concept" of education that Paulo Freire criticized for its "attempts to control thinking and action, leads men and women to adjust to the world, and inhibits their creative power." Open Access, in principle, contradicts the banking concept directly by returning control over thinking and action to authors and readers: free publication (action) and free access (thinking) ultimately encourage more creativity, as views and research are shared freely. However, because we are in a capitalist system, the conversations about OA itself (conceptually, socially, academically) may get hijacked by conversations about what the business models should be. As Chan criticizes, those conversations miss the point of how OA does, should, and can invite "diverse stakeholders' participation in knowledge production processes." I do believe that the public has a right to view and learn from research and/or resources that were funded by taxpayer dollars (also private dollars, but at least taxpayer dollars); however, as was noted when we covered MOOCs & POOCs, having access does not ensure development of content knowledge or even an educated public. I think that we need to consider a highly diverse stakeholder group when we think about open access education. Of course, the college student is first on our minds, since being able to pay for a textbook may be essential to learning the content of a given course, but making knowledge accessible to the general public, and all of the stakeholder types that entails, has a different value and may require different forms of accessibility.
Schools (at least every high school I ever worked in) tend to conflate intellectual ability with pursuing college, and I have at times found myself questioning why we educators tend to push every student to pursue higher ed, almost in a way that suggests that professions which don't require degrees are made up of intellectually inferior or incapable people. I think that part of Open Access as a movement requires us to delve into this bias, which is common among the academically elite (although it may not be openly stated); otherwise we would not have researchers avoiding OA publication and favoring long-established fee-charging journal publishers out of fear for their careers, and they would not have to choose their careers over their principles.

What does it mean to make knowledge accessible to the public, not just the academically engaged public of students and professors, but the rest of it? 

Who are the stakeholders that should be considered when implementing OA education?

How do/can we manage the knowledge production process within OA so that it is democratic enough to challenge the banking concept of the student-teacher relationship while at the same time “improving the two-way models of knowledge access and dissemination” that Chan cites?

How can we as educators and students do this while also allowing an open and ongoing "talk about the value — in particular the nonmarket social value — of public higher education" that Robin DeRosa highlights?

If we do, as DeRosa implies we must, "begin to explore the distinctions between a knowledge commons and a public education system," what do you think those distinctions would look like?

If our goal is an educated (not necessarily credentialed) public capable of digesting information in a way that makes us all (or at least the majority of citizens) immune to misinformation, how do we establish this using OA/OER for a "knowledge commons" versus a public education system?


(I want to apologize for making this post extremely long. I really loved this week's readings; they all tugged at my "teacher heartstrings," and as I went through them, feelings of hope and despair popped up and excessive self-reflection took place. I decided, in keeping with Joseph Ugoretz's implied philosophy on the use of discussion boards, that it would still be okay to post it. But again, sorry about the length; I will try to keep it short next time.)

I especially appreciated Warner's piece, "The Problem of Technology Hype," and Watters's piece, "Hippocratic Oath for Ed Tech." Often, in the excitement about technological solutions for learning, the problems identified by these authors (e.g., the failures of adaptive instruction, the need for compassion) are ignored, although they are very real to the instructors who attempt to carry out these tech-infused learning processes.

What pedagogical opportunities does the integration of technology into the classroom make possible?

I have an ongoing and very real love-hate relationship with ed-tech. I love it enough to say that I would rather not teach without the use of technology (I have done so, but I'd rather not). Technology gives educators huge opportunities for developing and implementing differentiation and scaffolding strategies (assuming there are enough components that the teacher can customize). That said, it does not necessarily make this process more efficient, since not every teacher or student can figure out a specific technology more quickly than the time it would take to learn a given skill without the aid of tech tools; however, it certainly offers robust opportunities to implement these strategies, potentially at scale. Technology can also offer more opportunities for scholars, students, and teachers to share and collaborate across vast distances, which would not be feasible without it.

The first time I gained access to smartboards and later iPads as a teacher, I was teaching all core subjects to students classified as "intellectually disabled" in a self-contained special education class. Prior to that, I was working with a blackboard, chalk, and very old textbooks that were way beyond the reading level of the students they were meant for. The majority of my students were functioning at a pre-K to 3rd grade reading level, while only two students functioned at 5th to 6th grade levels. Technology allowed me to make stories and books from higher grade levels more accessible to students functioning at lower grade levels. For instance, I used software that let me use text-to-speech, automatically highlight words or phrases as they were read aloud, and adjust the reading speed. All of this made it easier for me to teach literacy to struggling readers and improved engagement and retention of students' word recognition and sequencing skills, but this was due to my understanding of my students' needs more than to the features themselves. Technology can help us help students by allowing teachers to tailor instruction through various software features. This helps to encourage student participation and motivation, but the key here is that the technology should make it easier for the teacher to tailor the instruction, because the software itself cannot effectively tailor anything: algorithms cannot get to know a student; they can only take responses and plug them into a formula to pop out whatever was programmed into the function. A given X will always yield Y because Y depends on X, but learning is not so simple. Student understanding cannot be elucidated by a pre-programmed formula. An algorithm cannot customize learning; only a human teacher can do that.
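The "a given X will always yield Y" point can be sketched in a few lines of Python. This is purely a toy illustration (the function and responses are hypothetical, not taken from any real product): a scripted system is a fixed mapping from input to output, so very different misunderstandings receive the identical response.

```python
# Toy sketch (hypothetical): a scripted "adaptive" system is a fixed mapping,
# so a given input X always yields the same output Y.
def scripted_feedback(student_answer: str, accepted: str = "3/4") -> str:
    if student_answer == accepted:
        return "Correct! Moving on."
    # Every wrong answer, from every student, triggers the identical response,
    # regardless of WHY the answer was wrong.
    return "Oops, wrong. Watch the same video again and try again."

# Two very different situations get exactly the same feedback:
print(scripted_feedback("3/5"))   # a conceptual error
print(scripted_feedback("0.75"))  # the correct value, but in a "wrong" format
```

Note that `"0.75"` is mathematically equivalent to `"3/4"`, yet the mapping rejects it: the formula cannot tell a confused student from one who simply solved the problem another way.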

What challenges does technology create for the student, the instructor, the institution?

Software cannot replace teachers, at least not for typical students (perhaps for the few self-motivated autodidacts who actually finish MOOC courses). Warner points out: "Is repeating the same presentation over and over again to confused students a pedagogical practice we would accept in human instructors? …What is it we think students may 'learn' in these systems?" (Warner, p. 92). While many technologies can be customized or attempt to customize instruction, individualizing instruction and reaching different types of learners is not something a computer program is capable of, in my opinion. At best, an "adaptive learning algorithm" can reach a certain type of student, typically not the students who would qualify as "outliers" on the normal curve the algorithm is aimed at; regardless of how much data is collected and fed into a machine learning system, the outlier data points are unlikely to produce lessons tailored to outlier students. For a student who has misunderstood, repeating the same lesson word for word until they get the right answer is not helpful. As any human teacher knows, when a student fails to understand a lesson, you do not feed the student the lesson in exactly the same way on the next attempt. Consider the student who is an English Language Learner: a single phrase can throw off their understanding, and there is no way the system can know whether the misunderstanding stems from English fluency or from vocabulary unrelated to the specific lesson. Technology can be a very valuable tool, but it is nothing more than a tool; it cannot stand in for a teacher. Treating "adaptive learning software," which aims to use algorithms to individualize instruction, as if it were equal to a teacher who is capable of learning about students as individual humans is highly problematic.

Optimizing learning sounds very appealing at first glance, because there is not always time to learn or teach all of the tech skills when one is following a strict curriculum. While technology is meant to optimize everything, and may indeed make it more possible for teachers to individualize learning for a certain type of student, all of this takes time. It takes time to tailor assignments for different students, it takes time to teach students how to work with new technologies, and it takes teachers time to learn the new technologies a school might suddenly require them to use. Generally, true individualization of instruction is not reflected in "adaptive software." For instance, there is usually no room to digress in a productive way. Any digression is treated as a mistake by an algorithm, but teachers can use mistakes and digressions as teachable moments to promote depth of learning. As the "Two Roads Diverged in a Wood" article notes, giving space for tangents in a lesson has learning value: "by allowing or even encouraging digression—by permitting students to take the 'road not taken'—instructors facilitate a process whereby students may make new and original connections arising from their own thinking and discovery processes" (Ugoretz, p. 2). These "teachable moments" are not recognized by algorithms, and yet they have great value for student motivation as well as for encouraging students to make connections that lead to deeper understanding of content. The asynchronous discussion board can offer much more "personalization" than "adaptive software" can, because it is open and allows the mental wandering that is often part of the learning process, especially when students encounter new topics.

What is reflected in adaptive learning software is "oops, you did it wrong again, now try learning it again in exactly the same way." This process is painful for students who need real instruction and have a low frustration threshold. For students who have disabilities or traumas, the algorithm can be downright abusive. When I taught middle school students with learning disabilities, I was forced to use math software that had this effect. After about five iterations of "oops, you're wrong, watch this same video again and try again," the student would put their head down and cry, or act out. What is even sadder is that the student knew another way to get the math correct, but the system would only accept its one defined way of demonstrating a skill, aligned with soon-to-be-phased-out core standards (in my opinion it was a less useful way of doing fractions and division). What is worse is that this "math lab class" was meant for students with learning impairments, and I was not supposed to teach it; I was supposed to just supervise the students interacting with the system (i.e., make sure they stayed on the program the school had spent its money on rather than going online to play video games, etc.).
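The "wrong-answer loop" I describe could be sketched in a few lines of Python. This is a toy illustration, not the logic of any real product: it contrasts a loop that replays the identical lesson indefinitely with a teacher-style cutoff that knows when to let go and come back to the skill another day.

```python
# Toy sketch (hypothetical): the software's loop vs. a teacher-style cutoff.
def software_loop(answers, accepted="3/4"):
    """Replays the identical lesson until the one accepted answer appears."""
    for attempt, answer in enumerate(answers, start=1):
        if answer == accepted:
            return f"correct on attempt {attempt}"
        # Same response, word for word, no matter why the answer was wrong.
    return "still looping: watch this same video again and try again"

def teacher_cutoff(answers, accepted="3/4", max_attempts=3):
    """Caps attempts, then moves on and revisits the skill another day."""
    for attempt, answer in enumerate(answers, start=1):
        if answer == accepted:
            return f"correct on attempt {attempt}"
        if attempt >= max_attempts:
            return "let it go for today; come back to this skill later"
    return "session over"
```

The difference is one conditional, which is the point: the escape hatch is trivial to express in code, but knowing *when* to invoke it, and what to do with the student instead, is the judgment call only a teacher can make.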

Another issue with these formulaic attempts at instruction is that the systems can be hacked. For instance, my stepson, who is in HS Algebra, recently shared with me that he had figured out a way to trick one of the math programs he has to use into believing he had completed all of the assignments, which then gave him all of the correct answers, which he in turn used to get a perfect score. He then shared this trick with other students. The possibility to hack came from "repeated attempts" after "failing the first few times." I am not sure exactly how he did it, but he exploited something in the part of the system that aims to "adapt instruction" by simply repeating the same exact lesson.

A district-mandated instructional software package that is definitely not designed with your actual students in mind can be a major waste of money and class time, and can even have detrimental effects on students' motivation.

I think that schools will need to develop more diverse positions focused on instructional technology; having a computer teacher or an IT person is insufficient to keep up with how quickly educational software is developing and how customizable some of it has to become. Generally, it is not teachers who decide what software a district should purchase. The people making those decisions are usually admins who are not currently teaching any classes (at least in K-12). If core content teachers were allowed to take the lead in identifying resources and training each other, I think it would make for better institutional outcomes. I emphasize core content teachers because they are the ones whose students will be tested most and whose data will be most utilized.

How do we understand the politics of educational technology that is both a field of inquiry and an industry?

The ever-intimate relationship between tech companies and political players increasingly leaves out the values of actual instruction.

The IT industry is focused on optimization and profit far more than on effective instruction. The politicians are focused on having the numbers make them look good (e.g., consider how effective Bloomberg was at increasing passing rates by decreasing the minimum score needed to pass). There is a difference between learning and teaching that one cannot appreciate unless one has significant experience in both. It is very easy to feel that you understand "teaching" just because you are good at learning (particularly if you are a capable and self-motivated learner), but it is another story to actually teach others who are not so great at this and require significant guidance. Just because you are good at learning yourself does not mean that you understand what schools or students need (hence the failure of initiatives by Bill Gates to fix education in NYC, for instance). One thing that seems to be overlooked where educational politics meets technological solutions to complex problems, despite being obvious, is that "computers, no matter how powerful their algorithms, can only count" (Warner, p. 97). I think this fact is often ignored, perhaps because the idea of "optimization" is so alluring.

Generally, the politics associated with educational technology pull us away from the depth of learning and teaching and toward a sort of misguided efficiency or optimization relying on technological solutions that are not aligned with how students actually learn. I do not understand why wealthy IT moguls are treated as educational experts; they are not. They are simply people with money to spend who happen to be good at learning for their own purposes. Even if their intentions are pure, what qualifies Zuckerberg or Gates as an education expert?

There are also costs to taking the time to teach technology skills not explicitly considered part of the curriculum. Ensuring access to and accessibility of technology is also very challenging. I do not think that the technology companies appreciate that there are students who may struggle to use basic technologies (e.g., I once had to teach a "normal kid" who was a HS senior how to use email, in 2015!). Even when websites try to be accessible, they are not accessible to all people. Even the guidelines for accessibility are not really accessible to everyone. It is hard (maybe impossible) to have a set of guidelines that works for all students, because multiple things affect how we interact with technology based on who we are and how we perceive and interpret information.

How do we locate our own values within all of this? As teachers we value learning, and as technophiles we appreciate what technology has the potential to do, but we must ask ourselves "is it necessary?" and "is it customizable for the teacher?" and, just as importantly, "is there time?" I do not think that educator values figure as much as financial values when decisions are made about which software a school must use. Everyone wants the fastest way to show the best numbers, even when the desired numbers may not be accurate representations of learning.

We want to use technology to make learning "bigger, faster, stronger," but the reality of the human mind is not this way. There are too many differences, and learning itself is not linear. If the tech industry could appreciate the non-linear nature of the learning experience and be less motivated by profit and more motivated by making learning a positive experience for students (rather than using data aggregation to make correlations and treat them as causation), perhaps better software could come of it, but optimization must be treated as less important than the experience of learning itself. I think that teachers value technology but understand its limits in instruction, and it will take experienced teachers, not experienced IT moguls, to make software that reflects the values of learning and teaching.

Audrey Watters considers how a Hippocratic Oath for ed-tech ethics might "insist that students be recognized as humans, not as data points. It would demand a respect for student privacy… recognize that 'the tools' are less important than compassion… It could call for more professional transparency perhaps… open disclosure about relationships with industry." I love this idea, but I do not think it is considered much when technologies are developed. I hope that such an oath would be required of the developers and IT companies pushing their software onto schools, and not just of educators, as teachers themselves have limited power when it comes to decisions that a district or institution makes about what tools must be used.

Motivated by the Answer

I was motivated to join this program by my love-hate relationship with instructional technology. I love the potential that ed tech offers, but I have yet to find a software tool that can teach students with significant learning impairments in the way that they need to be taught (i.e., with the sensitivity to students' cognitive and emotional needs that an experienced teacher might have). I hope to create a game-based learning project, ideally an actually usable and enjoyable game that would encourage kids to learn something (I am leaning toward math or physical science). I honestly do not play a lot of games, but I used a lot of games when I taught kids (mostly games that I made up or that the kids, with some guidance, came up with themselves). Games work better than any other method of instruction, in my opinion (assuming the game is well designed and simple enough for kids to play while still learning), but I am nervous about my journey into this project because 1) I am not a gamer and 2) I do not enjoy coding (unlike solving a difficult physics problem, editing a script and seeing it run as intended just is not very cathartic; I hope to bypass any need to code if possible). However, I do love teaching, and I am confident in my ability to teach and scaffold instruction. I aspire to come up with a perfect scaffolding algorithm, but I do not have a ton of faith in machine learning for instructing kids. When I taught "cognitively impaired" students, the software would typically punish them for not knowing how to accomplish certain tasks, and would not allow them to move forward into new content unless they "proved themselves" by accomplishing single tasks.
For instance, one math software package was supposedly designed to teach kids who were behind, so it tested skills that students should already have mastered while at the same time attempting to catch them up to their actual grade level (i.e., imagine a middle-to-high-school-level course with a lot of elementary questions at the start). It did not work out, in my opinion. If you messed up subtraction, you would never make it to fractions. A few of my students struggled (and maybe will always struggle) with subtraction problems that require a lot of "borrowing and carrying the one," but these same exact students were able to solve higher-level problems. Subtracting large numbers is a skill that should be mastered between 2nd and 3rd grade, while being able to create and manipulate equivalent fractions is a middle school skill. I wish that a computer could figure out when it's time to "let it go." Many students get trapped in a "loop of not getting the correct answer," and the system does not know when to just let it be and move on to the next lesson. A teacher would know when they have hit an instructional wall and be able to let it go and come back to it later (i.e., another day in the week). For example, the teacher would know that a student who struggles with subtracting large numbers should not be prevented from learning other material in the content area; the student can still learn how to cross multiply in order to create equivalent fractions, or solve other math problems that do not require subtraction. The "wrong answer loop" that kids sometimes find themselves in when working with educational software can be painful and can destroy their confidence. I witnessed it with my former students. This is also where game-based learning (and by this I actually mean gamified instructional software, which most of it is, rather than a pure game) fails to motivate struggling students.
Repeatedly getting the wrong answer (i.e., failing to pass a level) is a good reason not to want to play (it is why I quit). Teachers, unlike algorithms, can make decisions that take into account the full reality of the instructional experience. Teachers can see when a child is struggling to the point that it is best to stop teaching, and teachers can assess when a good time is to have the student try again. I am not convinced that we can train algorithms to make these sorts of decisions, though maybe in the future, by way of complex AI, this may happen. I wish that I could take all of my teaching experience and upload it into a supercomputer that would be able to teach for me and give the proper amount of guidance, motivation, and mini study breaks needed to encourage struggling students to keep trying and eventually learn. Yet I doubt that a machine learning algorithm (no matter how much data is used to teach it about a student's learning behavior) could discern the cognitive and emotional needs of students who are struggling with content. I have not yet figured out the steps needed to make my great game idea into a reality, but I am expecting that my "great idea" will have to be broken down into an "okay idea" in order to ensure that I finish this program on time (and because I do not think robots are quite ready to fully take over all of our jobs). I do believe that instructional tech can be highly effective at "teaching content," but I think it strongly favors the self-motivated student who has not struggled too much and therefore does not need much more than a "ding ding ding, you got the correct answer" to remain motivated to try. I would like my project to focus on the other type of student, but I realize this is a tall order, so we shall see what happens as I progress through the ITP program.