Category Archives: New Media

Tech and Being Mindful

I recently purchased a MacBook Air because the Pro was getting a little cranky after seven or so years. I’m rarely not on a computer doing something. The new buy is sleek and soft and hums, and the battery is, so far, so good. But it’s also a pain in the ass in a lot of ways (because it’s new in many ways).

I also purchased an iPhone 6 many months ago. The relationship between the new technologies is a little surprising. Both machines want to manage everything they can manage. From messages to email to whatever update wants to leak in, including messages from the newly installed Nest system R and I have running in the hallway, which now claims to know when the house is empty of breathing creatures.

But why pain in the ass? Maybe not pain in the ass but a new sense of transitional mindfulness about clutter. I still go back to the Pro, as I’ve been able to relieve it of some of the thousands of ghosts inhabiting its go-betweens, like the Steam app, which I never used, and who knows how many hidden files. I have no idea how many versions of Rails or Git I have on the Pro. How many versions of VC.

I’m reluctant to install on the new box. It’s a certain kind of tentativeness about weight and balance. Kind of like remembering not to lift heavy objects with my healing broken elbow.

When Social Media is Really Stupid Media

When I hear, read, or think about intelligent systems or machine learning, sometimes my gross euphemism muscle goes a little spastic. I just got a notice from a social network that a good friend of mine, who passed away a few years ago, is deserving of a work anniversary and that he should be congratulated. It is a very strange confrontation. And somewhat morbid.

That’s all I’m going to say on the matter.

Marketing versus Storytelling

I’ve been doing a lot of work on a website, engineering for multiple media types, doing a little javascript, and digging deeper into the possibilities of WordPress and Bootstrap. It would have been a great Rails project but that move might have been a little much. In any event, my job isn’t content or “design.” But the talk today did stray into areas that I would lodge into the category of social media and digital ecosystem storytelling and getting the word out or spreading the news.

Hypothetically, if a writer wanted to create a world of multiple, interconnected novels, and wanted to ride the line between characters who use Twitter and YouTube, how would this be done to encourage metafictional and real-world parallels? The characters, say Marvin and Luisa, Tweet a backchannel to their main storyline or plot. The writer pays a few friends to play these roles on Twitter. In the novel, Marvin and Luisa go to Germany. They rent a car. They have a fight at the counter. After the fight, Marvin gives one version of events in a few Tweets and Luisa returns her version with Tweets of her own. Twitter is mentioned briefly in the novel.

Does the use of Twitter in this regard provide an extension to the story? Is it something that might stand alone, especially if the fictional characters who tweet accumulate real followers, who either expect something more or come to the novel later? If the novel is told from one point of view, what happens when the other characters who tweet provide their own? Does this expand the POV of the novel and invite, for example, a new consideration of the reliability of the teller?

This is not just a question about fiction writing. It’s also interesting in the sense that “marketing” is even more influenced by the thinking of the storyteller. It’s NOT Marketing vs Storytelling; they become one and the same.

Computer Savvy vs Gadget Savvy and other Booleans

Anecdotally, since the 80s, I’ve seen a rapid rise of general computer savvy in students, and am now seeing a decline in computer savvy with a corresponding rise in gadget savvy, though I’m not quite sure that’s what I mean, since I’m not really sure what computer savvy or gadget savvy actually means, in any meaningful sense.

Let me see if I can parse this out. By computer savvy I mean a general comfort with, and interest in, the workings of the machine, which I would attribute to the newness or greenness of the object (not a general interest in its next iteration, when that “its” becomes an annoyance). Has anyone cared about the next iteration of the stove? In the nineties we were excited about computers for all kinds of reasons. Telephony gadgets were things you feared being clubbed with. The Walkman wasn’t a digital gadget but it was the iPhone of the 80s. In the 90s, students were still using typewriters. Before the ’00s, computers, I would argue, were “exploratory.”

I don’t think this is true for most college students these days. Computers are things people just have. I don’t see a lot of students “exploring” the possibilities. In recent years, I’ve observed platform clichés (people still argue Redmond vs Cupertino and now vs Mountain View) but the chatter’s all old hash and you can hear the hinges creaking.

Still, things are changing fast and I don’t really know what to expect when I say, “Send it to me through email.” There’s not a lot of “savvy” about the possibility that a doc might not work when sent through a given mode of transfer. The proliferation of “types” is driving me crazy. I cringe when a student sends me something with the extension .pages, and wonder when I get one that says .odt (hm, I know when someone’s using open source but isn’t worrying about the information contained in the dialogue or context box dropdowns). In other words, thinking about “what one sends” matters. But what people hear when I say “through email” is “okay, my doc is the same as yours, so, I’ll send my .pages doc or ‘whatever I’m saving.’” But I wonder if people are thinking about the object they’re actually saving. One way of thinking about this is the “personal cloud”: drag it to my public file, but then I have to worry about what’s being dragged in.

I have students who text during class. They have their phone on the desk and periodically tap a message out. And then get one back. It’s amusing to think about this activity. It doesn’t get me anywhere to judge the behavior, but it is curious to think about what sort of compulsivity this points to. I tell students we need those things to look up information. When you want to evaluate data it’s good to have the real numbers from the CDC or the FBI. Computing devices, wifi enabled, are fantastic for that.

Which finally gets me to the reason why I started this post: the recent news about the NSA and the iPhone, about which I’m holding a certain amount of skepticism. I need someone to tell me how this actually works. And thus the problem: how does one search for a piece of software on the iPhone or the iPad? How does one know, and manipulate, specifically, “all” the software on their gadget?

This story, true or false, tells me something about the relationship people have had with computers and the technical relationship they’ve developed with the iPhone. This is the “decline of the computer savvy” narrative. My conclusion or observations here may, of course, be true or false.

On Change, Horses, and Water

For some reason I find this Fast Company article on Sebastian Thrun fascinating.

Here’s where I got really excited, regarding Thrun’s Stats 101 course and the relationship between the quality of the course and whether or not it would be successful:

Only it wasn’t: For all of his efforts, Statistics 101 students were not any more engaged than any of Udacity’s other students. “Nothing we had done had changed the drop-off curve,” Thrun acknowledges.

Here’s some context for the above quote that has nothing to do with online education, Udacity, or Smartboards. The good teachers I know mostly consider themselves failures. A particular semester will end and the dejected class of faculty will go back to the drawing board, rehearsing their future plays and adding to the perennial checklist of things to alter for next time. By the beginning of the semester, the syllabus was already newly minted with additional directions. Other content was added to stave off that unforeseen and persistent, naggly question. “It’s right there on the syllabus,” a teacher will say. “I’ll clarify further.” Done, as summer work. The links were refreshed. The calendar was shined to perfection. And so the semester ends with half the students gone and pretty much the same ratio of grades puncturing the brains of the bewildered.

I had a conversation just the other day with a seasoned Psychology prof ready to go at the online course with a mouse pointer sharpened by “student success foreshadowing.” She paused. She said, “Yeah, we do this every semester.” But still, that video showing students how to find the directions for the assignment could always be made a little clearer.

Teachers worry a lot about students, learning, assessment, and curriculum. But they also know that revisions come with unforeseen consequences. This is something that novice faculty learn over time. We will always seek better learning and better clarity. That’s the nature of the ecosystem. Every course will tell a story and some courses can themselves be a story. Maybe the final exam is the climax. First we’ll do this, then this, then that, and by the time we get to Oedipus the student will have this, that, and the other thing to work with for improved analysis and interpretation of our despairing protagonist.

I pretty much have the curriculum nailed for my Comp II course. But it still doesn’t work right. There’s still a part of the story that’s missing. I’ll hunt it down next break and rewrite the syllabus.

But in all seriousness, the theme that appears to be missing in the story of Professor Thrun, at least as far as FC tells it, is that “students” are human beings. Human beings experience the world in the private space of their minds. Most of the time, I don’t know what my students know, and I’m just as much a solipsism to them as they are to me. Most of the time motivation, technique, expertise, and the relationship between effort and evidence are a mystery. There’s that old trick of the greenhorn writer who scribes a query thusly: “This is the best damned story ever” and so on. Here’s a hypothetical: we’ve had lots of geniuses over time who have walked the planet, shod and unshod. We could hire this superteam to construct the “killer app” of online or on-ground courses. The result will be the same, and this is where statistics get us into trouble. The students who grasp and demonstrate will grasp and demonstrate. Those who do not grasp and demonstrate, or, more importantly, do not demonstrate and either grasp or don’t grasp will grasp and demonstrate OR not. (Hm, that was tough to formulate.)

In my view, statistics are problematic in determining the success or failure of a college course, whether it smells of chalk dust or is warmed by binary code. Chafkin quotes Thrun here in regards to the “painful moment”:

As Thrun was being praised by Friedman, and pretty much everyone else, for having attracted a stunning number of students–1.6 million to date–he was obsessing over a data point that was rarely mentioned in the breathless accounts about the power of new forms of free online education: the shockingly low number of students who actually finish the classes, which is fewer than 10%. Not all of those people received a passing grade, either, meaning that for every 100 pupils who enrolled in a free course, something like five actually learned the topic. If this was an education revolution, it was a disturbingly uneven one.

“We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product,” Thrun tells me. “It was a painful moment.”

The arithmetic in my head tells me that 10% of 1.6 million is 160,000 finishers. Additional math, five passers for every hundred enrolled, leads to this after the equals sign: 80,000. This means 80,000 people passed whatever courses are a part of the smorgasbord. Is this a problem, given that out of all the courses unmentioned in the above quote a million and change people did not eat their vegetables? We don’t know the reasons. We can’t know the reasons. Degrees of interest, access, modes, prerequisites, time, ability, attention, disagreement with technique? I would submit that this has little to do with “lousy” products and more to do with being human. The system I work with to do online ed is, in my estimation, not that great of a product. It’s not a fantastic communication tool, which is what a decent system ought to do best. At heart, any learning system is about getting ideas across and getting ideas back in a context that makes sense. Classrooms simulate that most ancient and persistent of situations: a group gathering to share ideas and maybe learn something in the process. The key here is “maybe.” Then again, why is “coworking” space all the rage these days? Because it’s pretty basic human stuff.
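The percentages in the quoted passage can be checked with a throwaway sketch (the numbers come straight from the quote; nothing fancy):

```javascript
// Quick check of the enrollment math from the quoted passage.
const enrolled = 1600000;            // "1.6 million to date"
const finishers = enrolled * 0.10;   // "fewer than 10%" actually finish
const passers = enrolled * 0.05;     // "something like five" per 100 enrolled
console.log(finishers); // 160000
console.log(passers);   // 80000
```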

Here’s a further at heart: a) people cannot be forced to learn (or watch Youtube videos) and b) institutions cannot guarantee learning (no matter the quality of TED talks). See a). That’s why accountability in education will always lead to comedy sketches. And there’s more to doing it than just wanting to. I’m not a great fan of thinking about education in the context of for-profit because of the human quotient. Imagine if I sold tables to customers with a sign that said: this one got a C. My point of view on this is that education is best viewed as a public service that will succeed or fail on the tenacity and mindfulness of students, not chocolate-covered systems that when bitten into reveal their broccoli center (You know, the chocolate covered broccoli syndrome typically associated with education games).

Just to refer back to that first quote I started with: I say, join the club.

I think it’s fascinating that Thrun is really bugged by his perceived failure. I would have to conclude that, given this, he’s a good teacher. Teachers who don’t obsess about improvement and who think they can actually teach well should find another line of work.


Do We Need More Coders?

John Dankoski did a fine show today on coding skills in relation to children, with some brief relational context built from current issues (well, you know, maybe those old systems should be rebuilt). One idea that could have been developed has to do with curriculum. There were two driving questions in the broadcast: do we need more children learning coding (get ’em into Alice) and do we need more coders (grab some javascript skills)? These are two separate questions. Another question is this: should school curriculum include machine logic and engineering in the bag? I think that’s the more significant question.

The reason it’s a significant question is because of the way people think about the purpose of an education. There’s a lot of talk today about S(Science)T(Technology)E(Engineering)M(Math) as a sort of new space-race for the future. But the acronym should be this: STEMH. Doesn’t make for a very good sound, but that H is for the Humanities. Okay, call it STHEM. Let’s say math is difficult. So is writing good poetry.

Let’s say we want to make something really complicated:

this.poem with an argument in the function generatePoem(poem) and eventually we’ll be sorting through an array.

One of the things we need to know is why something can be complicated. I don’t mean complicated in terms of thinking about why an activity might be difficult, like working through limits in calculus. Sometimes complication has to do with thinking about what we “might want” to do. The might adds complexity. For the above javascript we might want to make preexisting data available to the array. We might want to add an argument to the function: genre, for example. How?
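To make the “might” concrete, here’s a minimal sketch of that hypothetical generatePoem function with a genre argument added and an array to sort through. Every name here (generatePoem, genre, the sample lines) is illustrative, not any real library:

```javascript
// A hypothetical poem generator: takes an array of lines plus a genre,
// sorts a copy of the array (shortest line first), and returns the result.
const lines = [
  "the old pond",
  "a frog jumps in",
  "the sound of water",
];

function generatePoem(poem, genre = "haiku") {
  // Sort a copy so the preexisting data stays untouched.
  const sorted = [...poem].sort((a, b) => a.length - b.length);
  return { genre, text: sorted.join("\n") };
}

const result = generatePoem(lines, "haiku");
console.log(result.text);
```

Adding the genre argument is exactly where the complexity creeps in: now the sorting rule, the joining, maybe the whole data source “might” need to vary by genre.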

The question could be: do we need more poets who can understand the complexities of machine code? BUT ALSO, do we need more coders who can understand human language? Those are good questions too.

The Concept of Privacy

In all the hair-splitting going on about the US government intelligence apparatus having access to citizen activity metadata, I have yet to see a lot of crunching going on about what privacy means. It seems to me that privacy constitutes a relationship first between “I” and “me,” that is, my cerebral activity and how much of it leaks out and is captured by another agent. Imagine Basho on his rounds, leaving poems on the side of the road for others to read and, in the future, to be recorded in other forms. It’s hard to say whether another traveler is wandering by with a poem by Basho in their head. The observer can’t know what is in a person’s head. If I read a tweet, I don’t necessarily know if the “thought” is actually authentic. I simply take it as a “factual” grain.

This is a cut and paste of part of Twitter’s Collection clause:

Our Services are primarily designed to help you share information with the world. Most of the information you provide us is information you are asking us to make public. This includes not only the messages you Tweet and the metadata provided with Tweets, such as when you Tweeted, but also the lists you create, the people you follow, the Tweets you mark as favorites or Retweet, and many other bits of information that result from your use of the Services. Our default is almost always to make the information you provide public for as long as you do not delete it from Twitter, but we generally give you settings to make the information more private if you want.

The implication here falls on the idea of “choice,” that Twitter makes available “information you are asking us to make public.” Agreeing to the services provided is something the user “asks” for, and therefore the service complies via software. This seems fair, as it’s observed that people freely choose the service and that they understand that “you asked for it.” It would seem fair that the NSA could use this metadata, just like anyone else who understands the API.

Here’s the Log clause:

Our servers automatically record information (“Log Data“) created by your use of the Services. Log Data may include information such as your IP address, browser type, operating system, the referring web page, pages visited, location, your mobile carrier, device and application IDs, search terms, and cookie information. We receive Log Data when you interact with our Services, for example, when you visit our websites, sign into our Services, interact with our email notifications, use your Twitter account to authenticate to a third-party website or application, or visit a third-party website that includes a Twitter button or widget. Twitter uses Log Data to provide our Services and to measure, customize, and improve them. If not already done earlier, for example, as provided below for Widget Data, we will either delete Log Data or remove any common account identifiers, such as your username, full IP address, or email address, after 18 months.

This is the sort of metadata any run-of-the-mill database will have spaces for: IP, timestamps, whatever. People agree to this sort of backend storage, assuming they know what a device ID is. If they don’t, they might agree to the terms of use and basically lie to the service. Meaning: I agree, but I really don’t know what I’m agreeing to because I don’t know what a device ID is. I would assume this data would be interesting to law enforcement. But my authentic question is: is log data private or public information?
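As a sketch of what those “spaces” look like, here is a hypothetical log-data row with the kinds of fields the Log clause lists. The field names are mine, not Twitter’s actual schema:

```javascript
// A made-up log-data record mirroring the fields in the Log clause.
const logRow = {
  ip: "203.0.113.42",                  // full IP address
  timestamp: "2014-01-15T09:30:00Z",
  browser: "Safari",
  os: "OS X",
  referrer: "https://example.com/",
  deviceId: "a1b2c3d4",                // the "device ID" many users agree to without knowing what it is
  searchTerms: ["basho", "frog"],
};

// "Removing common account identifiers" after 18 months could be as simple as:
function anonymize(row) {
  const { ip, deviceId, ...rest } = row;
  return rest;
}

console.log(Object.keys(anonymize(logRow)));
```

Note that the scrubbed row still carries timestamps, search terms, and location-flavored data, which is the heart of the private-or-public question.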

In this, I think about passwords and the hashing tools, such as salts, that make them work. Again, I am assuming that a password is related to a “thought” I might want to keep private, “to myself,” as there’s risk in making it “public.” We know, though, that passwords are stored all over the place. They are also persistently entered, altered, and key-logged by at least two listening systems, else the system won’t open. In any event, everyone who uses Twitter possesses a password, but it’s strange to think of a password as “private” as it is “shared” in a sort of “middle place,” a limbo, let’s say, between private and public, or, as we say in modern terms, a database, which is sort of also like the modern rendering of a nature deity.

None of this, however, gets to a definition of privacy in the context of digital tools. Part of the legal stroke here has to do with “presumptions” of privacy. We have a reasonable presumption that our indoor conversations are none of the government’s business; therefore the government has no “interest” in peeping at us through the window: presumption and interest. Of added complexity is the notion of privacy itself in the linguistic storehouse. We know that digital culture has provided spaces for dispute about the meaning of choice and sharing. It may be that in the future people drop out of the culture and choose to live more selectively. Or people will sanitize their participation, so that all we get on Twitter are links to frog images. But metadata will still grow and accumulate, as data in and of itself is neither this nor that until it’s related to something else. Jaron Lanier has an interesting opinion piece in the NYT on the nature of data gathering and manipulation that’s well worth plowing through in this regard.

In literature courses, we can trace how people have viewed the line between private and public ideas. People have probably always known that they can get into a lot of trouble by speaking their minds. The image of the secret police has made vigilance the protagonist to the lordly “eye’s” antagonist. When one signs onto the Verizon contract, one should also know that something physical needs to be stored. If it is a 1, then we can always read the 1, then scramble the 1 to hide its identity. But does this constitute privacy?
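That “scramble the 1” move can be sketched in a couple of lines; the toy operation here is XOR, which hides a bit from anyone without the key but hands it right back to anyone with it. Scrambled, in other words, is not the same as private:

```javascript
// XOR as a toy scrambler: the same operation hides and recovers the bit.
function scramble(bit, keyBit) {
  return bit ^ keyBit; // flips the bit when keyBit is 1
}

const original = 1;
const key = 1;
const hidden = scramble(original, key);  // 0: the 1 is hidden
const recovered = scramble(hidden, key); // 1: anyone with the key reads it back
console.log(hidden, recovered);
```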

Paul Ford on “Machines”

From Paul Ford (from my growing lists of readings on technology)

If you read old manuals (I do), they were the same in the punch-card era. They’re the same now. What’s changed is, of course, the total penetration of the computer and Internet into society, and the way that this way of organizing the world has started to prevail, so much so that we sometimes fantasize about life five or six years ago.


What Does Learning Look Like: Reflections on MOOCs and Classrooms

This article by Amanda Ripley, titled “College is Dead. Long Live College,” is somewhat unnerving. I have all my current assignments ready for students in a software package called Digication, for reasons too long to mention in this post. Students will upload papers to each assignment and I’ll use the software to wade through them all and assess them. I manage the day-to-day of calendars, directions, and certain instructional aspects of my courses using a WordPress MU install run by Sixnut (that’s the name of the college strung in the opposite of the normal spelling).

Some students get confused and look for assignments in our version of Blackboard and say, “I couldn’t find the assignment.” But that’s another story. I have students who run up against technological problems. They run their home laptops off of current because their batteries are dead, so if the cat knocks the cord out the device goes blank. Or their printers’ color cartridges are down to dust so their drafts won’t print (who was the genius who decided that a black cartridge wasn’t ink enough to print a black-and-white essay?). And the price tonnage of ink prohibits just running to the store for more. I have a student who couldn’t participate in peer review sessions because he fell, broke his arm, and smashed his computer as his backpack took a good portion of the impact. Or so he says, though the sling he wears is some sort of proof. Many of my students don’t know how to solve common issues with their latest pricey equipment, which is typically far more advanced than mine. I sat with a student the other day, showing her/him how to actually close out running software on the latest greatest Mac and to find that hitherto unfindable paper. Sometimes those desktops are a real mess.

Most of my students have everything they need to do everything but the task at hand. This technological ambience is a phenomenon of everyday experience. Therefore, the question of how to make a college course a place where mindfulness is encouraged is now an apparent issue in design. The author writes:

This fall, to glimpse the future of higher education, I visited classes in brick-and-mortar colleges and enrolled in half a dozen MOOCs. I dropped most of the latter because they were not very good. Or rather, they would have been fine in person, nestled in a 19th century hall at Princeton University, but online, they could not compete with the other distractions on my computer.

It could be argued that the digital native is always at some task. I’ve noticed in class that these tasks rarely have much to do with what I want people to focus on, though often it’s hard to tell what’s in peoples’ heads. While some students appear off in the ether during a lecture or discussion, they are indeed listening or at least prove so later in response to a question or submitted work.

Ripley spends a lot of time developing her experience with a Udacity physics course. There’s a video intro, the instructor introduces himself, and then he and the students get down to business:

“This course is really designed for anyone … In Unit 1, we’re going to begin with a question that fascinated the Greeks: How big is our planet?” To answer this question, Brown had gone to the birthplace of Archimedes, a mathematician who had tried to answer the same question over 2,000 years ago.

Minute 4: Professor Brown asked me a question. “What did the Greeks know?” The video stopped, patiently waiting for me to choose one of the answers, a task that actually required some thought. This happened every three minutes or so, making it difficult for me to check my e-mail or otherwise disengage — even for a minute.

“You got it right!” The satisfaction of correctly answering these questions was surprising. (One MOOC student I met called it “gold-star methadone.”) The questions weren’t easy, either. I got many of them wrong, but I was allowed to keep trying until I got the gold-star fix.

My colleague John Timmons figured the repetition question out years ago in his online courses and approaches the question of testing in a sensible way, allowing students to relearn as they’re assessed. I’ve tried to mimic this approach in my own brick-and-mortar courses in a variety of ways. We’ve understood the importance of feedback and examine, in new media, how the digital can be advantageous in this regard. Trial and error, learning from mistakes, and the significance of testing guesses against experience are important for growth; games teach these lessons, as does getting lost in the mall as a child. If it was good enough for Sir Gawain, I claim, it’s good enough for me.

Studies of physics classes in particular have shown that after completing a traditional class, students can recite Newton’s laws and maybe even do some calculations, but they cannot apply the laws to problems they haven’t seen before. They’ve memorized the information, but they haven’t learned it — much to their teachers’ surprise.

The “teacher surprise” here is interesting to consider. One of the reasons for surprise may have to do with what teachers have learned to consider as the definition of success in a course, which is oftentimes geared to the narrow focus of a particular task, such as covering Chapters 5 through 7 so what’s in Chapters 5 through 7 can be “learned.” I remember having to memorize the nerves of the hand in Anatomy class because in Anatomy class it is important to learn all the hand’s nerves. But the meaning of the hand’s nerves to a non-major is difficult to fathom.

The intent of a course may simply be to memorize facts and to take a few multiple choice tests. The facts that form the subject of the course may be important to recall. The question is: should this be the intention of “any” course of study, which determines the flavor of feedback a student may be intended to receive? Question 2: should people be surprised to learn that rote learning, or even the application of heuristics, may not constitute problem solving or the ability to diagnose? If memory serves, my history courses in undergraduate school had a lot to do with reading about historical events and having to recall them on essays. But my memory fails in the details. What I do know is that I understand history now much differently than I used to; now it’s something I depend on. I’ve forgotten the nerves of the hand, though.

I’m not generally surprised at Richard Arum’s conclusions in Academically Adrift. In my work with academic curriculum over the last several years, I’ve come to the conclusion that expected application or knowledge testing isn’t always a part of courses in huge doses. In this context, I reflect back on my high school and undergraduate experience and remember that it was in the high school band where I had the best memory of learning, seconded by graduate school. One reason is Aristotelian in process, meaning that students are expected to go from general basics to specificity over the course of an arbitrary period of time, although the “arbitrary time aspect” isn’t Aristotle’s fault.

In the band, we worked as a team; in the band, we had all sorts of ways of applying what we learned; we often failed and walked away with lowered heads only to rear back upright when the competition was won; and when we sucked, the leader was never at a loss to cuss the hell out of us. I earned experience by watching that same teacher “outside” of the classroom in his devotion to discipline, art, and the machines of his trade, and to the amount of work he did to manage hundreds of students, and when he tackled the mysterious glue sniffer on the lawn prior to an afternoon marching practice, then waited for security to arrive, I saw him in a new light. I still remember him as a courageous person, personally flawed, sure, but he understood humanness and would do anything for his charges. If you didn’t practice, he always figured it out. He would apply the appropriate level of derision to your shittiness of character. With the guitar, you can either play a scale or you can’t. And when you can, there’s always the opportunity to improve, and if you don’t improve, YOU need to work harder at it. In band performances, either you got clapped at or you were nailed by tomatoes. But we needed the master teacher. We knew that if trouble encroached on the field, he’d tackle it, even if it meant personal damage.

The power of the digital is its ability to be trained or designed for individual people. It’s entirely possible to construct a learning environment where aid is available from a variety of sources and time streams and where asynchrony can work to the advantage of individuals. Maybe one person will take six months to learn what another person can learn in a month. Traditional teaching environments won’t allow for this obvious problem. Thus, a student who can’t demonstrate the requisite amount of learning in fifteen weeks “fails.” (This is indeed a certain kind of failure, but I can’t think of any successful game that operates this way. Failure in life should best be seen as a stage in learning.) A student can pay up and take the failed course again. The business plan, however, won’t allow for a student to pay once and take more time to demonstrate the required learning. There are also rules of fairness and the question of the value for the amount paid. The digital provides for disruption of all this.

But institutions don’t currently work this way, though they could. And so the digital disrupts the “structure” of a modern college degree regardless of the nature of the degree. I would posit that modern, mass education will always fail people if arbitrary, exacting structures provide the definitional framework, unless it is, indeed, judged as an “exclusive” system, like military training.

What prevents change? Definitions of value and organizational imagination.

Ripley’s essay is devoted heavily toward anecdotal evidence. While I appreciate Niazi and colleagues’ experience with MOOCs, their experience is a small slice of the story continuum. However, stories about people’s experience with online learning are significant circumstantially, providing context and raising good questions about priorities, such as the theme of good teaching and the arbitrary notion of periods of learning.