
Posted in Education Technology

The cliché is the fifty-year-old asking some ten-year-old student for help making the computer work. Having trouble working with your device or your software? Just grab one of those digital natives to handle it for you!

Well, not so fast. Here's Jenny Abamu at Edsurge saying what I've been arguing for over a decade-- our digital natives are hugely naïve about technology.

With the adoption of any new technology, there's a curve. In the 1910s, if you owned an automobile, you were also a reasonably savvy mechanic who knew how to work on your own machine. But in the century since, cars have become so advanced that fewer and fewer owners can actually repair their own vehicles.

It's a simple fact of marketing-- early adopters may be willing to learn the nuts and bolts of the tech, but to expand my market, I have to be able to say to the non-savvy buyer, "Don't worry-- the tech will take care of everything for you." I have to make the tech user-friendly, and the friendlier it is, the less my customers need to know. The goal is to move from a product that only an aficionado can handle to a product that any dope can use. We are well into Any Dope territory with computer tech (spoiler alert: Linux is not the PC wave of the future).

Fifteen to twenty years ago, I could count on a few students in each class who could code. I used student helpers to build the school website from scratch. But nowadays I have to explain to my students how to save a photo they like online, or how to use a Google Doc. And students at the New Media Consortium Summer Conference echo that:

“Something you can do to prep your students for college is to have one day where you host a workshop on using Google Docs,” suggested Alejandra Cervantes, a junior at UCLA, in response to a question from an educator about the best way to support high school students heading to college. “Something simple like that can be pretty instrumental in helping them succeed in classes in the future.”

And yes-- that quote and the article it's from raise their own set of issues. Because Google is working hard to inject itself into the ed world, and it's not doing it just to be a great humanitarian, so pieces like the Edsurge piece are meant to keep banging the drum that your student must know how to use Brand X Software or she'll fail at life.

And yet there is all this cool stuff to use, and my students don't have a clue. They know Snapchat, Instagram, a little Twitter, and whatever the hot app of the week is (developers who think they can come up with an educational app that students will use enthusiastically for a year, starting months from now-- those developers have a naïveté problem of their own). There are pieces of software that let them collaborate on projects-- they don't know how to use any of them. There are tools for combining art and images and videos in one project, and they don't know how to use any of them. And why do we keep reading stories about somebody who lost a job or a college spot because they posted something stupid online? Because the vast majority of my students have no idea how the interwebs actually work.

In some cases it is tunnel vision-- they just use what they use, which is what they picked up from friends or the pre-loaded software on their device. In many cases, it's lack of access. A Pew Research Report from 2015 says that 17.5% of households with children have no internet access. That does not seem out of line with my own student population (though virtually all of my students have their own smartphones).

I have beaten my head against this cyberwall for years. I was hugely excited about the possibilities of web-based projects in which students could take 15 or 20 different works of literature and show a web of relationships between them-- far more complex stuff than could be managed in a traditional paper. But when I gave them the assignment, what I got was a traditional linear paper with each paragraph on its own page, linked so that the reader could go forward or back a paragraph.

I am not a thoughtless technophile, and I never implement tech just to do it. If it's not useful, I don't care. Where it is useful (I have replaced the traditional English-teacher keep-em-writing practice of a paper journal with mandatory blogging for my students), I embrace it. But I have had to train and explore and learn myself first, because my digital natives are like people who have grown up in a big city but only know their way around their own two-block neighborhood and don't even know the actual names of the streets there.

If you want to get your students into the technofuture, you are going to have to lead them there, just like you do with Shakespeare and critical realism and new vocabulary words. That's the implication of this kind of article for teachers. The implications for people who think we should give standardized tests on over-the-net software-- well, that's another discussion (spoiler alert: it's a bad idea).


Posted in Education Technology


Technology always moves at the speed of exhaustion, but did you know about the LifeLine Modernization Act of 2016? The super short version: the 226-page act allows families living in poverty to qualify for a $9.75 internet grant for each home.

So what?

Well… the exact same families also qualify for reduced rates (Free/Reduced Lunch rates under USDA) at all national cable companies for $10.00 a month.

So…

...

Posted in Education Technology

Facebook absolutely insists on showing me "top stories." Every time I open the Facebook page, I have to manually switch back to "most recent," because even though the Facebook Artificial Smartitude Software thinks it knows what I most want to see, it can't figure out that I want to see the "most recent" feed. Mostly because the Facebook software is consistently wrong about what I will consider Top News.

Meanwhile, my Outlook mail software has decided that I should now have the option of Focused, a view that ranks my emails according to... well, that's not clear, but it seems to think it is "helping" me. It is not. The Artificial Smartitude Software seems to work roughly as well as rolling dice to decide the ranking of each e-mail. This is not helpful.

I pay attention to these sorts of features because we can't afford to ignore new advances in artificial intelligence. A whole lot of people think that AI is the future of education-- that computerized artificial intelligence will do a super-duper job directing the education of tiny humans, eclipsing the lame performance of old-school meat-based biological intelligence.

Take, for instance, this recent profile in Smithsonian, which is basically a puff piece to promote a meat-based biological intelligence unit named Joseph Qualls. Now-Dr. Qualls (because getting meat-based biological intelligence degrees is apparently not a waste of time just yet) started his AI business back when he was a lonely BS just out of college, and he has grown the business into... well, I'm not sure what, but apparently he used AI to help train soldiers in Afghanistan, among other things.

To his credit, Qualls in his interview correctly notes one of the hugest issues of AI in education or anywhere else-- What if the AI's wrong? Yes, that's a big question. It's an "Other than that, how did you like the play, Mrs. Lincoln?" question. It's such a big question that Qualls notes that much AI research is driven not by academics, but by lawyers who want to know how the decisions are made so they can avoid lawsuits. So, hey, it's super-encouraging to know that lawyers are so involved in developing AI. Yikes.

Still, Qualls sees this rather huge question as just a bump in the road, particularly for education.

With education, what’s going to happen, you’re still going to have monitoring. You’re going to have teachers who will be monitoring data. They’ll become more data scientists who understand the AI and can evaluate the data about how students are learning.

You’re going to need someone who’s an expert watching the data and watching the student. There will need to be a human in the loop for some time, maybe for at least 20 years. But I could be completely wrong. Technology moves so fast these days.

So neither the sage on the stage nor the guide on the side, but more of a stalker in the closet, watching the data run across the screen while also keeping an eye on the students, and checking everyone's work in the process. But only for the next couple of decades or so; after that, we'll be able to get the meat widgets completely out of education. College freshmen, take note-- it's not too late to change your major to something other than education.

Where Qualls' confidence comes from is unclear, since a few paragraphs earlier, he said this:

One of the great engineering challenges now is reverse engineering the human brain. You get in and then you see just how complex the brain is. As engineers, when we look at the mechanics of it, we start to realize that there is no AI system that even comes close to the human brain and what it can do.

We’re looking at the human brain and asking why humans make the decisions they do to see if that can help us understand why AI makes a decision based on a probability matrix. And we’re still no closer.

I took my first computer programming course in 1978; our professor was exceedingly clear on one point-- computers are stupid. They are fast, and they are tireless, and if you tell them to do something stupid or wrong, they will do it swiftly and relentlessly, but they will not correct for your stupid mistake. They do not think; they only do what they're told, as long as you can translate what you want into a series of things they can do.
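That lesson is easy to demonstrate. Here's a minimal sketch-- my own illustration, not anything from that 1978 course-- of a program with a deliberately wrong formula; the computer applies it swiftly, relentlessly, and without a flicker of doubt:

```python
# The computer does exactly what it is told, even when what it is told
# is wrong. (Hypothetical example; the formula bug is deliberate.)

def fahrenheit_to_celsius(f):
    # The correct formula is (f - 32) * 5 / 9. The programmer flipped
    # the fraction; the computer will not notice or care.
    return (f - 32) * 9 / 5

for temp in range(0, 101, 25):
    print(temp, "F ->", fahrenheit_to_celsius(temp), "C")
# Prints confident nonsense for every input (100F comes out as 122.4C,
# not 37.8C), as fast and as often as you care to ask.
```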

Much of what is pitched as AI is really the same old kind of stupid, but AI does not simply mean "anything done by a computer program." When a personalized learning advocate pitches an AI-driven program, they're just pitching a huge (or not so huge) library of exercises curated by a piece of software with a complex (or not so complex) set of rules for sequencing those exercises. There is nothing intelligent about it-- it is just as stupid as stupid can be, but implemented by a stupid machine that is swift and relentless. That software-driven machine is the opposite of intelligence. It is the bureaucratic clerk who insists that you can't have the material signed out because you left one line on the 188R-23/Q form unfilled.
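If you're curious what that looks like under the hood, here is a minimal sketch of the kind of rule-driven sequencing such products often amount to (all names and thresholds here are hypothetical, my own illustration):

```python
# A curated exercise bank plus hand-written branching rules. No
# intelligence anywhere-- just a lookup table and a few if-statements,
# executed swiftly and relentlessly. (All names/thresholds hypothetical.)

EXERCISE_BANK = {
    "fractions_1": {"skill": "fractions", "difficulty": 1},
    "fractions_2": {"skill": "fractions", "difficulty": 2},
    "fractions_3": {"skill": "fractions", "difficulty": 3},
}

def next_exercise(current_id, score):
    """Pick the next exercise by fixed rule, not by anything like thought."""
    level = EXERCISE_BANK[current_id]["difficulty"]
    if score < 0.6:                 # rule: struggling -> step down a level
        level = max(1, level - 1)
    elif score > 0.9:               # rule: cruising -> step up a level
        level = min(3, level + 1)
    for ex_id, ex in EXERCISE_BANK.items():
        if ex["difficulty"] == level:
            return ex_id
    return current_id

print(next_exercise("fractions_2", 0.5))   # -> fractions_1
```

Swap in a bigger bank and fancier thresholds and you have the "engine" behind many a personalized learning pitch.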

There are huge issues in directing the education of a tiny human; that is why, historically, we have been careful about who gets to do it. And the issues are not just those of intelligence, but of morals and ethics as well.

We can see these issues being played out on other AI fronts. One of the huge hurdles for self-driven cars is the moral question-- sooner or later a self-driven car is going to have to decide who lives and who dies. And as an AP story noted just last week, self-driven car software also struggles with how to interact with meat-based biological intelligence units. The car software wants a set of rules to follow all the time, every time, but meat units have their own sets of exceptions and rules for special occasions etc etc etc. But to understand and measure and deal with and employ all those "rules," one has to have actual intelligence, not simply a slavish, tireless devotion to whatever rules someone programmed into you. And that remains a huge challenge for Artificial So-called-intelligence. Here are two quotes from the AP story:

"There's an endless list of these cases where we as humans know the context, we know when to bend the rules and when to break the rules," says Raj Rajkumar, a computer engineering professor at Carnegie Mellon University who leads the school's autonomous car research.

"Driverless cars are very rule-based, and they don't understand social graces," says Missy Cummings, director of Duke University's Humans and Autonomy Lab.

In other words, computers are stupid.
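To make Rajkumar's point concrete, here is a toy sketch (scenario and rules entirely hypothetical, my own illustration) of a rule-follower meeting a situation every human driver handles by bending a rule:

```python
# A human driver briefly crosses the double yellow line to get around a
# stalled truck. A pure rule-follower has no notion of "bend the rule,"
# so it does the only thing its rules allow. (Hypothetical scenario.)

RULES = {"never_cross_double_yellow": True}

def rule_based_driver(lane_blocked):
    if lane_blocked:
        if RULES["never_cross_double_yellow"]:
            return "stop and wait (forever, if need be)"
        return "cross briefly, pass, and move on"
    return "drive on"

print(rule_based_driver(lane_blocked=True))
# -> "stop and wait (forever, if need be)" -- context-free obedience
```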

It makes sense that Personalized Learning mavens would champion the Artificial Stupidity approach to education, because what they call education is really training, and training of the simplest kind, in which a complicated task is broken down into a series of simpler tasks and then executed in order without any attention to what sort of whole they add up to. Software-directed education is simply that exact same principle applied to the "task" of teaching. And like the self-driven car fans who talk about how we need to change the roads and the markings and the other cars on the highways so that the self-driven car can work, software-driven education ends up saying, "This will work well if you change the task to what we can do instead of what you want to do." You may think you can't build a house with this stapler-- but what if you built the house out of paper! Huh?! Don't tell me you're so stuck in a rut with the status quo that you can't see how awesome it would be!

So, they don't really understand learning, they don't really understand teaching, and they don't really understand what computers can and cannot do-- but outside of that, AI-directed Personalized Learning fans are totally on to something.

And still, nobody is answering the question-- what if the AI is wrong?

What if, as Qualls posits, an AI decides that this budding artist is really supposed to be a math whiz? What if the AI completely mistakes what this tiny human is interested in or motivated by? What if the AI doesn't understand enough about the tiny human's emotional state and psychological well-being to avoid assigning tasks that are damaging? What if the AI encounters a child who is a smarter and more divergent thinker than the meat widget who wrote the software in the first place? What if we decide that we want education to involve deeper understanding and more complicated tasks, but we're stuck with AI that is unable to assess or respond intelligently to any sort of written expression (because, despite corporate assurances to the contrary, the industry has not produced essay-assessment software that is worth a dime, because assessing writing is hard, and computers are stupid)?

And what if it turns out (and how else could it turn out) that the AI is unable to establish the kind of personal relationship with a student that is central to education, particularly the education of tiny humans?

And what if, as is no doubt the case with my Top Stories on Facebook, the AI is also tasked with following someone else's agenda, like an advertiser's or even a political leader's?

All around us there are examples, demonstrations from the internet to the interstate, of how hugely AI is not up to the task. True-believing technocrats keep insisting that any day now we will have the software that can accomplish all these magical things, and yet here I sit, still rebooting some piece of equipment in my house on an almost-daily basis because my computer and my router and my ISP and various other devices are all too stupid to talk to each other consistently. My students don't know programming or the intricacies of certain software that they use, but they all know that Step #1 with a computer problem is to reboot your device, because that is the one computer activity that they all practice on a very regular basis.

Maybe someday actual AI will be a Thing, and then we can have a whole other conversation about what the virtues of replacing meat-based biological intelligence with machine-based intelligence may or may not be. But we are almost there in the sense that the moon landings put us one step closer to visiting Alpha Centauri. In the meantime, beware of vendors bearing AI, because what they are selling is a stupid, swift, relentless worker who is really not up to the task.


Posted in Education Technology


"PC load letter?!" Can't say I've said that in a long time. I love G Suite for Education, so there is little need to get frustrated with the printer, let alone print things out. Unless that is, you want to print something with a 3D printer!

At the end of January, I wrote a grant for the Polar3D printer, and as luck would have it, I received it. My Polar3D printer arrived in my classroom last week, and hopefully we will be printing tomorrow. Not only is this my sixth-grade students' first opportunity to use a 3D printer, but it is also mine. So it is a very exciting time as my students and I explore this new piece of technology together. We are all learning so much already, but what I am seeing and learning from my students is incredible.

Students as engaged learners

Before allowing my students to print, I had them go through the tutorial lessons on tinkercad.com, the free, web-based platform we will be using to create our 3D projects. There are six tutorials that take students through different parts of the design process. Some struggled, while some excelled, but every student was an engaged learner. They were hooked as soon as they saw the 3D printer in the classroom. So when they went through the tutorials, they were asking questions without any prompting and answering their classmates' questions without any prompting.

...

Posted in Education Technology


So last year I started a Coding Club...with kindergarten students.

Oh, it gets better: I haven't been a 'teacher' in over a decade!

The Woodson Coding Club started with a personal passion: to find out more about my son's love of robotics. That passion grew into a mission to provide our youngest learners the opportunity to create, design, and amplify their own learning, and in turn my own.

This year we are back again, this time with a little more research and a new framework to support this learning. We will run two 30-minute sessions during each of two 6-week blocks, for a total of at least 60 students participating. Teachers are identifying students who might have an interest or spark in this work and, like last year, ensuring that the students in our club represent the diversity of our classrooms.

...