This is a post whose original core idea has aged badly since its stubbing nearly four years ago, but looking at it, I thought the total rewrite it now needed was actually indicative of something worth saying. So I hope you’ll forgive me if I lead you through some old thinking before I take stock of where we now seem to be. The subject is the digital transformation of university teaching, and specifically that demon of internet commentary circa 2015, the Massive Open Online Course or MOOC.

‘MOOC: every letter is negotiable’, by Mathieu Plourde (Mathplourde on Flickr) – https://www.flickr.com/photos/mathplourde/8620174342/sizes/o/in/photostream/ File:MOOC-Poster.png, CC BY 2.0, Wikimedia Commons
It is one of my contracted professional obligations in my current job to keep abreast of trends in the scholarship of education itself, as well as my mainline academic field or fields. This makes sense in as much as teaching is most of what, professionally, I do, but it is not without its frustrations, as that scholarship is prolific and I don’t think it’s controversial to say that it has yet really to establish clear standards of quality control. There certainly are serious, large-cohort or long-running studies of techniques or teaching environments and so on that make full use of the potential of the classroom (or rather, lots of classrooms in collaboration) as a laboratory for educational strategy and technique.1 The trouble for the people who put that kind of experimental planning and effort into their pedagogical scholarship is that they seem to stand as much chance of getting published as, and when they do to share more or less equal standing in publication and citation with, studies whose experimental basis is more like ‘I’ve been teaching small groups in the same small institution to the same general demographic for thirty years now and though I don’t have records or anything, here’s what seems to work.’2 And I have to say, I have probably learnt at least as much about teaching from some of the latter as I have from the former, which might unkindly be seen mainly to provide data that shows what we mostly already suspected.3
But there is another trend in this scholarship, anyway, which is prognostication. I could list you scads of articles that exist only to say, “we predict that everything is going to change in the next few years in such-and-such a direction”, and they would probably almost all fall into three groups, being about either the ‘flipped classroom’, ‘blended learning’ or MOOCs. Of these, the most accurate looks like the ‘flipped classroom’ group, arguing with often-sound data for the improvement in learning and retention that happens if you use your classroom time for interaction and discussion rather than for lecturing: have the information acquisition set as preparation, and instead use the classroom time to work with that information and make sure it’s understood at the level where the students can do stuff with it.4 The only thing that annoys me about that scholarship is how they manage to keep selling as new methods that represent how the humanities, at least, has largely been taught for decades. Asking students to read in advance and come to class prepared to discuss the reading, or even to have written something about it already, is how I was taught and, I think I could safely say, how my father was taught either side of the Second World War, so how this keeps being published as a new idea beats me.5 Of course, it blurs into the second category, blended learning, when the prep work is digital content rather than just reading or a physical-space lecture, but the actual structure of the learning is not different. Anyway, this is not actually my subject for the day, just a frustration.

You see, in what discipline or subject area has the right-hand method been ‘traditional’ inside this century? And could whoever they are please catch up with the humanities so that we don’t keep getting buffeted in the bow wave of your reform efforts? Image from the UK’s Learning Foundation, linked through; I claim fair use because commentary…
So instead my target is the scholarship or commentary which, in 2017, was saying that the future was either partly-online or partly-digital education, i.e. blended learning, which has been going on for a while, or else fully-online and centralised in the form of the MOOC. I was actually signed up to a MOOC for 2015-16—is it ironic that it was about blended learning? I’ll let you decide—as training for an administrative role I then held, and so I got some idea not just of how they were supposed to work but of how they actually do, or don’t. (Of course, this is the same sample-of-one no-control anecdotal standard of scholarship I was just separating from stuff with any wider basis, but this is why it’s a blog post not a journal article, isn’t it?) But even as I was doing this, the generally hyperbolic level of excitement about them had struck me as weird. I think it was a recurrent thread on a blog I wish I still had time to read, Not of General Interest, that alerted me to this, but then I started collecting references, and some example titles would be, ‘MOOCs: another weapon in the outreach armoury’, ‘MOOCs are Coming’, and my two favourites, ‘This Could Be Huge’ and ‘An Avalanche is Coming: Higher Education and the Revolution Ahead’.6 It’s probably worth noting that none of these were peer-reviewed work, although the third is good journalism and I cite it often below; but it’s almost as if genuine educators didn’t find all this plausible… And indeed, as Undine’s posts already linked show, it wasn’t hard at the time to find push-back and criticism as well, but we seemed to be pushing against a technology-evangelical wall.
I say ‘we’ because I had definitely settled against the whole MOOC idea by the time I originally stubbed this post. This was not because I thought they were dreadful ways to learn, as such, but I did think that they were somewhat missold, and most of all that there was a bait-and-switch going on from their evangelists about the economics involved. The rhetoric of all the literature I could find, and of the teaching I received, was to the effect that you, the educator, can now access this mass of pre-prepared digital content and incorporate it into your own courses/modules/whatever, thus saving you valuable time and enlivening your material for your students!7 And call me a cynic, but my reaction was, “that sounds a lot like, ‘your content is boring because it’s not modern and digital; someone else is already doing what we want from you better than you are; if we can just replace all your stuff with content we bodge in from elsewhere, or even just record your content, we can get rid of you, save your wage and just pay teaching assistants to do seminars on the recorded content, because hey man, it’s history, it’s literally all in the past, not like it’s gonna change amirite?'” Using MOOCs was pretty clearly out-sourcing, and one only ever does that to save labour costs. In other words, I saw MOOCs pretty squarely as a resource to which cash-poor universities could resort to cut teachers, and which cash-rich universities who didn’t need to cut teachers could create to sell to the cash-poor ones. That was more or less explicitly Harvard’s rationale for starting to generate them, and I’m sure there were other universities whose digital learning teams had similar glinty-eyed aspirations.8 But to me it seemed obvious that we should neither use nor make these things because by doing so we would, somewhere or other, be making a colleague and maybe eventually ourselves redundant.

I include this just because it is so impressively meaningless; it is supposedly a representation of the ‘digital pivot’ from a story in the MIT Technology Review from last year (linked through), which was sponsored by Hitachi but seems mainly to be about Walmart.
Now, that was the point of the original post, but the thing is, from four years on, it’s obvious that the promised revolution hasn’t happened, isn’t it? This was already becoming evident even as I stubbed the post; only a few days before, major MOOC provider Udacity had decided to stop generating new courses and declared MOOCs ‘dead’, and by 2019 the subsidence of the phenomenon had been noted even in Science.9 But then came the Great Digital Pivot of 2020, which you might have thought was the tidal wave that should reverse the fortune of this dying tool, the point where everyone had to go not even blended but full-online. But it seems to have made no difference. Coursera, FutureLearn, EdX and a few other firms continue to offer such courses, but we have seen neither of the outcomes once predicted, where cheap online learning replaces universities or where universities start using extensive external content instead of their own staff (though if some of the staff cuts currently being fought in the UK go ahead, I guess that could still change).
So why not? Why didn’t the pandemic save the MOOC and why was it so ill in the first place? I think there are two big reasons. Firstly, as was being observed even in 2012, no-one was quite sure how to make them pay. The point of them was to be open access, after all, so you couldn’t charge up-front. The way Coursera used to work, and may still, is that you could do the whole course for free but had to pay a small amount to get certification that you had done so; I guess that the logic was that those who had actually completed the course would want to be able to prove it, and with literally millions of learners, even if conversion rates were terrible, you should still see enough of those payments to cover creation or licensing of content and running costs of the IT infrastructure, and even minimal ongoing staff time; a teacher-pupil ratio of 1:21,000 (as instanced below) should make that possible… But it seems that drop-out rates were even more terrible than that, and that actually you can just about keep going on that model but not make the forecasted mint. So only the biggest providers have survived and very few universities have built MOOCs to try and make money; a few embraced them for a while as advertising for full degree courses, but I’ve seen nothing to suggest that that seems to work either.10 So reason one: a massive misjudgement of the world population’s willingness to pay for this kind of product.
I would like to think that reason two was that the turkeys who would have had to generate this content saw the wisdom in not voting for Christmas, but I’m pretty sure that in this employment climate you can pay would-be academics to teach pretty much anything, however much it may be against their long-term interests. So reason two might instead be a thing which the pandemic has very clearly exposed, which is that whether it’s because they think it results in better learning or because it forms part of the much-championed but little-specified ‘university experience’, people doing degree-level education actually want to receive it direct from people they think are experts, and have the chance to interact with those strange beasts in real time. In theory, that was possible in MOOCs, if you signed up to them when they were new and kept up with their schedules; the designers and instructors would be around and responding to comments as far as they could. But with 38,000 people signed up to a course, many contributing several times weekly, and probably two staff members running it in only some of their time, you can see how much chance there is of reaction from the instructor really happening for most students.11 If you were well behind on the content, then you probably didn’t even have the benefit of peer discussion and learning; there just wouldn’t be enough people on the same unit as you at the same time to sustain a conversation. This was certainly my experience, and I suppose illustratively, I never actually finished the course myself. 
An Edinburgh study in 2013 made the problems even more clear, however: they found that a MOOC took eight hours a week to set up and sixteen hours a week to run, for only one of the two staff involved; only 2,000 of 42,000 students enrolled actually completed it; the students expected instructors to be more present than was actually possible; those who fell behind didn’t find it desirable to catch back up (as I also found); and, as one of their students (identified as “Bertin”) put it on their blog, “the overall effect for me was knowing that I don’t want to do an e-learning course they run that I had previously been interested in taking”.12 Oops!
What this means is that making these things work is actually very hard, but even when they do there’s really very little difference, other than the endurance required to finish a MOOC, between it and any other online training course like the ones they give university staff on heavy lifting or fire safety or gender and race equality, all useful within limits, but basically canned content with zero interaction with the supposed teacher. And it seems clear that, even though often enough when you have students in a classroom with you it seems like interacting with you is absolutely the last thing they want to do, when the alternative is no interaction at all, it’s worse, and they’ll pay to avoid it, or at least accumulate fairly abstract debt (in England and Wales, anyway; I realise that student debt has more serious implications where the Student Loans Company doesn’t periodically go bankrupt from shortage of repayments). And MOOCs, it seems, were and are that alternative. Perhaps they could be made to run better, with more student-teacher interaction and more live content; but that would send their costs up, reduce their universal accessibility (because live means a fixed time and place) and probably therefore make their margins worse, not better. So I cautiously think that university teachers might now be safe from this revolution, at least. The blended learning evangelists look like being a lot closer to the future, and indeed the present really, but that would be another post. Let me mature that one a few years too before trying it, eh?
1. I hope I can be forgiven no more than one example per generalisation, even though by so doing I am myself normalising bad scientific practice; sorry. But: the biggest large-cohort highly-designed meta-study I know is Elif Kara, Mirco Tonin and Michael Vlassopoulos, ‘Class size effects in higher education: Differences across STEM and non-STEM fields’ in Economics of Education Review Vol. 82 (Amsterdam 2021), 102104, DOI: 10.1016/j.econedurev.2021.102104.
2. For example, the perfectly worthy but not really scientific Anne Firor Scott, “Why I Teach By Discussion” in A. Leigh Deneef and Craufurd D. Goodwin (edd.), The Academic’s Handbook, 3rd ed. (Durham NC 2007), pp. 212–216, on JSTOR here. A charming example I also have to cite is Harry Brighouse, “Becoming a Better College Teacher (If You’re Lucky)” in Daedalus Vol. 148 (2019), pp. 14–28, DOI: 10.1162/daed_a_01758.
3. A really good anecdotal practice paper is Brett Lunceford, “There Are No Girls in My Classroom: A Pedagogical Note” in ETC: A Review of General Semantics Vol. 68 (New York City NY 2001), pp. 63–67, which I don’t know how you’d find without being told. A somewhat unsurprising large-scale study is Louis Deslauriers, Logan S. McCarty, Kelly Miller, Kristina Callaghan and Greg Kestin, “Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom” in Proceedings of the National Academy of Sciences Vol. 116 (Washington DC 2019), 19251, DOI: 10.1073/pnas.1821936116, which laboriously shows that on the whole students prefer it when they don’t have to work as hard to grasp what’s being taught even if it teaches them better to do so.
4. There is an incredible amount of work trialling flipped-classroom approaches or resisting them. A very recent and huge meta-study, which ought to lay the matter to rest but probably won’t, is Khe Foon Hew, Shurui Bai, Weijao Huang, Phillip Dawson, Jiahui Du, Guoyuhui Huang, Chengyuan Jia & Khongjan Thankrit, “On the use of flipped classroom across various disciplines: Insights from a second-order meta-analysis” in Australasian Journal of Educational Technology Vol. 37 (Tugun 2021), pp. 132–151, DOI: 10.14742/ajet.6475. Some suggestions that the effects might be socially variable in Elizabeth Setren, Kyle Greenberg, Oliver Moore & Michael Yankovich, Effects of the Flipped Classroom: Evidence from a Randomized Trial, discussion paper #2019.07 (Cambridge MA 2019), online here.
5. Evangelism: Dan Berrett, ‘How “Flipping” the Classroom Can Improve the Traditional Lecture’ in Chronicle of Higher Education, 19th February 2012, online here; Jonathan Bergmann and Aaron Sams, Flip your classroom: reach every student in every class every day (Eugene OR 2012); Jennifer Gavriel, “The flipped classroom” in Education for Primary Care Vol. 26 (Abingdon 2015), pp. 424–425, DOI: 10.1080/14739879.2015.1109809; or Betty Love, Angie Hodge, Cynthia Corritore and Dana C. Ernst, “Inquiry-Based Learning and the Flipped Classroom Model” in Problems, Resources, and Issues in Mathematics Undergraduate Studies Vol. 25 (Abingdon 2015), pp. 745–762, DOI: 10.1080/10511970.2015.1046005.
6. Respectively Chris Parr, “MOOCs: another weapon in the outreach armoury” in Times Higher Education, 11th July 2013, p. 11; David Williams, “MOOCs are coming”, AdvanceHE, n.d., online here; Zoë Corbyn, “This could be huge…” in Times Higher Education, 6th December 2012, pp. 34–39; and Michael Barber, Katelyn Donnelly & Saad Rizvi, An Avalanche is Coming: Higher Education and the Revolution Ahead (London 2013), online here. Corbyn says some academics were then forecasting a ‘Napster moment’ (p. 36), which seems unintentionally accurate given that in the end Napster perished and traditional recording labels somehow survive…
7. See even now Peter G. M. de Jong, James D. Pickering, Renée A. Hendriks, Bronwen J. Swinnerton, Fereshte Goshtasbpour and Marlies E. J. Reinders, “Twelve tips for integrating massive open online course content into classroom teaching” in Medical Teacher Vol. 42 (Abingdon 2020), pp. 393–397, DOI: 10.1080/0142159X.2019.1571569.
8. Harvard’s offering discussed in Corbyn, “This could be huge”, pp. 36 & 38.
9. Justin Reich and José A. Ruipérez-Valiente, “The MOOC pivot” in Science Vol. 363 (Washington DC 2019), pp. 130–131.
10. Corbyn, “This could be huge”, p. 38, where the example is the University of London, in one of its very rare corporate actions.
11. Numbers from the Stanford course around which Corbyn, “This could be huge”, is centred; see esp. pp. 36 & 38.
12. Reportage from Chris Parr, “Wisdom and Crowds” in Times Higher Education, 18th April 2013, pp. 24–25.
You’re a cy . . . realist.
Thank you for the references and links!
Well, if it helps, I’m happy to have done so!
“one of my contracted professional obligations in my current job to keep abreast of trends in the scholarship of education itself”: harsh!
Although I’ve always found it hard to put my finger on why students prefer attending lectures to opening a ruddy book and then pitching up for a discussion, I’ve little doubt that most of them do. I assiduously attended lectures myself when new to the game; later in my undergraduate course I became selective about which lectures I chose to attend. But at least I always had the nous to attend labs and tutorials – there are lessons you can expect to learn there that you won’t get out of either a book or a lecture.
Anyway, it occurs to me that there is one undoubted advantage of live lectures over computery things – feedback. We had a physics lecturer who was so bad that we drove him from the classroom by throwing things at him. For his next scheduled lecture the department fielded someone else who was indeed better. Them wuz the days.
I’m glad to say I’ve never had feedback quite that active! But of course we do run feedback surveys still, and those too are now born-digital, but even when they weren’t, they included a question about the electronic resources. This year, of course, everything was electronic. Weird thing is, the feedback is better than usual…
You are putting it very mildly and politely: “that scholarship is prolific and I don’t think it’s controversial to say that it has yet really to establish clear standards of quality control.”
Well, I could be stronger in my expression but as I go on to say, I have personally found that the stuff that should be the most unscientific is also often the most insightful and useful, so I think that I really do mean that the discipline has yet to work out how to identify quality. I actually don’t mean to say it’s all terrible!
When I was new to lecturing I had an 18 lecture course to deliver. So after six lectures I distributed a form to elicit comments, so that I still had twelve lectures in which to respond to them.
Later I was obliged to take part in a departmental scheme which meant that I got the comments only after the course was over. And anyway the form was inferior to the one I’d composed. Progress, eh?
Later I learnt to my amusement that the secretarial staff categorised the academic staff according to whether they cheated with their feedback forms. I was told that I was one of the angels. I said that, alas, I had once destroyed a form rather than submit it. Why? Because it was laced with obscenities that were obviously designed to upset the lass who dealt with the forms. Her crime was probably to have been rather attractive. Oh dear; man in his fallen state.
True that. I also agree about the feedback process as it is now usually done: I think it’s gaming the system by making sure that the students who’ve already voted with their feet don’t get a say. I’ve seen several teaching manuals or articles advise the intermediate feedback technique you describe, though, and even know some other people who do it. (For example, it’s in the ancient old-hand’s guide, Arthur W. Chickering and Zelda F. Gamson, “Seven Principles for Good Practice in Undergraduate Education” in Biochemical Education Vol. 17 (1989), pp. 140–141, which was already its second printing because it’s full of sense like that.) It’s obviously more direct use than the systematized ones, but those also enable comparisons – really bad comparisons. See Troy Heffernan, “Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching” in Assessment & Evaluation in Higher Education (2021), 11p, DOI: 10.1080/02602938.2021.1888075 for some horror stories…
“Seven Principles for Good Practice …”: I’m on their side already. “Good practice” => useful advice; “best practice” => authoritarianism.
When I was in Queensland the academic staff had no easy opportunity to be angels or demons: an outside contractor distributed, gathered, and analysed the forms.
Here’s a serious question. Academic staff have all been undergraduates and therefore have had ample opportunity to judge which teaching methods work well, which badly. How is it, then, that so many teach badly themselves? It’s a mystery.
It is a good question! I can come up with three suggestions but of course, I’ve no real data except my own experience on which to base them (and you have more of that than me).
First might be, academics are an odd sample of the student body: we often don’t need that much teaching and tend to be self-driving students. We may just not have paid that much attention to how we were being taught… (I also had some very bad teaching, at least in lecture halls if not in supervisions, and was completely unaware of how seminars were supposed to work when I first had to lead them because of basically not having experienced them as pupil… but that is only to say that Cambridge may be a weirder sample than most.)
Second, what we are now asked to do may not resemble how we ourselves were taught much. No VLEs or digital aids in my undergraduate experience; even Google wasn’t up and running then! I pioneered use of a VLE teaching at Birkbeck but before anyone else had really got their heads round it they junked it for one that worked better, so even that didn’t pay forward very much.
Third, hopefully weakest, suggestion would be that teaching well takes at least some time on planning and thinking about how to; but time management advice I’ve received from many people is to let teaching preparation take as little time as possible. Teaching quality may be the enemy of quality of life, sadly…
But as I say, I’m going on weird-or-no data here. What do you think?
For the typical student, “good” seems to be all about methods, presentation, making students secure about mastering what is required and getting good grades. Whereas for me, “good” meant the professor had deep knowledge and strong interest in the material, and in communicating about it, and there was something they really wanted to do or get across in that particular class, and they were interested in the students. With those characteristics, nothing else mattered: they could be organized or not, lecture or not, etc., etc. I wanted them to be individuals and show it but the typical student wants them to be, maybe, predictable but have surprising jokes. Something along these lines.
Great post.
An interesting perspective, and also especially welcome for seeing you here again! I think, from the way that my personal tutees will occasionally talk about their teachers, or our graduates do, that my current student body values three things about their teachers, in declining order, which would be (i) kindness and supportiveness, (ii) ease of comprehensibility and, way down the list, (iii) expertise. I think they either believe that we’re all super experts or don’t know what expertise actually looks like. However, in writing this it occurs to me that this anecdotal impression would be quite different from the one you’d get from our feedback surveys, which are of course structured around particular questions but where our top three goods would probably be (i) preparedness, (ii) clarity of explanation of assessment tasks and (iii) not being boring in lectures. I’m not sure what that suggests, but one thing it does is that our surveys might be imperfect measures. Of course, all science and sense was already ganging up to tell us that, as documented above, but as long as England’s National Student Survey (about which there are similar, national-scale doubts) governs anything at all, we will keep generating skewed numbers to feed into it, I guess.
I doubt whether there’s a general explanation. I did have a colleague who once relaxed so much as to confess to me that he deliberately taught badly because that way his teaching load was kept low. But how many are as cynical? (Question expecting the answer “plenty”.)
Maybe a common problem is that many university teachers find no satisfaction in teaching well and therefore make little effort. Why people would choose to spend much of their life doing something they don’t enjoy beats me.
My guess is – and there may be data to refute it – that students might differ in their assessments of who teaches well but are likely to be near unanimous on who teaches badly.
I could certainly add one or two people to any answer to your first question, but not many, overall. I only really began to enjoy, rather than fear, teaching once I was comfortable that I was good enough at it to get by; it is by so long a way the thing over which we academics are most often evaluated that it took that certainty that I’d pass before I could really feel secure in a classroom. If one never gets good enough, I suppose that one might always then fear it and resist its expectations… But yes, why you’d then do the job I’m not sure!
I think I cannot win, because I cannot be prepared. I cannot be prepared because I have NO idea what the students will be ready for or willing to take an interest in until I meet that particular group, they vary so now and telling them to take another class if this one isn’t the right one for them is not an option since we have so few classes now. I could be prepared at the film or book club level, and should perhaps think of classes like that.
You may be positioned slightly further up a slope down which most medievalists have already slid: we just have to assume that our students aren’t ready and we will have to start them from the ground up. At least, I think this in terms of background knowledge of history of our period, but we also meet that lack of readiness in terms of writing skill or indeed reading ability, though less consistently. I have to assume novices in my classroom until final year, and even then, I may be teaching students who never did any medieval history before. Against this, I find the best preparations are scaffolding—making sure the classes support each other by connecting in themes or content, so only, say, 75% of any given class is new stuff and the other 25% is reinforcement, and the very first is mostly overview—and absolute clarity about assessments and what they need actually to do or read. (I say how much I’d appreciate extra effort, but the minimum is really clearly drawn. Then at least if they don’t do it, they don’t dare come to me and say they’re lost…) With those things laid down, a few, a few may take their feet off the bottom and see if they can swim a bit…
It occurs to me looking that over that I am distinguishing quite sharply between who students are and what interests them, on one hand, and what they are equipped to do, on the other, which might relate but aren’t the same thing. You are writing of the former and I answer with the latter. Sorry if that’s not actually helpful…
It’s the language skills. They may be literate native speakers of Spanish and they may speak no Spanish. Yet I am to be teaching literature in Spanish. How many pages per week will the average student be able to read? Will most students be able to understand a class given in Spanish, if we all speak very slowly? How much can I expect them to write if I do not want them to compose in English and use Google translate? Should I require them to speak Spanish themselves, or allow them to slip into English if they need to? How do I convince the illiterate ones who speak good colloquial Spanish that there is more to learn? Will they reject all literature written before 1950 as being written in an impossible to read “Old Spanish”? Will they refuse to read literature at all? One thing I HAVE learned to assume is that they will not have read a piece of fiction before. If they have, it’s great, but many never have and also have never read for pleasure, only for information or to read instructions, and they’ve either been beaten if they made any mistakes about what the reading said or been told reading is unimportant and they shouldn’t be forced to do it.
Good lord. I am not going to risk suggestions there, but a university-level Spanish class in which someone may speak or read no Spanish is a situation I don’t think you can be blamed for finding difficult. Is this because students take this course/these courses as electives? I find it hard to understand why someone without Spanish would take it at university (and then not read). But there is obviously stuff about student background and college level I’m perceiving only darkly here.
Haha! Well: some have been taught how to do exercises and pass “objective” examinations, but have not been asked to speak or read, or write anything they have not had translated and then memorized. And they are studying to be high school teachers, and plan to teach that way themselves. Others are at the university to get a degree, any degree, and have chosen Spanish because they speak it–and feel they know something about “the culture”–and are not there to learn but to pick up the degree. Then of course you can also get students who are what I would call more normal university students.
Right, I begin to see. School language teaching in the UK has got pretty weak, but may not be quite that far gone. I still get some students who, on the basis that they have notionally been taught some of a language and can use a dictionary, will attempt to read small chunks of foreign text if I tell them it will help them with an assignment; but I get others who will not, despite notionally being equally qualified. I guess that teachers and an interest in literature, themselves probably related factors, must be the differences I can’t see. What I don’t get, and I don’t know why, is students who will use Google Translate to read something with no other knowledge. To me it seems an obvious resort, and I remember when it was new; I’d have thought it was obvious to the new generations…
But I never know what I’ll get! I run a final-year undergraduate special subject that, because of its Iberian focus, tends to pick up people doing our History & Spanish joint program. The first year it ran, I had one such student—and one student bilingual in English and Greek and two whose second language was Welsh, one pretty fluently. I’ve also had subsidiary-language Arabic and Turkish, and the former of those was most vexing, because it would really have helped except they couldn’t read it, only speak it at a kind of basic madrasa level… Do your Spanish-speakers also read in it fairly comfortably?
The ones who speak fluently because of having parents who do, or spouses or jobs, are often (although not always) the ones who oppose reading. Formal writing or anything written before 1950, maybe 1980, is considered “Old Spanish” and, if it is from Spain, “colonialist” and to be rejected (current things from Spain, though, are cool because Spain post movida and now that it is in the EU is cool, and because it is seen as white and black (that is, good) as opposed to brown and Native American (that is, inferior)).
Good students who have learned in school because they’re interested in learning can, of course, read, but less good students don’t see the point of reading anything beyond signage.
OFF TOPIC: I have also learned that the visually impaired no longer necessarily read Braille. They listen to audio books. Text is just not in fashion, it would seem!
Aha, I see. These attitudes make more sense in a country where Spanish is a live vernacular, I guess; there isn’t really anything in Britain that carries these kinds of resonances, even English in Wales and Scotland. Welsh looked like splitting between native and learned at one point, but my recent experiences in the country suggest that that wound has closed up…