November 12, 2009
Have you ever wondered if you might fail an English test that your students take, even though you are a native speaker of English? And if you did fail, what would that say about the test (assuming that you were giving it your best shot)? It could happen. It has happened...
One of the more memorable ELT presentations I've attended recently was given by Terry Fellner, Associate Professor at Saga University. Terry is not only a native speaker of English (duh!), he's a particularly well-educated and articulate one. But earlier this year Terry, out of curiosity, and as a means of 'testing the test', decided to take on the new Speaking/Writing TOEIC test (also known as 'Walmart').
I won't keep you in suspense here regarding the results. You can probably guess what's coming:
Terry's score was not in the highest percentile but in the second rank, which meant that Terry was judged not to be a proficient speaker/writer of his native language. Among the weaknesses cited were:
- errors when using complex grammar
- imprecise use of vocabulary
- minor difficulties with pronunciation, intonation, or hesitancy
In none of these categories (or rubrics, if you will) did the highly articulate Terry Fellner deliberately or willfully fall short.
Terry was also judged to have some problems regarding the relevancy of his responses (readers should note that the test was all done online in real-time but obviously in a depersonalized manner). And this is where it gets interesting.
Terry decided to test the pragmatic preconceptions of the test by giving slightly unexpected responses but responses that were nonetheless, given the questions and tasks, logical, orderly, comprehensive and, of course, expressed with fluency. Let's take a look at some of these...
1. Terry was asked to describe a photograph. What he chose to do, though, was not to start with an explanation of the foreground image (apparently a cart and horse) but rather to focus upon the surrounding qualities of the picture: the weather, the background scenery etc. In one sense his choice might seem to be facetious, deliberately subverting the evaluator's expectations, but why should examinees be expected to conform to certain narrow Western notions of centrality or importance on what is purportedly a test of INTERNATIONAL communication?
Not only is it considered by many to be a cultural trait to focus on background and surroundings before articulating the 'center', but some personalities may also have this attribute (a skirt-chasing friend long ago displayed an incredible ability to spot, focus upon, and remember any 'hot babe' at locales such as The Parthenon or Notre Dame cathedral while forgetting what city he was in). A better example may be to look at a classical Chinese landscape painting. While there may be a hermit/poet scrawled in a lower corner somewhere, this is not what the painting is 'about'. More central is the background- the atmosphere of the mountains, the textures and shapes that surround the 'subject'. Notions of background and foreground are blurred, mixed.
This can happen with music too. In many non-Western music styles the 'melody' is not the foreground or center; rather it is tonality, timbre, texture, polyrhythm etc. Fans of modern classical music and most modern jazz, which tend to incorporate non-Western tonalities, will also be familiar with this aesthetic.
In other words, the test assumed a Eurocentric model of both viewing and description- again for a test of INTERNATIONAL communication. But wait, there's more...
For one speaking task, which required Terry to propose a solution to the problem of high office expenses, he suggested the use of clay tablets to replace computers and paper. This he supported logically and consistently (in keeping with the demands of the task), arguing that clay tablets were 'proven technology', made from cheap, easily accessible, and environmentally friendly materials, and that they could even recoup costs by being resold as housing material.
Maybe not what the test evaluators were expecting, but still expressed with relevance, logical consistency, sufficient support and, of course, fluency.
Terry also assumed a very familiar stance with his superiors in the task, going so far as to apologize for his lateness as being the result of a hangover. Here he was testing the TOEIC's notion of appropriacy- what kind of appropriacy? Whose standard?
Similarly, in a writing task in which he was required to respond to an email from a real estate agent with two requests and a question, Terry complied by not only thanking her for the email but also expressing surprise that she was now out of jail. He also requested a location that would be near, among other things, a nunnery and a German bakery. His request was to pass along a hefty 'gift' to a police sergeant while asking how the business license was coming.
Socially inappropriate? In some (but certainly not all) cases. Does his response display a lack of knowledge or understanding of English discourse? Not at all. Was it unconnected to the demands of the task? No. Was it expressed in an intelligible manner? Certainly. So where did Terry go 'wrong' on the test? What was the scoring rubric? And was it geared towards certain localized 'norms' that do not reflect a flexible, or international, standard? Seems likely.
It goes on...
In an essay writing task in which he had to explain the qualities that Customer Service Representatives need to be successful, he responded by expounding upon the ability to project false sincerity and the willingness to work for a low salary. Again, he met the demands of the question and utilized his English skills, but not exactly in a way that the evaluators would be looking for. Perhaps, then, the place where they are looking is too narrow. Too pragmatically focused upon a North American model. Culturally loaded. Morally loaded too.
Apparently, a large number of TOEIC test takers end up being placed in the same percentile where Terry was rated. So, if a native speaker ends up there, what does this say about the accuracy and relevance of the scoring? A huge variety of skill levels seem to converge onto this one evaluation slot. It becomes rather meaningless.
And what about the feedback he received? Doesn't it sound a little like an o-mikuji bought at your local shrine at New Year's, where one's allotment of good luck or bad luck is already printed on the paper, prefabricated 'fortunes' completely independent of the individual actually buying them?
Now this isn't meant to rag on the TOEIC people. I know how hard it is to make a comprehensive test that is completely valid (it tests what it claims to be testing) and reliable (the result will be unaffected by happenstance). In many ways the TOEIC is admirable and comprehensive, but it is very, very far from being foolproof- especially the new Speaking/Writing version. I'll go even further. It is still very far away from being an accurate measure of a student's ability to speak or write English for international communication.
Recently, many universities have been getting all hot 'n sweaty about the alleged 'objective' value of TOEIC scores, which are supposed to represent an evaluation better than that of a trained in-house English teacher. Terry Fellner has shown, though, that this is still an illusion. Universities that think a TOEIC orientation should replace normal communicative English learning had better think twice.
December 17, 2009
One of the more persistent and widespread beliefs about Japanese universities is that all students pass their classes as a matter of course. Students who sleep or don't hand in any work are still given the green light to pass through the system. Apparently, administrative pressure and/or teacher apathy are the root causes. Hmmm.
I say this with some hesitancy because I haven't met any teachers who actually admit to being in this situation so, while I'm certainly not saying that it doesn't happen, the extent of the behavior might well be overstated- something of an educational urban legend. In this way, it's similar to the widespread NJ notion that Japanese English teachers primarily teach grammar-translation lessons (which I've blogged about previously, and with the same caveat that I've not actually met any Japanese teachers who admit to doing so). In short, it seems to be only second-hand 'common knowledge'. Most university teachers I've met have shown an almost defiant willingness to fail the laggards.
Now please realize I'm not talking about high schools here. I have heard regularly from very trustworthy sources that auto-passing is indeed a common practice in high schools. To some extent, this is understandable. If high schools fail students it looks as if they have failed to motivate or educate them properly (putting emphasis here on the phrase 'looks as if'). After all, student stewardship is a big part of a high-school teacher's role. This will therefore look bad on their records and any stats or data used to woo the public for recruiting purposes- which is, of course, a special concern for private high schools in particular. So, in order not to give off the appearance of creating 'failures' high school grades or standards might well be gerrymandered.
But universities? First, universities have almost nothing to gain from automatically passing students. After all, public perception of quality is based primarily upon entry standards. The fact that a student may take six years to do four years' work is unlikely to enter any meaningful record that would influence public perception of the institution (and it might even enhance the university's reputation for being tough).
Not only that, but having students do an extra year or two means more revenue- not a small concern these days. And then there are the professors themselves- they will not in any way damage their standing or reputations by failing students. There is also no 'teacher's room' or all-uni meetings where pressure to pass students (for what purpose, I do not know) would be applied. And office administrators do not and cannot lord it over professors on such matters.
Most university professors I've met in Japan (both J and NJ) are in fact quite at home with the idea of failing students who do not meet expectations. It's no skin off their noses (although the big disadvantage may be that the laggards might be back in your class next year). At the university level, it is understood that professors are no longer responsible for motivating these young adults (it's university after all) and therefore generally do not feel that they have been derelict in their duties should a student get a failing grade.
Personally, I have never felt any pressure whatsoever here at Miyazaki University to automatically pass students. In fact, when some dicey pass/fail situations have come into play in the past administrators have been more than supportive of the failing option. I teach part-time at a nearby liberal arts university as well and they too have a similar policy (with the exception of soon-to-graduate students who have already secured jobs).
In the MU faculty of medicine (my home base) we have a year-fail ratio of about 15-20%. By 'year-fail' I mean that students fail three courses within a certain year and thereby have to repeat that year (although they will be obliged to take only the classes they failed, plus electives). Moreover, in their first two years, if a student fails ANY required course (and Communication English is numbered among these) they will be duly dropped a year (this can be traumatic for many students, as they tend to build quite strong bonds with year-mates). Over six years in this medical school about 90% of students will fail some individual class at some time. I fail a few each year myself. I allow that this should be the norm when you are educating future doctors. Medicine, of all faculties, should not be a walk-through.
So how do students fail? Well, attendance policies for one thing. More than three non-medical absences means an automatic zero. A total score of under 60% is the other criterion. No one in the administration will question how or why a student got under 60% (the professor's word is all that matters- it is unthinkable that any administrators, aside from the head professor's committee- the Kyouju kai, would interfere in this process).
There is a small catch though- and a good one, I think. When preliminary grades are entered into the system, those with a grade of 30-59% must be offered a chance at some type of re-test (in the case of incorrigibly bad students, a 29% score will conveniently offer no further re-testing opportunities). On the whole though, re-tests are a good thing. After all, the idea of education is to help the student learn the skill, complete the tasks, and master the knowledge, and if that means they get their asses in gear a little late- well, at least they will have fulfilled the basic requirements. (Of course, if the re-test consists of little more than a pithy 'write a report', the re-testing system is meaningless.)
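For what it's worth, the pass/fail/re-test rules described above are mechanical enough to sketch in a few lines of code. This is only an illustration- the function name, the return labels, and the exact cut-offs are my own shorthand for the rules as I've described them, not anything the university actually runs:

```python
def grade_outcome(score, absences):
    """Illustrative sketch of the grading rules described above."""
    if absences > 3:              # more than three non-medical absences
        return "automatic zero"
    if score >= 60:               # a total score of 60% or above passes
        return "pass"
    if score >= 30:               # preliminary grades of 30-59% earn a re-test
        return "re-test offered"
    return "fail, no re-test"     # 29% or below: no further opportunities
```

So a student sitting on 45% with decent attendance gets a shot at a re-test, while the incorrigible 29% case does not.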
And here's where testing, content, and methodology come into play. If a student sleeps through all the classes, contributes nothing, and studies nothing, there should be no way that they can achieve the necessary 60%, even with a re-test. This is not so much a moral policy as a logical one. What I mean is that the course should NOT be measured only by a singular final test based on discrete knowledge (akin, in many ways, to some entrance exams). Since education (especially at the tertiary level) should be a process- a process that involves carrying out tasks and developing specialized skills- students should be graded on the completion of these tasks and skill areas; things that are learned and practiced only in that class and cannot possibly be attained by last-minute cramming of the textbook.
In other words, a returnee student who does nothing but easily fill in a discrete-point English test form at the end of the semester would end up getting a passing 60% for doing nothing. This would indicate that there is something wrong with the class content, methodology and grading policy (pretty much the three strikes as to what constitutes a good class). In my 1st year English Communication classes I can categorically state that it would be impossible for such a student to get 60% because the medical discourse and related skills I teach- and they subsequently practice in process-based tasks- are NOT something they will have encountered in high school or by living/studying abroad.
As for sleeping students, that is a matter of the individual professor's responsibility and/or policy. I keep mine awake because the classes are task-based, not receptive 'lectures'. Pair and groupwork forces them into action. If they did sleep for any length of time, they simply would not know what to do and this would lead to- at the very least- two or three nasty re-tests. The students learn this very quickly (sometimes the hard way) and therefore avoid both lazy absences and sleeping.
Teachers who measure the course with a single year- (or semester-) ending test will likely not have this luxury. Students will know (from their seniors) that all they have to do is get the basic attendance, study the textbook just before the big exam, and focus on a few points that will be tested (all university students can get hold of old exams). Basically this not only serves as a recipe for sloppy student attitudes but is pretty much a blueprint for meaningless education. If teachers prepare tests/grades this way they are basically shooting themselves in the foot. (Again, I don't know of anyone who actually admits to doing this.)
But, if passing is contingent upon actively participating in class-related tasks, learning something new and unique to the particular class, or manifesting a new skill (or best, all three of the above) then students will involve themselves accordingly. Not only that, but professors will feel that this makes their classes meaningful, that they are involved in the process of education, and not merely 'completing a course'.
In which case passing actually means something; and failing is a real option.
January 21, 2010
The Center Shiken (National University Entrance Exam) took place a week back and I'm sure many readers were involved at some level, most likely by proctoring. And if you were proctoring, (even if you were a back-up proctor, yes, there are benchwarmers in Japan's Center Shiken proctoring world) you will know the intricate protocols, steps, conditions, and general hoop jumping that is involved in what many might mistakenly think of as an easy process.
The key notion is of course that the Center Shiken must be fair and fully objective. That's why it is held nationwide with the same subjects being tested at the same time in over a thousand locales Japan-wide with over 500,000 students taking part. In order to maintain this integrity the surrounding system has to be airtight. Details are meticulous and must be adhered to under threat of your photo appearing in newspapers regarding a breach of Center Shiken protocol. No compromises. Nothing slipshod is allowed.
Lengthy protocol explanation sessions, complete with instructional CD-ROMs, are prepared for proctors. The instruction booklet is the size of a small telephone book and, as far as I can read, contains provisions regarding appropriate actions to take if an examinee freaks out, becomes physically ill, if an alien lands in the testing room, and if an examinee suddenly morphs into The Dave Clark Five.
You know, the Japanese are generally very good with this type of thing. One old-school generalization about Japan that I hold on to is that the country is pretty risk-averse, and great lengths will be taken to ensure that there are no 'misses' ('miss' being the standard abbreviation for 'mistake', and the default term used in Japanese). If you've ever been involved in, or merely watched, a kindergarten or elementary school undo-kai (sports day) you can see the meticulous, orderly planning manifested in a seamless- but somewhat tense and regimented- performance. (Whether people actually ENJOY it is another matter.)
The thing is, though, the more you try to avoid 'misses' by fine-tuning, tightening the screws, or devising manuals that try to cover every contingency, the more likely it is that a 'miss' will occur- precisely because you've created a huge checklist of protocols that could now go wrong. As analogies, think of pure-bred dogs and how finicky they are. Think of the guy (it's almost always a guy) who tweaks his computer to a T but whose machine always malfunctions when any new software is introduced. Think of body builders, where each muscle teeters on the brink of both 'perfection' and complete physical breakdown. The fact is, the tighter you build the foundation, and the more pieces you use, the greater the likelihood that one piece will falter and lead the whole thing to collapse.
Hence, the near fetishistic emphasis upon 'miss' avoidance can actually induce scenarios where more misses are likely to occur. At the Center Shiken we proctors were quite tense, with almost every second accounted for and formally backed up in some way, making sure that the myriad steps were taken in precise order, with military obedience to the manual. This meant that we had to act with speed and efficiency, but it also meant that any screw-ups would lead to delays or claims from examinees of some breach of norm. And the more nervous, cluttered, and time-constrained you are, the more likely a 'miss' will occur. (There was also a ubiquitous stretcher placed outside the examination area, as if to underscore the severity of it all.)
Now, here's the twist.
A miss in the test administering protocol is considered a huge black mark. Therefore, about 95% of the pre-test information sessions and meetings focus upon the avoidance of a 'miss'. But, as an English teacher, I am more concerned about 'misses' at the larger level. Let me explain.
At the orientation sessions for teachers making the second-stage university entrance exams (NOT the Center Shiken orientation sessions) the overwhelming emphasis is also placed upon not having any 'misses' in the test. There is, in my opinion, too little emphasis placed upon producing a test that is valid and reliable. In other words, the overriding rubric is negative: "Don't have any mistakes on the test. That's all we ask". The endless fix-up and follow-up sessions are designed to make sure that no misses get through.
A big, get-called-before-a-committee mistake would be something like the following:
Match the four paraphrased sentences below (a, b, d, e) with the underlined sentences (1, 2, 3, 4) in the passage.
Although the lack of a 'c' answer should not really confuse students or cause them to answer incorrectly, this would be a huge black mark for the test makers.
Anyway, administrators usually want 'objective' style tests because objectivity, it is believed, reduces the likelihood of mistakes. So, in order to meet the heavy 'no-miss' criterion you could make discrete English language test questions like the following:
1. The Montreal Canadiens last won the Stanley Cup in [ ].
2. Hitler's [ ] regime led to the restructuring of Europe's political boundaries.
As you will see, there are officially NO misses in the above questions. But they are clearly absolutely crap questions for an English test. (To make a point, I've exaggerated the samples- I can't imagine any exam actually using such questions, although some came close in the not-too-distant past.)
The first question does not measure English skill in any way but rather tests localized knowledge which happens to be presented in English. And even if this were accompanied by a passage containing the answer (c), it still would not be indicative of English skill, especially in terms of measuring suitability for university entrance. Also, if the answer was contained in the passage, 99.9% of the examinees would get it correct, which renders the stratifying force of the question meaningless. So, while there are technically no 'misses' in the question, it is nonetheless both invalid (it doesn't measure what an English entrance exam is supposed to be measuring) and unreliable (it's either too hard, based on chance specialist knowledge, or- if the answer is in the passage- too easy) and thus cannot have any stratifying function for placing examinees.
But it IS 'objective'. It contains no 'misses'. Also, the answers can be immediately measured numerically: 2 out of 2. Administrators love this type of thing and consider it somehow more 'objective' because the results can easily be rendered as numbers- even though these numbers basically indicate NOTHING about actual English ability. "Hey, if it's mathematical it must be objective!"
In the second example, the vocabulary choices are obviously way over the students' heads which means that if the correct answer is chosen it will almost certainly be chosen randomly (and of course a trained chimpanzee has a 25% chance of getting the correct answer on a 4-item multiple choice question).
Hey, but it is still 'objective' and contains no 'misses'--- despite the fact that it is thoroughly invalid and unreliable.
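The chimpanzee's odds above are easy to put in numbers. A minimal sketch (the function name and the 40-question example are mine, purely for illustration):

```python
def expected_random_score(num_questions, options=4):
    """Expected number of 'correct' answers from pure random guessing:
    each item contributes 1/options on average."""
    return num_questions / options

# On a 40-question, four-option kigou test, blind guessing yields
# about 10 'correct' answers- none of which reflect English ability.
```

Which is exactly why a score on such a test can be 'objective' and numerical while still telling you nothing.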
OK- I can't imagine any university entrance exam test maker making such egregious errors (in fact, in my research I have found that many second-stage entrance exams and recent Center Shiken are quite valid and reliable). But the point is that an inordinate focus upon avoiding misses and maintaining this surface, shallow notion of objectivity can obscure the bigger picture- that of making valid and reliable tests that accurately or reasonably measure a wide range of student English skills.
Questions that demand deep thinking or skills such as making inferences, reading between the lines, predicting, summarizing and so on tend to be both more complex and nebulous than simple kigou questions (so-called because they can be answered with a letter mark- a, b, c, d). This complexity or lack of clarity can often lead to what overseeing committees think of as 'misses'. Overseeing committees don't like the alleged 'subjectivity' or interpretive element that such questions demand. Hence the safety factor in making more discrete, TOEIC-type questions.
I find this fear of alleged subjectivity odd. After all, as trained professionals it is precisely we who should be expected to be able to discern which students display the greatest ability on a subjective or essay-type question. By taking away the subjective evaluation element from a trained, experienced pro (who is supposed to be an expert in the field- that's why you've hired them to teach at a university), you've basically narrowed the scope of the test. You're no longer measuring extensive English skills but discrete-item knowledge. You're no longer testing English ability but knowledge about English.
Your emphasis on 'no misses' at the expense of greater test validity, combined with an artificial sense of objectivity that in fact often reduces test reliability, means that you've messed up the bigger picture of measuring holistic student English ability.
And that's the biggest 'miss' of all.
A QUICK FUNNY- My all-time greatest classroom mistake
A long time back, when I was new to Japan, I had a small class in which I asked the students to tell me about the Japanese person they admired most. One of the students answered, 'I admire Chiyonofuji'. At that time I had no idea who Chiyonofuji was, so I asked. "He is a small restaurant," came the reply. "No, no," I responded. "He OWNS a small restaurant or he runs a small restaurant. Not 'He IS a small restaurant'." The student looked both frustrated and amused. "But he IS a small restaurant," he insisted. A few seconds later another student spoke up. "Chiyonofuji is a sumo wrestler," he explained.
But come to think of it, some sumo wrestlers are actually like small restaurants.
March 03, 2010
If you work at a JHS, HS, college, senmon gakko, or university in Japan you have probably just completed several year-end or semester-end achievement tests. After all, you need grades for your students, so some kind of evaluation is required. But this is an area in which a lot of mistakes are made, a lot of educational principles violated...
I'd like to think that testing is something I know a little about, an area that I've become at least a little sophisticated with. It was one of my specializations during my MA days as well as one of those areas in which I've kept up the research level, so I'm hoping that a few of the things I mention below might carry some weight above and beyond the 'some guy on the internet' level of credibility.
Achievement tests are not placement tests nor, usually, are they proficiency tests.
In an achievement test you are evaluating the students' course work. That means the focus of test content must be upon what students have, or were supposed to have, covered in the course. This means that any content that was not dealt with in the course should not be part of the test. It means that the skill emphasis should match the skills that you were trying to teach in your class. Test tasks should resemble those tasks which were practiced during the course. You are not gauging the students' overall English ability or general skill- which would be more representative of a placement or proficiency test- so don't try to. The test should measure a student's ability to meet the specific course goals as set out in the syllabus.
If you are an educator the test should have an educational function.
It should have a pedagogical purpose as well as an evaluative function. Students should be learning from their tests. This means that students must know what they did right, what they did wrong and be given a chance to fix it. In other words a good achievement test has a diagnostic function. This has several administrative implications:
1. You must give the test back to the students. It belongs to them.
2. There must be some type of review or feedback for the students.
3. You shouldn't give the test in the final class or else you can't review it.
4. Students should be able to find out what the correct or model answers are.
5. Students who did poorly should be made to do a re-test, or two, until they show that they have learned the material (or skill).
6. Why not have students obtain good or correct answers on those sections where they did poorly by checking with peers? I do a 'test interview' where students ask one another those questions they didn't answer correctly and if the partner knows the proper answer, they can teach (not just 'tell') it to the other student.
You can and should diagnose your own teaching effectiveness from the test results.
If students do poorly on the test, or on specific items on the test, it is very likely because either 1) the question, task, or entire test was invalid (the test didn't actually test what it was supposed to) or unreliable (if a similar test were given to similar students at a different time and place, scores would be very different- meaning that happenstance affected the test results, usually as a result of poor test design), or
2) you didn't teach whatever it is that you were testing well enough.
This should be telling you something. After all, tests test the teacher's effectiveness as well as the students'.
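One concrete way to act on this diagnostic idea is to tally how each question fared across the whole class. A rough sketch- the data, the function name, and the 30% cut-off are all invented for illustration:

```python
def item_facility(responses):
    """responses: one list of 0/1 item scores per student.
    Returns the proportion of students who got each item right."""
    num_students = len(responses)
    num_items = len(responses[0])
    return [sum(student[i] for student in responses) / num_students
            for i in range(num_items)]

# Four students, three items: the second item was missed by almost everyone.
results = [[1, 0, 1], [1, 0, 1], [1, 1, 0], [1, 0, 1]]
facilities = item_facility(results)                    # [1.0, 0.25, 0.75]
suspect = [i for i, f in enumerate(facilities) if f < 0.3]
```

An item nearly everyone missed is a prompt to ask whether the question itself was invalid or unreliable- or whether that point simply wasn't taught well enough.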
You need to test more than just recognition (memory) and discrete-item knowledge.
Memory is a limited skill. Not only that, but memory is not just recognition (the most passive, receptive aspect of memory) but also recall (contextual understanding) and reproduction (application). If you were teaching a class that was expected to focus on developing productive skills but gave a test that measures only memory-recognition, you have an invalid test.
Likewise, language is not just a collection of discrete-item knowledge. It is a dynamic system that involves numerous social and pragmatic considerations. So again, if your class was expected to develop student skills in using English within meaningful and/or practical contexts but you focus mainly (or solely) on discrete items, you will have made an invalid test, since the skills you are supposedly trying to inculcate will have escaped the net of evaluation.
The test can easily be used as a study and/or review experience
Open-book tests are great. Students can once again review material and find those things that the teacher wants them to understand. Open-book test success also relies more on a general comprehensive understanding of a subject as opposed to memorizing discrete items. Of course, given that the test is open-book we should also expect standards to be high. I have come to notice that students who are well-organized and think actively succeed at these tests while the laggards who weren't paying much attention or making much of an effort all year rarely rise above their 'stations'- at least on the first test. This doesn't always happen on discrete-point knowledge-based TOEIC-type tests.
Providing students with the test tasks or questions or old exams in advance (they'll usually get them from their seniors anyway) can help too. By letting students know what to study for, you focus their energies on those things you really want to inculcate and leave less to random chance, circumstance or wasted/misguided student effort.
Ongoing evaluation, especially if you are using a variety of evaluative means and measures, is more effective than the traditional 'one final paper exam' format.
Language learning is a process, and so the evaluation should be process-based, focusing less on the one final 'this-is-your-official-result' mode of testing. Using a variety of testing methods and means allows students who respond differently to different challenges to strut their stuff. Not all 'good' students are sharp at paper tests; some may do much better on a role-play, report, or some type of visual/tactile task. Ideally, by using all test types you can get a panoramic view of their all-round skills, and therefore a more accurate reading of their English abilities (assuming that you are trying to educate them in a holistic way, that is).
Weighting tests is also important. Putting something like 80% on a final test might not be a good indicator of actual student ability over the entire course of the class. Breaking evaluation up into 20% increments allows for more types of evaluation and widens the range of criteria. It also tends to keep students alert and focused.
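The arithmetic behind such weighting is simple enough to sketch. Here is a minimal illustration; the component names, the uniform 20% weights, and the scores are all hypothetical, just to show how the increments combine:

```python
# A sketch of weighted course evaluation, assuming five hypothetical
# components each worth 20% of the final grade (scores on a 0-100 scale).
weights = {
    "role_play": 0.20,
    "written_report": 0.20,
    "open_book_test": 0.20,
    "homework": 0.20,
    "final_task": 0.20,
}

def final_grade(scores):
    """Combine component scores (0-100) into a weighted final grade."""
    # Sanity check: the weight increments must cover the whole grade.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * scores[name] for name in weights)

# Illustrative scores for one student.
scores = {
    "role_play": 85,
    "written_report": 70,
    "open_book_test": 90,
    "homework": 75,
    "final_task": 80,
}
print(final_grade(scores))  # 80.0
```

The point of the even split is visible in the numbers: no single weak (or strong) performance can move the final grade by more than 20 points.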
Let students have some say in the test content
Productive, open-ended tasks are to be encouraged as these allow for some self-expression and variety, letting students use the language while actively thinking and engaging it. Most teachers will tell you that in terms of marking, these tasks and problems are easier to grade- and tend to provide a more comprehensive view of actual student abilities. Even better, allow students to make some tests themselves. This will allow for a good review of content and also show the teacher what students have learned (or not), or feel is important (or not). And what a teacher learns from this can be applied to next year's lesson plans.
I allow my students to appeal their test grades too- as long as they do so in English. If they feel that the grade on a 'subjective' test or item was unfair they have the opportunity to explain to me why their score should be higher, a process which demands that they consider not only the test result and content but also how they will plead their cases in front of me.
Reader suggestions on testing are more than welcome in the comments section.
April 30, 2010
Note to self-
Do something about the following student habits. You see these year after year and at some point you are going to have to address them directly:
1. Those cases when you give the students a homework assignment that includes a few concepts or vocabulary items they are not familiar with. Then, most students come to the next class with it incomplete (or worse, not attempted at all) because they 'didn't know' certain items.
Figure out why this is happening. Is it because they see homework not as preparatory research or study but as some kind of achievement 'test' to be immediately handed in and graded- and therefore if they don't know it, they don't know it?
Teach/tell them that it is common sense for a university student to research that which they don't know. Look it up in a dictionary (duh!). Scan the internet to understand that concept or designation which you find troubling. Or utilize that age-old J university standby- your senpai (senior student)! But do something! Do NOT come to class after a week with that assignment sheet and tell me you 'don't know'!
2. Deal with those situations where students have a guided speaking assignment in English but as soon as they face the slightest bit of communicative adversity in English they switch over to Japanese, negating the primary value of the whole task.
Figure out why it is happening- Is it because the students think the only thing that counts is completing the spoken task and getting the necessary information or whatever from their partners? They seem to be inordinately focused upon the product whereas in second language acquisition going through the process is equally, if not more, important.
Teach/tell them that fighting through areas of communicative adversity (by language negotiation, circumlocutions, alternate strategies or whatever) is an essential part of developing their language skills. After all, if they want to be good tennis players how can they progress if they avoid working on their backhands and instead try to run backwards on every return so that they can utilize the more familiar and comfortable forehand shot? Sure, you might spray a few balls into the bottom of the net as you work on that backhand at first but you'll never be much of a tennis player if you don't confront that weak spot directly. And after a while it should become muscle memory; you'll be on autopilot. So with English. Add that when they are dealing with NJs outside Japan they will not have the luxury of resorting to clarifications with their interlocutors in their mother tongue.
3. Address those tasks where you are prompting students to be productive and creative, allowing for dynamic expansion for the purpose of extended communication, and they come up with little but dull, jejune content which seems to exist more for the purpose of completing the assignment than communicating any content of note (e.g. Getting-to-know-you self-generated questions such as: "Do you like music?" or "How old is your father?"), or imprecise and vague content that does not technically violate grammatical rules but lacks a clear criterion, scope, or category (e.g., from the same activity- "What country do you like?" or "What are you interested in?").
Figure out why it is happening- Are the students more concerned with forming a 'grammatically correct' sentence than with one that is semantically sound, pragmatically normative, or communicatively compelling? This may be a by-product of high school methodology- the notion that grammatical correctness equals correctness in all respects. You're going to have to hammer away at this deeply entrenched falsehood.
Teach/tell them that grammatical correctness alone is often meaningless and that, to be frank, a lack of concern for the content of discourse can be stiflingly boring for all participants. Give them Japanese examples which show this. Strongly express that, as university students, especially given your own classes' discourse-based focus, you (and your grades) are much more concerned with students creating and producing meaningful content.
July 07, 2010
Three mini topics today...
1. Extreme J student nervousness
Today I held some role-play tests for my 1st year general English class (medical) students. These involve 2 students acting as doctors, taking a basic medical history, and putting the information on a chart while I act as the patient. Yes, it is a demanding test as it measures not only lexical and grammatical competence but also: topical knowledge, the ability to think on your feet and improvise, to predict and summarize. It also demands social and interactive skills and organizational skills for completing the medical chart.
I never expect perfection and that's what makes this test a learning experience. Tests should hold pedagogical value, value which is realized through having students face new challenges.
I naturally expect that students will be a bit nervous because this test does place them on-the-spot and, after all, a test is a test is a test. But I am often surprised at just how mindlessly nervous some students can become under pressure- which is not what you want to see in medical students.
Expanding a bit now, I suppose if I were to choose one widespread characteristic of Japan that I find negative it is this overbearing sense of nervousness. I'm sure you know what I mean. That scurrying and near-hyperventilation that accompanies most services and almost any sudden interaction between insiders and outsiders (not just Gaijin but anyone who might be considered non-household or friend). It seems that even the most innocuous situations, such as two housewives with kids at the same day care center meeting suddenly, are punctuated by this display of stress and tension.
Now, I understand that there is a 'cultural' factor involved to some extent here. This formalistic ritual expresses concern in Japan, that one is being attentive and actively involved in the other's sphere. Obsequiousness (is that even a word?) is a type of positive politeness, and a cool, relaxed exterior may be interpreted as a lack of concern for the other, that one is being lackadaisical or slovenly in one's relations. And as a cultural trait that's fine. Service is generally excellent in Japan, albeit over-laboured, and I have rarely met an arrogant or standoffish Japanese person in the service industry as a result.
But when students are taking a test they are not thinking about politeness or carrying out a social ritual. They are not partaking in the rites of 'Japanese culture'. They are all a-flutter merely because they are having a test. As a result one sees:
- students who almost completely lose their voice, on the verge of choking
- students who make a hash of the most basic patterns, the ones they've been absorbing for years
- students constantly breaking the lead on their 'shar-pens' due to excessive nervous force
- students becoming confused to the point of panic when hearing instructions such as, "Write your name on the top line of the chart"
- students writing the first stroke of an alphabet letter four times and erasing it each time for no apparent reason
- students dropping their bags and other goods off the desk after hurriedly placing them half on, half off
- students actively mopping their brows- the only times I ever see them sweating profusely
...this sort of thing. It's just too much. I mean, a certain amount of nervousness can spur one to a better result in many endeavours but too many students I've met here have it to the point of complete debilitation. In fact, you'd think that many would be so used to facing big exams that mine would be a yawner.
Anyway, this has negative applications outside the English classroom. Excessive J nerves when dealing with NJs can be annoying and sour relations. Communication becomes belaboured, artificial and awkward. The upshot of this is that many would rather duck away from an NJ rather than even risk the possibility of interaction (like the person who won't sit next to an NJ on the train out of fear that the NJ might possibly ask them a question in English).
It can come across as standoffish, self-absorbed, and exclusive when there is no such intention. For example, if you look at those (very, very rare) cases in which J business establishments have erected exclusionary signs the explanation/justification is almost always not that the person responsible had a pathological hatred of Gaijin, but rather 'couldn't speak English' or didn't know how to 'deal with foreigners' (Note- I'm not saying that these are legitimate excuses, but they are real). NJs make them nervous---- but as a result of trying to save face they end up coming across to the wider world even worse.
I've also noticed that Japanese people who make a lot of NJ friends tend to be those who are calm, cool, collected, and radiate what I might call that 'surfer bravura'. I find students who are not so tightly wound and wired to be much more pleasant to deal with. And the students who take my role play tests and try to engage me, the patient, with natural warmth and carry out normal interactive skills inevitably end up with higher grades for the test- not directly as a reward for having a desirable personality trait but because such students are more able to think on their feet, to adjust to the flow of the role-play content, and to find a way to circumnavigate tricky grammatical or lexical items.
But the question for you- dear readers- is... how can we reduce this high-tension sweat fest without removing any sense of challenge and authenticity (read: open-ended dynamic language use) from the classroom?
2) Creativity- Thinking inside the box
The theme for this year's national JALT Conference is, "Creativity- Think Outside the Box".
Hmmm. This bothers me for a number of reasons:
1. The term "thinking outside the box" is an old, drab, hackneyed cliche. Surely, if one wishes to address the issue of creativity one could conjure up a more original description?
2. People who like to use the phrase "think outside the box" generally attribute this skill to themselves and deny it to 'society', 'people' and anyone with any power or authority. And personally I've found that the self-platitude is inevitably a mismatch. In short, every mother's son believes that they "think outside the box".
3. This phrase reflects the dubious notion that creativity is indelibly tied with non-conformity or separation from confines, as if only outsider status confers the gift of creativity. To be frank here I find that a rather sophomoric, even naive, understanding of how a creative mind works.
4. People tend to make this claim about their ideological opponents- no matter what the ideology.
5. Real creativity, it seems to me, involves thinking from inside the box. We all live or have to work within box-like confines in one way or another and an undue emphasis on doing something 'different' is not always the most beneficial solution to a problem or the most endearing artistic expression of our lot. Creativity can easily be manifested by dealing with questions such as, "How can I re-arrange the contents of this box in a manner that most benefits myself and the others?" or "What contents of this box have the inherent ability to be manipulated into various shapes and relations- and which combinations will best allow problems to be resolved or truths to be expressed?"
A great deal of twentieth-century art of all types has benefited from looking at the standard box, the detritus of normal life, and finding inspiration in the re-arrangement of the mundane, giving it voice through the commonplace, and ultimately finding creative expression in its repackaging of the banal. Show me that Brillo box again, Andy. I think I see something in it.
Kind of like this mini-treatise on creativity, if you will (wink wink).
3) Self-introductions- Bah!
Why on earth do English teachers in Japan pound the students with practice in giving self-introductions? Useless and boring? Indeed! Let me count the ways...
1. It is not a part of any naturally occurring discourse. I have never in my life as a genuine, red-blooded native speaker of English given a self-introduction. The only time people carry this farce out is in EFL classes.
2. Self-introductions are inevitably boring because no one cares about the details and/or will not be able to remember 90% of what was said two minutes later anyway.
3. They take way too much time and, as such, are just a self-indulgent conceit. I've seen numerous 'International Symposiums' or round circles of some sort held in Japan where you have 15 people performing this pitiful soliloquy for several minutes each before you get to the actual topic of discussion, which by then has been drained of any vitality.
4. Most people say the same thing or the bleeding obvious. For example, a foreign professor is meeting 4th year students at X university and each student duly says: "I am a 4th year student at X university". You don't say now!
5. I know that self-introductions may allow students to learn and practice basic identity statements. But if we want them to do so let's at least place them in the most appropriate discourse package. That is: people reveal relevant self-information when they are asked for it or when the time seems right between interlocutors.
So, if I meet Dr. Y at a post-presentation wine & cheese doodad and start chatting, we may talk about any topic at hand. And at some point I may extend myself by saying, "By the way, I'm Mike". Now if Dr. Y wants to know where I come from, what I do for a living, or what my favourite type of Weisse beer is (Weihenstephan), I will wait until he asks, or there is sufficient reason to mention this. Otherwise I'm just a walking textbook pretending to engage in 'internationalization' by telling others data about myself.
October 26, 2010
Two mini-posts today…
1. Nobel prizes, the office concept, and research in Japan
Much was made in Japan of Prof. Akira Suzuki of Hokkaido Univ. being awarded the 2010 Nobel Prize in Chemistry. There is no doubt that Nobel Prizes provide a boost for national egos, even if the winner is usually more a product of individual genius than a product of that society. Oddly though, when a Japanese academic wins a Nobel prize it is usually accompanied by an equal amount of hand-wringing about shortcomings in the nation's educational and research environments.
I say 'oddly' because you'd think that achieving the ultimate academic recognition would serve as a vindication of an educational system, but not in Japan. One reason is that co-winner Ei-ichi Negishi is based at the U. of Chicago and has been for almost all of his research career (and he is not the first Japanese researcher who has been able to flourish abroad while remaining critical of the research setting in his country of birth).
The criticism is that university research institutes in Japan are static and rigid. That there is a stifling hierarchy which discourages the type of open environment necessary for innovation and success (although I would argue that most countries would like to have Japan’s –ahem- lack of academic/innovative success).
Not working in a research lab I cannot confirm all of this firsthand but the fact that even young Japanese researchers (among them some that I’ve met on my own campus) seem discouraged certainly lends some credence to the notion. But I’d like to raise another factor that inhibits the pursuit of excellence in almost all of Japanese educational institutions but is rarely mentioned as a factor....
OK. When you think of the term “Japanese worker” what comes to mind? The guy in the blue suit who sits at a cubicle (or a shared table) in a company office 8AM-8PM, right? Mr. Salaryman (or Ms. OL in the case of women). This seems to be the set model for ‘working’ in Japan. Therefore, if you are not somehow engaging in office work of some sort you are not really working.
Now you might think that primarily teachers should teach, doctors should treat patients, and researchers should do research, right? And perhaps the occasional bit of paper work might come their way for inputting grades and the like. But not in Japan.
An enormous amount of my working time, concentration, and effort is taken up by requests from various offices in the university. Elaborate questionnaires have to be filled in, meaningless committees have to write vapid reports, databases are changed and have to be re-inputted, the Student Affairs bureau wants you to keep a record of student visits to your office and the purposes thereof- I could go on and on but you get the point. It seems like almost everyday the secretary comes to me with something to fill out, prepare, input, or comment on.
To be perfectly honest, I've come to feel that if I read an academic book on EFL in my office for more than 5 minutes I’m screwing around, indulging in a personal hobby. If I work on an academic paper on my computer I’m somehow cheating the university time-wise. Help! They’ve gotten to me!
I often get the impression that administrative office staff thinks that if we are not on our actual teaching contract hours that we aren't really working and therefore have to fill our idle hands with some nefarious tasks to legitimize receiving our paychecks. And yes, I have heard researchers here claim the same thing- that they are always busy with 'zatsuyo' (paper work) and thus are forced to delay the very research that the 'zatsuyo' is based upon or work until the wee hours. The surrounding, peripheral work has supplanted the real work. It seems that the most important thing is to dance through the hoops created by someone in the office downstairs, not to produce actual research of worth. Your research could be total crap and you'd still be rewarded for it as long as you completed your online 'Research Report- reflective impressions of the allotted travel funds section' correctly. And only in 12 MS font.
As I work next to an attached hospital (plus the fact that my wife is an MD) I know that this afflicts doctors (and nurses) too. Doctors complain of rushing patient visits in order to complete the ever-increasing pre- and post-visit paper requirements demanded by the paper pervert powers in those dusty cubicles.
Maybe this is why research is usually more practical and productive at Japanese companies than at universities. The expectation inside a company seems to be that office workers do office work and the lab people stay in the lab and there are a sufficient number of clerks and secretarial go-betweens to bridge the two. Less so for universities and hospitals. Secretaries and clerks have their roles here to be sure, but the more they do on behalf of the teaching/research staff, the more the bureaus downstairs make up because- well we have to do some real work, right? And real work of course means filling in online forms and shuffling more and more papers…
2. How to avoid a test: An almost true account of where my class apparently ranks in the student life hierarchy
(Setting- My classroom with 32 2nd year English communication students)
Me: OK. Next week we’ll start the role-play tests based on what we’ve been working on over the last five weeks. You’ll be doing the role-play in pairs- 12 minutes per pair. Even numbered students will come next week, odd numbered students the week after.
Students: Ehhhh???!!!
Me: What do you mean, ehhhh???!!! It's a university. We have tests here, right?
Yamada: But we have a test the day right after that in Anatomy! We have to study hard for it!
Me: Perhaps then you should ask the anatomy teacher to postpone his test- because you have an English test the day before and you have to study for that!
Watanabe: But it’s not fair because the students like me who come next week have the anatomy test as well as your test, but the students who come in two weeks don’t!
Sato: But it’s not fair for students like me who come in two weeks either!
Me: Ummm, why not Sato?
Sato: The rugby team is playing a tournament that weekend and we have practices!
Me: You don’t have practices Thursday morning, when our test is held!
Kobayashi: But we’re having a drinking party on Wednesday night to celebrate the tournament.
Me: Now why on earth did you schedule a drinking party on a weeknight?!
Hayashi: Our club seniors decided. So we have to go, and then we won't be able to study for your test. Plus it’ll be hard to get up in the morning for this class!
Me: Well that’s a choice you make. Please your seniors or get a failing grade on the test.
Suzuki: Give the test in three weeks! It’s better!
Yamamoto: No way! In three weeks the orchestra is doing a concert the day after English class and we in the orchestra have to focus on that. I may have to miss English that day anyway to set up seats in the concert hall.
Me: If I listened to you guys we would never have a test at all. Or even classes for that matter.
Setoguchi: Why don’t you do the tests in the final test season, like other teachers?
Me: Because it’s not suited to two weeks of role-play testing AND I can’t give you proper feedback. Plus, we use ongoing evaluation in English class. It's not just a pile of knowledge that we’re testing.
Abe: Yeah, Setoguchi, shut up! If we had the test in the usual testing season we couldn’t study for it anyway because we have three other tests scheduled then. So we wouldn’t be able to study for the English test at all.
Me: All right. I hear you. The only solution it seems is to do the test right here, right now in the next 30 minutes. Take out one pen and one piece of paper everyone. Here we go. This test, or should I say pop quiz, will account for 60 percent of your grade. Good luck!
November 16, 2010
Let's get right into it.
I think that it would be better for English education in this country if it were not included as a core subject on the Center Shiken (hereafter 'CS'). I could possibly accept it being an elective Center Shiken subject. And I have no qualms with certain universities making it a core subject on their individual second-stage entrance exams- but it's not suited to the CS.
1. It perverts any holistic understanding, acquisition and appreciation of English, and possibly foreign languages as a whole. How?
The Center Shiken is administered to a huge number of students nationwide and demands strict standards for fairness and objectivity as well as allowing for the rapid machine calculation of results. It has to be measurable as a number, with no room for subjective or interpretive judgments. This means that the tasks and questions on the CS will ultimately be multiple choice items. This necessitates a reduction in task/question type and range, meaning that the focus will always be reduced to discrete points. The result is the atomization of the language, in which languages are treated basically as cumulative collections of discrete item knowledge. The backwash on high school pedagogy, although often overstated, is palpable (though I would say that the popular notion that this forces HS teachers to 'teach grammar' is false).
The CS has evolved over the years to try and minimize the former narrow, discrete-point focus but it can never entirely eradicate that focus without compromising the necessary objectivity and calculation speed. This is not a criticism of the CS English test makers- who do quite well within the restraints to capture a more wide-ranging number of skills and abilities- but the nature of the beast ensures that it will always fall short.
2. It is unfair, especially when it carries so much weight.
English could be considered primarily an academic subject, which then demands a calculated academic approach, but I think most would say that English is more fundamentally a skill, and a practical skill at that.
The CS shouldn't be testing skill subjects like this- even if they don't end up testing English 'skills' per se- especially those subjects which are largely non-academic (think of music as an example). Some examinees will, by sole virtue of having lived abroad, be quite competent in English but perhaps not academically suited to university. The current situation favours these students over someone who has simply had fewer social opportunities to engage the language. The student who grew up in L.A. might be less academically skilled than the student who grew up in Tottori, but the Angeleno will almost certainly score higher on the CS. Although we can imagine all subjects containing some built-in advantage for some students (we expect a student whose parents are biology researchers to do better on the science exam) none are determined by experiential happenstance to the degree that English is.
3. Having English employed more as a second-stage (individual university) exam subject will allow for more balanced teaching/learning and skill development.
The number of candidates at the second stage exams is fewer and more manageable from a grading/marking viewpoint. This affects test design and content. Attention can be paid to details of individual examinees by actual humans, humans who are hopefully certified and trained in the subject (absolute objectivity is less rigorously applied at this level, but a wider range of skills can be addressed, making it perhaps a more accurate measure of student English ability, 'objectively' speaking).
This approach, in turn, allows for more tasks that call for insight, analysis, use of cognition- the ability to discuss and elaborate upon content in English- a more holistic approach than multiple-choice or discrete-item approaches could ever allow for. It means that expression in writing, the ability to think in English become apparent, allowing the examiner to get a better read not only upon the student's English skills, but wider academic viability. Even spoken English interviews could be incorporated into the scheme.
I would expect the backwash to infiltrate throughout the education system to be duly positive. This would also have the effect of killing two birds with one stone- meeting the MoE's extant call for an increase in communicative skills while also addressing the need for HS students to prepare for university entrance exams.
4. It makes English more of an optional subject at the JHS/HS level, allowing those who don't feel that it would benefit them much (some kids who will take over Dad's farm in Iwate) to put their emphasis elsewhere, while allowing those who are interested in the subject to develop more holistic, practical, and analytical skills. In short, it would prepare professionals who can actually use the language in discourse, as opposed to the perpetual uniform national "false beginnerhood".
This would further dispel the negative atmosphere associated with many English classes (by teachers and students alike), emptying classes of students who see no value or have no interest in learning English, especially in the atomistic, mechanical way currently employed in many (most?) settings.
5. In education, streamlining is the catalyst for efficiency and higher-quality production. Freed from the drudgery and the mundane, both teachers and students could focus upon more personal and/or extended/extensive avenues of English acquisition, with a focus on the productive as opposed to just the receptive, and upon the cognitive skill of reproduction rather than the lowest cognitive denominator of recognition. Local initiative would increase while the central bureaucracy's role would diminish.
Now, some likely objections:
1. The status of English in the Japanese education system would diminish.
That is, only if 'status' implies core inclusion on the Center Shiken. It is problematic that many people view only the subjects that form the CS core as academically legitimate. In terms of what most people recognize as real academia, the ability to apply abstract knowledge in research, advanced self-expression, or international communication would actually be bolstered.
2. The English study industry would suffer.
Probably. Billions of yen are made purportedly helping students prepare for the CS. Obviously, guides and training materials would still be helpful for English's inclusion on other exams, but the industry would suffer. Even as I write this, some burly men in sunglasses and suits from "Eigo Corp" have entered my room brandishing very heavy dictionaries.
The CS is also a money maker for the MoE and some host institutions but, hey, are we arguing for educational or financial benefits?
3. The number of high school English teachers would decrease. People would lose jobs- including (possibly) some NJ.
The weaker end of the HS English teaching world might suffer- but is it not already argued that too many English teachers are ineffectual anyway? I also understand that NJs are often shunted out of the CS prep process anyway so...
Regardless, this more streamlined approach could even allow for more production-based, learning-centered classes due to decreased student numbers while retaining the same teachers.
What do you think?
*Apologies for typos in the original version- thanks to an impending migraine with zigzagging vision
January 12, 2011
I noticed this item in the Daily Yomiuri on Dec. 30th (2010) about how some high schools are now including questions which allow examinees to express their opinions on entrance exams. I encourage you to read the article. Closely.
At first it is hard to argue with the intent. I have long been an advocate of avoiding discrete-item, passive, receptive test taking as being the sole determiner of entrance scores, since they capture only a small percentage of English skill and ability and, as we all know, tend to have a negative pedagogical washback. And I have long argued that most second-stage university entrance exams in Japan have moved more and more in this direction over the past decade. Essay writing, open ended writing tasks and other productive, active testing modes are now so routine that most high school and juku teachers will address these skills- obviously a good thing.
So, the fact that high schools are starting to take note and apply the same principles to their own entrance exams would seem to be cause for applause. But.. take a closer look at the article.
The main idea of this new approach is that 'independent thinking' should be encouraged and rewarded. Fine. But then in the article's test-item examples we see that 'correct answers' include very specific concepts and content (in the first example, students had to note that mankind had appeared on earth very recently, and in the second the term 'mutual assistance' had to be included in the answer).
So, hold on a second. We are asking for independent thinking, self-expression, and opinions and yet we have these very set, particular correct answers. Isn't this a contradiction?
An official from the Osaka Board of Education quoted in the article says, "These kind of questions test students' ability to choose important information, develop their own opinions and express their views intelligibly", except... the answers must include mention of specific items.
Here's the problem- it is entirely plausible that you could have a student address points raised in the text, write in an orderly and intelligible manner, and express an opinion with merit, and justify it, and still not receive due credit if they haven't made mention of the 'key' concepts.
In other words, if the Osaka official really wants students to choose important information, develop their own opinions, and express their views intelligibly, if these are the criteria, then you have to drop entirely the notion of a correct answer. Instead you have to evaluate essay writing skills: Did the student actually address and understand the text? Was the response stylistically sound in rhetoric, organization, register etc.? Did the student present a meaningful opinion and were they able to justify it?
I think I know why the testmakers still want to maintain the notion of a set answer. For one thing it makes the test papers easier to grade. Look for the keyword and if it appears, credit is given. No keywords = no credit. It also removes the dreaded notion of subjectivity in grading and the related possible charge of bias or imbalance in scoring. But arbitrarily assigning a 'correct' response to what is ostensibly an opinion-based writing task is worse than any aspect of subjective grading, as it renders the test item invalid: you are not grading what the question/task is actually asking.
And what's so bad about subjective grading anyway? Teachers do it on every classroom essay, report, or other assignment that doesn't feature fill-in-the-blanks or multiple choice (kigou) answers. We assume they can do so because they are trained professionals who, like judges, are expected to be specialists in evaluating the skills and abilities of their students. If they have no confidence in doing so on entrance exams, why are they teachers?
There's also a way to create more balance in scoring: Employ two scorers for any open-ended question. Have the skill criteria (general ones, not too detailed) established between the two scorers and then mark separately. If the task is worth 20 points and you give one examinee a 17 and the second scorer gives a 13, you then make the final total for this question a 15. That seems fair.
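The averaging rule above is trivial, but for the record, here it is as a few lines of Python (the function name and the 20-point maximum are my own illustration, not part of any official marking scheme):

```python
def combined_score(score_a: float, score_b: float, max_points: float = 20) -> float:
    """Average two scorers' independent marks for one open-ended item.

    Assumes both scorers marked against the same agreed criteria;
    the range check just guards against data-entry slips.
    """
    for s in (score_a, score_b):
        if not 0 <= s <= max_points:
            raise ValueError("score out of range")
    return (score_a + score_b) / 2

# The example from the text: a 17 and a 13 out of 20 become a 15.
print(combined_score(17, 13))  # 15.0
```

With more than two scorers, the same idea generalizes to a plain mean, which keeps any single marker's leniency or severity from dominating.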
Finally, I have to take issue with the seemingly automatic, but unnecessary, association teachers (both Japanese and native English speakers) make between productive writing/speaking tasks and 'expressing one's opinion'. First, it can be difficult to grade 'opinions' with all its value-laden baggage, but self-expression includes so much more than just giving opinions. Summarizing, narrating, predicting, creative writing, and commentary are all valid and important modes of self-expression that can also be tested. The easy fallback on 'giving your opinion' tasks fosters the unfortunate binary paradigm that if a text is not a cold hard fact it must be an opinion or, if you are not just regurgitating facts you must/should be indulging in expressing your opinion.
Some people just don't have strong opinions on certain topics, especially when an authority figure has chosen the topic. Cultural and even personal factors can come into play here too. Some cultures and some individuals are more indirect, opaque, restrained in their approach to offering opinions. They may not be comfortable with artificially forming a clear opinion in a certain number of words on a topic not of their choice, and yet they may understand the content perfectly well and likewise be adept at self-expression. Not everybody wants to be a Glenn Beck or a Michael Moore, nor should they. Students shouldn't be punished for this.
There is much more to productive, active, intelligence-engaging self-expression tasks than 'giving my opinion' (which seems to me to be a very post-sixties American value), just as there are ways to grade such tasks without resorting to set answers.
January 27, 2011
1. Keio University drops the Center Shiken criteria for entry- Good!
Since it is exam season, and also because the aura surrounding exams is impossible to escape in Japan, I bring your attention to: this recent news item
... which informs us that prestigious Keio Univ. will drop the Center Shiken from its entrance requirements from next year.
This is, in my opinion, a good thing. I can well understand the argument made by Keio officials- that the Center Shiken did not sufficiently stratify student results, at least not enough so as to make it a meaningful or reliable indicator of suitability for entrance.
This is bound to happen of course when over 400,000 people take the exact same test. And at the higher-ranking institutions, entrance or non-entrance can be based upon a minuscule 1 point difference: hardly a reliable basis for determining whether you've got the right students, and definitely less so as a reliable measurement of intelligence or commitment.
If a university decides to use only its own 'niji shiken' (second-stage) test plus an interview as the criteria for entrance (most now apply some weighted combination of the Center Shiken plus their own 'niji'), they can more effectively streamline the procedure and judge students on their individual merit. Moreover, on a test made by Keio people, the element of anonymity would be reduced, making it more relevant to the specific goals or aims of the university.
This is not to say that there is something wrong with the content of the Center Shiken- it is quite well-written and reliable. It is simply the concept, this massive machinated mammoth that defaces the candidates and can make entrance to a specific university and department a matter of a computer spilling out numerical results somewhere in Tokyo.
Just think of the washback effect it would have on high school education if more universities chose to streamline or personalize their exams and bypass the goliath that is the Center Shiken.
2. There is no Monkasho English 'word list'. Sort of.
File this one under 'you learn something new everyday', or at my age, about once every three years.
I had long assumed, and not without good reason, that Monkasho (the Japanese Ministry of Education and A Whole Pile of Other Stuff) had a set list of English words that high school students could/should be expected to 'know' (whatever that may mean) upon graduation and in preparation for entrance exams. I had assumed this until a reader asked me to locate the list- and I couldn't. Then I started asking questions and no one seemed to know for sure- until I contacted a certain Mr. Big (not his real name, in case you were wondering) from a nearby campus.
I had assumed this because senior Japanese people around me had long made mention of a set list of words that were deemed suitable on entrance exams without a gloss. In other words, if 'catapult' or 'solenoid' appeared in your exam text (as they should!), you were pretty much required to mark them with a * and add glosses at the end. Or at least edit them in some way.
So, you might well ask, how did one know if 'catapult' or 'solenoid' were 'off-the-list' words that warranted the gloss treatment? Well, every educator worthy of his/her title in Japan has a large Shogakukan dictionary strategically placed at their right hand side (the 'Progressive' version being the closest to a standard- although Kenkyuusha is also widely used) in which words that are expected to be known at different levels of JHS and HS education were duly marked. No mark meant that we could not reasonably expect examinees to know the word.
Now, you might also well ask how the dictionaries set their asterisk criteria. This is where I had previously assumed that Monkasho had set the standard. After all Monkasho does have a required list which you can see by scrolling around on this page. But, as you will soon note, this is only a short beginner's list. A further careful reading of this Monkasho document reveals the number of words to be incrementally learned at each stage but no actual list of words. Thus, the JHS/HS teacher can use one of the 'marked' dictionaries as a reliable guideline.
But no one seems to know exactly how the compilers of the dictionaries set their standards, although it is widely believed that their choices are based upon the vast (and somewhat secretive, plus hard/expensive to obtain) Tokyo Eigo Kenkyuu (English Research) Corpus. Apparently, most of these marked items make up the bulk of the handiest reference available for such teachers and prospective examinees, this being the JACET 8000, which is available in any bookstore that caters to entrance exams (meaning 99% of all bookstores in Japan).
So now you know. Like I didn't.
Any further insights would be appreciated- and questions welcomed.
September 01, 2011
I often come up with EFL-related items that I want to address in this blog but for many of them I feel that just a few sentences might express all that I want to say. Trying to extract a full article from these snippets would be like drawing blood from a scone. So, in soundbite style, here are ten near-random EFL thoughts that have been camping out in my head recently...
1. Could GPAs motivate?
In most Japanese universities GPAs are a non-factor. As long as you graduate from the program with the university's name on your diploma nobody seems to care too much what your grades were. This seems to be only a minor factor in determining entry for graduate school too.
I teach medical students. Of course, since there is a doctor shortage students can find employment pretty much anywhere (yes, the ones who attend run-of-the-mill med schools can-- and do-- often end up working at the most prestigious university-affiliated hospitals). This means that a GPA has little influence-- it's just picking up the class credit that matters.
But what if the more prestigious companies, employers, and positions in general were reserved for those with the highest GPAs? What if a GPA became the key factor for graduate study? This might well increase the motivation in undergraduate courses. Rather than aiming at the low-bar 60%, more students will aim for the highest scores possible.
Perhaps raising the profile and value of GPAs should be a Monkasho concern. Thoughts?
2. Student writing and the (expletive) enter key
Where in the secondary educational system do students 'learn' that after typing an English sentence the correct thing to do is to hit the enter key? The result is that the attempted paragraph reads more like a poem. What is the source of this behaviour?
A colleague has done some research on the experience of Japanese university students writing extended English using English writing software. Most have never used it and have little understanding of formatting for any English script. They tend to stick with Japanese formats and software or (shudder) even try to compose from cell phones.
Addressing the issue of how to write in English on a computer should be a standard part of orientation, at least in an English department.
3. Sentences, letters, and names- student bafflers
"What's your first name?" "Watanabe" "No, your FIRST name!". Confused looks. What do you mean?
Many students still have trouble with the notion of what a first name is. After all the one said first in Japanese will be the family name (Watanabe in this case), so it's understandable they think of that name as being first. But even if they change their name order for English they often think of "first name" as meaning "primary name" which for them will still be the surname.
Similarly overlooked are the murky translations of the English words "word", "letter", and "sentence". With Kanji a "word" generally equals a "letter", so the two are often indistinct in student minds. Therefore, if you ask students, "What's the fifth word/letter in this word/sentence?" they'll often give you the wrong answer. The Japanese items/concepts "ji", "go", and "kotoba" also fail to match the concepts of word or letter precisely, exacerbating confusion.
Japanese tends to use an all-purpose term, "bunsho" (or some variation of "bun"), to talk about just about any written text. It gets translated as "sentence" in many dictionaries but could just as easily be rendered as "text", "paragraph", "chunk", or "essay" in many cases. The concepts are hard to pin down across languages.
This is another area that could be touched upon in English orientation classes. After all, before they start practicing the mechanics of English sentences and paragraphs students should have a clear mental representation as to what these actually mean.
4. Underrated in EFL teaching (1)- Strategic competence
We've probably all noticed how some students seem to be better English communicators than others despite doing less well on paper (or formal examinations) than their peers. There are some who are simply able to communicate well despite a paucity of grammatical skill or lexical knowledge. They make do with what they have.
These students tend to have good social skills and part of having good social skills is the ability to read the 'other', to negotiate and moderate where necessary. To pitch your communication in any way that allows your point to be made. The ones who do this better in Japanese tend to do it better in English too.
A big chunk of this is what we call strategic competence-- the ability to manage discourse when you are not in full control. This means the ability to manage breakdowns and repair, to ask for clarity or confirmation, to use circumlocutions or general words, gestures or facial expressions, and so on. We all have students who have a wide range of knowledge about English but little or no skills in the way of strategy. Noting how they manage discourse in their first language, let alone in English, might help them climb a few more rungs on the English competency ladder.
This is something that should probably be addressed more in EFL materials and curriculum development.
5. Underrated in EFL teaching (2)- Form vs. forms
This important distinction came to the forefront of the ELT world about twenty years ago and has been a key dichotomy since. Form-- the overall flow and pattern of a language or a text, is distinguished from forms--the individual elements that make up the structure of a language or text. Many teachers, especially those new to the field, tend to conflate the two, assuming that form is nothing but a cumulative set of forms. Therefore, the pedagogy usually goes, if you teach all these specific forms, such as the rules that govern grammar and lists of vocabulary, learners will naturally develop mastery over language form in general.
Except they don't. Those high school textbooks with 6000 sentences displaying endless samples of forms (next: 20 decontextualized, non-extended sentences employing the causative passive) are like a big language net, from which form falls through the mesh. Focusing only on forms is like trying to get children to understand a geopolitical map of the world starting with a street map of Tokyo. The bigger picture that a focus on form creates determines the individual forms that need to be employed. Focusing upon forms alone is like teaching only the notes for playing a musical composition and ignoring the timbre, texture, dynamics, and phrasing: the things that make a piece actually worth listening to.
This should be popping up more in teacher training it seems to me.
6. Underrated in EFL teaching (3)- Presence
I like dogs. So I enjoy watching Cesar Millan, who you may know as National Geographic's 'Dog Whisperer'. The man's ability to calm and gain the respect of even the most aggressive dogs is stupendous. Obviously, I don't have the space to discuss his many techniques here but it is undeniable that when near dogs the man has presence.
Dogs read humans very closely. Friend or foe? Trustworthy or dangerous? Every nuance of human posture is calculated. Is this human in control or is he or she intimidated by me? Every telltale facial tic is processed by the dog. What is the intention of this human? Do I resist, fight, or play along?
Now I don't want readers complaining to me that my students are not dogs, that I shouldn't compare the two, and that our goal as educators is not to tame or control the students. You know that. I know that. But there is nonetheless something similar to be said for a teacher's classroom presence and how much respect they gain from students based upon this presence. The postures, the facial expressions, the choice and delivery of language, the sense of purpose in managing a class: all are aspects of overall presence. Students will start from a position of trust with a teacher who has it. A position of trust creates receptivity for learning. The student will be open to where the teacher is guiding them. But teachers whose presence seems uncertain, betrayed by movements and measures that indicate that they are not in control of themselves, can lose students.
Keep in mind that by presence, I definitely don't mean displaying aggression, using intimidation tactics, or being overly authoritative, flamboyant, or arrogant. Dogs can distinguish aggression from control, bluster from purpose. If dogs can do it, so can students. Overly aggressive teachers can appear to be covering up a weakness- their presence is threatening, not reassuring. Trust is not forthcoming.
Perhaps this is something that warrants more attention in teacher training.
7. A re-test formula that delivered the goods
A re-test for me is never a punishment but rather an opportunity for fixing and revising so that the desired skills or knowledge are finally attained.
But instead of having those students do the same, or a similar, test again (after giving general feedback on common weak points, model answers etc.) as a group I decided this year to have the students who hadn't performed to my satisfaction come to my office individually for 30 minutes to one hour each during the off-season.
They were told to bring along all their semester tests and assignments. Before the meeting they were told to fix, be ready to explain, and most importantly, understand the parts that they had done poorly on. Not only did this allow students to focus upon brushing up the areas they hadn't done well in (which again, is the whole point of education) but in dealing with them one-on-one I could go over in some detail the parts that they found confusing or troubling. They reacted very positively to this personal touch. It allowed me to underscore why certain learning points and skills were valuable for them and also provided me with a clear look as to what students found difficult-- and why.
8. A test idea that delivered the goods
I'm always thinking of ways to make my tests meaningful and pedagogically viable. How can I make a test that both serves as a valid indicator of student performance and helps the students master the content or skills aimed at in the course? This one worked well...
I defined eight skills/learning areas from the class that we had practiced in some detail-- areas of practice and study that contained a holistic emphasis but included new lexis, structure, content, social skills, rhetorical development, critical and creative thinking... the whole shebang. I asked students to create extended examples of each of these.
I gave them the test paper in advance with the eight tasks (I can't really call them questions) written on it. I told them that they would have to do only four of the tasks but that they wouldn't know exactly which four until test day. This meant that they had to prepare for all eight, which forced them to carry out a thorough, fulfilling review of everything we had covered so far. That, of course, was the goal.
For test day, I made all sorts of random combinations of the four assignments (#3,5,6 and 8 for one student, #1,2,4, and 7 for another and so on) such that few students had exactly the same set. The only consideration was to make sure that each task was of the same difficulty so that some students wouldn't have an easier time of it than others. This meant that everything of value in the class had been covered in test prep but the test itself was not quite as heavy-- and easier to mark.
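As a minimal sketch of the lottery described above, the random four-of-eight assignment might look like the following in Python (the student IDs, function name, and seed are my own illustration; note that this sketch does not attempt the difficulty-balancing step, which I did by hand):

```python
import random

def assign_tasks(student_ids, n_tasks=8, n_chosen=4, seed=None):
    """Give each student a random subset of task numbers.

    Returns a dict mapping student ID to a sorted list of
    n_chosen distinct task numbers drawn from 1..n_tasks.
    """
    rng = random.Random(seed)  # seeded for a reproducible draw
    return {sid: sorted(rng.sample(range(1, n_tasks + 1), n_chosen))
            for sid in student_ids}

assignments = assign_tasks(["S01", "S02", "S03"], seed=42)
for sid, tasks in assignments.items():
    print(sid, tasks)
```

With a class of any size, a quick pass over the output is enough to confirm that few students share exactly the same set, which was the point.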
9. Has corpus-based research jumped the shark?
It seems like every EFL researcher and his/her dog is carrying out corpus-based research these days. The majority of presentations I've seen at ELT conferences recently, particularly by Japanese EFL practitioners, are focused upon corpus gathering or interpretation. Yes, I'm guilty: I've done it too.
I can understand the appeal, especially to Japanese researchers whose intuitions about normative English might be flawed (not that NSs are flawless of course). Corpus study can be comfort food, giving them a clearer idea as to what forms are normative. And it meets EFL academia's self-imposed research fetish for allegedly objective, empirical evidence (i.e. reducible to charts or numbers). Concordance as Bible.
But I worry that by focusing so much on the micro-forms (individual tokens or types), the larger question as to macro-form (the defining shape of the communicative event: who is participating, how the exchanges begin and end, what the communicative goals are, how social signals and illocutions are being employed to serve the communicative goal etc.) is being ignored.
Henry Widdowson famously critiqued the hubris regarding the application of corpus research to pedagogy and materials development largely along these same lines. It's true that many current corpus-based studies are well-defined ("We examined the frequency and type of performative verbs used in air controller dialogues...") but I do worry that this is leading to a bottom-up, the-detail-explains-the-bigger-picture approach that might not be the best way of understanding how people construct communication.
10. Handwriting and scoring
OK. I admit it. The quality of student handwriting can influence how I score a paper. Even when the scoring criterion is content and/or form, I have noticed that easy-to-look-at or even elegant penmanship positively influences me more than the scrawls and scribbles reminiscent of an eight-year-old that a few students always display. It's understandable, but if penmanship is not the criterion it shouldn't affect the score at all. Have you noticed the same thing?
Of course, now that I am conscious of it I can deal with it but I have to resist the lure...
Comments are welcome but please remember that these thoughts are outtakes and impressions- not finished philosophical products.
October 08, 2011
Think of all the bad cliches you can think of regarding alleged Anglo-Saxon values (putting aside for a moment the fact that many people wrongly conflate 'Anglo-Saxon' with being white, or even with being Western). You know, the ones about winner-take-all cut-throat capitalism, the need to rationalize everything numerically, the low regard for the emotional welfare of the small fry, and an emphasis upon bottom-line results, all directed with ruthless efficiency.
It's a pretty damning caricature but one, as you will have surely noted if you are well-read or travelled, that is widely believed. I've often been in positions where people have assumed these characteristics must inevitably be ascribed to my good self, being a WASP and all, despite my protestations that these attributes did not in fact reflect my personal values nor the education, formal or otherwise, that I received.
But after reading Paul Stapleton's article in the September/October issue of JALT's 'The Language Teacher' magazine I felt like this caricature had been not only underscored, but justified by being presented as virtuous.
Let me explain by outlining some of the key points made in Stapleton's article (although it is obviously better if you read the link provided above). Stapleton worked for twenty years in a Japanese university but recently left to take a new role in another country (Hong Kong to be exact). Stapleton's article compares the two systems and finds the Japanese lagging on many counts. Although Stapleton is careful to note that his experience cannot be assumed to be representative of Japanese universities as a whole, the conclusions he draws from this personal experience nonetheless are used to critique Japanese universities en masse.
'An atmosphere of mistrust'
For example, Stapleton relates how test grades given by individual teachers at his current (favourable, non-Japanese) institution will be subject to "internal monitoring and external review", and then possibly modified by others to ensure "fair and balanced grading". For me, having my own students'-- my own courses'-- graded assignments reviewed, and possibly changed, by other teachers violates the tenet of academic non-interference and smacks of institutional nannyism. Micro-management of this sort generates an atmosphere of mistrust. What is wrong with the idea that if you hire someone to do a job (such as grading) you assume competency, until some egregious problem raises its head?
Stapleton also explains how teachers at his current institution are ranked (!) based on a cumulative "magic score" garnered from student questionnaires about the teacher. Teachers who receive lower 'rankings' are called to task. He goes on to explain how this "can, and does" lead to non-renewal of contracts. First, the reason why teachers should be ranked against each other is beyond me. Universities are not Billboard charts. Student ratings and comments should primarily exist as a means of feedback for the teacher, with an emphasis upon qualitative commentary as opposed to raw numbers.
Secondly, although Stapleton is aware of the dubious veracity of using student questionnaires as a measure of pedagogical competency, he does not address the likelihood that pandering to students in order to accumulate popularity points will be at odds with his supposed emphasis upon increasing academic rigor and accountability.
Low bar for research
Stapleton also criticizes at length the alleged "low bar" that Japanese universities maintain when evaluating personnel (referring to database scores which are compiled at all national Japanese universities, especially since the advent of the 'houjinka' system, or semi-privatization). He mentions that dubious essays published in non-refereed department journals will suffice as research publications. But he also seems unaware of, or chooses to ignore, two factors that might considerably alter his perspective on this issue.
The first is that national universities rate publications by an established impact factor, so it is not possible for a throwaway piece in the department journal to have the same database value as a full publication in a top-notch publication. The second is that all teachers and researchers on the database can choose a weighting system for their contributions-- that is, researchers can choose to put greater weight on research scores, teachers on teaching roles, or on administrative involvement (which is a large part of a professorial role at national universities). In other words, people with different roles are not constrained by the same rubric, let alone some numerical "bottom line" acting as a cut-off barrier. It may seem fuzzy, but it is more flexible, and thus, I would argue, fairer.
Is the hamster-wheel scenario more humane?
Frankly speaking, it also seems much more humane to me. While Stapleton's faculty would appear to be running on a hamster wheel trying to maintain the bottom line under threat of losing their livelihoods, the "Japanese" system he criticizes recognizes the value of different roles and how individual contributions may not manifest themselves in fat database scores. While deadwood still occupies some Japanese academic offices to be sure, those (full-time faculty) with dubious scores or contributions will have their situations discussed so that all the affective factors can be made known.
While "clear benchmarks" may aid in illuminating expectations, set established minimal "bottom line" scores don't allow for such human variables. To me, Stapleton's approach seems more suited to the sharkpool world of retailing than academia: "Go out and sell a minimum of $50,000 or you'll be out on your ass!"-- Show me the money! I really wonder if this score chasing is really as conducive to raising research standards as Stapleton assumes, since I can easily imagine lower-tier academics focusing more on the tail-chasing act of maintaining numbers than on doing research because they love it or because it is truly beneficial to their teaching area. They produce because they fear the crack of the whip. Is that really a virtuous motivator?
Promotion- age, merit, or other?
And while Stapleton lauds promotion based upon merit (although he appears to conflate this with high database scores) I think he overstates the centrality of age as the determining factor in promotion in Japan. It is most certainly not the determining factor at my own university (although professors anywhere will generally be older because they have stayed in their positions longer, it's not that they originally attained that position solely or even largely because of age).
In fact, the whole notion of 'promotion', in the sense of the business-world model that Stapleton seems to be describing, doesn't really apply to national Japanese universities. Professorial seats, when open, are publicly announced-- and outsiders with excellent academic credentials or current Associate Professors very familiar with the existing system, who have been acting as de facto professors for awhile, tend to gain these seats. Moreover, department heads, deans, and committee leaders rotate regularly, often through internal elections. The need to jockey for position, to scramble, to outpace an opponent, is less pronounced.
A bigger question might be this: Who benefits from Stapleton's system? It is telling that not one of the improvements that Stapleton mentions is connected to pedagogy, education, or improving learning skills. Rather, every one of Stapleton's comparisons is about bureaucratic efficiency, garnering academic brownie points, justifying budgets, and about maintaining control and "accountability" or, as I read it, about keeping people on their toes by making them anxious about the possibility of losing their jobs. There is no reason to believe that students receive better teaching methods or superior curricula due to all the factors cited by Stapleton despite his claim that good students are naturally drawn to such universities, so we can't say that it really seems to benefit the students.
Surely lower-rung academics wouldn't be benefitting from this dance-or-I'll-shoot-at-your-feet scenario either. It seems that those who might benefit most, as is often the case when "accountability", "bottom lines", "meeting numerical standards", and contract renewal are the buzzwords, are the people in power, a group which, perhaps unsurprisingly, at Stapleton's current institution appears to include Paul Stapleton himself!
'To hell in a happi coat'
Unfortunately, the article ends with an old bugaboo or, I might even say, cliche. Stapleton argues that without changes, meaning the adoption of the systematic "rigor" and "efficiency" carried out at the university he now works at, Japanese universities will be marginalized, since they are already "outliers" in terms of accountability; that the negative effects of these qualities rooted in Japanese culture will lead to decline.
The old 'unless Japan changes this society is doomed' (Doomed I tells ya!) slogan is something I have heard on every Japan-related topic over the past twenty years. Yes, there are aspects of Japanese society that, if not addressed quickly and appropriately, could lead to future hardship (i.e., the aging problem), aspects of Japanese culture/tradition whose time has come and gone and now are burdensome anachronisms (the koseki and juuminhyou system), and features Japan would do well to borrow from other countries (traffic roundabouts). But the notion that Japan is headed to hell in a happi coat, a downward spiral into oblivion, unless it adopts Stapleton's preferred model (the superior one apparently held by "developed" countries), just sounds like the same old alarmism.
If this is the future I don't want to be a part of it
If I recall correctly, I met Paul Stapleton once and have also attended one of his presentations. In no way did he come across personally in the same manner as the procedures he advocates do. And although it's true that different systems bring out the best in different people, I wonder if he is aware of how his article might come across, if he is aware of some of the demerits of what he calls 'rigor', 'efficiency', and 'accountability'. For this reader at least, if this is supposed to represent an improvement in academics, education, and societal advancement in general then, sorry, but I don't want to be a part of it.