2018 Exam paper

  • Lee Murray
    Participant

    What did you all think of the question paper? I have many issues with it. I’ll just copy and paste my very informal thoughts that I collated as quickly as I could:

    8. “One aspect of consistency”. I don’t know what this means.

    9. “Type of coding”. What does that mean? Does it mean which language?

    14 c. Is the delivery cost 1.50 or 0 or even an error (since the variable being sent to display is ‘deliveryCost’, but the variable being set is ‘delivery’)?
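
    A minimal sketch of what I suspect is going on in 14 c (only the two variable names come from the question; the surrounding logic is my assumption):

    ```python
    # Suspected fault: 'delivery' is assigned, but 'deliveryCost' is displayed.
    deliveryCost = 0            # if the program initialises this first...

    orderTotal = 15.00          # hypothetical order value
    if orderTotal < 20.00:
        delivery = 1.50         # ...but then assigns to 'delivery' instead

    print(deliveryCost)         # displays 0, not 1.50; without the initial
                                # assignment it would raise a NameError instead
    ```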

    15. Weird question. Seems almost like a trick question. Why? Well, because the programmer needs code to repeat until a condition is met.
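
    To illustrate what I assume they were after (my own example, not the question's code): a fixed loop won't do, because the number of repetitions isn't known in advance.

    ```python
    # A conditional loop repeats until its condition is met.
    valid = False
    while not valid:                  # condition re-tested on every pass
        value = int(input("Enter a number from 1 to 10: "))
        valid = 1 <= value <= 10      # the loop exits only once this holds
    ```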

    16. “What would happen to the table…” Nothing really. It’s the records that will be affected.

    17 b. Is this ‘why a compiler vs an interpreter?’ or is it ‘why is it compiled at all?’?

    18 a. I don’t like that this question is based on something that only exists in an appendix. It’s not in the main body of the specification. This is creeping into the area that caused so much trouble before (areas of the course that were only assessed in a single coursework instead of being explicitly stated as an examinable piece of knowledge in the specification). The appendices should not be used to sneak in extra examinable content.

    18 e. Appears to want comma-separated selectors in a single CSS rule (e.g. h1, h2 { color: red; }). This was not clearly identified as an examinable piece of knowledge, so I didn't teach it.

    19 e. The only accepted length of password is 8. Would 8 be considered normal or extreme (given that it's right on the boundary)? Would anything else be considered exceptional, since it would not be accepted? I think this is a poorly thought-out question. Also, the 'program continues' in the 'Expected results' column is confusing. The program would continue in both tests; it's just that the second test would continue by repeating the conditional loop while the first test would exit the conditional loop.
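
    Here's how I read the set-up (the loop and the test values are my assumptions; only the length-8 rule comes from the question):

    ```python
    # Both test passwords make the program 'continue'; they just take
    # different paths out of (or back into) the validation loop.
    def accepted(password):
        return len(password) == 8            # the only accepted length

    for test in ["eightchr", "short"]:       # hypothetical test data
        if accepted(test):
            print(repr(test), "-> exits the validation loop")
        else:
            print(repr(test), "-> loop repeats and asks again")
    ```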

    20 d. “Describe how to evaluate the accuracy of the expected output…” is this as simple as saying ‘Figure it out manually and compare it to the results you got’?

    20 e. Referential integrity is only mentioned once and is in terms of the creation of a database. Nothing about understanding what it is or any issues that may occur without it. No materials to support this either (the sentence is not in the ‘Resources to support the N5 course’ table).

    22 a ii. Calculation of storage requirements is not in the specification and was explicitly stated by Raymond as completely removed because “it’s just maths” (regardless of how trivial it is, it shouldn’t be there).
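
    For anyone who hasn't seen this kind of question before, here is a hypothetical example of the removed calculation (the figures are invented, not taken from 22 a ii):

    ```python
    # Uncompressed bitmap storage = width x height x bit depth.
    width, height = 800, 600        # pixels
    bitDepth = 24                   # bits per pixel

    sizeBits = width * height * bitDepth
    sizeBytes = sizeBits / 8
    sizeKib = sizeBytes / 1024
    print(f"{sizeKib:.2f} KiB")     # 1406.25 KiB
    ```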

    Ronnie Ross
    Participant

    Don't know how to feel about it. Will need to go over it again. The question with the links to IDs within the page got me. I never taught that. Never thought to.

    The accuracy of expected output made no sense to me. I’m not even sure how to guess what it might be looking for.

    Charlie Test
    Participant

    An interesting N5 examination paper. What happened to questions on hardware? Do we need to teach it at all? Rather a pity I’d say.

    James Paterson
    Participant

    I agree with Lee Murray. I thought some of the questions in the paper were ambiguous, and some of the paper simply reflected parts of the specimen paper.

    Colin
    Participant

    I also thought some of the wording was vague or ambiguous. 23ci was a stinker. Did anybody actually teach links to an ID? It seems the pupils were set up to fail on that one. What is the actual answer?

    Willie Richardson
    Participant

    Not enchanted with that paper in the least. Despite having covered most topics comprehensively, I am at a loss as to how we could have covered much of that content, particularly having wasted so much time on theory that was then discarded, such as hardware, which we spent weeks on…

    Emma Maley
    Participant

    23ci is an anchor to a bookmark on the current page (for example, a link written as <a href="#top"> that jumps to the element with id="top"), which is stated in the course spec. I probably didn't explicitly cover it with my class but it was definitely in our notes (not sure how many of them will be able to answer it correctly though).

    I didn’t like 17aii – what are they supposed to use in their condition? Are they checking the array of booleans against an array of correct answers to see if they match? It’s not obvious from the question (or to me at least).
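
    For what it's worth, this is the reading I would guess at (entirely my assumption, not the official answer):

    ```python
    # Compare the pupil's boolean answers against an array of correct answers.
    correct = [True, False, True, True]    # hypothetical
    answers = [True, False, False, True]   # hypothetical

    match = True
    for i in range(len(correct)):
        if answers[i] != correct[i]:
            match = False                  # any mismatch fails the check

    print(match)
    ```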

    I thought 6 marks for the ER diagram was really generous, as was the 5 marks for the query design question (20 c).

    a1an.robertson
    Participant

    Q14a had me scratching my head. I think the problem was that I didn't know explicitly what the code was meant to do, so I couldn't figure out where the error was. I kept looking at the condition and didn't immediately notice the inconsistent variable name. Tricky (for me).

    Q15 Should this be a 2-marker? 1 mark for the idea of repeating code; 1 mark for explaining that a condition is used to control the loop. It seems a shame not to give partial credit when there are 110 marks available.

    Q17b Will be interesting to see how this is marked. Would "to turn it into binary so the processor can execute it" be accepted, or are they looking for a comparison with an interpreter?

    18a Missed this in the appendix, so did not teach it. 🙁 The fact that hierarchical was explicitly stated in the old CAS but not in the current one threw me.

    20e “Referential Integrity”. Taught it in terms of its implementation within a database, but did not formally explain the concept, as I didn't interpret the CAS that way. At least this year I will have time to rectify this.
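
    For anyone in the same boat, the concept boils down to something like this (a sketch using SQLite; the table and column names are invented, not from the exam question):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")   # SQLite only enforces FKs on request
    con.execute("CREATE TABLE pupil (pupilID INTEGER PRIMARY KEY, name TEXT)")
    con.execute("""CREATE TABLE loan (
                       loanID  INTEGER PRIMARY KEY,
                       pupilID INTEGER REFERENCES pupil(pupilID))""")

    con.execute("INSERT INTO pupil VALUES (1, 'Ash')")
    con.execute("INSERT INTO loan VALUES (10, 1)")       # fine: pupil 1 exists

    try:
        con.execute("INSERT INTO loan VALUES (11, 99)")  # no pupil 99 exists
    except sqlite3.IntegrityError as err:
        print("Rejected:", err)   # 'FOREIGN KEY constraint failed'
    ```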

    23ci Interpreted CAS as straightforward tags and internal links to other pages but not bookmark links within a webpage. Nothing in the appendix, specimen assessments, or assignment to suggest it should be taught. Something else to refine for next year.

    It would be really helpful if our subject didn't leave so much scope for nebulosity. I'm sure next year will again throw up some more surprises 🙁

    David Lewis
    Participant

    I totally agree. The Course Specification document should explicitly state what could be asked in the exam, IN DETAIL. The pupils should not be at the mercy of teacher interpretation of what could or could not appear in the exam. Once again we're finding ourselves teaching irrelevant information for fear that it may come up in the exam because it has some tenuous link to a piece of terminology stated in the specifications.

    Enrico Vanni
    Participant

    It's been the case for a while now (and I've mentioned it ad nauseam) that we routinely see our students assessed on their ability to guess the intentions of the assessors rather than to demonstrate their knowledge and understanding: the challenge in the task is to see through the obfuscation rather than to understand and interpret the information given. It was the case in the 2018 practical exam and we are seeing it again in this latest written paper.

    This is coupled with us having to be aware that correct answers may not be given credit because they are not part of the course spec and therefore may not make it into the marking scheme. I would like to know, for example, what would happen if a particularly studious candidate who has extended their knowledge (or happened across some revision material for an earlier iteration of an SQA Computing course, or a current course at a different level for that matter) answered “decrease the sample size” in response to question 3. If anyone reading this is involved in the marking, I would genuinely like to know if this has been considered and, if so, what the outcome was.

    On a related note, I would like to see the practice of adjusting grade boundaries for these exams banned. I can understand why it was introduced, but it is a very blunt instrument and it appears to be used as a catch-all for other issues, such as the poor quality control of some of the questions. Not having this option would force the people involved to tighten up on the above-mentioned issues.

    Alan Wallace
    Participant

    I applauded myself for correctly predicting that referential integrity would be the specific subject of a question. The applauding was because, from reading the CAS, it definitely did not seem to stand on its own as something the kids should be able to describe – but we taught it anyway.

    As for bookmarks? Yes the anchor tag is involved but I focussed on internal and external hyperlinks and did not teach links to IDs. I’m fairly certain those involved in writing the question knew this wouldn’t have been taught by most schools so I can only wonder what the motivation was in including it without guidance.

    Lee Murray
    Participant

    @davidlewis regarding the specification explicitly stating what could be in the exam in detail, this is what we’ve been asking for for years. I remember a couple of years ago when a new specification was released with lots of added content; we complained and the SQA responded by saying “I thought you wanted more detail!”.

    They don’t seem to know the difference between detail and volume of content. It seems that the detail has been removed from some other subjects too, such as Physics, where there used to be a huge amount of detail and explanation given in the specification, but now it’s much more general terms.

    My suspicion is that the ‘specifications’ are deliberately vague to allow exam writers to set any question that takes their fancy. If anyone questions the exam, they can point to one of the many vague statements in the specification and say “See, it’s right there where it says ‘CSS selectors’!”.

    It was the same with the old coursework, when teachers were marking it. The tasks and 'marking instructions' were so open to interpretation that it was a bloody nightmare trying to figure out what marks any pupil deserved. Now that the coursework is being marked by the SQA, they have created a MUCH more stringent coursework and marking scheme (going by the specimen; I haven't seen the marking scheme for the real coursework). Call me cynical, but it looks like they are just making things as easy as possible for themselves and not really giving any craps about pupils or teachers.

    This exam looks like it was quickly knocked up by a teacher creating a bank of homework questions, where the questions and answers don’t really matter as long as it’s getting the pupils thinking about the right general area of study. It isn’t exactly in line with the specification, but it’s only homework so who cares?

    chalove
    Keymaster

    Just to be clear, from someone who has been at the hard end of writing these courses and dealing with the detail/volume considerations: no-one ever sets out to make this process more difficult. Everyone involved in the course specification and exam writing is a teacher with many years' experience. Nothing is left “deliberately vague”, and there is a process for exam writing in which colleagues write questions, then gather together to consider, on balance, what makes the best paper. If candidates give answers which are valid but not in the course spec, they will be given full credit for them; I've sat at the table in the past when this has been done. No-one seeks to disadvantage candidates.

    As for the assessment of coursework, it will be more focused because the task is more tightly defined. I've always supported external marking of coursework because of the workload issues it previously caused. The people writing these assessments are OUR colleagues, and I've never met one who doesn't care about young people or about the fellow teachers delivering our courses.

    I’m all for constructive feedback. The feedback about individual questions above is really helpful and I’m sure SQA folks (who are on this forum) will pick up on those points for the future.

    Perhaps a number of posters above could have another read of their more forceful comments and consider what they look like to those involved in the process, or to teachers who are new to the profession?

    Lee Murray
    Participant

    I think you should trust that people have thicker skin and that opinions should not be stifled just because you don’t like them. You cannot possibly support what has happened over the past couple of years in our subject. It genuinely feels like there are people at the SQA deliberately making our lives harder and I’m not going to stop calling it out if it continues to happen.

    Our ‘specifications’ are vague for no apparent reason. While I would agree that people don’t deliberately make it harder for teachers, what I AM saying is that they are deliberately making it easier for themselves. It’s so much easier to write one word and leave it at that than it is to write paragraphs of explanations. BUT… writing the paragraphs would help us tremendously in actually doing our jobs. What is the exact reason for such undeveloped specifications? Don’t tell me it’s because they don’t want to be prescriptive, because an exam IS prescriptive. The specification MUST prescribe the required learning, otherwise the whole thing is pointless.

    The coursework marking may be more focused but not because the assignment is more tightly defined. The new marking could easily have been just as vague and woolly as it always was. Or the previous marking instructions could have been more tightly defined to match the tasks that pupils were asked to carry out, instead of writing extremely vague and general statements that could be applied to any coursework ever created. I’m just telling you my gut feelings about this.

    I’m not going to apologise for speaking the truth regardless of how forceful you think it sounds. I don’t know WHY you’re trying to defend these people (maybe you’re one of them), but claiming that certain posts are forceful and should be re-considered creeps towards censorship of opinions that you don’t agree with. I haven’t said a single thing that wasn’t true, so maybe it’s too close to home for some people’s liking.

    Just to be clear, here are some facts:

    • Our 'specification' is far too vague to be fit for purpose, as evidenced by the number of people who didn't teach 'bookmark' style links (myself included).
    • More content has been added over the years and little to no extra detail has been added. Still as vague as it has ever been (possibly more vague).
    • Every year there are MULTIPLE questions in exams that are dodgy at best. I looked over this one ONCE and spotted 12 sub-par questions. Subsequent comments reveal more.

    I get that the course writers and the exam writers etc. don’t set out to make the job of teachers harder or to disadvantage pupils BUT THAT IS WHAT THEY HAVE DONE. How’s that for forceful?

    DCullen
    Participant

    Charlie. Take the beginning of the new Higher course spec:

    Describe and compare the development methodologies:
    • iterative development process
    • agile methodologies

    A teacher has to plan a lesson (or lessons) on this. To what depth do they go on agile methodologies? I might think: OK, I've never worked in a software development team using agile methodologies, the SQA have not provided any notes, and BBC Bitesize has not been updated yet, but that's fine, I'll just explain both of them in general terms and then compare them. So I do a Google search, find lots of results, read through them, and have to decide what is important.

    But what is important? The spec only says describe and compare. Does that mean an examiner could ask a question like “Walker Industries are developing a new app using the Scrum agile framework. Name and describe another agile framework. (2)”? You might think “no, they will never ask about specific agile frameworks”, but that is what we are up against when planning how to teach the Nat 5 and Higher courses. There is an element of guessing what to teach, and guessing what might be examined, from a short bullet point. One school might guess correctly what is going to be asked in the exam and another might not, disadvantaging their pupils.
