As part of the Certification Challenge, Veronica Sopher and Mel Batham asked me to write a blog post walking our loyal readers through the different sections of a score report: what each section means, how to interpret your results, how to focus your studies if you fail an exam or simply want to improve, along with answers to common questions and misunderstandings. Of course, I said "yes" (I'm always looking for an excuse to blog and drop some knowledge on my peeps!), but as I thought about it, I realized this was too big for one post. Thus, you're entering my second series of blog posts, which I'll be writing over the next month or so.
Here begins my series on "dissecting score reports." We'll take it section by section, and I'll use this opportunity to (re-)introduce you to some changes that we are making to our score reports.
Full disclosure/fine print: these changes will start appearing in score reports associated with most of our exams over the next 6 months, but they will take some time to roll out across all exams; further, some pieces of the updated score report may not be available for certain exams... more on that as I walk you through each piece. Because upgrade reports are a little different, I'll walk you through those in a separate blog post. Oh, and this is specific to our technical certifications. If you want to spin through MTA or MOS score reports, let me know.
So, let's start at the beginning. At the top of the score report, you will find information about the exam delivery, including exam name and number, your name, registration ID, candidate ID (assigned to you by the delivery provider), test center, exam date, score needed to pass, your score, and your pass/fail status. It looks like this:
Here's what you need to know about this section.
1) On our technical certifications, you always need a score of 700 to pass.
2) If you have issues or concerns about your exam delivery, the registration ID, which is a unique identifier for this specific exam delivery, is critical in our ability to investigate. Make sure that you provide it if you escalate any exam-related issues.
Next, you will see a bar chart that summarizes your performance on each of the major skill areas (i.e., functional groups) on the exam. It looks like this:
This bar chart shows your performance on each section (functional group) on the exam.
How should you interpret this chart?
How should you use this information? Whether you want to improve your skills because you need to retake the exam or you simply want to grow your areas of weakness into strengths, you should focus on the skill areas that represent the highest percentage of exam content and your lowest performance. In this example, I would tell Alan to focus on "Designing Client Configurations" because 25-30% of the exam is focused on the skills identified in this functional group, and this was an area of weak performance in comparison to the other sections of the exam. He should review the Exam Details page for this exam and practice the skills (objectives) listed for this functional group. Doing so will improve his overall performance in this content domain (Configuring Windows 8.1).
The most frequently asked question/escalation that we get related to this section of the score report is: The bars on the score report show that I have scored more than 70%; why didn’t I pass the exam?
Keep in mind that the passing score of 700 is a scaled score and does not mean that you must answer 70 percent of the questions correctly to pass. The actual percentage varies from exam to exam, and for some exams the percentage needed to pass is greater than 70 percent. The passing score is based on input from subject matter experts, the skill level needed to be considered proficient in the content domain, and the difficulty of the questions delivered during the exam. (Want more details? Watch this video.) Further, each section contains a different number of questions. Although it may look like you scored more than 70% on each section, it is possible that you did not, and if a section that makes up a higher percentage of the exam content comes in slightly below 70%, your overall percentage can end up below 70.
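To make that last point concrete, here is a small sketch. The question counts and scores below are made up for illustration and are not real exam data; the point is simply that your overall raw percentage is an average weighted by each section's question count, so bars that all *look* close to 70% can still combine to an overall result under 70% (and remember, the 700 passing score is scaled, not a percentage, anyway).

```python
# Hypothetical illustration only -- these question counts and scores are
# made up, NOT real Microsoft exam data. The overall raw percentage is an
# average weighted by each section's question count.

sections = {
    # section name: (questions in section, answered correctly)
    "Section A": (18, 13),   # 72.2%
    "Section B": (13, 9),    # 69.2% -- reads as roughly 70% on a bar chart
    "Section C": (21, 14),   # 66.7% -- the largest section drags the average down
    "Section D": (14, 10),   # 71.4%
}

total_questions = sum(n for n, _ in sections.values())
total_correct = sum(c for _, c in sections.values())
overall = 100 * total_correct / total_questions

for name, (n, c) in sections.items():
    print(f"{name}: {100 * c / n:.1f}% ({c}/{n} questions)")
print(f"Overall: {overall:.1f}%")  # 69.7% -- under 70 despite near-70 bars
```

Even though two sections are above 70% and a third is a hair under, the largest section pulls the overall raw percentage to 69.7%.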
What other questions do you have about this part of the score report? My next installment will be about a new comparison chart that we're adding to score reports so you can see how your performance compares to your peers. Stay tuned!
This is useful information for those who cannot interpret charts, but most of this post explains what I would consider obvious. The graph is not the issue; it's the weighted scores that make it difficult to determine how well we did, or did not do.
ahhh... another common misperception. Questions are not weighted. They are worth 1 point unless otherwise specified. We have a few questions worth more than 1 point, but those points are assigned based on the number of actions required, partial credit is possible, and we always indicate when a question is scored this way--if it's not stated otherwise, the question is worth 1 point. Because partial credit is possible on items worth multiple points, technically these are not "weighted."
Because each section of the exam contains a different number of items (because that skill area is more or less important than other skill areas, as determined through the blueprinting process), a given section/functional group will comprise a higher or lower percentage of the exam. BUT this is a function of the number of questions asked about that skill area (as indicated by the percentages provided on the bar chart report) and is not a formal weighting process.
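To illustrate the multi-point idea, here is a minimal sketch. This is my assumption about the mechanics for illustration only (the actual scoring rules for multi-point items haven't been published here): one raw point per required action completed correctly, so a three-action item is worth up to 3 raw points with partial credit in between.

```python
# A hypothetical sketch of partial credit on a multi-action item: one raw
# point per required action performed correctly. The action names below are
# invented for illustration; this is NOT a published Microsoft scoring spec.

def score_multi_action_item(required_actions, candidate_actions):
    """Award one raw point per correctly performed required action."""
    return sum(1 for action in required_actions if action in candidate_actions)

required = {"create_vnet", "attach_subnet", "set_dns"}
print(score_multi_action_item(required, {"create_vnet", "set_dns"}))  # 2 of 3 points
print(score_multi_action_item(required, required))                    # full 3 points
```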
It is a good read and will help clients understand their weak points and put in extra effort. The one question I get a lot when people want to understand their scores is that they seem to want the 700 to equate to 70%, and they wonder how 40-55 questions translate into the score they get.
So a question is not assigned a weight, but may still have a weight based on the exam section. This inherited weight is not known by the exam taker.
On the topic of partial credit, I would like an explanation of how that is calculated for the different types of questions.
If you want more information about how we set passing scores, how scores are translated from a "raw" number correct (i.e., 0 to the max number of points possible) to scaled scores (ranging from 0-1000), and why 700 doesn't mean 70%, watch this video: borntolearn.mslearn.net/.../psychomagician-and-super-sigma-bust-the-myth-700-does-not-mean-70.aspx.
We might be getting into a semantic debate about weighting, but questions are not weighted. They are worth one point unless otherwise noted. Different content areas are "weighted" because they have more or fewer questions associated with them (as specified in the percentages provided next to the bars on the chart mentioned in my post).
In terms of scoring questions worth more than 1 point, stay tuned. I'm going to describe that in a future post.
Do questions that state "Select two" or "Select three" award partial credit if some of the selected answers are correct? Are questions where several steps must be placed in the correct order awarded partial credit if all the steps are selected but the order is not exactly what was expected?
So the skill areas are weighted, but the individual questions aren't, if I'm understanding this correctly. If a skill area is worth 10% of the exam, those questions are worth 100/1000 of the exam's points. If I get 10 questions in that area, each is 'technically' worth 10 points, but the individual questions aren't weighted. Assuming I'm getting this correctly, that's about as clear as mud, and this is the first time in 10+ years of taking Microsoft exams I've actually understood that. It needs to be changed to something that is more clear and fair to the test taker.
I bet if I picked out 10 people I know who hold a Microsoft certification, there wouldn't be a single one who could explain MS certification scoring. The fact that people are easily confused or simply don't understand the scoring of Microsoft exams underscores the fact that the exam scoring makes absolutely no sense to the average IT person.
If a question is worth 1 point (or possibly up to say 5), then there would need to be 700+ questions on an exam for a person to score a 700 to pass. I believe the last exam I took had less than 100 questions. The only logical conclusion to me is that the questions are indeed weighted or I couldn't have passed my exam. The Microsoft exam scoring policy would never fly in most schools because it is unclear to the test taker how much a question is actually worth, whether it is weighted, how the final score was generated, etc. Each question should clearly state something like "this question is worth X points out of 1000; partial credit is/isn't available" so that a test taker can clearly know how he/she is being scored.
I finally think I understand what candidates are misunderstanding in terms of weighting and scores. There is a difference between a raw score (the number of points you actually earn on an exam based on the number of questions you answer correctly) and the scaled score that is reported (essentially the raw score converted to a common metric so you can track your performance over time). This is a simple mathematical conversion from a raw score to a scaled score that I will explain in more detail in a future blog because I think an example will help. But each question is still worth 1 point unless otherwise specified, and different sections might carry more or fewer points, making them "weighted" in that a higher proportion of the exam may be devoted to a particular section.
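One way to picture the raw-to-scaled conversion is a piecewise-linear map that pins the passing raw score to 700 and the maximum raw score to 1000. To be clear, this formula is an assumption for illustration; the exact conversion used on these exams hasn't been published in this post. But it shows why a scaled 700 need not equal 70% of the questions.

```python
# A sketch of ONE possible raw-to-scaled conversion (an assumption, not the
# published Microsoft formula): a piecewise-linear map with cut_raw -> 700
# and max_raw -> 1000 on the 0-1000 reporting scale.

def scale_score(raw, cut_raw, max_raw):
    """Map a raw point total onto the 0-1000 scale, pinning cut_raw to 700."""
    if raw <= cut_raw:
        return 700 * raw / cut_raw                              # [0, cut] -> [0, 700]
    return 700 + 300 * (raw - cut_raw) / (max_raw - cut_raw)    # (cut, max] -> (700, 1000]

# Suppose an exam has 50 one-point questions and the SMEs set the cut at
# 38 correct (76% of the questions) -- hypothetical numbers.
max_raw, cut_raw = 50, 38
print(scale_score(38, cut_raw, max_raw))  # 700.0 -- passing, even though 38/50 = 76%
print(scale_score(35, cut_raw, max_raw))  # ~644.7 -- failing, despite 70% correct
print(scale_score(50, cut_raw, max_raw))  # 1000.0 -- a perfect raw score
```

Under these made-up numbers, answering 70% of the questions correctly scales to roughly 645, not 700, because the subject matter experts placed the cut above 70%.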
Think about this. If you go to college (let's pick Pepperdine), you take a test and you get to see the test. Whether you get to keep it is another story, but you are at least able to review the exam. Here, with Microsoft? NO GO. ZIPPO, NADA, DO NOT PASS GO.
Rather than interpreting, why are you not allowing them to review the test after they fail? You don't have to give it to them, but at least let them review the test. We are trying to produce Microsoft Certified Engineers, not professional test takers. We are trying to get our "workers" a skill and certified on Microsoft. We want them pumped up on selling and servicing a product which you are producing. We need more than a bar graph to tell us. This does nothing, absolutely ZERO, other than... "go study harder on Client configurations." Really... where? Is the material I am using the right material? What am I getting wrong every time, and why? If you don't know the "why," you will always keep receiving an incorrect score. How are you supposed to fix something when you don't know what to fix? This testing system is such a pet peeve of mine as a CEO of a business. I have spent hours and hours of my time and my money paying my guys to pass an exam, and seeing them come back with a blank stare along with this really dumb score card is brutal.
Isn’t the whole process to make them feel good about getting closer to the finish line? What you’re doing is not helping by any means.
Patrick put it perfectly! You can almost categorize me as a professional test taker; I've been with MSFT technologies so long, and taken nearly every test of certain technologies. Nevertheless, I was flabbergasted at some of the scores I got from some of my most recent tests. Yes, I passed, but barely, and what makes it way worse is that I had no idea what I didn't study for and practice correctly. I guess I never really thought of it before, as I would just be happy to pass with a high score, but after studying the same as I always have, and passing so miserably, I was left with the feeling: "If I barely passed this, who IS passing? And what are they doing (other than braindumps) to pass correctly?"
If I were paying my employees to pass so we could meet a MS Partner criteria, I would be hard pressed to simply tell them, go study harder on.. (whatever their bad score was)... when they already put in a ton of work on either their time or my dime, with no real direction.