Microsoft Introduces Progressive Case Studies: Psychomagician & Super Sigma Bring You the Latest News!

Liberty Munson (Microsoft)

Hey VIPs! As promised, Psychomagician and Super Sigma are committed to making sure you are the first to know about changes, innovations, and improvements that Microsoft is making to its certification program and exams. And they have some news for you…Microsoft Learning is exploring a variation of case studies called ‘progressive case studies,’ and they need YOUR feedback!

This variation of case studies is a game changer: it creates scenarios that more closely mirror how you actually solve problems on the job because, just like in the real world, more information is introduced as you proceed through the case. Want to know more? Watch the video below and the demo of the new question format, then complete the survey to have a say in how we ultimately roll this out. Because the structure of progressive case studies is so different from our other question types, we really need to know what you think! To have a voice in the implementation of this question format, watch the video and respond to the survey by June 16. Psychomagician and Super Sigma look forward to seeing what you think!

...Cert Bunny is curious, too...

Comments
  • thomas-mullen

    First thoughts:  

    1.  Only applicable to certain skill areas. See #4.

    2.  Emphasizes the ability to infer, not just the ability to demonstrate mastery of MS technologies.  

    3.  Is in fact unrealistic, because in real-world situations we have access to web searches, phone follow-ups, and colleagues to crack these problems. Moreover, technical solutions often go through several iterations, and politics and budget not infrequently trump technical optimization.

    4.  It seems to me that this will drive the subject matter to be tested in the direction of material which can easily be "narrativized" and presented in storyboard form, whether or not this is the material we most often (or most importantly) use at work.

    Please note that as an MCT, technical project manager, and consultant, I constantly rely on the ability to read between the lines and make inferences, and these are absolutely vital skills to acquire. However, they are not the focus of the classes we teach.

    With TechEd upon us, this would be a great topic for discussion, and it would have been a great subject for a Birds of a Feather group. Any chance of such a discussion happening?

  • KevinM

    I have to agree.  Do you want to test on the features of the technology, or on someone's ability to read and comprehend email?  I vote for the former.

    The question I would ask is this: how would this sort of question make the exams better? How would it better filter qualified candidates from unqualified ones? At the end of the day, I don't think it does, because it shifts a larger portion of the test toward a candidate's ability to infer rather than their ability to make technical decisions based on technical data.

    There is far too much room for inference in these questions, and this sort of thinking has already invaded the exams enough. I have often flagged and commented on what I consider to be appallingly poorly written questions, precisely because they require you to make assumptions and inferences about what a potential answer actually means. In some cases I've seen questions where none of the possible answers could be considered accurate without assuming or inferring additional steps or items not included in the provided answers. I've spent far too much time during exams trying to determine what was in the mind of the person who wrote the question (i.e., what piece of knowledge they are testing for) so that I could pick the "most correct" answer out of a group of entirely incorrect answers. Granted, I've done a lot of beta exams, but the last three I have taken since March were not betas, and they all had at least a handful of similarly bad questions.

    If your goal is to more accurately reflect real-world situations, then you should get the problems with the virtual labs sorted out and let us test against real-world scenarios using the actual software.

  • thomas-mullen

    Something else to consider: normally we spend much time and effort accumulating all the necessary information upfront, _before_ we start designing solutions, so that we measure twice and cut once. Acting as a technical project manager, if I presented solutions that kept changing as I got around to asking for more information, I would not be acting efficiently or excellently. Committing to technical solutions or answers before you have all the facts (as this test format seems to require) is not realistic, not real-world, and not the mark of a successful technologist or technical manager. It's also not a characteristic I would look for in my team members.