Vitaliy maksimov.org> writes:

> I'm afraid you missed the point. Those who can solve the simple problems
> quickly, are found to be better at solving more complex problems.

That is simply not true, just as you cannot scale man-hours by assigning
more than one man to the task.

There are two mindsets at work in any problem-solving situation: fast
solving of 'standard' problems relevant to the domain, mostly using
first-order logic and calculus, and *non*-fast solving of complex,
nonlinear, out-of-the-box problems relevant to the domain. 'Relevant to
the domain' means that idiotic questions about what the applicant would do
with three bananas when he has to feed two hungry monkeys and two hungry
children have no place in any kind of engineering interview. That kind of
monkey question is, at best, good for finding out whether the applicant
would make pleasant conversation at the annual steak and salad bar company
outing.

The 'fast' problem-solving speed gives an idea of how someone would
perform in a production environment, likely under pressure. The 'slow'
speed (the ability to find workable solutions to hairy real-life problems
in reasonable time and at reasonable cost) determines whether the
applicant has any hope of becoming a developer who can carry ideas from
the abstract to solder outside a unionized government lab with unlimited
funding and time, as opposed to the production-engineer skills represented
by the 'fast' solvers. The 'slow' problems usually cannot be solved by
people who only apply things learned by rote and who lack the experience
to choose the out-of-the-box solution that will really work and be
implementable (no Naquadah-powered fusion reactors allowed in the
solution, Saint-Venant is important in engineering even if religion is
not, and 'electricity does not flow inside insulated wires').

Afaik most tests (I don't know about the SAT) mix problems of both types
into the test set, and with good reason. From what I have read about the
writing and evaluation of tests, anyone who is not a trained psychologist
with the relevant statistics background in test scoring and result
interpretation would best serve his interests by purchasing test sets from
companies or universities which *do* have these credentials, and by
abstaining at all costs from tweaking the results. Whether he should also
use additional testing methods is up to him. In any case the 'standard'
tests yield a point score, and the test instructions explain what can be
considered a passing score, and why. There is no black-and-white testing
solution.

I am sorry if I sound like I am giving advice here, because I am not. I am
just relaying my conclusion, consolidated from several tests taken by me
and by others with whom I have discussed the matter.

Proper tests, which reflect the real qualities of the testees, seem to
have a few characteristics in common:

- they are composed of many unrelated questions (>>10, nearer 50 to 100)
- all the questions are relevant to the field in some way; no monkeys
- some questions are hard, most are not
- they have a biased scoring sheet (no yes/no answers; at least some of
  the multiple-choice questions allow more than one, or zero, correct
  options)
- some questions require calculus and entering numeric or equation
  results, but most do not; technology is not calculus
- they take less than 3 hours to complete in any case, often under one
  hour, and the testing time is limited, often per question but not
  necessarily; this measures speed too
- *none* of the good tests are written by recruiters, company bosses
  looking for a recruit, engineers looking for a job applicant, or an HR
  department employee who thinks she has read enough books to write one
- they have very specific, non-linear scoring rules, and they *warn*
  against removing any questions from the test set(s) (see the sketch
  right after this list)
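To make the 'biased scoring sheet' and 'non-linear scoring' points a bit
more concrete, here is a toy sketch in Python. The numbers and the curve
are made up purely for illustration; as far as I know no real test
publisher scores exactly this way. It gives partial credit on multi-answer
questions, penalizes wrong picks, and maps the raw total through a convex
curve computed against the *full* question set, which is also why pulling
questions out of the set wrecks the calibration:

  def question_score(correct, chosen):
      """Partial credit for one multi-answer question.

      correct, chosen: sets of option labels, e.g. {'a', 'c'}.
      Right picks earn credit, wrong picks cost half a credit each,
      and the result is clamped to the range 0..1.
      """
      if not correct:                       # 'zero correct options' case
          return 1.0 if not chosen else 0.0
      hits = len(correct & chosen)
      misses = len(chosen - correct)
      raw = (hits - 0.5 * misses) / len(correct)
      return max(0.0, min(1.0, raw))

  def test_score(per_question, full_set_size):
      """Non-linear total on a 0..100 scale.

      per_question: list of question_score() results.
      full_set_size: number of questions in the unmodified test set.
      Dividing by full_set_size (not by len(per_question)) is what makes
      dropping questions from the set ruin the calibration.
      """
      fraction = sum(per_question) / full_set_size
      return round(100 * fraction ** 2)     # arbitrary convex curve

  # A three-question toy test (a real set would be 50 to 100 questions):
  scores = [
      question_score({'a', 'c'}, {'a', 'c'}),  # all right            -> 1.0
      question_score({'b'},      {'b', 'd'}),  # one wrong pick       -> 0.5
      question_score(set(),      {'a'}),       # picked a non-answer  -> 0.0
  ]
  print(test_score(scores, full_set_size=3))   # prints 25

With a raw fraction of 0.5 the example reports 25 rather than 50: losing
half the raw credit costs three quarters of the final score, which is the
kind of deliberately skewed weighting I mean by 'not linear'.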
The way I understand it, the reason for not writing your own test is the
same as the reason given for not writing your own 'secure, unbreakable'
crypto algorithm. There is more to the issue than meets the eye, and
failure is basically pre-programmed if you attempt it blindly, because of
the bewildering set of factors that can break the test or bias its
results. The people who create tests have access to the relevant
statistics and to test methods (biased questions, etc.) which compensate
for the problems that appear. A casual test writer probably has no clue
about what skews a test in which direction and why, even if he can write
meaningful technical test questions.

Peter