Literature on decision-making


Ben Wilbrink



Gerd Gigerenzer, Ralph Hertwig & Thorsten Pachur (Eds.) (2011). Heuristics. The foundations of adaptive behavior. Oxford University Press. [not available as an eBook in the KB] info; contents & abstract available for every chapter.




Gerd Gigerenzer and Wolfgang Gaissmaier (2011). Heuristic Decision Making. Annual Review of Psychology, 62, 451-482. abstract




Kris N. Kirby (2011). An empirical assessment of the form of utility functions. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 461-476. abstract




James K. Rilling & Alan G. Sanfey (2011). The Neuroscience of Social Decision-Making. Annual Review of Psychology, 62, 23-48. abstract




Robert Axelrod (1997). The complexity of cooperation. Agent-based models of competition and collaboration. Princeton University Press. isbn 0691015678




C. Emdad Haque (1997). Hazards in a fickle environment: Bangladesh. Kluwer Academic Publishers. isbn 0792348699

Among others: Hazardous environment and disastrous impact - Human coping responses to natural hazards - Social class formation and vulnerability of the population: a historical account of human occupance and land resource management - Impacts of riverbank erosion disaster - Toward a sustainable floodplain development strategy.



Daniel Kahneman & Amos Tversky (Eds.) (2000). Choices, values, and frames. Cambridge University Press. info




Jerad H. Moxley, K. Anders Ericsson, Neil Charness & Ralf T. Krampe (2012). The role of intuition and deliberative thinking in experts' superior tactical decision-making. Cognition, 124, 72-78. abstract




Sarah Lichtenstein & Paul Slovic (Eds.) (2006). The construction of preference. Cambridge University Press. isbn 0521542200




G. J. Mellenbergh (1979). De beslissing gewogen. In Rede als richtsnoer (pp. 183-196). 's-Gravenhage: Mouton. Liber amicorum for A. D. de Groot.





Vaithilingam Jeyakumar and Alexander Rubinov (Eds) (2004). Continuous optimization. Current trends and modern applications. Springer.




Wim J. van der Linden & Gideon J. Mellenbergh (1978). Coefficients for Tests from a Decision Theoretic Point of View. Applied Psychological Measurement, 2, 119-134. abstract




Wim J. van der Linden & Gideon J. Mellenbergh (1977). Optimal Cutting Scores Using A Linear Loss Function. Applied Psychological Measurement, 1, 593-599. abstract




Ariel Rubinstein (1998). Modeling bounded rationality. MIT Press. isbn 0262681005 pdf (whole book)


'Bounded rationality' is Herbert Simon's term; Simon also offers a critique in the last chapter of this book. Tucked away in this book for now: Savage, L. J. (1971). Elicitation of personal probabilities and expectations. JASA, 66, 783-801. (fc) Fine article. Proper scoring rules. "This article is about a class of devices by means of which an idealized homo economicus - and therefore, with some approximation, a real person - can be induced to reveal his opinions as expressed by the probabilities that he associates with events or, more generally, his personal expectations of random quantities."



Gerd Gigerenzer (2007). Gut feelings. The intelligence of the unconscious. Penguin. isbn 9780713997514




Ralph L. Keeney and Howard Raiffa (1976). Decisions with multiple objectives. Preferences and value tradeoffs. Wiley. isbn 0471465100




C. A. Hooker, J. J. Leach & E. F. McClennen (Eds.) (1978). Foundations and applications of decision theory. Volume I: Theoretical foundations. Volume II: Epistemic and social applications. Reidel. isbn 9027708428 (I), 9027708444 (II)




Elke U. Weber (1994). From Subjective Probabilities to Decision Weights: The Effect of Asymmetric Loss Functions on the Evaluation of Uncertain Outcomes and Events. Psychological Bulletin, 115, No. 2, 228-242. pdf




Hillel J. Einhorn & Robin M. Hogarth (1978). Confidence in judgment: persistence of the illusion of validity. Psychological Review, 85, 395-416.




H. Swaminathan, Ronald K. Hambleton & James Algina (1975). A Bayesian decision-theoretic procedure for use with criterion-referenced tests. Journal of Educational Measurement, 12, 87-98. preview


Used in my 1980 articles.



Frederic M. Lord (1985). Estimating the imputed social cost of errors of measurement. Psychometrika, 50, 57-68.




Frederic M. Lord (1980). Applications of item response theory to practical testing problems. Erlbaum. Ch. 11: Mastery testing.




P. van Rijn, A. Béguin & H. Verstralen (2009). Zakken of slagen? De nauwkeurigheid van examenuitslagen in het voortgezet onderwijs. Pedagogische Studiën, 86, 185-195. abstract




Bastiaan J. Vrijhof, Gideon J. Mellenbergh & Wulfert P. van den Brink (1983). Assessing and Studying Utility Functions in Psychometric Decision Theory. Applied Psychological Measurement, 7, 341-357. abstract




Donald A. Rock, John L. Barone and Robert F. Boldt (1972). A two-stage decision approach to the selection problem. British Journal of Mathematical and Statistical Psychology, 25, 274-282. abstract "Theoretical solutions developed on the computer suggest that a considerable amount of testing time may be saved with little or no decrease in the validity of the selection procedure for all values of the selection ratios."



Hans J. Vos (1997). Adapting the amount of instruction to individual student needs. Educational Research and Evaluation, 3, 79-97. abstract




Henk de Vos (1989). A rational-choice explanation of composition effects in educational research. Rationality and Society, 1, 220-239. (cited and used by Bosker & Guldemond, 1994) abstract


frog-pond effect

Pratt, J. W. (1964). Risk aversion in the small and in the large. Econometrica, 32, 122-136. (cited by Van der Gaag, section 2.3). Reprinted in Tummala, V. M. R., & Henshaw, R. C. (Eds.) (1976). Concepts and applications of modern decision models. Division of Research, Graduate School of Business Administration, Michigan State University, East Lansing, Michigan.

Pratt, J. W., Raiffa, H., & Schlaifer, R. (1964). The foundations of decision under uncertainty: an elementary exposition. Journal of the American Statistical Association, 59, 353-375.



Gideon J. Mellenbergh & Wim J. van der Linden (1978). Decisions based on tests: Some results with a linear loss function. Paper presented at the European Meeting on Psychometrics and Mathematical Psychology, University of Uppsala, Uppsala, Sweden, June 15-17, 1978. Kwantitatieve Methoden, 4, 51-61.


Two questions, reading the abstract: 1) Is the resit properly modelled in decision-theoretic terms? 2) Is it really the case that personnel selection is an analogue?

Ad 1.1: the intention is to predict on the basis of the raw test scores.

Ad 1.2. No, correction: the scores to be 'predicted' turn out to be true scores on a variable 'suitability'. How is it possible to predict platonic scores?

Ad 1.3. Introduces a linear loss function, following Van der Linden & Mellenbergh (1977). I will first annotate that one!



Mellenbergh, G. J., & Van der Linden, W. J. (1981). The linear utility model for optimal selection. Psychometrika, 46, 283-293.




Wim J. van der Linden & Gideon J. Mellenbergh (1977). Optimal cutting scores using a linear loss function. Applied Psychological Measurement, 1, 593-599. pdf


This is an exercise in reliability, as Wim van der Linden would later call it (1980, Applied Psychological Measurement). Does finding 'optimal' cutting scores, given that one has 'fixed in advance' a latent cutting score, solve any real problem? The article might present some useful techniques, or demonstrate some techniques to be not useful at all. Let's see.

References: Hambleton & Novick (1973); Meskauskas (1976); Huynh (1976).

The analysis will be over the total group of testees. This particular choice is not discussed by the authors. An alternative analysis is to consider only the testees scoring x = c, c being the particular cutting score considered for analysis. Would that model choice have made a difference? Sure, by an order of magnitude: much, much simpler, with better transparency. See my 1980 article in Tijdschrift voor Onderwijsresearch.

Specifying loss functions is a somewhat forced approach. Why not specify utility functions?

One may wonder how it is possible, and why it could be useful, to specify utility on a variable that is latent. This is a serious objection, especially so where experimental subjects are being asked to specify their utilities. They will do so, of course, obligingly. (See Van der Gaag's dissertation on this issue.)

Reference to Huynh (1976) & Mellenbergh, Koppelaar & Van der Linden (1976) for threshold loss analysis: minimizing the risk. I will have to annotate these articles, too: searching for the ancestry of the concepts of loss and risk as used by Mellenbergh & Van der Linden. A shortcut: Mellenbergh & Gode (2006), the chapter on decision-theoretic test use.



G. J. Mellenbergh & M. Gode (2006). Beslissend testgebruik. In W. P. van den Brink & G. J. Mellenbergh: Testleer en testconstructie (pp. 399-427). Boom. isbn 9053522395 book info


I will comment in English, even though the book is in Dutch. The reason is that I expect the problematic aspects in this chapter to be typical of the decision-theoretic literature in the field of educational measurement.

The chapter identifies Cronbach & Gleser's classification, allocation and selection, as well as Van der Linden's (1985) mastery. In the latter case the prediction is of the latent trait or true score. Wow! This is 1977. Totally unacceptable, because it does not offer any practical solution? Let's see. Mellenbergh & Gode here define allocation as classification; classification with Cronbach & Gleser is categorical (with the testee, not with the treatment): man/woman; healthy/cancer. I really am disappointed, already on the first page of the chapter. Will have to talk to Don about this, I suppose. The Van der Linden 'mastery' category is phoney and therefore superfluous (I have shown as much in my 1980 articles). The chapter does not treat the mastery decision at all; why then introduce this dubious distinction?

Utility functions get introduced on p. 405. Regrettably, this introduction is faulty. The text states: "A utility function represents what the 'results' are of the selection procedure" [my translation, b.w.]. Expected utility gets mistaken for utility. These concepts are categorically different! This is the kind of mistake that is rather typical of the literature on decision making in testing situations, regrettably.

The next problem is that suitability is declared to be absolute: either the employee turns out to be suitable, or not. This kind of rationalizing is not unusual in selection psychology, yet it is very clumsy and above all unnecessary. It is also unnecessary if one has to take pass-fail decisions, as will be the case in, e.g., the situation depicted in Figure 12.1.

Here threshold loss gets introduced. The reference is Hambleton & Novick (1973). I will now annotate that one; it is pretty basic to nearly everything that has been published since on utility models for achievement tests.



Wim J. van der Linden (1985). Decision theory in educational research and testing. In T. Husen & T. N. Postlethwaite (Eds.), International encyclopedia of education: Research and studies (pp. 1328-1333). Oxford: Pergamon Press.




Lee J. Cronbach & Goldine C. Gleser (1957/1965). Psychological tests and personnel decisions. University of Illinois Press.




Ronald K. Hambleton & Melvin R. Novick (1973). Toward an integration of theory and method for criterion-referenced tests. Journal of Educational Measurement, 10, 159-170. Also ACT Research Report 53, 1972. pdf


The basic paradigm, believe it or not, is sketched verbally in the citation below. It has been followed stubbornly by many researchers not asking some critical but simple questions. The formal apparatus follows this description (see the report).

The formal model then gets presented in a formalistic way that makes it rather difficult to understand. Let me therefore first report in my own words what the authors propose here, and the extensions of the model that in my opinion are necessary to avoid any fuzziness.

  1. The situation to be modeled is that of pass-fail scoring; failed students will have to resit the test after some extra preparation time.
  2. The goal variable is mastery, that is, the latent trait or true score.
  3. The utility function on mastery is a threshold function, e.g. zero below the point of sufficient mastery, one above it.
  4. The exact 'point of sufficient mastery' is assumed to be known! This is unacceptable, but at this point I will go along with the assumption. A complete decision-theoretic approach, of course, should not hinge on this kind of fuzziness, but instead resolve it. For example, by using a utility function that is derived from or identical with the learning curve for mastery.
  5. Somehow costs are relevant too according to the authors; they do not explicitly model them, however. Fuzziness again. I will leave costs out of the model altogether.
  6. The model may be applied in the case there is only one student, as well as in the case of groups (in the latter case some Bayesian statistics might be used for fine tuning).
  7. The question then is: given the individual's test score X = x, should she be passed, or failed?
  8. Some mapping of observed score on the mastery variable is needed. I prefer using a binomial model here, so there is a definite likelihood function on the mastery variable (SPA-model).
  9. The expected utility of the pass decision under threshold utility, E(u1), then simply is the probability that this individual is a master.
  10. Now the question is: what is the expected utility of the fail decision? The Hambleton-Novick model is thoroughly fuzzy on this point. Let me try to be specific, then.
  11. Assume there is only one resit (it is always possible to extend the model to more resits; see, e.g., Van Naerssen's tentamenmodel).
  12. After the resit, given the raw score on the resit, the expected utility E(u2) is, again, the probability of mastery.
  13. The problem then becomes: what is the prediction of the score on the resit, given the score on the first test? For an immediate resit the prediction function is the beta-binomial function. The situation is more complicated than that, however: the student will spend time learning, heightening her mastery. This will soon get way too complicated to model abstractly, however.
  14. Assume empirical data to be available on the resit scores given the score on the first test, as of course they should be (a validation study); otherwise, do the experiment to obtain them.
  15. Assume a beta-binomial distribution (n, a, b) fitted to the resit-score distribution given the score of our individual student X = x. The density function on mastery then is a beta function with parameters a and b; the expected utility E(u2) under threshold utility then is the probability of mastery.
  16. The expected utility of the fail decision given X = x now is E(u2), and that value, of course, is always higher than E(u1), barring extreme contingencies. Therefore: always fail all students, unless X = n.
  17. Wow. How to proceed? Is there only fuzziness?
  18. Plot E(u2) - E(u1) for X = 0, 1, ..., n. For an impression of this kind of plot, see the figure.
  19. [figure: gif/toetsen_HN.png]
  20. A good criterion now might be to set the cutting/passing score X = c at the score c where the difference in expected utilities E(u2) - E(u1) is smaller than the corresponding difference for X = c-1. Assume the plot of differences to be decelerating in the range of interest, and deceleration first to increase and then to decrease. The optimum passing score then is the score corresponding to the inflection point: the number correct at the right end of the steepest stretch (see the sketch following this list). Is this a procedure resulting in the optimal cutting score, within the restrictions of the situation as given? No, but it obviates fuzzy talk about costs. Call this solution 'satisficing' (Herbert A. Simon): it is evidently the case that 'better' models can be developed, but this solution in many cases will do perfectly.
    • Simon introduced the distinction maximizers - satisficers in 'Rational choice and the structure of the environment', Psychological Review, 1956, 63, 129-138 (reprinted in his Models of thought as well as in his Models of man, social and rational, Wiley, 1957).
  21. Observe that in the above exposition there is no need for talk about 'false negatives' or 'false positives', or 'incorrect decisions'. This kind of terminology does not belong in science either: it is value-laden; better to get rid of it.

[figure: 80gif/80bGrens2.gif] The figure is from Wilbrink 1980b, Figure 3. It illustrates the situation pretty well. I did not succeed in 1980 in getting rid of the fuzzy 'costs' of the resit, however ;-)

The test presumably is a rather short one; the authors never suggest a specific number of items, however. Yet the model has been used in later years for more serious testing in, for example, higher education. Will that make a difference? Presumably so, but I do not know of any analyses on the subject (they should be available in the literature, I suppose).

Let me first take a look at the following: "Basically then, the examiner's problem is to locate each examinee in the correct category." This is problematic; it runs counter to the intention to find an acceptable utility function on the goal variable that is relevant to the situation. The goal variable is not correct classification, it is mastery. The problem then is to optimize the level of mastery, using the instrument of extended instruction/learning and a second test, implying a cutoff score on the first test.

Another problem here is the decision to reduce the criterion variable 'mastery' to a dichotomy, for no good reason whatsoever. In fact, no reason is given at all, except implicitly that the talk of the town has it that there should be a very special point on the dimension of mastery: so special, in fact, that we speak of masters for those above this magical point, and non-masters for those still below it. I ridicule the thinking of Hambleton and Novick here, because they are smuggling in threshold utility. A mortgage on the house of decision-theoretic test psychology. Categories are, e.g., man-woman; cancer yes-no; cat or dog. What Hambleton and Novick are doing is introducing a pseudo-category that seems to come in handy in a situation where pass-fail decisions have to be taken.

See here above also the already familiar mistake of calling an expected utility (or loss) simply utility (or loss). Yet these are fundamentally different. Utility is a function over the goal variable, in this case mastery. Expected utility is what obtains for the options in your decision problem, in this case passing or retaining students with a score X = c. In fact it is really simple: whether the decision is to pass or fail this person, her mastery π stays the same and has one definite utility. Meaning also: there is no way to construct a loss here; there are no differences in utility at all, for this person. Therefore the decision model needs to be developed further: failing the student means she has to sit the test again, after some extra time spent in preparation, thus ameliorating her mastery π. The loss in passing this student is then the absolute difference between the utilities of both levels of mastery, as sketched below.

Allow me one more comment on the sentence cited above. The authors have it that (some) decisions are 'incorrect'. How can that be? Should other decisions have been taken? This is all very clumsy. If decisions have been taken reckoning with the information available, how is it that they can be 'incorrect'? Herbert Simon was quite explicit on this point: if two alternatives have expected utilities near each other, choose the one with the somewhat higher expected utility. It might turn out that the outcome is disappointing; does that make the decision 'incorrect'? I don't think so. There is yet quite another problem with this decision model: the decision maker is not the student. Yet students will adapt their preparation strategies contingent on where the cutting score will be placed (assuming the difficulty of the test will remain the same). See Van Naerssen (1970), or on this website my SPA-model. For the student as decision maker, the model is also one of threshold utility; assuming a pass will have utility 1 and a fail utility 0, expected utility for the student is simply the probability of passing (see the sketch below). That probability depends on her mastery. For the institution or the teacher the optimization problem therefore is quite another one than Hambleton and Novick would have us believe: it is to find the threshold on the test as well as the retest that will result in the highest mastery (for individuals or for the group of testees) in some sense (expected utility, that is).



Naerssen, R. F. van (1965). Enkele eenvoudige besliskundige toepassingen bij test en selectie. Nederlands Tijdschrift voor de Psychologie, 20, 365-380. fc



Hunter, J. E., & Schmidt, F. L. (1980?). Fitting people to jobs: the impact of personnel selection on national productivity. In E. A. Fleishman (Ed.), Human performance and productivity. (COWO?). fc selection



Chen, J. J., & Novick, M. R. (1982). On the use of a cumulative distribution as a utility function in educational or employment selection. Journal of Educational Statistics, 7, 19-35. fc. From the abstract: "A least-squares procedure, developed by Lindley and Novick for fitting a utility function, is applied to truncated normal and extended beta distribution functions. The truncated normal and beta distributions avoid the symmetry and infinite range restrictions of the normal distribution and can provide fits in some cases in which the normal functional forms cannot provide a reasonable fit."



Novick & Grizzle (1965). A Bayesian analysis of data from clinical trials. JASA. (fc)



Novick (1980). Statistics as psychometrics. Psychometrika, 45, 411. (fc)



Melvin R. Novick and D. V. Lindley (1979). Fixed-state assessment of utility functions. Journal of the American Statistical Association, 74, 306-310. (fc) preview


This approach may be a useful alternative to fixed probability methods, but only in an interactive environment in which the resolution of incoherence is encouraged and facilitated.



Melvin R. Novick and D. V. Lindley (1978). The use of more realistic utility functions in educational applications. Journal of Educational Measurement, 15, 181-191. fc preview


This connects to the way in which Pratt, and also Schlaifer, construct utility functions. The summary:



Novick, M. R. (1980). Statistics as psychometrics. Psychometrika, 45, 411-424. Extensive treatment of utility. E.g. p. 420: "It is evident that utility assessment is difficult and that there are many biases to be avoided. My only surprise is that there was ever any belief that simple methods would be adequate. Surely fifty years of work in opinion polling should have made us more sophisticated."



Petersen, N. S. (1976). An expected utility model for 'optimal' selection. Journal of Educational Statistics, 1, 333-358. fc



Michael T. Kane & Robert L. Brennan (1980). Agreement coefficients as indices of dependability for domain-referenced tests. Applied Psychological Measurement, 4, 105-126. (loss functions) pdf




Julius Kuhl (1978). Standard setting and risk preference: an elaboration of the theory of achievement motivation and an empirical test. Psychological Review, 85, 239-248. abstract




N. van der Gaag (1990). Empirische utiliteiten voor psychometrische beslissingen. Dissertation, University of Amsterdam, 22 November 1990 (supervisor: Don Mellenbergh).


My note of April 2002: it turns out that subjects are asked to do very strange things, and that they give neat answers that yield approximately linear utility functions (indeed: two of them, over true mastery). These experiments are quite useful for illustrating how compliant subjects are (not all subjects, by the way; there has been the occasional rebellious one). Particularly problematic (but this goes back to Vrijhof's 1981 research; see Mellenbergh's 1986 Psychon proposal) is that students, as students, and teachers, as teachers, arrive at the same utility functions. That suggests the results of these studies may be artefactual.

Dato N. M. de Gruijter & Ronald K. Hambleton (1984). On problems encountered using decision theory to set cutoff scores. Applied Psychological Measurement, 8, 1-8. "In the decision-theoretic approach to determining a cutoff score, the cutoff score chosen is that which maximizes expected utility of pass/fail decisions. This approach is not without its problems. In this paper several of these problems are considered: inaccurate parameter estimates, choice of test model and consequences, choice of subpopulations, optimal cutoff scores on various occasions, and cutoff scores as targets. It is suggested that these problems will need to be overcome and/or understood more thoroughly before the full potential of the decision-theoretic approach can be realized in practice."

Wim J. van der Linden (1984). Some thoughts on the use of decision theory to set cutoff scores: Comment on de Gruijter and Hambleton. Applied Psychological Measurement, 8, 9-17. "In response to an article by de Gruijter and Hambleton (1984), some thoughts on the use of decision theory for setting cutoff scores on mastery tests are presented. This paper argues that decision theory offers much more than suggested by de Gruijter and Hambleton and that an attempt at evaluating its potentials for mastery testing should address the full scale of possibilities. As for the problems de Gruijter and Hambleton have raised, some of them disappear if proper choices from decision theory are made, while others are inherent in mastery testing and will be encountered by any method of setting cutoff scores. Further, this paper points at the development of new technology to assist the mastery tester in the application of decision theory. From this an optimistic attitude towards the potentials of decision theory for mastery testing is concluded."

Dato N. M. de Gruijter & Ronald K. Hambleton (1984). Reply to van der Linden's "Thoughts on the Use of Decision Theory to Set Cutoff Scores". Applied Psychological Measurement, 8.



Ronald K. Hambleton, Hariharan Swaminathan, James Algina & Douglas B. Coulson (1978). Criterion-referenced testing and measurement: a review of technical issues and developments. Review of Educational Research, 48, 1-47. JSTOR read online free


The authors think in terms of classification. Philosophers would call this a category mistake. The better approach: decision-theoretic, without artificial classificatory cutting scores.



Vos, H. J. (1990). Simultaneous optimization of decisions using a linear utility function. Journal of Educational Statistics, 15, 309-340. preview: http://www.jstor.org/discover/10.2307/1165091




W. J. van der Linden (1987). The use of test scores for classification decisions with threshold utility. Journal of Educational Statistics, 12, 62-75. open access




Huynh Huynh (1977). Two simple classes of mastery scores based on the beta-binomial model. Psychometrika, 42, 601-608. preview


See Huynh (1976) on the idea of the referral task.



Huynh Huynh (1980). A non-randomized minimax solution for passing scores in the binomial error model. Psychometrika, 45, 167. abstract




Huynh Huynh (1982). Assessing efficiency of decisions in mastery testing. Journal of Educational Statistics, 7, 47-63. preview


False positive error, false negative error.



Huynh Huynh (1976). On the reliability of decisions in domain-referenced testing. Journal of Educational Measurement, 13, 265-276. preview


Bivariate beta-binomial model. In fact, an exercise in threshold loss with criterion-referenced tests.



Daniel Gigone & Reid Hastie (1997). Proper analysis of the accuracy of group judgments. Psychological Bulletin, 123, 149-167. abstract




George K. Chacko (1971). Applied statistics in decision-making. American Elsevier. isbn 0444001093




Daniel Kahneman & Gary Klein (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64, 515-526. pdf




Ben R. Newell, David A. Lagnado & David R. Shanks (2015, 2nd edition). Straight choices. The psychology of decision making. Psychology Press. isbn 9781848722835 info [available in the UBL, Wassenaarseweg] [Although aimed at a broader audience, it is up to date as regards developments]




Hal R. Arkes and Kenneth R. Hammond (Eds.) (1986). Judgment and decision making. London: Cambridge University Press. isbn 0521339146 [a 2nd edition appeared in 1999]




Kenneth R. Hammond (2000). Judgments under stress. Oxford University Press. isbn 0195131436 info


Judgments under stress are the kind of decisions leading to disasters such as that of the Challenger.



C. R. Bell (Ed.) (1979). Uncertain outcomes. MTP Press. isbn 0852001037




W. M. Goldstein & R. M. Hogarth (Eds) (1997). Research on judgment and decision making. Currents, connections, and controversies. Cambridge University Press. isbn 0521483344 info




Robin M. Hogarth (2001). Educating intuition. Chicago: The University of Chicago Press. isbn 0226348601













12 January 2018 \ contact ben at at at benwilbrink.nl

http://www.benwilbrink.nl/literature/decision-making.htm http://goo.gl/aq6uH0