- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts |
quantum machine learning |
Box 3. Mechanisms Explaining Context Effects
Attribute Comparison
Some models assume that the evaluation of an attribute for an option is formed by evaluating the value of the option for an attribute relative to the attribute values of the other options. Therefore, the relative advantage/disadvantage of each option for an attribute depends on the context of items within which it is presented (left part of middle panel of Figure 1).
Attention to Attributes
Models of attribute processing need to make assumptions about how attention is allocated to each attribute. Many models treat the attention weight for each attribute as a free parameter. Other approaches assume that the attribute values themselves drive the attention allocation process.
Filtration
When comparing options on attributes, both advantages and disadvantages are produced. Some models assume that there is an imbalance in the evaluation of gains and losses such that losses tend to have larger psychological impacts than gains [37].
Attribute Integration
Models need to integrate attribute information into an estimate of the overall preference for each alternative. A stochastic approach is to assume that attention fluctuates among the attributes from one moment to the next, and that the fluctuating comparisons are integrated over time into an evolving preference state. A deterministic approach is to assume a weighted average of attribute comparisons for an option, which determines the rate of growth in preference across time (see right part of middle panel of Figure 1).
Competition
Some models assume that choice alternatives compete with one another by a lateral inhibitory process, creating interesting dynamics in the deliberation process. The lateral inhibition can either be uniform across options, or dependent on the psychological distance between options [95].
Valuation Noise
Because human preferences often vary across occasions, it is also important to have mechanisms that describe stochasticity in the choice process. Models of probabilistic choice assume either moment-to-moment noise in the accumulation process, or trial-to-trial variability in the evaluation of the values of options.
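The mechanisms above can be combined in a minimal simulation. The sketch below is a toy, decision-field-theory-style accumulator for three options on two attributes: attention switches stochastically between attributes, relative advantages are integrated over time, options compete through distance-dependent lateral inhibition, and moment-to-moment noise makes choice probabilistic. The attribute matrix, kernel, and all parameter values are illustrative assumptions, not values from any of the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attribute matrix: three options (rows) scored on two
# attributes (columns), e.g. quality and economy.
M = np.array([[0.8, 0.3],
              [0.3, 0.8],
              [0.7, 0.4]])

w = np.array([0.5, 0.5])   # attention weights over the two attributes

# Distance-dependent lateral inhibition: options close together in
# attribute space inhibit each other more strongly (illustrative kernel).
D = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=2)
S = -0.04 * np.exp(-D**2)          # negative cross-connections (competition)
np.fill_diagonal(S, 0.9)           # positive self-feedback (memory)

P = np.zeros(3)                    # evolving preference state
for _ in range(500):
    k = rng.choice(2, p=w)         # attention stochastically samples one attribute
    v = M[:, k] - M[:, k].mean()   # relative advantage on the attended attribute
    P = S @ P + v + rng.normal(0, 0.05, size=3)   # integrate with moment noise

choice = int(np.argmax(P))         # option preferred at the deadline
```

Repeating the loop over many simulated trials yields choice probabilities, and recording P at intermediate time points yields the dynamic predictions discussed in the next section.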
Qualitative Empirical Comparisons
The paradoxical choice context effects posed a problem for traditional, static models of preferential choice for over 30 years. These context effects also challenged evidence-based sequential sampling models such as the drift-diffusion decision model [7]. This challenge was
Table 1. Comparison of Value-Based Sequential Sampling Mechanisms

Model | Unique mechanisms | Refs
Decision field theory | Attention switching, distance-dependent lateral inhibition | [38]
Leaky competing accumulator | Attention switching, constant lateral inhibition, loss aversion | [16]
Attentional drift diffusion | Attention switches among alternatives | [19]
Selective integration | Attention biased by ordinal comparisons of attribute values | [20]
Associative accumulator | Attention switching driven by the magnitude of attribute values | [21]
Linear ballistic accumulator | Deterministic race based on weighted attribute comparisons | [22]
Decision by sampling | Counts pairwise ordinal comparisons of attribute values | [23]
256 Trends in Cognitive Sciences, March 2019, Vol. 23, No. 3
finally met by advancing value-based sequential sampling models that achieve this capability by introducing a variety of new mechanisms (distance-dependent lateral inhibition, loss aversion, attention guided by rank value). Almost all the value-based sequential sampling models can account for similarity, attraction, compromise, and reference-point effects (except for [19] and [20]). In addition, decision field theory (based on lateral inhibition) and the leaky accumulator model (based on loss aversion) provide strong a priori reasons for the observed negative correlation between attraction/compromise effects and similarity effects (Box 1). The associative accumulation model provides the most systematic account of reference-point effects [43] so far. However, the decision by sampling model [23], being the most recent application to context effects, accounts for the largest number of different qualitative findings (see the 25 different phenomena listed in Table 4 of their article).
A crucial dynamic prediction made by the sequential sampling models concerns the temporal evolution of preferences. The sequential sampling models also predict that attraction and compromise effects grow larger as a function of increasing deliberation time (Figure 1B), and the predicted increasing effect of deliberation time has been confirmed in several experiments [39–41]. The dynamic nature of these effects is important because several new static choice models of context effects have been proposed [36,44–47] that have no mechanisms for making any a priori predictions about dynamic effects.
Quantitative Empirical Comparisons
Several quantitative model comparisons have been conducted to compare the accuracy of the competing sequential sampling models for predicting context effects in value-based choice. The models have been compared using several different methods (Box 4). Some comparisons are based on aggregate data (pooled across participants), others are based on predictions for individual data, and finally some use hierarchical methods that apply to all participants by including an additional model for the distribution of individual differences. Table 2 provides a summary of the model comparisons. Note that this only includes comparisons based on preferential choices among value-based options, and does not include comparisons based on perceptual or inference tasks ([22] gives an example of the latter). Also note that, although all the sequential sampling models are capable of predicting both choice probability and decision time, the comparisons shown in Table 2 are based only on choice data.
Box 4. Methods for Evaluating Models
Quantitatively evaluating the predictions of cognitive models for empirical data usually requires estimating model parameters from part of the data. Bayesian estimation methods have become more popular in cognitive science because they enable hierarchical versions of cognitive models to be fit to the data, and many methods and software packages have facilitated this transition [96]. Once fit, researchers can use methods to obtain metrics such as Bayes factors [97] to assess the evidence for one model or another. However, the complexities of the models described in this article make it difficult to derive simple equations for model fitting, making parameters difficult to estimate. This situation is problematic because it prohibits researchers from using parameter estimates to characterize individual differences, and understand what combination of model mechanisms yields specific patterns of behavioral data. Fortunately, new methods of parameter estimation circumvent the complex mathematical details of the models through model simulation [98]. Often referred to as approximate Bayesian computation (ABC), these methods take summary statistics of simulated data, compare them to observed data, and use the discrepancy between the two statistics as a measure of how likely each model parameter is to have generated the observed data. The novelty of the ABC approach is that it can be used within a Bayesian framework, and thus hierarchical models and parameter uncertainty can easily be assessed. Many new algorithms have been developed for specific modeling applications, such as estimating parameters that are intercorrelated [96,99], models of choice response time [100,101], recognition memory [102], preferential choice [48], and hierarchical models [103]. Together, these algorithms have opened up new opportunities for assessing complex individual differences, as well as comparing model fit, balanced for model complexity.
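The core ABC idea described above can be illustrated with a toy rejection sampler. The snippet infers a single parameter (a choice probability) from simulated binary choices; the simulator, summary statistic, and tolerance are deliberately simplistic assumptions, far coarser than the hierarchical algorithms cited in [96,99–103].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": a simulator whose single parameter p is the probability of
# choosing option A; data are 100 binary choices. All names and numbers
# here are illustrative, not drawn from the studies cited above.
def simulate(p, n=100):
    return rng.random(n) < p

observed = simulate(0.7)                    # stand-in for real data
s_obs = observed.mean()                     # summary statistic

# Rejection ABC: keep prior draws whose simulated summary statistic
# lands within a tolerance of the observed one.
accepted = []
for _ in range(20000):
    p = rng.uniform(0, 1)                   # draw from a uniform prior
    s_sim = simulate(p).mean()
    if abs(s_sim - s_obs) < 0.02:           # tolerance epsilon
        accepted.append(p)

posterior = np.array(accepted)              # approximate posterior sample
print(round(posterior.mean(), 2))           # posterior mean, near the observed rate
```

Swapping the rejection rule for a weighted or sequential scheme, and nesting group-level parameters over individual ones, is what turns this skeleton into the hierarchical ABC variants used in the applications above.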
Table 2. Comparison of Competing Models with Respect to the Accuracy of Quantitative Predictions for Preferential Choices^a,b

Aggregate | Individual | Hierarchical | Refs
DFT > MNL | | | [87]
DFT > MNL | | | [88]
 | DFT = LBA | | [89]
DFT > AA > LCA > LBA | AA > LCA LBA > DFT | | [48]
 | DFT = LBA = DbS | | [23]

^a Abbreviations: AA, associative accumulation model [21]; DbS, decision by sampling model [23]; DFT, decision field theory [38]; LBA, linear ballistic accumulator model [22]; LCA, leaky competing accumulator model [16]; MNL, multinomial logit model [26].
^b Note that the attentional drift-diffusion model [19] and the selective integration model are not included because they have not been quantitatively compared with other models with respect to predictions for the main three choice context effects.
The results of the competition show that sequential sampling models generally make better predictions than the multinomial logit model (a popular random utility model). However, the results of the competition among sequential sampling models indicate that, although decision field theory performs well for aggregate data, other competing models perform better when individual differences are taken into account.
A better way to evaluate the sequential sampling models is to form a new collection by systematically including or excluding the various component processes that are used in different existing models (Box 3). Using this strategy, it is possible to identify the crucial psychological mechanisms important to the decision, rather than any specific ensemble of mechanisms assumed by a particular model. Recently, Turner and colleagues [48] compared a collection formed by including or excluding different types of attention shifting/weighting, loss aversion, lateral inhibition, and noise assumptions. The results of this large ‘switchboard analysis’ of model comparisons indicated that the best-performing models include stochastic integration of attribute comparisons, attention weighting depending on attribute values, lateral inhibition, and non-linear evaluation of attribute comparisons.
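The 'switchboard' strategy amounts to crossing mechanism choices factorially and fitting every resulting variant. A schematic sketch of the enumeration step (the switch names and levels here are illustrative and coarser than the actual factors compared in [48]):

```python
from itertools import product

# Illustrative switchboard: each mechanism is a switch with several
# settings, yielding a factorial family of candidate model variants.
switches = {
    "attention":      ["fixed", "stochastic", "value-driven"],
    "loss_aversion":  [False, True],
    "inhibition":     ["none", "constant", "distance-dependent"],
    "noise":          ["accumulation", "trial-to-trial"],
}

# Cross all settings: every combination is one model variant to fit.
variants = [dict(zip(switches, combo))
            for combo in product(*switches.values())]

print(len(variants))  # 3 * 2 * 3 * 2 = 36 candidate model variants
```

Comparing fit across the whole family, rather than between a handful of named models, is what lets the analysis attribute performance to individual mechanisms instead of to any particular ensemble.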
It seems difficult to distinguish the sequential sampling models on the basis of choice data alone. However, an added advantage of these models is that they also predict decision time and derive implications for eye movements, which can also be used for model comparison. There are numerous applications of value-based sequential sampling models to choice and decision time for the simple case of binary choices [13,18,19,49–52], but fewer applications to multi-alternative (more than two) choices [53]. By adding additional assumptions linking eye movements to attention, predictions can be made regarding the direction of eye movements during the decision process [54] and the influence that this direction has on choice [18,19,41]. Another important way to distinguish between models may be obtained from neuroscientific evidence [55], as reviewed in the following section.
Neuroscientific Research on Mechanisms
Neural Mechanisms of Value Accumulation
Early neuroscientific studies that relied on sequential sampling models to better understand value-based decisions focused on the question of which brain regions mediate the processes of value integration and evidence accumulation [56–59]. Consistent with other work in
neuroeconomics [60,61], these studies agreed on the role of the ventromedial prefrontal cortex (vmPFC) as representing the subjective value of available choice options (middle panel of Figure 1B). However, although some studies further linked the vmPFC to comparison and evidence-accumulation processes [59,62], other studies attributed these cognitive mechanisms to downstream areas such as the dorsolateral prefrontal cortex (dlPFC) and dorsomedial prefrontal cortex (dmPFC) (right panel of Figure 1B) [57,58]. Importantly, these early as well as many more recent studies in decision neuroscience [63–66] assume that people accumulate and compare integrated value signals of each option, which stands in contrast to the idea of stochastic switching across attributes, a mechanism inherent to most theories of multi-attribute decision making (see above). One reason for this discrepancy appears to be the dominance of fMRI as a tool to study the neural basis of value-based decision making in humans. The low temporal resolution of fMRI does not allow us to measure rapid changes in attention and decision processes. In our view, future research will need to rely more heavily on techniques with higher temporal precision such as electroencephalography (EEG) and magnetoencephalography (MEG) to study value-based decisions. Furthermore, the integration of knowledge from animal studies that employ rapid single-unit recording as well as optogenetic interventions will be crucial [10,67,68] even though this research area has mostly focused on perceptual decisions so far. Another reason for the discrepancy between cognitive and neural studies of value-based choice seems to be the frequent use of choice stimuli with ambiguous attributes (e.g., food snacks) in decision neuroscience, which precludes measuring and dissociating attribute-specific computations.
Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
Despite the modest take-up of insights from the cognitive sciences, a few neuroscientific studies have investigated context effects in multi-attribute decisions, and especially the attraction effect [69–73]. Two studies reported increased activation of the anterior insula, either when contrasting target against competitor choices [71] or when contrasting decisions with a decoy option against decisions with a neutral third option [72]. The involvement of the anterior insula may indicate that saliency-driven overweighting of the strongest attribute of the target underlies the attraction effect [21,72,74].
Only a subset of these studies made use of value-based sequential sampling models to connect the neural data with potential cognitive mechanisms that underlie the context effects in multi-attribute, multi-alternative choice tasks [72,73,75]. One of the first was an fMRI study investigating the neural mechanisms of changes in attribute relevance in a three-alternative choice task [75]. In this study, an attribute became more relevant (i.e., had a stronger influence on the decision) if one of the options had an exceedingly high value on this attribute. To explain the (IIA-violating) choice behavior in their task, the authors used a hierarchical accumulator model that bears many similarities to multi-alternative decision field theory [38], although it would not be sufficient to explain all the above-mentioned context effects (i.e., attraction, similarity, compromise). At the neural level, it was found that the ventromedial prefrontal cortex and the intraparietal sulcus encoded a chosen-value signal that was modulated by attribute relevance, while the dorsomedial prefrontal cortex encoded an unmodulated value signal. A study more directly concerned with the attraction effect [72] applied multi-alternative decision field theory [38] to predict choices between risky prospects. The predicted choice probability of the model could be linked to fMRI activation in vmPFC and posterior cingulate cortex (PCC). Interestingly, the choice-related activity in PCC was stronger in those participants who – according to the cognitive model – exaggerated the psychological distance between the target and the decoy in the 2D attribute space.