- •Preface
- •Contents
- •Contributors
- •Modeling Meaning Associated with Documental Entities: Introducing the Brussels Quantum Approach
- •1 Introduction
- •2 The Double-Slit Experiment
- •3 Interrogative Processes
- •4 Modeling the QWeb
- •5 Adding Context
- •6 Conclusion
- •Appendix 1: Interference Plus Context Effects
- •Appendix 2: Meaning Bond
- •References
- •1 Introduction
- •2 Bell Test in the Problem of Cognitive Semantic Information Retrieval
- •2.1 Bell Inequality and Its Interpretation
- •2.2 Bell Test in Semantic Retrieving
- •3 Results
- •References
- •1 Introduction
- •2 Basics of Quantum Probability Theory
- •3 Steps to Build an HSM Model
- •3.1 How to Determine the Compatibility Relations
- •3.2 How to Determine the Dimension
- •3.5 Compute the Choice Probabilities
- •3.6 Estimate Model Parameters, Compare and Test Models
- •4 Computer Programs
- •5 Concluding Comments
- •References
- •Basics of Quantum Theory for Quantum-Like Modeling Information Retrieval
- •1 Introduction
- •3 Quantum Mathematics
- •3.1 Hermitian Operators in Hilbert Space
- •3.2 Pure and Mixed States: Normalized Vectors and Density Operators
- •4 Quantum Mechanics: Postulates
- •5 Compatible and Incompatible Observables
- •5.1 Post-Measurement State From the Projection Postulate
- •6 Interpretations of Quantum Mechanics
- •6.1 Ensemble and Individual Interpretations
- •6.2 Information Interpretations
- •7 Quantum Conditional (Transition) Probability
- •9 Formula of Total Probability with the Interference Term
- •9.1 Växjö (Realist Ensemble Contextual) Interpretation of Quantum Mechanics
- •10 Quantum Logic
- •11 Space of Square Integrable Functions as a State Space
- •12 Operation of Tensor Product
- •14 Qubit
- •15 Entanglement
- •References
- •1 Introduction
- •2 Background
- •2.1 Distributional Hypothesis
- •2.2 A Brief History of Word Embedding
- •3 Applications of Word Embedding
- •3.1 Word-Level Applications
- •3.2 Sentence-Level Application
- •3.3 Sentence-Pair Level Application
- •3.4 Seq2seq Application
- •3.5 Evaluation
- •4 Reconsidering Word Embedding
- •4.1 Limitations
- •4.2 Trends
- •4.4 Towards Dynamic Word Embedding
- •5 Conclusion
- •References
- •1 Introduction
- •2 Motivating Example: Car Dealership
- •3 Modelling Elementary Data Types
- •3.1 Orthogonal Data Types
- •3.2 Non-orthogonal Data Types
- •4 Data Type Construction
- •5 Quantum-Based Data Type Constructors
- •5.1 Tuple Data Type Constructor
- •5.2 Set Data Type Constructor
- •6 Conclusion
- •References
- •Incorporating Weights into a Quantum-Logic-Based Query Language
- •1 Introduction
- •2 A Motivating Example
- •5 Logic-Based Weighting
- •6 Related Work
- •7 Conclusion
- •References
- •Searching for Information with Meet and Join Operators
- •1 Introduction
- •2 Background
- •2.1 Vector Spaces
- •2.2 Sets Versus Vector Spaces
- •2.3 The Boolean Model for IR
- •2.5 The Probabilistic Models
- •3 Meet and Join
- •4 Structures of a Query-by-Theme Language
- •4.1 Features and Terms
- •4.2 Themes
- •4.3 Document Ranking
- •4.4 Meet and Join Operators
- •5 Implementation of a Query-by-Theme Language
- •6 Related Work
- •7 Discussion and Future Work
- •References
- •Index
- •Preface
- •Organization
- •Contents
- •Fundamentals
- •Why Should We Use Quantum Theory?
- •1 Introduction
- •2 On the Human Science/Natural Science Issue
- •3 The Human Roots of Quantum Science
- •4 Qualitative Parallels Between Quantum Theory and the Human Sciences
- •5 Early Quantitative Applications of Quantum Theory to the Human Sciences
- •6 Epilogue
- •References
- •Quantum Cognition
- •1 Introduction
- •2 The Quantum Persuasion Approach
- •3 Experimental Design
- •3.1 Testing for Perspective Incompatibility
- •3.2 Quantum Persuasion
- •3.3 Predictions
- •4 Results
- •4.1 Descriptive Statistics
- •4.2 Data Analysis
- •4.3 Interpretation
- •5 Discussion and Concluding Remarks
- •References
- •1 Introduction
- •2 A Probabilistic Fusion Model of Trust
- •3 Contextuality
- •4 Experiment
- •4.1 Subjects
- •4.2 Design and Materials
- •4.3 Procedure
- •4.4 Results
- •4.5 Discussion
- •5 Summary and Conclusions
- •References
- •Probabilistic Programs for Investigating Contextuality in Human Information Processing
- •1 Introduction
- •2 A Framework for Determining Contextuality in Human Information Processing
- •3 Using Probabilistic Programs to Simulate Bell Scenario Experiments
- •References
- •1 Familiarity and Recollection, Verbatim and Gist
- •2 True Memory, False Memory, over Distributed Memory
- •3 The Hamiltonian Based QEM Model
- •4 Data and Prediction
- •5 Discussion
- •References
- •Decision-Making
- •1 Introduction
- •1.2 Two Stage Gambling Game
- •2 Quantum Probabilities and Waves
- •2.1 Intensity Waves
- •2.2 The Law of Balance and Probability Waves
- •2.3 Probability Waves
- •3 Law of Maximal Uncertainty
- •3.1 Principle of Entropy
- •3.2 Mirror Principle
- •4 Conclusion
- •References
- •1 Introduction
- •4 Quantum-Like Bayesian Networks
- •7.1 Results and Discussion
- •8 Conclusion
- •References
- •Cybernetics and AI
- •1 Introduction
- •2 Modeling of the Vehicle
- •2.1 Introduction to Braitenberg Vehicles
- •2.2 Quantum Approach for BV Decision Making
- •3 Topics in Eigenlogic
- •3.1 The Eigenlogic Operators
- •3.2 Incorporation of Fuzzy Logic
- •4 BV Quantum Robot Simulation Results
- •4.1 Simulation Environment
- •5 Quantum Wheel of Emotions
- •6 Discussion and Conclusion
- •7 Credits and Acknowledgements
- •References
- •1 Introduction
- •2.1 What Is Intelligence?
- •2.2 Human Intelligence and Quantum Cognition
- •2.3 In Search of the General Principles of Intelligence
- •3 Towards a Moral Test
- •4 Compositional Quantum Cognition
- •4.1 Categorical Compositional Model of Meaning
- •4.2 Proof of Concept: Compositional Quantum Cognition
- •5 Implementation of a Moral Test
- •5.2 Step II: A Toy Example, Moral Dilemmas and Context Effects
- •5.4 Step IV. Application for AI
- •6 Discussion and Conclusion
- •Appendix A: Example of a Moral Dilemma
- •References
- •Probability and Beyond
- •1 Introduction
- •2 The Theory of Density Hypercubes
- •2.1 Construction of the Theory
- •2.2 Component Symmetries
- •2.3 Normalisation and Causality
- •3 Decoherence and Hyper-decoherence
- •3.1 Decoherence to Classical Theory
- •4 Higher Order Interference
- •5 Conclusions
- •A Proofs
- •References
- •Information Retrieval
- •1 Introduction
- •2 Related Work
- •3 Quantum Entanglement and Bell Inequality
- •5 Experiment Settings
- •5.1 Dataset
- •5.3 Experimental Procedure
- •6 Results and Discussion
- •7 Conclusion
- •A Appendix
- •References
- •Investigating Bell Inequalities for Multidimensional Relevance Judgments in Information Retrieval
- •1 Introduction
- •2 Quantifying Relevance Dimensions
- •3 Deriving a Bell Inequality for Documents
- •3.1 CHSH Inequality
- •3.2 CHSH Inequality for Documents Using the Trace Method
- •4 Experiment and Results
- •5 Conclusion and Future Work
- •A Appendix
- •References
- •Short Paper
- •An Update on Updating
- •References
- •Author Index
- •The Sure Thing principle, the Disjunction Effect and the Law of Total Probability
- •Material and methods
- •Experimental results.
- •Experiment 1
- •Experiment 2
- •More versus less risk averse participants
- •Theoretical analysis
- •Shared features of the theoretical models
- •The Markov model
- •The quantum-like model
- •Logistic model
- •Theoretical model performance
- •Model comparison for risk attitude partitioning.
- •Discussion
- •Authors' contributions
- •Ethical clearance
- •Funding
- •Acknowledgements
- •References
- •Markov versus quantum dynamic models of belief change during evidence monitoring
- •Results
- •Model comparisons.
- •Discussion
- •Methods
- •Participants.
- •Task.
- •Procedure.
- •Mathematical Models.
- •Acknowledgements
- •New Developments for Value-based Decisions
- •Context Effects in Preferential Choice
- •Comparison of Model Mechanisms
- •Qualitative Empirical Comparisons
- •Quantitative Empirical Comparisons
- •Neural Mechanisms of Value Accumulation
- •Neuroimaging Studies of Context Effects and Attribute-Wise Decision Processes
- •Concluding Remarks
- •Acknowledgments
- •References
- •Comparison of Markov versus quantum dynamical models of human decision making
- •CONFLICT OF INTEREST
- •Endnotes
- •FURTHER READING
- •REFERENCES
suai.ru/our-contacts | quantum machine learning
Quantum-Based Modelling of Database States
Table 2 Atomic conditions on car properties

| Label | Condition |
|-------|-----------|
| YC1 | year of construction = 2016 |
| YC2 | year of construction = 2017 |
| FT1 | fuel tank ≈ 35 |
| FT2 | fuel tank is very large |
| K1 | kilometre ≈ 15.000 |
| K2 | kilometre is very small |
| NC | number of cylinders = 4 |
| CA1 | cylinder arrangement = Row |
| CA2 | cylinder arrangement = Boxer |
When we look at condition FT2 we make the following observation: testing FT2 against the state of a car object cannot adequately return yes or no. Instead, we expect to receive a grade of compliance from the interval [0, 1]. A high value signals strong compliance, and vice versa. Later on, we will show how the statistics of quantum measurements provides us with a means to compute the required gradual values.
First, we discuss how to model elementary data types by using the mathematics behind quantum mechanics. Here we focus on finite-dimensional, real inner product spaces. Later on, we will explain how to construct complex data types and how to map them into the quantum world.
3 Modelling Elementary Data Types
An elementary data type defines a data structure and operations to deal with its values. A data type is elementary if its values cannot be meaningfully decomposed into smaller semantic values. In our example, the property year of construction is elementary. Its domain covers all possible years of car construction. A useful operation could be the computation of the difference between two year values. We define the function dom, which assigns to a data type a set of valid values. That set is often called the domain of the data type.
We distinguish between two types of elementary data types:
– orthogonal data type: The values of this data type are independent of each other. There is no meaningful similarity between them. Two values are either identical or not identical. In our example, the property cylinder arrangement is orthogonal.
– non-orthogonal data type: Besides the test of identity between two values, gradual similarity values may be required between them. In our example, the property fuel tank is non-orthogonal: a required volume of 35 L is more similar to a given value of 40 L than to one of 45 L.
I. Schmitt et al.

The distinction between orthogonal and non-orthogonal often depends on the intended application semantics. In some applications it may be important to demand an exact value of 35 L for a fuel tank, so that every deviation is seen as wrong. In that case, fuel tank would be modelled as an orthogonal data type. For simplicity, in the following we assume that every property is categorized as either orthogonal or non-orthogonal.
In the next subsections we show how to map an elementary data type dt with a finite domain

$$\mathrm{Dom}(dt) := \{V_1, \ldots, V_k\}$$

to a family of ket vectors of an inner product space. The mapping of a value to a ket vector is denoted by the symbol →. The function QDom assigns to a data type the set of ket vectors which appear as possible outcomes of this mapping.
3.1 Orthogonal Data Types
The values of an orthogonal data type dt are bijectively mapped to ket vectors forming an orthonormal basis of an inner product space:

$$\mathrm{QDom}(dt) = \{|V_1\rangle, \ldots, |V_k\rangle\}$$
$$\mathrm{Dom}(dt) \to \mathrm{QDom}(dt)$$
$$\forall i \in [1, k] : V_i \mapsto |V_i\rangle.$$

The corresponding ket vectors are taken to be mutually orthogonal; they span a k-dimensional inner product space.
Let us take a basis ket vector |V_x⟩ for a value of an orthogonal property. If we want to test the value V_i for identity with V_x, we proceed in a way reflecting quantum measurement. We construct the projector P = |V_i⟩⟨V_i| and obtain:

$$\langle V_x|P|V_x\rangle = \langle V_x|V_i\rangle\langle V_i|V_x\rangle = \begin{cases} 1 & \text{if } i = x \\ 0 & \text{otherwise.} \end{cases}$$
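As an illustration, this identity test can be sketched numerically. The following is a minimal example assuming an orthogonal data type with three values mapped to the canonical basis of ℝ³; the concrete value assignment is hypothetical:

```python
import numpy as np

# Hypothetical orthogonal data type with k = 3 values mapped to the
# canonical basis of R^3 (e.g. three cylinder arrangements).
k = 3
kets = np.eye(k)                    # row j is the ket |V_j>

# Projector P = |V_i><V_i| for the value V_i we test against.
V_i = kets[0]
P = np.outer(V_i, V_i)

# <V_x|P|V_x> is 1 exactly when x = i, and 0 otherwise.
same = kets[0] @ P @ kets[0]        # i = x
diff = kets[1] @ P @ kets[1]        # i != x
print(same, diff)                   # 1.0 0.0
```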
For testing whether a value x is contained in a value set S = {s}, we use the projector $P = \sum_{s \in S} |V_s\rangle\langle V_s|$:

$$\langle V_x|P|V_x\rangle = \langle V_x| \left( \sum_{s \in S} |V_s\rangle\langle V_s| \right) |V_x\rangle = \sum_{s \in S} \langle V_x|V_s\rangle\langle V_s|V_x\rangle = \begin{cases} 1 & \text{if } x \in S \\ 0 & \text{otherwise.} \end{cases}$$
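The set-membership test follows the same pattern; a small sketch under the same assumptions (the value set and indices are illustrative):

```python
import numpy as np

# Orthogonal data type with k = 4 values mapped to the canonical basis.
k = 4
kets = np.eye(k)

# Value set S given by the indices of its members, here S = {V_1, V_3}.
S_idx = [0, 2]
P = sum(np.outer(kets[s], kets[s]) for s in S_idx)

in_set = kets[2] @ P @ kets[2]      # x in S     -> 1.0
not_in = kets[3] @ P @ kets[3]      # x not in S -> 0.0
print(in_set, not_in)               # 1.0 0.0
```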
Fig. 2 Value mapping into a real one-qubit system:

$$|V_1\rangle = 0.9\,|0\rangle + 0.435\,|1\rangle$$
$$|V_2\rangle = 0.7\,|0\rangle + 0.714\,|1\rangle$$
$$|V_3\rangle = 0.3\,|0\rangle + 0.954\,|1\rangle$$
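The three kets of Fig. 2 can be checked directly: each is normalized (up to the rounding of the printed coefficients), and squared inner products yield graded similarity values in [0, 1]. A quick numerical check:

```python
import numpy as np

# The three kets from Fig. 2, written in the {|0>, |1>} basis.
V1 = np.array([0.9, 0.435])
V2 = np.array([0.7, 0.714])
V3 = np.array([0.3, 0.954])

# Norms are 1 up to coefficient rounding.
for v in (V1, V2, V3):
    print(round(float(v @ v), 2))        # 1.0 each

# Squared inner products act as graded similarity values.
print(round(float((V1 @ V2) ** 2), 3))   # neighbouring kets: high similarity
print(round(float((V1 @ V3) ** 2), 3))   # distant kets: lower similarity
```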
3.2 Non-orthogonal Data Types
Between the values of a non-orthogonal data type dt a gradual similarity is required. Therefore we choose non-orthogonal ket vectors for the modelling. As target of the mapping we take a real inner product space of dimension n ≤ k. As an extreme case we can map all values to the two-dimensional inner product space of a real one-qubit system; see, for example, the mapping of three values in Fig. 2.
An intuitive question arises: where do we get the right ket vectors from? The starting point is a k × k similarity matrix S = {s_ij} expressing the required gradual similarity values between all value pairs. For the construction of the ket vectors, the similarity matrix must meet the following properties:
– Unit interval: All values of the matrix are elements of [0, 1].
– Diagonal values: All diagonal values refer to the similarity of values to themselves and are therefore 1.
– Symmetry: The matrix is symmetric since similarity is usually required to be symmetric.
– Square-rooted positive semi-definiteness: For reasons explained in the sequel, we require the matrix of square roots $S^{\frac{1}{2}} := \{\sqrt{s_{ij}}\}$ to be positive semi-definite. That is, its eigenvalues must be non-negative.
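These four conditions are easy to verify mechanically. The following sketch checks them for the Table 3 matrix; the helper function name is ours, not part of the chapter:

```python
import numpy as np

def is_valid_similarity_matrix(S, tol=1e-10):
    """Check the four required properties of a similarity matrix."""
    S = np.asarray(S, dtype=float)
    if np.any(S < 0) or np.any(S > 1):          # unit interval
        return False
    if not np.allclose(np.diag(S), 1.0):        # diagonal values
        return False
    if not np.allclose(S, S.T):                 # symmetry
        return False
    eigvals = np.linalg.eigvalsh(np.sqrt(S))    # PSD of S^(1/2)
    return bool(np.all(eigvals >= -tol))

# The matrix of Table 3 satisfies all four conditions.
S = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
print(is_valid_similarity_matrix(S))   # True
```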
The left part of Table 3 shows an example of a similarity matrix.
Based on a similarity matrix S we can construct the ket vectors. First, we replace all matrix elements by their square roots, yielding $S^{\frac{1}{2}}$. The motivation for this is that the projection probability given by quantum measurement corresponds to a squared inner product. Second, we perform a spectral decomposition of $S^{\frac{1}{2}}$ and obtain a matrix V containing orthonormal eigenvectors as rows and a diagonal matrix L with the corresponding non-negative eigenvalues:

$$S^{\frac{1}{2}} = V^\top \cdot L \cdot V.$$
Table 3 Similarity values (left) and their element-wise square roots (right)

| S  | V1  | V2  | V3  |
|----|-----|-----|-----|
| V1 | 1   | 0.5 | 0   |
| V2 | 0.5 | 1   | 0.5 |
| V3 | 0   | 0.5 | 1   |

| S^(1/2) | V1   | V2   | V3   |
|---------|------|------|------|
| V1      | 1    | 1/√2 | 0    |
| V2      | 1/√2 | 1    | 1/√2 |
| V3      | 0    | 1/√2 | 1    |
Since L is a diagonal matrix with non-negative values, we can write it as a product of its square roots, $L = L^{\frac{1}{2}} \cdot L^{\frac{1}{2}}$, and obtain:

$$S^{\frac{1}{2}} = V^\top \cdot L^{\frac{1}{2}} \cdot L^{\frac{1}{2}} \cdot V = \left(V^\top \cdot L^{\frac{1}{2}}\right) \cdot \left(L^{\frac{1}{2}} \cdot V\right) = \left(L^{\frac{1}{2}} \cdot V\right)^\top \cdot \left(L^{\frac{1}{2}} \cdot V\right) = K^\top \cdot K,$$

with $K = \{k_{ij}\} = L^{\frac{1}{2}} \cdot V$. The columns of matrix K correspond to the required ket vectors. However, they are vectors of k dimensions. The number of dimensions is usually higher than necessary. Let us inspect the diagonal matrix L containing the eigenvalues. Very often, some of the eigenvalues are zero. The corresponding dimensions can therefore be removed, and we end up with ket vectors of an inner product space of a dimension n less than k.¹ The mapping is given by:
$$\mathrm{QDom}(dt) = \{|V_1\rangle, \ldots, |V_k\rangle\}$$
$$\mathrm{Dom}(dt) \to \mathrm{QDom}(dt)$$
$$\forall j \in [1, k] : V_j \mapsto |V_j\rangle = \sum_{i=1}^{n} k_{ij}\,|i\rangle \in \mathrm{span}\{|1\rangle, \ldots, |n\rangle\} = \mathbb{R}^n,$$

where |i⟩ denotes the i-th canonical unit vector of ℝⁿ.
We will demonstrate the derivation of ket vectors from a similarity matrix using the example given in Table 3. The similarity matrix is given on the left and its element-wise square root on the right. The Cholesky decomposition yields the square matrix given in Table 4. The matrix can be reduced by its last row since the corresponding eigenvalue is zero. Thus, we obtain three two-dimensional ket vectors from the resulting columns. They are illustrated in Fig. 3.
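The whole construction can be reproduced for the Table 3 matrix with a few lines of linear algebra. The sketch below uses the spectral decomposition; the more efficient Cholesky route mentioned in the text would work equally well:

```python
import numpy as np

# Similarity matrix S from Table 3.
S = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])

# Step 1: element-wise square roots yield S^(1/2).
S_half = np.sqrt(S)

# Step 2: spectral decomposition S^(1/2) = V^T L V,
# with the orthonormal eigenvectors as rows of V.
eigvals, eigvecs = np.linalg.eigh(S_half)     # eigenvalues 0, 1, 2
V = eigvecs.T
L_half = np.diag(np.sqrt(np.clip(eigvals, 0.0, None)))

# Step 3: K = L^(1/2) V; the columns of K are the ket vectors.
K = L_half @ V

# Step 4: rows belonging to zero eigenvalues carry no information
# and can be dropped, leaving three two-dimensional kets.
K = K[eigvals > 1e-10, :]
print(K.shape)                                # (2, 3)

# Sanity check: squared inner products reproduce the similarities.
print(np.allclose((K.T @ K) ** 2, S))         # True
```

The final check confirms the design goal of the construction: for the resulting kets, $\langle V_i|V_j\rangle^2 = s_{ij}$.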
¹ A more efficient method to derive the ket vectors is to apply the Cholesky decomposition to $S^{\frac{1}{2}}$ [5].