Theoretical entities are aspects of the world that cannot be sensed directly but that, nevertheless, are causally relevant. Scientific inquiry has uncovered many such entities, such as black holes and dark matter. We claim that theoretical entities, or hidden variables, are important for the development of concepts within the lifetime of an individual, and we present a novel neural network architecture that solves three problems related to theoretical entities: (1) discovering that they exist, (2) determining their number, and (3) computing their values. Experiments show the utility of the proposed approach using discrete-time dynamical systems, in which some of the state variables are hidden, and sensor data obtained from the camera of a mobile robot, in which the sizes and locations of objects in the visual field are observed but their sizes and locations (distances) in the three-dimensional world are not. Two different regularization terms are explored that improve the network's ability to approximate the values of hidden variables, and the performance and capabilities of the network are compared to those of Hidden Markov Models.
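To make the setting concrete, the sketch below illustrates one plausible reading of the three problems: a one-step predictor is given the current observation plus a small vector of extra latent units that it updates over time; if prediction improves only when those units are present, that is evidence hidden variables exist, a sparsity penalty suggests how many are needed, and the latent activations serve as estimates of their values. This is a minimal illustrative sketch, not the paper's architecture; PyTorch and the class name `LatentStatePredictor` are assumptions introduced here.

```python
# Minimal sketch (hypothetical, not the paper's network): a recurrent
# one-step predictor whose extra latent units stand in for candidate
# hidden variables of a discrete-time dynamical system.
import torch
import torch.nn as nn

class LatentStatePredictor(nn.Module):  # hypothetical name
    def __init__(self, obs_dim: int, n_latent: int, hidden: int = 32):
        super().__init__()
        # GRU cell updates the candidate hidden variables from observations.
        self.cell = nn.GRUCell(obs_dim, n_latent)
        # Readout predicts the next observation from observation + latents.
        self.readout = nn.Sequential(
            nn.Linear(obs_dim + n_latent, hidden), nn.Tanh(),
            nn.Linear(hidden, obs_dim))

    def forward(self, obs_seq):
        # obs_seq: (T, batch, obs_dim); returns predictions of obs_seq[1:].
        h = torch.zeros(obs_seq.size(1), self.cell.hidden_size)
        preds, latents = [], []
        for o_t in obs_seq[:-1]:
            h = self.cell(o_t, h)                 # update latent estimates
            latents.append(h)
            preds.append(self.readout(torch.cat([o_t, h], dim=-1)))
        return torch.stack(preds), torch.stack(latents)

def loss_fn(preds, targets, latents, lam=1e-3):
    # One-step prediction error plus an (assumed) L1 regularizer that
    # encourages few active latent units, hinting at how many hidden
    # variables the data actually require.
    return nn.functional.mse_loss(preds, targets) + lam * latents.abs().mean()
```

Under these assumptions, comparing prediction error with `n_latent = 0` against larger values plays the role of problem (1), the number of latent units that remain active after regularization addresses problem (2), and the latent trajectories themselves are the estimates required by problem (3).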