A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 6, Issue 4, July 2019

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Lei Xu, "An Overview and Perspectives On Bidirectional Intelligence: Lmser Duality, Double IA Harmony, and Causal Computation," IEEE/CAA J. Autom. Sinica, vol. 6, no. 4, pp. 865-893, July 2019. doi: 10.1109/JAS.2019.1911603
Citation: Lei Xu, "An Overview and Perspectives On Bidirectional Intelligence: Lmser Duality, Double IA Harmony, and Causal Computation," IEEE/CAA J. Autom. Sinica, vol. 6, no. 4, pp. 865-893, July 2019. doi: 10.1109/JAS.2019.1911603

An Overview and Perspectives On Bidirectional Intelligence: Lmser Duality, Double IA Harmony, and Causal Computation

doi: 10.1109/JAS.2019.1911603
Funds:  This work was supported by the Zhi-Yuan Chair Professorship Start-up Grant (WF220103010) from Shanghai Jiao Tong University
  • Advances in bidirectional intelligence are overviewed along three threads, with extensions and new perspectives. The first thread concerns bidirectional learning architecture, exploring five dualities that enable the six cognitive functions of Lmser and provide new perspectives, based on which a number of extensions, particularly flexible Lmser, are proposed. Interestingly, one or two of these dualities actually play an important role in recent models such as U-net, ResNet, and DenseNet. The second thread concerns bidirectional learning principles, unified by best yIng-yAng (IA) harmony in the BYY system. After gaining insights into deep bidirectional learning from a bird's-eye view of existing typical learning principles that work along one or both of the inward and outward directions, maximum likelihood, the variational principle, and several other learning principles are summarized as exemplars of BYY learning, with new perspectives on advanced topics. The third thread proceeds further to deep bidirectional intelligence, driven by long-term dynamics (LTD) for parameter learning and short-term dynamics (STD) for image thinking and rational thinking in harmony. Image thinking deals with information flows of continuously valued arrays, especially image sequences, as if thinking were displayed in the real world, exemplified by the flow from inward encoding/cognition to outward reconstruction/transformation performed in Lmser learning and BYY learning. In contrast, rational thinking handles symbolic strings or discretely valued vectors, performing uncertainty reasoning and problem solving. In particular, a general thesis is proposed for bidirectional intelligence, featured by the BYY intelligence potential theory (BYY-IPT) and nine essential dualities in architecture, fundamentals, and implementation, respectively. Then, problems of combinatorial solving and uncertainty reasoning are investigated from this BYY-IPT perspective. First, variants and extensions are suggested for AlphaGoZero-like searching tasks, such as the traveling salesman problem (TSP) and attributed graph matching (AGM), which are turned into Go-like problems with the help of a feature enrichment technique. Second, reasoning activities are summarized under the guidance of BYY-IPT from the aspects of constraint satisfaction, uncertainty propagation, and path or tree searching. Particularly, causal potential theory is proposed for discovering causal direction, with two roads developed for its implementation.
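    To make the weight duality of the first thread concrete, below is a minimal sketch (Python/NumPy, not the paper's implementation) of a Lmser-style bidirectional net: the outward reconstruction pass reuses the transposes of the inward encoding weights, so one set of parameters serves both recognition and reconstruction. The layer widths, sigmoid nonlinearity, and absence of a training loop are illustrative assumptions.

        import numpy as np

        # Sketch of the Lmser duality in connection weights: the outward
        # (reconstruction) pass reuses the transposed inward (encoding) weights,
        # so recognition and reconstruction share one set of parameters.
        # Layer widths and the sigmoid nonlinearity are illustrative assumptions.
        rng = np.random.default_rng(0)
        sizes = [64, 32, 16]  # hypothetical layer widths
        W = [rng.normal(0.0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

        def sigmoid(u):
            return 1.0 / (1.0 + np.exp(-u))

        def inward(x):
            # Bottom-up encoding: y_l = s(W_l y_{l-1})
            y = x
            for W_l in W:
                y = sigmoid(W_l @ y)
            return y

        def outward(code):
            # Top-down reconstruction reusing W_l^T (the weight duality)
            y = code
            for W_l in reversed(W):
                y = sigmoid(W_l.T @ y)
            return y

        x = rng.random(sizes[0])
        x_hat = outward(inward(x))  # encode, then reconstruct
        print("reconstruction MSE:", float(np.mean((x - x_hat) ** 2)))

    Training would then adjust W to reduce this reconstruction error, in the spirit of the least mean square error reconstruction principle from which Lmser takes its name; the gradient details are omitted in this sketch.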

     
