Unranked tensor decomposition using metric learning

Machine Learning


  • Kolda, T.G. & Bader, B.W. Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009).

  • Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 31(3), 279–311 (1966).

  • Kruskal, J.B. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 18(2), 95–138 (1977).

  • Bengio, Y., Courville, A. & Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013).

  • Tancik, M. et al. Fourier features let networks learn high frequency functions in low dimensional domains. Adv. Neural Inf. Process. Syst. 33, 7537–7547 (2020).

  • Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 6(1–4), 164–189 (1927).

  • Acar, E., Dunlavy, D.M., Kolda, T.G. & Mørup, M. Scalable tensor factorizations for incomplete data. Chemom. Intell. Lab. Syst. 106(1), 41–56 (2011).

  • Oseledets, I.V. Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011).

  • Cichocki, A., Zdunek, R. & Amari, S. Nonnegative matrix and tensor factorization [lecture notes]. IEEE Signal Process. Mag. 25(1), 142–145 (2008).

  • Xue, J., Zhao, Y., Wu, T. & Chan, J. Tensor convolution-like low-rank dictionary for high-dimensional image representation. IEEE Trans. Circuits Syst. Video Technol. (2024).

  • Wang, A. et al. Transformed low-rank parameterization can help robust generalization for tensor neural networks. Adv. Neural Inf. Process. Syst. 36, 3032–3082 (2023).

  • Wang, A., Qiu, Y., Huang, H., Jin, Z., Zhou, G. & Zhao, Q. Towards a geometric understanding of tensor learning via t-products. In 39th Annual Conference on Neural Information Processing Systems (2025).

  • Bagherian, M., Chehade, S., Whitney, B. & Passian, A. Classical and quantum compression for edge computing: Dimensionality reduction of ubiquitous data. Computing 105(7), 1419–1465 (2023).

  • Tenenbaum, J.B., de Silva, V. & Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 290(5500), 2319–2323 (2000).

  • Roweis, S.T. & Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000).

  • Belkin, M. & Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15(6), 1373–1396 (2003).

  • van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9(11) (2008).

  • McInnes, L., Healy, J. & Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426 (2018).

  • Kulis, B. Metric learning: A survey. Found. Trends Mach. Learn. 5(4), 287–364 (2013).

  • Hadsell, R., Chopra, S. & LeCun, Y. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 1735–1742 (2006).

  • Schroff, F., Kalenichenko, D. & Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 815–823 (2015).

  • Wang, H., Wang, Y., Zhou, Z., Ji, X., Gong, D., Zhou, J., Li, Z. & Liu, W. CosFace: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5265–5274 (2018).

  • Faghri, F., Fleet, D.J., Kiros, J.R. & Fidler, S. VSE++: Improving visual-semantic embeddings with hard negatives. arXiv preprint arXiv:1707.05612 (2017).

  • Hermans, A., Beyer, L. & Leibe, B. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737 (2017).

  • Wang, T. & Isola, P. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, 9929–9939 (2020).

  • Kulkarni, N., Gupta, A. & Tulsiani, S. Canonical surface mapping via geometric cycle consistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2202–2211 (2019).

  • Hubble, E.P. Extragalactic nebulae. Astrophys. J. 64, 321–369 (1926).

  • Dieleman, S., Willett, K.W. & Dambre, J. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Mon. Not. R. Astron. Soc. 450(2), 1441–1459 (2015).

  • Walmsley, M. et al. Galaxy Zoo DECaLS: Detailed visual morphology measurements from volunteers and deep learning for 314,000 galaxies. Mon. Not. R. Astron. Soc. 509(3), 3966–3988 (2022).

  • Isayev, O. et al. Universal fragment descriptors for predicting properties of inorganic crystals. Nat. Commun. 8(1), 15679 (2017).

  • Xie, T. & Grossman, J.C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 120(14), 145301 (2018).

  • Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 1597–1607 (2020).

  • van den Oord, A., Li, Y. & Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748 (2018).

  • Bagherian, M. Tensor denoising with dual Schatten norms. Optim. Lett. 18(5), 1285–1301 (2024).

  • Bagherian, M., Kim, R.B., Jiang, C., Sartor, M.A., Derksen, H. & Najarian, K. Coupled matrix–matrix and coupled tensor–matrix completion methods for predicting drug–target interactions. Brief. Bioinform. 22(2), 2161–2171 (2021).

  • Bagherian, M., Tarzanagh, D.A., Dinov, I. & Welch, J.D. A bilevel optimization method for tensor recovery under metric learning constraints. arXiv preprint arXiv:2209.00545 (2022).

  • Hillar, C.J. & Lim, L.-H. Most tensor problems are NP-hard. J. ACM 60(6), 1–39 (2013).

  • Wu, C.-Y., Manmatha, R., Smola, A.J. & Krähenbühl, P. Sampling matters in deep embedding learning. In Proceedings of the IEEE International Conference on Computer Vision, 2840–2848 (2017).

  • Mo, S., Sun, Z. & Li, C. Rethinking prototypical contrastive learning through alignment, uniformity and correlation. arXiv preprint (2022).

  • Hornik, K., Stinchcombe, M. & White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 2(5), 359–366 (1989).

  • Mohri, M., Rostamizadeh, A. & Talwalkar, A. Foundations of Machine Learning (MIT Press, 2018).

  • Shalev-Shwartz, S. & Ben-David, S. Understanding Machine Learning: From Theory to Algorithms (Cambridge University Press, 2014).

  • Weinberger, K.Q., Blitzer, J. & Saul, L. Distance metric learning for large margin nearest neighbor classification. Adv. Neural Inf. Process. Syst. 18 (2005).

  • Bottou, L., Curtis, F.E. & Nocedal, J. Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223–311 (2018).

  • Lee, J.D., Simchowitz, M., Jordan, M.I. & Recht, B. Gradient descent only converges to minimizers. In Conference on Learning Theory, 1246–1257 (2016).

  • Hein, M., Audibert, J.-Y. & von Luxburg, U. Graph Laplacians and their convergence on random neighborhood graphs. J. Mach. Learn. Res. (2007).

  • Robbins, H. & Monro, S. A stochastic approximation method. Ann. Math. Stat. 22(3), 400–407 (1951).

  • Pearson, K. On lines and planes of closest fit to systems of points in space. Lond. Edinb. Dublin Philos. Mag. J. Sci. 2(11), 559–572 (1901).

  • Kullback, S. & Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951).

  • Wang, X., Han, X., Huang, W., Dong, D. & Scott, M.R. Multi-similarity loss with general pair weighting for deep metric learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5022–5030 (2019).

  • Kingma, D.P. & Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013).

  • Xie, J., Girshick, R. & Farhadi, A. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning, 478–487 (2016).

  • Huang, G.B., Mattar, M., Berg, T. & Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition (2008).

  • Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011). Dataset: Olivetti Faces, available at https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_olivetti_faces.html.

  • Di Martino, A. et al. Enhancing studies of the connectome in autism using the Autism Brain Imaging Data Exchange II. Sci. Data 4, 170010 (2017).

  • Dektor, A., Rodgers, A. & Venturi, D. Rank-adaptive tensor methods for high-dimensional nonlinear PDEs. J. Sci. Comput. 88(2), 36 (2021).

  • Sedighin, F., Cichocki, A. & Phan, A.-H. Adaptive rank selection for tensor ring decomposition. IEEE J. Sel. Top. Signal Process. 15(3), 454–463 (2021).

  • Vaswani, A. et al. Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017).


