Research and Reports on Mathematics


Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Mapping

The model comprises M neural modules, each determining the scope of one of M data supports as well as its neighboring relations. Neighboring relations among data supports are associated with the edges of a graph; derived from the training observations, the graph configuration describes how the data supports neighbor one another. All neural modules are further extended for nonlinear dimensionality reduction (NDR) mapping. The extension simply equips every neural module with a posterior weight that represents the image of the centroid of the corresponding data support. The image of each centroid and the images of its neighboring centroids, following the property of local embedding, induce nonlinear constraints for optimizing all posterior weights.
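As a rough illustration of the local-embedding idea described above, the sketch below (all names, parameters, and the toy data are hypothetical, not taken from the paper) reconstructs a new input as a weighted combination of its nearest centroids, then applies the same weights to the centroids' images in the output space:

```python
import numpy as np

def local_embedding_map(x, centroids, images, k=3, reg=1e-3):
    """Map a new observation x to the low-dimensional space by
    reconstructing it from its k nearest centroids (weights summing
    to one) and applying the same weights to the centroid images.
    A sketch only; 'reg' stabilises the local Gram matrix."""
    dists = np.linalg.norm(centroids - x, axis=1)
    nn = np.argsort(dists)[:k]                 # k nearest centroids
    Z = centroids[nn] - x                      # shifted neighbours
    G = Z @ Z.T                                # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)         # regularise for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                               # weights sum to one
    return w @ images[nn]                      # same weights on the images

# Toy example: four centroids in 2D with hypothetical 1D images
centroids = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
images = np.array([[0.], [1.], [0.5], [1.5]])
y = local_embedding_map(np.array([0.1, 0.1]), centroids, images)
```

An input lying near a centroid receives a weight vector concentrated on that centroid, so its image lands near the centroid's image, which is the behavior the local-embedding constraints are meant to enforce.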

This work explores neural approximation for nonlinear dimensionality mapping, supported by internal representations of graph-organized regular data supports. The given training observations are assumed to be sampled from a high-dimensional space that contains an embedded low-dimensional manifold. An approximating function with adaptable built-in parameters is optimized subject to the training observations by the proposed learning process, and is verified by transforming novel testing observations to images within the low-dimensional output space. The optimized internal representations sketch graph-organized supports of distributed data clusters and their representative images within the output space. By this design, the approximating function can operate at testing time without retaining the original, massive set of training observations.

The neural approximating model contains multiple modules, each of which activates a non-zero output for mapping only in response to an input inside its corresponding local support. Graph-organized data supports carry lateral interconnections that represent neighboring relations, allow inference of the minimal path between the centroids of any two data supports, and yield distance constraints for mapping all centroids to images within the output space. Following the distance-preserving principle, this work proposes Levenberg-Marquardt learning for optimizing the images of centroids within the output space subject to the given distance constraints, and further develops local embedding constraints for mapping during the execution phase. Numerical simulations show that the proposed neural approximation is effective and reliable for nonlinear dimensionality reduction.
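The distance-preserving step above can be sketched as follows. This is a minimal illustration under assumed details (random toy centroids, a k-nearest-neighbour graph augmented with a spanning tree for connectivity, and SciPy's generic Levenberg-Marquardt solver standing in for the paper's learning procedure): minimal-path distances between centroids are computed on the graph, and the low-dimensional centroid images are optimized so that output-space distances match them.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path, minimum_spanning_tree
from scipy.optimize import least_squares

# Hypothetical centroids of M data supports in a D-dimensional input space
rng = np.random.default_rng(0)
M, D, d = 10, 5, 2
centroids = rng.normal(size=(M, D))

# k-nearest-neighbour graph over centroids; edge weights are Euclidean
# distances. Non-edges are marked with infinity.
k = 3
pair = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
adj = np.full((M, M), np.inf)
for i in range(M):
    nn = np.argsort(pair[i])[1:k + 1]
    adj[i, nn] = pair[i, nn]
    adj[nn, i] = pair[i, nn]
np.fill_diagonal(adj, 0.0)

# Merge in the minimum spanning tree so the graph is connected and
# every minimal-path distance is finite.
mst = minimum_spanning_tree(pair).toarray()
mst = np.where(mst > 0, mst, np.inf)
adj = np.minimum(adj, np.minimum(mst, mst.T))

# Minimal-path (geodesic) distances between all centroid pairs
geo = shortest_path(adj, method="D", directed=False)

# Residuals: mismatch between output-space distances and geodesic targets
iu = np.triu_indices(M, k=1)
def residuals(y_flat):
    y = y_flat.reshape(M, d)
    dist = np.linalg.norm(y[iu[0]] - y[iu[1]], axis=1)
    return dist - geo[iu]

# Levenberg-Marquardt optimisation of the centroid images
sol = least_squares(residuals, rng.normal(size=M * d), method="lm")
images = sol.x.reshape(M, d)
```

Requiring output-space distances to match minimal-path distances rather than raw Euclidean distances is what lets the mapping unfold a curved manifold instead of merely compressing it.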
