Research and Reports on Mathematics


Editorial, Res Rep Math Vol: 5 Issue: 4

Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Mapping

Syed Sohel*

Department of Mathematics, University of Saudi Arabia, Saudi Arabia

*Corresponding Author:
Syed Sohel
Department of Mathematics, University of Saudi Arabia, Saudi Arabia
Tel: 8519974606
E-mail: syedsohel@gmail.com

Received Date: April 13, 2021; Accepted Date: April 15, 2021; Published Date: April 17, 2021

Citation: Sohel S (2021) Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Mapping. Res Rep Math 5:4

Copyright: © All articles published in Research and Reports on Mathematics are the property of SciTechnol and are protected by copyright laws. Copyright © 2021, SciTechnol, All Rights Reserved.

Abstract

This work explores neural approximation for nonlinear dimensionality reduction (NDR) mapping based on internal representations of graph-organized regular data supports. Given training observations are assumed to be sampled from a high-dimensional space with an embedded low-dimensional manifold. An approximating function consisting of adaptable built-in parameters is optimized subject to the given training observations by the proposed learning process, and verified for transforming novel testing observations to images in the low-dimensional output space. Optimized internal representations sketch graph-organized supports of distributed data clusters and their representative images in the output space. On this basis, the approximating function is able to operate for testing without reserving the original massive training observations. The neural approximating model contains multiple modules. Each activates a non-zero output for mapping in response to an input inside its correspondent local support. Graph-organized data supports have lateral interconnections for representing neighboring relations, inferring the minimal path between the centroids of any two data supports, and proposing distance constraints for mapping all centroids to images in the output space. Following the distance-preserving principle, this work proposes Levenberg-Marquardt learning for optimizing the images of centroids in the output space subject to the given distance constraints, and further develops local embedding constraints for mapping during the execution phase. Numerical simulations show that the proposed neural approximation is effective and reliable for NDR mapping.

There are M neural modules, each determining the scope of one of M data supports as well as its neighboring relations. Neighboring relations among data supports are associated with the edges of a graph. Derived from training observations, the graph configuration describes neighboring relations among data supports. All neural modules are further extended for NDR mapping. The extension simply equips every neural module with a posterior weight that represents the image of the centroid of a correspondent data support. The image of each centroid and the images of its neighboring centroids, following the property of locally nonlinear embedding, induce nonlinear constraints for optimizing all posterior weights.


Keywords

Unsupervised Learning; Distance Preserving Mapping; Nonlinear Dimensionality Reduction Mapping; Data Visualization; Topology Preservation; Data Support Approximation; Nonlinear System Solving; Levenberg-Marquardt Learning; Clustering Analysis; Principal Component Analysis; Locally Nonlinear Embedding

Introduction

Nonlinear dimensionality reduction (NDR) mapping addresses transforming high-dimensional observations to an embedded lower-dimensional manifold. NDR mapping has attracted much attention for analyzing large volumes of high-dimensional observations, such as genomic data, images, and video and audio signals. The goal is to preserve and visualize neighborhood relations of observations by displaying the transformed images in the low-dimensional output space. Principal component analysis (PCA) extracts orthogonal eigenvectors, termed principal components, which serve as internal representations of given training observations for linearly transforming high-dimensional observations to an output space. Linear projections of observations onto selected principal components can be determined without reserving the original training observations. Linear transformation by selected principal components can operate as an online process that transforms one observation at a time, but it has been shown infeasible for topology preservation and cannot be directly applied for NDR mapping.
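The following minimal sketch illustrates this point about PCA: once the mean and the selected principal components are stored as internal representations, a novel observation can be projected online, one at a time, without access to the training set. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

# Minimal PCA sketch: extract principal components from training data X
# (rows are observations), then project a novel observation without
# reserving the training set.
def fit_pca(X, n_components):
    mean = X.mean(axis=0)
    # Principal components via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]            # internal representations

def project(x, mean, components):
    return components @ (x - mean)            # image in the output space

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))          # toy high-dimensional sample
mean, comps = fit_pca(X_train, n_components=2)
print(project(rng.normal(size=10), mean, comps))  # one observation at a time
```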

Locally linear embedding (LLE) [1] has been presented for NDR mapping and data visualization. LLE is a batch process that simultaneously determines the images of all given observations in a training set X. Applying the k-nearest-neighbor method, it recruits the k closest observations to form the neighborhood Nk(x) of each observation x. It assumes a locally linear relation within Nk(x), such that observation x is a linear combination of the observations in Nk(x). For topology preservation, the images of observations in Nk(x) are considered neighbors of the image rx of x. After optimizing the coefficients cx of the correspondent linear relation that expresses each x, LLE further poses a linear relation among the images of the k observations in Nk(x). Based on the assumption that rx is a linear combination of the images of observations in Nk(x) with the same coefficients cx, simultaneously solving the linear relations that express all rx attains the images of all observations.
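A sketch of LLE's first step, under the standard formulation: the coefficients cx reconstructing one observation from its neighbors are obtained from a regularized local Gram system with a sum-to-one constraint. The full batch method then solves a global eigenproblem over all weights, which is omitted here.

```python
import numpy as np

# For one observation x, optimize coefficients c_x that reconstruct x from
# its k nearest neighbors, with the coefficients summing to one.
def lle_weights(x, neighbors, reg=1e-3):
    Z = neighbors - x                          # shift neighborhood to origin
    G = Z @ Z.T                                # local Gram matrix (k x k)
    G += reg * np.trace(G) * np.eye(len(G))    # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                         # enforce sum-to-one constraint

rng = np.random.default_rng(1)
x = rng.normal(size=5)
Nk = rng.normal(size=(4, 5))                   # k = 4 neighbors of x
c = lle_weights(x, Nk)
print(c, np.allclose(c.sum(), 1.0))
```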

In LLE, expressing rx by a linear relation makes use of the neighborhood and the coefficients of the linear relation that expresses x. Inferring the image of any novel observation during the execution phase hence needs neighbors defined over all training observations. LLE cannot operate with only internal representations extracted from X for image inference during the testing phase. This limits the portability and computational efficiency of LLE, owing to the required access to all training observations. To overcome this problem, this work extends LLE to locally nonlinear embedding (LNE) for NDR mapping. LNE adopts nonlinear relations for inferring images of novel observations during the execution phase. LNE has a broader scope than LLE and can operate with only the extracted internal representations for neural approximation of NDR mapping.
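A hedged sketch of this execution-phase idea: only stored centroids and their precomputed images are needed, not the training set. The image of a novel observation is chosen so that distances to neighboring centroid images match distances to the centroids themselves; this distance-based objective is a stand-in for the paper's locally nonlinear embedding constraints, whose exact form is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Infer the image r of a novel x from the k nearest centroids and their
# stored images, by solving nonlinear distance constraints (illustrative).
def infer_image(x, centroids, centroid_images, k=3):
    d = np.linalg.norm(centroids - x, axis=1)
    idx = np.argsort(d)[:k]                    # k nearest data supports
    def residuals(r):
        return np.linalg.norm(centroid_images[idx] - r, axis=1) - d[idx]
    r0 = centroid_images[idx].mean(axis=0)     # start at the neighbor mean
    return least_squares(residuals, r0).x

rng = np.random.default_rng(2)
C = rng.normal(size=(10, 5))                   # centroids in the input space
R = rng.normal(size=(10, 2))                   # their precomputed images
print(infer_image(rng.normal(size=5), C, R))
```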

Similar to LLE, Isomap and Laplacian Eigenmaps maintain Nk(x) for each x. Isomap applies the k-nearest-neighbor method to calculate geodesic distances and applies classical multi-dimensional scaling, which is equivalent to PCA, to infer the images of all observations. Laplacian Eigenmaps sketches the k-nearest-neighbor graph based on Nk(x) for all x and solves the generalized eigenvalue problem to infer the images of all observations. Both Isomap and Laplacian Eigenmaps require reserving all training observations for inferring images of novel observations during the execution phase.
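The batch character of Isomap can be seen in a short scikit-learn example on synthetic data: geodesic distances over the k-nearest-neighbor graph followed by classical MDS produce images of all training observations at once, and embedding novel points still relies on the stored training neighborhoods.

```python
import numpy as np
from sklearn.manifold import Isomap

# Isomap as a batch method: neighborhood graph + geodesic distances + MDS.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))                 # toy training set
Y = Isomap(n_neighbors=8, n_components=2).fit_transform(X)
print(Y.shape)  # (100, 2): images of all training observations at once
```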

Self-organizing maps (SOM) as well as elastic nets (EN) use grid-organized receptive fields as adaptable internal representations for inferring images of observations. Unsupervised learning is a process that extracts internal representations subject to training observations. Equipped with well-extracted receptive fields, SOM emulates a cortex-like map and attains a two-dimensional embedding for topology-preserving mapping. It activates one and only one node in response to an observation, following the winner-take-all principle. The active neural node must have the receptive field closest to the given observation, and its geometric location on the grid gives the inferred image in the low-dimensional embedding. SOM infers images of novel observations during the execution phase without reserving training observations. Since unsupervised learning of SOM relies on updating operations that directly adopt Euclidean distances among observations, it needs further improvement for NDR mapping.
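A minimal SOM sketch of the winner-take-all principle described above: the node whose receptive field is closest to the observation wins, a Gaussian neighborhood function on the grid pulls nearby fields toward the input, and the winner's grid coordinates serve as the inferred two-dimensional image. Grid size and learning parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
grid = np.array([(i, j) for i in range(8) for j in range(8)])   # 8x8 map
W = rng.normal(size=(64, 10))                  # receptive fields

def train_step(x, W, lr=0.1, sigma=1.5):
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # winner-take-all
    # Gaussian neighborhood on the grid pulls nearby fields toward x
    h = np.exp(-np.sum((grid - grid[winner])**2, axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)
    return W

for x in rng.normal(size=(500, 10)):           # unsupervised training
    W = train_step(x, W)
x_new = rng.normal(size=10)
print(grid[np.argmin(np.linalg.norm(W - x_new, axis=1))])  # inferred image
```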

The NDR mapping proposed in this work ensures the properties of extracting essential internal representations and recovering the low-dimensional embedded manifold. Based on the extracted internal representations and locally nonlinear embedding, the NDR mapping infers images of novel observations during the testing phase, requiring no reservation of training observations.

This work proposes graph-organized data supports to scope training observations. The union of graph-organized data supports well sketches the underlying global density support of the raw observations. Internal representations of the proposed NDR mapping contain a set of receptive fields and built-in parameters of adalines (adaptive linear elements), where the receptive fields represent centroids of distributed data supports. The scope of each local data support is a K-dimensional regular box, where K is less than or equal to the dimension of the input space. A neural module consisting of K pairs of adalines is employed to determine the membership of observations in a correspondent data support. An adaline neural module is an indicator of the scope of a correspondent data support.
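A hedged sketch of such an indicator, under one plausible reading of the K pairs of adalines: each of the K box axes gets a pair of linear units testing the lower and upper face of the box, and the module fires only when all 2K linear inequalities hold. The axis directions, centroid, and half-widths below are illustrative, not the paper's trained parameters.

```python
import numpy as np

# Adaline-based indicator for a K-dimensional box-shaped data support.
def make_box_indicator(centroid, directions, half_widths):
    def member(x):
        # Projection of (x - centroid) onto each of the K box axes
        t = directions @ (x - centroid)
        # Pair of adaline conditions per axis: -w <= t and t <= w
        return bool(np.all(t >= -half_widths) and np.all(t <= half_widths))
    return member

c = np.zeros(5)
D = np.eye(3, 5)                      # K = 3 axes in a 5-dimensional space
module = make_box_indicator(c, D, half_widths=np.ones(3))
print(module(np.array([0.5, -0.2, 0.9, 0.1, -0.3])))  # True: inside the box
print(module(np.array([2.0, 0.0, 0.0, 0.0, 0.0])))    # False: outside
```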

Internal representations extracted from training observations include features that well characterize the membership to each data support. Based on the extracted internal representations of the M neural modules as well as the posterior weights, the NDR mapping following locally nonlinear embedding can infer images of novel observations during the testing phase without reserving the original training observations. This property greatly increases the portability of the proposed NDR mapping. The number of adaptable built-in parameters for the proposed NDR mapping depends on the number of neural modules and the dimension of each data support. Massive training observations are no longer required during execution of the proposed NDR mapping for testing.

The challenge is to optimize the adaptable built-in parameters and posterior weights of joint adaline neural modules for the proposed NDR mapping. The union of graph-organized data supports sketches a bounded domain of the proposed neural approximation for NDR mapping. The NDR mapping explored in this work transforms high-dimensional observations to images in the output space that recover the manifold embedded in the input space. It is realized by adaline neural modules extended with posterior weights. The training process mainly contains two stages, respectively constructing graph-organized cluster supports and optimizing posterior weights by the Levenberg-Marquardt algorithm. The first learning stage aims to optimize the centroids, built-in parameters of the adaline modules, and graph interconnections for representing graph-organized data supports. The second stage determines posterior weights by solving a system that characterizes the distance-preserving mapping of centroids to images in the output space. The proposed neural approximation realizes NDR mapping without reserving training observations, depending only on adaptable feature representations, or built-in parameters. Equipped with well-trained built-in parameters, the proposed NDR mapping can determine the image of a novel testing observation during the execution phase by resolving constraints of locally nonlinear embedding.
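A hedged sketch of the second learning stage on synthetic data: the images of M centroids in the output space are optimized so that output distances match given target distances (standing in for the minimal-path lengths over the graph of data supports), using the Levenberg-Marquardt solver. The distance matrix and dimensions here are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

M, out_dim = 6, 2
rng = np.random.default_rng(5)
P = rng.normal(size=(M, 3))                          # stand-in centroids
D = np.linalg.norm(P[:, None] - P[None, :], axis=2)  # target distances
pairs = [(i, j) for i in range(M) for j in range(i + 1, M)]

# Residuals of the distance-preserving system over all centroid pairs
def residuals(flat):
    R = flat.reshape(M, out_dim)
    return np.array([np.linalg.norm(R[i] - R[j]) - D[i, j] for i, j in pairs])

sol = least_squares(residuals, rng.normal(size=M * out_dim), method='lm')
R = sol.x.reshape(M, out_dim)                        # images of the centroids
print(np.abs(residuals(sol.x)).max())                # residual distance error
```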
