PROJECT TITLE : Learnable Weighting of Intra-Attribute Distances for Categorical Data Clustering with Nominal and Ordinal Attributes

ABSTRACT: The distance metric, which quantifies how dissimilar two objects are, is generally a decisive factor in the success of categorical data clustering. However, most existing clustering methods treat the two categorical subtypes, nominal and ordinal attributes, identically when computing dissimilarity, ignoring the relative order information carried by ordinal values. Moreover, there can be interdependence between nominal and ordinal attributes, which should also be exploited when measuring dissimilarity. This paper therefore investigates the intrinsic difference between nominal and ordinal attribute values, as well as the connection between them, from a graph perspective. Based on this analysis, we propose a novel distance metric that measures the intra-attribute distances of nominal and ordinal attributes in a unified way while preserving the order relationship among ordinal values. We then propose a new clustering algorithm that combines the learning of intra-attribute distance weights and the partitioning of data objects into a single learning paradigm, rather than treating them as two separate steps, thereby avoiding suboptimal solutions. Experiments demonstrate the effectiveness of the proposed algorithm compared with its existing counterparts.
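To make the nominal/ordinal distinction in the abstract concrete, here is a minimal Python sketch. It is an illustrative assumption, not the paper's actual metric: nominal values are compared with a simple 0/1 mismatch, ordinal values with a rank difference normalized by the number of levels (which preserves their order), and the object-level dissimilarity is a weighted sum whose per-attribute weights the paper's algorithm would learn jointly with the partition (here they are simply given). All function names and data are hypothetical.

```python
def intra_attribute_distance(a, b, categories, ordinal=False):
    """Distance between two values of one categorical attribute.

    Nominal: 0 if equal, 1 otherwise (no order exists).
    Ordinal: absolute rank difference, normalized to [0, 1], so that
    e.g. "S" vs "L" is farther apart than "S" vs "M".
    """
    if not ordinal:
        return 0.0 if a == b else 1.0
    rank = {v: i for i, v in enumerate(categories)}
    return abs(rank[a] - rank[b]) / (len(categories) - 1)


def weighted_dissimilarity(x, y, attrs, weights):
    """Object-level dissimilarity: weighted sum of per-attribute
    intra-attribute distances. The weights stand in for the learnable
    intra-attribute distance weights described in the abstract."""
    return sum(
        w * intra_attribute_distance(xi, yi, cats, is_ord)
        for w, xi, yi, (cats, is_ord) in zip(weights, x, y, attrs)
    )


# Example: one nominal attribute (color) and one ordinal attribute (size).
attrs = [(["red", "green", "blue"], False), (["S", "M", "L", "XL"], True)]
weights = [0.5, 0.5]
d = weighted_dissimilarity(("red", "S"), ("blue", "L"), attrs, weights)
# color mismatch contributes 0.5 * 1, size "S" vs "L" contributes 0.5 * 2/3
```

Treating the two subtypes uniformly, as most existing methods do, would collapse the ordinal branch into the 0/1 mismatch and discard the order information the paper sets out to preserve.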