PROJECT TITLE : sCOs: Semi-Supervised Co-Selection by a Similarity Preserving Approach ABSTRACT: This paper focuses on the co-selection of instances and features in a semi-supervised learning scenario. Because the data contain both labeled and unlabeled examples drawn from the same population, co-selection becomes more challenging in this setting. To perform semi-supervised co-selection, we propose a unified framework, called sCOs, that efficiently incorporates both the labeled and unlabeled parts of the data into the co-selection process. The framework rests on the simultaneous use of a sparse regularization term and a similarity-preserving approach: it analyzes the usefulness of both features and instances so that the most relevant ones can be selected at the same time. We present two efficient algorithms that handle both convex and nonconvex functions. To the best of our knowledge, this is the first study to employ nonconvex penalties for semi-supervised co-selection tasks. Experimental results on well-known benchmark datasets validate sCOs and compare it against representative state-of-the-art methods.
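To make the idea of co-selection via sparse regularization concrete, the following is a minimal, hypothetical sketch (not the authors' actual sCOs algorithm): it learns a row-sparse weight matrix W by proximal gradient descent on a least-squares loss plus an l2,1 penalty, then scores features by the row norms of W and instances by their fitting residuals. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of the l2,1 norm: row-wise soft thresholding.
    Rows with small l2 norm are driven exactly to zero (row sparsity)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W

def co_select(X, Y, lam=0.1, n_iter=500):
    """Toy co-selection sketch (illustrative, not the sCOs method):
    minimize 0.5*||XW - Y||_F^2 + lam*||W||_{2,1} by proximal gradient.
    Feature scores = row norms of W; instance scores = negative residuals,
    so better-fitted instances score higher."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)          # gradient of the smooth part
        W = prox_l21(W - lr * grad, lr * lam)
    feature_scores = np.linalg.norm(W, axis=1)
    instance_scores = -np.linalg.norm(X @ W - Y, axis=1)
    return W, feature_scores, instance_scores
```

In this toy setting, features whose rows of W survive the thresholding are deemed relevant, and instances with small residuals are deemed reliable; the full sCOs framework additionally exploits a similarity-preserving term over labeled and unlabeled data, which this sketch omits.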