Distributed peer-to-peer systems depend on the voluntary participation of peers to collectively manage a storage pool. In such systems, data is typically replicated for performance and availability. If the storage devoted to replication is not monitored and provisioned, the full benefits of replication cannot be realized. Resource constraints, performance scalability, and availability present interesting trade-offs: aggressive replication improves availability and performance scalability in terms of response time, whereas resource constraints limit the total storage in the network. Identifying and eliminating redundant information is therefore a fundamental problem for such systems. In this paper, we present a novel and efficient solution that addresses availability and scalability in the management of redundant data. Specifically, we address the problem of duplicate elimination in systems connected over an unstructured peer-to-peer network, in which there is no a priori binding between an object and its location. We propose two randomized protocols that solve this problem in a scalable and decentralized fashion without compromising the availability requirements of the application. Performance results from both large-scale simulations and a prototype deployed on PlanetLab show that our protocols provide high probabilistic guarantees while incurring minimal administrative overhead.
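To make the problem concrete, the sketch below illustrates one simple randomized approach to duplicate elimination. Objects are identified by a content hash (so duplicates can be detected even without any binding between object and location), and among the peers holding copies of one object, a randomized rank decides which copies survive while a minimum replica count preserves availability. This is a minimal illustrative sketch under assumed names (`content_id`, `randomized_elimination`, `min_replicas`), not the actual protocols proposed in the paper.

```python
import hashlib
import random

def content_id(data: bytes) -> str:
    # Identify an object by its content hash, so identical copies held
    # at different peers map to the same identifier.
    return hashlib.sha256(data).hexdigest()

def randomized_elimination(holders, min_replicas=2, seed=None):
    """Decide which copies of one object survive.

    `holders` is the set of peer IDs currently storing a copy. Each
    holder is assigned a random rank; the `min_replicas` lowest-ranked
    holders keep their copy (preserving availability), and the rest
    delete theirs (reclaiming redundant storage). Illustrative only:
    a real protocol would run this decentrally via message exchange.
    """
    rng = random.Random(seed)
    ranked = sorted(holders, key=lambda peer: rng.random())
    keep = set(ranked[:min_replicas])
    drop = set(ranked[min_replicas:])
    return keep, drop
```

For example, with four holders and `min_replicas=2`, two randomly chosen peers retain the object and two reclaim their storage, so half the redundant space is recovered without dropping below the availability target.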