PROJECT TITLE:

HDM: A Composable Framework for Big Data Processing - 2018

ABSTRACT:

Over the past years, frameworks such as MapReduce and Spark have been introduced to ease the task of developing big data programs and applications. However, the jobs in these frameworks are roughly defined and packaged as executable jars, without any of their functionality being exposed or described. This means that deployed jobs are not natively composable or reusable for subsequent development. It also hampers the ability to apply optimizations to the data flow of job sequences and pipelines. In this project, we present the Hierarchically Distributed Data Matrix (HDM), a functional, strongly-typed data representation for writing composable big data applications. Along with HDM, a runtime framework is provided to support the execution, integration and management of HDM applications on distributed infrastructures. Based on the functional data dependency graph of HDM, multiple optimizations are applied to improve the performance of executing HDM jobs. The experimental results show that our optimizations can achieve improvements of between 10% and 40% in Job-Completion-Time for different types of applications, compared with the current state of the art, Apache Spark.
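To make the core idea concrete, the following Scala sketch shows what a composable, strongly-typed data representation with an inspectable functional dependency graph could look like. This is a minimal illustration only: the type names (HDMNode, SourceNode, MapNode, FilterNode) and the local collect() evaluator are assumptions made for this example and are not the actual HDM API.

    // Hypothetical sketch: each transformation is a node that keeps a reference
    // to its parent, so the whole pipeline remains a typed, composable value
    // (unlike an opaque executable jar) and its dependency graph can be
    // inspected and optimized before execution.
    sealed trait HDMNode[T] {
      // Composing operations only builds graph nodes; nothing runs yet.
      def map[U](f: T => U): HDMNode[U]       = MapNode(this, f)
      def filter(p: T => Boolean): HDMNode[T] = FilterNode(this, p)

      // Naive local evaluation, standing in for a distributed runtime that
      // could first rewrite/fuse the graph before executing it.
      def collect(): Seq[T]
    }

    final case class SourceNode[T](data: Seq[T]) extends HDMNode[T] {
      def collect(): Seq[T] = data
    }

    final case class MapNode[A, T](parent: HDMNode[A], f: A => T) extends HDMNode[T] {
      def collect(): Seq[T] = parent.collect().map(f)
    }

    final case class FilterNode[T](parent: HDMNode[T], p: T => Boolean) extends HDMNode[T] {
      def collect(): Seq[T] = parent.collect().filter(p)
    }

    object ComposableExample extends App {
      // A reusable, typed sub-pipeline: because it is an ordinary value,
      // another job can import it and compose further transformations onto it.
      val lengths: HDMNode[Int] =
        SourceNode(Seq("hdm", "spark", "mapreduce")).map(_.length)

      val longOnes: HDMNode[Int] = lengths.filter(_ > 3)

      println(longOnes.collect()) // prints List(5, 9)
    }

Because every job is expressed as such a graph of typed nodes rather than a packaged binary, an optimizer can reorder or fuse operators across composed pipelines, which is the kind of data-flow optimization the abstract attributes to HDM.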


