A stream analytics platform may be implemented over an Apache™ Storm or Apache™ Spark engine for designing a distributed processing pipeline. The distributed processing pipeline may be created using multiple channel components and processor components. The channel components and processor components are available on a Graphical User Interface (GUI) provided for creating the distributed processing pipeline. A user may drag and drop the components onto a canvas of the GUI to create the distributed processing pipeline. The distributed processing pipeline so created may be executed by a cluster of computing resources associated with the Apache™ Storm/Apache™ Spark engine. It must be understood that, since the components have predefined tasks/functionalities, the components may not be utilized for executing any custom logic on the Apache™ Storm/Apache™ Spark engine. For example, the user may want to develop a custom logic that determines whether a loan may be granted to a customer of a bank based upon the current salary of the customer. In order to support such a scenario, a custom component is provided in the distributed processing pipeline that may be utilized for configuring the custom logic as desired by the user.
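As a minimal sketch of the kind of custom logic described above (the function names, field names, and salary threshold here are hypothetical, chosen only for illustration), the program code a user might configure on a custom component could resemble:

```python
def may_grant_loan(customer: dict, salary_threshold: float = 50000.0) -> bool:
    """Custom logic: grant a loan only if the customer's current salary
    meets a configured threshold (threshold value is illustrative)."""
    return customer.get("current_salary", 0.0) >= salary_threshold

def process(events: list) -> list:
    """Apply the custom logic to each event of a micro-batch or tuple
    stream handed to the custom component by the engine."""
    return [dict(event, loan_granted=may_grant_loan(event)) for event in events]

batch = [
    {"customer_id": 1, "current_salary": 72000.0},
    {"customer_id": 2, "current_salary": 31000.0},
]
results = process(batch)
```

In such a sketch, `process` stands in for whatever entry point the custom component invokes; the actual interface would depend on the engine (e.g., a Storm bolt's `execute` method or a Spark map function).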
However, it is observed that the user needs to re-perform the configuration steps for the same custom logic on a custom component of another distributed processing pipeline. That is, if the user creates a new distributed processing pipeline and desires to execute the same custom logic as above (i.e., determining whether a loan may be granted to a customer of a bank based upon the current salary of the customer) on the new distributed processing pipeline, the user may have to re-upload the program file on the new distributed processing pipeline and re-extract the program code, corresponding to the custom component, from the program file, in order to execute the custom logic on the stream analytics platform. In other words, it must be understood that, for executing the same custom logic on different distributed processing pipelines, the user has to repeat the steps of uploading the program file, configuring the custom component, and extracting the program code, corresponding to the custom logic in the uploaded program file, on the custom component while configuring the custom component. It must be understood that such repetition of the above steps for execution of the same custom logic is onerous, time consuming, and undesirable. Further, the user may introduce errors while re-performing the same tasks. Specifically, the user may make a mistake while passing arguments corresponding to a function of the custom logic during the configuration of the custom component.