Abstract

With the advent of big-data scenarios, centralized processing is no longer feasible and is becoming obsolete. With this paradigm shift, distributed processing is becoming more relevant: instead of burdening a central processor, the load is shared among multiple processing units. The decentralization capability of the ADMM algorithm has made it popular in recent years. Another recent algorithm, PDMM, has paved its way into distributed processing, though it is still under development. Both algorithms work well on medium-scale problems, but handling large-scale problems remains challenging. This work is an effort toward handling large-scale data with a reduced computational load. To this end, the proposed framework combines the advantages of the SVRG and PDMM algorithms. The algorithm is proved to converge at rate $\mathcal{O}(1/K)$ for strongly convex loss functions, which is faster than existing algorithms. Experimental evaluations on real data demonstrate the efficacy of the proposed algorithm over state-of-the-art methods.
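The abstract describes combining an SVRG-style variance-reduced gradient estimator with a PDMM-style distributed solver. As background, here is a minimal single-node sketch of the SVRG update on a least-squares loss; the function name, step size, and data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def svrg(A, b, eta=0.1, epochs=200, seed=0):
    """SVRG sketch for F(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, computed once per epoch.
        mu = A.T @ (A @ w_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ w - b[i])            # stochastic gradient at w
            gi_snap = A[i] * (A[i] @ w_snap - b[i])  # same sample at the snapshot
            # Variance-reduced step: unbiased, and the variance of the
            # estimator shrinks as w and w_snap approach the optimum.
            w -= eta * (gi - gi_snap + mu)
    return w
```

In the distributed setting of the paper, each node would apply such a variance-reduced estimator to its local loss while PDMM handles the consensus between neighbors; the sketch above only shows the gradient-estimation idea.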
