Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

By tbuzzed | November 9, 2022

[Submitted on 8 Nov 2022]

Abstract: Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing model updates, rather than raw data, with a server. However, the heterogeneous computational and communication resources of edge devices give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before training starts, each edge device uploads a privacy-preserving coded dataset to the server, generated by adding Gaussian noise to the projected local dataset. During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices. We design a gradient aggregation scheme to ensure that the aggregated model update is an unbiased estimate of the desired global update; this scheme also enables periodic model averaging to improve training efficiency. We characterize the tradeoff between the convergence performance and the privacy guarantee of SCFL: a noisier coded dataset provides stronger privacy protection for edge devices but degrades learning performance. We further develop a contract-based incentive mechanism to coordinate this conflict. Simulation results show that SCFL learns a better model within a given training time and achieves a better privacy-performance tradeoff than baseline methods. In addition, the proposed incentive mechanism yields better training performance than the conventional Stackelberg game approach.

Submission history
From: Yuchang Sun
[v1] Tue, 8 Nov 2022 09:58:36 UTC (298 KB)
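To make the coded-dataset idea above concrete, here is a minimal sketch of how an edge device might construct a privacy-preserving coded dataset by randomly projecting its local data and then adding Gaussian noise. The projection matrix, its size, and the noise calibration below are illustrative assumptions for this sketch, not the exact construction specified in the paper.

```python
import numpy as np

def make_coded_dataset(X, y, m, noise_std, seed=None):
    """Illustrative sketch: compress a local dataset (X, y) into m coded
    samples via a random linear projection, then add Gaussian noise so the
    server never receives the raw data.

    NOTE: the actual projection and noise calibration used by SCFL are
    defined in the paper; this is only a generic random-projection +
    Gaussian-noise example.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Each coded sample is a random linear mix of the raw local samples.
    G = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))
    # Add Gaussian noise to the projected features and labels for privacy.
    X_coded = G @ X + rng.normal(0.0, noise_std, size=(m, X.shape[1]))
    y_coded = G @ y + rng.normal(0.0, noise_std, size=m)
    return X_coded, y_coded

# Example: a device with 1,000 raw samples uploads 100 noisy coded samples.
X = np.random.randn(1000, 20)
y = np.random.randn(1000)
X_c, y_c = make_coded_dataset(X, y, m=100, noise_std=0.5, seed=0)
print(X_c.shape, y_c.shape)  # (100, 20) (100,)
```

In this sketch, noise_std plays the role of the privacy knob described in the abstract: a larger value gives the device stronger protection, but it also makes the server-side gradients computed on the coded data noisier, which is the privacy-performance tradeoff the contract-based incentive mechanism is designed to balance.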
