
As network connectivity increasingly shapes modern vehicular applications, advance knowledge of Quality-of-Service (QoS) degradation could unlock the potential for efficient and safer mobility. Predictive QoS (pQoS) has long relied on traditional Machine Learning (ML) methods, but distributed approaches such as Federated Learning (FL) have lately emerged as alternatives promising performance (and privacy) gains. Vehicular environments, however, are prone to concept drift: frequent changes in the underlying client data distribution degrade the ML model’s accuracy. To mitigate drift, existing FL algorithms employ continuous model training at the expense of valuable network resources. DareFL, our drift management algorithm for distributed pQoS, (a) detects drift without violating FL’s privacy restrictions, and (b) unlike previous works, carefully schedules the (re-)training process thereafter, achieving remarkably reduced resource consumption. For evaluation purposes, we release a high-fidelity vehicular network simulator. We then realize two intuitive drift scenarios over which DareFL consistently yields accuracy comparable to existing FL schemes, while saving up to 70% of network resources.
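To make the concept-drift problem concrete, the sketch below shows a minimal, generic accuracy-based drift detector: when a model's recent accuracy falls noticeably below the best level previously observed, the client data distribution may have shifted and retraining can be scheduled. This is purely illustrative and is not DareFL's actual detection mechanism (which the abstract does not detail); the class name, window size, and tolerance threshold are all assumptions for the example.

```python
from collections import deque


class SimpleDriftDetector:
    """Illustrative accuracy-based drift detector (hypothetical; not DareFL's method).

    Flags drift when the mean accuracy over a sliding window drops more than
    `tolerance` below the best window mean observed so far.
    """

    def __init__(self, window: int = 10, tolerance: float = 0.1):
        self.window = deque(maxlen=window)  # recent per-round accuracies
        self.best_mean = 0.0                # best window mean seen so far
        self.tolerance = tolerance          # allowed degradation before flagging

    def update(self, accuracy: float) -> bool:
        """Record a new accuracy reading; return True if drift is suspected."""
        self.window.append(accuracy)
        mean = sum(self.window) / len(self.window)
        if mean > self.best_mean:
            self.best_mean = mean
        return (self.best_mean - mean) > self.tolerance
```

In an FL setting, each vehicle could run such a check locally on its own validation data, so only a boolean drift signal (not raw data) would ever leave the client; a scheduler could then decide when retraining is actually worth the network cost.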
