Beyond-5G networks are expected to distribute computation from the cloud to multiple edge nodes, some of which must process both low-level baseband functions and high-level tasks, most notably AI/ML. These edge computing nodes demand high performance, reprogrammability, and low power consumption, especially when located at the far edge of the network. An attractive solution for meeting these needs is to employ highly complex SoC FPGAs, such as state-of-the-art RFSoC or ACAP devices. As proposed in this work, towards minimizing the cost and power consumption of the nodes, sophisticated SoC programming enables us to execute the baseband and AI processes in parallel, i.e., to exploit a single device as an accelerator for the full range of tasks executed on a network node. The current paper explores how to integrate RFSoC/ACAP devices into such architectures, including the high-throughput interfaces, the HW/SW co-processing capabilities, the deployment of accelerators, and a preliminary estimation of performance and resource utilization.
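The co-processing model described in the abstract — one device concurrently serving baseband and AI/ML workloads — can be illustrated with a minimal host-side sketch. All function names here are hypothetical placeholders standing in for calls into hardware accelerators, not the authors' actual API or design:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for work offloaded to the SoC's programmable logic;
# in a real RFSoC/ACAP design these would invoke hardware accelerator kernels.
def baseband_task(samples):
    # Placeholder for low-level baseband processing (e.g. an FFT over I/Q samples).
    return [s * 2 for s in samples]

def ai_task(features):
    # Placeholder for high-level ML inference on extracted features.
    return sum(features)

# A single device serving both workloads: the host dispatches the baseband
# and AI/ML tasks concurrently so they share the accelerator in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    bb_future = pool.submit(baseband_task, [1, 2, 3])
    ai_future = pool.submit(ai_task, [0.5, 1.5])
    bb_out, ai_out = bb_future.result(), ai_future.result()
```

In an actual deployment the thread pool would be replaced by the SoC's runtime (e.g. queued commands to independent accelerator kernels), but the dispatch pattern — two asynchronous submissions joined by the host — is the same.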
BBU, RFSoC, AI/ML, FPGA