
Several projection-based learning models are presented for dynamic thermal simulation of multicore CPUs/GPUs, enabled by proper orthogonal decomposition (POD) and Galerkin projection of the heat transfer equation. Learning via POD yields optimized basis functions with the best least-squares fit to the training data, which significantly reduces the degrees of freedom needed to obtain the thermal solution. Galerkin projection, in turn, incorporates physical principles into every step of the calculation during simulation, which improves accuracy and enables reliable predictions beyond the training domain. Three variants of the POD-Galerkin approach are developed and applied to an AMD quad-core CPU and an NVIDIA Tesla V100 GPU with 13,400 cores to demonstrate training effectiveness as well as thermal simulation efficiency and accuracy. Additionally, the remarkable predictive ability of POD-Galerkin (POD-GP) simulation beyond the training domain is illustrated. This contrasts with mainstream machine-learning methods based on neural networks, whose predictions beyond training are usually unreliable because no physical principle is involved during simulation.
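To make the two ingredients concrete, the sketch below shows POD basis extraction and Galerkin projection for a simple one-dimensional heat equation with a time-varying power source. It is only a minimal illustration under assumed parameters: the grid size, diffusivity, source profile `q(x, t)`, and mode-selection threshold are hypothetical, and it does not reproduce the paper's three model variants or the 3D CPU/GPU packages studied there.

```python
# Minimal POD-Galerkin sketch for a 1D heat equation (illustrative only).
# Assumptions: simple finite-difference full-order model, Dirichlet boundaries,
# hypothetical grid size, diffusivity, and power profile q(x, t).
import numpy as np

# --- Full-order model: dT/dt = A T + q(t) on a 1D rod ---
n, L, alpha = 200, 1.0, 1e-4          # interior grid points, length (m), diffusivity (m^2/s)
dx = L / (n + 1)
A = alpha / dx**2 * (np.diag(-2 * np.ones(n))
                     + np.diag(np.ones(n - 1), 1)
                     + np.diag(np.ones(n - 1), -1))
x = np.linspace(dx, L - dx, n)

def q(t):
    # Hypothetical localized, time-varying heat source (stand-in for a power map)
    return 50.0 * np.exp(-((x - 0.3) / 0.05) ** 2) * (1 + np.sin(2 * np.pi * t))

def integrate(rhs_matrix, source, T0, dt, steps, project=None, lift=None):
    """Implicit-Euler time stepping; optionally in a reduced (projected) space."""
    T, history = T0.copy(), []
    I = np.eye(rhs_matrix.shape[0])
    for k in range(steps):
        s = source((k + 1) * dt)
        if project is not None:
            s = project @ s                 # project the source onto the POD basis
        T = np.linalg.solve(I - dt * rhs_matrix, T + dt * s)
        history.append(lift @ T if lift is not None else T.copy())
    return np.array(history).T              # columns are snapshots

# --- Training: run the full model and collect snapshots ---
dt, steps = 0.5, 400
snapshots = integrate(A, q, np.zeros(n), dt, steps)          # shape (n, steps)

# --- POD: optimized basis = leading left singular vectors of the snapshot matrix ---
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
k = np.searchsorted(energy, 0.9999) + 1                      # modes capturing ~99.99% energy
Phi = U[:, :k]                                               # POD modes, shape (n, k)

# --- Galerkin projection: the reduced operator retains the PDE physics ---
A_r = Phi.T @ A @ Phi                                        # shape (k, k)

# --- Online simulation in the reduced space, then lift back to the full field ---
T_pod = integrate(A_r, q, np.zeros(k), dt, steps, project=Phi.T, lift=Phi)
T_ref = integrate(A, q, np.zeros(n), dt, steps)
err = np.linalg.norm(T_pod - T_ref) / np.linalg.norm(T_ref)
print(f"{k} POD modes (vs {n} grid DOF), relative error {err:.2e}")
```

Because the reduced operator `A_r` is obtained by projecting the governing equation itself rather than by fitting input-output pairs, the reduced model can be driven by power inputs it was never trained on, which is the property the abstract contrasts with purely data-driven neural-network predictors.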
