
GPU-64 is a power-efficient 64-bit GPU architecture optimized for Large Language Model (LLM) inference. Key innovations:

- Content-Addressable Memory (CAM) based KV-Cache with O(1) lookup latency
- 16,384 KV entries per SM (4× more than GPU-256)
- 8×8 tensor cores for FP16/INT8 inference
- 75W TDP for edge/mobile deployment
- 4× inference speedup vs. traditional GPUs

The architecture uses compact 64-bit registers (KEY[32] + VALUE[32]), enabling 4× more KV-Cache entries than GPU-256 and making it well suited for long-context LLM inference on edge devices. An RTL implementation and a Python emulator are available on GitHub.
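
To make the CAM KV-Cache design concrete, here is a minimal Python sketch of the idea: 64-bit entries packing a 32-bit key with a 32-bit value, O(1) average-case content-addressed lookup (modeled with a hash map), and a 16,384-entry per-SM capacity. The names (`CamKvCache`, `insert`, `lookup`) are illustrative and are not the emulator's actual API.

```python
KEY_BITS = 32
VALUE_BITS = 32
ENTRIES_PER_SM = 16_384  # per-SM capacity, per the spec above


class CamKvCache:
    """Software model of a CAM-style KV-Cache with packed 64-bit entries."""

    def __init__(self, capacity: int = ENTRIES_PER_SM) -> None:
        self.capacity = capacity
        self._index: dict[int, int] = {}  # key -> packed 64-bit entry

    @staticmethod
    def pack(key: int, value: int) -> int:
        """Pack KEY[32] + VALUE[32] into one 64-bit register."""
        assert 0 <= key < (1 << KEY_BITS) and 0 <= value < (1 << VALUE_BITS)
        return (key << VALUE_BITS) | value

    def insert(self, key: int, value: int) -> None:
        if key not in self._index and len(self._index) >= self.capacity:
            raise RuntimeError("KV-Cache full: 16,384 entries per SM")
        self._index[key] = self.pack(key, value)

    def lookup(self, key: int) -> int | None:
        """Content-addressed match: O(1) average, no linear scan."""
        entry = self._index.get(key)
        return entry & ((1 << VALUE_BITS) - 1) if entry is not None else None


cache = CamKvCache()
cache.insert(key=0xCAFE, value=0xBEEF)
assert cache.lookup(0xCAFE) == 0xBEEF
```

The compact 64-bit entry is what buys the 4× capacity over GPU-256's 256-bit registers: four packed entries fit in the same register-file footprint as one wide entry.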
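The 8×8 tensor cores can likewise be modeled in a few lines. The sketch below assumes the common tensor-core convention of FP16 inputs with a wider accumulator (the actual GPU-64 accumulator width is an assumption here, and `tensor_core_mma` is a hypothetical name, not the emulator's API).

```python
import numpy as np

TILE = 8  # 8x8 tensor-core tile size


def tensor_core_mma(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> np.ndarray:
    """One fused multiply-accumulate over 8x8 FP16 tiles: D = A @ B + C."""
    assert a.shape == b.shape == c.shape == (TILE, TILE)
    acc = a.astype(np.float32) @ b.astype(np.float32)  # widen before accumulate
    return (acc + c.astype(np.float32)).astype(np.float16)


rng = np.random.default_rng(0)
a = rng.standard_normal((TILE, TILE)).astype(np.float16)
b = rng.standard_normal((TILE, TILE)).astype(np.float16)
c = np.zeros((TILE, TILE), dtype=np.float16)
d = tensor_core_mma(a, b, c)
print(d.shape)  # (8, 8)
```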
