Data sources: ZENODO

GPU-64: A 64-bit Inference GPU with Native O(1) KV-Cache for Edge LLM Deployment

Author: Peyriguere, Boris

Abstract

GPU-64 is a power-efficient 64-bit GPU architecture optimized for Large Language Model (LLM) inference. Key innovations:

- Content-Addressable Memory (CAM) based KV-cache with O(1) lookup latency
- 16,384 KV entries per SM (4× more than GPU-256)
- 8×8 tensor cores for FP16/INT8 inference
- 75 W TDP for edge/mobile deployment
- 4× inference speedup over traditional GPUs

The architecture uses compact 64-bit registers (KEY[32] + VALUE[32]), enabling 4× more KV-cache entries than GPU-256 and making it well suited to long-context LLM inference on edge devices. An RTL implementation and a Python emulator are available on GitHub.
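
The abstract describes the KV-cache as a per-SM CAM with 64-bit entries (a 32-bit key packed with a 32-bit value) and O(1) lookup. The minimal Python sketch below illustrates that behavior in software, in the spirit of the emulator the record mentions; the class name, method names, capacity handling, and eviction behavior are illustrative assumptions, not taken from the GPU-64 RTL or its published emulator.

# Illustrative sketch only: names and policies are assumptions, not the
# GPU-64 design. Models one SM's CAM-style KV-cache with 64-bit entries.

class CAMKVCache:
    """Per-SM KV-cache: each entry packs KEY[32] in the upper 32 bits
    and VALUE[32] in the lower 32 bits of a 64-bit word."""

    def __init__(self, capacity=16384):          # 16,384 entries per SM, per the abstract
        self.capacity = capacity
        self.entries = {}                        # key -> packed 64-bit word

    def insert(self, key: int, value: int) -> bool:
        key &= 0xFFFFFFFF                        # KEY[32]
        value &= 0xFFFFFFFF                      # VALUE[32]
        if key not in self.entries and len(self.entries) >= self.capacity:
            return False                         # cache full; real hardware would evict or spill (assumption)
        self.entries[key] = (key << 32) | value  # pack into one 64-bit register word
        return True

    def lookup(self, key: int):
        word = self.entries.get(key & 0xFFFFFFFF)  # single associative probe, O(1) average
        return None if word is None else word & 0xFFFFFFFF


if __name__ == "__main__":
    cache = CAMKVCache()
    cache.insert(0xDEADBEEF, 42)
    print(cache.lookup(0xDEADBEEF))   # hit -> 42
    print(cache.lookup(0x12345678))   # miss -> None

Note that a Python dictionary only approximates the hardware: it gives amortized O(1) lookups by hashing, whereas a true CAM compares the probe key against all stored keys in parallel and returns a match in a fixed number of cycles.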
