
arXiv: 2403.13289
Abstract

Recent years have seen an explosion of work and interest in text‐to‐3D shape generation. Much of the progress is driven by advances in 3D representations, large‐scale pretraining and representation learning for text and image data enabling generative AI models, and differentiable rendering. Computational systems that can perform text‐to‐3D shape generation have captivated the popular imagination, as they enable non‐expert users to easily create 3D content directly from text. However, many limitations and challenges remain in this problem space. In this state‐of‐the‐art report, we survey the underlying technology and methods enabling text‐to‐3D shape generation and summarize the background literature. We then derive a systematic categorization of recent work on text‐to‐3D shape generation based on the type of supervision data required. Finally, we discuss the limitations of the existing categories of methods and delineate promising directions for future work.
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 11 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
