Vision, our richest sensor, allows inferring vast amounts of information from reality. Arguably, to be “smart everywhere” we will need to have “eyes everywhere”. Coupled with advances in artificial vision, the possibilities are endless: wearable applications, augmented reality, surveillance, ambient-assisted living, etc. Computer vision is currently moving rapidly beyond academic research and factory automation. At the same time, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could serve as these “eyes everywhere”.

Vision, however, is the most demanding sensor in terms of power consumption and required processing power, and in this respect existing mass-consumer mobile devices have three problems: 1) their power consumption precludes ‘always-on’ operation, 2) most of their sensors would go unused in vision-based applications, and 3) since they have been designed for a specific purpose (i.e. as cell phones, PDAs and “readers”), people will not consistently use them for other purposes.

Our objective in this project is to build an optimized core vision platform that can work independently and can also be embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation; it will also enable novel applications and services beyond what current vision systems can do, which are either personal/mobile or ‘always-on’ but not both at the same time. Thus, the “Eyes of Things” project aims at developing a ground-breaking platform that responds to: a) the need for more intelligence in future embedded systems, b) computer vision’s rapid move beyond academic research and factory automation, and c) the phenomenal technological advances in mobile processing power.