Oracle (United States)

Funder
15 Projects, page 1 of 3
  • Funder: European Commission
    Project Code: 223996
  • Funder: UK Research and Innovation
    Project Code: EP/L016427/1
    Funder Contribution: 4,746,530 GBP

    Overview: We propose a Centre for Doctoral Training in Data Science. Data science is an emerging discipline that combines machine learning, databases, and other research areas in order to generate new knowledge from complex data. Interest in data science is exploding in industry and the public sector, both in the UK and internationally. Students from the Centre will be well prepared to work on tough problems involving large-scale unstructured and semi-structured data, which increasingly arise across a wide variety of application areas.

    Skills need: There is a significant industrial need for students who are well trained in data science. Skilled data scientists are in high demand. A report by the McKinsey Global Institute cites a shortage of up to 190,000 qualified data scientists in the US; the situation in the UK is likely to be similar. A 2012 report in the Harvard Business Review concludes: "Indeed the shortage of data scientists is becoming a serious constraint in some sectors." A report on the Nature website cited an astonishing 15,000% increase in job postings for data scientists in a single year, from 2011 to 2012. Many of our industrial partners (see letters of support) have expressed a pressing need to hire in data science.

    Training approach: We will train students using a rigorous and innovative four-year programme designed not only to train students in performing cutting-edge research, but also to foster interdisciplinary interactions between students and to build students' practical expertise through interaction with a wide consortium of partners. The first year of the programme combines taught coursework with a sequence of small research projects; the coursework will include courses in machine learning, databases, and other research areas. Years 2-4 of the programme will consist primarily of an intensive PhD-level research project. The programme will provide students with breadth across the interdisciplinary scope of data science, depth in a specialist area, training in leadership and communication skills, and an appreciation of practical issues in applied data science. All students will receive individual supervision from at least two members of Centre staff. The training programme will be especially characterised by opportunities for combining theory and practice, and for student-led and peer-to-peer learning.

  • Funder: UK Research and Innovation
    Project Code: EP/K01790X/1
    Funder Contribution: 618,883 GBP

    Traditionally, most software projects have been tackled using a single programming language. However, as our ambitions for software grow, this is increasingly unnatural: no single language, no matter how "good", is well-suited to everything. Increasingly, different communities have created or adopted non-traditional languages - often, though not always, under the banner of Domain Specific Languages (DSLs) - to satisfy their specific needs.

    Consider a large organisation. Its back-end software may utilise SQL and Java; its desktop software C#; its website back-end PHP and the front-end JavaScript and HTML5; reports may be created using R; and some divisions may prototype software with Python or Haskell. Though the organisation makes use of different languages, each must execute in its own silo. We currently have few techniques to allow a single running program to be written using multiple languages. In the Cooler project, we call this the "runtime composition" problem: how can languages execute directly alongside each other, exchange data, call each other, optimise with respect to each other, and so on?

    The chief existing technique for composing language runtimes is to translate all languages in the composition down to a base language, most commonly the bytecode for one of the "big" Virtual Machines (VMs) - Java's HotSpot or .NET's CLR. Though this works well in some cases, it has two major problems. Firstly, a VM will intentionally target a specific family of languages, and may not provide the primitives needed by languages outside that family. HotSpot, for example, does not support tail recursion or continuations, excluding many advanced languages. Secondly, the primitives that a VM exposes may not allow efficient execution of programs. For example, dynamically typed languages run slower on HotSpot than on their seemingly much less sophisticated "home brew" VMs.

    The Cooler project takes a new approach to the composition problem. It hypothesizes that meta-tracing will allow the efficient composition of arbitrary language runtimes. Meta-tracing is a recently developed technique that creates efficient VMs with custom Just-in-Time (JIT) compilers. First, language designers write an interpreter for their chosen language. When that interpreter executes a user's program, hot paths in the code are recorded ("traced"), optimised, and converted into machine code; subsequent calls then use that fast machine code rather than the slow interpreter. Meta-tracing is distinct from partial evaluation: it records the actual actions executed by the interpreter on a specific user program.

    Meta-tracing is an exciting new technique for three reasons. Firstly, it leads to fast VMs: the PyPy VM (a fully compatible reimplementation of Python) is over 5 times faster than CPython (the C-based Python VM) and Jython (Python on the JVM). Secondly, it requires few resources: a meta-tracing implementation of the Converge language was completed in less than 3 person-months, and runs faster than CPython and Jython. Thirdly, because the user writes the interpreter themselves, there is no bias towards any particular family of languages.

    The Cooler project will begin by creating the first language designed specifically for meta-tracing (rather than reusing an unsuitable existing language, as existing systems do). This will enable the exploration of various aspects of language runtime composition. First, cross-runtime sharing: how can different paradigms (e.g. imperative and functional) exchange data and behaviour? Second, optimisation: how can programs written in multiple paradigms be optimised (in space and time)? Finally, the limits of the approach will be explored through known hard problems: cross-runtime garbage collection; concurrency; and the extent to which runtimes not designed for composition can be composed. Ultimately, the project will allow users to compose runtimes and programs in ways that are currently unfeasible.
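    The meta-tracing workflow described above can be made concrete with a small sketch in the style of RPython, the toolchain behind PyPy. The jit hints used here (JitDriver, jit_merge_point, can_enter_jit) are RPython's real interface, but the one-character "counter" bytecode being interpreted is invented purely for illustration.

    ```python
    # A minimal RPython-style interpreter sketch. The "greens" identify a
    # position in the user's program; the "reds" are its mutable state. When
    # a loop through jit_merge_point becomes hot, the meta-tracer records the
    # interpreter's actions, optimises them, and emits machine code.
    from rpython.rlib import jit

    driver = jit.JitDriver(greens=["pc", "program"], reds=["acc"])

    def interpret(program, acc):
        pc = 0
        while pc < len(program):
            driver.jit_merge_point(pc=pc, program=program, acc=acc)
            op = program[pc]
            if op == "+":
                acc += 1
            elif op == "-":
                acc -= 1
            elif op == "L" and acc > 0:
                # Backward jump: tell the driver a hot loop may start here.
                driver.can_enter_jit(pc=0, program=program, acc=acc)
                pc = 0
                continue
            pc += 1
        return acc

    # interpret("-L", 1000000) counts down in a hot loop; translated with
    # RPython's JIT option, the traced loop runs as machine code.
    ```

    Run untranslated, the jit hints are no-ops and this is an ordinary interpreter; translated with RPython, the same source gains a custom JIT, which is the low-resource property cited above.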

  • Funder: UK Research and Innovation
    Project Code: EP/K008730/1
    Funder Contribution: 4,135,050 GBP

    The last decade has seen a significant shift in the way computers are designed. Up to the turn of the millennium, advances in performance were achieved by making a single processor, which could execute a single program at a time, go faster, usually by increasing the frequency of its clock signal. But shortly after the turn of the millennium it became clear that this approach was running into a brick wall: the faster clock meant the processor got hotter, and the amount of heat that can be dissipated in a silicon chip before it fails is limited; that limit was approaching rapidly. Quite suddenly several high-profile projects were cancelled and the industry found a new approach to higher performance. Instead of making one processor go ever faster, the number of processor cores could be increased. Multi-core processors had arrived: first dual-core, then quad-core, and so on. As microchip manufacturing capability continues to increase the number of transistors that can be integrated on a single chip, the number of cores continues to rise, and now multi-core is giving way to many-core systems - processors with tens of cores, running tens of programs at the same time.

    This all seems fine at the hardware level - more transistors means more cores - but this change from one to many programs running at the same time has caused many difficulties for the programmers who develop applications for these new systems. Writing a program that runs on a single core is much better understood than writing a program that is actually tens of programs running at the same time, interacting with each other in complex and hard-to-predict ways (the sketch below gives a small example). To make life for the programmer even harder, with many-core systems it is often best not to make all the cores identical; instead, heterogeneous many-core systems offer the promise of much higher efficiency, with specialised cores handling specialised parts of the overall program, but this is even harder for the programmer to manage.

    The Programme of projects we plan to undertake will bring the most advanced techniques in computer science to bear on this complex problem, focussing particularly on how we can optimise the hardware and software configurations together to address the important application domain of 3D scene understanding. This will enable a future smartphone fitted with a camera to scan a scene and not only store the picture it sees, but also understand that the scene includes a house, a tree, and a moving car. In the course of addressing this application we expect to learn a lot about optimising many-core systems that will have wider applicability too, with the prospect of making future electronic products more efficient, more capable, and more useful.
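    As a small illustration of why many programs running at the same time are hard to reason about, the hypothetical sketch below (not part of the project) shows a classic lost-update race: four threads increment a shared counter without synchronisation, and the final total is usually wrong.

    ```python
    # Hypothetical illustration of a data race between threads.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            tmp = counter   # read...
            tmp += 1        # ...modify...
            counter = tmp   # ...write: another thread may have written in between

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000, but interleaved read-modify-write sequences lose
    # updates, so the printed total is almost always smaller.
    print(counter)
    ```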

  • Funder: UK Research and Innovation
    Project Code: EP/L000725/1
    Funder Contribution: 1,166,420 GBP

    The ecosystem of compute devices is highly connected, and likely to become even more so as the internet-of-things concept is realized. There is a single underlying global protocol for communication which enables all connected devices to interact: the Internet Protocol (IP). In this project, we will create a corresponding single underlying global protocol for computation. This will enable wireless sensors, smartphones, laptops, servers and cloud data centres to co-operate on what is conceptually a single task: an AnyScale app.

    A user might run an AnyScale app on her smartphone; then, when the battery is running low, or when wireless connectivity becomes available, the app may shift its computation to a cloud server automatically. This kind of runtime decision making is made possible by the AnyScale framework, which uses a cost/benefit model and machine learning techniques to drive its behaviour. When the app is running on the phone, it cannot perform very complex calculations or use too much memory; on a powerful server, however, the computations can be much larger and more complicated. The AnyScale app will behave in a way appropriate to where it is running.

    In this project, we will create the tools, techniques and technology to enable software developers to create and deploy AnyScale apps. Our first case study will be to design a movement controller app that allows a biped robot with realistic humanoid limbs to 'walk' over various kinds of terrain. This is a complex computational task - generally beyond the power of the embedded chips inside robotic limbs. Our AnyScale controller will offload computation to computers on board the robot, or wirelessly to nearby servers or cloud-based systems. This is an ideal scenario for robotic exploration, e.g. of nuclear disaster sites.
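    The overview above describes placement decisions driven by a cost/benefit model. The following is a minimal sketch of what such a decision might look like; the Site fields, the weights, and the linear cost function are all hypothetical illustrations, not the AnyScale framework's actual model.

    ```python
    # Hypothetical cost/benefit placement sketch; not the AnyScale framework.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        compute_s: float    # estimated execution time at this site (seconds)
        transfer_s: float   # time to ship inputs/outputs to it (seconds)
        energy_j: float     # estimated battery drain on the phone (joules)

    def choose_site(sites, battery_fraction):
        """Pick the site with the lowest weighted cost.

        A low battery inflates the weight on energy, shifting the decision
        towards offloading to a server or the cloud.
        """
        energy_weight = 1.0 + 4.0 * (1.0 - battery_fraction)
        return min(sites, key=lambda s:
                   s.compute_s + s.transfer_s + energy_weight * s.energy_j)

    sites = [
        Site("phone",  compute_s=8.0, transfer_s=0.0, energy_j=5.0),
        Site("server", compute_s=0.5, transfer_s=1.2, energy_j=0.4),
    ]
    print(choose_site(sites, battery_fraction=0.15).name)  # -> server
    ```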

8 Organizations, page 1 of 1
