
A fundamental aspect of many large-scale systems is that their inputs are inherently "dynamic" in nature. Indeed, this is the case with social networks like Facebook and Twitter, transport networks that are tracked in Google Maps, and the data centres that support increasingly popular cloud computing services. All these systems need to cope with input data that changes very rapidly over time. Friendship links are created and destroyed in social networks every moment, the congestion on any given road in a transport network changes from one time of day to another, and a cloud computing system like Microsoft Azure has to serve users that arrive and depart in an online manner. This leads to one of the key challenges in dealing with "big data" under limited computational resources: how can an algorithm quickly update the solution to a computational problem after observing an update (i.e., a change in its input)? A naive solution would be to recompute the solution from scratch after every update. Can one do significantly better than this naive approach? The project will lead to major advances in our understanding of "dynamic algorithms", an important research area within theoretical computer science that is concerned with precisely this question. The project will consist of three strands of work. The first two strands will develop a unified framework for dynamic algorithm design for fundamental computational problems, by exporting popular paradigms for designing efficient static algorithms into the dynamic world. Specifically, they will focus on problems that admit fast primal-dual and greedy algorithms in the static setting, and convert these into fast algorithms in the dynamic setting. This will improve the state-of-the-art dynamic bounds for multiple fundamental computational problems such as maximum matching, packing/covering linear programs, maximal independent set and finding dense subgraphs. The third strand will consider dynamic resource allocation problems that are of relevance to data centres and cloud systems, and will build efficient dynamic algorithms for these problems. The project deals with a fundamental research topic and will contribute towards significant advances in this important research area within theoretical computer science.
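To illustrate the gap between recomputing from scratch and genuinely dynamic updates, the following minimal sketch maintains a maximal matching (the easy greedy relaxation of maximum matching) under edge insertions and deletions. It is an illustrative toy, not one of the project's algorithms; the class and method names are our own invention. The point is that each update is handled in time proportional to a vertex degree, whereas recomputing a maximal matching from scratch would touch the whole graph.

```python
from collections import defaultdict

class DynamicMaximalMatching:
    """Toy dynamic algorithm: maintain a maximal matching of an undirected
    graph under edge insertions/deletions, without recomputing from scratch.
    Invariant: no edge has both endpoints unmatched."""

    def __init__(self):
        self.adj = defaultdict(set)  # adjacency lists
        self.mate = {}               # vertex -> its matched partner

    def insert_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        # Only a new edge between two free vertices can violate
        # maximality; match it greedily.
        if u not in self.mate and v not in self.mate:
            self.mate[u], self.mate[v] = v, u

    def delete_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # Only deleting a matched edge frees vertices; try to rematch
        # each freed endpoint among its remaining neighbours.
        if self.mate.get(u) == v:
            del self.mate[u], self.mate[v]
            self._rematch(u)
            self._rematch(v)

    def _rematch(self, u):
        for w in self.adj[u]:
            if w not in self.mate:
                self.mate[u], self.mate[w] = w, u
                return
```

In this sketch each update costs time proportional to a vertex degree; the research challenge the project addresses is obtaining much stronger update guarantees, and for harder problems such as maximum (rather than maximal) matching.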
The Centre's themes align with the 'Towards A Data Driven Future' and 'Enabling Intelligence' priority areas, meeting the needs identified by UKRI to provide a highly skilled, and in-demand, workforce focused on ensuring positive, human-centred benefits accrue from innovations in data-driven and intelligence-based systems. The Centre has a distinct and methodologically challenging "people-first" perspective: unlike an application-orientated approach (where techniques are applied to neatly or simplistically defined problems, sometimes called "solutionism"), this lens will ensure that intense, multi-faceted and iterative explorations of the needs, capabilities and values of people, and wider societal views, challenge and disrupt computational science. In a world of big data and artificial intelligence, the precious smallness of real individuals, with their values and aspirations, is easily overlooked. Even though the impact of data-driven approaches and intelligence is only beginning to be felt at a human scale, there are already signs of concern over what these will mean for life, with governments and others worldwide addressing implications for education, jobs, safety and indeed even what is unique in being human. Sociologists, economists and policy makers of course have a role in ensuring positive outcomes for people and society from data-driven and intelligence systems; but computational scientists have a pivotal duty too. Our viewpoint, then, will always see the human as a first-class citizen in the future physical-digital world, never perceiving themselves as outwitted, devalued or marginalised by the expanding capabilities of machine computation, automation and communication. Swansea and the wider region of Wales are a place and community where new understandings of data science and machine intelligence are being formed within four challenging contexts defined in the Internet Coast City Deal: Life Science and Well-being; Smart Manufacturing; Smart and Sustainable Energy; and Economic Acceleration. Studies commissioned by the City Deal and BEIS evidence the science and innovation strengths of Swansea and the region in these areas, and indicate how transformational investment in these areas will be for the region and the UK. Our Centre will, then, immerse cohorts in these contexts to challenge them methodologically and scientifically. The use of data-driven and intelligence systems in each of the four contexts gives rise to security, privacy and wider ethical, legal, governance and regulatory issues, and our Centre also has a cross-cutting theme to train students to understand, accommodate and shape current and future developments in these regards. Cohort members will work to consider how the Centre's challenge themes direct and drive their thinking about data and intelligence, benefitting from both the multidisciplinary team that has built strong research agendas and connections with each of the contexts and the rich set of stakeholders that our Centre has assembled. Importantly, a process of pivoting between challenge themes will be applied: insights, methods and challenges from one theme and its research projects will be tested and extended in others with the aim of enriching all. These, along with several other mechanisms (such as intra- and inter-cohort sandpits and side projects), are designed to develop a powerful bonding and shaping "cohort effect".
The need for and value of our Centre is evidenced by the substantial external industrial investment we have secured: £1,750,000 in cash and £4,136,050 in kind (total: £5,886,050). These partners and stakeholders have helped create the vision and detail of the proposal and include: Vint Cerf ("father of the internet" and Vice President of Google); the NHS; Pfizer; Tata Steel; Ford; QinetiQ; McAfee; Ordnance Survey; Facebook; IBM; Microsoft; Fujitsu; the Worshipful Company of IT Spiritual and Ethical Panel; and Vicki Hanson (CEO, Association for Computing Machinery).
Strategic, sustained action is now needed to avoid further negative consequences of climate change and to build a greener, cleaner and fairer future. According to the Intergovernmental Panel on Climate Change, the rise in global temperature is largely driven by total carbon dioxide emissions over time. In order to avoid further global warming, governments internationally agreed in the Paris Agreement to work towards a balance between emissions and greenhouse gas removal (GGR), known as 'net zero'. In June 2019 the UK committed to reaching net zero emissions by 2050, making it the first G7 country to legislate such a target. Transitioning to net zero means that we will have to remove as many emissions as we produce. Much of the focus of climate action to date has been on reducing emissions, for example through renewable power and electric vehicles. However, pathways to net zero require not just cutting fossil fuel emissions but also turning the land into a net carbon sink and scaling up new technologies to remove and store greenhouse gases. This will require new legislation to pave the way for investment in new infrastructure and businesses expected to be worth billions of pounds a year within 30 years. This challenge has far-reaching implications for technology, business models, social practices and policy. GGR has been much less studied, developed and incentivised than actions to cut emissions. The proposed CO2RE Hub brings together leading UK academics with a wide range of expertise to co-ordinate a suite of GGR demonstration projects to accelerate progress in this area. In particular, the Hub will study how we can (1) reduce technology costs so that GGR becomes economically viable; (2) ensure industry adopts the concept of net zero in a way that will maintain and create jobs; (3) put in place sensible policy incentives; (4) make sure there is a social licence for GGR (unlike fracking or nuclear); (5) set up regulatory oversight of the environmental sustainability and risks of GGR; (6) understand what is required to achieve GGR at large scale; and (7) ensure the skills and knowledge required for all this to happen are in place. Building on extensive existing links to stakeholders in business, Government and NGOs, the Hub will work extensively with everyone involved in regulating and delivering GGR to ensure our research provides solutions to strategic priorities. We will also encourage the teams working on demonstrator technologies to think responsibly about the risks, benefits and public perceptions of their work, and to consider the full environmental, social and economic implications of implementation from the outset. CO2RE will seek to bring the UK GGR community as a whole closer together, functioning as a gateway to UK inter-disciplinary research expertise on GGR. We will keep stakeholders informed, and stay informed, about the latest developments nationally and internationally, and reach out to engage the wider public. In doing so we will be able to respond to a rapidly evolving landscape, recognising that technical and social change are not separate but happen together. To accelerate and achieve meaningful change, we will be guided by consultation with key decision-makers and the general public, and will set up a £1m flexible fund to respond to priorities that emerge, with the help of the wider UK academic community. Ultimately we will help the UK and the world understand how GGR can be scaled up responsibly as part of climate action to meet the ambition of net zero.
Testing is a crucial part of any software development process. Testing is also very expensive: common estimates put the cost of software testing at around 50% of the average development budget. Our society increasingly depends on a working information infrastructure for more and more aspects of civic, commercial, and social life, while software at the same time becomes ever more complex. For example, a modern car has up to 100 million lines of software code, and software errors can easily lead to fatal consequences. Improving techniques to identify errors in software is therefore of utmost importance. Manual testing is common practice in software development. As manually testing a program is a laborious and error-prone task, automation is desirable. However, automation requires the user to specify the correct behaviour up-front in terms of a specification, or later by adding test oracles to automatically generated tests; both alternatives are difficult. This problem is obscured because test quality is usually measured with oracle-agnostic code coverage metrics. In truth, however, a test without a good oracle cannot find software bugs. This is the oracle problem, one of the longest-standing and greatest remaining challenges in software testing. As writing specifications and writing test oracles are both difficult and must be done manually, this proposal aims to push automation further by exploring the middle ground: the novel concept of an oracle template allows developers to specify what should be tested and checked but, crucially, does not require specifying the expected behaviour. Instead, automated test generation instantiates user-specified oracle templates into concrete tests with oracles, and the developer decides case by case about correctness. Thus, programs can be tested without the developer needing to write a specification or having to suffer through seemingly purposeless generated tests. Because test generation is driven by oracles, all tests have a purpose and contain the essential oracles required to be effective at finding software bugs. The novel concept of oracle templates requires extending the current state of the art in test generation, as current techniques either assume the existence of an automated oracle (e.g. a specification) or focus exclusively on the code. This creates three challenges, which will be addressed in this project:
-- Existing code-based testing techniques focus on reaching points in the code. This project will define the concept of oracle templates, and will explore test generation based on oracle templates as a search problem. Given an oracle template, search-based testing techniques will automatically create instances, which are test cases with oracles.
-- Systematic testing is traditionally driven by the idea that a good test set covers all the code, which completely ignores the test oracle problem. This project will define systematic criteria and corresponding search-based test generation techniques to thoroughly test programs based on oracle templates. These criteria will ensure coverage of oracle templates, but will also ensure that the code is executed and checked by oracles (e.g. by applying mutation and data-flow analysis).
-- It is impossible to take the human out of the software testing loop completely. Oracle templates are an attempt at minimising the human effort, but the task of writing oracle templates still requires manual effort. Therefore, this project will explore strategies to automatically synthesise oracle templates based on standard testing patterns and usage examples.
Ultimately, a developer would have all tests and oracles generated automatically at the click of a button, leaving only the task of confirming correctness of the produced examples. Success in addressing these challenges will be measured using automated experiments, controlled studies with student subjects, and industrial case studies at Google and Microsoft.
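As a concrete illustration of the oracle-template idea, consider the following hedged sketch. Everything in it (the function instantiate_template, the workflow itself) is a hypothetical stand-in invented for exposition, not an existing tool's API: the developer states what should be checked (here, the return value of the unit under test), a generator supplies concrete inputs, the observed behaviour is captured as a candidate oracle, and the developer confirms or rejects each generated test case by case.

```python
import random

def sort_list(xs):
    """The unit under test."""
    return sorted(xs)

def instantiate_template(func, func_name, inputs):
    """Hypothetical sketch: instantiate an oracle template ('check func's
    return value on <INPUT>') into concrete executable tests by running
    the function and capturing its observed output as the candidate
    oracle.  No expected behaviour is specified up-front."""
    tests = []
    for xs in inputs:
        observed = func(list(xs))  # observed, not a priori expected, output
        tests.append(f"assert {func_name}({xs!r}) == {observed!r}")
    return tests

# Randomly generated inputs stand in for search-based test generation.
rng = random.Random(0)
inputs = [[rng.randint(-9, 9) for _ in range(rng.randint(0, 5))]
          for _ in range(3)]

for test in instantiate_template(sort_list, "sort_list", inputs):
    print(test)  # the developer confirms each concrete test+oracle in turn
```

The point of the sketch is the division of labour: the template fixes what is checked, automated generation fixes the concrete inputs and observed outputs, and only the final judgement of correctness remains manual.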
Security systems break because design practices focus too much on mechanisms, at the expense of clearly defined properties. The vision of this research is to bring about a shift of emphasis to highlight the properties that security systems are expected to provide. This will be done by developing methods for the verification of security systems. I will focus on a selection of interconnected real-world problems that are of great importance to society, but that are currently in need of greater industry-academia cooperation. The combination of fundamental research with close collaboration with industry, government and users is expected to achieve significant results and impact. I will develop and apply new methods and techniques to create and analyse solutions in three areas:
* Trusted computing is an industry-led technology that aims to root security in hardware. Since its launch, academics including me have discovered significant issues that threaten to undermine its potential to provide a range of security benefits. This has arisen because industry does not have the expertise to analyse the protocols.
* Electronic voting is an application currently attracting significant interest from government and industry, but numerous security issues have resulted in a loss of confidence among politicians, commentators and the public alike.
* Privacy for citizens using electronic services is hotly debated by journalists, user groups and politicians, but has been substantially eroded by new technologies and policies.
In these three areas, there is currently the risk of significant waste of resources on inappropriate or unaccepted technologies, resulting in user disempowerment and exclusion. The outcomes of this fellowship are intended to address that risk. A distinguishing feature of the proposal is the substantial engagement with industry and user groups that are active in these three areas. As a result of discussions with them, several organisations have committed significant resources, including cash contributions, manager and developer time, and access to users and experts.