Powered by OpenAIRE graph

V&A

Victoria and Albert Museum
Country: United Kingdom
62 Projects, page 1 of 13
  • Funder: UK Research and Innovation; Project Code: AH/W003244/1
    Funder Contribution: 2,941,950 GBP

    The capacity to make strong connections between historical objects and sources lies at the heart of this project, as it does in the everyday museum and historical practices it is designed to support. Curators creating displays combine artefacts, images, audio-visual materials and histories. Family and local historians connect records of ancestors and localities to establish their genealogy or to understand the past of where they live. Academic historians patiently and critically connect a diverse range of archive sources with the existing literature to tell new stories about the past. All rely on connecting different fragments of the past as they create the tapestries of narrative that constitute our local and national histories. The Congruence Engine will create the prototype of a digital toolbox for everyone fascinated by the past, allowing them to connect an unprecedented range of items from the nation's collection to tell the stories about our industrial past that they want to tell. Until now, we have become acclimatised to a world of research where it has only been possible to work with a selection of the potentially relevant historical source material for any historical investigation we want to undertake. In our information society, we expect to go to a search engine and find a record of anything, yet such searches often disappoint, for two main reasons. First, because of the tyranny of free-text search: ranked result lists favour the results of previous searches and cannot be guaranteed to include the full set of what is relevant to the search. Second, because the records of so very many of our heritage collections are thin, inconsistent, or kept in institutional siloes hidden from outside access. This project explicitly works with such collections, which are generally represented by weak data.
In place of the two-dimensional ranked list of search engines, we aim, with 'The Congruence Engine', to model a world in which users will be able to explore data neighbourhoods (technically 'knowledge graphs') where a great diversity of information about heritage items deeply relevant to their investigations will be readily to hand - museum objects, archive documents, pictures, films, buildings, and the records of previous investigations and relevant activity. Building on the successful experimentation of 'Heritage Connector' (the Science Museum's TaNC foundation project), this major project will develop a repertoire of prototype discovery tools to access the industrial and related collections brought into the study from our investigating and collaborating organisations and partners. To achieve this breakthrough in collections accessibility, it will bring together in collaboration a unique combination of skills and interests. Digital researchers will work with professional and community historians and curators to address real-world historical investigations of Britain's industrial past. Through 27 months of iterative exploration of three industrial sectors - textiles, energy and communications - the digital researchers will work with the historians and curators, tuning the software to make it responsive to user needs. They will apply computational and artificial intelligence techniques - including machine learning and natural language processing (for example, named entity recognition) and a suite of bespoke entity-linking routines - to create and refine datasets, provide routes between records and digital objects such as scans and photographs, and create the tools by which the participants - who will not need to be digital experts - will be able to enjoy and employ the sources opened to them in the construction of narratives.
These narratives will be expressed in the project's mobile digital exhibition space, on its website and a variety of conventional popular and academic outputs. Software will be made available via GitHub; we will produce 'how to' guides.
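The entity-linking idea described above - grouping records from different collections into shared "data neighbourhoods" - can be sketched minimally. This is an illustrative example only, not the Congruence Engine's actual pipeline: the record fields, the crude string normalisation, and the sample entities are all assumptions invented for this sketch.

```python
# Illustrative sketch: linking heterogeneous catalogue records into a tiny
# "knowledge graph" keyed by normalised entity names. Assumes each record
# already carries a list of entities (e.g. produced upstream by named
# entity recognition); the field names here are hypothetical.
from collections import defaultdict

def normalise(name: str) -> str:
    """Crude entity normalisation: lowercase, keep letters/digits/spaces."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

def build_graph(records):
    """Group records from different collections under shared entities.

    Each record is a dict with 'collection', 'id' and 'entities'.
    Returns a mapping: normalised entity -> list of (collection, id) pairs.
    """
    graph = defaultdict(list)
    for rec in records:
        for ent in rec["entities"]:
            graph[normalise(ent)].append((rec["collection"], rec["id"]))
    return dict(graph)

records = [
    {"collection": "museum", "id": "obj-1", "entities": ["Platt Brothers", "Oldham"]},
    {"collection": "archive", "id": "doc-9", "entities": ["platt brothers"]},
]
graph = build_graph(records)
# The museum object and the archive document now share the same
# neighbourhood under the normalised entity "platt brothers".
```

A real system would replace the string normalisation with the kind of bespoke entity-linking routines the project describes, handling variant spellings, disambiguation and weak or inconsistent records.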

  • Funder: UK Research and Innovation; Project Code: EP/R013993/1
    Funder Contribution: 100,801 GBP

    Smart environments are designed to react intelligently to the needs of those who visit, live and work in them. For example, the lights can come on when it gets dark in a living room, or a video exhibit can play in the correct language when a museum visitor approaches it. However, we lack intuitive ways for users without technical backgrounds to understand and reconfigure the behaviours of such environments, and there is considerable public mistrust of automated environments. Whilst there are tools that let users view and change the rules defining smart environment behaviours without programming knowledge, they have not seen wide uptake beyond technology enthusiasts. One drawback of existing tools is that they pull attention away from the environment in question, requiring users to translate from real world objects to abstract screen-based representations of them. New programming tools that allow users to harness their understandings of and references to objects in the real world could greatly increase trust and uptake of smart environments. This research will investigate how users understand and describe smart environment behaviours whilst in situ, and use the findings to develop more intuitive programming tools. For example, a tool could let someone simply say that they want a lamp to come on when it gets dark, and point at it to identify it. Speech interfaces are now widely used in intelligent personal assistants, but the functionality is largely limited to issuing immediate commands or setting simple reminders. In reality, there are many challenges with using speech interfaces for programming tasks, and idealised interactions such as the lamp example are far from simple.
In many cases, the research used to design programming interfaces for everyday users is carried out in research labs rather than in real home or workplace settings, and the people invited to take part in design and evaluation studies are often university students or staff, or people with an existing interest or background in technology. These interfaces often fall down once they are taken out of the small set of toy usage scenarios in which they were designed and tested and given to everyday users. This research investigates the challenges of using speech for programming, and evaluates ways to mitigate them, including conversational prompts, use of gesture and proximity data to avoid ambiguity, and providing default behaviours that can be customised. In this project, we focus primarily on smart home scenarios, and we will carry out our studies in real domestic settings. Speech interfaces are increasingly used in these scenarios, but there is no support for querying, debugging and altering the behaviours through speech. We will recruit participants with no programming background, including older and disabled users, who are often highlighted as people who could benefit from smart home technology but are rarely included in studies of this sort. We will carry out interviews in people's homes to understand how they naturally describe rules for smart environments, taking into account speech, gesture and location. We will look for any errors or unclear elements in the rules they describe, and investigate how far prompts from researchers can help them express the rules clearly. We will also explore how far participants can customise default behaviours presented to them. This data will inform the creation of a conversational interface that harnesses the approaches that worked with human prompts, which we will test in real-world settings.
Some elements of the system will be controlled by a human researcher, but the system will simulate the experience of interacting with an intelligent conversational interface. This will allow us to identify fruitful areas to pursue in developing fully functional conversational programming tools, which may also be useful in museums, education, agriculture and robotics.
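The "lamp comes on when it gets dark" behaviour discussed above is an instance of a trigger-action rule, the representation most end-user smart-home tools build on. The sketch below is a minimal illustration of that idea; the rule structure, sensor names and device names are hypothetical and are not the project's actual interface.

```python
# Minimal trigger-action rule engine sketch. A rule pairs a condition over
# sensor readings with an action on devices; all names are illustrative.

def make_rule(condition, action):
    """Bundle a predicate over sensor state with an action on devices."""
    return {"condition": condition, "action": action}

def evaluate(rules, sensors, devices):
    """Fire every rule whose condition holds for the current sensor state."""
    for rule in rules:
        if rule["condition"](sensors):
            rule["action"](devices)
    return devices

rules = [
    # "I want the lamp to come on when it gets dark", as a trigger-action rule.
    make_rule(lambda s: s["light_level"] < 10,
              lambda d: d.__setitem__("lamp", "on")),
]

devices = evaluate(rules, {"light_level": 3}, {"lamp": "off"})
# With a low light reading, the rule fires and the lamp is switched on.
```

The research challenge the project describes is precisely the gap between this tidy representation and how people actually express such rules in speech, with ambiguity, gesture and location all in play.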

  • Funder: UK Research and Innovation; Project Code: AH/J005142/1
    Funder Contribution: 3,939,590 GBP

    London is a complex environment for Knowledge Exchange and cultural and creative interactions. It faces distinctive challenges as it attempts to sustain global competitiveness in the Creative Economy, particularly in terms of digital innovation. Creativeworks London builds on the London Centre for Arts and Cultural Exchange (LCACE), a seven-year partnership of nine London-based Higher Education Institutions: Birkbeck College, City University, the Courtauld Institute, Goldsmiths College, Guildhall, King's College London, Queen Mary University of London, Royal Holloway and University of the Arts. We will be joined by smaller specialist organisations such as the University of London's Centre for Creative Collaboration, Central School of Speech and Drama, Roehampton, SOAS, Kingston and Trinity Laban Conservatoire of Music and Dance, and by major cultural organisations such as the BBC, the British Museum, the V&A and the British Library. We will be liaising with the London Mayor's office and the Tech City Investment Company (part of UK Trade and Investment), and UK-wide groups such as the Creative and Cultural Skills Council. We will also be working closely with industry partners, both large and small, including IBM, Playgen/Digital Shoreditch, Mediaclarity and Bellemedia. This enables the Hub to provide a step-change in the multiple and often fragmented approaches to London's Creative Economy and to provide crucial Arts and Humanities interventions into the sector. Crucially, the Hub will also ensure that the importance of these interventions is widely recognised by business, policy-makers and government. To do so, it will undertake research into London's previous and current attempts to implement creative economy strategies; investigate the special requirements of London's digital economy; and examine the relationship between London audiences' live and digital experiences of performances and artefacts.
The Hub's Knowledge Exchange programme focuses on 'Creative Vouchers' where Arts & Humanities researchers will offer a range of services (such as historical information that the Media would like to access, policy overviews, IP advice, digital solutions, alternative approaches to business models or practices) which can be accessed by SMEs. The scheme will also allow us to track the sector's changing needs, feeding back into our research into London's distinctive creative economy. There will also be a 'People Exchange Scheme' for both postgraduate researchers who want to experience industry and entrepreneurs who would benefit from a period of time within an HEI environment. The combination of excellent research and innovative KE will ensure that Creativeworks London provides a strategic overview and network support. This will be essential if London, and hence the UK, is to cultivate entrepreneurial capacity and facilitate new routes to markets in inter-related fields such as digital media, music, fashion and the visual arts.

  • Funder: UK Research and Innovation; Project Code: AH/X006719/1
    Funder Contribution: 267,600 GBP

    This programme will develop a cohort of future leaders in the cultural heritage sector by supporting Early Career Research Fellows to develop ambitious and innovative research projects, buttressed by a robust and expansive career development and mentoring programme, in partnership with the IRO community and wider GLAM sector. The programme is designed to address current shortcomings in the sector, most notably the break in the pipeline after the end of doctoral students' studies, by providing a cohort of Early Career Fellows with a thorough grounding in key issues in the cultural and heritage sector and equipping them with the tools, techniques and confidence to navigate IROs and GLAM sector organisations as researchers. It seeks to expand and retain diversity and build research capacity in the sector, producing high-quality, innovative research with public benefit. A Coordination Team, based at the V&A, will respond to the needs of both host organisations and individual Fellows through three workstreams: I) Supporting the Development and Commissioning of Fellowship Proposals; II) Designing and Delivering a Cohort Development Programme; III) Evaluating and Reporting. The programme activities will seek to balance the expertise of the Coordination Team in designing a programme of events and activities that respond to GLAM sector and future leadership needs with the flexibility needed to respond to the collective needs of the Fellowship cohort and the individual Fellows themselves. A responsive grants scheme will be offered to enable Fellows to build on and expand their individually- and collectively-led opportunities. Equality Impact Assessments will be conducted throughout the planning of the activities, considering accessibility needs, accommodating different styles of learning, and additional considerations such as caring responsibilities. This will guide our approach to the balance between in-person and remote/hybrid events.
Through the delivery and evaluation of this pilot programme, we aim to create a template for future research fellowships that can be rolled out across the cultural and heritage sector.

  • Funder: UK Research and Innovation; Project Code: AH/X002241/1
    Funder Contribution: 284,117 GBP

    The brushstroke - in its various manifestations - is the only communication tool that is encountered in paintings and drawings across generations. The production of a stroke involves a complex interplay between different perceptual, cognitive and physical processes. Its reproduction with computational and robotic technologies provides us with the opportunity to study and better understand these processes. The project "Embodied Agents in Contemporary Visual Art'' (EACVA) approaches this study through a multi-disciplinary collaboration between artists, philosophers, sociologists, psychologists, as well as computer art and robotics engineers (website of the project accessible at www.eacva.co.uk). The collaboration will unfold throughout artistic residencies and workshops, during which we will develop a methodology informed by our respective fields of expertise while also producing artworks with state-of-the-art robotic painting and drawing systems. The software and tools, developed during this period in close collaboration with the artists involved, will be open source, thereby contributing to the growing community of artists and researchers working on artistic applications of robotics. Lastly, the resulting artworks, texts, and systems will be presented in the form of a public-facing exhibition at Goldsmiths College, University of London. The exhibition will include the presentation of live performance installations with several robots, exposing their internal representations and decision-making processes with the aim of demystifying machine and computer-driven creation. By combining state-of-the-art robotic systems with historical contexts and didactic curatorial methodologies, we will offer the public an informed insider view into the creative potential of machines. The show will provide a unique opportunity to further investigate our research questions by gathering quantitative and qualitative data through surveys. 
We will dedicate a post-production period to the creation of a printed catalogue documenting the artworks and their production, and we will prepare publications relevant to the diverse disciplines represented within our collaborative team.

