
UNIVERSITY OF WOLVERHAMPTON

56 Projects, page 1 of 12
  • Funder: UK Research and Innovation Project Code: 133891
    Funder Contribution: 58,932 GBP

    Awaiting Public Project Summary

  • Funder: UK Research and Innovation Project Code: 508118
    Funder Contribution: 36,300 GBP

    To support the development and expansion of hosted services for the provision of online services.

  • Funder: UK Research and Innovation Project Code: 509705
    Funder Contribution: 63,526 GBP

To develop new environmental DNA (eDNA) testing for the presence of a range of protected and invasive species, and for species-biodiversity concerns, in freshwater and marine environments.

  • Funder: UK Research and Innovation Project Code: 509435
    Funder Contribution: 108,900 GBP

To develop a new boom and arm design for MVC liquid waste tankers, improving ergonomics, usage, operations, functionality, styling and cost.

  • Funder: UK Research and Innovation Project Code: 10017288
    Funder Contribution: 59,899 GBP

    According to a recent survey by global analytics firm FICO and Corinium, 65% of companies cannot explain how Artificial Intelligence (AI) model decisions/predictions are made, and poor data causes an average of 11.8 million per year in financial waste. In recent years many companies have invested in AI applications, yet have not given responsible and well-governed AI the priority it deserves. Most of these companies (particularly small and medium businesses) will choose AI-as-a-Service to save cost, which provides ready-to-integrate AI functionality without the need for in-house expertise. The big challenge is: how much trust should be placed in an output produced by an AI-as-a-Service platform? Several studies have shown that the standard AI algorithms currently in use are not suitable for regulated services because they lack security, transparency, reliability and explainability. The vision of this project is to develop a secure and trustworthy AI platform for AI developers and data scientists, which provides a scoring mechanism to measure the quality and trust levels of datasets and AI/ML algorithms during the development and deployment phases. The TrustMe platform will run as a locally hosted web application with features for designing, developing, and implementing explainable and trustworthy AI applications. The platform also offers a data quality score via a Quality of Data (QoD) estimator. Using the QoD feature, end users can obtain the quality level of their data along with auto-generated reports highlighting all bad records. QoD also offers an automated solution to enrich data quality, or to perform data processing according to pre-defined rules that data admins can edit. To deal with possible bias within training/testing data samples, TrustMe offers Quality of Training/Testing (QoT) toolkits.
    This tool will help developers automatically generate training/testing samples, accounting for over- and under-sampling of imbalanced data arising from the data acquisition process. We envisage that within the next 3 years the TrustMe Score and its generated reports will become a universal score that all organisations will want to use to prove how secure and trustworthy their AI platforms are.

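The QoD idea described above — score a dataset by how many records pass a set of editable validation rules, and report the failing records — can be sketched in a few lines. This is a hypothetical illustration only: the function name `quality_of_data`, the rule names, and the report format are assumptions, not the TrustMe platform's actual API.

```python
# Hypothetical sketch of a QoD-style data-quality score: the fraction of
# records that pass every rule in an editable rule set, plus a report of
# the "bad" records and which rules they broke.

def quality_of_data(records, rules):
    """Return a score in [0, 1] and a report of records failing any rule.

    records: list of dicts, one per row
    rules:   dict mapping a rule name to a predicate over a record
    """
    bad = []
    for i, rec in enumerate(records):
        failed = [name for name, check in rules.items() if not check(rec)]
        if failed:
            bad.append({"row": i, "record": rec, "failed_rules": failed})
    score = 1.0 - len(bad) / len(records) if records else 0.0
    return score, bad


# Example: two editable rules applied to a toy dataset.
rules = {
    "age_in_range": lambda r: isinstance(r.get("age"), int) and 0 <= r["age"] <= 120,
    "name_present": lambda r: bool(r.get("name")),
}
data = [
    {"name": "Ada", "age": 36},
    {"name": "", "age": 41},      # fails name_present
    {"name": "Bob", "age": -5},   # fails age_in_range
    {"name": "Cy", "age": 29},
]
score, report = quality_of_data(data, rules)
print(score)  # 0.5: two of the four records pass all rules
```

Because the rules live in a plain dict of predicates, a data admin can add, edit, or remove them without touching the scoring logic, which mirrors the "pre-defined and editable rules" behaviour the summary describes.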
