ZENODO
Other literature type . 2023
License: CC BY
Data sources: ZENODO
Other versions: Report (2024, CC BY) and Presentation (2023, CC BY), via ZENODO and Datacite; 5 versions in total.
Collective Mind: toward a common language to facilitate reproducible research and technology transfer

Authors: Fursin, Grigori

Abstract

You can cite this project using the following arXiv paper: https://arxiv.org/abs/2406.16791. This is a keynote presentation by the author of the Collective Mind framework at the 1st ACM Conference on Reproducibility and Replicability (ACM REP'23). [Video: ACM YouTube channel] [GitHub project] [Related reproducibility initiatives] The Collective Mind framework (CM & CM4MLOps) was developed by Grigori Fursin and donated to MLCommons to benefit everyone and to continue further development as a collaborative community initiative.

During the past 10 years, we have considerably improved the reproducibility of experimental results from published papers by introducing the artifact evaluation process with a unified artifact appendix and reproducibility checklists, Jupyter notebooks, containers, and Git repositories. On the other hand, our experience reproducing more than 150 papers shows that it can take weeks or even months of painful and repetitive interactions between teams to reproduce artifacts. This effort includes decrypting numerous README files, examining ad-hoc artifacts and containers, and figuring out how to reproduce computational results. Furthermore, snapshot containers make it hard to optimize algorithms' performance, accuracy, power consumption, and operational costs across the diverse and rapidly evolving software, hardware, and data used in the real world.

In this talk, I explain how our practical artifact evaluation experience and the feedback from researchers and evaluators motivated me to develop a simple, intuitive, technology-agnostic, and English-like scripting language called Collective Mind (CM) with a collection of automation recipes for MLOps, DevOps, and MLPerf (the CM4MLOps repository with CM scripts). It helps automatically adapt any given experiment to any software, hardware, and data while generating unified README files and synthesizing modular containers with a unified API.

I donated CM and CM4MLOps to MLCommons as part of my Collective Knowledge project to continue developing them as a collaborative community initiative. My long-term goal is to help the community facilitate reproducible AI/ML systems research, minimize manual and repetitive benchmarking and optimization efforts, reduce the time and cost of reproducible research, simplify technology transfer to production, and learn how to co-design more efficient and cost-effective AI systems. I also present several recent use cases of how CM helps MLCommons and the Student Cluster Competition run complex MLPerf benchmarks, and helps artifact evaluation at ACM/IEEE conferences make it easier to reproduce results from research papers. I conclude with our development plans, new challenges, possible solutions, and upcoming reproducibility and optimization challenges powered by the Collective Knowledge Playground and CM: access.cKnowledge.org.

I would like to thank all CK and CM contributors for their help and support since 2014! Please check this white paper for more details: https://arxiv.org/abs/2406.16791.
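The abstract describes CM as a collection of automation recipes selected by tags behind a unified API. The idea can be sketched in plain Python. This is a toy illustration only, with hypothetical names (`recipe`, `run_script`, the placeholder dataset path); the real framework is the `cmind` package and the CM4MLOps repository, whose actual API differs:

```python
# Toy sketch of CM-style, tag-based automation recipes (hypothetical names,
# not the real `cmind` API).

RECIPES = []  # global registry of automation recipes


def recipe(*tags):
    """Register a function as an automation recipe under a set of tags."""
    def wrap(fn):
        RECIPES.append({"tags": set(tags), "run": fn})
        return fn
    return wrap


@recipe("detect", "os")
def detect_os(env):
    """Portable step: record the host OS in the shared environment."""
    import platform
    env["OS"] = platform.system()
    return env


@recipe("get", "dataset", "imagenet")
def get_dataset(env):
    """Portable step: resolve a dataset location (placeholder path)."""
    env["DATASET_PATH"] = "/tmp/imagenet"  # placeholder, for illustration
    return env


def run_script(tags, env=None):
    """Unified entry point: run the first recipe matching all given tags."""
    env = dict(env or {})
    for r in RECIPES:
        if set(tags) <= r["tags"]:
            return r["run"](env)
    raise KeyError(f"no recipe matches tags {tags}")


env = run_script(["detect", "os"])
print(env["OS"])
```

Because every recipe shares one entry point and one environment dictionary, a workflow can chain recipes without knowing their internals, which is the property the talk attributes to CM's unified API.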

Keywords

collective mind, llm, collective knowledge, cknowledge, chatgpt, artificial intelligence, mlperf, machine learning, competitions, llm automation, artifact evaluation, replicability, reusability, systems, cTuning, mlcommons, optimization challenges, reproducibility, performance, automation

Impact (BIP!):
  • Citations: 1 (total impact in the research community, based on the underlying citation network)
  • Popularity: Average (current attention, based on the underlying citation network)
  • Influence: Average (overall/total impact, diachronic)
  • Impulse: Average (initial momentum directly after publication)

Usage (OpenAIRE UsageCounts):
  • Views: 514
  • Downloads: 291
Access: Green Open Access