Written and presented by Dagmar Waltemath (University of Rostock) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
Improving the Management of Computational Models -- Invited talk at the EBI (Martin Scharm)
Improving the Management of Computational Models:
storage – retrieval & ranking – version control
More information and slides to download at http://sems.uni-rostock.de/2013/12/martin-visits-the-ebi/
FAIR Data, Operations and Model management for Systems Biology and Systems Me... (Carole Goble)
This document discusses the FAIRDOM consortium's efforts to promote FAIR (Findable, Accessible, Interoperable, Reusable) principles for managing data, operations, and models from systems biology and systems medicine projects. It outlines challenges in asset management for multi-partner, multi-disciplinary projects using multiple formats and repositories. FAIRDOM provides pillars of support including community actions, platforms/tools, and a public project commons to help address these challenges and better enable sharing, reuse, and reproducibility of research assets according to FAIR principles.
FAIRDOM - FAIR Asset management and sharing experiences in Systems and Synthe... (Carole Goble)
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Data management isn’t likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. The multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
The FAIR Guiding Principles for scientific data management and stewardship (http://www.nature.com/articles/sdata201618) have been an effective rallying-cry for EU and USA Research Infrastructures. The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has 8 years of experience of asset sharing and data infrastructure ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (de.NBI, German Virtual Liver Network, UK SynBio centres) and PI's labs. It aims to support Systems and Synthetic Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges of and approaches to sharing, credit, citation and asset infrastructures in practice. I'll also highlight recent experiments in affecting sharing using behavioural interventions.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Presented at COMBINE 2016, Newcastle, 19 September.
http://co.mbine.org/events/COMBINE_2016
FAIR Data and Model Management for Systems Biology (and SOPs too!) (Carole Goble)
MultiScale Biology Network Springboard meeting, Nottingham, UK, 1 June 2015
FAIR Data and model management for Systems Biology
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs and so forth. Don’t stop reading. Yes, data management isn’t likely to win anyone a Nobel prize. But publications should be supported and accompanied by data, methods, procedures, etc. to assure reproducibility of results. Funding agencies expect data (and increasingly software) management retention and access plans as part of the proposal process for projects to be funded. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. And the multi-component, multi-disciplinary nature of Systems Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Data and model management for the Systems Biology community is a multi-faceted challenge, including: the development and adoption of appropriate community standards (and the navigation of the standards maze); the sustaining of international public archives capable of servicing quantitative biology; and the development of the necessary tools and know-how for researchers within their own institutes so that they can steward their assets in a sustainable, coherent and credited manner while minimizing burden and maximising personal benefit.
The FAIRDOM (Findable, Accessible, Interoperable, Reusable Data, Operations and Models) Initiative has grown out of several efforts in European programmes (SysMO and ERASysAPP ERANets and the ISBE ESFRI) and national initiatives (de.NBI, German Virtual Liver Network, SystemsX, UK SynBio centres). It aims to support Systems Biology researchers with data and model management, with an emphasis on standards smuggled in by stealth.
This talk will use the FAIRDOM Initiative to discuss the FAIR management of data, SOPs, and models for Sys Bio, highlighting the challenges multi-scale biology presents.
http://www.fair-dom.org
http://www.fairdomhub.org
http://www.seek4science.org
Being FAIR: FAIR data and model management, SSBSS 2017 Summer School (Carole Goble)
Lecture 1:
Being FAIR: FAIR data and model management
In recent years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship [1] have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. The multi-component, multi-disciplinary nature of Systems and Synthetic Biology demands the interlinking and exchange of assets and the systematic recording of metadata for their interpretation.
Our FAIRDOM project (http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards smuggled in by stealth and sensitivity to asset sharing and credit anxiety. The FAIRDOM Platform has been installed by over 30 labs or projects. Our public, centrally hosted Asset Commons, the FAIRDOMHub.org, supports the outcomes of 50+ projects.
Now established as a grassroots association, FAIRDOM has over 8 years of experience of practical asset sharing and data infrastructure at the researcher coal-face ranging across European programmes (SysMO and ERASysAPP ERANets), national initiatives (Germany's de.NBI and Systems Medicine of the Liver; Norway's Digital Life) and European Research Infrastructures (ISBE) as well as in PI's labs and Centres such as the SynBioChem Centre at Manchester.
In this talk I will explore how FAIRDOM has been designed to support Systems Biology projects and show examples of its configuration and use. I will also explore the technical and social challenges we face.
I will also refer to European efforts to support public archives for the life sciences. ELIXIR (http://www.elixir-europe.org/) is the European Research Infrastructure of 21 national nodes and a hub, funded by national agreements to coordinate and sustain key data repositories and archives for the Life Science community, improve access to them and related tools, support training and create a platform for dataset interoperability. As Head of the ELIXIR-UK Node and co-lead of the ELIXIR Interoperability Platform I will show how this work relates to your projects.
[1] Wilkinson et al (2016) The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3, doi:10.1038/sdata.2016.18
Being Reproducible: SSBSS Summer School 2017 (Carole Goble)
Lecture 2:
Being Reproducible: Models, Research Objects and R* Brouhaha
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in more depth using the FAIRDOM Platform and its support for reproducible modelling. The talk will cover initiatives and technical issues, and raise social and cultural challenges.
Reproducible and citable data and models: an introduction (FAIRDOM)
Prepared and presented by Carole Goble (University of Manchester), Wolfgang Mueller (HITS), Dagmar Waltemath (University of Rostock), at the Reproducible and Citable Data and Models Workshop, Warnemünde, Germany. September 14th - 16th 2015.
This document introduces FAIRDOM, a consortium that provides a platform and services to help researchers organize, manage, share, and preserve research outputs according to FAIR principles. FAIRDOM has been in operation for 10 years and has over 50 installations supporting over 118 projects. It provides tools and services to help researchers collaborate better and integrate their data, models, publications and other research objects. FAIRDOM also works with other organizations and infrastructure providers to support broader research initiatives.
FAIR data and model management for systems biology (FAIRDOM)
Written and presented by Carole Goble (University of Manchester) as part of Intelligent Systems for Molecular Biology (ISMB), Dublin. July 10th - 14th 2015.
Reproducibility, Research Objects and Reality, Leiden 2016 (Carole Goble)
Presented at the Leiden Bioscience Lecture, 24 November 2016, Reproducibility, Research Objects and Reality
Over the past 5 years we have seen a change in expectations for the management of all the outcomes of research – that is the “assets” of data, models, codes, SOPs, workflows. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for scientific data management and stewardship have proved to be an effective rallying-cry. Funding agencies expect data (and increasingly software) management retention and access plans. Journals are raising their expectations of the availability of data and codes for pre- and post- publication. It all sounds very laudable and straightforward. BUT…..
Reproducibility is an R* minefield, depending on whether you are testing for robustness (rerun), defence (repeat), certification (replicate), comparison (reproduce) or transferring between researchers (reuse). Different forms of "R" make different demands on the completeness, depth and portability of research. Sharing is another minefield raising concerns of credit and protection from sharp practices.
In practice the exchange, reuse and reproduction of scientific experiments is dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: the codes fork, data is updated, algorithms are revised, workflows break, service updates are released. ResearchObject.org is an effort to systematically support more portable and reproducible research exchange.
In this talk I will explore these issues in data-driven computational life sciences through examples and stories from initiatives I am involved in, and that Leiden is involved in too, including:
· FAIRDOM, which has built a Commons for Systems and Synthetic Biology projects, with an emphasis on standards smuggled in by stealth and efforts to affect sharing practices using behavioural interventions
· ELIXIR, the EU Research Data Infrastructure, and its efforts to exchange workflows
· Bioschemas.org, an ELIXIR-NIH-Google effort to support the finding of assets.
Reflections on a (slightly unusual) multi-disciplinary academic career (Carole Goble)
Talk given at the School of Computer Science, The University of Manchester, UK Postgraduate Research Symposium 2019
The Carole Goble Doctoral Paper award was given for the first time.
What is Reproducibility? The R* brouhaha (and how Research Objects can help) (Carole Goble)
Presented at the First International Workshop on Reproducible Open Science @ TPDL, 9 Sept 2016, Hannover, Germany
http://repscience2016.research-infrastructures.eu/
This document discusses Research Objects (RO), which provide a framework for bundling, exchanging, and linking resources related to experiments in order to improve reproducibility. The RO framework uses unique identifiers, aggregation, and metadata to group related resources. Real-world examples of ROs include reviewed scientific papers, workflow runs, and Docker images. ROs can help make research fully FAIR (Findable, Accessible, Interoperable, Reusable). Tools and platforms like FAIRDOM, SEEK, and Figshare support the use of ROs.
Reproducible Research: how could Research Objects help (Carole Goble)
Reproducible Research: how could Research Objects help, given at the 21st Genomic Standards Consortium Meeting
Dates: May 20-23, 2019
https://press3.mcs.anl.gov/gensc/meetings/gsc21/
Short talk on Research Objects and their use for reproducibility and publishing in the Systems Biology Commons Platform FAIRDOMHub, and the underlying software SEEK.
NSF Workshop Data and Software Citation, 6-7 June 2016, Boston USA, Software Panel
Findable, Accessible, Interoperable, Reusable Software and Data Citation: Europe, Research Objects, and BioSchemas.org
COMBINE 2019, EU-STANDS4PM, Heidelberg, Germany 18 July 2019
FAIR: Findable, Accessible, Interoperable, Reusable. The “FAIR Principles” for research data, software, computational workflows, scripts, or any other kind of Research Object one can think of, are now a mantra; a method; a meme; a myth; a mystery. FAIR is about supporting and tracking the flow and availability of data across research organisations and the portability and sustainability of processing methods to enable transparent and reproducible results. All this is within the context of a bottom up society of collaborating (or burdened?) scientists, a top down collective of compliance-focused funders and policy makers and an in-the-middle posse of e-infrastructure providers.
Making the FAIR principles a reality is tricky. They are aspirations not standards. They are multi-dimensional and dependent on context such as the sensitivity and availability of the data and methods. We already see a jungle of projects, initiatives and programmes wrestling with the challenges. FAIR efforts have particularly focused on the “last mile” – “FAIRifying” destination community archive repositories and measuring their “compliance” to FAIR metrics (or less controversially “indicators”). But what about FAIR at the first mile, at source and how do we help Alice and Bob with their (secure) data management? If we tackle the FAIR first and last mile, what about the FAIR middle? What about FAIR beyond just data – like exchanging and reusing pipelines for precision medicine?
Since 2008 the FAIRDOM collaboration [1] has worked on FAIR asset management and the development of a FAIR asset Commons for multi-partner researcher projects [2], initially in the Systems Biology field. Since 2016 we have been working with the BioCompute Object Partnership [3] on standardising computational records of HTS precision medicine pipelines.
So, using our FAIRDOM and BioCompute Object binoculars let’s go on a FAIR safari! Let’s peruse the ecosystem, observe the different herds and reflect on where we are for FAIR personalised medicine.
References
[1] http://www.fair-dom.org
[2] http://www.fairdomhub.org
[3] http://www.biocomputeobject.org
Metadata and Semantics Research Conference, Manchester, UK 2015
Research Objects: why, what and how,
In practice the exchange, reuse and reproduction of scientific experiments is hard, dependent on bundling and exchanging the experimental methods, computational codes, data, algorithms, workflows and so on along with the narrative. These "Research Objects" are not fixed, just as research is not “finished”: codes fork, data is updated, algorithms are revised, workflows break, service updates are released. Neither should they be viewed just as second-class artifacts tethered to publications, but as the focus of research outcomes in their own right: articles clustered around datasets, methods with citation profiles. Many funders and publishers have come to acknowledge this, moving to data sharing policies and provisioning e-infrastructure platforms. Many researchers recognise the importance of working with Research Objects. The term has become widespread. However: what is a Research Object? How do you mint one, exchange one, build a platform to support one, curate one? How do we introduce them in a lightweight way that platform developers can migrate to? What is the practical impact of a Research Object Commons on training, stewardship, scholarship, sharing? How do we address the scholarly and technological debt of making and maintaining Research Objects? Are there any examples?
I’ll present our practical experiences of the why, what and how of Research Objects.
Written and presented by Tom Ingraham (F1000), at the Reproducible and Citable Data and Models Workshop, in Warnemünde, Germany. September 14th - 16th 2015.
Aspects of Reproducibility in Earth Science (Raul Palma)
The document discusses aspects of reproducibility in earth science research within the European Virtual Environment for Research - Earth Science Themes (EVEREST) project. The key objectives of EVEREST are to establish an e-infrastructure to facilitate collaborative earth science research through shared data, models, and workflows. Research Objects (ROs) will be used to capture and share workflows, processes, and results to help ensure reproducibility and preservation of earth science research. An example RO is described for mapping volcano deformation using satellite imagery and other data sources. Issues around reproducibility related to data access, software dependencies, and manual intervention in workflows are also discussed.
This document summarizes Professor Carole Goble's presentation on making research more reproducible and FAIR (Findable, Accessible, Interoperable, Reusable) through the use of research objects and related standards and infrastructure. It discusses challenges to reproducibility in computational research and proposes bundling datasets, workflows, software and other research products into standardized research objects that can be cited and shared to help address these challenges.
ROHub is a digital library and management system for research objects (ROs). It enables scientists to create, manage, and share ROs, which are semantic aggregations of related scientific resources, annotations, and research context. ROHub provides APIs and a web portal for scientists to use throughout the research lifecycle. It stores ROs long-term to support reproducibility and allows for monitoring changes to assess quality.
Trust and Accountability: experiences from the FAIRDOM Commons Initiative (Carole Goble)
Presented at Digital Life 2018, Bergen, March 2018. In the Trust and Accountability session.
In recent years we have seen a change in expectations for the management and availability of all the outcomes of research (models, data, SOPs, software etc) and for greater transparency and reproducibility in the method of research. The “FAIR” (Findable, Accessible, Interoperable, Reusable) Guiding Principles for stewardship [1] have proved to be an effective rallying-cry for community groups and for policy makers.
The FAIRDOM Initiative (FAIR Data Models Operations, http://www.fair-dom.org) supports Systems Biology research projects with their research data, methods and model management, with an emphasis on standards and sensitivity to asset sharing and credit anxiety. Our aim is a FAIR Research Commons that blends together the doing of research with the communication of research. The Platform has been installed by over 30 labs/projects and our public, centrally hosted FAIRDOMHub [2] supports the outcomes of 90+ projects. We are proud to support projects in Norway’s Digital Life programme.
2018 is our 10th anniversary. Over the past decade we learned a lot about trust between researchers, between researchers and platform developers and curators and between both these groups and funders. We have experienced the Tragedy of the Commons but also seen shifts in attitudes.
In this talk we will use our experiences in FAIRDOM to explore the political, economic, social and technical practicalities of Trust.
[1] Wilkinson et al (2016) The FAIR Guiding Principles for scientific data management and stewardship Scientific Data 3, doi:10.1038/sdata.2016.18
[2] Wolstencroft, et al (2016) FAIRDOMHub: a repository and collaboration environment for sharing systems biology research Nucleic Acids Research, 45(D1): D404-D407. DOI: 10.1093/nar/gkw1032
Presentation on the Chemical Analysis Metadata Platform (ChAMP) as a new project to characterize and organize metadata about chemical analysis methods. The project will develop an ontology, controlled vocabularies, and design rules.
Citing data in research articles: principles, implementation, challenges - an... (FAIRDOM)
Prepared and presented by Jo McEntyre (EMBL_EBI) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
Written and presented by Carole Goble (University of Manchester) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
The document discusses licensing, citation, and sustainability of intellectual property. It covers different types of licenses for software and data including open source, proprietary, and Creative Commons licenses. It provides resources for choosing an appropriate license, ensuring works are properly cited and credited to help sustain them, and guidelines for repositories, audits, and certifications.
Improving the management of computational models (FAIRDOM)
Written by Martin Scharm (University of Rostock), Ron Henkel (University of Rostock), Dagmar Waltemath (University of Rostock), Olaf Wolkenhauer (University of Rostock, Stellenbosch University), and presented by Martin Scharm (University of Rostock) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
Written and presented by Wolfgang Müller (HITS) as part of the Reproducible and Citable Data and Models Workshop in Warnemünde, Germany. September 14th - 16th 2015.
The two-day Systems Biology Data Management Foundry Workshop brought together 35 participants from 5 countries to improve collaboration among data management practitioners and explore opportunities in systems biology, synthetic biology, and systems medicine. Participants gained a better understanding of different systems through show-and-tell sessions, generated ideas for cross-integration, and discussed establishing a foundry to support developers. Outcomes included forming collaborations and planning for future meetings to continue developing solutions for open, interoperable, and reusable data management.
The webinar discussed FAIRDOM services that can help applicants to the ERACoBioTech call with their data management plans and requirements. FAIRDOM offers webinars on developing data management plans, and their platform and tools can help with organizing, storing, sharing, and publishing research data and models in a FAIR manner by utilizing metadata standards. Different levels of support are available, from general community resources through their hub, to premium customized support for individual projects. Consortia can include FAIRDOM as a subcontractor within the guidelines of the ERACoBioTech call.
Introduction to the hands-on session on "Standards and tools for model management" at ICSB 2015.
Focus on COMBINE standards and on tools for search, version control and archiving. The management platform used is SEEK.
This talk was part of the 2020 Disease Map Modeling Community meeting, covering the steps towards publishing reproducible simulation studies (based on a reused model). Links to different COMBINE guidelines, tutorials and efforts. Grants: European Commission: EOSCsecretariat.eu (831644)
Standards and software: practical aids for reproducibility of computational r... (Mike Hucka)
My presentation during the session titled "Reproducibility of computational research: methods to avoid madness" on Wednesday, 17 September 2014, during ICSB 2014, held in Melbourne, Australia.
This document discusses SED-ML (Simulation Experiment Description Markup Language), a standard for describing computational simulations. SED-ML files contain information like the models, data, simulation settings and algorithms used in an experiment. Using SED-ML allows experiments to be reproduced and shared. The document encourages adopting SED-ML to make research more reproducible and help curation of models in repositories. It also provides an overview of tools that support SED-ML and ways to get involved in its development.
Confessions of an Interdisciplinary Researcher: The Case of High Performance ... (tiberiusp)
Scaling up economics models to run on large input sizes, complex market and agent model settings, and on big computational resource pools is a demanding feat.
This presentation tells you what it takes to work as a computational economist.
This document discusses data and model management in systems biology. It covers topics such as data ownership, metadata, ontologies, standards for encoding models and analyses, and tools for working with systems biology models and data. Standards like SBML, SBGN, SED-ML and COMBINE Archive allow for structured representation, visualization, simulation, and sharing of models and data. Resources like SEEK enable curation, simulation and publication of models in a findable, accessible, interoperable and reusable (FAIR) manner.
SBML, SBML Packages, SED-ML, COMBINE Archive, and more (Mike Hucka)
SBML, SBML Packages, SED-ML, COMBINE Archive, and more is a presentation about standards for representing computational models in systems biology. It introduces SBML (Systems Biology Markup Language) as a format for exchanging models of biological processes between software tools. It describes extensions to SBML called packages that add new modeling constructs. It also briefly mentions related standards like SED-ML and COMBINE Archive.
This document provides an overview of standards and best practices for making computational models reusable through the use of model repositories and standard formats. It discusses the COMBINE initiative for standardizing the encoding of models and simulations. The document encourages authors to make their models and data FAIR (Findable, Accessible, Interoperable, Reusable) by using community standards for publishing, exchanging, and archiving models. Examples of open model repositories and standards-compliant tools and libraries are provided to demonstrate how authors can improve sharing and reuse of their models.
M2CAT: Extracting reproducible simulation studies from model repositories usi... (Martin Scharm)
Martin Scharm presents M2CAT, a workflow to extract reproducible simulation studies from model repositories. It searches the graph database Masymos for relevant models, simulations, and related data. M2CAT then uses the CombineArchive Toolkit to export the selected studies as COMBINE archive files, which provide a standard format for sharing complete simulation experiments. These archives can be explored and modified using the CombineArchive Web interface or other tools. The goal is to make the large amounts of data in repositories more usable and enable researchers to more easily reproduce and build upon existing computational studies.
Slides from the presentation at IDAMO 2016, Rostock. May 2016.
Most scientific discoveries rely on previous or other findings. A lack of transparency and openness led to what many consider the "reproducibility crisis" in systems biology and systems medicine. The crisis arose from missing standards and inappropriate support of standards in software tools. As a consequence, numerous results in low- and high-profile publications cannot be reproduced.
In my presentation, I summarise key challenges of reproducibility in systems biology and systems medicine, and I demonstrate available solutions to the related problems.
M2CAT: Extracting reproducible simulation studies from model repositories usi... (Martin Scharm)
The document discusses M2CAT, a workflow that extracts reproducible simulation studies from model repositories. It searches the model repository Masymos for relevant studies, retrieves the necessary data, and exports it as a COMBINE archive using the CombineArchive Toolkit. This packages all the files into a single container that can be shared, modified and explored using various CAT tools. The workflow aims to make simulation studies more reproducible and accessible by bundling related models, data and descriptions into standardized packages.
This document discusses improving reproducibility of simulation studies in computational biology through better management of simulation models and data. The SEMS project aims to develop standards and tools to link related data such as publications, models, simulations, results and more. This will be achieved by using graph databases and COMBINE standards to integrate data from various repositories. Tools will be created to search, compare, cluster and visualize models and their evolution over time to enable more reproducible and reusable simulation studies.
This document outlines a Ph.D. proposal to examine the use of workflow engines and coupling frameworks in developing hydrologic modeling systems. Specifically, it will develop hydrologic models within the TRIDENT workflow engine and OpenMI coupling framework to evaluate their capabilities for building community modeling systems. The research will include developing component models, building sample workflows, and testing models on three sites. The goal is to contribute optimized hydrologic modeling tools and assess the suitability of these approaches for collaborative hydrologic modeling.
The document presents work from the Department of Systems Biology and Bioinformatics at the University of Rostock on improving reproducibility in systems biology simulations. It discusses developing standards for representing simulations (SED-ML) and modeling provenance to better reproduce published results and enable model reuse. The goals are to specify simulation experiments, develop simulation management methods focusing on model provenance, establish links between model data, and promote reproducible science.
The document discusses the evolution of computational models over time. It provides the example of models of the cell cycle control network in the bacterium Clostridium acetobutylicum that have been developed and expanded on since 1984. The models have become more detailed over time, incorporating new experimental findings and insights. The document also discusses how models of the core cell cycle oscillator have evolved since 1991 with the addition of new molecular components and regulatory interactions based on studies in different organisms.
Short summary of recent SBML developments given at the COMBINE (COmputational Modeling in BIology NEtwork) 2014 meeting held at the University of Southern California in August, 2014. The meeting page is available at http://co.mbine.org/events/COMBINE_2014
This document summarizes Dagmar Waltemath's presentation on model management for systems biology projects. It discusses the need for effective data management strategies due to the large, complex, and heterogeneous nature of systems biology data. It recommends using a data management plan, dedicated model management systems like FAIRDOMHub, standards for sharing data, publishing models in repositories, ensuring model quality, and tracking provenance. The goal is to make studies reproducible, valuable, and sustainable.
2. http://sems.uni-rostock.de
What is a model?
Fig.: Modeling Cellular Reprogramming Using Network-based Models. Courtesy Antonio del Sol Mesa, LCSB Luxembourg
Fig.: Modeling the cell cycle using ODE systems. Goldbeter (1991), http://www.ncbi.nlm.nih.gov/pubmed/1833774
Fig.: Modeling large-scale networks. Lee et al (2013), http://www.nature.com/articles/srep02197
In systems biology, a computational model represents biological facts in the computer. Often, the representation is simulated to help understand the system's dynamic behavior.
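To make the simulation aspect concrete, here is a minimal sketch in Python of integrating a toy two-variable ODE system with SciPy. The equations, variable names and parameter values are illustrative assumptions only; this is not the Goldbeter (1991) model or any other published model.

# Minimal sketch: simulating a toy two-variable ODE system (illustrative only,
# not the Goldbeter 1991 cell cycle model).
import numpy as np
from scipy.integrate import solve_ivp

def toy_model(t, y, k1=1.0, k2=0.5, k3=0.8, k4=0.3):
    """Time derivatives of a hypothetical activator (C) / inhibitor (M) pair."""
    C, M = y
    dC = k1 - k2 * C * M      # constant production, M-dependent removal
    dM = k3 * C - k4 * M      # C-driven activation, first-order decay
    return [dC, dM]

solution = solve_ivp(toy_model, (0.0, 50.0), y0=[0.1, 0.1], dense_output=True)
t = np.linspace(0.0, 50.0, 500)
C, M = solution.sol(t)
print(f"C(50) = {C[-1]:.3f}, M(50) = {M[-1]:.3f}")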
8. http://sems.uni-rostock.de
→ Strategies for model similarity, ranking, clustering, filtering
Fig.: Henkel et al 2010 http://www.biomedcentral.com/1471-2105/11/423/
Fig.: Schulz et al 2011 DOI: 10.1038/msb.2011.41
Fig.: Feature matrix across a set of cell cycle models (matrix content omitted). Alm et al (2014), doi:10.1186/s13326-015-0014-4
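As a toy illustration of similarity-based ranking (a sketch only, not the method of the cited papers), models can be compared by the overlap of the annotation terms attached to their components, for example with a Jaccard index. The model names and term identifiers below are made up for the example.

# Sketch: rank models by overlap of their annotation term sets (illustrative
# identifiers; a real system would harvest these from the model files).
def jaccard(a, b):
    """Jaccard index of two sets (0.0 if both are empty)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

model_annotations = {
    "model_A": {"GO:0007049", "GO:0000278", "CHEBI:15422"},
    "model_B": {"GO:0007049", "CHEBI:16761"},
    "model_C": {"GO:0008150"},
}

query = {"GO:0007049", "CHEBI:15422"}   # annotations of a query model
ranking = sorted(model_annotations, key=lambda m: jaccard(model_annotations[m], query), reverse=True)
for name in ranking:
    print(name, round(jaccard(model_annotations[name], query), 2))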
12. http://sems.uni-rostock.de
→ Retrieval and archiving of simulation studies and associated files
Model-related data in the systems biology workflow
Linking model-related data
Example queries:
– Give me all the files I need to run this simulation study.
– Which are the most frequently used GO annotations in my model set?
– Which models contain reactions with 'ATP' as reactant and 'ADP' as product?
– Find good candidates for features describing my set of models.
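As a concrete (and deliberately simplified) illustration of the ATP/ADP query, the sketch below scans a single SBML file with the libsbml Python bindings. The file name is a placeholder; a real retrieval system, such as a graph-database index over a repository, would search across many models rather than loop over one file.

# Sketch: find reactions with ATP among the reactants and ADP among the
# products in one SBML file ("model.xml" is a placeholder name).
import libsbml

document = libsbml.readSBMLFromFile("model.xml")
model = document.getModel()

def names_of(model, reaction, which):
    """Collect upper-cased display names (fallback: ids) of reactants or products."""
    count = reaction.getNumReactants() if which == "reactants" else reaction.getNumProducts()
    getter = reaction.getReactant if which == "reactants" else reaction.getProduct
    names = set()
    for j in range(count):
        species = model.getSpecies(getter(j).getSpecies())
        names.add((species.getName() or species.getId()).upper())
    return names

for i in range(model.getNumReactions()):
    reaction = model.getReaction(i)
    if "ATP" in names_of(model, reaction, "reactants") and "ADP" in names_of(model, reaction, "products"):
        print("match:", reaction.getId())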
13. http://sems.uni-rostock.de
State of affairs in 2015
● Standards:
– support for all steps of the modeling cycle
– support of various modeling techniques
– Still: some modeling concepts not yet covered (→ Report of the whole-cell modeling workshop, Waltemath et al 2015, under review)
● Infrastructures:
– Software tools export/import standards
– Open model repositories and management systems
– Education
● Recognition
14. http://sems.uni-rostock.de
COMBINE Standards
● COmputational Modeling in BIology NEtwork
● Goals:
– Avoid overlap of standardisation efforts
– Coordinate standard developments
– Coordinate meetings
– Coordinate development of procedures & tools
– Common infrastructure for specification development, semantic annotation, and dissemination
● All specifications are now citable and accessible in one place: Schreiber et al. (2015) http://journal.imbio.de/articles/pdf/jib-258.pdf
16. http://sems.uni-rostock.de
COMBINE Standards
● Data formats
– Community-developed representation formats for models and related data
– Format: XML, OWL, RDF/XML
● Minimum Information/Reporting guidelines:
– Minimum amount of data and information required to reproduce and interpret an experiment
– Format: human-readable specification documents
● Basis for the specification of data models and metadata
● Bio-ontologies
20. http://sems.uni-rostock.de
MIRIAM – information to provide about a model
● Models must
– be encoded in a public machine-readable format
– be clearly linked to a single publication
– reflect the structure of the biological processes described in the reference paper (list of reactions, …)
– be instantiable in a simulation (possess initial conditions, …)
– be able to reproduce the results given in the reference paper
– contain the creator’s contact details
– unambiguously identify each model constituent through annotation
21. http://sems.uni-rostock.de
MIRIAM – information to provide about a model
● Models must
– be encoded in a public machine-readable format
– be clearly linked to a single publication
– reflect the structure of the biological processes described in the reference paper (list of reactions, …)
– be instantiable in a simulation (possess initial conditions, …)
– be able to reproduce the results given in the reference paper
– contain the creator’s contact details
– unambiguously identify each model constituent through annotation
You should worry about the details of the guidelines, as they help you to check whether you provide all necessary information.
22. http://sems.uni-rostock.de
Bio-ontologies for model annotation
● Major ontologies
● Linking framework: RDF/XML
● Annotation scheme: used to semantically enrich model files with detailed descriptions of the underlying biological entities, mathematical concepts or algorithms used during analysis
● De facto standard: SBML annotation scheme
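As an illustration of the RDF-based SBML annotation scheme, the sketch below adds a MIRIAM-style biological qualifier (bqbiol:is) to a species using the libsbml Python bindings. The file name, the species id and the choice of ChEBI term are assumptions made for the example.

# Sketch: attach a bqbiol:is annotation pointing to ChEBI to one species.
# File name and species id are placeholders.
import libsbml

document = libsbml.readSBMLFromFile("model.xml")
species = document.getModel().getSpecies("atp")
species.setMetaId("meta_atp")              # CV terms require a metaid on the element

term = libsbml.CVTerm()
term.setQualifierType(libsbml.BIOLOGICAL_QUALIFIER)
term.setBiologicalQualifierType(libsbml.BQB_IS)
term.addResource("http://identifiers.org/CHEBI:15422")   # CHEBI:15422 = ATP
species.addCVTerm(term)

libsbml.writeSBMLToFile(document, "model_annotated.xml")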
26. http://sems.uni-rostock.de
State of affairs in 2015
● Standards:
– support for all steps of the modeling cycle
– support of various modeling techniques
– Still: some modeling concepts not yet covered (→ Report of the whole-cell modeling workshop, Waltemath et al 2015, under review)
● Infrastructures:
– Software tools export/import standards
– Open model repositories and management systems
– Education
● Recognition
27. http://sems.uni-rostock.de
Software tool support
● Standard converters (SBML ↔ SBGN; SBML ↔ CellML, ...)
● Standard support in software
● Interoperability tools
– Cytoscape for network analysis and visualization (SBML, SBGN, BioPAX)
– The Virtual Cell for modeling (SBML, BioPAX)
– VANTED for network analysis, visualization and manipulation (SBML, SBGN)
Check the COMBINE website for details.
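Standard support in software starts with being able to read and validate standard files. Below is a minimal sketch of checking an SBML file with the libsbml Python bindings (the file name is a placeholder); converters and the tools listed above build on this kind of parsing layer.

# Sketch: read an SBML file and report consistency-check results.
import libsbml

document = libsbml.readSBMLFromFile("model.xml")   # placeholder file name
document.checkConsistency()                        # run libsbml's consistency checks

if document.getNumErrors() == 0:
    print("no issues found")
else:
    for i in range(document.getNumErrors()):
        error = document.getError(i)
        print(f"{error.getSeverityAsString()} (line {error.getLine()}): {error.getMessage()}")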
31. http://sems.uni-rostock.de
Getting involved
● COMBINE user meeting → next: COMBINE 2015, Oct 11-16, Salt Lake City
● COMBINE developers meeting → next: HARMONY 2016, June 7-11, Auckland
● FAIR-DOM activities: webinars, blogs, foundries
● COMBINE activities: workshops, presentations, tutorials
● Help through specification documents, show cases, mailing lists, ...
http://co.mbine.org/ http://fair-dom.org/
32. http://sems.uni-rostock.de
State of affairs in 2015
● Standards:
– support for all steps of the modeling cycle
– support of various modeling techniques
– Still: some modeling concepts not yet covered (→ Report of the whole-cell modeling workshop, Waltemath et al 2015, under review)
● Infrastructures:
– Open model repositories
– Software tools export/import standards
– Model management systems
– Education
● Recognition
35. http://sems.uni-rostock.de
Functional curation of models through virtual experiments
Fig.: Functional curation of models in the Web Lab. Cooper et al (2015) https://peerj.com/preprints/1338/; Cooper et al (2014) doi:10.1016/j.pbiomolbio.2014.10.001
Try out the Cardiac Physiology Web Lab.
38. http://sems.uni-rostock.de
So much for the theory… and in practice?
● Check for existing standards and their specifications: http://co.mbine.org
● Get involved in standard development → through the relevant mailing lists
● Problems with getting your model into the right format?
– Is it a problem with finding the appropriate format or tool? → Ask on the relevant mailing list... people are friendly and happy to help.
– Is it a tool problem? → Complain to the tool developers... who will hopefully change it.
– Is it a problem with the lack of a standard? → Feed back to the standards community… people are friendly and happy to improve the standard.
● Follow best practices when aiming at publishing a result.
39. http://sems.uni-rostock.de
Best practices for publishing reproducible modeling results
1) Encode the model in a standard format, e.g. SBML.
2) Annotate the SBML model, following MIRIAM.
3) Publish the simulation experiment descriptions in a standard format, e.g. SED-ML. If unsure what to include, consult the MIASE guidelines.
4) Try to reproduce the results *yourself*.
5) Ask a colleague to reproduce the results.
6) If successful: archive all steps that led to your results (see the archiving sketch after this list).
7) Disseminate model code and simulation description through an open repository.
Adapted from: Waltemath et al (2013), doi:10.1007/978-94-007-6803-1_10
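The archiving step (6) can be done with a COMBINE archive (OMEX): a zip container whose manifest.xml declares every file and its format. The sketch below builds such a container with Python's standard library, assuming files named model.xml and simulation.sedml already exist; in practice, tools such as libCombine or the CombineArchive Toolkit are normally used for this.

# Sketch: bundle a model and its simulation description into a COMBINE archive.
# Assumes model.xml and simulation.sedml exist in the working directory.
import zipfile

MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./model.xml" format="http://identifiers.org/combine.specifications/sbml"/>
  <content location="./simulation.sedml" format="http://identifiers.org/combine.specifications/sed-ml" master="true"/>
</omexManifest>
"""

with zipfile.ZipFile("study.omex", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("manifest.xml", MANIFEST)
    archive.write("model.xml", arcname="model.xml")
    archive.write("simulation.sedml", arcname="simulation.sedml")

print("wrote study.omex")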