[SD-4] MVPR Architecture: Science Review DAO #4
Labels: discussion, draft
Reference: DGF/dao-governance-framework#4
SD-3 has been identified as a distinct deliverable: a specification of the interfaces available to individual implementations of a dynamic self-governing system.
This proposal is therefore separate, and is specific to the architecture of a DAO that we hope to build for the use case of scientific publishing.
Miro
A common pattern is to use configuration files that may optionally be local to subdirectories. We could support specific file-naming conventions for optional per-subdirectory elements, and/or we could support defining a centralized index.
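As a rough illustration, here is a minimal sketch of how such discovery could work, assuming a hypothetical per-subdirectory file name (`segment.yaml`) and a hypothetical centralized index file (`index.yaml`); neither name is part of any agreed spec.

```python
# Minimal sketch of per-subdirectory config discovery (file names are assumptions).
from pathlib import Path

SEGMENT_CONFIG_NAME = "segment.yaml"   # assumed per-subdirectory naming convention
CENTRAL_INDEX_NAME = "index.yaml"      # assumed centralized index at the repo root


def discover_configs(repo_root: str) -> dict:
    """Map each subdirectory to its config file, preferring local configs."""
    root = Path(repo_root)
    configs = {}

    # Option A: per-subdirectory files found by naming convention.
    for local in root.rglob(SEGMENT_CONFIG_NAME):
        configs[str(local.parent.relative_to(root))] = local

    # Option B: a centralized index at the root can cover the rest.
    index = root / CENTRAL_INDEX_NAME
    if index.is_file():
        configs["<central-index>"] = index

    return configs
```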
Each subdirectory would then be able to contain a segment of a processing pipeline, with its configuration specifying the source data and a set of operations on that data. It should also include explanations of the choices made in the work, and this can and should include specific citations, referenced in context.
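One possible shape for a segment's configuration, with all field names being assumptions rather than a defined schema:

```python
# One possible shape for a pipeline-segment config (all field names are assumptions).
from dataclasses import dataclass, field


@dataclass
class PipelineSegment:
    source_data: list[str]          # paths or URIs of the input data
    operations: list[str]           # ordered operations applied to that data
    explanation: str = ""           # prose explaining the choices made in the work
    citations: list[str] = field(default_factory=list)  # references cited in context


# Example: a segment that cleans a dataset and fits a model.
segment = PipelineSegment(
    source_data=["data/raw/measurements.csv"],
    operations=["drop_outliers", "fit_linear_model"],
    explanation="Outliers above 3 sigma removed; see the cited methodology.",
    citations=["smith-2020-methods"],  # hypothetical citation key
)
```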
We want any figures produced for the work to include instructions that can be followed to reproduce the image, ideally programmatically. To that end we would be well served to define a convenient mechanism for expressing these image-producing operations. Again, there can be options: a per-directory config spec that supports a single script enumerating multiple image results, or a routing paradigm with an optional default mode of one-script-per-image and a more flexible mode where a routing expression maps image references to scripts. Separately, there can be an index of scripts that produce images. In any case, it sounds like we want a format for referencing images.
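A minimal sketch of such a resolver, assuming a hypothetical default convention (figure-2.png is produced by figure-2.py) and a hypothetical routing table of regular expressions; none of this is a settled format:

```python
# Sketch of resolving an image reference to its producing script
# (conventions and names here are assumptions, not a defined format).
import re
from pathlib import Path
from typing import Optional


def resolve_image_script(image_ref: str, directory: str,
                         routes: Optional[dict] = None) -> Path:
    """Default mode: one script per image (figure-2.png -> figure-2.py).
    Routing mode: regex routes map image references to a single script
    that enumerates multiple image results."""
    if routes:
        for pattern, script in routes.items():
            if re.fullmatch(pattern, image_ref):
                return Path(directory) / script
    # Fall back to the one-script-per-image convention.
    return Path(directory) / (Path(image_ref).stem + ".py")


# All panels of figure 3 come from one script; figure 2 uses the default rule.
routes = {r"figure-3[a-d]\.png": "make_figure_3_panels.py"}
print(resolve_image_script("figure-3b.png", "analysis/figs", routes))
print(resolve_image_script("figure-2.png", "analysis/figs"))
```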
A reviewer would expect the following:
A reviewer does not necessarily need to agree with the conclusions drawn from the work and expressed by the author.
How will we deal with this challenge? One mechanism is that we can support an additional layer of review: Review-of-the-review; a.k.a. Comments.
It makes sense for peers to have an opportunity to interject comments on one another's work. It also presents challenges: how can we moderate such comments?
Our approach will be to create a framework within which communities can work to establish their own agreements about the functioning of their own organization.
Orchestrating such an open-ended framework presents a "wicked problem": one that is intractably complex and resists definitive solution. With this in mind, we consider that our system can do no better than to apply a diminished reflection of the same mechanisms that govern the weightier posts.
Thus a comment should be able to function as an artifact in itself, and it should be able to function as a review of another comment, or for that matter of a post or a review.
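A minimal sketch of a data model that would allow this, with all names and fields being assumptions:

```python
# Sketch of a data model in which a comment is itself an artifact and can review
# any other artifact (names and fields are assumptions).
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ArtifactKind(Enum):
    POST = "post"
    REVIEW = "review"
    COMMENT = "comment"


@dataclass
class Artifact:
    artifact_id: str
    kind: ArtifactKind
    body: str
    target_id: Optional[str] = None  # artifact being reviewed or commented on, if any


# A post, a review of it, a comment on the review, and a comment on that comment.
post = Artifact("a1", ArtifactKind.POST, "Original work")
review = Artifact("a2", ArtifactKind.REVIEW, "Methods look sound", target_id="a1")
comment = Artifact("a3", ArtifactKind.COMMENT, "Did you check figure 2?", target_id="a2")
reply = Artifact("a4", ArtifactKind.COMMENT, "Yes, reproduced it cleanly.", target_id="a3")
```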
Regarding Chris's DeSci Labs talk (starting at 2:58:10; he gets to the technical bits/product/demo at 3:08:20):
https://www.youtube.com/watch?v=01RUsvDZQQY&t=11010s
Basically, the idea is to post code and data and be able to computationally replicate the figures in a PDF using his team's product. They are working closely with Weavechain.
This fits nicely with an aspect of our computational replication plan. However, replication shouldn't be thought of as just a rote checking process (though at times it should be exactly that), since it is impossible to recreate the exact conditions of the original study.
Say a computational chemist is reading a paper by an experimental and theoretical chemist. The theory seems right, but the reader can strengthen the computational side, since it overlaps with their own expertise. Updating the computational elements, layered with their own interpretation, can add more nuance to the conversation.
There have been a few cases where Nobel-winning work failed its replication studies, only for those replications to add to the robustness of the theory in the long term. The reputation system we are building incentivizes different types of replication, based on the attention a work is getting.
Types of Peer-Review | Wiley