
bio-logic

npx machina-cli add skill fmschulz/omics-skills/bio-logic --openclaw

Bio-Logic: Scientific Reasoning Evaluation

Use structured frameworks to evaluate scientific claims, methodology, and evidence strength.

Instructions

  1. Identify the task (claim assessment, paper critique, study design review).
  2. Apply the relevant checklist below.
  3. Structure output using the provided format.

Critique Checklist

Use relevant sections based on the review scope. Skip items not applicable to the study type.

## Methodology
- [ ] Design matches research question (causal claim → RCT needed)
- [ ] Sample size justified (power analysis reported)
- [ ] Randomization/blinding implemented where feasible
- [ ] Confounders identified and controlled
- [ ] Measurements validated and reliable
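
The sample-size item above can be checked with a quick back-of-envelope power calculation. A minimal sketch using the normal approximation to the two-sample t-test (`n_per_group` is an illustrative helper, not part of the skill):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided two-sample t-test,
    via the normal approximation: n = 2 * ((z_{1-alpha/2} + z_power) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" standardized effect (d = 0.5) needs roughly 63 per arm:
print(n_per_group(0.5))  # 63
```

If a paper's reported group sizes fall well below this kind of estimate for its claimed effect size, the "sample size justified" box should not be ticked.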

## Statistics
- [ ] Tests appropriate for data type
- [ ] Assumptions checked
- [ ] Multiple comparisons corrected
- [ ] Effect sizes + CIs reported (not just p-values)
- [ ] Missing data handled appropriately
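
The multiple-comparisons item can be illustrated with a self-contained Holm step-down adjustment (a sketch for checking reported corrections; real reviews would typically rely on a stats package):

```python
def holm_adjust(p_values):
    """Holm step-down adjustment: multiply the i-th smallest p-value by
    (m - i), cap at 1, and enforce monotonicity. Controls the family-wise
    error rate without assuming independence between tests."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03]))
```

Here the raw 0.04 and 0.03 both adjust to 0.06: two of three "significant" results no longer clear the .05 bar once corrected.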

## Interpretation
- [ ] Conclusions match evidence strength
- [ ] Limitations acknowledged
- [ ] Causal claims only from experimental designs
- [ ] No cherry-picking or overgeneralization

## Red Flags
- [ ] P-values clustered just below .05
- [ ] Outcomes differ from registration
- [ ] Correlation presented as causation
- [ ] Subgroups analyzed without preregistration
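
The first red flag lends itself to a crude screen. A sketch, assuming the reviewer has extracted the paper's reported p-values (`p_cluster_flag` is an illustrative name; a cluster is a prompt for scrutiny, not proof of p-hacking):

```python
def p_cluster_flag(p_values, lo=0.04, hi=0.05):
    """Fraction of reported significant p-values that sit just below .05.
    A heavy cluster in (0.04, 0.05) can signal selective analysis."""
    significant = [p for p in p_values if p < 0.05]
    if not significant:
        return 0.0
    return sum(lo < p < hi for p in significant) / len(significant)

print(p_cluster_flag([0.049, 0.048, 0.046, 0.02]))  # 0.75
```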

Claim Assessment

  1. Identify claim type (causal, associational, descriptive).
  2. Match evidence to claim type.
  3. Check logical connection between data and conclusion.
  4. Ensure confidence matches evidence strength.

Claim strength ladder:

| Language | Requires |
|---|---|
| "Proves" / "Demonstrates" | Strong experimental evidence |
| "Suggests" / "Indicates" | Observational with controlled confounds |
| "Associated with" | Observational, no causal claim |
| "May" / "Might" | Preliminary or hypothesis-generating |

Output Format

## Summary
[1-2 sentences: What was studied and main finding]

## Strengths
- [Specific methodological strengths]

## Concerns
### Critical (threaten main conclusions)
- [Issue + why it matters]

### Important (affect interpretation)
- [Issue + why it matters]

### Minor (worth noting)
- [Issue]

## Evidence Rating
[GRADE level: High/Moderate/Low/Very Low with justification]

## Bottom Line
[What can/cannot be concluded from this evidence]
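
The format above can also be filled programmatically when critiques are generated in bulk. A minimal sketch, assuming free-text fields and bullet lists (`render_critique` and the field names are illustrative):

```python
TEMPLATE = """## Summary
{summary}

## Strengths
{strengths}

## Concerns
### Critical (threaten main conclusions)
{critical}

## Evidence Rating
{rating}

## Bottom Line
{bottom_line}
"""

def render_critique(summary, strengths, critical, rating, bottom_line):
    """Join bullet lists and substitute the fields into the template."""
    def bullets(items):
        return "\n".join(f"- {item}" for item in items)
    return TEMPLATE.format(
        summary=summary,
        strengths=bullets(strengths),
        critical=bullets(critical),
        rating=rating,
        bottom_line=bottom_line,
    )

report = render_critique(
    "RCT of vitamin D for depression.",
    ["Double-blind", "Validated scale"],
    ["35% differential dropout"],
    "Moderate",
    "Suggestive but not conclusive.",
)
print(report)
```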

Quick Reference

| Task | Action |
|---|---|
| Review a paper | Use the Critique Checklist above |
| Evaluate a claim | Use the Claim Assessment steps |
| Assess evidence strength | See references/evidence.md |
| Identify biases | See references/biases.md |
| Statistical errors | See references/stats.md |
| Logical fallacies | See references/fallacies.md |
| Study design review | See references/design.md |

Input Requirements

  • Manuscript or study summary
  • Claim statements to evaluate
  • Methods and results (as available)

Output

  • Structured critique with strengths, concerns, and evidence rating
  • Clear recommendation on claim strength and limitations

Quality Gates

  • Claim type matches evidence type
  • Key methodological risks are identified
  • Evidence rating is justified with concrete issues

Examples

Example 1: Quick Paper Critique

Input: "Review this RCT on vitamin D and depression"

Output:

## Summary
RCT of 200 adults testing vitamin D supplementation for depression over 12 weeks.

## Strengths
- Randomized, double-blind, placebo-controlled
- Validated depression scale (PHQ-9)
- Adequate power analysis

## Concerns
### Critical
- 35% dropout, differential by group (attrition bias)
- ITT analysis not performed

### Important
- Single-site limits generalizability

## Evidence Rating
Moderate (downgraded from high due to attrition bias)

## Bottom Line
Suggestive but not conclusive due to differential attrition.

Example 2: Claim Assessment

Input: "This study proves that coffee prevents Alzheimer's"

Assessment: If based on observational data, this is a correlation→causation error. Appropriate language: "Coffee consumption is associated with lower Alzheimer's risk."

Troubleshooting

Issue: Insufficient detail in the methods
Solution: Request the missing design/statistics information before rating evidence.

Issue: Conflicting results across studies
Solution: Report the uncertainty and suggest stronger study designs for resolution.

Source

git clone https://github.com/fmschulz/omics-skills

View on GitHub: https://github.com/fmschulz/omics-skills/blob/main/skills/bio-logic/SKILL.md

Overview

Bio-Logic provides structured frameworks to evaluate scientific claims, methods, and the strength of evidence in papers and study designs. It guides users through task identification, checklist application, and producing objective critiques, helping readers distinguish robust findings from overreach.

How This Skill Works

Identify the review task (claim assessment, paper critique, or study design review). Apply the relevant sections of the Critique Checklist (Methodology, Statistics, Interpretation, Red Flags) and tailor them to the study type. Structure the critique into strengths, concerns, an evidence rating, and a bottom line using the provided output format.

When to Use It

  • Before citing a new study in policy or clinical decisions, to assess rigor.
  • During manuscript or grant review to evaluate methodological soundness.
  • When performing evidence synthesis or systematic reviews to rate evidence strength.
  • When encountering causal vs associational claims and matching design to claim type.
  • When scanning papers for biases, data handling, preregistration, and red flags.

Quick Start

  1. Gather the manuscript or summary and define the task (claim, paper, or design).
  2. Apply the relevant Critique Checklist sections and record findings.
  3. Produce a structured report with strengths, concerns, evidence rating, and bottom line.

Best Practices

  • Match the review type to the appropriate checklist sections.
  • Check sample size justification, randomization, and blinding where applicable.
  • Report effect sizes with confidence intervals, not just p-values.
  • Clearly separate causal claims from correlational findings and acknowledge limitations.
  • Highlight red flags (p-hacking signals, preregistration gaps) and avoid cherry-picking.

Example Use Cases

  • Critiquing an RCT on a new drug and noting allocation concealment and dropout-related biases.
  • Evaluating a large observational study linking diet to disease and distinguishing association from causation.
  • Assessing a meta-analysis for consistency, heterogeneity, and overall evidence strength.
  • Reviewing a grant proposal’s study design for power, randomization, and confounder control.
  • Performing a quick claim assessment of 'X is associated with Y' in a preprint and checking claims against data.
