Responsible AI Policy
How we approach AI in qualitative research
Our Commitment
Soak uses large language models to assist with qualitative analysis. We're committed to responsible AI use that supports, rather than replaces, human research judgment.
No Training on Your Data
Your research data is not used to train AI models.
We use Azure OpenAI with training on customer data disabled. Your transcripts, analysis results, and embeddings are not used to improve foundation models.
AI as Assistant, Not Authority
Soak is designed to assist qualitative researchers, not to automate research decisions. Our tools help you:
- Generate initial codes and themes for review
- Find patterns across large datasets
- Compare analyses for consistency
- Search semantically across transcripts
You remain responsible for interpreting results, validating outputs, and making research judgments.
Limitations of LLM Analysis
Large language models have known limitations that researchers should consider:
Hallucination
LLMs may generate plausible-sounding but incorrect content. Always verify quotes against source transcripts and validate interpretations.
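One practical way to check a quote against its source is a verbatim-or-near-verbatim match. The sketch below is illustrative only, not Soak's quote-verification feature; the function names, windowing strategy, and 0.9 threshold are assumptions:

```python
import difflib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so cosmetic differences don't block a match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def verify_quote(quote: str, transcript: str, threshold: float = 0.9) -> bool:
    """Return True if the quote appears (near-)verbatim in the transcript.

    An exact substring match passes immediately; otherwise a sliding-window
    fuzzy comparison tolerates minor punctuation or transcription differences.
    """
    q, t = normalize(quote), normalize(transcript)
    if q in t:
        return True
    window = len(q)
    best = 0.0
    # Slide a quote-sized window across the transcript in quarter-window steps.
    for start in range(0, max(1, len(t) - window + 1), max(1, window // 4)):
        ratio = difflib.SequenceMatcher(None, q, t[start:start + window]).ratio()
        best = max(best, ratio)
    return best >= threshold

transcript = ("Interviewer: How did you feel? "
              "Participant: Honestly, I felt completely out of my depth at first.")
print(verify_quote("I felt completely out of my depth", transcript))  # True (verbatim)
print(verify_quote("I felt totally lost from day one", transcript))   # False (paraphrase, not in source)
```

A check like this catches fabricated quotes but not fabricated interpretations, so it complements rather than replaces reading the surrounding transcript context.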
Bias
LLMs reflect biases present in their training data. Be aware that generated codes and themes may reflect cultural, demographic, or ideological biases.
Context Limitations
LLMs process text without the contextual understanding a human researcher brings. They don't know your research questions, theoretical framework, or disciplinary conventions unless you provide them.
Consistency
LLM outputs can vary between runs. We provide comparison tools to help you assess consistency, but some variation is inherent.
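One simple way to quantify run-to-run variation is set overlap between the codes each run produces. This is an illustrative sketch, not Soak's comparison tooling, and it assumes code labels match as exact strings (in practice labels drift between runs, so fuzzy matching may be needed):

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two code sets: 1.0 means identical, 0.0 means disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical codes generated by two analysis runs over the same transcripts.
run_1 = {"work-life balance", "remote isolation", "manager support"}
run_2 = {"work-life balance", "remote isolation", "career anxiety"}

print(f"{jaccard(run_1, run_2):.2f}")  # 2 shared of 4 distinct codes -> 0.50
```

A low score flags runs worth reviewing side by side; it does not by itself say which run is the better reading of the data.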
Researcher Responsibility
As a researcher using Soak, you are responsible for:
- Critically evaluating all AI-generated outputs
- Verifying quotes and citations against source material
- Applying your domain expertise and judgment
- Disclosing AI assistance in your research methodology
- Ensuring outputs meet your institution's research ethics standards
Human-in-the-Loop
We design Soak to keep humans central to the research process:
- All outputs are editable and reviewable
- Quote verification highlights confidence levels
- Comparison tools help assess reliability
- Export formats support further manual analysis
Research Ethics
When using AI tools in research, consider:
- Does your ethics approval cover AI-assisted analysis?
- How will you disclose AI use in publications?
- Are participants aware their data may be processed by AI?
Consult your institution's research ethics guidelines for AI use.
Feedback
We welcome feedback on our responsible AI practices: [EMAIL]