Applied Linguistics Inquiry recognizes the transformative potential of artificial intelligence (AI) while affirming that human expertise remains irreplaceable in scholarly communication. As an international journal dedicated to advancing rigorous research across all domains of applied linguistics, we establish this policy to ensure AI tools are deployed ethically, transparently, and accountably. Our guidelines align with guidance from the Committee on Publication Ethics (COPE) and the European Association of Science Editors (EASE), as well as disciplinary best practices established by the International Association of Applied Linguistics (AILA) and the American Association for Applied Linguistics (AAAL).
Core Principles
- Human Accountability Prevails: All scholarly work must reflect intentional human intellectual contribution. Authors, reviewers, and editors retain full responsibility for content integrity, methodological validity, and ethical compliance.
- Transparency as Non-Negotiable: Any AI tool influencing substantive aspects of research or writing must be explicitly disclosed, with justification for its use.
- Confidentiality and Data Ethics: Manuscripts under review or in production are confidential. AI tools must never be used to process unpublished content containing sensitive linguistic data (e.g., participant transcripts, endangered language corpora); doing so may violate research ethics protocols and compromise participant anonymity.
Field-Specific Policy Directives
1. AI and Authorship
- LLMs (e.g., ChatGPT, Claude) cannot be authors. Authorship requires accountability for conceptualization, interpretation, and scholarly judgment—capacities exclusive to humans. This aligns with AILA/AAAL standards on intellectual contribution in language research.
- Mandatory Disclosure: Use of LLMs for generative tasks (e.g., drafting arguments, creating examples, synthesizing literature) must be detailed in the Methods section (for research design) or the Acknowledgments (for writing assistance), specifying the tool name (e.g., ChatGPT-4), version, prompt structure, and human validation procedures.
- Exemption: AI tools used only for copy editing (grammar, spelling, syntax) require no disclosure, provided that:
(a) edits do not alter meaning, theoretical framing, or empirical claims;
(b) all authors verify that the final text accurately reflects their original work.
2. AI-Generated Content in Linguistic Research
- Textual Data: AI-generated language samples (e.g., simulated dialogues, synthetic corpora) are prohibited unless explicitly framed as computational experiments, with full methodological transparency in the Methods section. All such data must be labeled accordingly, e.g.: "Example 3: AI-generated dialogue (created using [Tool], validated against [Corpus])."
- Visual Materials:
- Charts, graphs, or diagrams created solely by AI (e.g., DALL-E, Midjourney) are not permitted.
- Exceptions require:
(a) Explicit labeling of AI origin within the figure caption;
(b) Editorial approval confirming the visual serves illustrative (not evidentiary) purposes;
(c) Verification that no human participants or culturally sensitive materials were used to train the model.
3. AI Use by Peer Reviewers
- Strict Prohibition: Reviewers must not input manuscripts, excerpts, or their own reviewer comments into any generative AI tool. This safeguards:
- Confidentiality of unpublished linguistic data;
- Integrity of context-sensitive evaluation (e.g., discourse coherence, pragmatic meaning, cultural nuance);
- Protection against AI hallucinations in methodological critique.
- Accountability: Reviewers must certify that evaluations reflect their independent expertise in applied linguistics. AI-assisted grammar checks on review reports are permitted but must not alter substantive judgments.
4. AI Use by Editors
- Zero Tolerance for Confidentiality Breaches: Editors must not use AI tools to process manuscripts, decision letters, or author correspondence. This protects:
- Intellectual property rights of authors;
- Ethical obligations to vulnerable language communities (e.g., indigenous speakers);
- Compliance with GDPR/FERPA regarding linguistic data.
- Editorial Judgment: AI tools cannot inform editorial decisions. Human editors alone evaluate theoretical innovation, methodological appropriateness, and disciplinary relevance.
- Vigilance: Editors must verify AI disclosures, screen for undisclosed AI-generated content (using forensic tools where appropriate), and investigate suspected policy violations per COPE guidelines.
Implementation & Enforcement
- Author Compliance: Manuscripts lacking required AI disclosures will be rejected. Suspected undisclosed AI use triggers investigation per COPE flowcharts.
- Reviewer/Editor Training: All editorial board members and invited reviewers must complete annual certification on this policy.
- Dynamic Review: This policy is reviewed biannually by the Editorial Board in consultation with AILA/AAAL ethics committees to address emerging technologies (e.g., multimodal LLMs, voice synthesis).