Generic and customizable rubric framework for assessing assignments that integrate AI tools (chatbots, LLMs, RAG systems, code assistants, creative AI, etc.) across disciplines.
© 2026 Michael Borck, AI Facilitator, School of Management and Marketing, Curtin University
For Curtin University use. For external distribution or use outside Curtin University, see LICENSE.md.
For academic staff: Start with the documents in outputs/docx/ in numerical order.
- Download the Word documents from `outputs/docx/`:
  - `1_README_Getting_Started_v2.docx` — Start here
  - `2_Generic_Rubric_Template_v2.docx` — Your framework
  - `3_Adaptation_Guide_v2.docx` — How to customise
  - `4_Example_CloudCore_Audit_v2.docx` — Worked example
- Follow the workflow outlined in the README
- Adapt the template for your unit
Prerequisites:
- `pandoc` (for markdown → docx conversion)
- `git`
Workflow:
- Edit markdown files in `sources/`:

  ```shell
  # Make changes to any .md file
  nano sources/2_Generic_Rubric_Template.md
  ```

- Regenerate Word outputs:

  ```shell
  chmod +x build.sh
  ./build.sh
  ```

- Commit changes:

  ```shell
  git add sources/ outputs/
  git commit -m "Update rubric criteria and examples"
  ```
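The regenerate step is driven by `build.sh`, which uses pandoc to convert each markdown source into a Word document. A minimal sketch of what such a script might look like is below; the loop structure, directory names, and pandoc flags here are assumptions for illustration, and the repository's actual `build.sh` is authoritative:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of a build script: convert every markdown source
# in sources/ to a .docx file in outputs/docx/ via pandoc.
set -eu

SRC_DIR="sources"
OUT_DIR="outputs/docx"
mkdir -p "$OUT_DIR"

for md in "$SRC_DIR"/*.md; do
  [ -e "$md" ] || continue            # skip if the glob matched nothing
  base=$(basename "$md" .md)
  if command -v pandoc >/dev/null 2>&1; then
    # pandoc infers docx output from the -o file extension
    pandoc "$md" -o "$OUT_DIR/$base.docx"
  else
    echo "pandoc not found; skipping $md" >&2
  fi
done
```

Because the markdown sources are the single source of truth, a loop like this can always regenerate every Word document from scratch.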
```
assessment-rubrics-for-ai/
├── sources/ # Single source of truth (markdown)
│ ├── 1_README_Getting_Started.md
│ ├── 2_Generic_Rubric_Template.md
│ ├── 3_Adaptation_Guide.md
│ └── 4_Example_CloudCore_Audit.md
├── outputs/
│ ├── docx/ # Generated Word documents
│ │ ├── 1_README_Getting_Started.docx
│ │ ├── 2_Generic_Rubric_Template.docx
│ │ ├── 3_Adaptation_Guide.docx
│ │ └── 4_Example_CloudCore_Audit.docx
│ └── html/ # (Optional future) Web versions
├── build.sh # Build script (md → docx)
├── .gitignore
└── README.md # This file
```
Markdown files in `sources/` are authoritative; the Word documents are generated from them.
Never edit the `.docx` files directly for long-term changes: edit the markdown, then regenerate the Word documents.
Workflow:
Edit `.md` → Run `build.sh` → Commit both → Distribute `.docx`
See 3_Adaptation_Guide.md for detailed guidance on:
- Different AI modalities (chatbots, LLMs, RAG, code, creative AI)
- Discipline-specific adjustments (Business, STEM, Humanities, Law)
- Weighting by context
- Performance descriptor customisation
Developed for Curtin University's A2030 initiative by Dr Michael Borck (Business Information Systems).
Philosophy:
- Critical evaluation of AI outputs matters more than tool efficiency
- Transparency is an integrity measure, not a compliance burden
- Process (how students think) is as important as product (outcomes)
- One-size-fits-all rubrics don't work; customisation is essential
- Treat AI as a "junior intern" that students must manage and validate
v1.0 (Feb 2026): Initial release
- Generic rubric template
- CloudCore audit example
- Adaptation guide for different AI tools
- Implementation guide for staff
To suggest improvements:
- Open an issue describing the change
- Include examples where relevant
- Consider disciplinary context
To add new examples:
- Create a new file in `sources/` (e.g., `5_Example_Code_Assistant_Rubric.md`)
- Update numbering in README
- Run `build.sh`
- Commit and push
For use within Curtin University. External distribution, reproduction, or use outside Curtin University requires explicit written permission.
See LICENSE.md for full terms.
Dr Michael Borck
School of Management & Marketing
Curtin University
Last updated: February 2026