
AI & the Academy

AI & Academic Integrity

It’s no secret that generative AI has created new challenges for maintaining academic integrity. The decision to use, exclude, or address AI in a course is a complex one.

There is no reliable way to prevent the use of AI on assignments, and no reliable way to detect it. Instructors must instead work with students to build a culture of trust and authentic engagement. Effective assignment design and transparent policies help support that culture. Academic integrity experts suggest that the best solution is to make the right thing easier to do, rather than making the wrong thing harder to do.

Understanding Academic Dishonesty

Though ChatGPT has certainly contributed to academic dishonesty, colleges and universities were reporting increased cheating before GenAI reached the popular market. Rising rates of cheating are often related to changing student behaviors, circumstances, and values, not just changing tools.

Some self-reported reasons that students cheat include: 

  • Assignments feel irrelevant

  • They face time pressures or conflicting priorities (work, family responsibilities, etc.)

  • They lack clear understanding of expectations

  • Education feels purely transactional

  • Classes are perceived as "busy work"

  • Performance pressure (from family, minimum GPA requirements, etc.)

  • Focus on grades

  • “High stakes” exams or assignments, often defined by high point value

  • Judging course workload as too high

  • Feeling “anonymous,” or disconnected from course material, the class community, the professor, or the institution

  • Peer acceptance of cheating

  • Perception that academic dishonesty will go unpunished

  • Misunderstanding what plagiarism is or how to avoid it

  • Cultural or regional differences in what constitutes plagiarism or academic dishonesty

Making The Right Thing Easier

What Works

Rather than relying on unreliable detection tools, institutions should focus on creating meaningful assignments, building supportive learning environments, and helping students understand appropriate uses of AI technology.

Teach Academic Integrity Values

Like any set of values, values related to academic integrity are learned and developed over time. Many students arrive on campus without a clear sense of expectations, including what academic integrity means at the college level. Students need their instructors to:

  • Explain what academic integrity is in the context of the course, the discipline, and the college.

  • Explain why these academic integrity rules exist (e.g. why we cite sources, why we work collaboratively or individually, how, why, and when we use co-authoring tools like ChatGPT or Grammarly).

  • Provide examples of what appropriate attribution and collaboration look like.

  • Create opportunities for students to practice their learning and skills on low-stakes assignments before major projects.

  • Help students understand cultural differences by acknowledging that academic integrity norms may vary across cultures and disciplines.

  • Demonstrate the importance of academic work by recognizing academic labor and processes as valuable learning experiences. For example:

    • In math, working through problem-solving steps builds critical thinking, even if AI could provide the answer.

    • In science, conducting and documenting experiments develops observation skills and scientific reasoning, beyond just getting results.

    • In writing, drafting and revision strengthen analytical and communication skills that AI can't replace.

Academic integrity values are learned through consistent exposure, clear explanation, and meaningful practice. Students develop these values gradually as they understand their importance and see them modeled in their academic community. Regular discussions about academic integrity, combined with opportunities to practice ethical decision-making in low-stakes situations, help students internalize these values and apply them confidently throughout their academic careers. This ongoing process of learning and reinforcement is especially important as students navigate new tools like AI and adapt to evolving academic expectations.

Create, Explain, and Reiterate GenAI Use Policies

While departmental statements outline broad AI policies, instructors should provide clear, assignment-specific guidelines about AI use. These guidelines help students make ethical decisions and understand how departmental policies apply to specific tasks. They also demonstrate to students why limiting or enabling AI use is important and reiterate how their work on assignments contributes to their learning.

The AI Use Scale

When developing assignment-specific AI guidelines, consider using this scale to clearly communicate acceptable levels of AI engagement:

  • Level 1: Independent Work: Complete tasks without any AI assistance to showcase individual skills and knowledge.

  • Level 2: AI-Assisted Ideation and Refinement: Use AI to generate ideas and improve the quality of your work (e.g., brainstorming, refining structure). The core content remains your own.

  • Level 3: AI-Assisted Drafting: Create initial drafts with AI, but critically evaluate and revise them. This involves modifying AI output to ensure originality.

  • Level 4: AI Collaboration: Work collaboratively with AI on tasks, maintaining ownership while critically evaluating AI content.

Components of Effective AI Guidelines

When creating assignment-specific AI guidelines, include these key elements:

  1. AI Permission & Justification: Clearly state which level of AI use is permitted and explain why this level is appropriate for the learning objectives. Even when AI use is not permitted, explain the pedagogical reasoning to help students understand the connection between assignment design and learning objectives.

  2. Use Parameters: Provide specific examples of acceptable and unacceptable AI use for the assignment.

  3. Platform Recommendations: If applicable, suggest specific AI tools that are appropriate for the permitted uses.

  4. Attribution Requirements: Explain how students should document their AI use in their work.

  5. Instructor AI Use: Be transparent about how you, as the instructor, use AI in creating or evaluating assignments.

Every Assignment, Every Time

Students need reminders about AI use for several reasons:

  1. AI guidelines often vary by discipline and course. In one course, it may be acceptable to engage in Level 4 AI use on most assignments, while another may limit use to Level 1.

  2. Clear statements about the value of assignments help students understand why independent work matters for their learning. When students see the connection between assignment design and learning outcomes, they're more likely to engage authentically with the work.

  3. Regular reminders normalize discussions about AI use and create opportunities for students to ask questions about appropriate AI engagement. Clear guidelines help students develop good habits around AI use that will serve them in their academic and professional careers.

Take Advantage of Campus Resources

The Hub offers comprehensive resources to help both faculty and students navigate academic integrity in the digital age. Faculty can access support for designing assignments that promote authentic learning, while students can get assistance with research, citation, and proper use of academic tools. With locations at both CentreTech and Lowry campuses, plus online services, The Hub provides tutoring, library resources, and technology assistance to help build strong academic integrity practices.

  • Research Appointments for Students (practice finding and citing research)

  • Tutoring for Students

  • Professional Development for Faculty & Instructors

Course Design to Support Academic Integrity

Effective Strategies for Promoting Academic Integrity:

  • Create supportive learning environments that build trust

  • Design meaningful assignments connected to students' goals

  • Integrate AI responsibly with clear guidelines

  • Focus on process documentation

  • Address systemic issues that drive cheating

  • Update academic integrity policies to specifically address AI

The TRUST Model for Assignment Design was developed to help reduce cheating and plagiarism by directly addressing the reasons students cheat.

  • Transparency: Clear communication about assignment purpose and requirements

  • Real World Applications: Making assignments relevant to real-world scenarios

  • Universal Design for Learning: Reducing barriers and increasing access. To learn more about this framework, read UDL: A Powerful Framework, explore the UDL on Campus website, and check out the CCA UDL Summit on March 28.

  • Social Knowledge Construction: Incorporating peer interaction and feedback

  • Trial and Error: Allowing students to learn from mistakes through revision

What Doesn't Work

Banning AI from All Classroom Use

Experts have argued that banning AI will not solve the problem of academic dishonesty. Instead, banning AI may contribute to existing achievement gaps. Key reasons not to ban AI from classrooms include:

  • Banning AI can widen the digital divide between students who have access to these tools outside class and those who don't

  • AI is becoming increasingly integrated into many professional fields and careers

  • Low-tech alternatives like handwritten exams can create accessibility barriers for disabled students, English language learners, and neurodiverse students

  • AI detection tools are unreliable and can produce false positives, particularly harming non-native English speakers

GenAI Detection Software

AI detection tools are fundamentally flawed and should not be used to catch or punish students. Key issues include:

  • Unreliable Accuracy: AI detectors only guess probabilities and regularly produce both false positives and false negatives

  • Discriminatory Impact: These tools disproportionately flag writing from:

    • Non-native English speakers

    • Students with communication disabilities

    • Writers from marginalized linguistic backgrounds

    • Anyone taught to write in more formal/standardized styles

  • Lack of Transparency: The detection algorithms are proprietary "black boxes" that cannot be independently verified

  • Explicit Warnings from Providers: Major AI detection companies specifically state their tools:

    • Should not be used as primary decision-making tools

    • Are not meant for punishing students

    • Should only be one small part of holistic assessment

  • Rapid Obsolescence: As AI language models evolve, detection tools quickly become outdated

  • Easy to Circumvent: Students can easily bypass detection by:

    • Slightly modifying AI-generated text

    • Using multiple AI tools

    • Employing AI tools specifically designed to avoid detection

  • Creates Adversarial Environment: Using detection software:

    • Erodes trust between students and faculty

    • Focuses on punishment rather than learning

    • May encourage more sophisticated cheating methods

AI Writing Red Flags

Adapted from AI Writing Detection: Red Flags, Office for Faculty Excellence, Montclair State University.

Currently, no software can detect AI-generated content with 100% certainty. As you review student submissions, in addition to any feedback from AI-detection software, check for the following red flags.

AI-generated content is often:

  • Affected by factual errors

    • Outdated information

    • Incorrect information 

    • Citing made-up sources

    • Note: features available with paid upgrades may mitigate these errors and make plagiarism less obvious.

  • Not consistent with assignment guidelines. A submission that is AI-generated may not be able to follow the instructions, especially if the assignment asks students to reference specific data and sources. 

  • Atypically correct in grammar, usage, and editing.

  • Predictable. It follows predictable formations: strong topic sentences at the top of paragraphs; summary sentences at the end of paragraphs; even treatment of topics that reads a bit like patter: “On the one hand, many people believe X is terrible; on the other hand, many people believe X is wonderful.”

  • Directionless and detached. It will shy away from expressing a strong opinion, taking a position on an issue, self-reflecting, or envisioning a future. 

    • For example: when asked to express an opinion, AI tends to describe a variety of possible perspectives, without showing preference for any one of them. 

Join Our AI & Higher Education Miniworkshop

The Hub is offering a professional development workshop to help faculty navigate teaching and assessment in the age of AI. This asynchronous online miniworkshop takes 1-3 hours to complete and provides practical strategies for maintaining academic integrity while acknowledging AI's growing role in education. You'll learn how to create clear AI policies, design both AI-friendly and AI-resistant assignments, and develop effective approaches for helping students use AI tools ethically.

Workshop topics include:

  • Evaluating AI tools for educational use
  • Creating discipline-specific AI guidelines
  • Designing assignments that promote authentic learning
  • Building academic integrity into course design

Visit the Academic Professional Development Website to register for upcoming workshop dates.
