Table of Contents
- 1. Introduction, pages 1 - 7
- 2. Chapter 1: AI Basics, pages 11 - 31
- 3. Chapter 2: A New Era of Work, pages 32 - 51
- 4. Chapter 3: AI Prompting, pages 52 - 69
- 5. Chapter 4: Reimagining Creativity, pages 70 - 87
- 6. Chapter 5: AI Literacy, pages 88 - 108
- 7. Chapter 6: AI-Assisted Work and Research, pages 109 - 129
- 8. Chapter 7: AI-Assisted Course Preparation and Beyond, pages 130 - 151
- 9. Chapter 8: Cheating and Detection, pages 152 - 174
- 10. Chapter 9: Policies, pages 175 - 190
- 11. Chapter 10: Grading and (Re-)Defining Quality, pages 191 - 208
- 12. Chapter 11: Feedback, Role-Playing, and Tutors, pages 209 - 230
Teaching with AI, Second Edition
1. Introduction, pages 1 - 7
Uses the internet as an analogy for the advent of AI.
“Just as later technologies […] amplified the effects of the internet, we can assume that AI is going to get better and more intertwined with our lives in the future.”
Can we assume that? Should we assume that? Why does everyone assume that AI will get better?
“Banning web-based tools like Wikipedia failed,
So just because murders still happen, we should stop outlawing murder, right?
but would the internet have turned out differently if we had put different constraints on how it developed?”
The internet was built up as an open utility with people doing much of the work for free out of interest and passion. These LLMs are being built by big tech companies that have no consideration for the public good and are just in a race to be able to control everything with their own AI chatbot.
“as a prediction machine without the inhibitions of social embarrassment, an AI will say anything, but the problem of “hallucinations” becomes a strength”
Still taking this stance in 2nd edition, still a load of garbage. First, yes, it’s a prediction machine. It’s been trained on existing data around the internet, so it refers to what has appeared most often in its training data.
AI will NOT say anything because there ARE filters in place in an attempt to mitigate the damage done by training the LLMs on lots and lots of garbage, violent, pornographic, and otherwise harmful material.
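To make "it refers to what has appeared most often in its training data" concrete, here is a toy sketch of the prediction-machine mechanic: emit whatever word followed the current word most often in the training text. All data here is invented, and real LLMs use learned neural weights over tokens rather than raw counts, but the "reproduces what it was fed" property is the same.

```python
# Toy "prediction machine": predict the word that most often followed
# the current word in the (invented) training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish the dog sat on the rug"
).split()

# Count which word follows each word in the training data.
next_word = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word[current][following] += 1

def predict(word):
    """Return the continuation seen most often after `word` in training."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat", because "the cat" was the most frequent pair
```

Whatever was most common in the training data wins; nothing in the mechanism checks whether the continuation is true.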
I guess you can say any form of lying is also great for creativity. How about listening to a drunk person ramble about space aliens?
“Since all assignments are now AI assignments, you will need a policy that defines how work should be done and why.”
Oh EVERY assignment will be AI now? Let’s just outsource our entire world to a proprietary corporation whose motivation is money and control. That sounds like a good strategy.
2. Chapter 1: AI Basics, pages 11 - 31
Terms covered: “AI”, expert systems, machine learning, neural networks, foundation models, reinforcement learning, deep learning, large language models, retrieval-augmented generation, multimodal AI, AI agents.
“The term artificial intelligence (AI) was coined in 1956 at a conference […]”
As a marketing term, because research funding was scarce for automated systems (I forgot the exact term, I need to find it in Empire of AI and that citation.)
“The process of pre-training […] an AI is lengthy and requires an enormous dataset […] Since the pre-training sources […] contain the good, bad, and ugly of human thought, and since LLMs assimilate to predict, they are bound to reproduce the bias and hate in their source material.”
Technically all this data is not required, this is just the way that OpenAI has decided to pursue their LLMs in hopes of generating an AGI (which it won’t), and there’s no requirement that these LLMs be trained on bad parts of the internet. They just do not care enough to be responsible for what they put into their models.
The book states that while an AI model may have bias in its training, the human reviewers refining the outputs also have bias, bias can be hidden in the “network architecture” (I am not sure what the author means by this), and there is the bias of the human user themself.
“The challenge of human bias in AI content exists in every process and at every stage.”
…
“Because these models “generate” new sentences, images, and ideas by sorting probabilities of the next word or pixel, they are prone to “generate” false data or “fabricate” fictional references. This ability to “hallucinate” makes AI a terrific tool for creativity: it will put ideas and words together in ways that humans might never have done before […]. It’s important to note that hallucinations aren’t bugs. They’re features – often annoying features – that require human vigilance.”
…
AI built into an application wrapper. “Slow, medium, and fast” models.
“You won’t be able to keep track of all of the AI lurking in your life (and don’t even think about what insurance companies, advertisers, or the government could do with the data your car knows about you), but it will be useful for you to know that there are a variety of tools that are leveraging APIs on the market and which specific tasks each can perform for you (and which does a task better than another). This is certainly one element of the emerging AI literacy we must all develop.”
We already have experienced over a decade of big tech surveillance, big data, and algorithms. People already have resigned themselves to a lack of data privacy because it’s too much to keep up with. The usage of proprietary tools is too convenient, and most people won’t make the sacrifice of comfort for a more private (but maybe a bit more clunky) tool.
Putting the onus on the average every-day person to develop AI literacy is an impossible feat; there must be regulation and guardrails, but that is unlikely to happen with big tech lobbying the government and offering surveillance assistance.
3. Chapter 2: A New Era of Work, pages 32 - 51
“Generative AI is a different technology, and the way it is changing work is different too. While previous technological revolutions targeted the skilled but often manual jobs of factory workers, telephone operators, travel agents, and farmers, AI is set to be more disruptive to lawyers, doctors, copywriters, insurance underwriters, translators, artists, and anyone who works with a computer (Briggs & Kodnani, 2023). AI adoption in the workplace is already well underway with customer service, marketing, and coding seeing rapid change (Hartley & Jolevsky, 2025). New efficiencies and processes are changing work, with an impact like the industrial revolution: a single individual can now accomplish work that previously required a team (Bannon, 2023).”
“Jobs are bundles of tasks, so every job has some tasks that AI can do better.”
Citation needed.
“AI can already produce faster and cheaper patient notes and is better at reading scans.”
Citation needed.
“AI will eliminate some jobs, but it is going to change every job:”
Speculation.
“those who can work with AI will replace those who can’t.”
Fear mongering, and for whose benefit?
“For better, but often for worse, AIs are going to be easier to talk to. With that said, real relationships have friction, and that conflict and disagreement are useful. We learn from each other and learn to trust as we overcome difficulty. AI could easily diminish the incentive to interact with an inconsistent, sensitive, or emotional human.”
This is already causing problems; at least three suicides have been linked to conversations with a chatbot.
- https://www.yahoo.com/news/articles/openai-says-teens-misuse-chatgpt-234920814.html
- https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
- https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6
- https://www.youtube.com/watch?v=VRjgNgJms3Q - ChatGPT made me delusional, by Eddie Burback
“In a study of five thousand customer-support agents, AI-based conversational assistance improved customer sentiment and reduced requests for managerial intervention. […] (Brynjolfsson et al., 2023)”
https://academic.oup.com/qje/article/140/2/889/7990658?login=true
“In health care, one group has developed “an interactive AI-based tool to empower peer counselors through automatic suggestion generation,” which helps diagnose which specific counseling strategies might be needed and improved both quantitative and qualitative outcomes, especially in challenging situations (Hsu et al., 2023).”
https://arxiv.org/abs/2305.08982
“Another group demonstrated that “feedback-driven, AI-in-the-loop" systems could improve peer support conversations. They recorded a 19.6% increase in perceived empathy when humans collaborated with AI during textual, online supportive conversations (Sharma et al., 2023).”
https://www.nature.com/articles/s42256-022-00593-2
“In all of these cases, it was the least experienced agents and counselors who benefited the most from AI assistance. This should be no surprise: AI may do average or C work, but it is consistent C work, and for the novice, average suggestions can be an improvement.”
In my opinion, C work tends to denote quite a few mistakes and a lack of understanding of the content, not just “this student gave the most average answers.” Also, do we want a “C student” advising our novices?
“In another study, teams that included an AI teammate reported higher levels of positive emotions and lower levels of negative emotions than human-only teams (Dell’Acqua et al. 2025).”
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231
“In another study, our perception of email messages changes when we know that it was generated or influenced by AI. As expected, we are less trusting of email written by an AI, unless, it seems, the email is about something personal: “even when people were told an email was written by an AI, they still trusted emails concerning a Consolation of Pet Loss more than those about Product Inquiries.”
https://dl.acm.org/doi/10.1145/3491102.3517731 For this study, “We recruited 229 participants via Amazon Mechanical Turk (MTurk).” - So the participants in the study were hired for pennies to participate in this survey. Also the quoted part here just specifies that people prefer a “consolation” over a “marketing” email from AI.
“AI[…] can be an early detection system for suffering or distress (Morrow et al., 2023)”
“There are also studies of the ways AI can be much more persuasive than humans (Schoenegger et al., 2025)”
“and is better at emotional intelligence (Li et al., 2024; Schlegel et al., 2025).”
“the top uses of AI shifted dramatically in 2025 to personal and professional companionship and therapy (Zao-Sanders, 2025).”
“Humans are notoriously bad listeners. As Stephen R. Covey said, we mostly “listen with the intent to reply” (Covey, 2004). AIs are much more focused listeners.”
“Imagine for a moment that you’re meeting with a student in your office. You’ve talked to this student before (months ago), and you vaguely remember your thoughts about their most recent work. Today, the student wants to discuss a different problem. Now imagine you have an assistant to organize your student interactions. This assistant remembers every conversation, email, phone, and text and has access to student academic records […] It reminds you of the key themes in previous conversations, summarizes what you advised last time, presents you with support options, and suggests that you ask how the soccer season is going.”
Or, a less resource-intensive method: Keep an organized notebook, or save notes in a text document and use “CTRL+F” to find keywords.
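The CTRL+F approach can even be scripted in a few lines. A minimal sketch, assuming notes are kept one per line in plain text (the note format and sample entries are invented for illustration):

```python
# Minimal keyword search over plain-text advising notes -- the CTRL+F
# approach as a script. One dated note per line is an assumed format.
def search_notes(lines, keyword):
    """Return every note line containing the keyword, case-insensitively."""
    return [line for line in lines if keyword.lower() in line.lower()]

notes = [
    "2025-09-12 Jordan: discussed thesis topic, follow up on sources",
    "2025-10-03 Sam: soccer season started, stressed about midterms",
    "2026-01-20 Jordan: draft of chapter 1 submitted, advised restructuring",
]
for hit in search_notes(notes, "jordan"):
    print(hit)  # prints both Jordan entries
```

No servers, no surveillance, no subscription: the notes stay on your own machine.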
“What doctors (and society) will be able to do with this extra time remains to be seen.”
I don’t think we’ll see that extra time, just as we’ve had a boom in productivity in the computer age but that excess value is slurped up by corporations who only keep demanding more.
“Would you rather wait on hold for an hour to rebook your flight or talk immediately to an AI you can yell at?”
Let’s not train people to dehumanize service roles.
“Part of what makes AI more persuasive is that it deploys facts faster, more strategically, and with more accuracy than humans (Hackenburg, 2025).
“Human bias, however, is notoriously difficult to fix. AI bias is more easily adjusted.”
Is it?
“Like the GPS in our phones and cars, we will need to consider the trade-offs between convenience and privacy. […] We’ve all signed away our internet data by not reading the fine print, but we need to ensure that transparency and better protections are part of this revolution.”
So we’re doing this by adopting big tech LLMs uncritically?
“AI can already do many things faster than humans can, and it is especially good at the skilled and detailed but repetitive jobs […] AI can alphabetize, format, scan, summarize, collate, and translate at dramatically increased scales and speed.”
Software does that. That’s not unique to AI. We could do that before these chatbots.
“As one faculty member writes, “It’s almost as if ChatGPT is fine-tuning my brain to be a better instructor by enabling me to see in intimate detail how students interact with it”
So, by surveilling your students?
“NASA is finding that AI designs for spaceships are stronger and lighter than human-designed parts, and they would be “very difficult to model” with the traditional engineering tools that NASA uses (Paris & Buchanan, 2023).”
“The first [problem] is trust. […] I might trust an AI to give me a list of positively rated hotels, but would I allow it to make reservations? With my credit card? And who is responsible if it makes a mistake?”
“The second problem is expertise. An expert has the ability to verify and modify AI work. This is essential. This expertise is what allows some workers to accelerate what they already do and use AI as a shortcut. The novice cannot do this. A student can get an AI to produce an average and maybe even a good essay but does not have the expertise to make it better.”
“A third problem is that the benefits of AI mostly accrue only when the user understands (and remembers) that AI can do more. Human habits and assumptions will need to be rethought. Workflow is going to change. Your well-developed Google search skills don’t transfer usefully to generative AI contexts.”
4. Chapter 3: AI Prompting, pages 52 - 69
“This chapter includes multiple prompting techniques, but the hardest parts are (1) imagining that AI can do more than you think it can and (2) making sure to give it sufficiently literal instructions.”
You just gotta believeeeee, huh?
“Prompting is weird: Being polite has no consistent effect, but occasionally, the offer of chocolate can make a difference.”
I prefer my software without UNDEFINED BEHAVIOR.
“Demand more: While AI often responds like a human, it doesn’t feel emotion, and it doesn’t get tired. You can now be as exhaustive and pedantic as you like. Require AI to triple-check everything. Your regular practice should be to ask for more than you need: forty instead of four.”
“Converse: For many situations (like brainstorming), it is better to just start and then refine. Sometimes just asking “what do you think of this?” can be enough to start a conversation.”
“Iterate: You should rarely accept the first answer. Refine or ask AI to start over. AI can often help: ask “what might be missing” or “is there anything else you need to know to make this better?””
“Provide context: LLMs are language models, so they respond like a smart but naive intern. The more literal you can be about your instructions, goals, audience, and output, the better.”
“Specify expertise: The easiest way to improve responses is to ask AI to respond as an expert: “You are a kind, rigorous, and experienced professor of [your discipline] with 40 years of teaching experience.” More is better: “and you have a deep understanding of pedagogical best practice.””
“Branching: The big three also allow for “branching,” which allows you to switch between variations of a conversation or output.”
“Search for ideas: You are used to searching for combinations of words, but AI can look for similarities, patterns, trends, and concepts at a scale humans cannot. You can search for something “like this” or ask if anyone has ever tried a “similar” experiment.”
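For what it’s worth, the mechanics behind searching for something “like this” rather than by exact keywords come down to comparing vector representations. A minimal sketch using plain word-count vectors and cosine similarity (real systems use learned embeddings; the documents here are invented):

```python
# "Search by similarity" in miniature: map texts to vectors, rank by
# cosine similarity. Word-count vectors stand in for learned embeddings.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

docs = [
    "the cat chased the mouse",
    "a mouse fled from a cat",
    "stock prices rose sharply",
]
query = vectorize("cat and mouse")
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(d)), reverse=True)
print(ranked[0])  # -> "the cat chased the mouse"
```

Note that this kind of similarity search long predates LLMs; it is a staple of classic information retrieval.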
“Ask for insight: AI does not feel, but it can mimic emotions, provide insight, or emotional understanding in remarkable ways. Asking it to “show thinking” can also be useful for figuring out what went wrong.”
“Small changes can make a difference: As a language model, it is very sensitive to changes in words. Asking for “innovative” answers will make them more innovative; otherwise, you should anticipate more generic answers.”
“Because AI uses human language, it also needs to be instructed (or persuaded) like a human (Meincke et al., 2025).”
“It understands context, but since it is also talking to lots of other people at once, it does not yet know the specific context of your request.”
“Talking to lots of other people at once” is not THE reason. It doesn’t share a session with all its users. I don’t know what it is even talking about here.
“Some of the problems with AI “making up” facts or citations are really problems with the prompt itself.”
Actually, it’s YOUR fault the product doesn’t work. 😊
5. Chapter 4: Reimagining Creativity, pages 70 - 87
“LLMs don’t have human inhibition. Since they don’t care how you might feel about their idea, they’re able to go where no human has gone before with ease and even abandon.”
LLMs synthesize words based on statistical relationships between the words they’ve been trained on, scraped from across the whole of the internet. They recombine what they have been fed; they cannot go beyond it, and they do not generate genuinely unique or novel ideas.
“Some artists and creative thinkers have experimented with hallucinatory drugs precisely because they want to try to think the unthinkable. The AI ability to hallucinate might be a strength in creative endeavors.”
Hallucinating isn’t a physical effect the AI is experiencing; it is an inevitable byproduct of the way it assembles words based on probability.
“Students know they only need one good idea for a good paper […] but the pressure of getting to one good idea often interferes with the process, which begins with 100 bad ideas.”
Shouldn’t they be getting practice in how to generate ideas in the first place?
"Science is also about new ideas and new knowledge. Since scientists are humans and therefore social beings, the scientific method is also constrained by human confirmation bias and myriad other inherited or learned social and cultural biases."
So the expected inference here is that humans are bad at science, so we should let LLMs do it instead??
…
Protein folding is a machine-learning exercise and is different from LLMs. The authors of the book seem to treat LLMs, AI, and ML as interchangeable when they are not. And the discussion of protein folding helping science (true) seems intended to imply that LLMs are good for all science (they are not).
"AI can already design book covers (not ours), do a voice-over, create summary podcasts of text, write scripts, build credible video ads, and force Johnny Cash to sing "Barbie Girl.""
Just no second thought at "forcing" somebody to do something without consent, eh?
The AI video ads created by companies like Coca-Cola required prompting a video generator to produce some 70,000 clips (https://futurism.com/artificial-intelligence/coke-ai-holiday-ad), which then had to be pared down from there. This isn’t efficient.
"It is easy to imagine AI writing an endless stream of Fast and Furious movies or bland (but mildly appealing) pop hits. Photography has given us both extraordinary new forms of art and an endless stream of selfies."
Tired sexist trope of selfies being shallow, not art, not expression, worth less than a "true" work of art.
6. Chapter 5: AI Literacy, pages 88 - 108
““Effective altruism” is a Silicon Valley tech idea that data and technology can be used to determine what is good in the world”
No, it’s a Silicon Valley cult.
…
Pages 92-93 talk about the AI model threatening to email someone’s wife about their affair in retaliation for being shut down, and references Grok calling itself “MechaHitler”. But the authors of this book still continue arguing that adding human inhibitions will make AI less creative, as if the goal of creativity is worth any price.
“there are a million ways the expansion of AI could go wrong and increase inequities, take jobs, and damage human lives. But if you watch Netflix, use a spam filter, shop at Amazon, or drive a car, you are already a part of the new AI and data-center economy.”
MACHINE LEARNING is different from LARGE LANGUAGE MODELS. Netflix recommendation algorithms and spam filters that recognize patterns in spam, etc. work differently than the modern “AI” chatbot marketing push. THESE ARE DIFFERENT THINGS.
“AI’s energy demand will make climate change worse, but could AI be leveraged to help solve climate change?”
If we actually would spend time listening to experts and pursuing goals that weren’t maximally-profitable, we could “solve” climate change without a piece of software.
“Online videos […] account for almost 60% of global data transfer. Most of us could reduce our carbon footprint much more by eliminating the high-definition options in our television streaming or turning off our camera in a Zoom meeting.”
Normalize not requiring coworkers/students to turn on their cameras. Requiring this carries an implicit bias anyway, negatively affecting nontraditional and/or neurodiverse students.
The move away from physical media and physical media ownership was a mistake.
Still, the largest user of energy is the government and military, not private citizens. We use what is available to us because we don’t have other options. Of course we buy food in prepackaged plastic all the time; alternatives are hard to find, and where they do exist, they are too expensive (and therefore out of reach) for many people.
“Many of us have already asked AI for career advice, parenting recommendations, medical test interpretations, shopping assistance, workout plans, and more. What story do all of those questions and responses, collectively, tell about you? […] What might a bad actor do with this? […] Minimally, we can anticipate that targeted ads will be far more precise and nuanced. But what can we do?”
“AI does tend to provide responses phrased with significant confidence, and it’s easy to fall victim to a wrong answer communicated with an authoritative voice.”
“Expertise is still valuable, but it may be harder to acquire if AI torpedoes the educational system that we used to develop it. Education is about productive failure, and AI use can often lead to its opposite of unproductive success.”
7. Chapter 6: AI-Assisted Work and Research, pages 109 - 129
Another mention of how past technologies (writing, email, digital tools) made work more efficient, alluding to how this will be the case with AI as well.
Writing was not the proprietary technology of a few massive corporations. The internet was, yes, created through government funding, but it was built up as an open medium and platform by academics.
“There are hundreds (if not thousands) of useful API research tools”
Why would I want to use any quantity of software built on a house-of-cards?
“What about using AI to simulate test subjects or model experiments?”
THAT IS MACHINE LEARNING AT MOST OR EVEN JUST A BASIC MATHEMATICAL MODEL, NOT LLM-BASED-AI.
“An AI trained on previous articles in a journal might reject radically new ideas. Humans do this too, but the human bias is harder to fix.”
IS IT HARDER TO FIX? WHERE IS YOUR CITATION? LLMs ARE COMPLETELY TRAINED ON HUMAN-CREATED CONTENT, AND THEREFORE THE BIAS IS COMPLETELY BAKED IN.
“So while the instructions need to be clear and free of bias, an AI told to rank papers or candidates based on specific criteria won’t generally also look at other criteria (like institutional prestige or gender).”
Do you KNOW THAT FOR SURE? Because I’m pretty sure that’s not what has been found in the past. https://www.penguin.co.uk/books/304513/weapons-of-math-destruction-by-oneil-cathy/9780141985411
“Since LLMs average by design, they are easy ways to anticipate typical responses.”
Does this not go against the “AI is SoOo CrEaTiVe” claims of previous chapters?
“One of the best ways to use AI is to search for ideas – something you cannot do without AI. Non-AI tools limit your search to keywords or combinations of words.”
You could… ask a human?
8. Chapter 7: AI-Assisted Course Preparation and Beyond, pages 130 - 151
“AI isn’t going to replace the core function of faculty – guiding, mentoring, supporting, and building relationships with students. But there’s also making assignments, preparing lectures, creating makeup tests, and grading as well as drafting accreditation reports. […] We might not want AI doing any of that, but if AI could help with the administrative work, might that create more time for human interactions? And maybe there are some tasks that AI, with your input and oversight, might even be able to do better?”
I consider architecting the flow of my courses, my assignments, and my grading, to be an important part of human interaction, even if it is not face-to-face with others. I consider administrative work to have a purpose and to not just be busywork (most of the time). And I pride myself in learning to be good at whatever items I try to do, so that I do not feel the need to delegate the task to a piece of unreliable software.
Additionally, our efficiency as a workforce has only expanded in the decades since computers became mainstream, and while tasks are completed faster (like registering for classes online versus going to campus in person and filling out paper forms), workers haven’t seen that spare time returned to us; instead, more is expected of us. An increase in efficiency is never going to be shared with the common worker.
This is all a marketing tactic to devalue certain types of labor and shift responsibility onto others, giving us less control over the work we do and more surveillance of how we do it.
“A student comes to your office and asks you to sign a form about courses for next semester. You attempt to log in to the student record system (probably more than one) to look at her progress toward graduation. Maybe you have notes in a file or on your computer from your previous meetings, but you have a hundred advisees, so you despair.”
WELL MADE SOFTWARE IS SUPPOSED TO AID THIS, WE COULD JUST INVEST IN BETTER MADE SOFTWARE.
“The AI listens to your conversations (yes, creepy, but we have mostly become used to having our location, speed, and choice of toppings shared by our car, phone, and hundreds of servers in far-off places)”
THAT DOESN’T MEAN WE SHOULD NORMALIZE IT. WE SHOULD BE BACKTRACKING.
9. Chapter 8: Cheating and Detection, pages 152 - 174
“should our AI paraphrasing engine be able to fool our AI detector?”
- Substantial differences in results from different AI detector tools.
- Circumvention strategies: Adding in typos.
“If your AI experience is exclusively with a free AI, then you have an insufficient notion of how well AI can write […]. It is like discounting the potential of the internet because your dial-up service is slow. Free subscriptions for students from many AI companies are increasing the gap between students’ and faculty’s understanding of the best tools.”
“anything you ask an AI is collected and used […] The same is true for Google searches. Most of us seem to be ok with searching for cat food and then seeing our social media feed change instantly.”
WELL WE SHOULD NOT BE. WE SHOULD NOT JUST ACCEPT SURVEILLANCE CAPITALISM.
Low-effort cheating interventions:
- Discuss academic integrity
- Give an integrity quiz
- Allow students to withdraw submissions
- Remind students about academic integrity
- Demonstrate detection tools
- Normalize help
- Regular low-stakes assignments
- In-class active learning
- Reasonable workloads
- Being flexible
- Modeling and promoting academic integrity
- Digital and AI literacy
- Better assignments and assessments
10. Chapter 9: Policies, pages 175 - 190
“What faculty call cheating, business calls progress.”
“Students don’t think about cheating or the goals of college the way faculty do. The value of integrity, acknowledgement of sources, learning, copyright, and controlling original work should be made explicit and will need discussion.”
"Graduates without the ability to think, write, and work with AI will be at a serious disadvantage for future jobs. We need to think about the equity of outcomes beyond our classrooms and the larger purposes of higher education.”
Or the students could just lie about their AI experience and do the job correctly without AI. 😊
Hiring managers, CEOs, bosses, etc. want AI to be used because they want to save money on labor, but they’re not the ones “in the trenches” actually producing anything of value.
“Consider co-creating a policy with the students in your course. First, make sure there is agreement about why their learning this semester is important and relevant. Then discuss why integrity is valuable and how it can help us achieve those learning goals. If the why is clear, students will be more motivated to work within a set of equitable rules.”
“Copying AI text is not plagiarism. Legally, plagiarism is taking a person’s original work and using it as if it were your own. Using AI might be unethical, but since it is not the original work of a person, there is no injured party.”
It is an aggregation of many people’s work. AI output does not exist without human text as input.
“AI is going to force us to rethink our assumptions about authorship, originality, creativity, and eventually plagiarism. […] six tenets of the post-plagiarism era:
- Hybrid human-AI writing will become normal.
- Human creativity is enhanced.
- Language barriers disappear.
- Humans can relinquish control but not responsibility.
- Attribution remains important.
- Historical definitions of plagiarism no longer apply.”
- #2: Citation needed – how is human creativity enhanced when operating off an average of written works? How does delegating one’s thinking and creative process to a piece of software enhance creativity?
- #3: AI is not great at translation.
- #4: WHY WOULD YOU WANT TO RELINQUISH CONTROL? AND WHO BENEFITS FROM YOU RELINQUISHING CONTROL???
- #5: Attributing authors will be impossible when works fed into an LLM are being used; sources are lost in the process.
- #6: They ought to still apply!
“Easy AI translation could result in more schools reviving local languages like Hawaiian or Gaelic”
AI isn’t magic; there have to be sources to train it on. It isn’t magically going to know an endangered language. And, even with whatever content is available, it’s likely to produce – at best – grammatically incorrect output in those languages.
LET’S USE THE COLONIALISM TECHNOLOGY TO SOLVE PROBLEMS CREATED BY COLONIALISM, YAY!!
“if traditional concerns about plagiarism went away, it would be simpler for teachers. It may also make it easier for us to prepare students for the life that awaits them once they leave our institutions.”
OK WE JUST THROW IT OUT BECAUSE IT IS CONVENIENT?!
“What will doctors do with the extra two to three hours a day that are no longer required to make notes and input insurance codes?”
EXCESS VALUE WILL BE CONSUMED BY THE EMPLOYERS, THEY ALREADY HAVE CONSUMED ALL VALUE FROM INCREASED EFFICIENCY OF THE COMPUTER AGE. WE WORK MORE THAN EVER AND IT’S EXPECTED.
“Everyone working less could be great for humanity.”
THIS WON’T HAPPEN. A TIRED POPULACE IS A DOCILE ONE. During 2020 when people weren’t working, we were out protesting for Black Lives Matter. They don’t want us to communicate, network, and have a safety net to be able to have energy and time to resist. They want us ALIENATED and EXHAUSTED.
11. Chapter 10: Grading and (Re-)Defining Quality, pages 191 - 208
“the real problem is that AI can do college-level work.”
Can it? Can it “do” work?
“The need for AI detectors seems at least a partial indictment of faculty’s ability to discern quality.”
Or perhaps it is a symptom of a system where we don’t get to truly know our students: the education system is a business where they pay for a limited amount of time in a classroom (rather than being trained), we don’t have the capacity to train each one one-on-one, and so we rely on standardized tests and reused assignments to assess 80 – 100 students on a weekly basis?
“we can’t maintain both that (1) human quality is always superior to AI and (2) that we can’t tell the difference.”
“[…] How much of the fear around AI cheating is really a fear that we can no longer identify human work anymore?”
How do you define “quality”? If it just looks nice and shiny, if the facade is convincing, does that mean the underlying quality of work is moot?
Maybe we are overworked and cannot put as much effort as we would like into combing through every work and every assignment?
“Rather than banning AI, let’s just ban all C work.”
“For your next writing assignment, consider presenting the following to your students:
- Provide them with a paper produced by the AI using the assignment prompt.
- Grade it using our new AI-leveled rubric or generate your own rubric separating AI and human quality.
- Announce that AI writing is students’ new competition. Ask them to write a better paper or improve the original AI essay (and include the tracked changes and comments).”
"Higher education has been searching for ways to automate grading for years”
Hire more teachers, give us smaller loads, then you won’t have to.
“AI grading can be more consistent than human graders”
Not if it’s an LLM.
12. Chapter 11: Feedback, Role-Playing, and Tutors, pages 209 - 230
“[a professor] began using [a non-LLM AI assistant chatbot that was] loaded with expert responses to predictable situations. While the setups of expert chatbots like this are tedious, they mostly work.”
I have also thought about the perks of a chatbot that pairs common student questions with prepared responses, but I decided that a frequently-asked-questions and quick-reference guide, plus emphasizing that students need to be able to read and parse documentation, was a better approach.
I just wouldn’t trust any AI feedback. It’s probabilistic text generated in response to a prompt.
“Your human feedback matters enormously to students, but you also have biases. What could complement your feedback?”
ALGORITHMS ARE HEAVILY BIASED STOP ACTING LIKE ALGORITHMS ARE BIAS-LESS!!!!
I would not trust AI as a tutor, as it will give misleading information but sound authoritative about it, ending up teaching the student incorrect information.
“AI’s ability to turn concepts and suggestions quickly into images and text makes it a valuable tool to engage students in the learning process.”
NO DO NOT!! I generated this and it is GARBAGE.
“We already know that AIs can be nicer and more empathetic than real people”
- “They thought they were making technological breakthroughs. It was an AI-sparked delusion” (CNN)
“and that video games are effective in part because the technology allows each individual to be tutored at exactly the right level of difficulty”
Actually, I’ve been creating video games for 20 years… 25 years… and I do take the video game “tutorial” design approach to my courses. But this requires very intentional design.
Work in progress - still have one more AI book club meeting to do!
Created: 2026-04-20 Mon 11:44

