AI is still here and is coming to get us all! This Halloween season Brent and Ixchell bring you a list of thirteen scary approaches to AI. The discussion points range from assumptions about student cheating and AI proficiency to neglecting AI’s impact on critical thinking and the digital divide.
AI is coming to get you~~
Ixchell Reyes 0:04
What are some of the scary approaches to AI we should all reconsider?
Brent Warner 0:21
Welcome to the DIESOL podcast, where we focus on Developing Innovation in English as a Second or Other Language. I'm Brent Warner, Professor of ESL and author of an upcoming ISTE book, which we'll talk about a little bit more later, but I'm here with Ixchell Reyes, award-winning educator in innovation and professional development. And Ixchell, you are in Saudi Arabia.
Ixchell Reyes 0:45
I am! Marhaba!
Brent Warner 0:47
yeah. How is it?
Ixchell Reyes 0:49
It’s great. I love it. It’s wonderful,
Brent Warner 0:51
Wonderful. So we're a little behind, you know. This happens as you and/or I are traveling, and we get behind on all of the episodes that we're planning. I think everybody who listens is friendly to that idea, I hope. But today, we do have a little tradition now on the show of Halloween episodes, right? Yeah, yeah. What do we got going on?
Ixchell Reyes 1:18
It's our third year. This is our third year, man. Third year of scary practices.
Brent Warner 1:26
Yeah, so what did we cover last time? I can't even remember, um,
Ixchell Reyes 1:30
We covered scary things that teachers should stop doing in the classroom, and also scary things that students probably need to stop doing in the classroom.
Brent Warner 1:41
That's right, yeah. And today,
Ixchell Reyes 1:46
With AI now part of our lives, we are covering, yeah, scary approaches to AI.
Brent Warner 1:53
Yeah, alright. So very scary,
Ixchell Reyes 1:56
Scary approaches.
Brent Warner 1:57
We need a Wilhelm Scream out there.
Ixchell Reyes 2:00
Eee eeh!
Brent Warner 2:00
Alright. So that's the Psycho, the Psycho knife scene,
Ixchell Reyes 2:06
The psycho approach to AI.
Brent Warner 2:08
I was thinking the Wilhelm scream – you know, the one that's always in the movies. Anyways, I know every time I talk about these kinds of movies, you're like, what are you talking about? Okay, so let's delve into the AI world. Okay, so, Ixchell, we've got a handful here. We do have our 13 scary approaches, and we're splitting them up a little bit. In the first half, we're kind of talking about teachers' approaches, and then in the second half we'll talk about kind of a combination of student and/or teacher approaches, right? So let's start off with the first one, which I think is just a big deal. What do you got?
Ixchell Reyes 2:51
Yeah, well, I think a scary approach is making assumptions that students are using AI to cheat just because they have access to it. And this is a huge pet peeve of mine, but it's actually a scary approach to always suspect your students, right? I mean, what does that tell you about how much you trust them, or what rapport you have going on in the classroom? Yeah?
Brent Warner 3:14
I mean, at the end of the day, this one kind of covers the vast majority of the problems around it, which is: well, what are we doing in the classroom to build a community of learners, right? Is it about chasing that grade? Is it about, you know, really learning something in the classroom? Now, all of us have different setups, right? But we know there are just so many people going, well, students are cheating and this and this and this. But we also know a couple of things that are really worth paying attention to. And the number one thing is, students do not cheat because of access to technology. They cheat because they are stressed out, because they have pressures, like external pressures, like their parents saying, hey, you have to get this thing done, or whatever else it is. Maybe they have financial burdens, or time burdens when they're overworked at something else and they can't make the time to get all these things going. So it's not about access to the tool. It's always other things happening, right? And so then the question becomes, not, should we block AI, right? It becomes, well, how am I supporting my student, or how am I redesigning my curriculum in ways that make it a better opportunity for students to succeed in this class? Now you can go back to, you know, Grading for Equity by Joe Feldman, or the ungrading books, and that whole movement has been really great for me in my credit classes, to have already been involved with those settings. But it does take work, right? So teachers who have already been doing the work to be more equitable graders are having fewer problems, I think, as compared to the teachers who are, like, hardcore old-fashioned: we've got 1000 points in this class, and you have to finish these assignments, right?
Those ones are probably having more problems with these ideas, because the way the classes are structured is based on, you know, "get that grade" and not "show me your learning,"
Ixchell Reyes 5:18
right?
Brent Warner 5:20
So, alright, yeah, big one. What’s up next?
Ixchell Reyes 5:22
I was going to say the next scary approach is using AI checkers on students. And I think this one maybe falls kind of on the back of the first one we mentioned. Because if you're constantly trying to catch your students cheating, again, what does that say about the rapport you have in the classroom? What does that say about how you've structured your assignments and explained ethical use of AI, or ethical use of information, in the classroom? And also, the goal is not to have this culture of gotcha, right? And I think that oftentimes teachers fall into that, yeah,
Brent Warner 6:04
Yeah, well, it's very easy to get into that, like, oh, now I'm policing your work instead of, yeah, we're working together on this.
Ixchell Reyes 6:12
And there's the whole fact that AI checkers don't necessarily detect AI, or they falsely detect AI, right? There are all sorts of cases going on right now about that. So that'll continue to be a conversation.
Brent Warner 6:28
Yeah, there's a lawsuit going on right now about a student who got falsely accused, and now they're going to court about it. It's really interesting. So another thing to think about, and this is one that I think we pointed out in the past: if you have a class of 25 students, and let's say your AI checker has a 96% success rate on accurately calling out what is AI and what's not, right? And those numbers are very questionable, but 96 sounds great, right? But when you have a class of 25 students, that means that one student, every time you do this, will be falsely accused, or, you know, falsely marked one way or the other, right? What number are you comfortable with when it comes to falsely accusing a student who did their work properly and saying that they're lying, that they're using AI? Like, that's a real question for me, and I've asked this before. My personal number is, I am comfortable with 0%, right? That's the number where I'm at. But, and maybe the phrasing of this is a little too aggressive, are you potentially okay with ruining a student's life for the rest of their –
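Brent's back-of-the-envelope math here can be sketched in a few lines of Python. The 96% figure is the episode's illustrative number, not a measured benchmark for any real detector, and the simple flat-error-rate model below is an assumption for the sake of the example:

```python
# Expected number of students mislabeled each time a whole class is run
# through an AI checker, assuming a flat error rate of (1 - accuracy).

def expected_misflags(class_size: int, accuracy: float) -> float:
    """Expected count of falsely marked students per full-class check."""
    return class_size * (1.0 - accuracy)

# 25 students, 96%-accurate checker: roughly one student misflagged
# every single time the class is checked.
print(round(expected_misflags(25, 0.96), 2))
```

Run repeatedly over a semester, even that "one student per check" compounds, which is the heart of the concern in this segment.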
Ixchell Reyes 7:37
But you know what, the student who's in that lawsuit, I bet it could potentially ruin their life.
Brent Warner 7:45
Yeah, yeah. I mean, they're going to have a lot to deal with. Now they're always going to be the student that sued about AI. And even if they were in the right, like, they're –
Ixchell Reyes 7:54
It doesn't matter. It's put a stop to a lot of what they would normally be doing right now, carrying along with all the other classes. That's right. All right. Okay, another scary approach. This is one of my favorites, or I guess not one of my favorites, one of the ones I like to point out: assuming that students know how to use AI appropriately just because they're from the quote-unquote digital generation, or zillennials, or what have you, right? This is false. Yeah, just because students are now, you know, growing up with one or two cell phones and cameras all over the place does not mean that they know how to use any kind of technological tool appropriately, right? And I've seen teachers say, oh, you can figure it out, you're young. It's like, wait a minute. Did you just push your responsibility of teaching how to use a tool appropriately onto the student?
Brent Warner 8:55
Yeah, this one is for sure one to pay attention to. I heard this a long time ago, and I'm kind of assuming it's still true, but there's this idea that the younger generations are actually less digitally savvy, because they have things that are so well designed that they don't really learn how to, quote-unquote, hack their way through the user interface of programs, right? A lot of these developers are so good at making everything a smooth step-through, where you don't have to think about the process, that a lot of people are actually worse at figuring out how to use tech, because it's been such a smooth operation for them, right? It's kind of like, you know, now we have fully automatic cars that are essentially just great computers on wheels. It's great and very useful, and we all know how to drive a car, but fewer and fewer people know how to fix their cars, right? Right? And so it kind of goes into that thing,
Ixchell Reyes 10:03
Right. Yeah, okay, so the next one is refusing to address AI with students in the classroom. So just ignoring the whole movement overall, even though we're never going back, right? Just ignoring it and not having conversations about it, about how to use it, when to use it. Or, I guess I would say, banning it, because I've had some teachers tell me, nope, my solution is I'm not allowing them to use it, and that's it. We had the conversation, and they're not allowed.
Brent Warner 10:38
Yeah, sounds less like a conversation and more like a lecture.
Ixchell Reyes 10:44
You know, What you resist persists, yeah.
Brent Warner 10:49
So that is an interesting one. You know, how do you deal with that, right? It doesn't take a lot. Even if you're going to say, hey, there's no AI in my class, still be honest with your students about the conversation and say, hey, here's why. This is what's going on, right? Like, these are the concerns that we have right now, or that I'm worried about in relation to my pedagogy in my class. That doesn't necessarily apply to other teachers' approaches, but this is why I'm doing these things. Students do respect that, right? When you explain why things are happening, it's way more likely that they'll go, okay, I get it, we're going to do this, right? But if you're just going to say, hey, there's no AI in this class, don't talk to me about it, I am morally outraged at this, or whatever else it is, then they're not going to feel like they can have a conversation with you about it. Which leads them to say, like, well, maybe I can sneak away with this stuff –
Ixchell Reyes 11:43
Yeah, then it becomes a taboo. You don’t want misuse of AI either.
Brent Warner 11:49
Okay, number five. Alright, next.
Ixchell Reyes 11:52
The next one is assuming that AI destroys critical thinking. And Brent, you and I were talking about this pre-show, and I mentioned that it's not about AI, right? Even without AI, we were still worried about critical thinking. Yeah, it's just that the approach to it has to be redesigned. And we've had to redesign something so quickly while facing the growth of AI, right? And so that becomes a daunting task for many, especially if you were really good at what you used to do. Yeah, but again, I've always been of the other camp, that with AI you can still teach critical thinking. It's just a different way of approaching it, a different way of looking at problems. It's a different tool.
Brent Warner 12:44
Yeah, we had a meeting this week about AI in our department, and one of our teachers, a wonderful, outstanding teacher, super sharp, she knows what she's doing in the classroom, all these things, but it was a concern for her, and she was saying, "Hey, I'm seeing, like, less critical thinking ability," and so on. And it's totally legitimate, right? Like, I understand those concerns. But it's also to say, are we really going to expect that students are not going to have access to this? The ways that we have learned until now are fine, but we have to make these adjustments to be able to say, hold on a second, we are living in a new world. So, you know, are we doing more project-based learning opportunities to show the critical thinking through these processes, for example? And so again, it's a legitimate concern, but also, I think it's on us as teachers to start to shift the ways that we're approaching those things.
Ixchell Reyes 13:43
Yeah, and that's not to say it won't take more effort, because we are redesigning, right? Our approach, we're redesigning it. But again, I think it's scary to just blame AI for that.
Brent Warner 13:54
Yeah, yeah. Okay,
Ixchell Reyes 13:56
All right. What’s the next one, Brent?
Brent Warner 13:59
We are on number six, which is assuming that all students have equal access to quality AI, right? I think, you know, again, that kind of goes back to the students knowing how to use them, but also, just, what do they have access to? So one student might have access to paid versions of advanced models, and another student might be getting the, you know, oh, I can interact with Microsoft Copilot for three conversations until it resets me, and then I'm done, right? And so they're not all getting the same thing necessarily. They can't all necessarily afford different versions of things. Again, some of the free ones are getting pretty good at this point, but you still want to pay attention to that. Because one of the problems that happens with the assumptions on students using AI is that the students who are really good at it, and have access to the better ones, are less likely to get quote-unquote caught, because they know how to re-prompt it. They have the high-quality versions, and they have the re-prompting skills to work with it, to be able to sneak past the detectors, for example. And so you're actually ending up punishing the people who have access to fewer resources, which is definitely not an equitable approach.
Ixchell Reyes 15:18
Okay, yeah. Number seven. Number seven.
Brent Warner 15:22
No clear policies on use expectations in your classes.
Ixchell Reyes 15:27
This is where we should scream, ah, I don’t know, scream out of fear or frustration or both.
Brent Warner 15:36
Yeah, you know, I mean, it's hard, but you could do this right now, in the middle of a semester, right? I've had this conversation with a lot of teachers. I was at a presentation this weekend, and a lot of teachers are like, hey, I didn't do anything, and therefore it's too late until spring semester; this is what I'm going to start doing in the spring. It's like, no, you can start talking to your students right now and saying, hey guys, we're making a change to the approach. These are the things that we're going to be doing. These are my expectations. And so you can still have the no-AI policy if you have a reason for it, right? But if you have that written out and say, hey, this is an addition to the syllabus, or this is a new classroom policy that we're all going to agree on or try and work out together, you can absolutely build expectations around that, even in the middle of a semester. And then, just very briefly here, a lot of people are doing, like, a scaled acceptable use, which is like: you can use zero AI, you can use it for brainstorming and outlining, you can use it for actually helping you build content, or you can use it totally freely. We're seeing lots of different people take different approaches to how much AI is considered acceptable, and then, like, at what level do you need to start citing it in your work, for example.
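One hypothetical way to write out the "scaled acceptable use" idea Brent describes is as a simple lookup table, so every assignment can declare its level up front. The level names and wording below are illustrative assumptions, not a standard scale:

```python
# A sketch of a scaled AI acceptable-use policy, loosely following the
# levels mentioned in the episode (none, brainstorming, content help,
# free use). Wording is illustrative, not an official rubric.

AI_USE_SCALE = {
    0: "No AI use permitted for this assignment.",
    1: "AI allowed for brainstorming and outlining only.",
    2: "AI may help build content; all AI assistance must be cited.",
    3: "Free AI use; cite the tools and describe how you used them.",
}

def policy_for(level: int) -> str:
    """Look up the expectation text for an assignment's declared AI-use level."""
    return AI_USE_SCALE[level]

# e.g., printed at the top of an essay prompt:
print(policy_for(1))
```

The point of spelling it out like this, per the episode, is transparency: students see exactly which level applies before they start, rather than guessing.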
Ixchell Reyes 16:52
Yeah. And I think students really respect transparency when it comes to AI. I think that, again, they want to know the expectations, and when you reinforce it by being clear, it helps them to be more responsible in their usage of AI.
Brent Warner 17:10
Yeah. 100% okay!
Ixchell Reyes 17:12
Alright, number eight.
Brent Warner 17:14
Number eight. You got it.
Ixchell Reyes 17:16
You got it.
Brent Warner 17:16
I got it. Okay: ignoring compliance expectations on campus.
Ixchell Reyes 17:22
AAAhhh – Another scream, (laughter)
Brent Warner 17:25
Yeah, so, well, I mean, this is hard, though, right? Because you've got things like FERPA compliance, or COPPA, and, you know, across the world there are different compliance levels and things like that, right? So where is the data that your students are putting into it going? Is it training the next model, right? That could be a violation of FERPA, especially if the students are putting in their personal information, or, yeah, all of those things. And so you do need to have a sense of it, and it's not just "AI," you can't really say it that broadly, right? Because ChatGPT, for example, is not FERPA compliant, right? But with Microsoft Copilot, we have the institutional account, which is FERPA compliant, but it kind of sucks, because, like, half the tools are stripped out of it, right? And so it's very interesting to see what's going on with these. Like, we have that institutional account, and I go in and it's awful. So I go back and use my free personal account, which does train on the data, but I'm like, well, I can get so much more cool stuff going on with this. So you have to balance that out and be aware: hey, remember, you still have your regular policies that you have to align with, even when you're using these tools. Alright, so we've got a mailing list. We haven't used it yet, still. I'm sorry, you guys. I know some of you signed up for it. You're like, where's my mail? The truth is, we've been too busy to spam you. Yeah, too busy to spam you. Sorry about that. But we do have the mailing list, and part of the goal for the mailing list is, one, to actually get it launched at some point, but also, really, to give you a heads up on the book.
So my book, I mean, the process is moving along. Everything's going well. But yes, the book will be announced through there. And then the idea is that there will be, you know, pre-launch freebies, giveaways, you know, we'll do some webinars and some conversations, and as much stuff as I can give away, I will be doing it through that link. So if you're on the mailing list and you want to get free extras, maybe, you know, free chapters or activities, or –
Ixchell Reyes 19:52
Yeah, get informed first. Yeah.
Brent Warner 19:55
So you can go get that at DIESOL.org/book, that's D-I-E-S-O-L dot O-R-G slash book, and you can go sign up. And you know, as for the promise that you're not going to spam people, I guess I'm going to have to start promising to spam. I'm gonna do it at some point. I will spam you, I swear. But yeah, it hasn't been a problem up until now, so sign up if you're interested in the book in the future.
Ixchell Reyes 20:27
Alright, so let's go ahead and finish up with part two of scary approaches, and these are more approaches that apply to both teachers and students. So we're at number nine, and number nine is failure to keep up with the changes and updates. And of course, with AI, simply because you've attended a workshop does not mean that now you're ready to go, and that's it, and you'll never need another workshop again. There are so many changes, right? You know, I was telling you, I opened up LinkedIn and started learning about all these other new uses or potential problems, and it's like, holy crap, I'm so behind. And that doesn't mean that you have to keep up with everything, but be aware that every few weeks, something new is coming up. There are cases in the news and stuff. Laws are changing. So much is changing in our landscape.
Brent Warner 21:27
Yeah, and I do want to be careful, because it's hard. Don't put too much demand on yourself. But I guess the one way I would say to approach this is, if you're planning on using it in your class, check it a day before class, and maybe even check it, you know, an hour before you're going to jump into class. Because if you're really trying to use something, the whole layout changes, the features change, and you're like, wait, I didn't get any warning about this, right? So even if we're not talking about, hey, guess what this cool new thing can do that never existed before, and there are a lot of those, even if it's just, hey, I built this activity out six weeks ago and I'm going to try it again, and, oh, by the way, the links to get these things done don't lead to the same features anymore, etc. So yeah, 100%, you want to be careful there. The next one: not being vigilant about the gradual or blatant bias of AI outputs. Now, that was a little bit wordy, Ixchell, but essentially we're talking about the bias, right, and the hallucinations, or whatever else it is, the problems with the AI. Here's what happens: there's a creep when we start using these tools. We use it for one quick interaction, and we're like, okay, I'm keeping in mind that this could be biased. What am I going to be careful about? But when we start getting into deep iterations and we're working with it for a couple of hours, or, you know, even an hour at a time, we kind of start to forget that, hey, this is just a predictive text model at the end of the day. And so maybe this information is still biased later on in the conversation, or maybe it's still hallucinating at some point. And so we really want to make sure that we keep this in mind, right?
It's like, you know, do we trust it? Maybe I'm going to get myself a big X-Files poster. You know, the truth is out there, but not in there. So be careful, not just about recognizing that there's bias, but about keeping it in mind as you're using it over time.
Ixchell Reyes 23:39
Alright, number 11. And this seems like something everyone would already know, but: assuming that AI output is valid. I think that's just something we're going to have to come back to over and over again and make it a habit. There's got to be critical evaluation of the output, always. I think that's, like, the number one rule, right? Yeah, otherwise we fall into the trap of accepting whatever it gives us, and again, that's also not being vigilant about the gradual potential bias, or false information, that it can give you. Yeah.
Brent Warner 24:17
And this one is particularly for students, right? I mean, as teachers, we have a level of expertise to be able to evaluate, right? Students, a lot of times, don't even have the full knowledge, so they can't make a judgment. And this is where it gets really dangerous for them. They're just going to go, okay, it's good, I think, right? And I'm going to send that forward. It's like, well, hold on a second, there are things that you don't know, right? So we have to figure out ways to guide students through that process, and it's not easy. This is part of the new approach to education that we have to be aware of. So for sure, that is a big one. I want to talk about number 12 here, Ixchell, for just a minute, which is not understanding how AI works and doesn't work. This is quite specific for students, but also for teachers, and it's about pronunciation. Now, I've done a couple of videos on this, or some conversation practice ones, you know, on YouTube and stuff. But just be aware, when you are speaking to these AIs, if you're doing the voice version, right, and you're speaking to an AI, what it's doing is transcribing your speech into text, then analyzing it based on the text, and then outputting it back into voice and sending you that feedback. So at this point, it's not actually doing pronunciation work, right? It's making assumptions about what you said based on how it transcribed it into text, and then how it transcribes it back out. So it might make good guesses sometimes, but it's really not doing deep pronunciation work, things like blends of words and sounds and all those kinds of things.
It's really not actually catching those things. So you have to be careful when you say something like, hey, please check my pronunciation. If you're trying to get fine-tuning on these things, that is not quite how these models work at this point. I think they probably will be fully audio-based in the future, but they do go into text and back at the moment, at least the general ones. So please be careful using them for pronunciation.
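The voice pipeline Brent describes, speech in, text in the middle, speech out, can be sketched as a toy Python program. The functions below are simplified stand-ins, not any real API; the point they illustrate is that pronunciation detail is lost at the transcription step, before the model ever "sees" anything:

```python
# A toy sketch of a voice-mode assistant, per the episode's description:
# audio -> text -> language model -> text -> audio. Real systems use
# ASR/LLM/TTS services; these stand-ins just show where detail is lost.

def transcribe(audio: bytes) -> str:
    # Speech-to-text: even mispronounced speech tends to come out as clean text.
    return audio.decode("utf-8").lower()

def language_model(text: str) -> str:
    # The model only ever reasons over this text, never the original audio.
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    # Text-to-speech back to the learner.
    return text.encode("utf-8")

def voice_turn(audio_in: bytes) -> bytes:
    text = transcribe(audio_in)    # pronunciation detail disappears here
    reply = language_model(text)
    return synthesize(reply)

print(voice_turn(b"Hello World"))
```

Because the middle step works on normalized text, a "check my pronunciation" request can't really be honored at that layer, which is exactly the caution in this segment.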
Ixchell Reyes 26:31
Okay, and the 13th, scary approach,
Brent Warner 26:35
number 13,
Ixchell Reyes 26:37
Assuming, and this is for students, assuming that teachers don't know your work. Most of us have been around for a while and have been looking at student samples, and so we are very good at spotting when something is off, particularly if we are familiar with our students' work, because we do take time to get to know their writing. We take time to get to know their errors. And again, teachers will know. Mm-hmm, yeah,
Brent Warner 27:07
I'm going to add in a 13-B here, the B for "boo," which is: be careful, teachers. Coming back to this, be careful, because teachers also assume that they can very clearly identify AI, which is also not true, right? People are already saying, hey, you're not nearly as good as you think you are at doing this. But back to the student side: students very much make assumptions like, oh, the teacher's never going to figure out that I'm doing it, or that I'm using these tools. And it's like, well, it's not so much about just the one output; it's a collection of understanding of who you are and what you can do over the term of the semester. It doesn't matter if it's AI or not AI, I can see that over time. So yes, please be careful, students out there. Assume that your teachers are professionals and know what they're talking about. (laughter) That's kind of what we're trying to say here. All right, I hope you're all scared by all of those possible approaches to AI, and we'll be back next year with 13 other scary things. Maybe. We'll see how it goes.
Ixchell Reyes 28:17
All right, it is time for our fun finds, and this time I have Pink Lady perfume by Assaf, made in Saudi Arabia. The description says it is a white flower blend with patchouli, vanilla notes, orange blossom, and bergamot. I don't know if that's how you pronounce it, bergamot. I bought it online, but it is currently sold out. It smells so good. Okay, nice.
Brent Warner 28:47
Pink Lady perfume, very good. My fun find is going to be throwing some clay, doing some pottery work. So we had a professional –
Ixchell Reyes 28:56
Oh, like actually throwing clay? Yeah, real clay. I thought that was going to be, like, a song or a band. That sounds like a band: "Throwing Clay."
Brent Warner 29:03
Like, "Throwing Clay, what's that song? I want to hear it!" You can go look it up. You can go write your own song in Suno, I guess, if you want. But so, we had a little professional development event for campus, you know, just get people together and have a little fun type of thing. We have a new pottery building in our arts complex, and I was able to make a bowl, you know, just some basic testing of it. I did okay, not too bad for my first time around. So it was fun, kind of relaxing. And, you know, I love doing a non-digital hobby, right? It's like, okay, just be in the moment. And so a little pottery, a little throwing clay, can be a great way to spend the day.
Ixchell Reyes 29:45
Wow, that rhymed
Alright, for the show notes and other episodes, check out DIESOL.org/111. That's 111. And you can find us on Threads or on Facebook at @DIESOLpod.
Brent Warner 30:05
I am on the socials at @BrentGWarner and
Ixchell Reyes 30:09
and I am @Ixy_Pixy, that's I-X-Y underscore P-I-X-Y. All right, thank you for listening.
Brent Warner 30:17
Stay safe this scary Halloween! All right. Goodbye.
Ixchell Reyes 30:23
Bye.
13 Scary Approaches – Main Points
- Making assumptions that students are using AI to cheat just because they have access to it
- Using AI checkers on students
- Assuming that students know how to use AI appropriately just because they’re from the digital generation
- Refusing to address AI with students in the classroom
- Assuming AI destroys critical thinking
- Assuming that all students have equal access to quality AI
- Not having clear policies on use expectations in your classes
- Ignoring compliance expectations on campus
- Failure to keep up with the changes and updates
- Not being vigilant about the gradual or blatant bias of AI outputs
- Assuming that AI output is valid (lack of critical evaluation)
- Not understanding how AI works/doesn’t work for pronunciation
- Assuming teachers don’t know your work
Fun Finds
- Ixchell – Pink Lady Perfume by Assaf
- Brent – Throwin’ Clay