J.D. Mosley-Matchett (00:01) Hi everyone, welcome to this first episode of AI Updates, brought to you by Infomaven. And our first guest, our inaugural guest, is Dr. Glenn Phillips.

Glenn Phillips (00:16) Hi JD, thanks so much for having me. It is an honor to be the inaugural guest.

J.D. Mosley-Matchett (00:22) Love it, love it. Okay, I am going to read this bio because it is so terribly impressive. Glenn Allen Phillips is a senior insights consultant at Watermark, based in Austin, Texas. In his role, he works directly with client colleagues to discuss good practices in assessment and accreditation and to suggest the use of integrated technology as appropriate. Prior to Watermark, Dr. Phillips served as the Director of Assessment at Howard University and the Acting Director of Assessment at the University of Texas at Arlington. His research interests include assessment and accreditation across institutional types, equity and assessment, the use of AI in assessment and accreditation, and the experiences of veterans in higher education. Dr. Phillips earned his PhD at Texas A&M University, and he splits his time between Texas and Washington, D.C. So what do you do in Washington, D.C.? Are you a secret politician?

Glenn Phillips (01:31) I am not a secret or a non-secret politician. I stay as far away from politics as I can. But the food in D.C. is too good not to go back, and the queso in Texas is too good to leave forever, so I have to balance it out.

J.D. Mosley-Matchett (01:47) I agree wholeheartedly. All right, so let's get right into the questions.

J.D. Mosley-Matchett (02:51) Our first question: how do the AI adoption issues being faced by higher education administrators differ from those faced by faculty?

Glenn Phillips (03:10) It's a good question. I think that one of the challenges for faculty is that, as they wrestle with AI adoption, they are constantly being shadowed by their students, who are watching what they do as instructive for how they themselves should be managing AI, how they should be using AI. And so it's almost a double responsibility: not only "Can I use AI?" but "Should I use AI in this moment, as I am showcasing to my students what that would look like?" And many universities are still struggling with exactly what their policy is, what their practice is going to be, regarding AI use. I think this differs from institution to institution, but it also differs within an institution from discipline to discipline in how they're using AI. I think that faculty are, and maybe "burdened" is too strong a word, certainly carrying a higher level of responsibility, because they're under the watchful eyes of their students.
And then in the administrative world, in many situations, especially in the world of assessment, they're an island unto themselves, doing whatever they do behind closed doors that no one sees. And so it's a really different approach.

J.D. Mosley-Matchett (04:17) You make a really great point about the fact that the policies regarding artificial intelligence vary from one institution to another. I recall that Notre Dame just recently banned the use of Grammarly as part of its AI policy. I thought that was very interesting, because institutions have been using Grammarly for a very, very long time now, and to suddenly decide that it's not going to be acceptable is going to make things really tough for those students.

Glenn Phillips (05:47) Well, as a recovering and former freshman composition instructor, I do see a lot of purpose and wisdom in making sure that students are able to understand the foundational basics of editing and writing. The challenge, of course, comes in a world where you want to prepare students for how they will be operating outside of the classroom upon graduation. You don't want to create a strange world in which, inside the institution, they're not allowed to use calculators, but everyone outside the institution is using calculators. Inside the institution you can't use Grammarly, but wouldn't it be ironic if the email or the policy announcing the Grammarly ban actually used Grammarly to make sure it was a quality piece of writing? So I think it's a balance of understanding that there are tools and there are times to use those tools. But I think holistic policies that just ban it without giving purpose and meaning to that ban, and I'm sure Notre Dame gave plenty of purpose and meaning, can be a little dangerous.

J.D. Mosley-Matchett (06:56) It's true. Another thing I found interesting: when I went to the NACADA conference in Pittsburgh recently, a conference for academic advisors, so many of those individuals are not just advisors but also faculty members. So now we've got this interesting dichotomy between when you are going to be using AI and when you're not, based solely on the role that you have in the institution.

Glenn Phillips (07:41) Yeah. And I think there are even some challenges in people making decisions about when they're going to use AI professionally or personally, and when we decide we need to make those differentiations. Based on the kind of data that you're using, there may be some really clear restrictions about not being able to use AI. But it's challenging when you might use AI to organize your personal schedule every day, and it makes sense to use AI to organize your professional schedule, but then, if there is a policy in place, you need to rethink how you approach that.

J.D. Mosley-Matchett (08:22) So true. Okay, let's move on to the second question.
So what are some things that may be holding people back from embracing AI in their current positions?

Glenn Phillips (08:36) Well, I think there's a handful of things. One of the main things that holds people back is just a misunderstanding of whether or not they're allowed to. A lot of people want permission; they want someone to say it's okay to do this. In another life, I was a developmental mathematics instructor, and at the beginning of whatever we were doing I had to tell my students, "You can use your calculator" or "You're not allowed to use your calculator." I had to give them direction, because they were just really hungry to understand what my expectations were. And I think generative AI in particular is interesting because it's so ubiquitous, but no one has really given anyone permission to use it. No one has said, "You're allowed." And what happens is that people don't get proper training. They don't collaborate with others to talk about how others are using AI. And when they are using AI, they're often doing it in this kind of cone of shame, not wanting anyone to know that they used AI. All that does is spur on more people doing it in silence, because they think, "Well, my neighbor here, my hallmate, doesn't use AI, so I shouldn't use AI." So I think that's one of the big challenges. And then obviously, when you're doing it by yourself, it leads to a lot of arrested development of AI expertise. You don't get to learn quickly if you have to do it in silence.

J.D. Mosley-Matchett (10:15) This is true. So what would you suggest institutions do to assist people, or to make sure they understand what those parameters are, so that there isn't that cone of shame?

Glenn Phillips (10:31) Well, I think one thing that can be done is, once an institution has decided or defined what its policy is, or, as has often been mentioned, recognized the policies already in place at the institution about using different technologies, to clarify the ways that those policies apply to generative AI. Once it's clear what the expectations are, what's allowed and what's not allowed, that's when training sessions and professional development opportunities need to start popping up where it makes sense. That can be done by discipline, or by the kind of bucket of job that you're in. There may be a really good use of AI when working with academic advisors, so let's get all the academic advisors together, or anyone who's doing academic advising, and train them. There may be a really different approach if you're in the world of student affairs, so maybe we need to get all the student affairs people together and ask: How are you using it? How would you like to use it? What scares you about using it? So they can have those conversations. And I think this can be top-down if you have an AI-progressive leader, but it can also be bottom-up, as a grassroots movement, as long as it has the proper blessing of the authorities of the institution to say: let's get a working group together. Let's just have some fun. Let's talk about what tools you use and what tools I use. Once there's more opportunity for people to learn together and share how they use AI, I think it'll naturally start breeding more opportunity for AI use. And then the second thing I would say is: if you use AI, just say that you use AI.
And the more people are comfortable saying that they use AI, the better it's going to be. And if you are ashamed to say that you used AI, you probably shouldn't be using AI for whatever you're ashamed of.

J.D. Mosley-Matchett (12:27) Oh goodness. So true. It's interesting: some of the advisors I spoke with absolutely did come up and say that they were afraid. And I believe that a lot of the uncertainty you talked about is part of that fear factor. How can institutions really come up with good policies when generative AI keeps changing? I mean, everything is happening so quickly in this arena.

Glenn Phillips (13:16) I think my best answer to that is to create a policy that can itself adapt to changing conditions. We dealt with this quite a bit during the pandemic, when our policies said that we had to be present for certain things, or that you had to have a wet signature on something. Then we transitioned to a point where a wet signature was no longer going to happen, and a committee where everyone is physically there was not going to happen. So people had to adjust some of their policies to accommodate a changing world. And while some people may compare generative AI and its sweep across the world to the pandemic, it is a different kind of animal that still necessitates looking at our policies: How do we approach things? Is this still the right way? And I think that institutions are not obliged to use generative AI in significant ways, but they are obliged to understand the potential possibilities and opportunities of AI, make a decision about whether that's something they want to use at their institution, and certainly make that information available to students. And if they are restricting it in some way, be clear about why, so that students go forward with at least new knowledge, if not a new skill set.

J.D. Mosley-Matchett (14:53) Thank you so much. Well said, sir. All right, let's move on to the third question. What AI use cases might be appropriate for higher education administration?

Glenn Phillips (15:06) When I talk to people about AI, about what they're allowed to use it for and what they shouldn't use it for, I've developed what I call the bucket method. There are three buckets. The first bucket is busy work, and I think that you should always use AI for busy work. An example I give outside of higher education: if you were a homeowner and your homeowner association required you, as part of the association, to write a letter once a year saying your house did not catch on fire, that is busy work. You should not be spending your time, your thought, your heart on any of that. And there are some things in higher ed that are busy work like that. The second bucket is head work. Head work is the kind of thing you may use AI to support, but you wouldn't just let AI write it and not even read it; you want to engage with it in a meaningful way. It might give you a structure for something, say an agenda for a meeting you're about to have. That might be head work, because AI can create the agenda, and then you can go in and adjust it, "I don't want to do this, I do want to do this," and have a kind of productive, collaborative relationship with the generative AI. And then the third bucket, of course, is heart work. These are the things that you would never let AI touch.
These are maybe the kinds of things you don't even want to do on a computer; you would rather use a quill dipped in a little pot of ink and write them on paper, the things that are really, really special to you. The challenge with that kind of setup is that what is heart work to you may be busy work to me, and what is busy work to me may be heart work to you. And if you don't believe me, just think about writing a letter of recommendation.

J.D. Mosley-Matchett (17:10) Yes.

Glenn Phillips (17:13) For a new faculty member, their first request for a letter of recommendation is, "I am changing this student's life. I am guiding them on their path. Of course I will write you a letter of recommendation. I will write 16 drafts of this letter of recommendation. It will be beautiful." Whereas a professor who's a little further down the line, who has written a few, might say what I was told when I asked my professor for a letter of recommendation upon graduation: "Why don't you write the letter of recommendation pretending that you're me, and then I'll do some quick edits and sign it." For her, even though I do believe she cared about me, that was busy work. Whereas when I wrote my first letter of recommendation for a student, it was absolutely heart work. So that's where the challenge comes in.

J.D. Mosley-Matchett (18:06) Okay, I'll go along with that. All right, let's move on to the next question. For those folks who have never used AI, what advice would you give them to get started?

Glenn Phillips (18:21) My advice would just be to play. I think that when we are introduced to new technology, one of the things that we fail to do is just play and have fun with it and enjoy it. And there are some silly things out there: you can write a song, you can write a poem, you can make a picture, you can do a lot of things. I had a colleague whose child always wanted to hear a new story every single night, and they just weren't creative; they couldn't make up a new story. So they would go to AI and say, "Write me a story about dragons, princesses, and LeBron James," and it would write a story about dragons, princesses, and LeBron James, and then they had a nighttime story. So have fun with it in very low-stakes environments. Then you can start having conversations with yourself, and maybe your community, about the degree to which you want to use it in higher-stakes environments. Another piece of advice I would offer: people are very wisely concerned about putting things into AI that then become part of the large language model, especially if they don't have a specific account, and sometimes even if they do have an account that says their data is protected. You know, my phone also says it doesn't listen to me, but here we are. So in those circumstances, it's wise to use my rule of thumb: if you wouldn't comfortably put information on an outward-facing website, don't put it into AI. You would never put student data on an outward-facing website. You would never put financial data on an outward-facing website. You would never put conversations about board members' ethics on an outward-facing website. So don't put them into AI. That rule of thumb has served me pretty well.

J.D. Mosley-Matchett (20:29) I like that. All right.
And speaking of ethics, what can universities and colleges do to encourage ethical and efficient use of generative AI?

Glenn Phillips (20:43) I think one of the things that institutions can do to encourage ethical use is to encourage conversations about ethical use. This is not the kind of technology that anyone on a college campus has grown up with. We can have ethical conversations about the use of computers because we now have digital natives, people who grew up with computers, and they've been having those conversations their whole lives. Folks like me who didn't grow up with the internet still have to have conversations like, "Is that an ethical use of the internet? Should we use that? Should we not do that?" Whereas folks who were dipped into it like the River Styx at birth are all in it; this is all that they know. And there will come new generations, born after generative AI, for whom this is all they know. I think part of it is understanding what we want to keep and what we're willing to part with, and leaning on our faculty for their knowledge of what's best for their students. So when a faculty member, even a faculty member at Notre Dame, says we need to make sure that our students in freshman composition courses aren't using Grammarly, they're an expert in the field; they probably have a reason for that. I guarantee they would prefer that their students' papers arrive needing fewer edits. As someone who has graded many, many a paper, I would love to receive perfect punctuation, perfect grammar, perfect syntax. But it's in engaging in that editing process that the students learn. So I think that as we move forward, we need to be open to the opportunity of AI while simultaneously listening to the experts. And luckily, we work in higher education, and our halls are teeming with experts on anything you'd want to be an expert in. It's not an easy balance sometimes, and there will certainly be conversations where it goes one way or the other, but I think that's the key.

J.D. Mosley-Matchett (23:01) Well, considering that you're, I will say, an expert with respect to accreditation and assessment, how do you start those discussions? Who do you have those discussions with? Because we tend to think of assessment and accreditation as laws drafted in stone, immutable. So how do we start having those discussions? Where do we go?

Glenn Phillips (23:34) Well, I think part of it is understanding what AI is and what it's doing. A great example: for many discipline-level or institutional self-studies, institutions will hire someone to help read, maybe in some cases even to help write. Consultants are paid well, they're all over the place, and they support people going through these processes, because we know these are high-stakes processes. But if you then ask, "Can I use AI for this?" they would say that's unethical, because you're not writing it yourself. Yet it's okay to pay someone to come in and write it. So for me it's partially an equity issue, because not all institutions are resourced in a way where they can pay someone to write their accreditation report, or to organize it, or to edit it. What if we now have a tool that levels that playing field a bit and gives them the opportunity to do that? That's exciting.
And so I think that when it comes to accreditation specifically, not a lot of the accrediting agencies have created rules about using or not using AI. And I think they won't until there is some kind of clear violation, or an accreditation self-study is incorrect and someone says, "Well, I'm sorry, I used AI and it just came up with this stuff." Because it's always important to remember that AI is super useful, but it's also a liar. It says untrue things all the time, and it doesn't feel bad about it. So it can be a tool, but it can't be the only tool. We love talking about automation, but AI at this point, and who knows what tomorrow and tomorrow and tomorrow will bring, will not automate things like assessment and accreditation. It will only help you do them a little faster, maybe in a slightly different way. That's about the best we can get out of it right now, and then we need to make decisions about whether we want to engage in that.

J.D. Mosley-Matchett (25:56) Okay. Do you have any parting thoughts that you'd like to leave with our audience?

Glenn Phillips (26:03) Sure. Other than my sincere thanks for being a part of this little world, especially at the beginning of it, which is really exciting, I think that one of the things you stated earlier is incredibly important for people to remember: AI is changing significantly and quickly. There used to be a joke that whenever you bought a brand-new computer from Best Buy or whatever technology store you went to, it was already an old computer the second you walked out the door. And now, as we talk about AI, any time a book is published on AI or a website is updated on how to use AI, it's old the second it goes to print, because we're doing new things; we're able to do new things. And I encourage people to learn as much as they can about AI so that, if they choose not to use it, they are comfortable telling people why, instead of, as the old metaphor goes, sticking their head in the sand and pretending it'll all go away.

J.D. Mosley-Matchett (27:14) I love it. Thank you so much for being here with us. And as always, you are the best, Dr. Phillips.

Glenn Phillips (27:29) Thank you.

J.D. Mosley-Matchett (27:30) Thanks so much.