J.D. Mosley-Matchett (00:00) Hi, and welcome to another episode of AI Update, brought to you by InfoMaven. Today's guest is Dr. Joseph Levy, Associate Vice Provost of Accreditation and Quality Improvement at Excelsior University. Joe is a member of the Student Affairs Assessment Leaders Board of Directors, a co-creator and instructor for the Applying and Leading Assessment in Student Affairs open course, and an endorsed speaker for the National Institute for Learning Outcomes Assessment. He's an author, presenter, trainer, and consultant. Joe enjoys leveraging his academic and student affairs experience from multiple institutional types to talk assessment, quality assurance, and institutional betterment. Dr. Levy earned a BA in English from Baldwin-Wallace College, an MS in Student Affairs in Higher Education from Colorado State University, and an Ed.D. in Higher Education Leadership from National Louis University. Welcome to the podcast, Joe!

Joe Levy (he/him) (01:06) Thanks, JD. Excited to be here.

J.D. Mosley-Matchett (01:08) I've been a huge fan of yours for years, and I'm thrilled to have this opportunity to hear your thoughts about AI in higher education administration. Everybody talks about AI in teaching and learning. What have you been seeing and hearing from other administrators with respect to AI?

Joe Levy (he/him) (01:26) You said it perfectly. There's obviously a lot of buzz about AI in education and in general, but the conversations tend to circle around how students may be using it and thinking about policies of what to restrict. Institutions are also thinking about what is safe or protected in terms of their data; you know, what can be loaded into AI versus not. But it's interesting, because there's this opportunity to be talking about the excitement and the potential of AI helping administrators by alleviating tasks, right? Helping make things more efficient.
But instead of the excitement, there seems to be much more focus on the hesitance, right? Of, well, we're not sure about this, and we're worried about what this might mean or how to use it. Or again, this idea where we're focusing so much on policy and limiting implementations of it, as opposed to thinking about the possibilities of what it could do for us and how we could use it to help us. And so there are obviously a lot of conversations. I just wonder if we're focusing on the right part of the conversation and the right potential for AI.

J.D. Mosley-Matchett (02:44) That's a good point. So what have your experiences been like?

Joe Levy (he/him) (02:49) For me, I'll first say that there are so many other people way more informed and experienced with AI than myself, but I know enough to be dangerous. And I certainly have been exploring a lot of tools and easing my way into the use of it. I'll say that it can be a little overwhelming to figure out what to do and how to use it. For me, it's just been really helpful to try different tools, find ways in which they're beneficial and helpful for me, and hear from colleagues about what's useful. So specifically, I've been using it for reviewing data: helping summarize data and seeing what that summary looks like compared to what my manual summary looks like. I also do a good amount of coding for data analysis and making scripts to automate reports, and it's been an incredible resource to help solve particularly confusing issues where I want the data to be cleaned and formatted a certain way and what I did didn't quite work. I used to have to scour discussion boards and look at all these different versions of similar projects that I'd have to modify for my purpose, and that used to take me so long. Now I can just type in, and even share, my code, and not only will it tell me what to do, but it will give me the exact code, personalized to my data set, to run.
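The kind of AI-assisted cleanup script Joe describes can be pictured with a short sketch. This is an illustration only, not code from the episode: the dataset, column names, and formats are all invented, and pandas is assumed as the analysis library.

```python
import pandas as pd

# Hypothetical messy survey export (all values invented for illustration):
# inconsistent column names, mixed date formats, non-numeric score entries.
raw = pd.DataFrame({
    "Student ID ": ["001", "002", "003"],
    "Completed On": ["2024-01-15", "01/22/2024", "Feb 3, 2024"],
    "Score": ["85", "n/a", "92"],
})

# Normalize column names: strip stray whitespace, lowercase, snake_case.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]

# Parse each date string individually so mixed formats all resolve.
raw["completed_on"] = raw["completed_on"].apply(pd.to_datetime)

# Coerce scores to numbers; entries like "n/a" become NaN instead of erroring.
raw["score"] = pd.to_numeric(raw["score"], errors="coerce")

print(raw.dtypes)
```

A tool like ChatGPT can generate exactly this sort of boilerplate once you describe, or paste, the messy columns, which is the time savings Joe points to.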
So by and large, coding and help with my reporting are where I've been leveraging it the most. But I've also been using it to summarize and tease out themes in information, almost like a second brain to validate against, right? You know, I read through these things, and these were the takeaways I had; let me do a quick check. I put this through AI and see what themes it pulls out. And good, we're aligned. Have we caught these things? Or, interesting, it pulled out this piece and I hadn't thought about that. So I've been easing my way into it, but certainly leaning on the strengths and aspects that have been benefiting me so far.

J.D. Mosley-Matchett (04:56) So when we're talking about different kinds of AI tools that you've played with, which ones come to mind? Because there are so many out there, and it seems as though everyone has their own favorite.

Joe Levy (he/him) (05:08) Yeah. I mean, I made a whole bookmark tab in my browser just for AI things. And I found I quickly had to not only bookmark the thing, but put right next to it almost a shorthand of what the tool does, because I'll look at it later and be like, well, I don't know what that name means or what that product is. And so I've learned, when I bookmark something, to put in parentheses what the AI tool is used for. There's also a whole website called There's an AI For That. You can put in your preferences and even get email updates when new AI tools come about. But for me, the one I've been leaning on the most is ChatGPT. I've also played around with Gemini and some others, but ChatGPT seems to be the one that has been the most helpful, especially for the coding pieces I've been doing. I've also used a couple of tools for summarizing PDFs; ChatGPT can do that, but there are some research-focused tools for it as well. One funny-named tool that I've used a handful of times is called Goblin Tools.
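The "second brain" theme check Joe mentioned above can be approximated even without a chat tool. The sketch below is purely illustrative (the feedback comments and stopword list are invented): it surfaces the most frequent substantive words in open-ended comments, which you could then compare against your own manual takeaways or an LLM's summary.

```python
from collections import Counter
import re

# Hypothetical open-ended feedback comments (invented for illustration).
comments = [
    "The outcomes feel redundant; two of them overlap heavily.",
    "Please streamline the outcomes, there is clear overlap.",
    "Overlap between outcomes three and four makes reporting hard.",
]

# Common words to ignore when surfacing candidate themes.
stopwords = {"the", "of", "and", "is", "there", "them", "two", "feel",
             "makes", "please", "between", "three", "four", "hard", "clear"}

words = []
for comment in comments:
    words.extend(w for w in re.findall(r"[a-z]+", comment.lower())
                 if w not in stopwords)

# The most frequent remaining words hint at recurring themes,
# to be compared against the reviewer's own takeaways.
themes = Counter(words).most_common(3)
print(themes)
```

Here "outcomes" and "overlap" surface as the dominant terms, which is the kind of alignment check Joe describes: did the machine's themes match the ones you pulled out by hand?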
And it's a funny name, but it's an AI website that has a whole bunch of tools integrated into it. Two that I would call out: one is a task manager, or task creator. You can just put in a task, click a button, and it'll create action items. For example, one thing I put in there was checking accreditation requirements for a nursing program, and it immediately came back with: go to the website, look up this piece, send it by X date. You can then edit those things, or you can also say, make this more detailed, or add in this step, and it'll populate that. So that's been a fun tool to play around with. But the piece I use Goblin Tools the most for is, I believe it's called the Formalizer. You can paste in text, and then it gives you options for what you want to do to that language. Do you want to make it more formal, more informal, more to the point, more friendly? It gives you a lot of different options to soften the tone or change the message a little bit, which is helpful for me because, with my English background and just my nature, I can be a bit wordy. I hope I'm not doing that too much now. And so sometimes when I write an email on a topic I'm excited about, it can be very long. Sometimes I'll put it in there and say, you know, tighten this up a little bit, or shorten this, have it be more to the point. And then maybe that's the version that goes out to folks instead of the novel I wrote initially.

J.D. Mosley-Matchett (07:44) I hear you. We all use AI tools, I think, in the same way. And it does help to have an entity, even if it isn't a human being, take a look at your work so that you can polish it up before it goes out for prime time. So I agree wholeheartedly with you, because that sounds like something I'm definitely going to have to check out. But how can AI be leveraged for institutional effectiveness or quality assurance activities at universities?
Joe Levy (he/him) (08:17) Yeah, I think, you know, AI can obviously help summarize and synthesize information. One of the things I think about, having a lot on my plate currently with accreditation-related things: there's a lot of detail in that work, and there are times and people who need to know those details. And then there are also times where people just need a summary of, in general, what this is talking about: what's pertinent or related here? And it can be a really helpful tool to quickly summarize some of that content and alleviate the mental load of having to do that task yourself, but also be a place to quickly search for things, especially if it's public content that's easily accessible on a website. You can leverage that. You can prompt AI to scour those documents and pull out the specific keywords or pieces that, again, might have taken you a lot longer to look through. It may not be a hundred percent accurate, but it can be a start and a help there. And in doing that work, I think it's also important that we approach it accordingly. You know, going back a bit to that first thing I was talking about, administrators and their mindsets, I think we have to be sure to evolve and have an open mindset with AI. Inherently, it's easy to think that it's cheating. It's easy to think that use of it is shameful or bad, when in fact it's a tool like any other tool to help. So, for example, we were having a conversation at our institution about reviewing our university learning outcomes, and some faculty had pointed out that some of the content might be duplicative or redundant. So we're maybe looking to streamline it, to strengthen the respective outcomes and not potentially dilute them. And so I invited perspectives to look at it. Our university assessment council is digging in, and representatives are going out and getting feedback.
And when some feedback came back, one of the responses I got, I could tell just by the formatting that it had clearly been run through AI en masse: pull out similarities, pull out redundancies. I admit my initial response was, oh my gosh, they just put this through AI and sent it to me. They didn't even look at it themselves. However, I then had to check myself: well, they did that, right? And I realized the thing I was actually a bit more disappointed with was that I had tasked those people with getting representative feedback, right? They were part of a committee, so I wanted them to go get feedback from multiple individuals. And so that was a great opportunity to say, again, not necessarily to call them out on the AI piece, but to say: it looks like you got some summary information here. How much of this was agreed upon or aligned with the faculty feedback that you were supposed to collect, right? And so I think that was a good teaching moment: clearly they used AI, and it served a purpose, but it didn't serve the purpose, right? And I think that's a good example, much like with our students: we don't just want to use AI to cut a corner, right? How can we use it to achieve the goal we're actually after? But with that, again, it was a moment for me, too, of having to do some root cause analysis of the feelings I was having when I got that back. And it's like, no, the problem isn't just that they used AI; it's that they stopped there. And so how do we make sure that we are using these tools to effectively accomplish the task and get to the end goal appropriately?

J.D. Mosley-Matchett (11:48) Well said. You pointed out how AI can really assist, but it isn't supposed to be the end point. We have to keep the humans in the loop. We have to add our own thoughts and insights to augment what AI is synthesizing, because it really is just a synthesis.
Is there any other advice that you'd give to other administrators about AI?

Joe Levy (he/him) (12:18) I think for administrators, it's going to be really important to get familiar with it. It's really easy as an administrator, if you're not engaged with it, to not understand the nuances of the tools and the possibilities and the considerations, and to instead just focus on the news stories or what you're hearing from other administrators or your colleagues. But if you're not actually seeing how your people are using it, or the possibilities of it, you may not be making the most informed decisions, right? So it's important to get connected and to play around with it yourself. And if you don't have time to play with it, then talk to the people at your institution who are using it, to better understand its capabilities and possibilities and to be able to ask questions. By getting involved, you'll probably also pretty quickly see that you may need to set aside some funds for this. As a consumer myself, trying to learn more about AI and hearing more and more about different tools, as I go on websites and explore, a lot of them say: here's what you can do with the free version, and here's what you can do with the paid version. And even when you talk to certain colleagues who have some really robust, awesome automation going on, or some tool they created, usually it's with a paid version of a tool. And so it's really important for administrators to set aside some resources, not only their time and attention but funds, because that may be an investment worth making to explore something, especially in a pilot way before a big commitment, but also to enable your people. Right? Some of these licenses are very inexpensive. With some of these tools, you can get a whole team's worth of access and unlock the full capability of an AI tool for a couple hundred dollars.
But again, if you're not setting aside those funds, if you don't understand the purpose, that may never happen. And then that team may never fully realize the potential of a tool that could really help them with efficiency or effectiveness or, again, help offload some work. So the two pieces I would say are: get engaged, so you know what we're talking about, by playing around with it yourself or talking to people who are using it; and be ready to set aside and earmark some funds, in order to best experiment, but also to start leveraging some of these tools and technologies.

J.D. Mosley-Matchett (14:59) How would you advise someone who really isn't very technical? What sort of first steps might they try that wouldn't be terrifying?

Joe Levy (he/him) (15:11) I mean, you know, I think one of the things we have to do with AI, just like anything else in higher education, is look to our students and their needs. And that goes for online education, for different modalities, for things like competency-based education, for things like talking about a three-year bachelor's degree: some of these trends and topics in higher ed that people may not be comfortable with. It may not be the type of risk or the type of investment the institution traditionally might make, but it comes back to our purpose, which is serving our students. And especially with AI, it's the kind of thing that's inescapable. We had a conversation about this at my institution just today: it can be a real misnomer to stop the conversation at the label "AI," because there are so many types of it, right? There are so many existing types of artificial intelligence that we're already using that we don't even call AI, because we're so focused on the new versions of AI or the new tools and applications.
There's a big difference between, say, a chatbot versus a language model versus an analysis tool versus a summarizer. I mean, these are all forms of AI, but all different forms. And that's why AI in general is inescapable. And so when we think about our students: they are using these tools already, they want to be using these tools, and they are being presented with these tools. So I feel like it's the kind of thing you can't just stick your head in the sand about. And I get that we want to be cautious. We want to think about the privacy of our students, the privacy and the sanctity of our data, making sure we're not disrupting any of that or illegally or inappropriately sharing any of that information, and that it's not being harvested in certain ways. But we have to think about our students. If we ignore it, if we say, well, that's just not our style, that's not the kind of educational approach we believe in, we're going to be going against not only a whole society's use of a technology, but we may also be going against the needs and the wants of our students. And I'm not saying that we should cater to everything they want, but we should at least think about current society, the tools we have, and how we can best meet the needs of our students, and be thinking in terms of: what do we know our students want? How could that have applicability with AI? And then make those decisions. Because I think just stopping at a point of, we're not good with technology, or we're low-tech, or we care more about face-to-face, and so we wouldn't even bother to look at or invest in those things: well, again, that may be you, but not your students or the rest of your campus community, right?
And so I think administrators really need to put their ear to the ground and, you know, check the pulse of their community before they make any of those decisions. And that's where, like anything else, institutions really need to think about whether they should maintain the status quo or truly be about the current needs in the current environment.

J.D. Mosley-Matchett (18:45) I agree wholeheartedly with you on the whole point of trying to determine whether or not you're going to maintain the status quo or if there are potentials and possibilities that need to be explored. So here is the final question: what can higher education do to better inform and share about the utility and possibilities of AI?

Joe Levy (he/him) (19:16) I mean, I think it's things like this: continuing to explore, share, and be on a learning journey together. I've been able to be part of two different AI educator groups. There's a Google Group that is open to anyone and everyone. There was also a Java Jams group of colleagues that started some conversations, truly very grassroots, talking with one another and collaboratively sharing. Somebody would present for a little bit of the meeting, but then they would really open the floor: I was talking about using a chatbot and training a chatbot for this purpose; who else has done that? And then leveraging the collective experience. It's been so beneficial and eye-opening for me, because I've learned about so many different tools and so many different applications. And so we just need to keep doing that and keep sharing. I just came from a conference where, as you can imagine, there were a couple of sessions about AI and applications of it. Given it's so new, and given there are so many applications of it, we just need to keep telling each other about it. And again, thinking about it like any other tool: how can this tool potentially help solve a problem, be used to solve a problem, be used to meet needs?
For all sorts of people, right? Like we talked earlier about how it helps me, right? But also thinking about how it can help our students, how it can help our colleagues, how it can help our practice. I think the best thing we can do is explore on our own, share with one another, and do it at a pace you're comfortable with. Like I said, it's very overwhelming, and I would get very intimidated in those early meetings because I'd see my colleagues using so many tools, or with so many resources already provided to them to experiment in these ways, and I just felt so far behind. But then there are other ways in which I have used it where they're like, I've never used it that way; help me understand how you did. And again, that's why AI has so many different applications and possibilities. I think we just have to explore together and find ways that make sense and are helping, and naturally you'll then continue to explore. Because that's the thing: the more I was using it initially just for coding, the more I realized its power and its accuracy and its potential, and I said, well, I wonder if I could use it this way. Okay, maybe this tool didn't exactly do it, so I wonder if I can search for another tool. Oh, look, I can use this tool for that. And you find how some tools are better at one thing than another, even though they all have some of the same capabilities, just like any other tech out there. I mean, take an assessment management system: they all generally have some of the same tools and functionality, but some are much better at surveys, some are much better at reports and document repositories, and some are much better at content management. So you need to know what your need is, you need to know what's best going to help you, and then experiment and find the tool that can do that.

J.D. Mosley-Matchett (22:26) I love it. This was definitely time well spent.

Joe Levy (he/him) (22:30) Thanks for having me, JD.