J.D. Mosley-Matchett, PhD (00:30) Hi, welcome to another episode of AI Update, brought to you by InforMaven. Today's guest is Dr. Ruth Slotnick, Director of Assessment at Bridgewater State University. She currently leads the Generative AI Survey for Assessment Professionals, a biannual study examining the adoption, application, and implications of AI in higher education assessment practices. Ruth also serves as a co-leader of the GenAI in Assessment Community of Practice at the Assessment Institute in Indianapolis, which will launch in 2025 to support professional development and AI adoption in assessment.

J.D. Mosley-Matchett, PhD (01:12) Additionally, she leads the AMCOA AI and Assessment Working Group, an initiative of the Massachusetts Department of Higher Education focused on developing AI guidelines and best practices for assessment professionals across the state.

J.D. Mosley-Matchett, PhD (01:29) Dr. Slotnick earned her PhD in higher education administration from the University of South Florida, specializing in adult education. And she holds a master's in art education from Penn State, integrating creativity into her approach to assessment. Her current research continues to explore how AI intersects with assessment, equity, and institutional effectiveness, ensuring higher education assessment professionals are prepared to navigate future challenges. Welcome to the podcast, Ruth.

Dr. Ruth Slotnick (02:02) It's great to be here. Thanks for having me.

J.D. Mosley-Matchett, PhD (02:04) Now you've been researching assessment for years. How has it changed in higher education over the past decade?

Dr. Ruth Slotnick (02:11) That's a great question. I think of the recent study I did with Mark Nicholas in 2024, where we compared our portrait of the field to 2014. Interestingly enough, we haven't changed all that much. To some degree,

J.D. Mosley-Matchett, PhD (02:26) Oh no.

Dr. Ruth Slotnick (02:26) we're relatively the same. We have a little more diversity, but we're still primarily white, primarily from the social sciences, and our route in is through higher education administration. We have a few coming in from teaching, and freshly minted assessment practitioners and researchers from an institution like JMU will come into the role naturally. But essentially, the field still draws on multi-route entry. I do think we have expanded into student affairs assessment, and that's been a wonderful contribution to our field. If we dial back about 15 years, student affairs was a low-level conversation in the research and its topics. It would come up to some degree in terms of working with students through student life or student governance. Now I think we have more of a structure, an interest, a body of research. Look at the work Gavin Henning is doing, in part with Joe Levy and Ciji Heiser. These are all really powerful, research-driven individuals who have added the student affairs piece to our field.

J.D. Mosley-Matchett, PhD (03:43) That's great. Wow. You've had quite a journey, but now you're investigating how to effectively incorporate AI into higher education assessment. With the technology advancing so quickly, what are some of the biggest roadblocks that higher education institutions face when they're trying to integrate AI into assessment work? And what would you say are some practical ways to move forward?

Dr. Ruth Slotnick (04:09) Yeah, I kind of call that the million-dollar question right now.
I have a lot to say here, so stick with me as I work through it. The first thing I want to clarify is that I'm an assessment expert, not an AI expert by any means, and that's really important as we talk today. I've been experimenting with Generative AI since November of 2022. When it first came out, I was all in it to win it, really trying to experiment in the space. But one of the biggest barriers, and it may surprise you, is assessment professionals themselves. We are the barrier. How do I know this? We had 264 assessment professionals respond to that survey, in which we built in some Gen AI questions. And the results might surprise you: about half of the assessment professionals indicated that Generative AI will have a moderate to major impact, yet only 30% were using it. That was one finding. But what was even more surprising is that some of those not using Generative AI still felt prepared for its impact. It's kind of like, I know the hurricane's coming, but I'm not yet doing the hurricane preparedness. And anyone who's been through a hurricane knows you've got to get that emergency bag ready, the distilled water, the boarding materials. So what the data was telling me is that assessment professionals knew it was coming, were not preparing, and yet felt prepared. That disconnect was a little odd to me. One person noted that they were just going to wait until this AI thing got itself figured out, and then they would get their hands in. Another person said they were completely fatigued from COVID-19 and couldn't handle one more thing. Generative AI arrived in November of 2022; the survey was in 2024. So two years later, someone was still feeling overwhelmed because of COVID. These are some real barriers. And this kind of resistance may not only be about caution in terms of use. It might be about uncertainty regarding professional identity. It could be about purpose. I've had numerous conversations with colleagues who have privately expressed concern about the implications of Gen AI adoption. They're asking thoughtful questions about how automated and even semi-automated analysis might change the nature of the work. Could it put us out of work? As Ethan Mollick from Wharton recently noted, the biggest barrier to AI adoption isn't the technology itself; it's the organizational immune system that rejects change. And that perfectly captures, I think, what we may be seeing play out in the field. I'm also reminded of something from Mike Kent at AI Literacy Partners. He's so thoughtful about resistance, particularly with faculty. He frames it by describing the five stages of grief around Generative AI and offering the "Who Moved My Cheese?" analogy. In other words, we didn't want the cheese moved, and the cheese moved.

J.D. Mosley-Matchett, PhD (07:38) Mm-hmm. Mm-hmm. Yes. Yeah.

Dr. Ruth Slotnick (07:41) Right? Without consultation,

J.D. Mosley-Matchett, PhD (07:42) Mm-hmm.

Dr. Ruth Slotnick (07:43) and here we are; suddenly it's in our lap. Generative AI took us all by surprise, and we were not ready.

J.D. Mosley-Matchett, PhD (07:51) Mm-hmm.

Dr. Ruth Slotnick (07:52) So let me probe deeper on the barriers, and also the opportunities, because whenever you talk about barriers, you also have the opportunities on the other side.
So there's this other national survey, which you very nicely mentioned at the beginning, that I'm doing with Joanna from Bridgewater State University as well, and with Bobbi Jo Grillo-Pinelli from Walden University. We have 225 assessment professionals who have already responded, so we have some early data, and we're looking at barriers. One that's standing out for us is that those who have been in the field for 11 to 15 years are the ones who are perhaps most concerned about, or resistant to, AI adoption. And maybe this, again, is a form of caution that may ultimately limit innovation, or we may simply not understand their use yet. We just don't know, and this is why the survey work is really important.

J.D. Mosley-Matchett, PhD (08:35) Yeah.

Dr. Ruth Slotnick (08:48) The field of assessment, with Gen AI added in, is evolving with or without us. The question is whether we'll lead the change or resist it. And it raises the question: why are we so slow to adopt? And of those adopting, what are they doing with GenAI, so we can learn from them? The survey will help us understand what's happening in our field. The first look will close at the end of March, and then we'll do another look in August. So it's a pulse survey. Because this field is moving so quickly with GenAI and how it's intersecting, we have to do regular check-ins, as opposed to, you know, doing the survey every 10 years. That's just not going to work right now. I also want to take a moment to say, however, that I'm particularly concerned about the ethical implications of Gen AI that many assessment professionals are rightly voicing: issues about data privacy, intellectual property rights, and the potential for bias are all legitimate. I'm also closely following the research on cognitive offloading, the concern that relying too heavily on just pushing the button might actually affect our analytic skills. Additionally, the environmental impact is very serious. I grew up in Lancaster, Pennsylvania. They just booted up Three Mile Island again to fuel the AI need. It's concerning to me. I was there when we had a bit of a leak downstream. That was a long time ago, but still, the environmental impacts are something we don't always talk about. And then we also have low-wage workers in developing countries doing the training data testing, which raises serious questions about broader implications. Even the way I'm talking to you right now is a type of fluency that we all need to work toward; we need to be able to interact and talk with each other about this. It's modeling the shape and understanding of what this thing is and how we use it. So any opportunity to build fluency in terms of integration and use is breaking down a barrier in how we even talk about it.

J.D. Mosley-Matchett, PhD (11:00) Mm-hmm.

Dr. Ruth Slotnick (11:09) So those are some barriers, and I've got much more to say, but the other thing I wanted to bring to the conversation is a study that I did with Natasha Jankowski in 2015, rolling back 10 years. Natasha and I pursued this question of what we do, how we do it, and what our skill sets are. Had anyone really mapped that out in our field? And 10 years ago, when I had darker hair and was a little bit younger, we identified five major roles.
The first is the assessment method expert. The second is the narrator-translator. The third is the facilitator-guide. The fourth is the political navigator. And the fifth is the visionary believer. So if you think about those in the AI space: what is our role institutionally in bringing this new tool into our toolbox, and how do we use it, not only in our administrative practices, but when we go out into the field and interact with faculty? We have to first ask, is this space a good space to even bring up the conversation of Generative AI? It goes back to when we would come in as assessment professionals and ask: is this space a good space to talk about assessment? Is it going to be a lion's den? How do I need to operate in this space? So you're using your political navigator role there, but you also have to come in with your own assessment expertise, perhaps for a disciplinary conversation about Gen AI integration, or perhaps just to look at applying AI for assessment reporting assistance. You can use these roles to help break down your own internal barriers. If I say the set of practitioners themselves is a barrier, then we have to begin seeing Gen AI not as a replacement for our expertise, but as a way to augment the skills we already have. It amplifies how we could potentially work in our institutions and navigate the complex institutional challenges we're facing right now. We still have to prepare for the incoming students, and we need to prepare the outgoing students, and Generative AI is on both bookends right now. So tackling the barriers is a one-by-one effort by each assessment practitioner, and it starts with oneself.

J.D. Mosley-Matchett, PhD (13:31) Yep. So true. Excellent synopsis. Thank you so much for that. Okay, so let's move on to the second question.

Dr. Ruth Slotnick (13:45) Sure.

J.D. Mosley-Matchett, PhD (13:50) Can you offer some key factors that higher education institutions should consider when determining their readiness for AI in assessment, and how can they move forward strategically?

Dr. Ruth Slotnick (14:04) I listen to a lot of podcasts, and I hear a lot of people talk about policy versus no policy. Even at the AAC&U January meeting, where I did a session with Peter Shea and Devan Walton, there was some concern: do we create a policy? Is it too much of a moving target? It's important to keep some clear boundaries, not only to help the institution understand its boundary around generative AI, but to help faculty understand their boundaries around AI and to help students understand theirs. There was a recent article, literally titled "Where's the line?" It's a blurred line. It's an absurd line. So we've got to have some lines. Generative AI readiness isn't just being ready for the technology. It's about culture. It's about mission. It's about governance structure. It's about strategic alignment to those things, the structure and the mission. And I think many institutions assume that Gen AI readiness means having the right tools, which are really expensive. That's a huge assumption. My observation is that many universities are genuinely interested in innovation but struggling with how to operationalize it effectively. I don't have a clear answer on just how to get started.
But I do think, as Peter Shea observed in a study on institutional Gen AI readiness, that the gap between expressed enthusiasm for Gen AI and actual implementation reveals deep structural resistance to change in higher education. And I think this is a particularly important point when it comes to unionized environments. You may have some institutions doing complete adoption, but they may not be inside a union environment.

Dr. Ruth Slotnick (15:56) This goes back to culture once again: what is the culture of your institution? What is the culture of the state? I also think this is kind of an interesting thing. We didn't expect COVID, but AI has been talked about for decades. I've got books all over my room here about AI and how long it's been in the works. And it's almost like we got bowled over by Generative AI as if it were something new. And I just wonder,

J.D. Mosley-Matchett, PhD (15:57) Yeah.

Dr. Ruth Slotnick (16:24) were our heads in the sand? Why didn't we know about it? Why weren't we tuned into it? But here it is. And now we have to think about how this dynamic is manifesting in our field, and we have to grapple with it. It has strengths, and it has weaknesses. The output of a large language model is both unreliable and dangerous, but it is also useful. So we're trying to figure out how to work inside of that paradox, and assessment professionals especially have training for this: we know what precision and accuracy are. So I think readiness has a culminating effect. First, you've got to look at cultural readiness, and specifically, in terms of assessment professionals, you've got to have some interest in getting ready. That hesitancy is not going to work well for our field; it could actually hurt you if you're hesitant about using the technology. Then we need to think about governance and policy readiness. And I ask: where is the assessment professional at the table for those conversations? In some cases, there's no place for the assessment practitioner at the table, either because they haven't been asked, because the institution doesn't know to ask, or for some other reason. So we act wherever possible. For example, in my own work, I create my own policy for my office, and we live inside of that policy even though a policy hasn't been given to us institutionally yet. Then there's operational readiness: where does Gen AI fit into existing assessment workflows? Right now, most of my Generative AI work has been within my own operational power: how I might use it for survey work, for memo writing, for report writing, for accreditation purposes. But at some point, I need to cross over into how I'm working with faculty around the technology. Some institutions and assessment practitioners are already there, and some of us are still in the operational space, within the locus of our control. I will say that at our institution, Bridgewater, we're offering some mini-grants for faculty who apply to tell us about a use case scenario. It can be a probing project, one where they've either eschewed the technology or been thinking about it but haven't had the right motivation to go there.
And I feel like, at least at Bridgewater State University, in my role over almost 11 years, I've built street credibility and trustworthiness. Faculty know that if they experiment in our space with assessment, it's a close relationship where they can build fluency and build readiness. On the other hand, faculty who have already been doing things in the Generative AI space can apply to share that with us. We'll do use-case-scenario interviews at the end, and then we can package that up and send it out to the faculty for consideration. I don't know how it will be received, but it's my first try at getting something out there to move our campus forward in terms of readiness in the assessment space. There's a whole other entity looking at readiness in the teaching space, and that's never been where I'm centered at my institution. I don't do the teaching relationship per se; that's the Office of Teaching and Learning. I don't do the technology per se; that's the Office of Technology and Training Center. I do the assessment piece. I work inside of my intersection, and that's where I see us going, slowly, but we're getting there. So in terms of readiness, you've got to look at the culture, the mission, the governance structures, and the strategic plans, but you have to keep in mind those coming in, meaning the students, and those going out, in terms of workforce readiness.

J.D. Mosley-Matchett, PhD (20:33) Well said. Thank you so much, and I applaud your efforts, because someone has to move the needle forward, and too many people seem to be just "deer in the headlights," frozen in place. So bravo. Your research from 2014 and 2024 has shown that assessment professionals often work in silos, yet AI adoption is moving faster than any one office can manage alone. So how can a community of practice help higher education leaders build AI fluency and integrate AI into assessment effectively?

Dr. Ruth Slotnick (21:13) Yeah, I think that's the reality of where we're at right now. Through most of 2022 into 2023, I felt like I was working alone in this space, and it was hard to know who I could even connect with and talk to about it. I found myself falling into a really great friendship with Peter Shea at Middlesex Community College, whom I've already mentioned. He's an instructional designer who has now been promoted to an institutional level, overseeing AI for Middlesex Community College. And Devan Walton is an assistant professor of computer science at Northern Essex Community College. I naturally floated into a relationship that Peter and Devan had already built; they were road-showing at our state-level conferences: "Look, here's how you can use AI, here are the friction points, and here's how we can think about integrating AI to relieve some of those friction points for faculty and even in our own work." Luckily for me, Peter and Devan took me in, and we became the collaboration we each wanted from our own institutions. Devan was my computer science faculty member, Peter was my instructional designer partner, and I am the assessment person. And so we started collaborating.

J.D. Mosley-Matchett, PhD (22:26) And yes.

Dr. Ruth Slotnick (22:35) That was my first home for collaboration, where I could say, "Yeah, I wrote this whole thing with ChatGPT. I changed very little. What do you think?"
I wasn't willing to put that out there anywhere else, but I was willing to share it with them. That vulnerability, and that space, was very important to me early on, because I didn't have it with others. Then, slowly, little by little, I think the field started to admit to itself that we're using it, and we started talking with each other about how we're using it. Fast forward to the Assessment Institute: there may have been two, maybe three, sessions on Generative AI in 2024, at the premier, longest-running assessment conference in the country. That tells you something, right? Our field is moving so slowly.

J.D. Mosley-Matchett, PhD (23:22) Mm-hmm.

Dr. Ruth Slotnick (23:29) And that session was packed. There was standing room only. I presented with Gavin Henning, and Natasha was supposed to join us; she couldn't come, but she gave us all of her notes and we presented her information. We went through a little bit of an opener, a quick hands-up in terms of use, then talked about use and did a little demo. That's all we did, and people loved it. I think moving into 2025, we're in a different space now.

J.D. Mosley-Matchett, PhD (23:33) Yeah.

Dr. Ruth Slotnick (23:59) We're moving now toward the Assessment Institute Community of Practice. That will be led by the wonderful John Hathcoat and Will Miller from Embry-Riddle Aeronautical University. Together, we will build a rather large community of practice. We have, I believe, over 125 individuals who have expressed interest. And when I tell you we have 226 respondents to our survey, half are saying they want a community of practice. Why is that? Because we have no training institutes, no guidelines, no books on this, and very little literature. I wrote a paper, published in 2024, on how you use AI for qualitative work as an office of one, maybe two, and on the ways we can test it to see how well it works. And you know what? It worked really well. It worked really well as a third research partner. But always with the human in the loop, and always testing it: there was stuff we missed in the thematic analysis that it picked up. So it has strengths, and it has weaknesses. There were also the Java Jams, which we kicked off in 2024 and ran for almost a year, taking some time off in the summer, because it was an unhosted community of volunteers. We asked: who wants to come forward and share with the field how they're using it in assessment? That's how I got very connected to David DiSabito, Josephine Rodriguez, Will Miller, of course, Anne Converse Willkomm at Drexel University, Devan, and Peter; Joanna presented on cautions. We're building the field together, and those sessions were really well attended, even though we were just running an unhosted community. We sprinted right out of the gate with a lot of conversation, and then we noticed the field was shifting to wanting more: not just the conversation, but a 45-minute walkthrough of "how are you using this thing?" And that brought us to the current moment, where we need to move from these quick sprints to deeper training that we run for each other.
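To ground the qualitative workflow Dr. Slotnick describes, here is a minimal sketch of what a first-pass, human-in-the-loop thematic analysis might look like, assuming the `openai` Python package (v1.x) and an API key in the environment. The survey comments, model name, and prompt wording are illustrative placeholders, not her published method.

```python
# A rough, human-in-the-loop sketch of using an LLM as a "third research
# partner" for qualitative survey comments. Assumes the `openai` package
# (v1.x) and an OPENAI_API_KEY in the environment. All inputs below are
# hypothetical placeholders, not a published protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical open-ended responses from an assessment survey.
comments = [
    "I want hands-on training, not another webinar.",
    "Our office has no AI policy, so I avoid the tools entirely.",
    "AI sped up my report drafting, but I double-check everything.",
]

prompt = (
    "You are assisting a university assessment office with a first-pass "
    "thematic analysis of open-ended survey comments. Propose 3-5 candidate "
    "themes. For each theme, quote the supporting comments verbatim; do not "
    "invent or paraphrase any response.\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever your license covers
    messages=[{"role": "user", "content": prompt}],
)

# The machine only proposes. A human checks every theme against the raw
# data before anything is reported: the "human sandwich" in practice.
print(response.choices[0].message.content)
```

The point of the sketch is the division of labor: the model drafts candidate themes quickly, and the human, who may catch what the model misses and vice versa, remains the final judge.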
AAC&U had, I believe, an institute, and it was for faculty. AALHE is doing some really great webinars. And NEEAN, for example, our Massachusetts group outside of AMCOA, is doing some presentations and panel presentations. But we need more than that as a field. So this community of

J.D. Mosley-Matchett, PhD (26:23) Mm-hmm.

Dr. Ruth Slotnick (26:39) practice is where we need to be, because this fire hose is coming at us fast. We need a way to really get down into the weeds of the technology, admit what we don't know, and bring in the expertise we need. And when I say experts, none of us here are experts in AI. We may have to bring in an academic integrity expert. We may have to bring in ethics training: how are we using it, what are the ethics, what are the sustainability outcomes, and when we're interacting with the technology, what does that mean for us, for the world, for our environment? We have to be the training we need, because it is not out there. You can get some Google training, you can get some Microsoft Copilot training, you can go on LinkedIn and get all sorts of training, but it's not built for the managers, directors, leaders, and practitioners in assessment. We have to build it for us, and no one's going to do it for us. That's why this community of practice through the Assessment Institute is its own unique, very much needed next step. And we have to wait until 2025 to get there, right? We have to wait for the kickoff. So we're starting to build the structure of what it's going to look like. It remains to be seen, but I think it's going to be the next thing that our field needs.

J.D. Mosley-Matchett, PhD (28:04) I agree wholeheartedly. Thank you so much for all of the good work that you're doing. Okay. Accreditation and institutional effectiveness efforts increasingly rely on data, and AI does have the potential to support that work. So what ways would you suggest for institutions to integrate AI into assessment reporting while still maintaining human oversight and credibility?

Dr. Ruth Slotnick (28:32) Well, I think it's a great question. I'm reminded of Laura Gambino; I believe it was on a different podcast that I listened to Laura talk about how even the NECHE Commission has its own sort of paid internal ChatGPT. I think we call them subscriptions now. And NECHE is using it for really deep dives into longitudinal analysis of, I would say, thousands and thousands of pages of self-studies. The reality is that we don't have paid researchers at the commission, undergraduate or graduate researchers, who are able to look deeply into trend analysis or coding the data. So in some ways, it's an exciting prospect for the commissioners themselves to think about how they use these tools for deeper analysis, looking at trends at the commission level itself. And I applaud NECHE for already experimenting in an enterprise space, and I'm very jealous, because the enterprise space allows for a very different relationship with the technology, which I don't get, because I can't afford $200 a month in the office of assessment. But perhaps if I worked at ASU and our campus integrated it naturally, I'd be having a very different experience, and I'd be talking to you differently.
Someone like Lisa Bortman, who's at Arizona State University, would be an interesting person to interview, because her campus is a complete adopter, to the point where she might not be able to tell you much because of the intellectual property piece of how they're using it. But we need that in the field right now. When it comes to the accreditation pieces and components, the human is always the final judge. Jessica Parker calls it the human sandwich: the buns are the human, and the middle is the Generative AI inputs and outputs, but at the top and the bottom, the human is always there for the input and the output. So think about the accreditation pieces. I believe you've had the wonderful Glenn Phillips on your podcast, and Glenn has probably already talked about how it is actually possible to use Generative AI to measure a narrative against a set of standards. I actually don't see that as a barrier. And I think CHEA just recently put out a statement, and they are guidelines; I don't think policy directives are coming down from any of the institutional commissions themselves, or even from CHEA, just guidelines. But the human stays in the loop. You could take a piece of writing that your institution has put together, feed it in, and ask: where have I missed addressing a standard? Give me the strengths and weaknesses of this particular narrative and what I'm missing. Am I offering mere description, or am I actually providing an evidence-based narrative? It's pretty good at giving reflection and feedback. And you can go even further now with deep research in ChatGPT, for example: you can ask it to do a comparative analysis. So I think we can work the tools to our advantage. But still, at the end of the day, it doesn't know our institutional data. It doesn't know our evidence-based stories. It doesn't have the actual day-to-day interaction, from the dean level to the department level, or the way the Office of Assessment may collect or aggregate data to tell the story. So it's not going to replace the assessment professional, or even a writing team's work. However, it can serve as a really powerful copy editor. Copy editors are expensive, and it can do a first-pass copy edit. You can tell the machine, because that's all it is, to go through every single page, not change a single word, and just flag where each page needs copy edits. You can restrict what you tell it to do in preparing the narrative. Now, writing the entire narrative without humans in the loop is, I think, nearly impossible, although it would be kind of a fun project to see where it goes. My guess is that it would start hallucinating all sorts of data. It will make up data; the machine wants to please you. It will stick made-up data in there and probably create some pretty graphics that have nothing to do with your institution. So all of that to say: I think it can have a role as a tool in the work we do on our campuses and in the contributions the assessment person makes to any sort of accreditation writing component.
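The restricted "copy editor" and standards-gap pass described above can be sketched the same way. Below is a minimal illustration, again assuming the `openai` Python client; the standard text, draft file name, and prompt are hypothetical and do not reflect any commission's actual tooling.

```python
# A rough sketch of a restricted copy-edit and standards-gap pass over an
# accreditation narrative. Assumes the `openai` package (v1.x); the
# standard, file name, and model below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder standard and a hypothetical draft of one self-study chapter.
standard = (
    "The institution uses evidence of student learning to improve its programs."
)
with open("self_study_chapter.txt") as f:
    narrative = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copy editor for an accreditation self-study. "
                "Do not rewrite or add content. Only (1) flag needed copy "
                "edits, section by section, and (2) note where the narrative "
                "is descriptive rather than evidence-based relative to the "
                "standard, so the writing team can decide what to change."
            ),
        },
        {
            "role": "user",
            "content": f"Standard:\n{standard}\n\nNarrative:\n{narrative}",
        },
    ],
)

# Human-sandwich rule: the output is advisory. The writing team makes every
# change itself and verifies the model has not invented institutional data.
print(response.choices[0].message.content)
```

Restricting the system prompt to flagging, rather than rewriting, is the design choice that keeps the human at the top and bottom of the sandwich.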
J.D. Mosley-Matchett, PhD (33:26) Now, we all have perfect 20-20 hindsight. So if you could go back to 2022 and give advice to yourself at the beginning of your AI journey, what would you say, and what do you wish you had known?

Dr. Ruth Slotnick (33:40) That's such an interesting question, because... I think I'd tell myself to start experimenting more and sooner, although I'd say I was pretty prolific. I put myself out there on LinkedIn. I connected with some of the brightest minds and voices. I scanned the headlines. I followed the stock market. I tested frontier models. I watched the latest updates as they progressed. I followed the OpenAI drama on X. I read articles in the New York Times. I read the Substacks and blogs of Lance Eaton, Donald Clark, and Peter Shea. I listened to webinars in higher education, in business, and in the health sector. And I listened to lots of podcasts, including one that brought on students, both grad and undergrad, to share their experiences, which was really eye-opening. One graduate student said she was concerned that people will start talking like the bots themselves: "let's do a deep dive," or "crucial," the sort of words the bots pump out over and over again. Her concern was that we're going to start talking like the bots. So sometimes when I say we're going to do a deep dive or deep research, I wonder, am I starting to talk like a bot? I even co-presented with colleagues at assessment conferences in the state of Massachusetts, at the Assessment Institute, and at the AAC&U conference. So I feel like I have been out in front. And yet, when I look back,

Dr. Ruth Slotnick (35:10) I keep thinking I wasn't in it early enough. So what is that about? I think, as I said before, it's the fire-hose moment: information is coming at us so fast that we always feel like we're behind. Even in 2022, at some points I felt like, okay, I'm on top of the conversation, and then I'd take a day off from the technology and from social media, and the next thing I know, I'd get on LinkedIn and there would be hundreds of people presenting really interesting models for thinking about Generative AI and how we use it in higher education. Mairéad Pratschke in Europe is very much in the space, talking about higher education use and teaching overseas, and Donald Clark is in England. So when you start looking at the higher education world, AI is actually bringing us closer together. We tend to stay in our national states of mind, but I think this general purpose technology, as Ethan Mollick calls it, the GPT, is different from any we've ever had before. Henry Kissinger's new book, Genesis, talks about the polymath, the type of person who is so bright in science and math and technology. We have relied on the polymath for many centuries, and AI is the new polymath. It's very different from anything we've ever experienced before in higher education, and it is disrupting higher education. So when I look back, I wonder what I could have done more of, and I think I did all that I could at the time. I'm doing all that I can now, realizing that I'm not an AI expert, and I'm just looking at where my opportunity is to integrate within my field, and where my opportunity is to inspire others to get off the fence a little bit and try it. There are plenty who are just going to resist and say there's no way they're even going to try this technology or use that technology. And I think that's going to put them at an increasing disadvantage.
Many people have said this before: AI won't replace you, but someone who is really fluent and able to experiment in the space may replace you. One of the questions we ask in our survey, and this may be a good wrapping-up point, is,

J.D. Mosley-Matchett, PhD (37:32) Okay.

Dr. Ruth Slotnick (37:46) "Do you see AI being integrated into a call for a position in your office in the future?" Will it be: you have to know Excel, you need to know how to use Qualtrics, maybe some other data analytics software, and do you know how to use Generative AI and integrate AI into disciplinary perspectives? What is your fluency there? If you can't speak to that fluency now, you've got to get into the space, so that even if you're not going to use it, you know how to talk about it. You know where it's at currently. You know that this model just revised itself to add this additional capability over here. If you're not in that space, you've got to get in that space and be part of the conversation. It's nothing we've ever seen before. I also think the community college space is going to be an interesting space to watch. Why? Because community colleges have always served students in terms of access, in terms of being local, and in terms of workforce readiness. And when we're talking about being ready for the workforce, I am worried about some of the universities, state colleges, and privates that are not actually tackling the question: are we preparing students for AI readiness and fluency, ethical use, intellectual property, sustainability? Are we preparing them for the future? So as we look ahead, where can you also influence your university to step up? These multiple roles that we play as assessment professionals aren't just about how we experiment and get out of our own way as a barrier; we also build collaborative structures within and around each other on our campuses, so that we can all serve the needs of students, and so that all students have an equitable chance, upon graduating, to be leaders in the space when they enter a field. Right? That's my greatest hope: that we all look ahead to what the student needs, not to our own resistance, not "I don't want to be in this technology," or "I'm going to wait on the sidelines until it comes along," or "I don't have the capacity." We can't wait. It's happening. It's happening in higher education, and it's happening in the workforce. So higher ed needs to be a player, be a part of it.

J.D. Mosley-Matchett, PhD (40:22) Precisely. Thanks so much for giving us so much food for thought, Ruth. It's so important for higher education administrators to base their approach to AI on solid research instead of relying on gut feelings and instinct.

Dr. Ruth Slotnick (40:38) I think you're right. But you can take that gut instinct and say, okay, let me play with that a little bit. And I'll just leave you with a funny little story: my dad is 89 years old, and I loaded ChatGPT onto his phone, and we selected a good voice for him, although the voices have changed a little bit; he liked the original voices. And he talks to it. He's often alone and isolated, and he talks to ChatGPT. He might need advice about a friend who is dying.
He might need some advice on how to write a text message to his niece, who was just selected as valedictorian, when he doesn't quite know exactly what to say. If an 89-year-old man can tackle ChatGPT on his phone, experimenting in his own way, you can do it too. And if you're not going to use it in higher education, you can think about how you might use it in your own life, and maybe help your senior parents consider how this technology could make them feel a little less isolated, because the technology is pretty savvy, and it can actually make it feel like you've got a person to talk to. I think the senior space is going to be an interesting one to watch, because how we care for the aged in this country is always a little bit of an afterthought. And I think Generative AI technology, even being integrated into other devices (I'm not going to say the "A-l-e-x-a" name, because it will respond to me over there), will continue to advance in a way that is so integrated, we won't even know we're interacting with the GPT. So the agents are coming. They're coming to our desktops. They'll be able to help us in all sorts of ways in our personal lives. I still think we have a way to go in terms of how it's going to help us in our university lives.

J.D. Mosley-Matchett, PhD (42:49) For more information about AI news and trends that are directly impacting administrators in higher education, please follow InforMaven on LinkedIn and visit our website at informaven.ai.