J.D. Mosley-Matchett, PhD (00:30)
It's time for another episode of AI Update, brought to you by InforMaven. And today our guest is Will Miller, Ph.D., the Associate Vice President for Continuous Improvement and Institutional Performance at Embry-Riddle Aeronautical University. Dr. Miller is on the Board of Directors for AALHE and SAIR and co-leads the Assessment Institute's GenAI Community of Practice with Drs. Ruth Slotnick and John Heathcote. Welcome to the podcast, Will.

Will Miller, Ph.D. (01:02)
Thanks, J.D. I'm really excited to be here and to talk.

J.D. Mosley-Matchett, PhD (01:04)
I've known you since my time as the Director of University Accreditation at UNC Charlotte, and I've always admired your ability to cut to the essential core of accreditation and assessment challenges, which explains why you're heading Embry-Riddle's Office of Continuous Improvement and Institutional Performance. So would you mind sharing some of the successful projects that you're especially proud of, just to give our followers a sense of the quality of your solutions?

Will Miller, Ph.D. (01:32)
Yeah, of course. I'm really excited to actually share that, J.D. And it's funny, I mean, you mentioned the office name, and that's, I think, been one of the greatest successes. When I first came to Embry-Riddle, I got to Embry-Riddle in March of 2023, I was selected as the AVP for Assessment and Accreditation. And I'll be honest, I really didn't like that title. I didn't like the office title of Accreditation and Assessment. And part of the reason is because it's just not exciting to people. At Embry-Riddle, we focus on future-forward things. We're into aviation, aerospace engineering, space operations. Nobody wants to talk to the assessment and the accreditation person. It's the most bureaucratic way to think about what we're doing. And it also doesn't reflect what we do. You mentioned the accreditation angle. Even our accrediting bodies, which are historically incredibly slow to adapt, are into the continuous improvement language now. And it's all about closing the loop and all of these other phrases we throw out. But when I really looked at the work of what the office I was inheriting did, we needed to center in on the continuous improvement piece. And there's so much more that obviously goes with that, but we really wanted to make sure that when faculty and staff were interacting with us, they know where our priorities are. So we started with the office name change. And then over the last few years, we've really tried to become leaders in this continuous improvement space. So in February, we held our second annual continuous improvement summit. We had over 500 attendees from about 170 institutions. We were able to fill three days of virtual professional development, and we're not looking to make money. I mean, we offered, you probably remember, last year, $25, $30 for registrations. So we're trying to just, you know, more or less cover our costs for whatever our platform costs and a couple of things. And that's been incredibly successful. And then this year we launched the Improvement Imperative webinar series. So we had the first Improvement Imperative session earlier this month and had just under 200 people attend, from about 140 institutions.
And that really sprang from us wanting to do more internal professional development for Embry-Riddle folks on, you know, topics like continuous improvement and preparing for accreditation visits and making meaningful program review and thinking about churn in the assessment space. You know, we go through people somewhat more frequently than we would like, and that's always a loss of momentum. So we hosted that, but we're really just trying to provide gap fills where we can in areas that we know higher ed talks about, but there's not really a good vehicle. Then on the personal side, obviously, there's what we're doing with AI. I think our AI work has been incredibly helpful in starting to reshape how we think about assessment and continuous improvement. So we've moved from having assessments and assessment plans or reports to what we call "insight to impact" dialogues, and it's fueled by AI. Instead of having faculty, starting this fall actually, go into the system to enter their assessment information, they're gonna have the option to instead schedule an hour meeting with somebody on my team or myself, and we will walk through it with them through, basically, conversation. What do you want students to learn this year? How are you gonna know if they're learning it? What problems did you have last year? What barriers do you think might come up? And we'll be able to get that information in real time. And then we have an AI agent built that will strip that conversation back into the bureaucratic templates. And we really wanted to go that direction because even myself, I might have pages of information I want to share, but when I sit down at the system to say it, I type two sentences and I'm like, I've done everything the accreditor needs. I don't need to go deeper than this. We're missing the good information. And I think in higher ed in general, especially on the assessment and accreditation and even the institutional research side, we've really had the script flipped. And, you know, for a long time I racked my brain and it kept me up at night. I'm like, how do I tell the story for them? Like, they give me this data, this assessment information, these accreditation reports. How do I tell the story? And then I realized that's wrong. They already know the story. How do I capture their story and make it work for all of these other places? I shouldn't have to tell the faculty story. They already know it. And it's a lot easier to convince a faculty member, not that they need convincing, or a staff member, to sit down and brag for an hour with us on what they're doing and why they're doing it and the hurdles they're facing and what they did based on data they've seen. It's a lot easier to convince them to do that than to do something that, no matter what we do to change culture and wording, at the end of the day, they know the preeminent reason we do it is for SACSCOC.
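A minimal sketch of what an agent like the one described above might look like, assuming an OpenAI-style chat-completions API; the model name, prompt wording, and template field names are illustrative assumptions, not Embry-Riddle's actual build:

```python
# Hypothetical sketch: map an "insight to impact" conversation transcript
# into the structured fields a bureaucratic assessment template expects.
# Field names, prompt, and model are assumptions for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE_FIELDS = [
    "intended_learning_outcomes",
    "assessment_methods",
    "prior_year_results",
    "anticipated_barriers",
    "planned_improvements",
]

def transcript_to_template(transcript: str) -> dict:
    """Extract template fields from a recorded faculty conversation."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You convert assessment conversations into report "
                        "fields. Return a JSON object with exactly these keys: "
                        + ", ".join(TEMPLATE_FIELDS)
                        + ". Use only information stated in the transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

The design point is the direction of the flow: the faculty member talks, and the template is generated from the conversation, rather than the conversation being constrained by the template.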
J.D. Mosley-Matchett, PhD (06:14)
Yes, so true.

Will Miller, Ph.D. (06:15)
So lots of culture, lots of structure. I think language matters. I think names matter. I think titles matter. So a lot there.

J.D. Mosley-Matchett, PhD (06:22)
I agree wholeheartedly. My goodness. You've been busy and I love it. So let's hear some more about your thoughts about AI. First of all, how is generative AI reshaping the way institutions approach program-level assessment, particularly in areas like documentation, reporting, and continuous improvement planning?

Will Miller, Ph.D. (06:47)
Absolutely. I mean, I think we have to start, J.D., and it's weird that we're still having to say this, but we just have to recognize that AI is going to change how we do things. I'm growing increasingly frustrated on the listservs with folks that seem to want to think that this is just going to disappear and it's not really going to reshape things. And just because I can teach it without AI doesn't... For me, our students are going to leave our institutions entering fields where they are expected to be able to use, utilize, understand AI. And obviously there's ethical concerns and everything else that goes with it, but there's this, you know, head-in-the-sand approach, that AI is just like MOOCs were 10 years ago and just a threat that's gonna disappear. It's fundamentally changing what we do. And on the program assessment and the accreditation side, so much of what AI can offer is time saving for faculty and staff. When I think about interactions with faculty or staff members at any level, I consider that they have 100% of effort to give something, to give anything, to give everything. And that 100% for faculty and staff should include what they're doing at work, what they're doing in their personal lives, what they're doing in their family. There's 100% effort I can give. And I think a lot of folks in the assessment and accreditation space have historically not been kind about that. And we'll just keep adding things for faculty and staff to do, so that we're expecting 120, 130, 140% of effort that they don't have to give without burning out. So I stress to my team on a regular basis, if we're asking them to do something new, we have to take something away. Like, we need to keep this level. Like, if I'm asking them to give two more percent, I need to find something that can save them two more percent. AI can save a lot

J.D. Mosley-Matchett, PhD (08:28)
Yes.

Will Miller, Ph.D. (08:36)
of time and energy for folks. The whole thought in reshaping this, and again, I come from a faculty background, I'm a political scientist, I've filled out assessment reports for myself at some institutions, and the frustration still carries. I get done grading and scoring and assessing. I'm like, whoo, got the faculty part over. Now I have to go into the program assessment side. I have to enter all the information I collected, what it shows me, and analyze it. And by the time we get to the point where I'm going to talk about what I plan to do in response to what I'm seeing, I don't want to sit there and think about it anymore. I'm done. I've given my 100%. So we always end up shortchanging the part that everybody actually cares about. And it's not just that the accreditors care about that part, which they should. That's where institutional value comes. That's where student success increases. So I think with AI, the ability for it to do a lot of that front-end piece then gives us time to say to faculty and staff, look, the data collection, the information, all of that's been there. And we're just gonna talk to you about what you plan on doing or think you might do based on this. And then we're gonna come back in two years when you reassess the outcome and we're gonna see what you did and what worked and what didn't work. So we're allowing them to focus energy on the part that actually makes the difference and matters, rather than all of the energy going towards data collection, data aggregation, data sorting, all of those things. The accreditation part's a little more interesting in some ways.
And it's only because, whether you're dealing with Gemini, Copilot, ChatGPT, any of these tools, with accreditation reports, you wanna make sure the information that's coming through is right. I use AI for accreditation a lot in... How can I restructure? How can I reword? How can I better present this for consistency across various programs that should be having the same information? So it's much more on that kind of idea-generation and then organizational side. As opposed to, you know, I'm not going to ChatGPT or Copilot and saying, "Write a two-page background on Embry-Riddle Aeronautical University based on what you can find or dig up." But even on that piece where we are using it, it can quickly tell me, if I upload three reports to an internal Copilot, where do I have information in one that conflicts with information in another? Which we can catch as human beings too, but it's a lot easier, and it's a better use of our time, for AI to catch the things that are that obvious. And then for us to spend energy figuring out, how do we want to rectify this? Maybe it's a typo, or maybe it's, gosh, we have faculty thinking about this very differently and we need to make sense of it.
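A rough sketch of that cross-report consistency check, assuming plain-text report files and an OpenAI-style client standing in for an institution's protected Copilot tenant; file names and the model are placeholders:

```python
# Hypothetical sketch: flag factual conflicts across accreditation report
# drafts. In practice this would run inside an institutional tenant so the
# documents never leave institutional control.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def find_conflicts(paths: list[str]) -> str:
    """Ask the model to list claims that disagree across the documents."""
    docs = "\n\n".join(f"--- {p} ---\n{Path(p).read_text()}" for p in paths)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system",
             "content": "Compare these documents. List every factual claim "
                        "(dates, counts, names, policies) that appears in more "
                        "than one document with inconsistent values, citing the "
                        "documents involved. Ignore stylistic differences."},
            {"role": "user", "content": docs},
        ],
    )
    return response.choices[0].message.content

print(find_conflicts(["report_a.txt", "report_b.txt", "report_c.txt"]))
```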
J.D. Mosley-Matchett, PhD (11:18)
That's so true. I absolutely agree with everything that you said, and that's why I have always admired you. Anyway, let's

Will Miller, Ph.D. (11:29)
Yeah!

J.D. Mosley-Matchett, PhD (11:29)
Can you tell me what strategies you've seen or would recommend for helping assessment and accreditation teams overcome resistance from colleagues who distrust or fear AI's role in higher education operations?

Will Miller, Ph.D. (11:45)
Yeah, a hundred percent. I think there's two layers to this one. It's funny, earlier this week, J.D., I was at James Madison with, you mentioned Ruth and John earlier, with the GenAI community of practice, and we were at James Madison kind of brainstorming what this will look like. It's about what you asked, but it's also about, how do we get the assessment and accreditation folks that are still resisting on board? And I'll be honest, I think for a lot of the assessment and accreditation folks, and I talked to some that don't like how pro-AI I am and, really, you know, would prefer I not be. And when you get down to the end of the day, it's all fear. And I hate saying it. A lot of the fear is, I'm going to lose my job if AI can do what I've been doing for 20 years and turned into this, you know, kind of siloed position. That's what I do. And I have really two big comments on that. Number one, that's a terrible way to think about your job and the health of your institution. You know, if you've created this niche by making something manual and you're the only one who knows how to flip the levers or turn the dials, that's not healthy for anybody. But number two, I'm a firm believer that AI supplements and assists. AI doesn't replace. You know, I tell my team as much as I can, like, it's okay if we start using AI and you find yourself having two hours a day that you're not sitting there doing something tedious or manual, and instead you have time to think big picture or to think about publications or to think about revamping process or building culture or working with personalities. But I'm like, that's okay. That doesn't mean you're unnecessary. That doesn't mean you're becoming, you know, close to losing your position because your position has been replaced by an agent who can do it. That's just not the same thing. It frees you up to have the more elevating conversations that we actually need people to have. I would much rather pay anybody working at any institution to sit there and think big picture about ways they can have impacts on student success, their own office, efficiencies, whatever, than sitting there doing something tedious just so they felt like they couldn't be replaced. But I'm paying them the same. There's so much more value add to this. Now, in terms of resistance amongst the people we work with in assessment and accreditation, that's a tougher one. And it's tougher because, I'll say it's a lot tougher with faculty, and I'm generalizing here. I have plenty of faculty, and I think we all do, who are fully on board. In some cases, I'd say we have faculty that are too on board. I do think there are faculty, and staff, that might see AI as the replacement, and they're getting their time back, but they're not necessarily doing the forward-thinking things we'd want them to do with that free time. Which in some ways, I mean, if I was in a classroom right now or teaching online, I mean, yeah, AI can read discussion posts and respond to them. AI can grade if I give it a rubric. AI can post announcements. Like, I get the temptation. So there's a balance to find there. So you have one end of the spectrum that's, I'm going to use AI for everything, and this is going to be freaking amazing because I'm not going to have to do any of these tedious things I used to do. And then on the other end, you have the, AI will never enter my classroom under any circumstance, I refuse to go this route, it's impure, whatever it might be. And in both cases, we're failing to help our students. The faculty member who's relying too much on AI is not using their expertise. And again, if we think about this, and again, as you know, J.D., I have mixed feelings on accreditation at all times and what's good and bad. But at the end of the day, SACSCOC, ABET, whatever accrediting body it is, they're not qualifying our faculty on their ability to train AI to be disciplinary experts. They're qualifying our faculty on our faculty members and their knowledge and expertise. So we need to be super careful with that. And that doesn't mean that if a faculty member wants to use AI to generate outlines and use outlines to generate scripts that reflect their own voice and ideas and then use AI to create, you know, avatar videos that explain this. That's fine. Like, no concerns, no worries there. We're talking about just full replacement. So part of it's trust. Like, we have to work with faculty and staff better to help them understand where we do and don't use AI. And I think, honestly, just having that type of setup helps get buy-in immediately. All of a sudden, when you're willing to say, these are places we don't want AI to go, I think faculty and staff feel relieved, where it's like, okay, this isn't going to be all-encompassing. There's been thought about the ethics and guardrails and potential problems and issues. So I think that starts it in one. And then, two, part of it is respecting that there are places where it may not be as appropriate. You know, and again, I believe you can find ways to use AI in any course, meaning in any subject matter. But you have some where it's more direct than others. If I'm in a computer science program, I'd better be teaching my students how to use AI because they've seen it, they know it.
If I'm at Riddle and I'm in aerospace engineering, same thing. But if I'm teaching, I'll use political science. So I'm teaching intro to American government. It might not be something that weaves through the entire class. I might have an assignment that gets into, you know, in a society concerned with misinformation, how does AI contribute to this? Go through and do some assignments where you find misinformation or create misinformation. But it wouldn't necessarily be the entire program the entire time. So I think we have to also have those conversations where faculty understand, you're not being forced to make this a thing, but in places where it naturally seems to fit, we might wanna do that. And that depends on your institution type too. I mean, at Embry-Riddle, we have tech-forward students. They're there for a reason. They're going to expect AI. When I taught at Flagler College, and again, being fair to Flagler, great school, much more liberal arts based, forward-thinking students. It's not that they would be anti-AI, but it would have been a different conversation around AI. The programming doesn't lend itself as naturally. If I'm in a theater program or I'm in a studio arts program, like, studio art students are gonna have a very different view. I met with a few at another institution about two weeks ago who were all upset because they used to create patterns that they could sell on Etsy. And now all of a sudden AI generates all of these patterns, but nobody actually checks them. So people sell the patterns, and then you think you're getting a pattern of a cat and a lamp, but you're getting, like, a cat that appears to have suffered serious broken-bone issues with how its legs are going, and a yellow orb that might be a lamp, because nobody cross-checked. It's very different conversations, but I think we need to have those conversations and we need to build the culture where people feel comfortable telling us what their limits are. It doesn't mean we accept the limits. We can say, okay, that's good. Now let's try to push the boundary just a little bit. But we need people to feel comfortable saying, like, I'm not comfortable. I don't know how to do this. And there's a big difference there too. We need to know who our "I don'ts" are and who our "I won'ts" are. You know, I saw a LinkedIn post this week from an individual at an R1 in the South who has been very progressive in some other places, but was incredibly clear they do not appreciate AI being shoved down their throat by their administration, and that's more or less the language they used, and that they will resist it until the end, and nobody's talking about the cost and everything else. And all I can keep thinking is, if I have a student leave today who goes to an employer and says, my university didn't expose me or train me or talk to me about AI, what service am I doing them? I mean, to me, this is the same conversation that higher ed had in the early 2000s about Microsoft Office, when it was like, why would we require this intro computer applications course and make all of these students learn this? And we know now, I mean, how many jobs do we see where there isn't a requirement that's like, basic understanding and ability to use Office tools? And that used to mean the ability to, like, log into Outlook.
And now, even on that, I think it pretty much means, like, if you see Excel, they're not talking about, like, typing in numbers. They're talking pivot tables, they're talking... So I'm sorry, that was incredibly long-winded, but I think so much of it is that culture and conversation to figure out where people are and what they want. And part of it, too, is use students in this. And I'm not saying to weaponize students, but at the end of the day, students are going to start asking in courses, "Where's the AI? How does AI tie in? How does this fit?" Say you're teaching Intro to Philosophy. Sure, maybe it's not somewhere that, you know, naturally comes to mind. But think about assignments you can do in Intro to Philosophy. Go on to ChatGPT and ask it to pretend that it's Cicero, and interview Cicero through AI. And then for your paper, I want you to critique how accurate you think the AI was based on your readings of Cicero, and where it was wrong, and why it would be wrong. The ethics side of philosophy, like, let's get into the ethical debates. Let's get into the questions and the conversations and the back and forth on the cost of AI to some of the developing countries of the world and what that looks like. And, you know, what's the cost of progress, and what are we okay with and not okay with? And, you know, the grand irony for me is a lot of people I see online that are sitting there sending out the messages of, we're destroying some of these economies with AI, and it takes so much power and electricity and energy. And I'm like, but it says you sent it from mobile. So we already mined that country to get the battery for your cell phone. So, like, where are we drawing the line, and where are the inconsistencies? Not saying they're not valid points, but what's the self-reflection? So lots of ways we can do this and do it.

J.D. Mosley-Matchett, PhD (21:34)
This is true. This is very true. Okay. Can you share some examples of how generative AI has meaningfully streamlined or enhanced tasks like rubric development, program review synthesis, or accreditation self-study drafting?

Will Miller, Ph.D. (21:53)
Yeah, and I'm going to start with a different example than the three we sort of talked about, because I think it's really telling. At Embry-Riddle this year, we've had this amazing transformation. And it started with our board of trustees, our president, and then our new provost, who started about a year and a half ago. With Embry-Riddle, we have three main campuses. So we have a residential campus in Daytona Beach, a residential campus in Prescott, Arizona, and then we have our worldwide campus, which is asynchronous online plus 130-ish sites located in 28 states and around the world. So it's all-encompassing. Historically, we've had three catalogs, where courses sometimes directly map over and sometimes they're sort of the same but a little different. And we've used three-letter prefixes and four-letter prefixes, and course descriptions have been kind of all over the place. Under our leadership, we wanted to make the push for one university. It's one degree. The transcript doesn't say you went to Prescott. The diploma doesn't say you went to Worldwide. It's Embry-Riddle. So we wanted to bring this together in an impactful way. And we decided that we needed to do this in August, September, and decided that we needed to have it done by this summer to be able to implement it for fall of 2026. So, to give IT the time that they need.
So our options at that point, on my side, as I was kind of charged with, let's get to the same course description, let's eliminate the duplication, let's bring this into one catalog, were: I could go to the faculty in September and say, take all of your courses, call all of your colleagues on the other campuses, and let me know when you have decided on prefixes, numbers, course descriptions, course goals, student learning outcomes, and also keep all of these requirements and guidelines in mind as you're doing this. Or I could use AI, work with my team in my office, and we could instead use AI to take variances across course descriptions, course goals, learning outcomes, plus the guidelines and requirements, and start them off with a draft. So we went that route. So we used generative AI to basically recreate our catalog from the three existing catalogs. And then our office team went through and made sure we liked how things looked, and that the courses it was saying were comparable were really comparable. Spent a lot of time doing that. And then we sent it to the faculty and got very little pushback. And that was my biggest goal, and my concern. My biggest concern with it, J.D., was, if I do it the other way, the Daytona Beach faculty and the Prescott faculty might disagree on a course learning outcome, and they're gonna end up arguing or going back and forth with each other. If I do it this way, they can just all be mad at me. Like, I'm not a physicist. I still don't understand necessarily how airplanes fully operate. Like, I don't get a lot of this stuff. I'm just using the words they had on paper and saying, let's take that 800-character description and get it down to 250. Let's streamline it so it looks the way we want. Let's make sure we're using Bloom's verbs in our outcomes and only having one. And if you don't like something we've come up with, go ahead and just rewrite it and send it back to us and tell us what you want. Now we have it in place for next year. We'll also have a GenAI agent so that, as they go through the curriculum process, instead of even drafting their own, they can actually sit with the AI agent and say, you know, I want to teach this course and I want it to cover these types of things. Can you give me a first crack at goals and descriptions and outcomes? And then also, if they have what they think they want, they can put it in and find out, like, does this fit the standard that we need to keep this catalog consistent?
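That last standards check is the kind of thing that can be largely deterministic. A minimal sketch, with the 250-character limit, the single-outcome rule, and a small Bloom's verb list as assumed house rules for illustration:

```python
# Hypothetical sketch of a catalog-standards check. The character limit,
# the one-outcome rule, and the verb list are illustrative assumptions.
BLOOMS_VERBS = {
    "define", "describe", "explain", "apply", "analyze",
    "evaluate", "design", "create", "compare", "demonstrate",
}
MAX_DESCRIPTION_CHARS = 250  # assumed house limit

def check_course_entry(description: str, outcomes: list[str]) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = []
    if len(description) > MAX_DESCRIPTION_CHARS:
        problems.append(
            f"Description is {len(description)} characters; "
            f"limit is {MAX_DESCRIPTION_CHARS}."
        )
    if len(outcomes) != 1:
        problems.append(f"Expected exactly one outcome, found {len(outcomes)}.")
    for outcome in outcomes:
        words = outcome.split()
        if not words or words[0].lower() not in BLOOMS_VERBS:
            problems.append(
                f"Outcome does not open with a Bloom's verb: {outcome!r}"
            )
    return problems
```

An AI agent can draft the language, but a check like this is what keeps a merged catalog consistent at scale.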
Now, you mentioned some other places. Program review. Program review is a great one. At Riddle we have this great program review process and setup. Lisa Copps led it for a number of years. It's impactful, it's meaningful. There's follow-up, there's strategic action items, there's meetings with the deans. And we use an internal committee. So for the undergrad and the master's, we use internal folks from different colleges and different programs and different campuses to serve as the reviewers. For our doctoral programs, we do the same thing plus external reviews. The committees have always gone back and written, you know, two-page summaries of the entire program review when we're done. And Lisa and I are moving towards, you know, I'm fine with the reviewers doing that still, but again, can I take that whole program review, plug it into AI, and start to be able to create agents that pull exactly what we want and need, or pull exactly what I know the provost wants, or exactly what I know the budget office wants, or exactly what I know the president's interested in? So that instead of us having to manually think through this, if I know what the provost wants to see and what the president wants to see and they're slightly different, why can't I use AI to just yank the pieces they want, so that they don't have to dig through a 30-page program review or a 70-page program review, versus getting the high level that they are looking for and then feeling comfortable going through the document for everything they're wanting. And you mentioned rubric development. That's another place I feel like AI is just made for, especially because you can start with standardized rubrics still. We're getting ready for our annual Gen Ed Summit in Prescott, and we bring in our Gen Ed coordinators, and they're just phenomenal, engaged, pedagogically strong teachers, great researchers, and everything else. But I mean, really, it's their care for students on the Gen Ed piece that means so much here. Pretty much at every summit, we start by looking at AAC&U VALUE rubrics, and then we look at what we're looking for in the actual Gen Ed competency. And then we figure out what from this rubric applies, what doesn't. Here are the assignments that we've got to score. Do these mesh with this? And then we kind of, you know, Frankenstein our own rubric that fits all bills. And we've spent a lot of time arguing semantics over the years. Arguing is strong, talking about semantics, while sitting in a room together, in Prescott at least the most recent years. This year, I'm gonna sit there with AI and basically, as we're talking, ask it what it's thinking, what it's saying, what it's doing. So again, kind of the same process. Instead of going back and forth for an hour, unless we think there's value in that back and forth. Again, if we think there's something that comes from having that personal back and forth and arguing over comma placement, great, we'll continue to do that. But if we're really just trying to find the right word, AI can sit there and suggest what it looks like. I'll probably go even a step further this year. Just for the sake of interest, I'll probably take the rubric, and I will probably take some of the assignments, and while the faculty are scoring, I will probably AI-score those on our institutional Copilot, so it's not going into training or anything. Just to start figuring out, you know, not saying I don't want this summit to happen and the scoring to happen. But right now we do a fairly limited sample. And we do a fairly limited sample because we're together in Prescott for, you know, really two and a half days. So we don't have time to evaluate everything we might want to. But all of a sudden, if the AI can be reliable on some of this for assessment's sake, what's the harm in saying, let's double our sample size and have half of them go through AI and have half be human scored, and then we validate and cross-check the AI? I mean, it's both useful from an expanding-the-sampling-strategy perspective, but it also opens up all kinds of doors to research. And, you know, your earlier question about, "How do we get people on board?" Let's show them where it's accurate and where it's not. Let's figure out what it's not good at.
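A small sketch of the cross-check that validation implies: score the same artifacts with human raters and with AI, then compute an agreement statistic such as Cohen's kappa before trusting AI scores on the expanded half of the sample. The scores below are hypothetical:

```python
# Hypothetical sketch: measure human-AI agreement on rubric scores with
# Cohen's kappa (raw agreement corrected for chance agreement).
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Made-up rubric levels (1-4) on the same eight student artifacts.
human = [3, 2, 4, 3, 1, 2, 4, 3]
ai    = [3, 2, 3, 3, 1, 2, 4, 2]
print(f"kappa = {cohens_kappa(human, ai):.2f}")  # ~0.65, often read as substantial
```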
We're, again, making it clear: we're not doing this in place of, we're doing this as an augmentation or an adaptation or an additional layer, which I think is a big piece.

J.D. Mosley-Matchett, PhD (29:51)
So true, so true. Now, beyond generative AI, what other kinds of AI-driven tools are quietly transforming how institutions assess and improve their academic programs?

Will Miller, Ph.D. (30:04)
Yeah. And I mean, this is the tough question. I mean, the gen AI we talk so much about just because we use it, we can see it in our personal lives, our professional lives. You know, and again, I'm as guilty as anybody. I mean, I use it a lot. I have about an hour drive to Embry-Riddle every day, and so much of my time is spent voice-to-text, just stream of consciousness. I mean, conference presentations, J.D. And I'll get to the non-gen-AI, but... Well, conference presentations, it is somewhat non-GenAI. I can sit there and go voice-to-text, or just do the audio file over an hour drive, and talk through everything I think I want to say in a conference presentation. And I can get home and take that audio file, load it into GPT or Copilot and say, "Write a 200-word session description for this. Give me some learning outcomes. Give me an outline of what I've talked about, shaped together." And then I can go into Gamma or PowerPoint with Copilot or any of these and say, "Can you take that and just give me the shell of a presentation so that I can start to tweak and think and work?"
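A minimal sketch of that drive-time pipeline, assuming OpenAI's hosted transcription and chat endpoints; the file name and model choices are placeholders, and Copilot, Gamma, or similar tools would fill the same roles:

```python
# Hypothetical sketch: voice memo -> transcript -> conference materials.
from openai import OpenAI

client = OpenAI()

# Step 1: speech to text.
with open("drive_memo.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    ).text

# Step 2: shape the stream-of-consciousness transcript into session materials.
draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system",
         "content": "From the speaker's notes, write a 200-word conference "
                    "session description, three learning outcomes, and an "
                    "outline of the talk."},
        {"role": "user", "content": transcript},
    ],
)
print(draft.choices[0].message.content)
```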
So those tools aren't gen AI, but they're operating in similar ways. But on the higher ed side, so much of this is on the student success side for me. What we can do with AI on the student success side, you look at companies like EdSights. And EdSights has been out for, you know, four years or so, five years. They grew big during COVID. But with EdSights, they've used Vincent Tinto's framework and they've built out frameworks that institutions can use through multi-directional texting. Well, not really multi-directional texting. It's one way to the student, but it's both proactive and reactive with the student by asking questions and then assigning risk scores. And they have great data that shows how, you know, finding out that a student doesn't feel like they're able to engage academically, what does that do to retention? What's the intervention? What do the pieces look like? And for a lot of institutions, it seems to be incredibly confirmatory in some cases, which is also a positive thing. Confirmatory is not bad. You know, they really have kind of four risk factors. And it probably won't surprise you, J.D., but out of these four risk factors, if you think about academic risk, it doesn't really make a ton of difference where a student falls on academic risk and whether they retain or not. And part of the reason for that's because we have strong support systems built for academic risk. You're at risk academically, we have tutoring centers, we have advisors, we have faculty, we have all of these groups. Mental health and wellbeing doesn't have huge step-offs typically for retention, because as soon as you are identified or self-identified as needing this help, we have, you know, tons of resources we rush in with to help you. Financial wellbeing, not always huge step-offs. And again, this is generalizing, but, you know, not always huge step-offs. You say you're financially at risk, we get you to financial aid. We look for scholarship money. We try to make it work. Student engagement is the one where we struggle. And it's because when a student says they don't feel like they have friends or they don't feel like they fit in, we don't have that resource to send out. I mean, we have RAs if they live on campus. We have, you know, affinity groups. But it's not the same as tutoring, scholarship money, mental health counseling. Now, the overall risk scores that EdSights generates, those step down beautifully. So I have an institution where my normal retention rate's, you know, 75%; my low-risk students might be 85% retaining, my medium-risk are 70, and my high-risk are 52. And I know all of a sudden that these matter. But that type of AI, and whether it's EdSights or any of the other platforms that do similar work doesn't really matter, it also gives us new data points. You know, one of the things that fascinates me the most, looking at a lot of the research on this: a student who starts high risk and stays high risk at times is actually geared to be successful, because they're just high-risk individuals and they know this and this is their normal. Like, yeah, you know, I'm high risk on these things and I'm always going to be, and it is what it is. It's who I am. It's all fine. This is my normal. I'm very comfortable. But it might all of a sudden be able to start giving you, you know, AI-informed opinions on things like, this student was low risk three weeks ago and now they're high risk, and that student is most likely in free fall. They've never been high risk before. This is terrifying. Everything's falling apart. So just being able to get into those nuanced pieces of what answers and what things matter, from which students, at which time, drives so much more information.
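A toy sketch of that nuance: flag the student whose risk level jumps after a stable low-risk run, rather than treating every high-risk snapshot alike. The risk histories are made up; a real feed would come from a platform like EdSights:

```python
# Hypothetical sketch: distinguish "stably high risk" (their normal) from
# "low risk that suddenly turned high risk" (a possible free fall).
def flag_free_fall(history: list[str], window: int = 3) -> bool:
    """True if the latest reading is high risk after a run of low risk."""
    if len(history) <= window or history[-1] != "high":
        return False
    recent_past = history[-(window + 1):-1]
    return all(level == "low" for level in recent_past)

students = {
    "stable_high": ["high", "high", "high", "high"],  # their normal; different outreach
    "free_fall":   ["low", "low", "low", "high"],     # flag for immediate outreach
}
for name, history in students.items():
    print(name, "->", "FLAG" if flag_free_fall(history) else "ok")
```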
J.D. Mosley-Matchett, PhD (34:19)
Right.

Will Miller, Ph.D. (34:36)
The AI on the business process side, too, and this is the part nobody likes to talk about because it's not sexy, it's not student-related. But I mean, the things that we can do in terms of invoicing, payments, tuition remission, like, all of these things can be AI-informed in ways that they haven't been in the past. And again, we're freeing up time for those folks to look at processes and pain points and tripping hazards, you know, all the things that drive us nuts on the back-end side can now be looked at in a much more real sense, in a much more timely sense. And most importantly, with some real focus. As opposed to, you know, that's number 10 on my to-do list today, and every day it ends up back at number 10 because it's forward-thinking, so other things that are more urgent or, you know, imminent end up replacing it every time, and it never makes its way up. And by the time it does, I'm like, my gosh, I just want to breathe. I don't want to do this forward-thinking thing. This is so difficult. I think it's important to recognize with the AI that we get really excited about things on the front end that we can see and play with. But the back-end side of so many of our tools, our ERPs, our student information systems, our CRMs, I mean, they have so much potential now that has gone beyond predictive modeling. And I think we have to recognize that too. It's like, we're no longer talking about, what percent chance is there that a student's going to enroll at our institution. Now we're getting more into, what percent chance do you want this student to be at? And then we will use the AI to auto-generate the messaging, the information, the needs, the financial aid packages that will deliver them to you at that predicted probability. And as they respond, we'll react in real time and continually alter as needed.

J.D. Mosley-Matchett, PhD (36:22)
Looking ahead five to 10 years, how do you envision AI reshaping the role of institutional researchers, assessment leaders, and accreditation liaisons? And what new skills will be essential for them to thrive?

Will Miller, Ph.D. (36:37)
Yeah, and I mean, I think on all of those, it's gonna be reshaped. I'm gonna start with that, and I think that's an important one. It's gonna be reshaped more than would just naturally occur iteratively over time. You know, on the IR side, I feel like they're going to have more power and more ability to kind of guide questions in strategic directions than they have historically. You know, we obviously have stronger data lakes than we've ever had in higher ed. But despite the fact that higher ed's become incredibly data rich, we're still incredibly insight poor, I think, across the board. I would say you have institutions doing a great job with this, so again, I'm generalizing, but the average institution is now sitting there with all of this data, and, you know, we're in that collect-data mode. But then it's, what are we really doing with it? The AI is going to help there. But we're going to have to break away from some of our traditional research methodologies to get there. AI opens up the door for, I'll call it Monday-morning quarterbacking for the sake of this, where I don't need to go in with a theory, I don't need to go in with a why. And we've seen some of this with the machine learning data over the last five, 10 years already. But if I have all this data and I all of a sudden have a data lake where it all lives, AI can sit there and take it and start identifying things for me that I'm not gonna find or see. Because so much of what we've historically done with modeling reflects, you know, the greatest good for the greatest number. But AI has the ability to go and be like, you know, for J.D., this is happening. And here's some other people who look like J.D., and it's happening for some of them and not some of them. So maybe it's time we get them in a focus group and talk through: how is this going one way for J.D. and this other person, and the opposite way for these four other people? And what can we see coming in? I think it helps IR contribute to the enrollment pipeline build in ways they haven't historically. In all honesty, one of the industries I'd be worried about right now if I was in it would be kind of the enrollment management consultant side, because so much of what they've been able to offer in terms of building lists and building funnels has been the data side that institutions haven't felt confident in at all levels. But I think a lot of that's going to go away, because you're going to have modeling capabilities that allow you to do that. And again, that's outside of the ethical conversation about what data do we put in and how do we look at it and how do we put the parameters on it. But at the end of the day, I think five to 10 years from now, that will have been settled one way or the other. But I think that's one that will be interesting to watch sort of play out.
You know, on the assessment side and the IE side, what we're doing at Riddle, I feel like, could really take off with some groups, where we can get people out of doing the bureaucratic pieces because we have agents that they can interact with instead. And we've seen that already. I mean, I'm slow-rolling out, you know, this idea of doing this through conversations and then sending back to them, just to review, here's your filled-out template with all of this great information. And that should be able to help us go to a next level. But I even have faculty now that want to go to the next step today. They're like, well, instead of doing the conversation, do you just have an AI agent I can just talk to? Like, why do I have to schedule an hour with you? Why can't I do this in, like, 10-minute sprints by myself and then get it, you know, automatically? Or, why do you have to then take what the AI produces and copy and paste it into the system? Why can't it just auto-feed? And I'm like, all valid questions. Like, you know, all things that I completely agree with, and all things that I think in your five-to-10-year landscape we'll get to. I think what makes AI, and gen AI especially, so, and again, there's no other way to put it, so cool just to talk about and think about, is that as our students start to learn more and more about it, they're gonna keep pushing us further and further with it. So part of the hard part with predicting five to 10 years from now, on what it can do or what it'll look like, is I'm not sure I can even guess what it will be able to do in a good way. But, you know, my dream scenario, which I do think is realistic in five to 10 years, is that, you know, we can have a conversation with a faculty member, it auto-fills into a template, we check the box for the accreditor, but we're able to actually spend the bulk of that conversation talking about, institutionally, what do they need? What do they need support-wise? What's realistic? What's not realistic? What are the barriers if they get X versus if they don't get X? All of those conversations open in ways that today I don't think we fully appreciate what it could look like when they open. And I think, again, in five to 10 years, the other part that plays into this is, if you're an institution that's already financially struggling in some way, shape, or form and you decide that you're going to be opposed to AI, you're not gonna be here to have the conversation in five to 10 years, I don't think. I was talking to a friend and colleague, Kelly Rainey, who I think you've met a few times. Kelly and I were talking yesterday about stall points. And for so many institutions, we know where the stall points are, and we don't respond to them. Like, if you've had financial losses over five, six, seven years, you're at a stall point. Like, something is not going the way that it should be. And if you decide that you're just gonna put your head down and keep pushing forward on that path, that's where you're gonna find closures and crises and all of these, you know, "unanticipated" emergency problems that, when we really look at it, probably shouldn't have been unanticipated. But I think AI is gonna be one of those stall points. I think schools that resist AI will be okay for a while. But I think as job descriptions start to include AI, I mean, I won't write a new job description for any position in my office that does not embed AI somewhere in it.
As that continues to happen, as industry brings AI more on board, as more AI companies emerge, students talk with their feet. And all of a sudden, you're always going to have students that want a strong, and I'm not trying to pick on the liberal arts background, because there are plenty of liberal arts schools that do a lot with AI well and there are plenty of non-liberal-arts schools that don't do well with AI, but there will always be a market for, "I want a classical liberal arts education that's not tech-infused, where we tackle the big problems and talk through these things." And there will always be value in that. But with the enrollment cliff, there's going to be fewer students, which means a smaller number off of that percentage. And I don't see that percentage growing. Like, I'm not seeing, at least in anything I'm reading or hearing or looking at, including from, you know, some of the less tech-friendly folks, I'm not seeing student resistance where they're like, "No, no, this AI is a bad thing and we don't want anything to do with it. Like, we just want this to go away." I've seen a lot more kind of like, "We didn't like predictive modeling, and now, like, with AI, this is great. Like, you could just create content for me that you know I'm going to like. And I'll sit here all day long and scroll through it and give all the money to the advertisers. Like, this is amazing." And yes, that means that I might be missing other things I might like that aren't somewhere in the algorithm. But, you know, I'll try to keep that in mind and be diverse enough in what I spend my time on that I'll get some of that infused at different points.

J.D. Mosley-Matchett, PhD (43:50)
I hear you. Wow. This has been a lot to consider, Will. I knew you'd leave us... Okay, yeah.

Will Miller, Ph.D. (43:55)
I do have to say, J.D., the one thing I'll add to that too, though, and we're already starting to see it: I do think five to 10 years from now, you will see Colleges of Artificial Intelligence. I think you have them already popping up. But I think, the way today we have colleges of business, we will have colleges that are based on artificial intelligence that will be, ironically, the traditional liberal arts dream of bringing together people from multiple disciplines around the...

J.D. Mosley-Matchett, PhD (44:04)
Interesting.

Will Miller, Ph.D. (44:19)
I think there will be that much demand and potential. If you imagine sitting down with a philosopher, a political scientist, a computer science student, and somebody who's in an AI program, and they're trying to figure out how to map out campaign voting strategies that are unethical, or targeting strategies, whatever you want to call them, that potential will be there in a way that today it happens across four siloed areas. I think it will very generically bring people together once we start giving them the frameworks to do it.

J.D. Mosley-Matchett, PhD (44:54)
I just love your vision for improving our higher education institutions by leveraging the power of artificial intelligence in practical and exciting ways. Thank you so much for joining us today.

Will Miller, Ph.D. (45:08)
Of course, thanks for having me, J.D. I'm always happy to talk about anything related to the future of higher ed.

J.D. Mosley-Matchett, PhD (45:14)
For more information about AI news and trends that are directly impacting administrators in higher education, please follow InforMaven on LinkedIn and visit our website at informaven.ai.