J.D. Mosley-Matchett, Ph.D. (00:30) Hi, welcome to another episode of AI Update, brought to you by InforMaven. Today's guest is Nicholas Potkalitsky, PhD, the founder and CEO of Pragmatic AI Solutions and the author of the newly published book, AI in Education. Nick brings more than two decades of teaching experience in media, communications, and rhetoric. As a scholar and creator of AI education resources like the Educating AI Substack and the Pragmatic AI Educator Newsletter, Dr. Potkalitsky offers AI training, online courses, and customized implementation strategies for educators and organizations. Welcome to the podcast, Nick.

Nick Potkalitsky, Ph.D. (01:15) Thank you so much for having me today. It's an honor to be on this podcast.

J.D. Mosley-Matchett, Ph.D. (01:21) I'm so pleased that you agreed to offer your unique point of view to our audience of higher education administrators. Now, your area of expertise is primarily focused on K-12 teaching and learning, but I found your writing to be universally applicable to all educational institutions, particularly in today's fast-moving technology era. So why don't you tell us about your work in the K-12 space and how you came to think and write so deeply about AI?

Nick Potkalitsky, Ph.D. (01:50) Well, like many educators, I was taken by storm by the release of ChatGPT 3.5. I'm a writing teacher by training, and as I worked with my students to help them make sense of this new technology, questions about when to use it productively and how it was affecting critical thinking became very pressing for me in 2023. That led to a leadership position within my school, spearheading its AI Response Committee. Like many schools in the K through 12 space, we set up a strategic group to figure out our response: Were we going to bring in software to track AI use and take more of a detection-oriented approach to this disruption, as we were framing it at that point? Or were we going to lean into educational reform, in particular rethinking how we do assessments? This became a passion project, but also something that practically just needed to happen, because our students were rapidly onboarding with these tools without much experience using agentive technology. They were immediately dropping into efficiency-use cycles, and we were very worried about the impact on basic skills and competencies. So we got together a team and developed AI policy, like a lot of us did back in 2023. Ours is an independent school, so we believe in a good amount of teacher determination at the classroom level, and we developed a policy that allows teachers to make individual decisions about how to handle AI implementation in their classrooms. It's very much still a work in progress. We haven't yet secured an AI tool to function as unifying infrastructure; I've helped the school pilot a number of different tools, and we're working through our best options right now. I deeply believe that schools should provide a safe access point, if only to get data on how students are using AI and who is using it, because that's going to be very valuable information moving forward.
As I've been doing this work with my school, I've taken to reporting out to a growing community of scholars and researchers through my Substack, which is called "Educating AI." It's actually mostly read by university and higher ed folks right now, because I tend to write in a more scholarly, academic vein, and I think that's probably what brought my work to your attention too. I'm thinking not just practically but systemically, and not just about this moment but about how this is going to play out over the next two to three years. It's been a really great experience getting that up and running, working with people like Jason Gouliard, Mark Watkins, and Jeanne Beatrix Law down at Kennesaw State, and being a place where people can publish what's going on in their classrooms and what others need to know. We're cautiously optimistic about what AI can do for K through 12 and higher ed. We need safe implementation, but we're definitely not trying to create a firewall around students and preserve a pre-AI way of doing things. I just don't think that's even possible.

J.D. Mosley-Matchett, Ph.D. (06:30) Let's take a glimpse into the future and consider the AI 2027 report. Now, you've written about this report. So what should college administrators take away from it about how institutions will operate in the very near future, not just what they'll teach?

Nick Potkalitsky, Ph.D. (06:48) Yeah, this was a really interesting study. It was put out by a former high-level knowledge worker at OpenAI who stepped away about a year ago due to concerns about how they were running their business. I believe another of the authors was notable for predicting the rapid development of the COVID crisis. So these guys have some insider knowledge. They put it out, I think, mostly as a provocation to get people thinking, particularly about agentive AI. It's all fictional, of course, premised on a scenario in which, in late 2025, a fictional company called OpenBrain, which we can probably take to be referring to OpenAI, develops a next-level agentive technology that it then uses, in turn, to jumpstart its own AI development. So it's AI developing AI. The report also looks at geopolitics, with China trying to get access to those model weights. But that's all neither here nor there. The point is that we should anticipate agentive AI getting more and more powerful in the short term; these new tools can already complete online courses. Last year was supposed to be the year of agentive AI and it didn't really take off; this year it's starting to pick up. In two to three years, we'll probably have access to pretty powerful agents that could assist with enrollment, donor engagement, hiring, and curriculum design. And this is all going to take place largely in a context where there has been very little AI literacy training. Higher ed administrators in particular are going to want to implement safe governance and accountability practices, even as they try to remain institutionally agile and really take advantage of these new opportunities. The work of training staff and knowledge workers really has to start now: What are these AI tools? How do these opaque algorithms operate? How do they make decisions?
Because the tools are going to be here, and relatively quickly, and there are going to be temptations for workers to use them in less-than-transparent ways. If front offices want to take advantage of these tools and do so safely, they're going to want to bring in their own agentive tools, make sure everybody is trained on them, and think carefully about both the pros and cons of potentially outsourcing higher- and higher-level decisions to bots. So that's the real-world situation we're operating in. The main message is that training is going to get us there in the most effective and safe way; the fine print is that we have to start now.

J.D. Mosley-Matchett, Ph.D. (10:29) I agree wholeheartedly.

Nick Potkalitsky, Ph.D. (10:31) Yeah.

J.D. Mosley-Matchett, Ph.D. (10:33) Okay, why is AI literacy something that every staff member should care about, even if they never touch code or teach a class?

Nick Potkalitsky, Ph.D. (10:42) Yeah, this is a good question. AI literacy is becoming a kind of baseline competency, and it doesn't necessarily depend on particular technical knowledge, because we can engage with these new tools in natural language. A lot of things that traditionally had to be done through coding can now be streamlined, and we can interact with chatbots to create all different types of text and all different types of media. Claude, one of my favorite AI models, has become pretty incredible as a coding assistant; you don't even need to know all that much about coding to get it to create automated workflows for you. So we're at a real shifting point, where AI literacy is merging with the critical thinking skill set that a worker simply needs in order to navigate professional spaces now. Luckily, it doesn't take a lot of training to get people up to speed. These AI models are relatively opaque in terms of understanding why exactly they do something; we call them black boxes, and there are deep philosophical issues around not knowing true causation, because these tools function largely in terms of probabilities. But they're predictable enough that we can build trainings that help people understand the patterns: patterns of use, patterns of response. For a staff person, the three core questions really have to be: Where did this data come from? What is the model assuming in this particular chat cycle? And what is being left out of this cycle? We can easily build trainings that help our workers focus in on those questions. It's the same sort of work I'm doing with my students. It's a new type of research acumen: now you have tertiary texts, based on synthetic data and created by agentive tools, that are essentially authorless. It's a new situation to navigate. Learning how to read AI output is a new competency worth walking your staff through, because we're used to operating inside rhetorical triangles, where we have a human author and can read purpose and intention into things. Now we're transitioning into a world where the source of the text is no longer purposive but probabilistic, and it's a new kind of reading that our knowledge workers need to master.
They're building on the reading capacities and critical thinking skills they've already honed and developed. So, once again, just a little bit of training can go a long way.

J.D. Mosley-Matchett, Ph.D. (14:08) This is so true. And that's a great point; I really appreciate that insight. What does generative thinking look like in an educational institution? And how can it help us better serve our stakeholders like students, employers, and policymakers?

Nick Potkalitsky, Ph.D. (14:25) Yeah, I've been working on a concept in the educational arena, thinking about what kind of thinking working with a generative tool inspires, particularly on the productive side of the spectrum. There's a lot of writing about AI detracting from critical thinking, but I've found otherwise. I'm harnessing generative learning theory, with "AI TROC," and the work of Mayer and Fiorella, who write a lot about how, when you generate new content, you create new neural pathways and re-consolidate old knowledge, preparing it for new connections and new syntheses. And I really think institutions can scale this up, doing the same kind of work by generating new collaborations, new ideas, and new initiatives in the face of this AI transformation. Our stakeholders have evolving needs now, our students have evolving needs, and instead of framing AI as a disruption, we can think of this as an opportunity to build something better. A lot of the work I'm doing in my classroom spaces has been to accept that AI is already in the hands of my students and to start building with that as the foundation. As you scale up to the institutional level, you put people in front of these tools and get them working with them in safe and effective ways. I see it in my own work life: using AI as a thought interrogator, it becomes a real stimulator of new ideas. Once you get people thinking generatively, informed by actual work cycles with AI, those more concrete next steps start to materialize, and where to actually invest time, energy, and money becomes much more apparent.

J.D. Mosley-Matchett, Ph.D. (16:45) Okay, I can go along with that. So moving on to the next question: How might AI actually strengthen the relationship between K-12 and higher education? And why should colleges be proactive about that?

Nick Potkalitsky, Ph.D. (17:02) Yeah, we have an emerging situation that I think people should be thinking about deeply. K through 12 right now is navigating its own AI transition. You probably have about 50% of schools leaning into AI detection and process detection, really trying to create AI-free zones. On the flip side, you have 50% that are bringing on AI tools and using them, in most cases in more of a tutorial fashion, while trying to upskill all of their faculty in best practices. So it's a herky-jerky kind of process, and the students are in the middle of it, in many ways a year or two ahead of where their teachers are in terms of adoption. I see numbers like 50% of high school students using AI multiple times a week, but if you walk around my school, I'd say it's much higher than that. So you have all these students coming in with various experiences, and I think higher ed has a role to play in terms of meeting
this diversity of experiences: trying to figure out what AI readiness looks like for higher ed, and then setting up systems that don't require students to unlearn progress they may have made in K through 12. Even if you have 50% of students coming in who have had a more intentional AI curriculum, for colleges to just say, "Well, for these four years, you're not going to be using AI," seems like a disservice to those students, particularly in light of what their jobs and their lives after university are going to require of them, let alone the internships they'll be trying to secure while they're in college. Both sides are going to have to work together to figure out what AI readiness means. How can we develop policy and implementation flexible enough that we can intentionally keep some spaces with less AI use, while making a broader commitment that these students, by the time they're done with K through 12 and college, will have the skills to use these tools productively and not just automate critical thinking? One angle seems possible to me: I've shifted to outcomes-based grading in my classrooms, and perhaps we could use AI to assist with this, because we now have responsive technologies that can help with some of the more laborious aspects of assessment reform. Anyone who's done outcomes-based grading knows it's a lot of work. If we can develop some sort of skills portfolio that a student carries with them to supplement the college application process, then colleges could really know what the student has learned. There could be more thoughtful placement in college: maybe a student wouldn't need a first-level AI literacy course because they had already covered that material, so resources could be used more thoughtfully. There are some real possibilities, but if I'm being honest, in the highly contentious atmosphere we're in right now, I think it's probably going to happen on a university-by-university basis. You're going to have schools that lean more heavily into AI readiness, and I predict those are going to be places students want to go. Then you're going to have some prestige schools that, because of their prestige, can probably pull off a certain amount of siloing off of AI. But I think it's ultimately going to be a risky move for smaller places, or places with enrollment issues, to say, "This is a no-AI place." Students want this. I see it every day. Students want to know how to use these tools, and not just to cheat. They sense the power there, and they also sense the risk, and they want to maintain expertise while using these tools. It's going to be our role, K through 16, to show them the way.

J.D. Mosley-Matchett, Ph.D. (22:41) Not to mention the fact that employers have expectations as well with respect to graduates' ability to use AI.

Nick Potkalitsky, Ph.D. (22:46) Exactly.

J.D. Mosley-Matchett, Ph.D. (22:48) You said safety, equity, and cost manifest differently at various levels of education.
So how should higher ed leaders think about these trade-offs when considering AI implementation?

Nick Potkalitsky, Ph.D. (23:03) Yeah, these are big questions right now, and there's a lot of moving and shaking in terms of AI model access. I'm working on an article comparing and contrasting OpenAI's and Anthropic's offerings. Earlier this year, the California State system bought into a pretty large package with OpenAI; some people proclaimed it visionary, but the faculty really weren't consulted all that much, and they were not all that enthused. Safety, equity, and cost are the triad I use to advise administrators when they're thinking about onboarding. They operate together, so they're not just boxes to check off; each one has an impact on the others. Decentralized AI is really not going to be a good idea for our students and our institutions. In my conference presentations, I routinely have people say, "Well, people already have access to OpenAI for free. Why do colleges need to get into the business of offering a particular tool?" There are a lot of reasons. Knowing how people are using these systems is in itself a very valuable thing. Are they using it just for efficiency? Are they using it to stimulate higher-order thought? Who is using it? Is it just a small subset? Demographically, who's using it and getting positive outcomes from it? Is it a valuable investment? Could that money be used more strategically? Is this the best model? Answers to all of these questions come from having better streams of data through working more closely with a particular vendor. That said, I'm not fully convinced that universities should just buy in with a single vendor. We see OpenAI and Anthropic contending in terms of quality and ethos with regard to AI use: OpenAI is offering more efficiency and multimodality, whereas Anthropic is selling itself on a more inquiry-based, Socratic AI, which as an educator I'm certainly intrigued by. But there are certain risks to tying yourself down to a single entity, because once they have you as a secure market, there might be some risks in terms of safety, security, and privacy. With students in mind, we definitely need provisions in our vendor agreements that focus on bias and on making sure all students have access, if that's the route we go. In terms of cost, it isn't just the initial offering; you also need to think about the sustainability of the investment over time and possible fluctuations in the market. We hope these models will become more cost-effective over time, but we really don't know whether these companies are offering access at very low rates now, planning to increase them later. We know all of the major companies are running at major losses right now, and the education space is one place where they're trying to win back some of those losses. Bigger universities like ASU are building their own proprietary models, and I think there's a certain wisdom to that. As open-source models become more powerful, and as the knowledge of how to build safe proprietary models spreads, I think every school is going to want its own. But for right now, we're dealing with a lot of "wrapper" services and with the major players.
You've got to think about what the real purpose of the tool you're bringing in is. Once you start to decide on purposes, you can start to think about which vendors you want to pilot, make some ethical-integration checklists, and ensure that you're not locked in for life. Definitely don't sign five-year contracts. You want to leave yourself room to audit use and then revisit: Is this actually the tool for us? It's a big investment to bring something in, and then you start to build curriculum and processes around it. But ultimately, if it's not something that's good for your stakeholders, it's worth making a change, because there's a lot of data in the works here, and data is the real currency for vendors. That's where you need to be really mindful and follow the data pathways.

J.D. Mosley-Matchett, Ph.D. (28:31) Great points. Well, thank you for sharing your insights with us, Nick. You've really given us a lot to consider. I know from experience that it's just way too easy to become complacent with an insider's perspective as a higher education administrator, and that makes your point of view from the K through 12 vantage so important for us to consider. So thanks for sharing your thoughts and advice with us today.

Nick Potkalitsky, Ph.D. (28:32) Yeah, it was a great conversation. I love the questions. So keep doing what you're doing.

J.D. Mosley-Matchett, Ph.D. (29:04) Thank you, and you keep doing what you're doing too. For more information about AI news and trends that are directly impacting administrators in higher education, please follow InforMaven on LinkedIn and visit our website at informaven.ai.