This blog was co-written by Jessica Ickes, Vice President, Market and Research Services
and Wes Butterfield, Chief of Consulting Services
At the RNL National Conference, we facilitated roundtable discussions focused on Artificial Intelligence (AI) and its potential impact on higher education. During the session we collected note sheets from the seven roundtables and administered a brief survey that attendees submitted upon entering the session. More than sixty attendees participated in the session and approximately half completed the survey. Attendees represented a broad range of institution types and served in a variety of roles on their campuses, including admissions, marketing, academics, and IT. This blog is intended to share insights and feedback we gained from the attendees.
Here are some key themes that emerged.
There is emerging awareness and use of generative AI and its impact in higher education
The use of AI is emerging in higher education. Seventy-one percent of respondents indicated that they “never” use AI in their work. About 19% indicated that they use AI monthly, and about 10% reported using it weekly or daily. None said they used AI multiple times a day. The session was held, perhaps serendipitously, during the last time slot on the final day of the conference; as one guest shared, “Until learning about ChatGPT in a session yesterday, I had no idea about any of this [generative AI].” After learning about it, he was curious to learn more, as were many others.
When attendees were asked how they view using AI in higher education admissions and marketing, responses ranged from highly skeptical to highly enthusiastic, with respondents generally leaning toward a “neutral, leaning positive” orientation. Comments captured from the roundtables in answer to the question, “What excites you about using AI in your work?”, included: “faster,” “can’t beat it – join it,” “build skills for faculty,” “quicker data analysis,” “streamline tasks,” “free time for other priority projects,” “adds efficiency,” “making predictions and projections,” and “writing content.” Conversely, in answer to “What concerns do you have about using AI in your work?”, responses included privacy, loss of skill, censorship, FERPA, accuracy, inflated data (how to understand grades), limited reach (not accessing things that haven’t been digitized), different sources based on geo-location, proprietary information leaking out, plagiarism, the ability to identify original work, and ethics.
AI is being used in different ways across enrollment
Likely because few reported using AI in their work, only a few attendees responded to the survey question, “How are you currently using AI in your work?” Of those who did respond (N=18), 44% said they are not currently using it in their work. Those who were using it reported using it as a writing tool, for retention with a vendor, minimally with marketing, for marketing with an online app, to generate ideas for communications and marketing, for article/blog writing, and for assistance with data. The roundtables provided more depth to those responses but largely mirrored the survey: marketing/writing, predictive modeling, text services, telling a story from data points, creating minutes from meetings, creating content for chatbots, supporting student success and retention by looking at metrics, and creating policies around AI (privacy, student conduct, faculty mitigations, etc.).
Respondents reported that they felt that AI has made their work both easier and harder. It was noted that there is a learning curve in generating prompts that get productive responses from generative AI technologies. And respondents reported considering the use of AI tools for routinized tasks and content generation tasks. Few respondents either in the survey or in the roundtable discussion reported considering the use of AI for more complex tasks.
RNLNC attendees helped share key considerations and a compelling case for AI literacy
The session as a whole led the facilitators to see a compelling need for the development of AI literacy throughout higher education. This includes not only supporting our students in developing AI literacy and competency, but supporting our faculty and staff as well. Perhaps we walked away with more questions than answers.
Participants shared several concerns regarding the use of AI, including the privacy of organizational data and student records, FERPA implications, the accuracy of AI output, its general impact on education both in terms of cognitive processes and how we provide education, ethical considerations, and the loss of the humanness of some processes. For example, one participant raised significant concerns about privacy and consent when personal information, such as your comments recorded in a meeting, is uploaded into an AI tool, and about the potential loss of the benefit of having a community engaged in collaborating to generate something of importance.
She shared an example of AI being used to generate a mission and vision statement that was tweaked, briefly vetted, and adopted. While the process was efficient, she noted that something was potentially lost without the “grappling” and consensus building that typically occurs around that type of activity. Essentially, the process itself added value to the community, and that value outweighed the efficiency that AI provided.
It’s important to strike a balance between leveraging the capabilities of generative AI and maintaining human-centric approaches in college admissions and marketing.
ChatGPT 3.5, June 2023
We are seeing a compelling case emerge for an urgent focus on AI literacy on our campuses. This was a clear takeaway given the number of participants who had little to no knowledge of generative AI and the plethora of academic, ethical, and legal considerations every institution needs to grapple with. In its emergence, generative AI is a lot like the Wild West: use cases, privacy and ethical concerns, how work will shift and change, how program demand will be impacted – it’s all up for grabs right now. Many tech companies in and outside of higher education are racing to be first to market with AI integrations and new solutions. Campuses need to be prepared to understand the strengths and limitations of those tools and to make informed decisions on how, when, and for what processes they use AI in their work. Developing competency in AI literacy will allow an institution to leverage AI in intentional and informed ways and to take the steps needed to develop policy, procedure, and awareness to protect its data and mitigate other potential risks.
We shared early thoughts on emerging best practices in AI in our session. Here they are.
RNL’s list of best practices in considering AI in higher education
Here’s our list of key practices as you begin to consider using artificial intelligence:
- Use AI strategically and intentionally with specific goals
- Understand its risks, limitations, and ethical considerations
- Develop AI literacy
- Understand what the AI model is designed to do
- Use it to create efficiencies (see first bullet when doing this)
- Be mindful of your mission and culture of the institution
- Stay current
- Engage those who may be more in the know on campus
- Consider pilots before using for full or critical functions
- Be transparent when possible and use it appropriately
- Maintain oversight and assess effectiveness
And, when we asked ChatGPT, “What are best practices in using generative AI in college admissions and marketing?”, the response was: “It’s important to strike a balance between leveraging the capabilities of generative AI and maintaining human-centric approaches in college admissions and marketing. The goal should be to enhance efficiency, improve outcomes, and provide a positive experience for prospective students while upholding ethical standards and promoting diversity and inclusion” (ChatGPT 3.5, June 2023).
Further, consider ChatGPT’s response to “What can go wrong when using generative AI in college admissions and marketing?”, provided here in bullet format:
- Bias and discrimination
- Lack of transparency
- Unintended consequences
- Data privacy and security
- Ethical dilemmas
- Technical limitations
- Unintentional profiling
- Resistance and distrust
- Lack of human touch
“. . . And, to mitigate these challenges, it is crucial to adopt responsible AI practices, conduct rigorous testing and validation, promote diversity and inclusion in training datasets, establish clear guidelines and ethical frameworks, and maintain human oversight throughout the process. Regular monitoring, evaluation, and continuous learning can help identify and address any issues that arise” (ChatGPT 3.5, June 2023).
In conclusion, we are just starting to see the beginnings of how generative AI will impact higher education. It is our responsibility as higher education leaders to develop AI awareness and literacy; establish processes and procedures for how our institutions vet, adopt, and communicate the ways generative AI may be integrated into day-to-day operations; consider and develop academic policies and guidelines for faculty, staff, and students; and be ready to address the variety of ethical and privacy issues that will arise. Most importantly, we need to embrace and understand that AI is becoming part of our world.
RNL has always been a thought leader in higher education. AI is a game changer for each of us. At RNL, we have been vigilantly monitoring, discussing, and considering the many ways in which we can support our campus partners and the higher education community at large in their use of AI. We want to hear from you: email us at LetsTalkAI@RuffaloNL.com about what you are challenged with, how you are utilizing AI, what you’d like to hear more about, and ways in which you would like to engage with us on this topic.