An AI Toolbox for Librarians

Suggestions for teaching strategies, prompt-writing skills, and tools, plus an overview of the ethical questions.

Accessibility is a primary mission of school libraries, and artificial intelligence (AI) can help with that mission by opening many doors for students. It can re-level, outline, summarize, narrate, and remove distractions from text, unlocking rigorous curricula for struggling readers. Students with dyslexia or ADHD, for example, can get information in the way that is most helpful for them without having to work through the lens of their disability.

Meanwhile, there are many questions around uses, ethics, and copyright regarding AI. While those concerns stand, AI can be incredibly useful for both students and teachers.

AI large language models (LLMs) are useful not only for special education students but for all students, particularly in developing writing skills. Both Gemini (formerly Bard) and ChatGPT can help students who are paralyzed by choice in their research take a step forward—either with a topic choice or in finding search terms for a fully formed research question.

AI is also useful in the editing process. Using a text-based AI tool has several advantages over a human editor; first among them, consistency. The typos, punctuation errors, or misused words that might pass by the human reader will be caught by AI.

Secondly, it is incredibly fast. While students often must wait for a peer or educator to have time to read their papers, AI can complete the task of editing for grammar in a matter of seconds. It can also be used to teach editing by having students look back and identify where and why the AI has changed parts of the text.


As educators, we know how challenging structuring an essay can be for students. The best way to get AI to help with structure is to ask it to outline the student’s paper—which should reveal any organizational struggles.

Generative AI can also produce and complete student work in writing and in math. In fact, most students are already using it. According to a survey by BestColleges.com, 43 percent of college students are using AI applications and 22 percent have used them to complete exams or assignments. That same survey found that 61 percent of respondents think AI tools will become “the new normal.” Focusing instruction on skills that AI will never be able to subsume (collaboration, relationship management, creativity, empathy, and problem solving) will prepare students for the AI generation. Leaning into a maker pedagogy around AI can help students develop real skills for a workplace that will be built on these tools.

Teacher uses of AI
There is new evidence that levels of cheating among high school students have remained consistent or have declined in the past year. This seems like great news! Perhaps AI isn’t facilitating academic dishonesty as most educators feared.

Over the past few months, the thinking around AI and the role it could and should play in education has indeed gradually been shifting, as Washington Post columnist Josh Tyrangiel wrote in his article “An ‘education legend’ has created an AI that will change your mind about AI.” Of course, there are many things to consider: ethical and copyright concerns about how different LLMs source their information, and valid concerns about how AI will affect the growth, development, and education of the next generation. Tyrangiel characterizes the outrage over student use of AI as an “academic satanic panic.” Meanwhile, Khan Academy has integrated AI so that its Khanmigo tutor can guide students rather than generate answers, work at each student’s skill level, and focus on accuracy and student safety. This customization of student learning may turn such applications into tools our students use every day.

Many librarians may feel that learning how to use AI and how to integrate it into instruction is too daunting a task, but there are practical ways to approach this. Here are some starting points.

  1. Start experimenting with AI prompts to see how it could make your teaching life easier. Use detailed, specific language to ask an AI engine to perform a task for you: write a creative and engaging classroom activity or lesson plan centered around certain content and/or skills; experiment with creating rubrics based on certain criteria and desired feedback; have AI help generate messages to send to parents and students.
  2. Explore some of the web-based tools with AI integration like Khanmigo, Edpuzzle, Kahoot!, Canva, Slidesgo, Quizizz, and Prezi, and platforms like MagicSchool, Eduaide, Teacherbot, and Class Companion that cater to K-12 teachers and students.
  3. Try a differentiation tool. AI tools can help teachers generate differentiated assessments, re-level text (with a tool like Diffit), and make correlated images (consider Bing’s image generator).
  4. Talk to your school’s or county’s technology department to see what policies around AI already exist and/or how those are being developed.

Prompt writing strategies
It is often noted in the literature around AI copyright that the creativity lies in the prompt rather than the product. With that in mind, several techniques make it easier for an AI engine to produce what you want.

  1. Specificity. There is no penalty for very long prompts. Include all the information that might help the LLM to retrieve and organize your product. That might mean specifying things like tone, voice, audience, previous research, length, or type of output you expect.
  2. Context. Let the LLM know the background information. If you are asking it to create a lesson, tell it how many students, their ages, the preceding unit of study, and any languages spoken by the class. If you are asking it to write an email to your principal, you can be specific about tone, formality, and any background information regarding the issue at hand.
  3. Manners. Consistent across LLMs is the request for polite behavior. While it may seem counterintuitive to treat a machine with all the respect due to a person, there is a great reason for this. LLMs reply to user requests using their dataset. If their dataset is full of cursing, offensive, or rude language, the output will also be full of cursing, offensive, and rude language. Using good manners trains the LLM to reply with good manners.
  4. Constraints. Tell the LLM the limits. For example, if the product is a 20-minute lesson, include that. If you are asking it to write a newsletter blurb, include how many words. It is great at working within the boundaries of a task, but it must know what those boundaries are.
  5. Iterative behavior. This falls into “try, try again.” Rewriting and recrafting a prompt will help both you and the LLM get an output that is useful. In the same way that you might add words to a search query, work similarly with an AI prompt. If it isn’t working one way, try a different way.
  6. Feedback. This tip is by far the most important. LLMs are trained by your input. If the output is incorrect, too long, too short, with the wrong tone, wrong information, or wrong structure, you need to reply with that. Treat the prompt as a conversation. Until you open a new prompt, it is.
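For readers comfortable with a little scripting, the strategies above can even be turned into a reusable template. The sketch below is purely illustrative: the `build_prompt()` helper and its parameters are hypothetical conveniences, not part of any AI tool’s API, and the assembled text can be pasted into whichever LLM you use.

```python
# Illustrative sketch: assemble a detailed prompt from the strategies above.
# build_prompt() and its parameter names are hypothetical, not a real API.

def build_prompt(task, audience, context, constraints,
                 tone="friendly and professional"):
    """Combine specificity, context, manners, and constraints into one prompt."""
    lines = [
        f"Please {task}.",                     # manners, plus the core request
        f"Audience: {audience}.",              # specificity
        f"Context: {context}.",                # background the LLM needs
        f"Constraints: {constraints}.",        # explicit boundaries
        f"Tone: {tone}.",
        "If anything is unclear, ask me before answering.",  # invites iteration
    ]
    return "\n".join(lines)

prompt = build_prompt(
    task="write a 20-minute library lesson on evaluating sources",
    audience="a class of 25 ninth graders",
    context="they have just finished a unit on the research process",
    constraints="keep it under 300 words and include one hands-on activity",
)
print(prompt)
```

Once the first response comes back, reply in the same conversation with specific feedback (too long, wrong tone, missing a step) rather than starting over, which is exactly the iterative, conversational behavior strategies 5 and 6 describe.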

Ethical considerations
The world of AI is changing very, very quickly, so today’s concerns may soon be addressed, or they may persist. In working with students, I would primarily caution them about hallucinations, copyright, and bias.

AI engines are designed to give you the answer that you want. To do so, they may hallucinate an answer that has no basis in reality, either in whole or in part. AI can make up articles, authors, quotes, and events in an effort to generate what the user is requesting. As such, AI products should be thoroughly fact-checked, which builds amazing research skills.

Copyright is also a concern. The copyright status of AI products is currently unclear. According to the U.S. Copyright Review Board, “The U.S. Copyright Office will not register a work that was created by an autonomous artificial intelligence tool.” This ruling was affirmed by Judge Beryl A. Howell of the U.S. District Court for the District of Columbia in her ruling in the 2023 case of Thaler v. Perlmutter. This could make products created by AI public domain. However, the Terms of Service for ChatGPT note that “OpenAI hereby assigns to you all its right, title and interest in and to Output.” The uncertainty around copyright may make students and researchers more reluctant to use AI for that purpose.

Last, and most important, is bias. AI engines are only as good as the datasets from which they draw. In a body of data that large, our societal biases shine brightly (Cowgill, 2020). However, AI engines learn from their interactions with users. The more we use, correct, and revise our interactions with AI engines, the more they will learn to mitigate the bias of their sources.

Ultimately, AI engines and LLMs are not replacements for good research practices, but they can be supplemental. They contain different kinds of information and that produces varying results—both in quality and fidelity. That information also comes from somewhere; currently that means from the users and from user data. Using AI well and thoroughly in these early stages helps users around the globe engage in an ethical, polite, and accurate community.

Further reading
AI & Accessibility. Cornell Center for Teaching Innovation.

Cowgill, B., et al. “Biased programmers? Or biased data? A field experiment in operationalizing AI ethics.” Proceedings of the 21st ACM Conference on Economics and Computation (pp. 679–681), July 2020.

Kelly, N. “Costs and benefits of artificial intelligence editing tools.” MDPI Blog.

Langreo, L. “Beyond ChatGPT: The other AI tools teachers are using.” Education Week.

Poth, R. D. “AI tools that help teachers work more efficiently.” Edutopia.

Shaw, J. “AI in the academy: Cautious embrace of a new technology.” Harvard Magazine.

Spector, C. “What do AI chatbots really mean for students and cheating?” Stanford Graduate School of Education.

Thaler v. Perlmutter, U.S. District Court for the District of Columbia, 2023.

Tyrangiel, J. “Opinion | An ‘education legend’ has created an AI that will change your mind about AI.” Washington Post.

Welding, L. “Half of college students say using AI is cheating.” BestColleges.


IdaMae Craddock is the librarian at the Community Lab School in Charlottesville, VA, where Kristen Wilson is high school lead teacher.
