The first occasion that led me to think about artificial intelligence (AI) and machine learning (ML) in the context of libraries came in early 2017, shortly after AlphaGo had beaten Lee Sedol, the world champion of Go at that time. But until ChatGPT appeared in November 2022, AI and ML had remained largely a topic of curiosity for technologists in the library world. It is worth noting that before today’s AI boom in academia and industry, there was the emergence of data science, which garnered a lot of attention. This led many academic libraries to develop new services in research data management, with a focus on supporting students’ and researchers’ needs in developing data-related programming skills and tools such as Python and R. But the emergence of data science and ML also led some people in the library world to delve more deeply into AI, AI literacy, and computational literacy, which is closely related to computer science. I was one of them, and I was also part of the team that planned and launched the AI Lab at the University of Rhode Island Libraries around 2017~2018. I did experience palpable interest in AI/ML in the local and larger communities, which enlivened our work then. But no one at that time anticipated the public adoption of AI/ML within a 4~5 year timeframe, let alone the meteoric rise of large language models (LLMs) to come.
The irony in the most popular criticisms of AI
I have to admit that the personal ideas I had at that time about how the general public and academic libraries might adopt and apply AI and ML (and what might show up as the challenges and the opportunities for libraries in that process) turned out to be not at all close to what I came to see after the popularity of ChatGPT and the new boom around AI/ML and LLMs.
Probably the most frequently voiced complaint about AI/ML, namely that LLM outputs are not grounded in facts, is what has baffled me most throughout the recent mainstream adoption of AI/ML/LLMs. This phenomenon has irritated people to such an extent that a new term, “AI hallucination,” was coined to describe it. That people would perceive this as the greatest critical flaw and an obvious failure of AI/ML to meet their expectations completely surprised me. And hearing the deep concerns raised about AI/ML outputs not being repeatable or reproducible (especially from AI scientists) was another highly perplexing moment for me.
The fact that ML outputs are neither fully grounded in facts nor reproducible is not a bug but a feature. It is the very essence of ML, which is data-and-statistics-based (in contrast to symbolic AI that is strictly logic-based). As a matter of fact, it is exactly what has enabled ML to become the poster child of AI, after the long AI winter that followed the pursuit of the logic- and rule-based symbolic AI approach.
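To make this concrete, here is a minimal sketch in Python of how a statistics-based text generator picks its next word. Everything in it is invented for illustration (the prompt, the toy probability table, and the helper function are my own assumptions, not the internals of any actual LLM), but it shows the essential point: each continuation is sampled from a learned probability distribution, so the output can differ from run to run, and a plausible-but-wrong continuation always has some chance of being drawn.

```python
import random

# Toy next-token distribution for the prompt "The capital of France is ..."
# The probabilities are made up purely for illustration; a real LLM computes
# such a distribution over tens of thousands of tokens at every step.
next_token_probs = {
    "Paris": 0.80,
    "Lyon": 0.12,
    "Marseille": 0.05,
    "Berlin": 0.03,  # factually wrong, but still assigned non-zero probability
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    # Temperature rescaling: weights p^(1/T). T > 1 makes unlikely tokens more
    # likely to be chosen; T close to 0 approaches always picking the top token.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" can yield different continuations on different runs,
# occasionally including the factually wrong one.
for _ in range(5):
    print(sample_next_token(next_token_probs, temperature=1.2))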
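Run the sketch a few times and the output changes; lower the temperature and it becomes more repetitive, but no more “factual,” because facts only enter the picture through the probabilities learned from the training data.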
To trace its origin, we can go back to Alan Turing’s 1950 paper “Computing Machinery and Intelligence,” in which AI was first conceptualized. Turing’s idea was that a machine can be considered intelligent (i.e. as intelligent as another human) if a human conversing with it cannot tell whether it is a person or a machine. What is fascinating about Turing’s idea is that it did not try to define what intelligence ‘is.’ It instead proposed what ‘can count as’ intelligence. So, from its very beginning, AI was conceptualized and designed to ‘pass as intelligent’ to humans, not to ‘be’ intelligent on its own. It is important to understand that AI isn’t a concept that can be defined or described independently of human intelligence.
And philosophically speaking, the ultimate tell of (human) intelligence is that regardless of how hard we may try, we can never fully read others’ minds. Ultimately, the other person’s mind is opaque to us, never completely understandable, let alone perfectly predictable. If it were, we would immediately perceive that as not entirely human. To illustrate this point, suppose we chat with two beings and try to guess which one is a person and which one is a machine. The first one gives us a response that we can perfectly predict and that is factually correct every time, while the second one always gives a different, even quirky and puzzling, response, not always fully corresponding to the facts. After spending enough time in conversation with them, which one would we be more inclined to conclude is another human? It would be the second one. We may comment that the second one is odd, or a bit dumb, or both. But we would not pick the first one as a human, because no human being produces a perfectly predictable response. This, of course, is a highly over-simplified scenario. But it is sufficient to show that although we regard rationality as the distinguishing feature of humans, we also know that perfect rationality (and perfect predictability) isn’t a sign of true humanity either. Being less rational does not disqualify us from being human; being perfectly rational may well.
Seen in light of how AI has been modeled on human intelligence this way, today’s LLMs deliver exactly what Turing proposed as ‘artificial intelligence.’ They have succeeded so spectacularly at making people believe they are interacting with something as intelligent as a human that people now complain it is not as smart as (or smarter than) themselves. To be fair, that is not asking for just ‘intelligence.’ It is a request for something different, i.e. “higher intelligence.” How high? It turned out LLM users weren’t satisfied with college-student-level intelligence, for example. The general expectation we saw was that AI shouldn’t be susceptible to citing inaccurate or non-existent sources, a mistake that many college students can make. Many also lament that today’s LLMs cannot do math and physics well enough. Again, LLMs aren’t designed to be good at math and physics. They are designed to pass as good enough.
Isn’t it ironic to find fault with AI for being bad at something that humans are equally bad at? I am not saying LLMs should not be made more proficient at math and physics. Nor am I saying that being bad at math and physics is a distinguishing characteristic of human intelligence. All I am saying is that today’s AI/ML/LLM tools were built to pass as intelligent enough (to other humans), not to be super-intelligent about the physical properties of the real world. In light of their origin and inner workings, criticizing AI/ML/LLM tools for not being super-intelligent seems quite off the mark.
The role of a community in our ability to assess AI’s performance
The image that comes to my mind when I hear about AI is a neighbor who has heard a lot of things and can talk convincingly and eloquently for a long time, but who possesses a mediocre degree of intelligence and can lack logic and reasoning at critical moments. (We all know someone like that in real life, don’t we?) Whether I would consider this neighbor brilliant or take their words with a grain of salt would entirely depend on (i) the circumstances of the interaction and (ii) how much I know about what this neighbor talks about. My evaluation would surely be limited and likely erroneous if I knew little about the things this neighbor talks about enthusiastically and at great length. In other matters where I have more knowledge and experience, I would probably be able to assess this neighbor more accurately. Equally important, in some circumstances it may not matter whether what this neighbor says is true, imaginary, and/or possibly deceptive. Under other circumstances, being able to tell that difference can be absolutely critical.
In my opinion, the problem with AI that we are experiencing today isn’t so much about AI per se, nor purely about AI’s performance. The problem is more about how ill-equipped we are to understand the way AI/ML is designed, to effectively assess its performance, and to discern what matters and what doesn’t in any given case. And the greatest issue lies in the significant mismatch between how ill-equipped we are and the very high expectations that we hold AI to.
Furthermore, I think it is worth noting that the online environment, where we use AI as individual consumers and mostly for productivity, makes this problem even more acute. This time, picture a big circle of villagers sitting around a bonfire and talking with one another. You will soon discover that some of those villagers have heard a lot of stories; some have sharp analytic skills; some have memorized a lot of facts; some have practical skills but are not good at speaking and explaining; and so on. Think of AI as the talker among all these characters. While other villagers chat with this great talker, various signs will soon emerge that make it apparent to you that this person is simply good at talking and isn’t actually the smartest or the most knowledgeable. Those signs will in turn help you better assess what this talker says. All those signs, however, are unavailable in the online environment, where it is just you and you alone with the AI tool.
If the individualized and isolated online environment, in which each user interacts with AI tools alone, hinders people from appropriately assessing an AI tool’s performance, what can be done about that? Currently, there is no equivalent of getting all AI users to sit around a bonfire and having them talk to and test AI tools together. But if there were such a way, it would very much help people develop their ability to better assess AI tools’ performance. Come to think of it, wouldn’t libraries be able to organize something to that effect? It could be like a collaborative edit-a-thon where many people gather to try and evaluate AI tools together, sharing what worked well (or not), what mattered (and didn’t), and why.
Two things that worry me most about today’s AI use
There are two things that worry me most about today’s AI use. One is that most AI use is taking place in isolation, lacking meaningful community discussion. The other is the emerging phenomenon of ‘AI shaming’ and ‘AI stigmatization.’ The use of AI is becoming widespread. The 2024 survey by the Digital Education Council showed that the majority (86%) of college students regularly use AI in their studies, with more than half of them using it daily or at least weekly. The 2025 survey by Pew Research Center also found that about one in ten workers use AI chatbots at work, ranging from every day to a few times a week. Despite this rapidly increasing use of AI, there is also clear reluctance among AI users to disclose or discuss their AI use with others. The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn found that, among the 75% of full-time office workers surveyed who responded that they use AI at work, over half were reluctant to reveal their AI use because it might make them look replaceable. Students and teachers are also reluctant to disclose their AI use since they can face backlash and penalization.
While the worry is certainly understandable, the trend of using AI only privately, neither admitting to its use nor discussing it in public, doesn’t help most of us, who need to become better at assessing AI tools’ performance. With each person exploring and using AI tools alone, AI users will face only more challenges in developing the digital skills and literacy necessary to use AI tools appropriately and thoughtfully to their benefit, whether they are students, educators, or workers.
Beyond possible job loss, other backlash, and possible penalization, the general reluctance to talk about AI use is also connected to the many negative associations attached to AI by widely reported criticisms of AI/ML tools in the mass media, ranging from their hallucinations and biases (resulting from the training data), their high consumption of electricity, and their detrimental impact on environmental sustainability, to AI algorithms potentially being used to support or deepen existing inequalities.
All these, understandably, led to the emergence of what is called ‘AI shaming.’ ‘AI shaming’ refers to the practice of criticizing or demeaning the use of AI, which commonly manifests as stigmatizing any and all AI uses. Some in this camp (including information professionals and educators) are quite vocal about their views on AI. They actively discourage others from exploring AI tools, equating the use of AI to a sign of cheating, dishonesty, and/or laziness. They view any AI use as an inexcusable act of condoning and aiding the negative impacts of advancing AI technologies. They stigmatize AI users as morally irresponsible and justify AI shaming based upon their belief that AI is inherently unethical and no use of AI should be permitted.
Everyone is entitled to their beliefs, as long as they do not harm others. But I think that AI shaming and AI stigmatization are deeply troubling in the educational and library context in particular. Librarianship is, at its core, an endeavor to help people in their pursuit of information and knowledge, and the mission of libraries is to serve as a reliable institution providing such help to the public in an unbiased and unprejudiced manner. Libraries’ mission and values are also rooted in respect for everyone’s autonomy and right to pursue knowledge, regardless of where they come from and what beliefs they hold. Everyone comes from different backgrounds, life experiences, and realities, of which others often have little knowledge. It is not a good idea for information professionals to prescribe that library patrons should go about looking for information and pursuing knowledge this way and not that way, based upon their own personal beliefs and values, which are likely to be representative of the socioeconomic group that they belong to more than that of their library patrons. Feeling judged and being subject to shaming or stigmatization would be the last thing that library patrons seeking help would expect from library professionals. Such experiences may well drive library patrons to cope by themselves with the difficulties they run into while using AI tools, rather than seeking help from library professionals.
This isn’t to say that we should turn a blind eye to the many legitimate issues related to AI. They are real and complex problems and should be properly grappled with. But demeaning people for their use of AI and accusing them of being unethical is neither a right nor a productive approach. Furthermore, when library professionals direct AI shaming and stigmatization at library patrons seeking help with AI tools, such acts carry a high risk of doing lasting damage to the trust that library patrons place in library professionals.
The ultimate question
In a recent talk about AI that I attended, one question asked was how our society will preserve its intelligence and critical thinking abilities when they no longer seem necessary with AI. What would be the impact of automation and cognitive offloading enabled by AI on us humans? Will we humans become less intelligent and lose the ability to think critically as we rely on AI more and more?
As in most cases, the answer is neither simple nor straightforward. First of all, the impact and result of automation and cognitive offloading will differ significantly depending on what is being automated and offloaded. There are varieties of mental (and physical) labor that are a slog and a chore. They do not lead to our growth in any meaningful way, and we would be glad to be rid of them. But other types of work are ones we would rather continue doing ourselves, even if they are not fun, because they enable us to expand and fully realize our potential. I think a more challenging and critical question is whether we will be able to discipline ourselves to automate and delegate to AI only the former category of tasks, while continuing to engage in the latter category of work, because surely there will be temptations to slack off and delegate away anything unpleasant or challenging if AI seems good enough.
To complicate the matter further, what looks like a mechanical chore and a mere slog to one person may not be viewed as such by someone else, and may instead count as a meaningful challenge. Over-generalizing and prescribing what is to be automated and delegated to AI and what is to be retained as work for humans wouldn’t appeal to or make sense to everyone, since we all differ in our abilities, values, strengths, and weaknesses. If AI can help us, it should be able to help us in a way that caters to our individual needs, instead of forcing us all into one mold. AI that does exactly the same thing may have a drastically different meaning and impact for different individuals. We should be open-minded about that possibility and respect each individual’s autonomy, the choices they make for themselves, and the context in which those choices make sense to them, as long as they are reasonable.
Upon receiving that audience question, the speaker opined that whether we (and our society) will be able to retain and preserve our intelligence and critical thinking abilities would depend on whether there will be a sufficient incentive to do so. That is an apt answer, given that the majority of humans in this world live in a market-driven economy, where incentives play a prominent role. What would it look like to provide an incentive for preserving human intelligence and critical thinking abilities? I am not sure. But surely, that can be done in various ways: utopian, dystopian, or somewhere in between.