Artificial intelligence is becoming an indispensable tool for topic brainstorming, writing enhancement, and cross-disciplinary synthesis. Yet AI systems such as ChatGPT have well-documented accuracy and reliability problems: they are often treated as authoritative sources even as they generate citations that are nonexistent or irrelevant, making it difficult for researchers to locate trustworthy information. The result can be misleading conclusions and damage to academic integrity.
AI implementation and use also struggle with bias and misinformation because of the nature of the underlying training data. Models trained on skewed, incomplete, or otherwise compromised datasets produce similarly compromised outputs. AI can likewise be manipulated to spread false or misleading information: a malicious actor could, for instance, fabricate a news article and use AI to generate content that reinforces its claims, making real and fake news difficult to tell apart.
Efforts to counteract bias in AI have sometimes led to overcorrections, where certain communities are excluded from conversations altogether. Users must approach AI-generated content with critical thinking, verify information through reliable sources, and implement strategies to mitigate bias and misinformation in academic and research settings.
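One part of that verification habit can be automated. The following minimal Python sketch (an illustration of ours, not drawn from any resource listed here) checks whether a DOI from an AI-supplied citation actually resolves in the public Crossref REST API; the DOI shown is hypothetical, and a missing record is a prompt for further investigation, not proof of fabrication.

    import json
    import urllib.error
    import urllib.parse
    import urllib.request

    def doi_exists(doi: str) -> bool:
        """Return True if the public Crossref API has a record for this DOI."""
        url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp).get("status") == "ok"
        except urllib.error.HTTPError:
            # Crossref answers 404 for DOIs it has no record of.
            return False

    # Hypothetical DOI copied from an AI-generated reference list.
    suspect = "10.1234/placeholder.2023.001"
    print(suspect, "->", "found" if doi_exists(suspect) else "no Crossref record; verify by hand")

A DOI lookup catches only fabricated journal references; book chapters, reports, and web sources still require manual checking against library catalogs and databases.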
"It’s all very meta, but according to a new paper from Stanford scholars, there’s just one (very big) problem: The detectors are not particularly reliable. Worse yet, they are especially unreliable when the real author (a human) is not a native English speaker."
Chatbots: Pity the chatbot. The derided “computer says no” tool that seemed to be known more for blocking direct human interaction with shops, banks, airlines, and insurance companies has finally found affection, and an extraordinary amount of free publicity, via OpenAI’s ChatGPT.
It has been said that algorithms are “opinions embedded in code.” Few people understand the implications of that better than Abeba Birhane. Born and raised in Bahir Dar, Ethiopia, Birhane moved to Ireland to study: first psychology, then philosophy, then a PhD in cognitive science at University College Dublin.
How global workers, influencers, and activists develop tactics of algorithmic resistance by appropriating and repurposing the same algorithms that control our lives.
The field of Natural Language Processing (NLP) has seen significant advancements in recent years, thanks in large part to the development of powerful language models such as ChatGPT. ChatGPT, short for Chat Generative Pre-trained Transformer, is a large-scale neural language model developed by OpenAI that is capable of generating human-like responses to natural language input. With its impressive performance on a range of language tasks, ChatGPT has quickly become one of the most widely used language models in NLP research and application.
It’s hard to believe that ChatGPT appeared on the scene just three months ago, promising to transform how we write. The chatbot, easy to use and trained on vast amounts of digital text, is now pervasive. Higher education, rarely quick about anything, is still trying to comprehend the scope of its likely impact on teaching — and how it should respond.
ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it could improve users’ moral judgment and decisions. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, in an experiment we find that it influences users’ moral judgment even when they know they are being advised by a chatbot, and that they underestimate how much they are influenced.
Aside from stringing together human-like, fluid English-language sentences, one of ChatGPT’s biggest skill sets seems to be getting things wrong. In the pursuit of generating passable paragraphs, the AI program fabricates information and bungles facts like nobody’s business. Unfortunately, tech outlet CNET decided to make AI’s mistakes its business.
America is in a crisis of trust and truth. Bad information has become as prevalent, persuasive, and persistent as good information, creating a chain reaction of harm. It makes any health crisis more deadly. It slows down response time on climate change. It undermines democracy.
A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me.
Just as the Internet dramatically changed the way we access information and connect with each other, AI technology is now revolutionizing the way we build and interact with software. As the world watches new tools such as ChatGPT, Google’s Bard, and Microsoft’s Bing emerge into everyday use, it’s hard not to think of the science fiction novels that not so subtly warn against the dangers of human intelligence mingling with artificial intelligence. Society is in a scramble to understand all the possible benefits and pitfalls that can result from this new technological breakthrough. ChatGPT will arguably revolutionize life as we know it, but what are the potential side effects of this revolution?
Despite the remarkable progress made and users’ apparent preference for direct answers, this paradigm shift comes at a price that is higher than one might expect at first sight, affecting both users and search engine developers in their own ways.
Researchers used ChatGPT to produce clean, convincing text that repeated conspiracy theories and misleading narratives.
Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. These patterns are likely to worsen as both the size of LMs and their training data increase, raising concerns about indiscriminately pursuing larger models with larger training corpora. Plagiarized content can also contain individuals’ personal and sensitive information.
The U.S. Department of Education's Office of Educational Technology released a new report, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, this week. The report acknowledges that the rapid pace of artificial intelligence (AI) advances is impacting society and summarizes opportunities and risks for AI in teaching, learning, research, and assessment. AI enables new forms of interaction among students, teachers and computers by way of voice, gestures, sketches and other human modes of communication, according to the report. These new forms may be leveraged to, for example, support students with disabilities, provide an additional “partner” for students working on collaborative assignments or help a teacher with complex classroom routines.
Recent breakthroughs in natural language processing (NLP) have permitted the synthesis and comprehension of coherent text in an open-ended way, thereby translating theoretical algorithms into practical applications.
Historically, the majority of digital assistants, including chatbots, have been assigned names, voices, visual representations, and even "personalities" that are stereotypically feminine and reflect patriarchal ideology. This cross-sectional descriptive study of chatbots associated with large academic libraries in the United States found that there are few extant library chatbots and, in a major departure from trends, even fewer that are gendered. This is promising, in that it signals, whether intentionally or not, that the practices of creators and adopters are countering entrenched tendencies to typecast digital assistants as women, which may point to more feminist and gender-inclusive technology design to come.
AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
Artificial intelligence (AI) is a topic on everyone’s mind. From ALA’s Annual Conference to the halls of Congress and even the White House, everyone seems to have an opinion on this technology and, moreover, intense worry about what it portends for the future. Alongside the need for regulations on AI, there are also complex ethical implications involved with this issue. Ahead of TIE‘s webinar “Inclusive and Ethical AI for Academic Libraries,” this resource list is intended to expand the discussion on the ethical implications and potential shortfalls of different AI technologies while enabling readers to envision more ethical outcomes.
The publishers of thousands of scientific journals have banned or restricted contributors’ use of an advanced AI-driven chatbot amid concerns that it could pepper academic literature with flawed and even fabricated research.
As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an operationalization of truth, where distinct, often conflicting claims are smoothly synthesized and confidently presented as truth-statements.
What does it mean when an AI chatbot refuses to answer a question? It may be a sign of a guardrail or safeguard put in place to prevent the AI from providing incorrect or even malicious information. But what if the question is innocuous or even purely informational?