Tag: trust
-
Link: AI and trust
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html “In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really…
-
Link: Brazilian city enacts an ordinance that was secretly written by ChatGPT
https://apnews.com/article/brazil-artificial-intelligence-porto-alegre-5afd1240afe7b6ac202bb0bbc45e08d4 “It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence.” I agree. What is wrong with having AI create an output that we’re happy to sign off on? The only problem I can think of is that society likes…
-
Link: Introducing Claude 2.1
https://www.anthropic.com/index/claude-2-1 Our latest model, Claude 2.1, is now available over API in our Console and is powering our claude.ai chat experience. Claude 2.1 delivers advancements in key capabilities for enterprises—including an industry-leading 200K token context window, significant reductions in rates of model hallucination, system prompts and our new beta feature: tool use… We’re doubling the…
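For anyone wanting to try the new features, here is a minimal sketch of calling Claude 2.1 with a system prompt via the `anthropic` Python SDK; the model name and parameters reflect my understanding of the API and should be checked against the current documentation:

```python
# Minimal sketch: Claude 2.1 with a system prompt via the anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    # System prompts were one of the headline Claude 2.1 features:
    system="You are a concise assistant for enterprise document review.",
    messages=[
        {"role": "user", "content": "Summarise this contract clause in one sentence."},
    ],
)
print(response.content[0].text)
```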
-
Link: New technique makes AI hallucinations wake up and face reality
https://thenextweb.com/news/ai-hallucinations-solution-iris-ai Here is a condensed paragraph summary of the article, generated by Claude (having read the original article, I can attest that it’s a decent summary): New techniques developed by researchers at Iris.ai show promise for reducing AI hallucinations, the problematic tendency for systems like chatbots to generate false information. Their approach validates the factual…
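The article doesn't publish Iris.ai's pipeline, but the general idea of validating generated text against source material is easy to sketch. The snippet below is my own illustration, not their method: it flags generated sentences that are semantically distant from the source document, using an off-the-shelf embedding model and an arbitrary threshold:

```python
# Generic sketch of source-grounded validation (NOT Iris.ai's actual method):
# flag generated sentences that are semantically distant from the source text.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def flag_possible_hallucinations(source_text: str,
                                 generated_sentences: list[str],
                                 threshold: float = 0.5) -> list[str]:
    source_emb = model.encode(source_text, convert_to_tensor=True)
    flagged = []
    for sentence in generated_sentences:
        sentence_emb = model.encode(sentence, convert_to_tensor=True)
        similarity = util.cos_sim(sentence_emb, source_emb).item()
        if similarity < threshold:  # weakly grounded in the source -> needs review
            flagged.append(sentence)
    return flagged
```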
-
Article: CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering
Pride, D., Cancellieri, M., & Knoth, P. (2023). CORE-GPT: Combining Open Access research and large language models for credible, trustworthy question answering (arXiv:2307.04683). arXiv. http://arxiv.org/abs/2307.04683 In this paper, we present CORE-GPT, a novel question answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first…
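The retrieve-then-answer pattern behind CORE-GPT is worth sketching. The code below is illustrative only (the retriever stub, prompt, and model choice are my assumptions, not the paper's implementation): passages are retrieved from an index over full texts, numbered, and the model is instructed to answer from them and cite them:

```python
# Illustrative retrieve-then-answer loop in the spirit of CORE-GPT
# (retriever, prompt, and model choice are hypothetical, not the paper's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(question: str, k: int = 5) -> list[dict]:
    # Placeholder: in CORE-GPT this role is played by search over the 32M+
    # full-text open access articles in CORE. Stubbed here for illustration.
    return [{"citation": "Doe et al. (2021)", "text": "Example passage text."}][:k]

def answer_with_sources(question: str) -> str:
    passages = retrieve(question)
    context = "\n\n".join(
        f"[{i + 1}] {p['citation']}: {p['text']}" for i, p in enumerate(passages)
    )
    prompt = (
        "Answer the question using ONLY the numbered passages below, and cite "
        "passage numbers for every claim.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```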
-
The “problem” of citation in language models
Recent large language models often answer factual questions correctly. But users can’t trust any given claim a model makes without fact-checking, because language models can hallucinate convincing nonsense. In this work we use reinforcement learning from human preferences (RLHP) to train “open-book” QA models that generate answers whilst also citing specific evidence for their claims,…
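What makes quoted evidence attractive is that, unlike a free-form citation, a verbatim quote can be mechanically checked against the source document. A minimal sketch of that verification step (my own illustration, not the paper's code):

```python
# Minimal sketch: mechanically verify that a model's quoted evidence appears
# verbatim in the claimed source document (illustrative example only).
import re

def _normalise(text: str) -> str:
    # Collapse whitespace so line breaks in the source don't break matching.
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_is_verified(quote: str, source_document: str) -> bool:
    return _normalise(quote) in _normalise(source_document)

source = "Per its operator, the tower is 330\nmetres tall including antennas."
print(quote_is_verified("the tower is 330 metres tall", source))  # True
print(quote_is_verified("the tower is 300 metres tall", source))  # False
```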
-
Example of a tutorial session with ChatGPT as the tutor
I wanted to do a little experiment using ChatGPT as a tutor, after coming across a project for building a personal AI tutor (see my last weekly digest). I’ve posted the transcript of the exchange below. A few points of interest I thought worth noting: When reading through the exchange below, remember that ChatGPT only…
-
Rejected AMEE abstract (oral presentation) | Is ‘being human’ enough? Preparing for clinical practice in the age of artificial intelligence
See this brief post on my reasons for sharing rejections. Introduction Identity is central to our understanding of the health professions, and much of professional education revolves around this core value. The introduction of artificially intelligent tools (AI-based systems) into clinical practice has led to resistance in the face of perceived threats to clinician autonomy (Jussupow…
-
10 recommendations for the ethical use of AI
In February the New York Times hosted the New Work Summit, a conference that explored the opportunities and risks associated with the emergence of artificial intelligence across all aspects of society. Attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence, the results of which are listed below.…
-
Comment: In competition, people get discouraged by competent robots
After each round, participants filled out a questionnaire rating the robot’s competence, their own competence and the robot’s likability. The researchers found that as the robot performed better, people rated its competence higher, its likability lower and their own competence lower. Lefkowitz, M. (2019). In competition, people get discouraged by competent robots. Cornell Chronicle. This…
-
The Future of Artificial Intelligence Depends on Trust
To open up the AI black box and facilitate trust, companies must develop AI systems that perform reliably — that is, make correct decisions — time after time. The machine-learning models on which the systems are based must also be transparent, explainable, and able to achieve repeatable results. Source: Rao, A. & Cameron, E. (2018).…
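As a toy illustration of two of those requirements, the sketch below (placeholder dataset, my own example) uses an inherently interpretable model whose learned coefficients can be inspected directly, and fixes the random seed so the result is repeatable:

```python
# Toy illustration of "transparent" and "repeatable": an interpretable model
# with inspectable coefficients and a fixed random seed. Dataset is a placeholder.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42)  # fixed seed -> repeatable split and results

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# "Transparency": each feature's learned weight is directly inspectable.
top = sorted(zip(X.columns, model.coef_[0]), key=lambda t: -abs(t[1]))[:5]
for name, coef in top:
    print(f"{name}: {coef:+.3f}")
print("held-out accuracy:", model.score(X_test, y_test))
```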
-
Separating the Art of Medicine from Artificial Intelligence
Writing a radiology report is an extreme form of data compression — you are converting around 2 megabytes of data into a few bytes, in effect performing lossy compression with a huge compression ratio. Source: Separating the Art of Medicine from Artificial Intelligence. For me, there were a few useful takeaways from this article. The first is…
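That "huge compression ratio" is easy to make concrete with a back-of-the-envelope calculation; the sizes below are my own illustrative assumptions (a ~2 MB study and a report of a few hundred bytes, rather than literally "a few bytes"):

```python
# Back-of-the-envelope compression ratio for the article's analogy
# (sizes are illustrative assumptions, not measurements).
study_bytes = 2 * 1024 * 1024   # ~2 MB of image data per study
report_bytes = 500              # a short free-text report, a few hundred bytes
print(f"compression ratio ~ {study_bytes / report_bytes:,.0f}:1")  # ~ 4,194:1
```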
-
We Need Transparency in Algorithms, But Too Much Can Backfire
The students had also been asked what grade they thought they would get, and it turned out that trust among students whose actual grades hit or exceeded that estimate was unaffected by transparency. But people whose expectations were violated – students who received lower scores than they expected – trusted the algorithm…