Author: Michael Rowe
-
Universities should run their own small reasoning models
Universities should use small reasoning models they can tailor to specific educational contexts and needs.
-
[Note] Introducing deep research
“Deep research is OpenAI’s next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst.” – OpenAI
-
[Podcast] Reading in the digital age
Digital books are now a common part of education, but concerns are growing around the problems of students reading on-screen. Marte Blikstad-Balas (University of Oslo) discusses the latest research around what it means to read on-screen as opposed to reading from ‘proper’ books, and why government bans on digital devices are not the best response.
-
[Note] Google DeepMind Unveils Weather Model
https://www.perplexity.ai/page/google-deepmind-unveils-weathe-3dzDrc.6QvWDmnMrDV_Ehg I like that Google DeepMind is integrating AI capabilities across a wide range of domains.
-
[Link] OpenAI Claims DeepSeek Used Its Models
https://www.perplexity.ai/page/openai-claims-deepseek-used-it-3WNYRWivRdm90JDznlWCPA OpenAI alleges that it has uncovered evidence suggesting DeepSeek utilized its proprietary models without authorization to train a competing open-source system. The controversy centers around a technique called “distillation,” where outputs from larger AI models are used to train smaller ones. Security researchers identified individuals linked to DeepSeek extracting substantial data through OpenAI’s API in…
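For readers unfamiliar with the term, here is a minimal, generic sketch of what distillation looks like in code. This is purely illustrative and assumes nothing about OpenAI’s or DeepSeek’s actual pipelines; the models, batch, and hyperparameters are toy placeholders.

```python
# Minimal knowledge-distillation sketch (illustrative only): a small "student"
# model is trained to match the softened output distribution of a larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 2.0   # softens both probability distributions
alpha = 0.5         # balance between distillation loss and hard-label loss

teacher = nn.Linear(128, 10)   # stand-in for a large pre-trained model
student = nn.Linear(128, 10)   # stand-in for a much smaller model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 128)              # toy batch of inputs
labels = torch.randint(0, 10, (32,))  # toy hard labels

with torch.no_grad():
    teacher_logits = teacher(x)       # the teacher's outputs become the training signal

student_logits = student(x)

# KL divergence between softened teacher and student distributions
distill_loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * (temperature ** 2)

hard_loss = F.cross_entropy(student_logits, labels)
loss = alpha * distill_loss + (1 - alpha) * hard_loss

loss.backward()
optimizer.step()
```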
-
[Link] New AI picks up 97% of lung diseases, and can tell pneumonia from COVID-19
https://newatlas.com/medical-ai/ai-lung-disease/ A new AI model developed by researchers in Australia can detect lung diseases from ultrasound videos with 96.57% accuracy. It can distinguish between pneumonia, COVID-19, and other lung conditions, outperforming previous tools. The model explains its decisions, helping doctors trust and understand its results. This hybrid AI combines two techniques to identify patterns and…
-
Managing long AI conversations: A practical suggestion for knowledge transfer
A suggestion for effectively managing long conversations with generative AI models by creating categorised artifacts that capture key discussions and decisions. This practical approach allows you to transfer important context between chat sessions, ensuring the AI maintains awareness of previous insights while avoiding usage limit warnings.
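As a concrete illustration of the workflow (the folder layout, category names, and helper functions are my own assumptions, not a prescribed format), a small script along these lines could maintain the artifacts and stitch them into a context block for a new session:

```python
# Sketch of the "categorised artifacts" idea: save key decisions and summaries
# from a long AI conversation as markdown files grouped by category, then
# concatenate them into a context block to paste at the start of a new chat.
from pathlib import Path
from datetime import date

ARTIFACT_DIR = Path("conversation_artifacts")

def save_artifact(category: str, title: str, summary: str) -> Path:
    """Save one summary artifact under its category folder."""
    folder = ARTIFACT_DIR / category
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{date.today().isoformat()}-{title.replace(' ', '-')}.md"
    path.write_text(f"# {title}\n\n{summary}\n", encoding="utf-8")
    return path

def build_context(categories: list[str] | None = None) -> str:
    """Concatenate saved artifacts into a context block for a new session."""
    parts = ["Context carried over from previous conversations:"]
    if not ARTIFACT_DIR.exists():
        return parts[0]
    for folder in sorted(ARTIFACT_DIR.iterdir()):
        if not folder.is_dir():
            continue
        if categories and folder.name not in categories:
            continue
        for artifact in sorted(folder.glob("*.md")):
            parts.append(f"\n## {folder.name}\n{artifact.read_text(encoding='utf-8')}")
    return "\n".join(parts)

# Example usage (hypothetical category and content)
save_artifact("decisions", "course redesign scope",
              "Agreed to limit the pilot to two modules in semester one.")
print(build_context(["decisions"]))
```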
-
[Link] AI Prescription Bill Proposed
https://www.perplexity.ai/page/ai-prescription-bill-proposed-qjHVQk3ORxCsufj4FODmGw A new bill, H.R. 206, known as the “Healthy Technology Act of 2023,” proposes to amend the Federal Food, Drug, and Cosmetic Act to allow artificial intelligence and machine learning technologies to prescribe medications, sparking debates over patient safety, regulatory challenges, and the broader implications of AI in healthcare policy at both federal and…
-
Head Space course discount during January
Head Space is offering a 25% discount on online courses this January for new newsletter subscribers. The courses aim to provide health professions educators with practical strategies for establishing sustainable academic workflows.
-
Head Space visual refresh
Announcing an update to the Head Space project: a complete visual refresh, emphasising simplicity and focus.
-
Paper – Superhuman performance of OpenAI-o1 on clinical reasoning tasks
New study shows that OpenAI’s o1-preview model achieves superhuman performance in medical diagnosis and reasoning tasks, surpassing both previous AI models and human physicians. The model excels in differential diagnosis and clinical management decisions, though showing similar performance to existing models in probabilistic reasoning tasks.
-
Podcast – Gwern Branwen – How an Anonymous Researcher Predicted AI’s Trajectory
Gwern is a pseudonymous researcher and writer. He was one of the first people to see LLM scaling coming. If you’ve read his blog, you know he’s one of the most interesting polymathic thinkers alive.
-
Revising AI-generated text leads to better outputs
The real advantage for knowledge workers comes from treating AI as a writing partner or, more accurately, a thinking partner. This is the value of working with AI: it can help get us out of our own heads.
-
Writing with AI isn’t a binary proposition
We need to develop a sense of taste for when to lean heavily on AI for content generation and when to engage with it more lightly, treating AI-generated input as a spectrum rather than an all-or-nothing choice.
-
Update to Copilot and data access
Microsoft is updating Copilot to provide the same enterprise-level data protection for both the standard and 365 versions. This change makes Copilot more appealing for universities and colleges, ensuring that user data is securely handled and logged like other Office 365 data.
-
Podcast – Bringing Back the Mammoth
Sam Harris – Bringing Back the Mammoth. (2024). Making Sense podcast. Sam Harris speaks with Ben Lamm about his work at Colossal Biosciences. They discuss his efforts to de-extinct the woolly mammoth, the Tasmanian tiger, and the dodo; the difference between Colossal’s approach and Jurassic Park; the details of resurrecting the mammoth; the relevance…
-
Google’s AI-based research tool
Google has just revealed a new AI tool called Deep Research that lets you call upon its Gemini bot to scour the web for you and write a detailed report based on its findings.
-
People will hand over control to AI, but probably not yet
“People are very much going to hand control over their computers to an AI. At a minimum, they are going to hand over all the information, even if they make some nominal attempt to control permissions on actions.” – Zvi Mowshowitz
-
[Link] The Biggest Week In AI Ever (Again!) — AI Mindset
https://www.ai-mindset.ai/ai-mindset-newsletter/the-biggest-week-in-ai-ever-again It’s time to stop treating AI like a fancy tool and start treating it like essential infrastructure. Just like you wouldn’t build a modern business without cloud computing, you won’t build a future business without AI infrastructure. I think before we can treat it like infrastructure, it needs to be more reliable. I wouldn’t…
-
[Link] Two visions of AI’s future
https://garymarcus.substack.com/p/two-visions-of-ais-future …there is a possible world in which we take a breath and ask how we can build a better, more reliable AI that can actually serve society, taking steps to make sure that it is used safely, equitably, and without causing harm. AI doesn’t have to lead to dystopia. But, left unregulated, it probably will.…