Tag: openai
-
A process for getting good-enough outputs from OpenAI’s Deep Research
A practical guide to using OpenAI’s Deep Research feature effectively, detailing a four-step process that involves creating prompts with ChatGPT o1, answering clarifying questions, generating comprehensive reports in minutes instead of days, and achieving “good-enough” results that would typically require weeks of research.
-
[Note] Introducing deep research
“Deep research is OpenAI’s next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst.” – OpenAI
-
[Link] OpenAI Claims DeepSeek Used Its Models
https://www.perplexity.ai/page/openai-claims-deepseek-used-it-3WNYRWivRdm90JDznlWCPA OpenAI alleges that it has uncovered evidence suggesting DeepSeek utilized its proprietary models without authorization to train a competing open-source system. The controversy centers around a technique called “distillation,” where outputs from larger AI models are used to train smaller ones. Security researchers identified individuals linked to DeepSeek extracting substantial data through OpenAI’s API in…
-
Paper – Reclaiming voice with AI
Mirza, F. N., Bogan, A., Beam, A. L., Manrai, A. K., & Ali, R. (2024). Reclaiming Voice with AI. NEJM AI, 1(12). In a world-first application, OpenAI’s Voice Engine was used to clone Ms. Bogan’s voice from just 15 seconds of preexisting audio, sourced from a school project she had filmed a few years prior. This enabled…
-
Swarm is a framework for developing multi-agent systems
Swarm is an experimental framework from OpenAI for building, orchestrating, and deploying multi-agent systems.
-
The significance of OpenAI’s $6.6B investment round
OpenAI’s unprecedented $6.6 billion investment raise suggests they demonstrated something remarkable to investors, though not necessarily just raw intelligence gains. Whether it’s improved safety, efficiency, or multimodal capabilities, this massive vote of confidence hints at breakthroughs we haven’t yet seen publicly—developments that could reshape AI’s integration into society.
-
Being inaccurate isn’t the same as being useless
New research on AI model factual accuracy shows that while language models struggle with certain difficult questions, this doesn’t diminish their value as thinking partners. Like human conversations, where perfect accuracy isn’t required for productive discussion, AI’s occasional inaccuracies don’t prevent it from being a useful collaborative tool.
-
Diminishing returns of LLMs don’t stop progress
Recent discussions about LLM diminishing returns suggest OpenAI’s next frontier model may not be significantly smarter than GPT-4. However, this plateau in intelligence doesn’t diminish the technology’s potential, as improvements can focus on making models cheaper, faster, smaller, and better at specific tasks rather than increasing raw intelligence.
-
Next version of OpenAI’s LLMs might not be much smarter than GPT-4
Matthias Bastian (2024-11-10). OpenAI’s New “Orion” Model Reportedly Shows Small Gains Over GPT-4. OpenAI’s next major language model, codenamed “Orion,” delivers much smaller performance gains than expected. OpenAI researchers point to insufficient high-quality training data as one reason for the slowdown. Most publicly available texts and data have already been used. The slowdown in LLM…
-
Court ruling: Language models don’t copy information; they synthesise it
Masse, B. (2024, November 8). OpenAI’s data scraping wins big as Raw Story’s copyright lawsuit dismissed by NY court. VentureBeat. The judge noted that “the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.” This reflects a key difficulty in these types of cases: generative AI is designed to synthesize…
-
Frameworks GPT helps you think through problems
Ethan Mollick’s Frameworks GPT helps you work through difficult problems by suggesting suitable frameworks to help structure your thinking.
-
OpenAI introduces experimental multi-agent framework “Swarm”
https://the-decoder.com/openai-introduces-experimental-multi-agent-framework-swarm/ OpenAI has released a new open-source framework called “Swarm” on GitHub. The company describes it as an experimental tool for creating, orchestrating, and deploying multi-agent systems. Swarm aims to make agent coordination and execution lightweight, highly controllable, and easily testable…
-
Weekly digest 41
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Weekly digest 39
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Good (non-technical) overview of how LLMs are getting smarter
Ethan Mollick’s non-technical overview of the two scaling laws describing how generative AI models keep getting smarter.
-
Weekly digest 38
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Learning to Reason with LLMs
OpenAI o1 is much better at reasoning through problems.
-
OpenAI releases their ‘Strawberry’ language model
OpenAI releases OpenAI o1, codenamed ‘Strawberry’, which targets the reasoning problems inherent in language models.
-
Weekly digest 34
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.
-
Weekly digest 33
A weekly collection of things I found interesting, thought-provoking, or inspiring. It’s almost always about higher education, mostly technology, and usually AI-related.