Yesterday I published a post describing my concerns with how universities are responding to the new paradigm of expertise-on-demand facilitated by generative AI. At the end of that post I noted that I wrote it collaboratively with Claude, and this post describes what that process (kind of) looked like.
I also want to be clear that these prompts aren’t especially good. In most cases they’re the first version of what I used, and I have no doubt that someone with more experience would have used different, better prompts. These are all examples of the kinds of conversational prompts that are an easy way to get started with generative AI.
For the blog post, I knew the general direction I wanted it to go, i.e. I knew the conclusion and the shape of the main arguments I wanted to make. But you could just as easily use Claude or any other LLM to ask for ideas that you could write about.
This is the prompt I started with: “Outline an argument that leads to the conclusion that universities are poorly positioned to lead in the use of AI for LTA. The main premises of this argument are:
- Generative AI and LLMs increase access to expertise.
- Universities live in a paradigm where they have a monopoly on expertise.
- They used to have a monopoly on specialised information, but the internet changed that.
- Define expertise partly as the ability to do things in the world.
- Prior to recent developments in AI, LLMs only knew things about the world, but were limited in that they had no capability to do things in the world.
- Universities are not going to give up their monopoly on access to expertise, and will continue focusing their attention on generative AI’s ability to generate information.
- So they pivoted to controlling access to expertise.”
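I did all of this through the chat interface, but if you’d rather script the same kind of back-and-forth, here is a minimal sketch using the Anthropic Python SDK. To be clear, this is illustrative only and not what I actually did: the model name is a placeholder, and it assumes you have an API key set in your environment.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

outline_prompt = (
    "Outline an argument that leads to the conclusion that universities are "
    "poorly positioned to lead in the use of AI for LTA. "
    "The main premises of this argument are: ..."  # the premises listed above
)

# Keep the whole exchange in a list so each follow-up prompt
# builds on the earlier turns, the way a chat thread does.
conversation = [{"role": "user", "content": outline_prompt}]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whatever model you have access to
    max_tokens=1024,
    messages=conversation,
)

draft_outline = response.content[0].text
conversation.append({"role": "assistant", "content": draft_outline})
print(draft_outline)
```

Each revision prompt described below would simply be appended to that same conversation list before the next request, so the model keeps the full context of the draft so far.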
After reviewing the initial response to my first instruction, I prompted: “In the second paragraph, you say that universities adapted to focus more on shaping expertise. I agree with this claim. However, I’d like you to expand on the paragraph by explaining how universities still focus much of their assessment and teaching on students’ ability to reproduce information. And this is why they are still talking about the threat of generative AI to assessment. As long as they focus their attention on the ‘information generation’ part of LLMs, they risk losing sight of the ‘access to expertise’ part.”
Claude did exactly what I asked, creating a paragraph that, more or less, expanded on the point in a way that made sense to me.
After reviewing the output again, I tried this prompt: “I’d like you to take the following text, and update it to include the impact of OpenAI’s recent release of GPTs on the increased access to expertise that’s now possible with newer services layered on top of LLMs. Here is the text I want you to update: ‘The development of generative AI starts to bridge this gap between knowledge and action. Systems like DALL-E 2 and ChatGPT demonstrate early proficiency at taking textual prompts and producing synthetic but usable artifacts and services. This expanding practical generative capacity challenges the university monopoly on expertise.’”
This didn’t work because Claude doesn’t have access to up-to-date information. I’d thought it had access to more recent information, but I was obviously wrong. Having said that, I was running the same process in parallel with BingChat in Edge, which apparently uses GPT-4 and has access to the internet. But the responses I got from BingChat were, at best, similar to Claude’s (over the course of this writing exercise, the outputs from BingChat were typically worse than Claude’s, reinforcing my preference for Claude).
At about this point I realised that I was making several assumptions. So I tried this prompt: “Is it really true to say that universities have a monopoly on expertise? How else could you characterise the role of universities in society?”
Based on Claude’s response, which was nuanced and helpful, I asked it to revise the full text, taking into account our recent exchanges.
After thinking about it a bit more, I realised that my position had changed. So I prompted: “I’ve changed my mind; I no longer believe that universities have come to terms with access to abundant information, as evidenced by the fact that most assessments in universities are closed-book. If universities truly accepted the premise that access to information is no longer the important factor in learning, they would have made all assessments default to open-book. Taking that into account, the situation is even worse for universities, because the world is shifting to both information and expertise on demand, and universities don’t seem well-positioned to adapt. Please edit the piece to reflect this.”
And Claude did exactly that. To be clear, at this point I’m still working with an outline of maybe 500 words, so it’s still something I’d consider quite basic.
During the process, I’d thought of another point I wanted to include, so I prompted: “I’d like to include another concept in this essay; the idea that universities are so focused on trying to find the line separating the students’ work and the work of the AI, that they’re not seeing the student and AI working as a single, self-reinforcing unit. An ecosystem of ideas that build on each other, some coming from the student, some coming from the AI. Until universities accept that AI cannot be separated from the student, they will remain stuck in the paradigm where they believe they are gatekeepers, both to information and expertise. Please incorporate this idea into the whole piece, but also focus on it in its own paragraph.”
Claude rewrote the main outline, taking into account this new concept.
At some point, I realised that I was talking about ‘expertise’ with certain assumptions. So I prompted: “Would you describe ‘expertise’ as the ability to convert knowledge into action?”
It gave me a few central aspects of expertise that I could then ask it to incorporate into the full piece.
Here are a few other prompts I used during the process:
- Many other people have made similar arguments over the past few decades, about certain types of technologies making universities irrelevant. And yet we still have universities. What makes this technology (i.e. AI) different?
- In an earlier point, you talk about elaborating on constructive or creative new educational models in an expertise abundant world. Can you give me some examples of what you think these might look like?
- I’d like you to consolidate the ideas that are currently scattered across the essay, but to keep all of the main ideas. If there are parts of the essay that aren’t relevant, or that can be improved, please let me know. If there are parts that can be shortened without losing the central idea, do so.
After a few more back-and-forth interactions, I felt like I had a first complete draft that I was happy with, so I prompted: “Please tell me what you think of the attached essay. What is it about, and how can it be improved?”
And here are Claude’s suggestions for how I could improve the writing:
- Provide more concrete examples of how AI can demonstrate expertise with real-world impact. This would strengthen the argument.
- Consider counterarguments – are there ways universities could embrace AI to enhance learning? Can credentials adapt?
- Explain how expertise validation may change if AI takes over training. Who will oversee AI capabilities?
- Clarify if the goal is reforming universities or envisioning alternate models. Is there a hybrid approach?
- Reduce repetitive points. The core arguments around access to expertise and university entrenchment are made several times.
These are all excellent suggestions. Then I spent about an hour restructuring, editing, and expanding on all the ideas that I’d collaboratively developed with Claude. And then I prompted it again: “Please give me your thoughts on the strengths and weaknesses of the essay, with specific advice on how it can be improved.”
And here is the response:
Weaknesses:
- Could provide more evidence that expertise abundance is already here or imminent. Some claims about current AI capabilities are a bit speculative.
- Does not deeply consider counterarguments. For example, ways universities could positively integrate AI expertise into teaching.
- Focuses heavily on criticizing universities without elaborating alternatives. More details needed on new models.
- Repetition of main points gives an impression of opining rather than reasoned analysis.
Suggestions for improvement:
- Provide more examples and data showing AI expertise in action to strengthen claims.
- Acknowledge limitations of today’s AI and project expertise abundance as a future scenario. This makes the position more nuanced.
- Consider both pessimistic and optimistic paths for universities adapting to AI. Don’t just assume stagnation.
- Elaborate on what new educational models might look like in an expertise abundant world. Get more constructive.
- Reduce repetitive assertions. Focus on building the argument progressively vs. restating the same points.
- Maintain a measured tone when critiquing universities. Avoid sounding dismissive even if arguing they may be disrupted.
This is the kind of feedback you might get from a critical writing friend or peer reviewer. And, if I’m honest, it was hard to hear. I thought I’d done a pretty good job and wanted to publish. But Claude was right, and the piece wasn’t as well-written as I’d thought. This time I spent about 2 hours rewriting and editing the final piece. When I submitted this draft, Claude’s feedback was more positive and had fewer suggestions for improvement. Or, at least, I felt like the suggestions were trivial enough that I could ignore them.
In total, I spent a lot more time (about 4-5 hours) writing that post than I do on most others, but I’m a lot more satisfied with the outcome. Without Claude, I may have spent about 2 hours on the piece and it would have been less structured, less neutral (more of a polemic), and less convincing (IMO). Using Claude in this way is like having a great subject-matter expert and writing coach on standby, 24/7.
My goal was to write a post that expressed my concerns about how universities are responding to generative AI, my reasons for why I think they’re responding this way, and what it might mean if they don’t adjust their approach. I think I achieved my goal (Claude doesn’t have goals) of creating something that’s a product of both my thinking and Claude’s wider perspectives.
I’ve not only published something I like, but I’ve learned a few important points about writing that will help inform future pieces.
Was the output mine or Claude’s? From my perspective, it was the product of my interaction with a great writing coach, and so is not very different from the work of a committed student who seeks out that kind of advice and feedback.
If this had been an essay that I needed to submit as part of a programme of study at university, would I have been cheating?