Social media
I had thought that deleting my Twitter account (and also my two Mastodon accounts) would be harder than it ended up being. Even though it’s something I’ve been considering for a long time, I’ve struggled with the idea of deleting more than 10 000 posts that I’ve created over more than a decade. But now that it’s gone, the only thing I’m feeling is a sense of relief.
Since deleting my account, I’ve come across these posts, which I thought were worth sharing. Note that 3 of the 4 posts in this list are by Mark Carrigan, a social media scholar at the Manchester Institute for Education:
- Axbom, P. (2022, October 9). Why I left algorithm-based social media and what happened next. axbom.com. (I like the post but I think the title is a bit click-baity).
- Carrigan, M. (2023). It is time for academics to let go of Twitter. Mark Carrigan.
- Carrigan, M. (2022, November 19). Requiem for a Tweet – Is there a future for the academic social capital held on the platform? Impact of Social Sciences.
- Carrigan, M. (2023). Saying goodbye to Twitter. Mark Carrigan.
This platform is a lost cause for academics. Social platforms have become integral to the research infrastructure in a way analogous to conferences, workshops and seminars. Universities and funders need to take responsibility for the upcoming transition rather than outsourcing it to private firms. If you’re committed to using Twitter for external engagement then the only option will be to pay the tribute and accept you will be doing ever more work to reach a shrinking audience. – Mark Carrigan
Generative AI and language models
We’re going to see more generic language models that are fine-tuned on discipline-specific databases (there’s a rough sketch of what that fine-tuning might look like after the reference list below). This will give us experts in technical professions that we can interact with using conversational language. I imagine the lawyer language model, the doctor language model, the engineer language model, and so on. This will also go some way towards addressing concerns that ‘vanilla’ ChatGPT isn’t safe for clinical decision-making.
- Singhal, K., et al. (2022). Large Language Models Encode Clinical Knowledge (arXiv:2212.13138). arXiv (Google/DeepMind research paper).
- Bastian, M. (2023, April 14). Google’s medical language model ‘Med-PaLM 2’ enters pilot phase with first customers. THE DECODER.
- Bastian, M. (2023, January 29). BioGPT is a Microsoft language model trained for biomedical tasks. THE DECODER.
- Mesko, B. (2023, January 17). Med-PaLM: New chatbots will soon be better than waiting for a doctor. The Medical Futurist.
- Sharing Google’s Med-PaLM 2 medical large language model, or LLM. (n.d.). Google Cloud Blog.
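To make the fine-tuning idea a bit more concrete, here’s a minimal sketch of what it might look like in practice, using Hugging Face’s transformers library. The base model (GPT-2) is just a small stand-in, and the corpus file of de-identified clinical notes is entirely hypothetical; the systems referenced above (Med-PaLM, BioGPT) are built very differently and at vastly larger scale.

```python
# Minimal sketch: fine-tuning a small generic language model on a
# discipline-specific corpus. "clinical_notes.txt" is a hypothetical
# file of de-identified notes, one document per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

corpus = load_dataset("text", data_files={"train": "clinical_notes.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clinical-gpt2",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    # mlm=False gives standard next-token (causal) language modelling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```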
Staying with language models, HealthGPT uses the GPT API to query personal health data, giving you a conversational interface to your own records. You can imagine something similar, like a hypothetical TeachGPT, which would build a customised model from my notes, emails, articles, and presentations (for example), and then ask me questions, tutor me, help me to highlight gaps in my understanding, and suggest resources I could use to fill those gaps.
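It’s not hard to see how something like this could be wired up today. Here’s a rough sketch of a hypothetical TeachGPT: embed my notes, retrieve the ones most relevant to a question, and hand them to a language model with a tutoring prompt. The file layout, prompts, and the use of OpenAI’s chat API (as it looked at the time of writing) are all illustrative assumptions.

```python
# Rough sketch of a hypothetical "TeachGPT": retrieve the notes most
# relevant to a question, then ask a chat model to tutor from them.
# File paths, prompts, and model names are illustrative assumptions.
from pathlib import Path

import numpy as np
import openai
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Load my notes (one text file per note) and embed them once, up front.
notes = [p.read_text() for p in Path("notes").glob("*.txt")]
note_vecs = embedder.encode(notes, normalize_embeddings=True)

def tutor(question: str, k: int = 3) -> str:
    """Answer a question using the k notes most similar to it."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(note_vecs @ q_vec)[-k:]  # cosine similarity (vectors are normalised)
    context = "\n---\n".join(notes[i] for i in top)
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You are a tutor. Use only the supplied notes. "
                         "Answer the question, then point out gaps in the "
                         "notes and suggest what to read next.")},
            {"role": "user",
             "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message["content"]

print(tutor("What are the main arguments in my notes on assessment?"))
```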
And related to this, we’re starting to see evidence that language models can give feedback to students:
Our results show that i) ChatGPT is capable of generating more detailed feedback that fluently and coherently summarizes students’ performance than human instructors; ii) ChatGPT achieved high agreement with the instructor when assessing the topic of students’ assignments; and iii) ChatGPT could provide feedback on the process of students completing the task, which benefits students developing learning skills.
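The mechanics behind this kind of feedback are mostly careful prompting. Below is a toy sketch of the general approach; the rubric and prompt wording are my own illustrative inventions, not the instruments used in the study quoted above, and the chat API call reflects OpenAI’s interface at the time of writing.

```python
# Toy sketch of generating formative feedback on a student submission.
# The rubric and prompt wording are illustrative, not the study's.
import openai

RUBRIC = """\
1. Accuracy of the clinical reasoning
2. Structure and coherence of the argument
3. Use of evidence to support claims"""

def generate_feedback(brief: str, submission: str) -> str:
    """Return rubric-aligned feedback on a single submission."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You are a supportive instructor. Give specific, "
                         "actionable feedback against the rubric, and "
                         "comment on the student's process as well as the "
                         "final product.")},
            {"role": "user",
             "content": (f"Assignment brief: {brief}\n\nRubric:\n{RUBRIC}"
                         f"\n\nSubmission:\n{submission}")},
        ],
        temperature=0.3,  # keep feedback style consistent across students
    )
    return response.choices[0].message["content"]
```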
Language models that serve specific functions through collections of narrow skill-sets may be the first components of personal AI assistants that are controlled by users rather than by large companies. I’d be super-uncomfortable giving GPT (or any language model) access to lots of my personal information, which is why the development of open-source language models (also see this Nature blog post), services (like Open Assistant), and APIs is going to be so important in developing this ecosystem. We need to ensure that the potential of generative AI isn’t entirely controlled by private enterprise (although this may be like saying that Linux is a valid alternative to Windows, while acknowledging that everyone still runs Windows).
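As a small illustration of why the open ecosystem matters: an open model can run entirely on your own machine, so your personal data never leaves it. The model below (Google’s Flan-T5 small) is just an illustrative choice; any open instruction-tuned model would do.

```python
# Minimal sketch: running an open instruction-tuned model locally with
# Hugging Face transformers, so prompts (and any personal data in them)
# never leave your machine. Model choice is illustrative.
from transformers import pipeline

assistant = pipeline("text2text-generation", model="google/flan-t5-small")

# A toy query over personal data you might not want to send to a
# third-party API.
prompt = ("Summarise this sleep log in one sentence: "
          "Mon 6h, Tue 5h, Wed 8h, Thu 6h, Fri 7h.")
print(assistant(prompt, max_new_tokens=60)[0]["generated_text"])
```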
Still in the context of education, we’re going to see an explosion of websites, apps, courses, and so on, all of which will be “AI-enabled”, with each offering a slightly different take on the teaching / learning / assessment process. Some of these will be great for learning (e.g. Socratic-tutorial-type apps and services), while others will be borderline cheating (e.g. apps to help students ‘simplify’ or ‘enhance’ their writing). We’ll need to help students (and future healthcare professionals) recognise the features of tools and services that support their learning, and to ignore the rubbish. And there’s going to be a lot of rubbish.
Now is the time for us to focus on ensuring that educators understand language models: what they are, how they’re trained and tuned, how they’re implemented, what risks they present, how to mitigate those risks, what regulation exists, how that affects policy positions, and so on. And we could do worse than simply starting with a shared vocabulary of AI terms.