Categories
AI

AI safety needs social scientists

…we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments. Since the questions we need to answer are interdisciplinary and somewhat unusual relative to existing research, we believe many fields of social science are applicable, including experimental psychology, cognitive science, economics, political science, and social psychology, as well as adjacent fields like neuroscience and law.

Irving, G. & Askell, A. (2019). AI safety needs social scientists. OpenAI.

The development of AI and its implications for society are too important to leave to computer scientists alone, especially when it comes to AI safety and alignment. The uncertainty around how we think about human values makes them difficult to encode in software, since they involve human rationality, bias and emotion. But because aligning AI systems with our values is so fundamental to the ability of those systems to make good decisions, we need a wide variety of perspectives aimed at addressing the problem.

Link to the full paper on Distill.

Categories
AI

How OpenAI is developing real solutions to the AI alignment problem

Growth in AI safety spending

Farquhar, S. (2017). Changes in funding in the AI safety field.

Here’s a situation we all regularly confront: you want to answer a difficult question, but aren’t quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who are smart enough to figure it out. The bad news is that they disagree.

If given plenty of time – and enough arguments, counterarguments and counter-counter-arguments between all the experts – should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over?

In other words: does ‘debate’, in principle, lead to truth?

Source: Wiblin, R. & Harris, K. (2018). Dr Paul Christiano on how OpenAI is developing real solutions to the ‘AI alignment problem’, and his vision of how humanity will progressively hand over decision-making to AI systems.

This is one of the most thoughtful conversations I’ve heard on the alignment problem in AI safety. It wasn’t always easy to follow, as both participants are operating at a very high level of understanding of the topic, but it’s really rewarding. It’s definitely something I’ll listen to again. Topics they covered include:

  • Why Paul expects AI to transform the world gradually rather than explosively and what that would look like.
  • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us.
  • Why AI systems will probably be granted legal and property rights.
  • How an advanced AI that doesn’t share human goals could still have moral value.
  • Why machine learning might take over science research from humans before it can do most other tasks.
  • Which decade we should expect human labour to become obsolete, and how this should affect your savings plan.
Categories
conference

Attendance of e-learning colloquium

I attended the first few presentations of an e-learning colloquium on campus last Tuesday. Here are a few notes I took during the short time I was there.

Begin a section of work with a short test to evaluate pre-intervention knowledge. The questions should be aligned with important content from the module. This will identify for students the areas that they need to focus on during the module. Following up with the same short test after the intervention allows you to evaluate immediately following the lesson / module whether or not students understood the main concepts.

Make sure that this isn’t seen by students as “busy work” i.e. something we want them to do that’s meaningless.

Sometimes “e-learning” seems to be about moving content online. Even if the content is interactive, does it change behaviour, or do students use the same learning techniques they would use with offline content? I think that for many teachers, “e-learning” often means using a computer and the internet to do the same thing they’ve always done i.e. there’s no change in practice.

Less than half of dentistry students accessed the e-learning site that staff spent ages creating. 4th year students would visit if it was useful for exams, to get notes, or out of curiosity. 5th year students see an opportunity for advanced learning. There is a big disconnect between 4th and 5th year students in terms of how they perceive the service.

Students wanted to see more clinical cases on the e-learning site. Dentistry uses OSCEs (maybe we should contact them to discuss our implementation next year). Students also wanted mock tests with memos (this seems like a good idea, but most participants thought it was tantamount to giving students the tests and answers).

There were suggestions to ban access to social networks, as they slow down the servers. Evidence of a lack of understanding on the part of academics as to the value of incorporating a social component into the teaching and learning process?

Including new teaching techniques requires a change in student mindset. This needs to start in 1st year.

Categories
learning physiotherapy teaching

Aligning curriculum with assessment

Our department is gearing up for its annual planning meeting, where we review the physiotherapy course from the past year and plan for the next one. This is also the year that our newly formed Directorate of Teaching and Learning has developed an institutional teaching and learning policy, with a strategic implementation plan over the next 5 years. As part of the development of a scholarship of teaching and learning at the university, all faculties and departments are now being asked to develop their own teaching and learning policies, aligned with the institutional one. I’ll be conducting a short workshop at the planning meeting, where we’ll look at the institutional policy and flesh out the draft departmental document I’ve been working on for the past week or so.

As part of my presentation, I’ll be showing an example of how we can align a simple assessment task with the departmental teaching and learning policy. Here’s my initial idea, feedback or comments are welcome.

Categories
education students

Assessment in an outcomes based curriculum

I attended a seminar / short course on campus yesterday, presented by Prof. Chrissie Boughey from Rhodes University. She spoke about the role of assessment in curriculum development and the link between teaching and assessing. Here are the notes I took.

Assessment is the most important factor in improving learning because we get back what we test. Therefore assessment is acknowledged as a driver of the quality of learning.

Currently, most assessment tasks encourage the reproduction of content, whereas we should rather be looking for the production of new knowledge (the analyse, evaluate and create parts of Bloom’s top level cognitive processes).

Practical exercise: Pick a course / module / subject you currently teach (Professional Ethics for Physiotherapists), think about how you assess it (Assignment, Test, Self-study, Guided reflection, Written exam) and finally, what you think you’re assessing (Critical thinking / Analysis around ethical dilemmas in healthcare, Application of theory to clinical practice). I went on to identify the following problems with assessment in the current module:

  • I have difficulty assigning a quantitative grade to what is generally a qualitative concept
  • There is little scope in the current assessment structure for a creative approach

This led to a discussion about formal university structures that determine things like how subjects will be assessed, as well as the regimes of teaching and learning (“we do it this way because this is the way it’s always been done”). Do they remove your autonomy? It made me wonder what our university’s official assessment policy is.

Construct validity: Are we using assessment to assess something other than what we say we’re assessing? If so, what are we actually assessing?

There was also a question about whether or not we could / should assess only what’s been formally covered in class. How do you / should you assess knowledge that is self-taught? We could, for example, measure the process of learning rather than the product. I made the point that in certain areas of what I teach, I no longer assign a grade to an individual piece of work and rather give a mark for the progress that the student has made, based on feedback and group discussion in that area.

Outcomes based assessment / criterion referenced assessment

  1. Uses the principle of ALIGNMENT (aligning learning outcomes, passing criteria, assessment)
  2. Is assessing what students should be able to do
  3. “Design down” is possible when you have standardised exit level outcomes (we do, prescribed by the HPCSA)
  4. The actual criteria are able to be observed and are not a guess at a mental process, “this is what I need to see in order to know that the student can do it”
  5. Choosing the assessment tasks answers the question “How will I provide opportunities for students to demonstrate what I need to see?” When this isn’t the starting point, everything else falls out of alignment
  6. You need space for students / teachers to engage with the course content and to negotiate meaning or understanding of the course requirements, “Where can they demonstrate competence?”

Criteria are negotiable and form the basis of assessment. They should be public, which makes educators accountable.

When designing outcomes, the process should be fluid and dynamic.

Had an interesting conversation about the privileged place of writing in assessment. What about other expressions of competence? Since speech is the primary form of communication (we learn to speak before we learn to write), we find it easier to convey ideas through conversation, as it includes other cues that we use to construct meaning. Writing is a more difficult form because we lack visual (and other) cues. Drafting is one way that constructing meaning through writing could be made easier. The other point I thought was interesting was that academic writing is communal (drafting, editors and reviewers all provide a feedback mechanism that isn’t as fluid as speech, but is helpful nonetheless), yet we often don’t allow students to write communally.

Outcomes based assessment focusses on providing students with multiple opportunities to practice what they need to do, and the provision of feedback on that practice (formative). Eventually, students must demonstrate achievement (summative).

We should only assign marks when we evaluate performance against the course outcomes.

Finally, in thinking about the written exam as a form of assessment, we identified these characteristics:

  • It is isolated and individual
  • There is a time constraint
  • There is pressure to pass or fail

None of these characteristics are present in general physiotherapy practice. We can always ask a colleague or go to the literature for assistance. There is no constraint to have the patient fully rehabilitated by any set time, and there are no pass or fail criteria.

If assessment is a method we use to determine competence to perform a given task, and the way we assess isn’t related to the tasks physio students will one day perform, are we assessing them appropriately?

Note: the practical outcomes of this session will include the following:

  • Changing the final assessment of the Ethics module from a written exam to a portfolio presentation
  • Rewriting the learning outcomes of the module descriptors at this year’s planning meeting
  • Evaluating the criteria I use to mark my assignments to better reflect the module outcomes