Comment: Could robots make us better humans?

This is one of his arguments for listening to AI-generated music, studying how computers do maths and…gazing at digitally produced paintings: to understand how advanced machines work at the deepest level, in order to make sure we know everything about the technology that is now built into our lives.

Harris, J. (2019). Could robots make us better humans? The Guardian.

Putting aside the heading that conflates “robots” with “AI”, there are several insightful points worth noting in this Guardian interview with the Oxford-based mathematician and musician Marcus du Sautoy. I think it’ll be easiest if I just work through the article and highlight them in the order that they appear.

1. “My PhD students seem to have to spend three years just getting to the point where they understand what’s being asked of them…”: It’s getting increasingly difficult to make advances in a variety of research domains. The low-hanging fruit has been picked, and it subsequently takes longer and longer to reach the forefront of knowledge in any particular area. At some point, making progress in any scientific endeavour will require so much expertise that no single human being will be able to contribute much to the overall problem.

2. “I have found myself wondering, with the onslaught of new developments in AI, if the job of mathematician will still be available to humans in decades to come. Mathematics is a subject of numbers and logic. Isn’t that what computers do best?”: On top of this, we also need to contend with the idea that advances in AI seem to indicate that some of these systems can develop innovations in what we might consider to be deeply human pursuits. Whether we call this creativity or something else, it’s clear that AI-based systems are arriving at insights that we might eventually have reached ourselves, albeit at some distant point in the future.

3. “I think human laziness is a really important part of finding good, new ways to do things…”: Even in domains of knowledge that seem to be dominated by computation, there is hope in the idea that, working together, we may be able to develop new solutions to complex problems. Human beings often look for shortcuts when faced with inefficiency or boredom, something that AI-based systems are unlikely to do because they can simply brute-force their way through a problem. Perhaps the human desire to take the path of least resistance, combined with the massive computational resources that an AI can bring to bear, would result in a solution that’s beyond the capacity of either working in isolation.

4. “Whenever I talk about maths and music, people get very angry because they think I’m trying to take the emotion out of it…”: Du Sautoy suggests that what we’re responding to in creative works of art isn’t an innately emotional thing. Rather, there’s a mathematical structure that we recognise first, and the emotion comes later. If that’s true, then there really is nothing stopping AI-based systems from not only creating beautiful art (they already do that) but creating art that moves us.

5. “We often behave too like machines. We get stuck. I’m probably stuck in my ways of thinking about mathematical problems”: If it’s true that AI-based systems may open us up to new ways of thinking about problems, we may find that working in collaboration with them makes us – perhaps counterintuitively – more human. If we keep asking what it is that makes us human, and let machines take on the tasks that don’t fit into that model, it may provide space for us to expand and develop those things that we believe make us unique. Rather than competing on computation and reason, what if we left those things to machines, and looked instead to find other ways of valuing human capacity?

Link: Enlightenment Wars: Some Reflections on ‘Enlightenment Now,’ One Year Later

I’m a big fan of Steven Pinker’s writing (I know that this isn’t fashionable with the social justice warriors, but there it is) and so was really happy to read his 10,000-word response to some of the criticisms of his latest book, Enlightenment Now. While reviews of the book were overwhelmingly positive, many bloggers and online commentators really took a dislike to Pinker’s arguments, sometimes seemingly because of who else liked the book (e.g. Bill Gates). Where Pinker uses data and links to sources to support his claims, his critics generally go for straw-man arguments and ad hominem attacks.

Pinker’s response is a long read but it’s also a really good example of how to respond to a critique of your academic work. He doesn’t take it personally and simply does what he is good at, which is marshalling the available evidence to support his arguments. If you like Steven Pinker (and science and rationality in general) you may enjoy this post.

Here is the link: https://quillette.com/2019/01/14/enlightenment-wars-some-reflections-on-enlightenment-now-one-year-later/.

Academic expert says Google and Facebook’s AI researchers aren’t doing science

Google and Facebook, and other corporate research labs, are focused on AI for profit, not on advancing science… such laboratories aren’t advancing the field of cognitive science any more than Ford is advancing the field of physics at the edge.

After all, no matter how impressive neural networks are, they operate on principles that date back decades. Perhaps the greatest good for humanity isn’t in fine-tuning algorithms that make people pay attention to Facebook at the expense of their mental health.

Source: Academic expert says Google and Facebook’s AI researchers aren’t doing science

I tend to agree with the main point that the work being done at Google and Facebook doesn’t count as science, in the sense that it’s not advancing our understanding of the world. The engineers at software companies spend a lot of time optimising algorithms until they get the answer they need. I’m not 100% sure, but it sounds a bit like p-hacking.
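To make the analogy a little more concrete, here’s a rough sketch (my own, not from the article) of what that kind of p-hacking looks like: if you keep testing pure noise and stop the moment the result looks “significant”, you get the answer you wanted far more often than chance should allow. The numbers and thresholds below are arbitrary.

```python
# Toy simulation: test a perfectly fair coin for "bias", but peek at the
# p-value after every batch of flips and stop as soon as p < 0.05. This is
# the statistical analogue of tweaking an algorithm until it gives the
# answer you need.
import math
import random

def two_sided_p(heads: int, flips: int) -> float:
    """Normal-approximation p-value for 'is this coin biased?'"""
    z = (2 * heads - flips) / math.sqrt(flips)
    return math.erfc(abs(z) / math.sqrt(2))

def peek_until_significant(max_flips: int = 1000, batch: int = 20) -> bool:
    """Flip a fair coin, checking the p-value after every batch.
    Returns True if we ever 'detect' bias (a false positive)."""
    heads, flips = 0, 0
    while flips < max_flips:
        heads += sum(random.random() < 0.5 for _ in range(batch))
        flips += batch
        if two_sided_p(heads, flips) < 0.05:
            return True  # stop early and report a 'significant' result
    return False

random.seed(1)
trials = 1000
hits = sum(peek_until_significant() for _ in range(trials))
print(f"'Significant' bias found in {100 * hits / trials:.0f}% of trials")
# Far more than the nominal 5%, even though the coin is fair.
```

The point isn’t the exact numbers; it’s that repeatedly checking until something crosses a threshold is very different from testing a hypothesis once.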

Having said that, I also think that there’s a difference between the kind of academic research that enhances our understanding of the world, and the kind of applied research that has commercial value. I’d also love to see more work devoted to cancer diagnosis (and the development of therapeutic interventions) than to social media optimisation, but that’s not really the point. This is about choosing to make an intellectual contribution to a field of research (in which case, do a PhD at a university) or applying well-understood theoretical principles in the service of real-world applications (in which case, go work for a startup).

I enjoyed reading (July)

Artificial Intelligence Is Now Telling Doctors How to Treat You (Daniela Hernandez)

Artificial intelligence is still in the very early stages of development–in so many ways, it can’t match our own intelligence–and computers certainly can’t replace doctors at the bedside. But today’s machines are capable of crunching vast amounts of data and identifying patterns that humans can’t. Artificial intelligence–essentially the complex algorithms that analyze this data–can be a tool to take full advantage of electronic medical records, transforming them from mere e-filing cabinets into full-fledged doctors’ aides that can deliver clinically relevant, high-quality data in real time.

Carl Sagan on Science and Spirituality (Maria Popova)

Plainly there is no way back. Like it or not, we are stuck with science. We had better make the best of it. When we finally come to terms with it and fully recognize its beauty and its power, we will find, in spiritual as well as in practical matters, that we have made a bargain strongly in our favor.

But superstition and pseudoscience keep getting in the way, distracting us, providing easy answers, dodging skeptical scrutiny, casually pressing our awe buttons and cheapening the experience, making us routine and comfortable practitioners as well as victims of credulity.

Is it OK to be a luddite?

Perhaps, there is some middle-ground, not skepticism or luddism, but what Sean calls digital agnosticism. So often in our discussions of online education and teaching with technology, we jump to a discussion of how or when to use technology without pausing to think about whether or why. While we wouldn’t advocate for a new era of luddism in higher education, we do think it’s important for us to at least ask ourselves these questions.

We use technology. It seduces us and students with its graphic interfaces, haptic touch-screens, and attention-diverting multimodality. But what are the drawbacks and political ramifications of educational technologies? Are there situations where tech shouldn’t be used or where its use should be made as invisible as possible?

Reclaiming the Web for the Next Generation (Doug Belshaw):

Those of us who have grown up with the web sort-of, kind-of know the mechanics behind it (although we could use a refresher). For the next generation, will they know the difference between the Internet and Google or Facebook? Will they, to put it bluntly, know the difference between a public good and a private company?

7 things good communicators must not do (Garr Reynolds): Reynolds creates a short list of items taken from this TED Talk by Julian Treasure. If you can’t watch the video, here are the things to avoid:

1. Gossip
2. Judgement
3. Negativity
4. Complaining
5. Excuses
6. Exaggeration (lying)
7. Dogmatism
Reynolds added another item to the list: 8. Self-absorption.

Personal Learning Networks, CoPs, Connectivism: Creatively Explained (Jackie Gerstein): Really interesting post demonstrating student examples of non-linguistic knowledge representation.

The intent of this module is to assist you in developing a personalized and deep understanding of the concepts of this unit – the concepts that are core to using social networking as a learning venue: Communities of Practice, Connectivism, and Personal Learning Networks. Create one or a combination of the following to demonstrate your understanding of these concepts: a slide show or Glog of images, an audio cast of sounds, a video of sights, a series of hand drawn and scanned pictures, a mindmap of images, a mathematical formula, a periodic chart of concepts, or another form of nonlinguistic symbols. Your product should contain the major elements discussed in this module: CoPs, Connectivism, and Personal Learning Networks. These are connected yet different concepts. As such they should be portrayed as separate, yet connected elements.

The open education infrastructure, and why we must build it (David Wiley)

Open Credentials
Open Assessments
Open Educational Resources
Open Competencies

This interconnected set of components provides a foundation which will greatly decrease the time, cost, and complexity of the search for innovative and effective new models of education.

I enjoyed reading (May)


Stop publishing web pages (Anil Dash):

Start moving your content management system towards a future where it outputs content to simple APIs, which are consumed by stream-based apps that are either HTML5 in the browser and/or native clients on mobile devices.

What happens when everyone is pushing their content out into streams that can be filtered, mixed together, repurposed and republished? I shouldn’t have to go to a page to get your stuff. I should be able to subscribe to your feed. And more than that, I should be able to subscribe to only the parts of your feed that interest me.
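As a rough sketch of what “subscribe to only the parts of your feed that interest me” could look like, assuming a site exposed its posts as a JSON Feed with tagged items (the feed URL and tag names below are made up):

```python
# Minimal sketch: pull a hypothetical JSON Feed and keep only the items
# tagged with topics I care about. The feed URL and tags are placeholders.
import json
import urllib.request

FEED_URL = "https://example.com/feed.json"  # hypothetical endpoint
INTERESTS = {"education", "ai"}             # tags I want to follow

def interesting_items(feed_url, interests):
    """Yield (title, url) for feed items whose tags overlap my interests."""
    with urllib.request.urlopen(feed_url) as response:
        feed = json.load(response)
    for item in feed.get("items", []):
        tags = {tag.lower() for tag in item.get("tags", [])}
        if tags & interests:
            yield item.get("title", "Untitled"), item.get("url", "")

for title, url in interesting_items(FEED_URL, INTERESTS):
    print(f"{title} -> {url}")
```

Whether the filtering happens on my side (as in this sketch) or the publisher’s side doesn’t really matter; the point is that content published as data rather than as pages makes it possible at all.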


Impact factors declared unfit for duty:

I think basing a judgement on the name or impact factor of the journal rather than the work that the scientist in question has reported is profoundly misguided… Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

Of course, the problem is that while I may not think the Impact Factor has any real value, my institution does. Sad face.


The world needs you to stop being boring (Garr Reynolds):

Stop being boring. “The world needs you to stop being boring,” he says. “Everyone can be boring. Boring is easy!” “What will you create that will make the world awesome?” Robby Novak asks. “Nothing if you keep sitting there!” So get up and take the road less traveled — that’s the road that leads to awesome!


This 3D printed bioplastic windpipe saved a baby’s life (Clay Dillow):

Using high resolution imaging to build a digital picture of Kaiba’s trachea, they were able to print a customized biopolymer tracheal splint for the infant using a 3-D printer.

OK, so we can do this now. We can basically take pictures of things and then print them. Perfectly. And it’s getting cheaper. How long before every house (or community) has a 3D printer connected to a database of shared schematics that people can use to print whatever they need?

Related (kind of): Two year old girl receives new trachea made from her own stem cells, and Injectable oxygen keeps people alive without breathing. Science is awesome.


Let there be stoning (Jay Lehr):

We attempt to achieve excellence of written presentation in our journals. We can require no less in our conferences. It is an honor to be accepted as a speaker who will spend the valuable time of hundreds of scientists at a conference. Failure to spend this time wisely and well, failure to educate, entertain, elucidate, enlighten, and most important of all, failure to maintain attention and interest should be punishable by stoning. There is no excuse for such tedium, so why not exact the ultimate penalty?

Is this a bit harsh? No. I don’t think so. I spend a lot of time preparing my presentations. I read up on design principles. I spend ages deciding what font I will use. I choose my pictures carefully. And that’s after I’ve spent a lot of time preparing the academic content. I don’t think it’s unreasonable to expect the same of others. If you don’t have time to prepare well, don’t submit your abstract. See also How to give a presentation that bores your audience.

I enjoyed reading (December)

I’m going to try something new on this blog. At the end of every month I’ll write a short post highlighting the things I particularly enjoyed reading. I found that simply pushing them into a Twitter or Google+ feed would tend to bury them among all of the other things that I wanted to point out to people. I guess this post is a way to say, “Of all the things I read this month, these are the ones I enjoyed the most”. I’m not trying to summarise everything I read, just present a small sampling. I’ll try it out for a few months and see if I like the process.


The web we lost (Anil Dash). A look back over the past 5-10 years of social media and how things have changed, usually not for the better. In many instances, we’re actually worse off now than we were before the rise of the new social platforms. He talks about how we’re progressively losing control of our online identities and of the content we create and share (which is what makes those platforms as powerful as they are), and how we’ve lost sight of the values that actually led to the development of the web in the first place. Here’s a quote from the end of the article:

I know that Facebook and Twitter and Pinterest and LinkedIn and the rest are great sites, and they give their users a lot of value. They’re amazing achievements, from a pure software perspective. But they’re based on a few assumptions that aren’t necessarily correct. The primary fallacy that underpins many of their mistakes is that user flexibility and control necessarily lead to a user experience complexity that hurts growth. And the second, more grave fallacy, is the thinking that exerting extreme control over users is the best way to maximize the profitability and sustainability of their networks.

The first step to disabusing them of this notion is for the people creating the next generation of social applications to learn a little bit of history, to know your shit, whether that’s about Twitter’s business model or Google’s social features or anything else. We have to know what’s been tried and failed, what good ideas were simply ahead of their time, and what opportunities have been lost in the current generation of dominant social networks.

Update: Here’s a follow up post from Anil on Rebuilding the web we lost.


Mobile Learning, Non-Linearity, Meaning-Making (Michael Sean Gallagher). What I liked most about this post is the suggestion, presented below, that the true power of “mobile” is that it transforms every space into a potential learning space.

They refer to the ‘habitus’, the situated locale of the individual. Yet the locale doesn’t define the learning per se as the process of mobile learning transforms the habitus into a learning space. Tools, content, and community are reconstructed to allow for meaning-making. Turning the environment in which we happen to find ourselves into an environment for learning. Mobile technology assists in bringing these elements into conjunction, an organizing agent in this process. But it is really about the transformation. From space to learning space. From noise to meaning.


Arm Teachers? (Tom Whitby). When I first read about the suggestion to arm teachers in the wake of the Newtown shooting, I dismissed it as ridiculous without even considering it. What I liked about this post from Tom is that instead of just dismissing the suggestion out of hand, he follows it through to some of its logical conclusions. I realised that his approach does far more to systematically dismantle the argument than simply rejecting it does.


The demon-haunted world: Science as a candle in the dark (Carl Sagan). Carl Sagan is one of my heroes. Few people have done as much as he did to bring a sense of wonder about the world to the public. This book is an exploration of scientific thinking over the past few centuries, highlighting the many areas where the lack of a critical approach to the world has caused our species to stumble. Think of the hysteria of witch-burning, UFO abductions, racism and all the other instances where a lack of critical thought has brought so much suffering and misunderstanding about the world. This book should be required reading for everyone.


The robot teachers (Stephen Downes). Stephen argues that universities, and higher education in general, operate as a system designed to maintain the division between a cultural elite and everyone else. He suggests that the solution is not to open up those institutions (i.e. MIT, Harvard, etc.) but to build a better system outside of them.

We must develop the educational system outside the traditional system because the traditional system is designed to support the position of the wealthy and powerful. Everything about it – from the limitation of access, to the employment of financial barriers, to the creation of exclusive institutions and private clubs, to the system of measuring impact and performance according to economic criteria, serves to support that model. Reforming the educational system isn’t about opening the doors of Harvard or MIT or Cambridge to everyone – it’s about making access to these institutions irrelevant. About making them an anachronism, like a symphony orchestra, or a gentleman’s club, or a whites only golf course, and replaced with something we own and build for everyone, like punk music, a skateboard park, or the public park.
