Categories
technology

Delete All Your Apps

A good question to ask yourself when evaluating your apps is “why does this app exist?” If it exists because it costs money to buy, or because it’s the free app extension of a service that costs money, then it is more likely to be able to sustain itself without harvesting and selling your data. If it’s a free app that exists for the sole purpose of amassing a large amount of users, then chances are it has been monetized by selling data to advertisers.

Koebler, J. (2018). Delete all your apps.

This is a useful heuristic for making quick decisions about whether or not an app should be installed on your phone. Another good rule of thumb: “If you’re not paying for the product, then you are the product.” Your personal data is worth a lot to companies that will either use it to refine their own AI-based platforms (e.g. Google, Facebook, Twitter) or sell your (supposedly anonymised) data to those companies. This is how things work now: you give them your data (connections, preferences, brand loyalty, relationships, etc.) and they give you a service “for free”. But as we’re seeing more and more, it really isn’t free. This is especially concerning when you realise how often your device and apps are “phoning home” with reports about you and your usage patterns, sometimes as frequently as every 2 seconds.

On a related note, if you’re interested in a potential technical solution to this problem, you may want to check out Solid (social linked data) by Tim Berners-Lee, which aims to let you maintain control of your personal information while still sharing it with third parties under conditions that you specify.


Categories
education ethics physiotherapy technology

My presentation for the Reimagine Education conference

Here is a summarised version of the presentation I’m giving later this morning at the Reimagine Education conference. You can download the slides here.

Categories
clinical technology

E.J. Chichilnisky | Restoring Sight to the Blind

Source: After on podcast with Rob Reid: Episode 39: E.J. Chichilnisky | Restoring Sight to the Blind.

This was mind-blowing.

The conversation starts with a basic overview of how the eye works, which is fascinating in itself, but then they start talking about how they’ve figured out how to insert an external (digital) process into the interface between the eye and brain, and that’s when things get crazy.

It’s not always easy to see the implications of converting physical processes into software, but this is one of those conversations that makes them easy to grasp. When we use software to mediate the information that the brain receives, we’re able to manipulate that information in many different ways. For example, with this system in place, you could see wavelengths of light that are invisible to the unaided eye. Imagine being able to see in the infrared or ultraviolet spectrum. But it gets even crazier.

It turns out we have cells in the interface between the eye and brain that process different kinds of visual information (for example, reading text and evaluating movement). When both types of cell receive competing information at the same time, we find it really hard to process both simultaneously. But if software could route each kind of information directly to the cells responsible for processing it, we could do things like read text while driving. The brain wouldn’t be confused because the information isn’t coming via the eyes at all, so the different streams are processed as two separate channels.

Like I said, mind-blowing stuff.


Categories
AI technology

a16z Podcast: Network Effects, Origin Stories, and the Evolution of Tech

If an inferior product/technology/way of doing things can sometimes “lock in” the market, does that make network effects more about luck, or strategy? It’s not really locked in though, since over and over again the next big thing comes along. So what does that mean for companies and industries that want to make the new technology shift? And where does competitive advantage even come from when everyone has access to the same building blocks of innovation?

This is a wide-ranging conversation about the history of technology (mainly in Silicon Valley) and its impact on society. If you’re interested in technology in a general sense, rather than in specific applications or platforms, this is a great conversation that digs into the implications of technology at a fundamental level.

Categories
education technology

In Beta and sunsetting consumer Google+

Action 1: We are shutting down Google+ for consumers.

This review crystallized what we’ve known for a while: that while our engineering teams have put a lot of effort and dedication into building Google+ over the years, it has not achieved broad consumer or developer adoption, and has seen limited user interaction with apps. The consumer version of Google+ currently has low usage and engagement: 90 percent of Google+ user sessions are less than five seconds.

I don’t think it’s a surprise to anyone that Google+ wasn’t a big hit, although I am surprised that they’ve taken the step of shutting it down for consumers. And this is the problem with online communities in general: when the decision is made that they’re not cost-effective, they’re shut down regardless of the value they create for community members.

When Ben and I started In Beta last year, we decided to use Google+ for our community announcements and have been pretty happy with what we’ve been able to achieve with it. The community has grown to almost 100 members and, while we don’t see much engagement or interaction, that’s not why we started using it. For us, it was a place to make announcements about planning for upcoming episodes, and since we didn’t have a dedicated online space, it made sense to use something that already existed. Now that Google+ is being sunset, we’ll need to figure out another place to set up the community.


Categories
AI technology

Mozilla’s Common Voice project

Any high-quality speech-to-text engines require thousands of hours of voice data to train them, but publicly available voice data is very limited and the cost of commercial datasets is exorbitant. This prompted the question, how might we collect large quantities of voice data for Open Source machine learning?

Source: Branson, M. (2018). We’re intentionally designing open experiences, here’s why.

One of the big problems with the development of AI is that few organisations have the large, inclusive, diverse datasets that are necessary to reduce the inherent bias in algorithmic training. Mozilla’s Common Voice project is an attempt to create a large, multilingual dataset of human voices with which to train natural-language AI.

This is why we built Common Voice. To tell the story of voice data and how it relates to the need for diversity and inclusivity in speech technology. To better enable this storytelling, we created a robot that users on our website would “teach” to understand human speech by speaking to it through reading sentences.

I think that voice and audio are probably going to be the next computer-user interface, so this is an important project to support if we want to make sure that Google, Facebook, Baidu and Tencent don’t have a monopoly on natural language processing. I see this project existing on the same continuum as OpenAI, which aims to ensure that “…AGI’s benefits are as widely and evenly distributed as possible.” Whatever you think about the possibility of AGI arriving anytime soon, I think it’s a good thing that people are working to ensure that the benefits of AI aren’t mediated by a few gatekeepers whose primary function is to increase shareholder value.

Most of the data used by large companies isn’t available to the majority of people. We think that stifles innovation. So we’ve launched Common Voice, a project to help make voice recognition open and accessible to everyone. Now you can donate your voice to help us build an open-source voice database that anyone can use to make innovative apps for devices and the web. Read a sentence to help machines learn how real people speak. Check the work of other contributors to improve the quality. It’s that simple!

The datasets are openly licensed and available for anyone to download and use, alongside other open language datasets that Mozilla links to on the page. This is an important project that everyone should consider contributing to. The interface is intuitive and makes it very easy to either submit your own voice or to validate the recordings that other people have made. Why not give it a go?
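If you’d rather work with the data than donate a recording, here’s a minimal sketch of exploring a downloaded Common Voice release. It assumes (based on how the releases have been packaged; check the version you download) that the archive extracts to a clips/ directory of audio files plus tab-separated metadata files such as validated.tsv with path and sentence columns; the cv-corpus/en/ path below is just a placeholder.

```python
# Minimal sketch: pair Common Voice clips with their transcripts.
# Assumes an extracted release at cv-corpus/en/ containing validated.tsv
# (with at least "path" and "sentence" columns) and a clips/ directory.
import pandas as pd

metadata = pd.read_csv("cv-corpus/en/validated.tsv", sep="\t")
print(f"{len(metadata)} validated clips")

# Build (audio file, transcript) pairs, e.g. as input to a speech-to-text pipeline
pairs = [
    ("cv-corpus/en/clips/" + row.path, row.sentence)
    for row in metadata.itertuples(index=False)
]
print(pairs[:3])
```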

Categories
AI education technology

Technology Beyond the Tools

You didn’t need to know about how to print on a printing press in order to read a printed book. Writing implements were readily available in various forms in order to record thoughts, as well as communicate with them. The use was simple requiring nothing more than penmanship. The rapid advancement of technology has changed this. Tech has evolved so quickly and so universally in our culture that there is now literacy required in order for people to effectively and efficiently use it.

Reading and writing as a literacy was hard enough for many of us, and now we are seeing that there is a whole new literacy that needs to be not only learned, but taught by us as well.

Source: Whitby, T. (2018). Technology Beyond the Tools.

I wrote about the need to develop these new literacies in a recent article (under review) in OpenPhysio. From the article:

As clinicians become single nodes (and not even the most important nodes) within information networks, they will need data literacy to read, analyse, interpret and make use of vast data sets. As they find themselves having to work more collaboratively with AI-based systems, they will need the technological literacy that enables them to understand the vocabulary of computer science and engineering that enables them to communicate with machines. Failing that, we may find that clinicians will simply be messengers and technicians carrying out the instructions provided by algorithms.

It really does seem like we’re moving towards a society in which the successful use of technology is, at least to some extent, premised on your understanding of how it works. As educators, it is incumbent on us to 1) know how the technology works so that we can 2) help students use it effectively while avoiding exploitation by for-profit companies.

See also: Aoun, J. (2017). Robot-Proof: Higher Education in the Age of Artificial Intelligence. MIT Press.

Categories
technology

With every answer, search reshapes our worldview

Our search engines tried to impose structure and find relationships using mainly unintentional clues. You therefore couldn’t rely on them to find everything that would be of help, and not because the information space was too large. Rather, it was because the space was created by us slovenly humans.

Source: Weinberger, D. (2017). With every answer, search reshapes our worldview.

Interesting article on how search algorithms have changed as the web has grown in scale. In the beginning, results were determined by precision and recall (although optimising for one meant reducing the other). Then relevance had to be included as the number of possible results became too large: when 100 000 articles match the topic, the search engine must decide how to rank them for you. Over time, interestingness was also built into the algorithm; it’s not just that the results should be accurate and relevant, but that they should be interesting too.
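As a toy illustration of the precision/recall trade-off described here (the query and document sets below are invented purely for the example):

```python
# Toy example of precision and recall for a single search query.
# "relevant" is what the user actually wanted; "retrieved" is what the
# engine returned. Both sets are made up for illustration.
relevant = {"doc1", "doc2", "doc3", "doc4"}
retrieved = {"doc1", "doc2", "doc9"}

hits = relevant & retrieved
precision = len(hits) / len(retrieved)  # fraction of returned results that were useful
recall = len(hits) / len(relevant)      # fraction of useful results that were returned

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# Returning everything maximises recall but ruins precision; returning only the
# single safest match does the opposite. And once millions of documents match,
# neither measure tells you how to order them, which is where relevance
# ranking comes in.
```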

Currently, there’s interest in serendipity, where search engines return results that are slightly different from what you’re looking for (but not so different that you ignore them) and may serve to provide an alternative point of view, helping you avoid the filter bubble. As we move forward, we may also begin seeing calls for an increase in the truthfulness of results (which may reasonably be called quality). As I said, it’s an interesting article that covers a lot with respect to how search engines work, and it’s useful for anyone who has ever told someone to “just Google it”.

Categories
AI technology

The future is ear: Why “hearables” are finally tech’s next big thing

Your ears have some enormously valuable properties. They are located just inches from your mouth, so they can understand your utterances far better than smart speakers across the room. Unlike your eyes, your ears are at work even when you are asleep, and they are our ultimate multi-taskers. Thousands die every year trying to text while they drive, but most people have no problem driving safely while talking or dictating messages–even if music is playing and children are chatting in the background.

Source: Burrows, P. (2018). The future is ear: Why “hearables” are finally tech’s next big thing.

Audio is going to be the next important user interface for human-computer interaction. You could argue that it already is (see Google Home and Assistant, Alexa, Siri, and Cortana). If you think of it as a bandwidth problem, you can see that we can take in so much more information by listening than by reading. And, unlike reading, listening frees us up to do other things at the same time.

Categories
technology

Will Marshall: The mission to create a searchable database of Earth’s surface | TED Talk

And we now have over 200 satellites in orbit, downlinking their data to 31 ground stations we built around the planet. In total, we get 1.5 million 29-megapixel images of the Earth down each day. And on any one location of the Earth’s surface, we now have on average more than 500 images. A deep stack of data, documenting immense change.

Anyone can go online to planet.com open an account and see all of our imagery online. It’s a bit like Google Earth, except it’s up-to-date imagery, and you can see back through time. You can compare any two days and see the dramatic changes that happen around our planet. Or you can create a time lapse through the 500 images that we have and see that change dramatically over time.

What we’re doing with artificial intelligence is finding the objects in all the satellite images. The same AI tools that are used to find cats in videos online can also be used to find information on our pictures. So, imagine if you can say, this is a ship, this is a tree, this is a car, this is a road, this is a building, this is a truck. And if you could do that for all of the millions of images coming down per day, then you basically create a database of all the sizable objects on the planet, every day. And that database is searchable.

I can imagine us abstracting out the imagery entirely and just having a queryable interface to the Earth. Imagine if we could just ask, “Hey, how many houses are there in Pakistan? Give me a plot of that versus time.” “How many trees are there in the Amazon and can you tell me the locations of the trees that have been felled between this week and last week?” Wouldn’t that be great?

This is fantastic. It’s well worth putting aside 20 minutes to watch the video and then go play around at planet.com.
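As a purely hypothetical sketch of what Marshall’s “queryable interface to the Earth” might feel like in practice (the table, columns and numbers below are invented for illustration and are not Planet’s actual API):

```python
# Hypothetical sketch only: a tiny in-memory stand-in for a daily database of
# detected objects, of the kind the talk imagines. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE detections (
        object_type TEXT,   -- e.g. 'house', 'ship', 'tree'
        country     TEXT,
        observed_on TEXT    -- ISO date of the imagery
    )
""")
db.executemany(
    "INSERT INTO detections VALUES (?, ?, ?)",
    [("house", "Pakistan", "2018-10-01"),
     ("house", "Pakistan", "2018-10-01"),
     ("house", "Pakistan", "2018-10-08"),
     ("ship", "South Africa", "2018-10-08")],
)

# "How many houses are there in Pakistan? Give me a plot of that versus time."
for observed_on, count in db.execute(
    "SELECT observed_on, COUNT(*) FROM detections "
    "WHERE object_type = 'house' AND country = 'Pakistan' "
    "GROUP BY observed_on ORDER BY observed_on"
):
    print(observed_on, count)
```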

[Image: up-to-date, high-res image of Cape Town]