Making a Start

Welcome!

This is the first post to this blog, and I wanted to give a brief outline of where it will head initially. If it meets with some level of success then I would hope, and imagine, that many conversations will develop, and those may well take it somewhere different from what I picture today. But that's a good thing. Let's make a start.

The public’s perception of AI today

AI and Machine Learning are widely misunderstood, and it seems to me that the average person in the street conjures up images of "mad scientists" creating world-destroying robots, or of some malevolent alien life force, the product of an evolution that allowed machines to take on the characteristics of biological life forms. If you look at the subtext of most of these "fantasies", I think they are really parables of the cruelties that humans inflict on one another, rather than grand tales of heroes versus monsters in the great tradition of the Greek classics, such as Homer's epic poem, The Odyssey.

Contrary to the slow, incremental advances in the science and engineering of machine intelligence, the public seem to have a gnawing doubt that something terrible may be unleashed on them from the depths of a robotics lab somewhere. This is understandable: minor miracles have poured out of companies like Google, Samsung and Apple, and if they can do that, why is it so unlikely that something more self-interested will emerge? By that I mean a product more interested in itself, rather than the obvious self-interest of the shareholders of these companies. Surely, if something as fantastical as a mobile phone, or the global SatNav network, can be constructed, then conscious, sentient machines are not too far behind.

So, taking it as read that the public suspects there is something lurking in the shadows, and that we have some prominent figures in science and engineering telling us we should be wary (Stephen Hawking warns artificial intelligence could end humanity), I wanted to start digging into why this is the case, beyond a superficial ignorance of the science and mathematics of AI.

Today's AI is tomorrow's MP3 Player

Think back 5 years, or 10, or 15, and, sticking just to computers and electronics, there are many things our machines routinely accomplish today that only a few years ago could only be done by humans. What once demanded a conscious intellect is now achieved by CPU cycles and software. Speech recognition is one great example, and we will come back to it in more detail in a later post; suffice it to say for now that 20 years ago the mathematical and algorithmic foundations of today's "Siris" were being laid down in Speech Recognition laboratories around the world. Whilst it was not my own specific field of research, I was closely aligned with much of this work, so I have had the benefit of watching it slowly, ever so slowly, evolve from the lab, to products like Dragon Dictate, all the way through to Google Translate.

This translation intelligence was foreshadowed by Douglas Adams' Babel Fish in "The Hitchhiker's Guide to the Galaxy", which was described as "Probably the oddest thing in the Universe." Interestingly, Douglas Adams didn't choose to have a machine perform this feat: it was a biological organism that evolved to do it, and it was responsible for starting "more and bloodier wars than anything else in the history of creation". I think Adams' comic point was that the Babel Fish did not act intelligently; instead it facilitated miscommunication, by allowing different cultures to understand each other's language without understanding the context, hence the wars.

So we have increasingly cheap CPU cycles with which to perform large numbers of mathematical and algorithmic calculations, and as a result we can build machines that do some of the things that historically only humans could do.

When did maths and algorithms start doing things that previously only humans could?

To be frank, I am not going to claim I know when (or even what the activity was), but we can look at what we think of as the humble graph as a very good example of early AI. I can hear the question reverberating in your mind: what does a drawing on a piece of paper have to do with AI? I will get to that, but first let's spend a minute looking at the history of the chart.

In 1644, in what is sometimes considered the first known graph of statistical data, Michael Florent van Langren showed estimates of the difference in longitude between Toledo and Rome. Some recently discovered documents help to date the genesis of this graphic to before March 1628, and shed some light on why van Langren chose to display this information in this form. For a detailed discussion see: The First (Known) Statistical Graph: Michael Florent van Langren and the "Secret" of Longitude.

van Langren's 1644 graph of determinations of the distance, in longitude, from Toledo to Rome. The correct distance is 16.5°. Source: Tufte (1997, p. 15).


Van Langren's 1644 graph contributed to a more precise determination of longitude, but that value, 16.5°, would remain imprecisely known for over 100 years. More important for my argument, this new tool allowed a collection of highly complex data (for the time) to be organised, perceived, interpreted and understood by someone without expert knowledge of the mathematics behind longitude. They did need to learn how to use the tool called a graph, but once they had, they could apply it, with a few of their own thought cycles, to all sorts of data and problems. It provided an intuitive way to understand a series of number pairs for people who were not experts in the underlying processes and machinations that caused the data to manifest in the first place.
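To make that reduction concrete, here is a minimal Python sketch of the same idea van Langren executed on paper: placing each of several competing estimates on a single one-dimensional axis so the spread is visible at a glance. The estimate values below are hypothetical stand-ins for illustration, not van Langren's actual data.

```python
# Render a one-dimensional "dot plot" of longitude estimates as text.
# The estimates are hypothetical illustration values, not van Langren's data.

def dot_plot(estimates, lo=15.0, hi=30.0, width=60):
    """Map each estimate in [lo, hi] degrees onto a character axis."""
    axis = [" "] * width
    for value in estimates:
        pos = int((value - lo) / (hi - lo) * (width - 1))
        axis[pos] = "*"
    return "".join(axis)

estimates = [17.7, 19.6, 20.8, 21.5, 25.4, 26.0, 27.7]  # degrees of longitude
print(dot_plot(estimates))
```

One line of asterisks conveys the disagreement among observers to a reader who knows nothing of the mathematics of longitude, which is exactly the trick the graph performs.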

Why is this relevant?

Any AI system in existence today is a manifestation of the same principle embodied by the graph: take complex data, usually representing real-world measurements of some phenomena, and apply a mathematical method that reduces it into representations or categories recognisable by a layperson.

  • Words in Japanese rendered in English.
  • Images of faces classified as known or unknown.
  • The answers in Jeopardy! statistically matched to the best answer in the huge dataset of the world wide web.
  • A system forecasting the movement in the price of a stock or a foreign exchange rate.

All these things are considered AI today but they are nothing more than sophisticated multidimensional graphs.
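As a toy illustration of that claim, here is a hedged Python sketch of a nearest-centroid classifier: a multidimensional measurement is reduced to a single human-readable category, just as a graph reduces number pairs to a point on a page. The measurement vectors and labels are invented for illustration; real face-recognition systems use far richer features and methods.

```python
import math

# Toy nearest-centroid classifier. All vectors and labels are invented:
# each "centroid" stands in for the average measurement of a category.

def distance(a, b):
    """Euclidean distance between two equal-length measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(measurement, centroids):
    """Return the label whose centroid lies nearest the measurement."""
    return min(centroids, key=lambda label: distance(measurement, centroids[label]))

centroids = {
    "known face": (0.2, 0.8, 0.5),
    "unknown face": (0.9, 0.1, 0.4),
}

print(classify((0.25, 0.75, 0.5), centroids))  # nearer the "known face" centroid
```

The layperson never sees the geometry; they see only the label, which is the whole point of the reduction.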

So next post I will start discussing what I think Artificial Intelligence really is.
