It seems like the discussion on Artificial Intelligence (AI) has moved quite a way since I wrote about it just a few months ago. Or maybe because my Reticular Activating System has now been programmed to notice these things, it's just doing its job.
In that post I wrote about one of the most common tropes of AI, self-driving vehicles. I'll come back to that, because I promised to relate how the scenario of a lizard crossing the road in front of a car ended in real life.
The charismatic Elon Musk has recently reiterated his concern about the pace of AI development, opining that it is dangerous to allow this to happen in the current environment, free of any regulatory constraints. Musk's concern seems to be aimed primarily at that other trope of artificial intelligence, the psychotic killer robot. In science fiction the scariest example of this is Arnold Schwarzenegger's unstoppable Terminator. Perhaps the best-known example of the self-driving car in fiction is the suavely British-accented KITT in the 1980s series Knight Rider.
While science fiction has often proved to be the most effective predictor of the future, what it usually fails to take into consideration are two key fundamentals: physics and economics. Both KITT and the Terminator possess powers and invulnerability closer to those of a superhero than to any practical technology. Where would the economic capacity and desire to create such devices come from, even if it were possible?
Money can certainly take us a long way: when the US President convinced his people to go to the moon, essentially unlimited funds were provided to meet that goal, which was barely achieved in the timeframe set. Whether America would have stuck with the goal if Kennedy had not been assassinated will always be a matter of conjecture, but history shows that once the goal had been achieved, the money tap was rapidly turned off and, much to the frustration of NASA and the scientific community, has never been opened to the same extent since.
So much, then, for the imagination of science fiction writers such as the great Arthur C. Clarke, who was not unusual in predicting we would have a colony on the moon by 2001. On top of that, we would have to concede that the pace of fundamental scientific development in tech has also slowed since. While computers have unquestionably become more powerful, this is mostly due to engineering improvements, not advanced research. There is little fundamental architectural difference between the smartphone on which I am writing this post and the computers that launched Gemini and Apollo fifty years ago. Indeed, much of the code written then was still being used in the Space Shuttle program well into the 21st century.
So I think it will be a little while yet before the full promise of AI technologies will be available in the general market.
However, progress is steadily being made; there are practical examples of AI that we are all familiar with, such as the voice-activated "personal assistants" that come with most smartphones. Microsoft has taken some useful steps, including building automatic translation into products such as Outlook and Yammer, and offering practical tools such as Azure Machine Learning. Microsoft can also boast one of the most notorious examples of an embarrassing unintended consequence of AI experimentation, in the Tay chat bot incident.
My experience so far with AI tools such as Siri on my iPhone has been mixed. It sometimes feels like I'm dealing with a reptile brain with a pleasant voice rather than a real intelligence. "Shut up, Alexa" seems like a useful phrase that I've heard a few times recently.
But there's no doubt that some things are better for automation, which takes me back to my experience at the wheel of a speeding car in the Great Australian Desert when the lizard ran out in front of me. By now you've probably guessed that the big lizard, known affectionately to us as Larry forever afterwards, did not survive the encounter.
I think I'm a pretty good driver; I've never had a serious accident in over forty years of driving and my reaction times are above average. I didn't swerve, brake too heavily or otherwise overreact. I slowed slightly and directed the car to the gap between the lizard and the road. There was plenty of room and our car was the only traffic in sight. Sadly for Larry - although possibly to the benefit of the lizard gene pool - he decided that his best move was to suddenly reverse course about a tenth of a second before his path intersected with that of the car's right bumper.
The reaction of everyone in the car was the same: immediate horror, rapidly followed by complacency. It was only a lizard, after all; no major harm done.
Except that a life had been lost. None of us can reanimate a dead lizard or any other organism. For all of our tech might that remains beyond us - at least for now.
The only way for us to survive emotionally in a busy world full of such micro-tragedies is to shut off our empathy. We take the same attitude to road toll or surgical mishap statistics - until it directly affects us or someone we love.
A self-driving car probably would not have killed Larry the Lizard. For all of the acknowledged new risks it creates, Artificial Intelligence has the potential to prevent all kinds of tragedy.
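The intuition behind that claim comes down to reaction time: Larry reversed course about a tenth of a second before impact, well inside human limits but not necessarily inside a machine's. A minimal back-of-the-envelope sketch makes the gap concrete. The speed and latency figures below are my own illustrative assumptions, not measurements from any real vehicle or system.

```python
# Hypothetical illustration: ground covered while the "driver" is still
# reacting, for a human versus an automated system with much lower
# sensing-to-braking latency. All figures are assumptions for the sketch.

def reaction_distance_m(speed_kmh: float, latency_s: float) -> float:
    """Distance travelled (in metres) before any evasive action begins."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * latency_s

# Assumed: ~110 km/h outback highway speed, ~1.5 s typical human
# perception-reaction time, ~0.1 s for an automated system.
human = reaction_distance_m(110, 1.5)
machine = reaction_distance_m(110, 0.1)

print(f"human:   {human:.1f} m")   # roughly 46 m before the driver reacts
print(f"machine: {machine:.1f} m") # roughly 3 m
```

At highway speed, a human driver travels tens of metres before even beginning to respond; a system reacting an order of magnitude faster has correspondingly more room to adjust when a lizard changes its mind at the last instant.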
I for one am up for the challenge this represents, because we will be better off as a species when we get there.