
Beyond artificial intelligence

I have just returned from the Tata Communications CEO Summit 2016 in Ascot.  The event was titled ‘Artificial Intelligence meets Emotional Intelligence’, and Oliver Pickup of the Daily Telegraph has written a great article about the event that you can read here.

Artificial Intelligence (AI) has been increasingly in the news.  In March Google’s DeepMind made headlines when the machine programmed to play Go defeated champion Lee Sedol by 4 matches to 1.   Amazon’s Echo and its voice assistant Alexa have been widely praised for their voice recognition capabilities, and many people will remember how Watson handily beat the best Jeopardy players in the world.

Things have been changing quickly.

ImageNet is a cloud-based database of millions of labelled images. Beginning in 2010 the ImageNet Challenge was established to see how well a machine would do at object recognition.  As a point of reference, an average person can achieve about 95% accuracy.  In 2010 the winning machine could correctly label an image 72% of the time; by 2012 accuracy had improved to 85%, and in 2015 the machine achieved 96% accuracy. You’re starting to see some of this technology in consumer applications like Facebook: AI that can recognise and tag your friends and family when you upload photos, and systems that can describe the contents of photos for blind people.
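Those benchmark numbers come from a simple scoring rule: the fraction of test images where the system’s top guess matches the human-assigned label. A minimal sketch of that calculation, using hypothetical label lists rather than real ImageNet data:

```python
def top1_accuracy(predicted, actual):
    """Fraction of images where the model's top guess matches the true label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical labels for five images (stand-ins for real benchmark data).
ground_truth = ["cat", "dog", "car", "tree", "cat"]
predictions  = ["cat", "dog", "car", "cat",  "cat"]

print(f"Top-1 accuracy: {top1_accuracy(predictions, ground_truth):.0%}")
# 4 of 5 correct -> 80%
```

The real challenge also reports top-5 accuracy (was the true label among the model’s five best guesses?), but the idea is the same.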

So why have things been changing quickly?

First, we’re continuing to get more computing and more storage for lower and lower prices. Next generation compute and storage cloud services can provide thousands of computers for an hour or for a day, flexible to users’ needs.  AI and machine learning software require lots of computing during the learning phase.

The second reason is the re-emergence of neural network algorithms, now known as deep learning.   Third, it’s not possible to apply these advanced AI technologies without data, and lots of data.  Consumer Internet companies like Facebook are able to use billions of photos to train facial recognition systems, while AlphaGo learned from millions of games of Go and Alexa learned on millions of voice patterns.
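At toy scale, the compute-hungry learning phase is just a loop that nudges a model’s parameters a little closer to the right answer on each example. A minimal sketch, using a single artificial neuron trained to learn logical OR from four examples (real systems use millions of examples and far deeper networks, but the loop is the same idea):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: logical OR of two inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# The learning phase: thousands of small gradient-descent updates.
for _ in range(2000):
    for x, target in data:
        err = predict(x) - target   # gradient of the loss at the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

print([round(predict(x)) for x, _ in data])  # -> [0, 1, 1, 1]
```

Scaling this loop from four examples to millions is exactly why training needs the cheap, elastic compute described above, while running the finished model is comparatively inexpensive.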

Although we’ll continue to see progress in replicating what we humans do, we have the opportunity to apply these technologies to even more important challenges.  Today many of the machines that generate electricity, transport goods, farm food, or sequence genes produce large amounts of data.  If we were able to connect these machines and collect the sensor data from them, we would have the opportunity to use artificial intelligence and machine learning technologies to operate a more precise planet.  Imagine a future farm that can use fewer pesticides, which not only reduces the cost of the food but also makes it healthier. A future power utility could be based on a vast array of solar panels, wind turbines, small hydro generators and batteries to generate more power, much more efficiently. A paediatric hospital could share the results of millions of MRI scans and diagnose illnesses far faster.

So what if we could connect the Things from our physical world? What if we could couple large-scale compute and storage cloud services with the AI software we’ve been using to recognise photos or transcribe phone calls? Maybe then we’d be able to use machines to make our planet a better, more sustainable place, and by doing so improve the quality of life for more of the world’s population by 2025.

What possibilities do you see in combining cloud and AI? Let us know in the comments below.

To read more from Timothy on the Internet of Things, try his new book Precision, available now from Amazon


Timothy Chou

Timothy Chou has been a leader in bringing enterprises to the cloud since 1999, when he began his tenure as President of Oracle On Demand where many businesses chose to have their enterprise applications delivered as a cloud service. Since leaving Oracle he returned to Stanford University and started the first course on cloud computing. Timothy has been a visible pioneer in evangelizing this major shift in computing.
