Defining artificial intelligence is no easy matter. Since the mid-20th century, when it was first recognized as a field of research in its own right, AI has always been envisioned as an evolving frontier rather than a settled discipline. Fundamentally, it refers to a programme whose ambitious objective is to understand and reproduce human cognition, and to create cognitive processes comparable to those found in human beings.

We are therefore dealing with a very wide scope, both in terms of the technical procedures that can be employed and of the disciplines that can be called upon: mathematics, information technology, cognitive sciences, etc. There is a great variety of approaches to AI: ontological approaches, reinforcement learning, adversarial learning and neural networks, to name just a few. Most of them have been known for decades, and many of the algorithms used today were developed in the 1960s and 1970s.

Since the 1956 Dartmouth conference, artificial intelligence has alternated between periods of great enthusiasm and disillusionment, of impressive progress and frustrating failures. Yet it has relentlessly pushed back the limits of what was thought to be achievable only by human beings. Along the way, AI research has achieved significant successes: outperforming human beings in complex games such as chess and Go, understanding natural language, etc. It has also played a critical role in the history of mathematics and information technology. Consider how much of the software we now take for granted once represented a major breakthrough in AI: chess apps, online translation programmes, etc.

By: Cédric Villani
