Artificial Intelligence and the Singularity of Mankind

Bayram Kara

Jul 1, 2016

For most of history, human civilization was mainly agricultural. Today, we live in the “digital” age. Scientific advancement has never been this rapid, or this dramatic. These ever-accelerating developments have changed our visions of the future and have already led scientists and philosophers to question the fate of mankind.

Many respected thinkers claim that a new era based on artificial intelligence (AI) will follow our own, and that intelligent machines will ultimately succeed us as the primary inhabitants of our planet [www.bbc.com/news/technology-30290540]. Their main rationale is that AI will exceed our limited human intelligence, and that machines will ultimately transcend us in their capabilities. AI has already entered our lives through our smartphones (e.g., voice-recognition software like Siri), Google’s search predictions, Netflix’s movie recommendations, major banks’ fraud-detection algorithms, and many other channels. In the very near future, we will likely see self-driving cars on the streets, drone-based delivery of online purchases, and ‘smart’ machinery in our households. It is certain that AI-based algorithms will substitute for, or at least assist, humans in performing certain tasks. In fact, they already do. But will advances in artificial intelligence research eventually allow humanoid robots to surpass us, not only in specific tasks, but in all ways?

A little history

The Industrial Revolution of the early nineteenth century, and the socio-economic developments that followed, inevitably pushed human culture toward a computerized future. By the Victorian era, financial institutions were carrying out millions of transactions per year. Large populations needed to be surveyed, and governmental censuses required the processing of millions of records. Growth in manufacturing and trade demanded constant bookkeeping. Such tasks were carried out manually by clerks, but the advent of large-scale data called for a far more efficient and reliable means of data processing.

It was this need that led an engineer named Herman Hollerith to develop a mechanical system for data processing in the late nineteenth century. Hollerith commercialized his invention by establishing the Tabulating Machine Company in 1896, which, through later mergers, became IBM.

Such mechanical devices carried out specific tasks very efficiently and replaced error-prone human “computers” (clerks) in processing large amounts of data. More importantly, they drew mathematicians’ interest to defining what a “task” is and what such devices could compute. This led, in the early twentieth century, to the rigorous formulation of the “algorithm”: the procedure a machine carries out. Several tools and celebrated theorems were introduced during these years to describe computability: the lambda calculus, Church’s thesis, the Turing machine, and Gödel’s incompleteness theorem.
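
To make the notion concrete, the sketch below simulates a toy Turing machine in Python. It is a minimal sketch: the `flip_bits` program, the blank symbol `_`, and the step budget are illustrative choices, not part of any historical formulation.

```python
def run_turing_machine(program, tape, state="start", head=0, max_steps=10_000):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))                 # sparse tape representation
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells)).strip("_")
        symbol = cells.get(head, "_")             # "_" denotes a blank cell
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # The step budget is a crude guard: deciding in general whether a
    # machine halts is impossible, as discussed later in this article.
    raise RuntimeError("step budget exhausted; the machine may never halt")

# A machine that flips every bit on its tape, then halts.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))     # prints 01001
```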

Claude Shannon’s pioneering A Mathematical Theory of Communication (1948) laid the foundations of digital communication. With the invention of the transistor, information could be stored and processed in digital form, and digital machines, or “computers,” based on the Turing machine model were built. Electronics manufacturing grew cheaper, ultimately making computers affordable for the public. Apple was established in the 1970s to sell personal computers (PCs). During the same era, Microsoft was founded to provide software for the newly emerging PC market. With the commercialization of the Internet in the 1990s, demand for content search engines rose; Google was founded by two PhD students from Stanford to meet it.

Today, the aforementioned companies are among the largest in the world. A common pattern among them is that their founders read global trends well and were attuned to the course of scientific development. With tenacity and talent, their companies rose to the top of the financial heap in a remarkably short time.

The era of AI?

There is a recent surge of investment in AI. Google has acquired many of the world’s leading robotics firms, including Boston Dynamics. Amazon, Facebook, and Microsoft are all investing in machine learning in the hope of pushing their businesses into the future. The number of AI startups has exploded in the last few years, attracting over 300 million dollars from investors in 2014, up more than 20-fold from four years prior [www.bloomberg.com/news/articles/2015-02-03/i-ll-be-back-the-return-of-artificial-intelligence].

Many scholars point out that an AI revolution is taking place and that staying relevant in tomorrow’s world requires acting today. At the same time, some philosophers and scientists argue that AI poses a threat to our very existence. Could machines inherit our human capabilities and replace us as the predominant intelligent form on Earth?

According to Said Nursi, the progress of human civilization and scientific development reflects God’s desire in humankind “to make manifest and display in the view of the people the majesty of His rule... the wonders of His art, and the marvels of His knowledge, and so that He could behold His beauty and perfection.” In pursuit of this, God created mankind as the vicegerent of the earth and bestowed upon us remarkable abilities. As a manifestation of this, humankind has established civilizations through socio-cultural advancement and dominion over the natural environment. Thus, knowingly or not, humanity has superbly displayed and made known the miraculous art and divine attributes of our Maker.

From a religious viewpoint, does the forecast of intelligent machines surpassing human abilities contradict the purpose of humankind as the vicegerent of the earth? Does it contradict humanity’s status as God’s supreme creation? And what does science tell us?

We know that the Turing machine, on which modern computers are based, has serious limitations. In 1900, the famous mathematician David Hilbert published a list of twenty-three problems that were unsolved at the time. The tenth problem asked for a general algorithm to determine whether a given Diophantine equation with integer coefficients has an integer solution. We now know that no such algorithm exists (a result completed by Yuri Matiyasevich in 1970). Similarly, in the 1930s, Turing himself proved that the halting problem (deciding whether a given Turing machine halts on a given input) is undecidable: it is impossible to construct an algorithm that always gives the correct yes-or-no answer.
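
Turing’s argument is a diagonalization, and it can be sketched in a few lines of Python. The names here are hypothetical: `halts` stands for the impossible decider, and the point of the sketch is precisely that it cannot actually be implemented.

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the program halts on the input.
    Turing's argument shows that no total, always-correct version of
    this function can exist; the stub exists only to state the setup."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    """Do the opposite of whatever `halts` predicts for a program
    that is run on its own source code."""
    if halts(program_source, program_source):
        while True:        # predicted to halt, so loop forever
            pass
    # predicted to run forever, so halt immediately

# Feed `diagonal` its own source: if `halts` says it halts, it loops;
# if `halts` says it loops, it halts. Either answer is wrong, so a
# correct `halts` cannot be written.
```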

At the time, many also wondered whether the Turing machine could serve as the ultimate mathematician. The question was: given a list of axioms and some rules of logic, can a Turing machine prove every true mathematical statement? The rationale was that, since all theorems are derived from a set of axioms and rules of logic, a machine could eventually enumerate all possible theorems and thus verify any given statement.
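
That rationale can be sketched as a naive search procedure. The `derive` helper below is an assumed stand-in for applying the rules of logic; nothing here is a real theorem prover.

```python
from itertools import count

def ultimate_mathematician(statement, axioms, derive):
    """Naive proof search: grow the set of proved statements round by
    round. `derive(known)` is an assumed helper that returns every
    statement obtainable from `known` by one application of the rules
    of logic. If `statement` is a theorem, it is eventually enumerated;
    if not, the search simply never terminates."""
    known = set(axioms)
    for _ in count():
        if statement in known:
            return True           # a derivation was found
        known |= derive(known)    # one more round of inference

# Toy example: axioms are numbers, and the only rule is "n proves n + 1".
print(ultimate_mathematician(5, axioms={0},
                             derive=lambda k: {n + 1 for n in k}))  # True
```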

The question had in fact already been answered a few years earlier. Gödel’s celebrated incompleteness theorem (1931) states that any consistent system (one in which no statement is both true and false) that is rich enough to express arithmetic must contain statements it can neither prove nor refute. This result seriously dampened the enthusiasm of those most excited about the capabilities of the new machines. Ever since Gödel’s discovery, computability theory has been used to chart the limits of computation. It is well known that current computing models, including Turing machines and quantum computers, have their ultimate limits.
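
Stated a little more precisely, in an informal LaTeX rendering (the symbols $T$ and $G_T$ are the standard textbook notation, not the article’s):

```latex
\textbf{Theorem (G\"odel, 1931).} Let $T$ be a consistent, effectively
axiomatized formal system strong enough to express elementary arithmetic.
Then there exists a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T,
\]
that is, $G_T$ can be neither proved nor refuted within $T$.
```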

Unlike computability theory, artificial intelligence is concerned with developing algorithms that learn from, or adapt to, their environment in order to perform a particular task. Though such algorithms inherit the ultimate restrictions of the computing model they run on, computers can still be taught to perform certain tasks, such as visual and speech recognition. The technological advances of the last few decades are mind-blowing, as algorithms for tasks such as human/face detection already achieve high accuracy. However, we are still far from the kind of robots we see in certain sci-fi movies and shows.
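
As a toy illustration of what “teaching” a machine a recognition task looks like in practice, here is a minimal sketch using scikit-learn’s bundled handwritten-digit dataset; the choice of dataset and model is illustrative, not any specific system mentioned above.

```python
# Learn to recognize handwritten digits from labeled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()                     # 1,797 labeled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple learned classifier
model.fit(X_train, y_train)                # "learn" from labeled examples

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```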

The question remains whether near-future progress in AI will eventually allow machines to surpass humans in their skills. But what makes humans, well, human? Computability theory first had to define, mathematically, what an algorithm is before machine models executing such algorithms could be developed. We have no comparably rigorous definition of what makes humans human, and that gap keeps the AI-versus-humans debate alive. Denying our spiritual side will most likely lead people to continue seeing AI as the next evolutionary step; whereas, given the undeniable spiritual capacity of humans, it is not wrong to say that although AI will increasingly dominate our lives, it will never fully replace us.

Bayram Kara - PhD candidate researching machine learning.