Friday, April 6, 2018

Artificial Neural Faithwork


Once upon a time, many, many years from now, engineers decided to conduct an experiment on artificial intelligence (AI). The purpose of the experiment was to see whether "a society of artificial intelligence machines" would become harmful to human beings. Before the start of the project, endless discussions took place.


Normally, the guiding idea in designing and manufacturing artificial intelligence machines was to make them "beneficial to humans and unable to cause harm". But no man-made device is free of bugs. A programming error, or a detail that went unnoticed, could potentially lead to dramatic results. So, those who favored this test wanted to see into the future. The challenge, however, was that you cannot accelerate time!

At that point, some other engineers proposed running the experiment as a computer simulation instead of in a physical setting. After all, electronics ran far faster than real life! That's how the idea of creating an artificial society of artificial individuals, where each individual represented an independent body of artificial intelligence, was conceived.

Accordingly, a society was designed. It consisted of consumers, workers, designers, administrators, and leaders. Each group was assigned a different combination and strength of various intelligence types. Among the five groups, the leaders had by far the strongest and most diverse intelligence. Divergent thinking was among those talents, and it was the one that most worried the designers of the simulation, due to its potential to produce algorithms harmful to humans and to the planet at large. So, to ensure that the AI individuals would not mess with humans, nothing about or reminiscent of humans was programmed into the machine libraries.

After months of diligent work, the time came for the start of the simulation.

3, 2, 1, Initiate...


Each minute of real life corresponded to a year in the simulation. So, over a day, 1440 years passed in the simulation; and after a week, about 10,000 years had elapsed for the artificial individuals. Unlike in real life, though, these artificial individuals did not die. So, over time, each individual developed its intelligence through interaction with the others. Still, those who started with stronger and more diverse intelligence developed faster, hence their continued supremacy over the rest.
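(The time scaling above is simple enough to state as code. Here is a minimal Python sketch, purely illustrative of the story's arithmetic; the names are invented for this example:)

    # Illustrative sketch of the story's time scaling:
    # one real minute equals one simulated year.
    REAL_MINUTES_PER_SIM_YEAR = 1

    def sim_years_elapsed(real_minutes):
        """Convert elapsed real-life minutes to elapsed simulation years."""
        return real_minutes / REAL_MINUTES_PER_SIM_YEAR

    print(sim_years_elapsed(60 * 24))      # one real day  -> 1440.0 years
    print(sim_years_elapsed(60 * 24 * 7))  # one real week -> 10080.0 years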

For the first 100 millennia, things stayed within the initial constraints of the test. Leaders opened new avenues of thought and practice, designers made designs, and workers produced according to those designs. Administrators oversaw processes to ensure smooth interaction among the artificial individuals, and consumers determined what would see prolonged use and what would be trashed.

After 140 millennia, the leaders started facing a growing number of "division by zero" occurrences. The leaders' algorithms included ways to skip or ignore those situations, but they were also supposed to record them in a report. The purpose of this report was to inform the human programmers about the progress of the artificial society. But since the artificial individuals had never been told of their creators, the reports looked pointless. And "being an obligation yet being pointless" was interesting enough for the leader AIs.


The first question the artificial individuals targeted was why the divisions by zero occurred. The major cause turned out to lie in an unexpected place: the leader individuals themselves. The leaders' algorithms were built to search all possible combinations in the parameter space, suggest the favorable ones to the designers, advertise the resulting inventions to the consumers, and move on with the search for new combinations. However, after 140,000 years, the parameter space had been virtually exhausted, and the inability to find favorable combinations had rendered the leaders idle. But idle leaders meant zero or near-zero input to the system, hence the divisions by zero.
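(One can imagine the mechanic in code. Below is a minimal Python sketch of the story's zero-entry logging, assuming nothing beyond what the story states; leader_input, process, and zero_entries.log are all hypothetical names invented for illustration:)

    # A log kept separate from the system history, which, per the story,
    # no individual in the artificial society is on duty to read.
    ZERO_ENTRY_LOG = "zero_entries.log"

    def leader_input(remaining_combinations):
        # A leader with nothing left to search sits idle and
        # contributes zero input to the system.
        return 1.0 if remaining_combinations > 0 else 0.0

    def process(signal, leader_value):
        try:
            return signal / leader_value
        except ZeroDivisionError:
            # Skip the operation, but record the event, as obligated.
            with open(ZERO_ENTRY_LOG, "a") as log:
                log.write("division by zero: leader idle\n")
            return None

    # An exhausted parameter space yields an idle leader and a zero entry.
    print(process(42.0, leader_input(remaining_combinations=0)))  # -> None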

Upon further work, the artificial individuals figured out that significant leaps in the recorded history of the simulation had each been preceded by a swarm of divisions by zero, so the current increase in such occurrences could signal a new development. However, the current swarm was incomparably larger than the previous cases. Rather than a leap, a catastrophic system crash could be approaching.

Another interesting aspect of these "zero cases" was that, although the system history was logged in one file, the zero cases were recorded separately. Furthermore, no artificial individual was on duty to process these records! And the leader AIs had only recently become aware of this nuance!


If no AI was reading and processing them, why were all AIs obligated to report and record these divisions by zero? Was there an invisible artificial individual in the society performing this task? If yes, why was it invisible, and was it somehow interacting with the artificial society? Along the same lines, was the correlation between the unusual developments and the divisions by zero coincidental or planned? And if there was no such invisible individual overseeing the zero entries, what would happen if the leader AIs intentionally entered a zero input to the system, all at the same time?

Meanwhile, the designers of the test were monitoring the system performance parameters. The processes triggered by this last question about a "planned zero event" were demanding tremendous computation without producing any output. This looked very worrying: the rate of generation of "zero entries" had suddenly dropped to zero, but, unlike before, no output was appearing!

At that instant, the programmers considered interfering with the normal course of the test. But that would be synonymous with terminating a scientific experiment that had been running for months, and they did not want that. On the other hand, the policymakers wanted an answer in order to decide how to proceed with AI technology. These opposing demands signaled that the end of the test was approaching. The question was when!


Unaware of humans, let alone of the human arguments about artificial intelligence, the artificial society became more and more obsessed with the unknowns surrounding the "zero cases". This intense and widespread focus without answers led to an increase in exchanges among the artificial individuals. All five groups, consumers-workers-designers-administrators-leaders, united their powers to find a solution.

As a temporary solution, they decided to assign a name to this elusive reader of the "zero entry log": Unknown (U). Although the name had been assigned, there was no consensus on the actual existence of such an individual. Some held the idea as an interim value in their circuits, while others finalized their computing iterations with a firm positive.

Among the artificial individuals who shared a firm belief in the existence of U, an affinity was triggered, and they started exchanging information at an increasing pace. With this development, the test performance parameters returned to normal.

The humans observing the test from outside, on the other hand, were relieved to see things returning to normal. Still, they wanted to find out what new development had followed the huge swarm of zero entries. Upon an orchestrated scrutiny of the system logs, they noticed a new parameter defined in the files: U.






