Abstract

In 1851, Herman Melville published his most famous novel, Moby Dick. It is the story of a ship’s captain’s obsessive pursuit of a white whale, Moby Dick, a pursuit that ultimately results in the destruction of the ship, the captain and all aboard save one, who lives to tell the tale. In many respects, the book is a description of what happens when human beings lose sight of their humanity or, more simply, surrender it to something else.

At their most basic, relationships between human beings for millennia were, and to some extent still are, characterized by a human-human interface: one person interacting directly with another to accomplish a particular task. This may expand into a “team” effort to achieve a particular goal, or it may be a matter of dispute resolution. As time went on, it became clear that more and more tasks would require a “team” effort, and the need for the individual “genius” tended to blend into the background. In the fourth century BC, Plato noted that Socrates was fond of beginning many of his dialogues with the phrase “Know thyself,” a clear statement of the importance of the individual as the foundation of society. That statement in effect declared that what was to happen next would be, in its very essence, both personal and human. It set the tone for much of Western civilization for the next 2,000 years.

From the later centuries BC to the eighteenth century, much social, cultural and commercial interaction could be described as stemming from a human-human interface enhanced by “occasional” technology created to meet a specific need. During this period of technological elaboration, the ability of individual humans to control the consequences of the technology diminished almost imperceptibly, but steadily.

It was with the beginning of the Industrial Age that the social dynamic began to shift noticeably away from a human-human and toward a human-machine interface. This was initially the result of the creation of machines intended to perform mundane tasks formerly done by human beings.

The key feature, however, was that even in the human-machine interface environment, a person, in the affirmative exercise of their mental resources, retained some vestiges of human control of the mechanical environment, although that control tended to diminish in direct relation to the complexity of the technological response required by the problem at hand. For that reason, some conception of ethical behavior was never absent from this system. The process, while gradual, was also characterized by a certain tension between the traditional view of individuality and the “group.”

In the nineteenth century, the socioeconomic structure of society underwent a transition under the influence of what can be termed a human-machine interface.
This meant that technology became an even more elaborate extension of human activities.

In modern times, however, there has been a shift in both society and technology, with the result that, both chronologically and philosophically, the issue for society now is one of cyberethics: the relationship between the ethical and legal systems developed to serve humanity from ancient times to the present, as expressed in our philosophical and ethical systems of thought and in the judicial process, as contrasted with the ability of computer-driven technology to operate outside those conventions with almost no limits [2].

It is in this context that the Space Age wrought its magic on the society that existed in the middle of the twentieth century and, without much fanfare, created the machine-machine interface. The requirement for human action was steadily reduced in direct proportion to the increased ability of machines to communicate with each other and accomplish tasks. Humanity had driven the functioning of the system, yet control over the environment had been steadily replaced by machine logic. As a result, the system that we have known as “Western civilization” may have been deprived of one of the bases of its validity and may well need redefinition.

During the six decades since NASA announced the Apollo program, society as a whole has tended to focus on the expansion and development of new and exciting technologies. During the height of the space race, this development was primarily oriented toward the goal of landing on the Moon by the end of the 1960s. The mechanisms in place to accomplish that daunting task were, by modern standards, quite primitive, ranging from the mechanical sequencers used in the earliest pre-Saturn launches to the “ropes” of core rope memory used in the landing computer aboard the Lunar Module itself. As demonstrated by the Apollo 11 mission, during which the final approach was handled by Neil Armstrong personally, the human-machine interface, with the human as a “fail-safe” factor, was still dominant even at that point in the space program, and a human was still in control of the mission. Yet, even with its relatively primitive technological beginnings, there can be little doubt that the space program had an immediate and direct economic, philosophical and psychological impact on society, not just in America but throughout the world. At its heart was the notion that “man is now a spacefaring creature.” To some extent, this quest for technological excellence at the cost of human values became a sort of cultural “white whale,” such that the latest innovation was its own justification for its existence.

That the technology of the Internet provides access to an enormous body of information, a body that may well now be beyond the ability of the scholar to examine in a meaningful way, is beside the point, except to note that the Internet became another resource to be consulted. Unfortunately, as has been noted elsewhere, the problem of “Big Data” clouds the ability of the individual researcher to remain focused. One estimate is that approximately 16.3 zettabytes of information, roughly the equivalent of 16.3 trillion gigabytes, is being produced each year; by 2025, this number is projected to increase roughly tenfold [3]. Pocket-size computers contained in cell phones, more powerful than the ones that flew Apollo to the Moon and back, have all but eliminated the need for students to learn basic arithmetical skills.
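To put the cited estimate in perspective, the following minimal sketch only checks the unit arithmetic; the 16.3 ZB/year figure and the tenfold 2025 projection are taken from the cited estimate [3], and the conversion factors are simply the standard decimal definitions of the units.

# Rough scale check, not new data: the annual volume and the 2025 projection
# are quoted from the estimate cited above; only the unit arithmetic is done here.
GB_PER_ZB = 10**21 // 10**9            # 1 ZB = 10^21 bytes, 1 GB = 10^9 bytes
annual_zb = 16.3                        # cited estimate, zettabytes per year
annual_gb = annual_zb * GB_PER_ZB       # ~1.63e13 GB, i.e. ~16.3 trillion GB
projected_2025_zb = annual_zb * 10      # cited "tenfold by 2025" projection
print(f"{annual_zb} ZB/year is about {annual_gb:.2e} GB/year")
print(f"Projected 2025 volume is about {projected_2025_zb:.0f} ZB/year")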
What is obtained from the Internet has tended to become prima facie “valid” data for inclusion in scholarly work, without the benefit of critical thinking or analysis beyond “it must be true or it would not be on the Internet.”

The corollary to this question is whether, in the focused pursuit of the “white whale” of technological perfection, the human aspect of that pursuit has become submerged. Is this the price that society must pay for the “white whale”: the sacrifice of encouragement of individual “genius” and achievement in favor of a sort of Orwellian “groupthink” that eliminates individuality and the rewards that go with superior performance in any given field?

Without going into the lawyerly arguments based on statutory language and regulations, the United States Patent and Trademark Office has essentially said that a machine cannot be a “person” within the meaning of the statutes. Creativity and invention remain the province of “natural persons.” Similarly, a state, or, by extension, a corporate entity, cannot be an “inventor,” because invention is a formation of the mind of the inventor, a mental act, conception of which is not available to a machine [4]. It may be that the range of the “white whale” has limits.

On a social level, the advent of such technologies makes it easy for human beings to compartmentalize their existence and, thus, dissociate from their fellows. This dissociation phenomenon ironically makes the actions of the group seem more significant than those of the individual. It can create a subtle isolation of individual human initiative that could give rise to an “elitism” on a social level that probably should be discouraged. The shift of focus away from people to machines thus becomes the modern analogue of the view that Captain Ahab held of Moby Dick at the start of the book. The pursuit of the “white whale” of technological excellence mimics the voyage of the Pequod, a voyage on which humanity now finds itself. As Tacitus once said, “Because they didn’t know better, they called it ‘civilization,’ when it was part of their slavery [idque apud imperitos humanitas vocabatur, cum pars servitutis esset]” [5].

For human beings to be thus subordinated to machines is a suspect concept. Further, many students may well not know the basics of research in a conventional library; indeed, one could arguably write an entire thesis on the history of the Reformation without ever seeing an original manuscript by Luther. Yet their ability to communicate with a machine is unparalleled as compared with the experience of their parents.

This modern breakdown in the “connectedness” of the human-human interface also has implications for the transmission of culture from one generation to the next. If in the present day these norms are not passed down from parent to child because of a lack of parent-child interaction, then the question immediately presents itself: “What will society look like if people begin to bond more readily to machines than to people?” By extension, the revision and modification of “history” is made simpler if the reality of what our grandparents knew disappears because of the ability of technology to uproot that reality.

In another aspect of the modern world, it is often said that the legal system derives its legitimacy from a faithful reflection of the society that it is designed to serve. This is, in part, the reason that technologically based evidence until recently has had such a difficult time being established in court.
An increasingly frequent type of such evidence is generally referred to as “electronic evidence”: evidence that is either electronically generated, such as computer printouts, or electronically stored in some fashion, such as emails. The problem arises in the existence of these materials and in where they might be stored. The inability of an attorney to cross-examine the machine may well lead to the case being dismissed. To that extent, the potential “enslavement” of the judicial process to technological capability is avoidable.

In the United States, the proffered evidence must be examined by the judge, both as to methodology and as to the qualifications of the tester, before it can be considered [6]. This “gatekeeper” function vested in the judiciary places the judge in a rather interesting position in relation to the scientist. When confronted with evidence grounded in the sciences, the judge must now evaluate the quality of the information on a scientific level. The problem from the point of view of the justice system is not that these technologies are unreliable; on the contrary, it is their very precision that is the issue from the point of view of the court. It is rather that they have been invested, by their human operators and by those who receive information from them, with a possibly undeserved level of infallibility. If the factual situation is unprecedented, then traditional models based on technology that cannot change with the times will, by definition, be of little, if any, utility. This opens up the opportunity for technology to evolve in its ability to assist in the creation of new models.

When applied in the social context of the judicial process, such an analytical approach borders on giving technologically based evidence a machine-human character in its impact on the judge or jury. For all practical purposes, the judicial inquiry (and, therefore, judicial discretion) ends there. This is a clear representation of the next stage in the evolution of the cyberethical relationship: that of the machine-human interface.

In the world at large, there is an almost imperceptible, yet undeniable, shift to a machine-human interface that reflects a willingness to defer to the action of a machine programmed to perform increasingly complex tasks formerly done by a human being. Similarly, in this paradigm, the machine, having once been set in motion, somewhat as in “The Sorcerer’s Apprentice,” exercises its potential by mindlessly determining the destiny of the human beings who are its focus. Perceptually, this has created the view that “[People] will believe anything if it is in the computer” [7]. It is perhaps at this point that the quest for the “white whale” encounters the “black swan” [8].

Reliance on standard forecasting tools can both fail to predict black swan events and potentially increase vulnerability to them by propagating risk and offering false security. In the Apollo program, the planning system led to what was called a “3 Sigma (3σ) design” plan. The engineers and technicians realized that a complete system failure leading to the death of the crew was possible. They then worked backward to design the systems so that the probability of “mission loss,” i.e. everything going wrong, was practically zero. Short of this was “crew loss,” a situation in which the crew perished but most of the mission objectives were achieved. It was this level of loss that nearly happened on Apollo 13, and the fact that it did not was in part the result of the multiple levels of 3σ design planning. The practical outcome was that the Apollo/Saturn V space vehicle, in many of its systems, had double redundancy in addition to human input.
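To make the redundancy logic concrete, here is a minimal sketch rather than a reconstruction of NASA’s actual reliability analysis: it assumes independent failures and uses the standard one-sided normal tail for a “3-sigma” margin to show how a single-string failure probability and dual redundancy combine toward the “practically zero” figure described above.

# Illustrative only (independent failures assumed; the real Apollo analyses
# were far more detailed). Shows why dual redundancy drives the probability
# of total failure toward "practically zero."
from math import erf, sqrt

def one_sided_tail(z: float) -> float:
    """P(X > z) for a standard normal variable."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

p_single = one_sided_tail(3.0)   # ~1.35e-3: a single "3-sigma" string fails
p_dual = p_single ** 2           # both redundant strings fail together
print(f"single string: {p_single:.2e}, dual redundant: {p_dual:.2e}")

The weakness of such a model is precisely its independence assumption: a common-cause event that defeats that assumption is the kind of scenario the “black swan” metaphor describes.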
Unfortunately, it would seem that the black swan scenario in modern planning characteristically does not acknowledge the catastrophic event as the starting point, so planning for it as a way of avoiding it simply does not happen. This potentially reduces humans to pawns in a game whose rules are completely unknown at the time the catastrophe occurs.

An illustration of the “black swan” effect in the transition from the human-machine interface to the machine-machine interface, and then to the machine-machine-no human interface, in the context of the space program is nowhere more clearly presented than in the tragedy surrounding the Columbia disaster of February 2003. There can be little doubt that the onboard computer being in control of the shuttle during reentry, to the exclusion of the human crew (machine-machine-no human), deprived the crew of any opportunity to affect the sequence of events that led to the catastrophic breakup of the spacecraft [9]. It is perhaps the unquestioning confidence that our society has placed in machines that was a component of the disaster as well.

Before the Columbia disaster, when a machine malfunctioned, the problem was characteristically handled by a human-human interface between the operator or other appropriate person and the person who needed to have the machine work properly. The person responsible for the operation of the machine would simply resolve the problem for the person who had been “victimized” by the machine by later initiating appropriate inputs into the machine to “make it right.” The focus was on the resolution of the human problem that had been created by the machine; in marketing terms, the customer was right. Now, if the system of which the machine is a part breaks down, the emphasis is on fixing the machine, not on examining the human impact of a possible system design failure.

In the case of Columbia, the report did point out that “organizational cause factors” contributed to the disaster. The principal focus, however, was on the repair or redesign of the hardware systems and a return to manned space flight as soon as possible “consistent with the overriding objective of safety” [10]. In other words: do not look at possible future events, but simply fix the machine and work on the system as it goes forward. This reflects the post-catastrophe characteristic of the “black swan” scenario, in which the event is explained away as “a mistake.” In this thought process, it is the pursuit of the “white whale” that is important, not any concern for the “black swan” of disaster.

It is particularly in the modern medical context, whether of microsurgery or of life-support systems, that human beings are ever more routinely deferring to robots to allow impaired human beings to perform normal human tasks. This hints that the definition of the machine-human interface may be in the process of evolving beyond even a human-machine-human interface into what is sometimes referred to as transhumanism [11]. The word itself implies that human beings as unaugmented organisms may have entered a period of, at the least, obsolescence in the minds of some philosophers and engineers for some purposes.
To be direct, if we are not careful, this could be the equivalent of harpooning ourselves and creating the “black swan” in the process. That situation would confirm the words of Albert Einstein that “The unleashed power of the atom has changed everything save our modes of thinking and we thus drift toward unparalleled catastrophe” [12].

This question of control over the device, and of the extent of that control by the human being, remains at the heart of the debate about the evolution from the machine-machine-no human interface to the transhumanism phase. The life-altering benefit of transhumanism to someone who has lost a physical capability through misfortune is obvious. The question, however, is not whether this is possible, but: should it be done? Put another way, does the ability of the machine to extend the limits of human physical existence and capabilities indefinitely in fact “injure” a human being by impacting the totality of their humanity in such a way as to violate Asimov’s First Law of Robotics?

The technological revolution initiated by the space programs of the major powers in the latter half of the twentieth century not only had an epochal mechanical impact on society but also led to philosophical shifts in how humankind views itself in relation to the machine. It is a “given” that society will continue to create new and more complex machines that can think with increasing levels of independence. The issue is rather the inclusion, in the programming and design of those machines, of basic concepts of right and wrong, morality and immorality, that have stood the test of time. Only in the “connectedness” between the machine and the human, in whatever interface relationship might exist, is there validity in that process. In this way, human society can create a safeguard against a technology that might on its own initiative, unrecognized by its creators, decide that human beings are so severely flawed in their mental and emotional capacities that the planet should be “cleansed” of them by the more precisely logical machine in a destructive application of the First Law. When he said, “I was dreaming,” Sonny, the robot at the center of the movie I, Robot, showed that there may well be an objective in the advancement of technology that leaves humanity neither in the position of Captain Ahab nor in that of the victim of a black swan.
