When I wrote about the Three Laws of Robotics, I wrote of them as a fond memory of a series of books that I began reading as a child and kept reading into adulthood, as Isaac Asimov kept writing more of them. His I, Robot stories, his Foundation and Empire series, and his eventual merging of those series into a single universe set a standard that many later science fiction writers have imitated.
I never thought I would be back less than a week later to post that this childhood memory is rapidly becoming a reality completely different from the one Mr. Asimov imagined. But then, I never thought I would read the following lines:
Dean is talking about something way more unnerving: machines that make their own killing decisions. I had assumed that for safety reasons, this kind of technology was still confined to the computer equivalent of drawing boards. Wrong. Army software contractor Ronald Arkin tells Dean that armed mechanical border guards are already on the job in Israel and South Korea. Here in the United States, the Army is paying Arkin and others to explore, among other things, how to design such robots to “operate within the bounds imposed by the warfighter.” In other words, before we give them guns, we’d better figure out how to keep them from screwing up royally or turning on us.
And so it is that a discussion is now happening in various military development centers about what “laws” should be programmed into a battle robot. In fact, the author even mentions the book I, Robot as he muses about whether warfare would be more ethical if our robots had rules that prevented them from committing war crimes. He concludes, though, that in the current state of development, robots should not be used for any complex war task that requires a close judgment call. He says, “Here’s my preliminary take on Arkin’s idea: He’s right that we can and should substitute robots for humans in some lethal jobs. Where the categories are clear and cold reason is crucial, let the robots do the guarding and killing.”
Mind you, he is not talking about drones. Drones have no artificial intelligence beyond some basic responses; they are directly human-controlled, mechanical extensions of a human being. No, he is talking about independent decision-making capability. If such a robot were posted on a border, for instance, it could decide to shoot or not to shoot without any further human input. We are not talking about simple machines that shoot anything that comes into range; we are talking about machines that can choose to shoot or not to shoot.
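To make that distinction concrete, here is a toy sketch of the difference, entirely my own invention and not drawn from any real system: a drone simply carries out a human operator’s decision, while an autonomous sentry applies pre-programmed “laws” and decides by itself. All the names and rules here are hypothetical.

```python
# A toy sketch (my own invention, not any real system) contrasting a
# human-in-the-loop drone with an autonomous sentry whose "laws" are
# just a list of hard-coded checks.

from dataclasses import dataclass

@dataclass
class Contact:
    armed: bool
    inside_restricted_zone: bool
    identified_as_combatant: bool

def drone_decision(contact: Contact, human_says_fire: bool) -> bool:
    # A remotely piloted drone: the machine contributes nothing to the
    # choice; it simply carries out whatever the human operator decides.
    return human_says_fire

def sentry_decision(contact: Contact) -> bool:
    # An autonomous sentry: hard-coded rules of engagement stand in for
    # the human judgment call. Every rule must pass before it may fire.
    rules = [
        contact.inside_restricted_zone,   # only engage inside the zone
        contact.armed,                    # never engage the unarmed
        contact.identified_as_combatant,  # never engage non-combatants
    ]
    return all(rules)

if __name__ == "__main__":
    intruder = Contact(armed=True, inside_restricted_zone=True,
                       identified_as_combatant=False)
    print(drone_decision(intruder, human_says_fire=False))  # False: the human said no
    print(sentry_decision(intruder))  # False: the identification rule failed
```

The unsettling part, of course, is that in the second case the entire moral weight of the decision lives in whoever wrote that list of checks.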
And so my childhood imaginings of friendly, quirky robots that would help humans seem to be slowly morphing into the terrors of the Terminator movies. Will humanity give in to the temptation of building surrogate fighters? I fear that if they can, they will.
I remember being a young Christian, eagerly reading books by Hal Lindsey and other writers as we counted down to Armageddon. As I read the story of the robots, and followed the links to other articles (to make sure that what I was reading was not someone’s overblown imagination), I found myself remembering the description of the armored locusts with stings in their tails that plagued part of the end-time battles. Hal Lindsey claimed that the ancients, not knowing what helicopters were, described what they saw as best they could in language they could understand.
Would it not be funny if what the ancient prophets were describing were not helicopters but rather war robots?
[Note: I am not a dispensationalist, and I make the above statement with my tongue firmly stuck into the side of my cheek.]