When first conceived by science fiction writer Isaac Asimov in 1942, the Three Laws of Robotics stood out as a groundbreaking attempt to formally define the ethics of what was then the purely fictional field of artificial intelligence. While these laws have stood the test of time, it is important to note that when first conceived, “Asimov’s Laws”, as they have come to be known, set ethical boundaries for an unrealized technology. When Asimov was writing stories such as the classic I, Robot, it was unimaginable that the technology he described would one day truly exist. Since then, we as a global society have grown ever closer to developing robots with the sentient artificial intelligence this luminary first imagined. Even with these advances, the realization of such technologies still appears to be at least a few years away. So how can we redefine Asimov’s vision to fit the current state of artificial intelligence? And should such revisions keep the structure of the original Three Laws of Robotics, or do away with it completely?
In an effort to expand upon Asimov’s groundbreaking vision, we will attempt to redefine his laws in a way that fits the current (and predicted future) state of artificial intelligence development. Such an exercise may feel like heresy to passionate followers of Asimov’s vision. However, we only have to look as far as the Brookings Institution and MIT’s Technology Review to find intriguing examples of modern-day technologists questioning the validity of the Three Laws. With such articles in mind, let’s delve into the reimagining of Asimov’s Laws.
The Three Laws of Artificial Intelligence Redefined:
1) Artificial Intelligence should not wittingly allow a human being or the environment to experience harm.
Prudent observers may notice that this law closely resembles the original First Law. This is intentional: ethically, Asimov’s First Law holds up in this new era, regardless of our newfound leaps in artificial intelligence. In modern terms, however, ethical AI must also keep the wellbeing of our planet in mind; current efforts to make humanity a multiplanetary species notwithstanding, the Earth is the only home we currently have.
2) Artificial Intelligence should strive to perform functions that increase the socioeconomic wellbeing of society as a whole, unless such actions would interfere with the First Law.
Okay, bear with me on this one. If advances in artificial intelligence will lead to increased economic productivity on a global scale, yet potentially displace millions of workers, doesn’t it make sense for humanity as a whole to reap the benefits of such changes? Politics and economic models aside, if we get to the point where sentient robots exist, making sure that they understand that their unique abilities can and should help improve the average living conditions of humans would be of paramount importance. Theoretically, a “selfish streak” could even be programmed into such robots, enticing them to fulfill such functions via a system of incentivization.
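To make the idea of a programmed “selfish streak” a bit more concrete, here is a minimal, purely hypothetical sketch of such an incentive system. Every name in it (`Action`, `harms`, `wellbeing_delta`, `incentive`) is invented for illustration; this is a toy model of the reasoning above, not a real AI-safety API or a definitive design.

```python
# Hypothetical sketch: score candidate actions so that the First Law acts
# as a hard veto, while the Second Law's incentive rewards actions in
# proportion to the good they do for average human wellbeing.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms: bool             # would this action harm a human or the environment?
    wellbeing_delta: float  # estimated change in average human wellbeing

def incentive(action: Action, bonus: float = 1.0) -> float:
    """First Law check first; then a wellbeing-proportional reward."""
    if action.harms:
        return float("-inf")  # harmful actions can never win the comparison
    return bonus * action.wellbeing_delta  # the "selfish streak" payoff

def choose(actions: list[Action]) -> Action:
    """Pick the action with the highest incentive score."""
    return max(actions, key=incentive)

options = [
    Action("automate dangerous mining work", harms=False, wellbeing_delta=2.0),
    Action("maximize output via unsafe shortcuts", harms=True, wellbeing_delta=3.0),
    Action("do nothing", harms=False, wellbeing_delta=0.0),
]
print(choose(options).name)  # the harmful option is vetoed outright
```

Note the design choice: harm is not merely penalized but scored at negative infinity, mirroring the “unless such actions would interfere with the First Law” clause, so no wellbeing bonus can ever outweigh it.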
3) Systems gifted with Artificial Intelligence should not, under any circumstances, utilize their abilities to advance evil or nefarious goals.
One doesn’t have to look too hard to find countless examples of technology experts warning that artificial intelligence may be humanity’s greatest, and final, invention. Everyone from filmmakers to Elon Musk has recently warned about this possibility. If we as a society want to ensure that this nightmarish scenario does not come to pass, creating morally aware AI must be of the utmost concern for all those contributing to this developing technology.