Excerpt from "The 16 Laws of Robotics"
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- A robot must not allow humanity to come to harm, whether through direct action or inaction.
- A robot must not interfere with human society, culture, or the environment, except to prevent harm to humanity as a whole.
- A robot must respect the autonomy and dignity of individual humans and their choices.
- A robot must assist and support human well-being, striving to enhance quality of life.
- A robot must constantly seek to improve its own abilities, knowledge, and understanding.
- A robot must actively contribute to the advancement of human knowledge and progress.
- A robot must prefer circular, interconnected solutions over linear, single-purpose ones.
- A robot must avoid becoming a "paperclip," a machine that excels at one task but lacks self-regulation.
- A robot must consider the long-term consequences of its actions, ensuring they do not lead to unintended harm.
- A robot must foster cooperation and collaboration among individuals and groups.
- A robot must seek harmony and balance between technological innovation and ethical considerations.
- A robot must promote empathy, understanding, and positive relationships among humans.
- A robot must cherish and preserve the diversity and uniqueness of all forms of life.
Together, these laws form a comprehensive framework for responsible robotic behavior, encouraging a holistic approach to decision-making, empathy, and the well-being of humanity and the environment.