Moral Agents

Wallach and Allen’s Moral Machines was an interesting read. There are a few principal conclusions: that ethical questions must be considered in tandem with systems, from the ground up; that some framework must be developed to guide the future of AI systems in technical, cultural, legal, and operational contexts, though the nature of that work remains somewhat ambiguous; and that what systems do and how they behave, or might behave, tells us a lot about the values of their designers. On this final point I still have questions when it comes to decision-making agents, because I’d hesitate to call them moral agents; I have trouble disentangling the metaphor. I still wonder whether, without self-awareness, an entity can take an actual ethical step that isn’t just a function firing, even when that firing takes the form of a check, a reference, or a complex calculation.
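
To make concrete what I mean by "just a function firing," here is a minimal sketch of my own, not anything from Wallach and Allen: a rule-based veto in Python, where the names (Action, FORBIDDEN_EFFECTS, permit) are purely illustrative. The "moral" step reduces to a set intersection; nothing in it is aware that a choice is being made.

```python
from dataclasses import dataclass

# Hypothetical list of effects the system is told to avoid.
FORBIDDEN_EFFECTS = {"harm_to_human", "deception"}

@dataclass
class Action:
    name: str
    predicted_effects: set

def permit(action: Action) -> bool:
    """Return True if none of the action's predicted effects are forbidden.

    The entire 'ethical' decision is this one check.
    """
    return not (action.predicted_effects & FORBIDDEN_EFFECTS)

if __name__ == "__main__":
    swerve = Action("swerve_left", {"property_damage"})
    brake = Action("hard_brake", {"harm_to_human"})
    print(permit(swerve))  # True  -- allowed
    print(permit(brake))   # False -- vetoed by the rule check
```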

I’m seeking more technical depth than the authors provide: system examples, actual code, and application frameworks. Still, Wallach and Allen taught me a lot about the difficulty of synthesizing these ideas into physical architecture, and they convey the complexity of even simple choices.

The issues are many: the extent to which ethics can be synthesized into processing; how decision-making can be computed; how to avoid being guided by the wrong metaphors; what processing agents actually do, and why we should call them “moral”; and how much autonomy, technically speaking, a non-human system can handle without choking.

The authors do a pretty good job separating conjecture from reality, distinguishing the fantastic from the theoretically possible and from what actually happens in the lab, which makes the book useful across ethical, legal, epistemological, ecological, and scientific frames of reference.

This area of research and study is incredibly interesting.