I’m now digging into Moral Machines by Wendell Wallach and Colin Allen. It’s a thought-provoking examination of machine ethics.
Early on, Wallach and Allen ask an interesting question: do we want ethical machines? It’s a foundational question, as people normally don’t build tools without a specific intention. The question opens room for related questions: if we want a bot that cares for the sick or walks the dog, should we come at that design with an ethical framework in mind?
In this context I think of an obvious example: answering machines, which are pretty much casual robots people have around the house and with which we interact in interesting ways. And then a scenario: an answering machine is programmed to allow certain calls but disallow others, such as those from other robots, because we come to the scenario with the assumption that Mother calling is an acceptable use of a communication device. The conditions are as follows: the answering machine must be able to distinguish machine from human calls and then act “accordingly.”
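To make those conditions concrete, here is a minimal sketch in Python, assuming a hypothetical is_machine_call() classifier and an allowlist that stands in for the “Mother is acceptable” assumption. What it shows is that the screening policy itself takes only a few lines; the substance is deferred to the classifier, which is deliberately left as a stub.

```python
# A minimal sketch, assuming a hypothetical is_machine_call() classifier
# and an allowlist standing in for "Mother". The screening policy is
# trivial to write down; the hard part is the stubbed-out classifier.

def is_machine_call(caller_id: str, audio_sample: bytes) -> bool:
    """Hypothetical: decide whether the caller is a machine rather than
    a human. This is where the real difficulty lives; it is left as a
    stub here."""
    raise NotImplementedError("machine-vs-human detection unspecified")


def should_accept(caller_id: str, audio_sample: bytes,
                  allowlist: set[str]) -> bool:
    """Apply the answering machine's policy: always accept callers on
    the allowlist (Mother), otherwise reject calls judged to come from
    machines."""
    if caller_id in allowlist:  # Mother calling is acceptable by assumption
        return True
    return not is_machine_call(caller_id, audio_sample)


# Example: Mother is allowlisted, so her call is accepted without ever
# invoking the (unsolved) classifier.
print(should_accept("mother", b"", allowlist={"mother"}))  # True
```

Notice that writing the rule was easy; everything difficult was pushed into the unimplemented function.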
My interest in this scenario has to do with technological capacity. Is such a technology possible? The notions behind Strong AI say it should be, the Strong framework asserting that all human capacities are programmable in machine contexts. The assumption behind Strong AI is that we have a reasonable understanding of the human from which to make interpretations: we understand planning and learning pretty well. Consciousness is a toughie, though, and it’s not clear in a machine context whether consciousness or self-awareness is required for learning, as the above case would suggest. Or does it?