Human-robot interactions increasingly take place between a robot agent and a human observer who, unable to witness all aspects of the robot's behaviour, is uncertain how the robot will behave. This uncertainty has serious consequences when it concerns the normative aspects of machine behaviour: that is, whether the robot's actions are morally permissible. Bringing together distinct threads from robotic design and machine ethics, we demonstrate the importance of conveying ethical understanding and commitment in order to reduce moral ambiguity, and show how doing so can demand a behavioural demonstration. We provide a framework that structures these considerations in the form of a broad constraint on robot behaviour: roughly, to avoid behaviour, even when permissible, if that behaviour could appear impermissible. We thereby formalise a model of communicative-behavioural ethics in human-robot interactions. We apply this constraint to a series of example cases, demonstrating how it can be modified to incorporate different sources of information, including preferences and probabilities. This reveals the complexity of less idealised cases and highlights how the constraint can be fine-tuned along a number of dimensions to take into account, amongst other things, risk attitudes.