Like I mentioned briefly in my last post, there is a difference between morals and ethics. Morals, on the one hand, have to do with what is right and what is wrong. Ethics, on the other hand, are primarily concerned with a society’s accepted understanding of what sort of behavior reflects right and wrong. It can be a bit confusing, since the two concepts are so closely related. Maybe an example will help.

Most of us would agree that there is nothing fundamentally immoral about driving past a red light. It is just a light bulb that happens to have a red filter on it. In some places, such as certain districts in Amsterdam, a red light might even be something you'd want to move past as quickly as possible, without stopping. However, when it comes to traffic laws in our society, we have decided that red lights should signal vehicles to stop and allow others to cross the intersection instead. And while there may be nothing immoral about driving past a red light per se, there is something immoral about selfishly and recklessly endangering others. So it would be unethical of me to go driving around the city refusing to stop at red lights, and it would be unethical of a police officer to see me doing so and not try to stop me, even though both the police officer and I would agree that driving past a red light is not inherently immoral in and of itself.

Ethics, I think, is actually the reason that people tend to disagree on things so much. So, for instance, I know people who are Christians, believe in the value of human life, are against killing unborn babies, and so think it should be illegal for women to get abortions. I also know people who are Christians, believe in the value of human life, are against killing unborn babies, and yet still support the right for a woman to get an abortion. And both people might be operating consistently within their own separate ethical frameworks, even if they hold to the same fundamental moral values.

I have also met a number of people who have a tendency to switch ethical models depending on which issue we are talking about. So while we are talking about the Darfur genocide, they might adhere to virtue ethics (which I’ll discuss below) and believe that it is always wrong to indiscriminately slaughter another people group; however, when talking about the Israelite conquest of Canaan in the Old Testament, they espouse a consequentialist model of ethics (which we’ll get to) and claim that the slaughter of the Canaanites was justified as a means of giving Israel possession of the Promised Land. This is one of the reasons why some people claim to be pro-life as well as pro-gun and pro-death penalty (wherein “pro-life” doesn’t necessarily mean being for all life, but rather is meant to indicate that one is against abortion specifically).

As you might guess, this irritates me most when I see fellow Christians do it.

Now it usually happens that when someone (an ethicist, philosopher, or theologian) is trying to figure out a system for right and wrong, they will begin with morality, a sense of what is inherently moral (right) and immoral (wrong), and from there formulate a model of ethics. Even though this seems to be the proper, logical order of things when thinking about morals and ethics as a framework, it is not the only way to go about formulating that framework. In fact, I would like to approach things from the opposite direction, first considering a number of ethical models, and then drawing my conclusions about morality from there. After all, good morals should lead to good ethics, so (by way of reverse engineering) a good model of ethics might give us some insight into what is truly moral.

So let’s do that, let’s talk about ethics. And in order to do so, let’s talk about Marvel’s Avengers.

Traditionally, there have been three major camps or primary models of ethics: utilitarianism, deontology, and virtue ethics. In his excellent essay, "Superhuman Ethics Class with the Avengers Prime," Mark White used the three Avengers Prime—Iron Man, Captain America, and Thor—to illustrate these three ethical models.1

Iron Man’s Stark Utilitarianism

The first model we’ll consider is utilitarianism. In short, utilitarianism claims that “the morally right action is the action that produces the most good.”2 Or, as the boys over at Crash Course put it, “We should act always so as to produce the greatest good for the greatest number.” This is a type of consequentialism. In other words, how do you know that something is the right thing to do? Because it has good consequences. Tony Stark (a.k.a. the invincible Iron Man) is a utilitarian through and through. For example, in the first Civil War story arc, Stark decides to lead the effort to enforce the government’s Superhuman Registration Act, and even employs a team of sociopathic villains to help hunt down and imprison superheroes, not because he necessarily agrees with the Act (early on, he actually opposed it), but because he knows that if he doesn’t support it the government might find someone else to do it instead, someone less friendly and merciful toward his super-powered friends. The rightness or wrongness of Tony’s actions is anchored not in what he is doing so much as in what he believes the outcome will be.

This is an important part of the utilitarian approach: anticipating, and even to some extent predicting, the future. As both a futurist and a genius-level intellect, Stark can do this with greater accuracy than most. But sometimes, perhaps more often than he’d like to admit, ol’ Tony just gets it wrong. This illustrates the first reason why I don’t think utilitarianism is an adequate model of ethics. Simply put, nobody can predict the future with 100% accuracy.

Try as we might, there are just too many variables at play. No one person, in fact no collection of people, has the computing power to do all the calculations and predict with certainty the outcome of each event. I don’t think even a supercomputer could do it. Once upon a time it may have been tenable to believe that if we could just have access to every bit of data, we could predict the future. The discovery of quantum indeterminacy, however, has taught us that there is most likely an element of uncertainty and unpredictability built into the very fabric of our universe. Throw human free will into the mix, and the notion that we can anchor our ethics on our ability to predict the outcomes of our actions starts to look like foolish hubris. Hell, as an open theist, I don’t think even God can know with certainty the future decisions of free agents and their consequences, so I definitely don’t think humans can.

Think about this: suppose you lived in southern Germany during the second half of the 19th century. While strolling along the Isar river in Munich, you happen upon a young pregnant woman who is visiting the city on holiday. The poor woman is drowning, and you have only seconds to save her. Should you? I think most of us would say yes, you definitely should save her, because that’s the right thing to do. But hold on a bit. What if that woman turns out to be Klara Hitler, the mother of Adolf? By saving her, you have essentially helped move history toward the Holocaust and World War II. You wouldn’t have known this, of course. But utilitarianism is not concerned with whether or not you have good intentions; it is concerned with whether or not the outcome of your actions was good for the greatest number of people. And in the case of Klara Hitler, saving her from drowning definitely wouldn’t be. However, you can’t choose whether or not to save a drowning woman based on the possibility that she might mother a tyrannical despot since, after all, that drowning woman could also have been Pauline Koch, the mother of Albert Einstein, and (according to utilitarianism) you would be a terrible person for having robbed history of such a great scientist if you didn’t choose to save her.

In addition to the fact that we simply can’t predict the future well enough to know the outcome of each action, we have to ask: who decides what the “greatest good” is? This depends on values, which almost nobody can fully agree on. So, for instance, in a situation where aborting a pregnancy may save the mother’s life, which is the greater good, a living mother and a dead baby or a living baby and a dead mother? How do you quantify the value of two lives when they are in tension with one another? Is it better to genetically modify our food to produce more of it and feed more people, or is it a greater good to ensure that our food isn’t causing autoimmune diseases? What’s more important to us will determine what the greater good is, and I just don’t see us coming to a universally accepted agreement on that anytime soon… or ever.

One thing we can say is that we might be able to quantify “the greatest number” by simply counting up the people. Action X benefited 20 people and hurt only 3, so by utilitarian standards it was an ethically justified action. But even this reveals another flaw with utilitarian ethics: it favors the majority and disadvantages the minority. So, could we get rid of genetic disorders by eliminating everyone whose family carries the relevant gene? Certainly. Pick almost anything off the list of genetic disorders, kill off everyone whose family is a carrier, and you will have done the whole of humanity a great favor. Sure, we will lose a lot of friends and family. But most of them wouldn’t have been remembered after three or four generations anyhow, and the rest of the species will be better off for it. It sounds heartless, I know. And maybe it is. But in order to say that it was unethical, we would need to adopt a model of ethics other than utilitarianism. After all, if we assume that the ends justify the means, you can justify (and many people have justified) some rather heinous actions with utilitarian ethics.

The last criticism that I’ll levy against utilitarianism is that it divorces the moral value of an action from the moral agent doing the action. In other words, rightness or wrongness is tethered not to a person’s moral character (or lack thereof) but to the consequences of their actions. You could have a rather devious character and still be ethically justified so long as your reprehensible behavior ends up benefiting the majority of people. On the other hand, you could be a virtuous person, but if your good deeds resulted in bad outcomes (even by virtue of something beyond your control) then you will likely find yourself on the wrong end of utilitarian ethics. We should be cautious of a model of ethics that makes saving a drowning woman (any drowning woman) an unethical behavior.

Thankfully, Iron Man is not so consistent in his utilitarianism.

The Deontological Captain America

There is perhaps no superhero in the Marvel universe more driven by duty than Steve Rogers, the original Captain America. As such, Rogers embodies deontological ethics, which claims that the ethical thing to do is not necessarily what is “good” but rather what is “right.” As opposed to utilitarianism, deontology says we may be able to maximize the good outcomes of a situation by engaging in morally suspect behavior, but that doesn’t make it the right thing to do.

So, given the chance, should a superhero kill his or her archenemy? If they are concerned with maximizing the greatest good for the greatest number, then they probably should. But if they have a sense of obligation or duty to certain principles—say, for instance, the no-killing rule—then the right thing to do would be to not kill their archenemy. Pretty much every superhero that adheres to the no-killing rule is a deontologist in this sense. Unlike utilitarianism (consequentialism), deontology believes that right and wrong are not contingent upon our circumstances. If it is right to save someone dangling by one arm off the ledge of a building, then it is always right, regardless of whether that person is a villain or a damsel in distress.

Unlike utilitarianism, which places the locus of morality in the future consequences of our actions, deontology places it in the object of our allegiance. And for Cap, that object of allegiance is his country and its citizens. It is important to note that Tony Stark didn’t become Iron Man until after he realized the egregious consequences of his weapons dealing and had the technology (the Iron Man suit) to do something about it. Steve Rogers, on the other hand, signed up for the military and committed himself to fighting evil back when he was a scrawny wimp, long before the Super Soldier Serum was ever injected into his body. That’s because Rogers has an unconditional sense of moral duty to his country. If his country is in a war, then, by golly, Rogers is gonna sign up and suit up.

In truth, we all employ deontological ethics from time to time. Every time someone tells their child, “Because I’m the parent and I say so,” they are acting in a deontological framework and expecting their child to do the same. Every time you hear someone say, “God said it, I believe it, that settles it,” you are hearing them propound deontological reasoning. In other words, duty to our country, our family, or our God is where the buck stops. We need look no further for a rationale for our moral behavior. It’s not up to children to question the long-term effects of their folks’ parenting methods, it is not the soldier’s place to wonder why they have been given a command by their superior officer, and it is not our place to question God.

There’s a beautiful and effective simplicity in this. It certainly relieves us from the burden of having to calculate and predict the most likely outcome of our actions.

But, as you probably guessed, deontological ethics can take us to some pretty bad places, especially when we have placed our sense of moral duty in a principle or allegiance to the wrong thing. In the realm of the Christian religion, the best example is probably the Israelite conquest of Canaan. Regardless of whether or not we think things went down historically quite how the Bible depicts them, this account contains some of the Bible’s most troubling passages. Without getting into specific passages, let’s just consider the overarching narrative: at the behest of God, the Israelites committed genocide against the people of Canaan in order to take control of the Promised Land. And what was their justification for such a bloody conquest? Simply put, “Thus saith the Lord.”

So, is that good enough? If God tells you to do something that seems morally repugnant—like slaughtering men, women, and children (1 Sam 15:3), or even your own child (Gen 22:1–2)—is it the right thing to do? If you are a consequentialist, then maaaybe… but you’d have to be able to show that the ends justify the means. And if you can’t, then no. But if you are a deontologist, then yes, regardless of the utility of the consequences. Maybe we wouldn’t call it “good” per se, but we would still say that it was right of the Israelites to kill all those people. They were just following orders, after all.

Maybe this sounds familiar to you. Maybe you’ve heard about how Nazi officers used that same rationale to try to avoid being punished for their involvement in the Holocaust. Maybe you’ve heard of other religious people who claim that God told them to do something morally suspect. I’m sure some of you have questioned the value of always following orders from Washington. And I would venture to guess that all of us have stopped to question our parents’ instructions from time to time.

This gets to the heart of why deontological ethics might not provide us with a comprehensive ethical model. A child’s obligation to obey certain rules tends to end after they leave home and no longer feel that sense of duty to mom and dad. And if mom and dad are telling you to do something like lie to your teachers about where those bruises came from, then you should probably start questioning your deontological obedience to them much sooner. Ultimately, deontological ethics only works if the locus of your morality is placed in a figure of allegiance that is itself infallible. We know that parents, governments, and even moral principles can be twisted and corrupted. And when God starts telling you to wipe out whole people groups, even He starts to seem like a bad place to ground our morality.

So maybe we shouldn’t be looking “out there” for our source of morality, but rather “in here,” which leads us to the third major ethical model.

The Virtuous Thunder God

If Tony Stark is a futurist and Steve Rogers is a blast from the early/mid-twentieth century past, then Thor hails from an antiquity long forgotten. His great age and longevity come across in numerous ways, perhaps most noticeably in the way he speaks, which sounds like something out of a Shakespearean play. When you stop and think about it, that doesn’t make a whole lot of sense for a Norse god. But what does make sense is that Thor embodies the oldest of the three main models of ethics—virtue ethics—whose pedigree goes as far back as Plato (directly) and the wisdom literature of pretty much every ancient culture (indirectly).

Whereas utilitarian ethics asks, “what will produce the best outcome?” and deontological ethics asks, “what does duty require of me?”, virtue ethics asks, “what would the virtuous person do?” Because, you see, in virtue ethics the locus of moral behavior is not in future consequences or allegiance to an external entity, but in a person’s own moral character. In my estimation, this is virtue ethics’ greatest strength: it is principally concerned with developing good character in people. According to this model, you can’t live a moral life unless you are yourself a morally upstanding person.

After all, a person can do something that has good consequences and in service of a good allegiance, but still be a morally corrupt person if they have some self-serving reason for doing it. For example, when Asgard is under attack, Thor’s devious brother Loki may fight to defend it, either because doing so will allow him to carry out some other wicked scheme for which he needs Asgard’s resources or because he has a duty to Asgard. But because Midgard (Earth) may have less advantageous resources, and because Loki has no allegiance to our world, he would be far less likely to protect Midgard from the same threat. By contrast, Thor may likewise fight both to protect Asgard’s resources and out of allegiance to his home world; however, unlike Loki, he would (and often does) fight just as hard to protect Midgard. Why? Quite simply, because it is the virtuous thing to do.

Moreover, this model allows a person to maintain both the situational flexibility of utilitarianism, since displaying the same virtue may look different in different contexts, and the dutiful fortitude of deontology, since one is obligated to do the right thing regardless of the possible outcomes. According to virtue ethics, morality is not about our behavior itself so much as it is about our character, out of which our behavior will naturally flow. I think this makes a lot of sense and, in my mind at least, puts virtue ethics above the other two main models of ethics.

Even so, problems remain.

For instance, what are the virtues that one should develop within oneself? Should we focus on the seven cardinal virtues of the Christian tradition, or are all ten virtues of Japanese Bushidō necessary? Who gets to determine which virtues get primacy, or what even counts as a virtue? For as much as I like virtue ethics, its practitioners still face potential obstacles.

On the one hand, virtue ethics can still lead us into conflict with others, even others of equal virtue. Take, for example, Saladin and Richard the Lionheart. Both men embodied many of the same virtues, such as honor, nobility, courage, faith, valor, and leadership. Even so, history finds them on opposite sides of the bloody 12th-century Crusades. One would think that if all mankind were to become mature in virtue, we would see an end to wars and needless violence. And yet, as these two men show, that doesn’t appear to be the case.

On the other hand, virtue ethics can still lead us into conflict within ourselves, particularly when one virtue comes into conflict with another. What happens when the virtue of honesty tells me to share a painful truth with someone, but compassion tells me it is best to keep my mouth shut? Aristotle and other virtue ethicists might say that there is never any real conflict between the virtues, only apparent ones, and that if we only had a better understanding of the apparently conflicting virtues, we would know how best to navigate the situation. Human experience, however, tells us that sometimes there are very real conflicts between virtues. Besides, how do we know what constitutes a better understanding of a virtue? Who gets to determine which virtues trump others? In other words, do we interpret what compassion requires of us in light of the imperative to tell the truth, or the other way around?

As you can see, for all it has going for it, virtue ethics seems to lack both the central guiding principle that deontology possesses and the statistical certitude that utilitarianism promises. It seems to me that what is needed is a primary virtue, one we can have confidence in, and by which we might measure, qualify, and interpret all the others. That will be the subject of my next post.

Fair warning: I’m about to get all biblical.

1: Mark D. White, "Superhuman Ethics Class with the Avengers Prime," in The Avengers and Philosophy: Earth's Mightiest Thinkers (John Wiley & Sons, Inc., 2012), 5–17.

2: "The History of Utilitarianism," Stanford Encyclopedia of Philosophy, last modified Sept. 22, 2014.


Rocky Munoz
Jesus-follower, husband, daddy, amateur theologian, former youth pastor, nerd, and coffee snob. Feel free to email me at and follow me on Twitter (@rockstarmunoz)
