Chapter 2 – Mental shackles fashioned from inside and outside

The opposite of mental freedom is mental enslavement, which sadly reflects the main condition of our present culture. Whenever we conform against our better judgment or wishes, we lose some mental freedom. Whenever we surrender our autonomy and choice, even for survival purposes, our sense of mental enslavement increases. Again, freedom concerns our capacity to choose, based on our range and depth of knowledge, as well as key external factors.

Conformity and obedience are forms of mental enslavement that have been widely recognized and researched. One study that readily comes to mind (with many others stemming from it) is by psychologist Stanley Milgram, which first took place at Yale University in 1961. [4] The obedience studies done by Milgram in the 1960s and ‘70s involved the administration of shocks by volunteer subjects, the so-called teachers, to “learners” who were also volunteers. The subjects were told that the experiments were supposed to determine whether such shocks would facilitate learning, even though the actual reason was to determine the average person’s disposition toward perceived authority, following orders, and being obedient to the point of aggression. When admonished by an authority figure to continue giving shocks, many subjects chose to do harm rather than stop, especially within an environment assumed to be trusted and safe (such as Yale University).

Outside the confines of such experiments, we can readily see militaristic command-and-control hierarchies that give rise to countless incidents of soldiers following orders from individuals of “higher rank” to harm others. Of course, these actions are typically in battlefield situations in which the explicit concept of enemy is involved, thereby advancing a mental and existential framework to inflict harm on others. Given this framework, soldiers are supposed to obey commands rather than rely entirely on their own independent value judgments; doing the latter might cause the military system to fall apart, after all, since soldiers could then refuse to perpetrate various acts demanded of them without incurring punishment. Whether in an experimental design or in the real world, following orders can entail relying on the judgments of a perceived authority figure while dismissing internal checks of conscience, which can result in decisions to inflict harm.

Milgram’s experiments involved giving shocks of increasing intensity to a supposed learner in another room, who was a stooge for the experimenter and not really receiving shocks, whenever he made an error in the learning process. Again, the experiment was voluntary, meaning that the “teacher” and the “learner” were told that they could quit at any time without penalty. However, whenever a teacher became reluctant to administer shocks, due to protests and even screams of the learner, the experimenter firmly stated that it was essential to continue the experiment. This perceived authority figure in a lab coat escalated the “prods” in the following way to get compliance: “Please continue”; “The experiment requires that you continue”; “It is absolutely essential that you continue”; “You have no other choice, you must go on.” And when the teacher expressed concern for the learner’s well-being and protests, the experimenter stated the following: “Although the shocks may be painful, there is no permanent tissue damage, so please go on”; and, “Whether the learner likes it or not, you must go on until he has learned all the word pairs correctly. So please go on.” [4]

Of course, these orders were designed to counteract any reservations in the teachers’ minds about the harmful nature of their actions. Even though the participants were not being coerced to participate, they were still being told to continue the experiment. Why must they continue? Well, because the “authority” said so, and because he wanted the experiment to be completed, according to the policies of the institution. Another salient factor was that the teachers were assured that they would not bear responsibility for the results, which again reminded them that they weren’t in charge. Given this assurance, most of the participants continued the shocks—many to the point of what they were informed were supposedly lethal levels.

Such conformity experiments and others similar to them demonstrate that doing “what is right” can be a quite contextual matter: It depends on the circumstances and the perceived role of self-responsibility. We know that when boundaries of consensual activity are crossed, empathy for others disappears too. Using force against or injuring another person is tragically the core of this process. In these shock experiments, the teachers disregarded learners’ cries of protest against further participation, even to the point of ostensibly shocking them to death.

Milgram conducted surveys prior to the study to discover what his psychological and psychiatric colleagues predicted about the degree of obedience that subjects would exhibit. Nearly all of them predicted that only a tiny fraction of the subjects would proceed to the harmful stages of shocking learners. Unfortunately, their predictions were refuted by the experimental results—Milgram’s subjects evidenced such high degrees of obedience that 50–60% of them proceeded to the very end of the voltage spectrum, marked with the ostensibly lethal “XXX.”

Upon analysis, Milgram noted four factors as decisive in influencing each subject’s degree of obedience: the emotional closeness of the learner to the teacher; the proximity and perceived legitimacy of the experimenter; the absence or presence of a dissenting observer; and (to a lesser extent), the general reputation and prestige of the institution where the experiment occurred. Regarding the last factor, in some studies that were done outside usual places of alleged reputation and prestige, subjects conformed to nearly the same levels. Milgram’s experiments and many subsequent ones have demonstrated that, unless someone is nearby who dissents and refuses to support the process, most people will tend to go with the program. When someone outspokenly deviates from the harmful norm, the conscience of others tends to be activated, which can radically diminish the amount of compliance.

In the realm of “just following orders,” the element of responsibility is indeed key. Both inside and outside the realm of experimental designs, we humans tend to lose our empathy and be obedient in following orders to harm others when we don’t perceive ourselves as being fully responsible for our actions and their results. After all, the experimenter in Milgram’s studies told teachers that the procedure was safe and that, although the shocks were painful, they “would do no permanent damage,” which was in contradiction to the learners’ cries of protest concerning excruciating pain and of course the “XXX” level. So, in this environment the teachers were given the opportunity to disregard their conscience, disinhibit themselves, and dehumanize the learners being shocked. Circumstances of this sort can make it much more difficult to take conscientious actions that honor the well-being of everyone involved.

The chain of command: who is actually responsible?

In 1978 a study was done to examine how people interpret responsibility in situations of obedience. Researchers put subjects into a mock jury simulation exploring the trial of Lt. Calley. This trial concerned the real military case of the massacre of unarmed civilians at My Lai during the Vietnam War by U.S. soldiers, who had been ordered to attack by Lt. Calley.

Not only had the Lieutenant given the orders to attack, but he also participated in it. So, it was difficult for him to avoid responsibility for the atrocity, though he could of course claim that a “superior” had given him the initial orders. V. Lee Hamilton (head of the study) referred to it as a “crime of obedience” and commented on this situation in the following way:

“Authorities can certainly be said to have causal responsibility for a subordinate’s acts that they may order. They also have a role responsibility for those acts and indeed a role responsibility for overseeing action that goes beyond what they specifically order. They are both the authors of action and the overseers of actors. Reciprocally, the actors who are their subordinates physically cause deeds that they are ordered to do, and they act intentionally. But they do so in response to a role duty and with the expectation that the authority has the responsibility (in the sense of liability) for any bad outcomes. To do what they are told is both something they must do to stay in role and something they ought to do as a role occupant.” (p. 128) [5]

To view obedience in this way, as is common in military situations, provides the actors a means of not taking full responsibility for their choices and actions. It tends to confuse the issue of who is responsible for an action—the one who ordered or enforced it, the one who did it, or both? Such ambiguity is part and parcel of notions of allegedly “collective” responsibility in systems of hierarchy and domination, wherein the volitional agency of each individual becomes minimized.

Moreover, if some human beings are treated like attack dogs, we can practically expect the harmful aftermath. Of course, two types of responsibility are generally deduced from such situations: responsibility ascribed to the giver of commands and self-responsibility for one’s own actions. If one is following the former type, then one is still relying on the commander’s degree of self-responsibility. All actions therefore depend on self-responsibility.

Reasoning beings by definition are voluntary agents of their own behavior. Our voluntary agency, or volition, exists in the context of the multifaceted unconscious aspects of brain functioning, as we explored previously, which some might contend takes a degree of agency away. Yet “obedience” implies conforming to another’s commands with indifference to, or in disregard of, one’s personal integrity and values. Essentially, when we’re obedient, we attempt to surrender our self-responsibility to another’s self-responsibility, which we deem preferable in some way, such as to gain favor or to avoid accountability and punishment.

To follow another person’s or institution’s directives without question can severely undermine our sense of self-responsibility, because it entails sacrifice of our own autonomy and sense of agency. Basically, to forfeit self-responsibility means to defer one’s own critical thinking process to another’s (or to some intangible group), which runs counter to one’s own independent functioning.

In the jury simulation study that was conducted by Hamilton, subjects deemed the “superior” who gave the orders to be significantly more responsible than the soldiers themselves. They cited the fact that he was the key causal factor in the incident, via his orders. Hamilton suggested that the subjects’ strong sympathy for the subordinates may have stemmed from the authoritarian military atmosphere of the case. This again exposed the ambiguous meaning of responsibility in a culture of widespread normalization of conformity.

In a 1986 survey designed to assess how typical citizens understood the meaning of responsibility in hierarchical situations, Hamilton set out once more to decipher the nature of responsibility. In the introduction he noted that, historically, those in the fields of law and psychology have viewed the superior who gives directives as most responsible for the actions of the subordinate. The particular role-obligations of the superior are evidenced in “The legal principle of respondeat superior, ‘let the superior answer.’” (p. 120) [6]

Interestingly, Hamilton remarked that oftentimes the greater the obligatory role of the authority, the murkier the issue of responsibility becomes. Subjects in this survey attributed on average the most responsibility to individuals like Lt. Calley, who were neither solely a distant authority nor solely an obedient soldier. Subjects’ ambiguity concerning responsibility became apparent here: They figured that they could not err by picking the man most involved at both ends of the chain of command. Of special note is that among the 391 Boston area subjects in his study, the most educated ones on average attributed more personal responsibility to the soldiers. In Hamilton’s words these “…results suggest that education promotes independence from authority…” (p. 137)

Yet, such independence from perceived authority took place in the controlled conditions of a study. If we instead shift focus to our present-day society, we can see an example that seems to conflict with Hamilton’s conjecture. In the aftermath of the 2013 Marathon bombing, the level of conformity to orders from those in government tended to be quite high across the entire population of Boston. The city was turned into a veritable paramilitary police zone, due to the mandate for people to “shelter in place” while the search for the bombing suspects took place. Such lockdowns tend to be viewed by most people as beneficial in times of crisis, even though they come at the expense of people’s property rights and freedom to do their normal activities. Unfortunately, in times of crisis, self-responsibility in the populace can wane amidst the decrees of perceived authorities in government—i.e., those trained to exercise their judgment over countless others. So, perhaps education alone isn’t the main factor in independence from perceived authority. In this case, despite all the searching by paramilitary police, it was a resident who noticed one of the suspects hiding in his boat.

The nature of giving and following orders

As we know, we can attribute varying degrees of responsibility to ourselves and others within chains-of-command (and thus chains-of-obedience) systems. Oftentimes, the actor and the commander (or perceived authority figure) have differing notions of who is actually accountable for the particular action. Based on this phenomenon, other psychological studies have examined how roles of responsibility are assumed when obedient actions are carried out.

One study similar in method to the Milgram experiment was designed in 1974 to determine exactly how much responsibility subjects would attribute to themselves in administering shocks to others. Researchers Wesley Kilham and Leon Mann found that when subjects assumed different roles in the “learning” experiment, either transmitter (the one who relays the message) or executant (the one who gives the shocks to the learner), the transmitters felt less responsible. They wrote:

“The transmitter is in a relatively ‘safe’ place psychologically; he can disclaim responsibility for the orders and can argue that he had no part in their execution. The transmitter can argue or rationalize that his highly specialized part in the act was only of a trivial, mechanical nature.” (p. 697) [7]

Of course, all the so-called learning experiments we’ve covered so far (as well as the ones that follow) involve a subject inflicting what he or she believes are real shocks on a learner. Additionally, the participants always consisted only of those subjects who chose to remain in the experiment after being told that they were free to leave, without penalty, if they disagreed with the procedure or found it uncomfortable. So these studies may well end up with a biased sample consisting of those who agree to follow orders, even orders that lead to harming others.

However, given the general culture in America that’s oriented toward blaming, shaming, and punishing, it appears that experimenters didn’t have any trouble obtaining subjects in their screening process, which suggests that the average participant didn’t find such a punitive learning experiment objectionable. And again, the rigid and formal manner of interaction in these experiments likely fostered an atmosphere not psychologically conducive to defiance by subjects. They all agreed to participate in an experiment apparently designed and controlled by professionals, as noted by researchers Kilham and Mann. As in most situations that involve high degrees of conformity, persons can lose sight of personal integrity, empathy, and other values.

Kilham and Mann also designed a control group that was allowed to choose any level of shock they wanted in order to “teach” the learner. This was in contrast to the typically required ascending level of shock voltage for each incorrect response by a so-called learner. Statistically, the control group’s level of obedience was significantly less than each of the four experimental groups. Although the highest level of obedience (i.e., following through with the highest intensity shocks) by a group of executants was not as high as subjects in some of Milgram’s studies (upwards of 65 percent), obedience was still quite high at 40 percent. While the experimental groups sometimes proceeded from moderate, to strong, to very strong, to intense, and on to extreme intensity and danger (severe shock levels, despite the learner’s cries of protest), the control group never moved beyond the first stages of moderate shock intensity. This indicates the importance of having a sense of control over, and responsibility for, one’s actions.

True to form, all groups showed significantly higher levels of obedience in the transmitter position (the one giving the message to shock) than in the executant position (the one who shocks the learner). This indicates, once again, that the person telling another person to do something typically assumes less responsibility for the consequences of an action than if the orderer were to take that action on his or her own. In our own personal experiences outside the research room, we know that psychological distance can be fostered when we’re not the ones performing the action in question—and we tend to value this distance when the actions cause harm or come at serious cost.

Perspectives on self-responsibility

A familiar term in the realm of social psychology is fundamental attribution error: the tendency of observers to explain others’ behavior mainly by disposition rather than circumstance. The related actor–observer asymmetry captures the difference in perceived causes of behavior between the person acting and the person viewing that action. The person acting will generally attribute his or her own behavior to the environmental conditions, or given set of circumstances, while a person observing another’s action will typically attribute it to the actor’s personality or mental characteristics, including decisions.

An experiment was done in 1975 to assess this notion in regard to acts of obedience in a controlled setting. Researchers John Harvey, Ben Harris, and Richard Barnes found that as the severity of the effects of the subject’s obedient actions escalated—in this case the administration of harmful shocks (albeit pseudo)—he or she would attribute more of these effects to the situation. In turn, the observer of this behavior would attribute the effects more to the mindset of the subject, perceiving the subject (the actor) as more responsible than the subject would. The results of the study “…show a general tendency for actors to attribute more responsibility to the experimenter than do observers” (p. 25) and “…that in general observers attributed more freedom to actors than actors attributed to themselves.” (p. 26) [8]

Harvey et al. also noted that there seems to be a direct relationship between the amount of “perceived freedom” and the degree of felt responsibility (self-responsibility). In other words, as the subjects involved in the shock experiment saw their actions having increasingly disturbing consequences, they explained their behavior as being less volitionally free and more restricted by the conditions, or constraints, of the experiment.

Once again, we see how a decrease in perceived self-responsibility can happen amidst a situation of conformity, or perceived powerlessness. We can become obedient actors for others whose instructions we might, on some psychological level, find questionable. When we are in a different context that lacks the supposed constraints of obedience (both mental and punishment-oriented), we may be more inclined to reject instructions that lead to harm—harm that includes denying that we have the capacity and freedom to choose. In other words, we side with our conscience, our concern for our own and others’ well-being. In line with these attribution differences, however, when we’re in aversive conditions, we’ll typically perceive little freedom of choice. When hierarchically structured systems of interaction become the norm, they foster widespread following of “orders from above.”

Another factor involved in causing misattribution of one’s own behavior is the level of cognitive dissonance, i.e., the degree to which one experiences a conflict between how one is acting and how one prefers to act (particularly in relation to present compliance-oriented conditions). Any degree of dissonance concerns a disparity between beliefs, as well as between beliefs and actions. As we have seen in the preceding studies, when persons engage in conduct that seems less than respectful of themselves and others, in the words of researchers Marc Riess and Barry Schlenker they “…can try to excuse the behavior by denying responsibility for the consequences. This relieves them of accountability, potential punishment, and guilt.” Additionally, “when aversive consequences follow an action that appears to have a reasonable likelihood of producing such consequences, justifications are needed.” (p. 22) [9]

In their 1977 study “Attitude Change and Responsibility Avoidance as Modes of Dilemma Resolution in Forced-Compliance Situations,” these observations held true. They also found that when observers ascribed accountability to subjects, the subjects changed their attitudes so as to place their own behavior in a different and better light. This further confirms the general psychological observation that, when involved in questionable behavior, we may try to appear less responsible, or as causing harm only unintentionally or accidentally.

Given that self-responsibility inescapably follows from our choice-making capacity, why do we tend not to take full responsibility when others, or even ourselves, disapprove of our actions? Well, this makes much more sense when we consider that as children we might’ve been punished (including ridiculed) when we really “owned” our actions. And when our parents didn’t like what we did, perhaps we were subjected to what psychological researcher Alfie Kohn calls “love withdrawal” (from his book Unconditional Parenting: Moving From Rewards And Punishments To Love And Reason). [10] As conceptual creatures, we have a need for self-esteem, i.e., to view our minds as efficacious and to feel fundamentally worthy as persons. So, in addition to the unwanted consequences related to others, whenever we do something that seems to run counter to these profound aspects of self, we may experience feelings of anxiety, worry, alarm, shame, guilt, agony, and regret.

The challenge for us then becomes whether to acknowledge such feelings and connect with our needs underlying them, or simply to try to protect ourselves by not taking responsibility. Ultimately, our quality of psychological living depends on our strength of inner relationship. Various domination themes in our culture especially discourage us from attaining a high-quality psychological life; they disparage us and promote thoughts of our supposed “goodness” and/or “badness.” Moralistic judgments tend to discourage us from honoring our need for self-esteem via such practices as self-responsibility and self-acceptance (which is the topic of a later chapter).

Continuing our empirical examination of how we can avoid internalizing self-responsibility, another faux shock experiment was done in the 1980s to see if persons would behave differently in various roles. Researchers David Kipper and Dov Har-Even assigned one group of subjects to the “spontaneous” group (who were free to choose the level of shock administered) and another to the “mimetic-pretend” group (who assumed the role of a teacher through instruction and imagination while delivering shocks). The fundamental difference between these two groups was the way in which their roles were emphasized. They both had to “teach” a so-called learner (a confederate, so he or she wasn’t actually being shocked), but the mimetic-pretend group was explicitly told to act like a teacher, focused on the business at hand, supposedly causing a greater task-oriented mood. This mood was presumed to lead to a decreased feeling and attribution of personal responsibility, whereas the spontaneous group would still be in a self-oriented mindset.

As we might expect, subjects in the mimetic-pretend group increased the intensity of shocks as the test proceeded, while members of the spontaneous group remained at a moderate level. Furthermore, those in the mimetic-pretend group attributed responsibility for the shocks to factors outside the self, while the spontaneous group focused more on personal responsibility. The researchers noted the following:

“It appears that casting a person in a mimetic-pretend role accelerates disinhibition processes, at least as far as the expression of aggressive behavior is concerned, and possibly also with regard to other types of conflicts, principally those that involve guilt feelings.” (p. 940) [11]

The focus was not particularly on how much conformity the experimenter could obtain from the subjects (like with Milgram’s studies), but rather on the kind of behavior exhibited in two different roles. The nature of the mimetic-pretend role led to escalated levels of aggression, thereby demonstrating once again that distancing oneself psychologically from one’s own actions (through a duty or role) contributes to the denial of self-responsibility.

Furthermore, assuming roles can obscure normal attributions of responsibility. Researchers Kipper and Har-Even found that although the mimetic-pretend subjects took responsibility for their behavior, it was only in the context of the role. Since the learners in this study did not fake being seriously injured by the shocks (as in Milgram’s studies), perhaps the teachers would’ve assumed a different level of accountability had that been the case. However, we know from history that being obedient actors in rigid roles can lead to severe dehumanization of victims and thus the commission of atrocities.

From this we can connect more psychological dots regarding a society that lacks political freedom. Daily in the news, both foreign and domestic, we can see that multitudes of individuals are harassed and harmed by persons in various roles of coercive authority within a political matrix: “soldiers” in “military operations”; “police officers” in “law enforcement”; and, “judges,” “prosecutors,” and “jurors” in “judicial proceedings.” All these roles entail the same harmful processes found in the social psychology experiments we’ve been exploring.

Various factors in becoming inhumane or remaining humane

Social psychologists have noted that when we don’t take full responsibility for our own actions toward others, two things typically happen: disinhibition and dehumanization. When we become disinhibited in this context, we lose connection with our thoughts and feelings regarding treating others respectfully; basically, our principles and empathy fade. The self-reflective thoughts and feelings that normally prevent us from inflicting harm on others become neglected or overridden.

Circumstantial factors can all play their part in the disinhibition process: a supposedly exalted cause or noble goal that treats some individuals as the means to the prescribed ends of others (essentially, one that values the “collective good” above the individual good, devaluing countless persons in the process), or one that upholds the welfare of an experiment as more important than the welfare of the participants. Of course, these may just provide fuel to the fire of resentful, vengeful, enraged, or other upsetting emotions that tend to be present when a person forgoes rationality and compassion, and thus does harm.

In order for any of us to perform a harmful act, we will tend to dehumanize the other, seeing him or her as no longer possessing redeeming or respectable qualities, but rather as being deserving of punishment or neglect. The processes of disinhibition and dehumanization have been evidenced time and again throughout the centuries, from the commonplace to the unspeakable. Psychological researchers Albert Bandura, Bill Underwood, and Michael Fromson stated the following in 1975:

“Inflicting harm upon individuals who are regarded as subhuman or debased is less apt to arouse self-reproof than if they are seen as human beings with dignifying qualities. The reason for this is that people who are reduced to base creatures are likely to be viewed as insensitive to maltreatment and influenceable only through the more primitive methods.” (p. 255) [12]

The experiment Bandura, Underwood, and Fromson conducted sought to discover the outcomes when subjects, who were recruited as teachers to administer (and choose the intensity of) shocks to individual learners, were placed under various psychological conditions. Different subjects were put either in a position with high individual responsibility for the shocks they administered or in a position of diffused responsibility, in which they would practically remain anonymous. Additionally, the learners were portrayed to different subjects “…in either humanized, neutral, or dehumanized terms.”

As we might surmise by now, subjects whose shocks were mostly anonymous gave higher intensity shocks on average to learners, especially when the learner had been dehumanized. But when the learner was made to appear high in moral value, both individual and diffused responsibility groups (although they differed statistically) viewed high-level shocks as less justified. In turn the researchers stated, “when circumstances of personal responsibility and humanization made it difficult to avoid self-censure for injurious conduct, subjects disavowed the use of punitive measures and used predominantly weak shocks.” (p. 268) [12]

In our exploration of self-responsibility, we’ve realized the importance of internal mechanisms, i.e., within the individual, that can curtail harmful behavior that a person may contemplate and enact under various circumstances. This signifies the self-control aspect of volitional agency, which is one of our distinctive traits as human beings. Paying attention to internal mechanisms can prevent us from becoming deindividuated, that is, from losing our sense of individuality, integrity, and consideration regarding behavior toward others. As a consequence, we can view ourselves as accountable individuals with dignified and empathetic standards of behavior; this is the realm in which self-responsibility operates.

However, sometimes we hear responsibility discussed in relation to social constraints or public influences that inhibit people from doing harm. In this sense, a person is being “held accountable” not only by their own empathetic mindset and views of personal integrity, but also by the critical and punitive measures of others, referred to as “accountability cues.” Deindividuation cues and accountability cues can be seen as the “private and public components,” respectively, that affect impulses to aggress and levels of obedience, researchers Steven Prentice-Dunn and Ronald Rogers noted.

In their 1982 study (of course, another shock test) that related these factors to the level of aggression against a learner in an experiment, they found that “compared to subjects in the high accountability-cues conditions, subjects receiving low accountability cues displayed more aggression. In addition, the external attentional-cues condition [designed to induce more deindividuation] produced more aggression than did internal attentional-cues condition.” (p. 508) Furthermore, they stated, “…the available data strongly suggest that subjective deindividuation mediated the expression of aggression.” (p. 512) [13]

Subjective deindividuation is associated with a lessening of self-responsibility, which is one of the most powerful factors in harmful behavior. Obedience and conformity entail a lowering of one’s inner awareness and acting in accordance with the demands of others, or external cues. Yet do we always attribute responsibility to someone else or something else when our acts are considered harmful?

As we have just noted, when a victim is dehumanized, more responsibility may be assumed by the actor who believes that the harm was in some way warranted. On a familial level (as we’ll explore later), such is the case when parents view children as inferior, as undeserving of the same degree of respect, and as deserving of punishment—“because you did something wrong, and we’re the parents, after all!” Indeed, the disinhibiting and dehumanizing power of roles looms large in this process.

A study conducted almost identically to Milgram’s called the theory of responsibility attribution into question. The experimenters, David Mantell and Robert Panzarella, included both a group of subjects who were free to choose the level of shock voltage and a group who witnessed a preceding test in which the teacher defied the authority figure and refused to continue administering shocks. They concluded the following in 1976:

“A monolithic view of the obedient person as a purely passive agent who invariably relinquishes personal responsibility is a false view. There are people who obey and continue to hold themselves responsible as well as people who obey and relinquish responsibility. Similarly, among people who initially obey but then defy, there are those who accept full responsibility and those who accept none at all for the actions they performed prior to their defiance.” (p. 242) [14]

The authors did note that attribution of personal responsibility was related to decision-making capability: When subjects could choose the shock levels in the test, they felt more responsible. How might we explain these results? Well, from the description of the study’s method, it appears that the subjects were asked about attributions of responsibility after they had been de-hoaxed and comforted by the fact that the learner had not really been shocked almost to death. It would be hard to believe that subjects would take personal responsibility for following orders to shock an innocent person beyond the point of screams of protest, to the point of silence. If this were the case, it could be considered an admission of behaving in an extremely malevolent way and, perhaps, of accepting accountability for one’s harmful actions. Yet at the same time, this might evidence such a high degree of self-responsibility that the person most likely would not have followed the harmful commands of the experimenter in the first place.

Concerning the factors that can contribute to obedience on the part of subjects in studies, all these experimental settings had an aura of reputability. Subjects entered into a task assuming or believing that it must have been well thought-out and proven safe and reasonable. After all, do not psychological experimenters at sanctioned institutions abide by strict codes of ethics? Such thoughts were probably running through the minds of subjects as they began to perform their tasks.

The particular subjects who maintained a high degree of self-responsibility—regardless of the consequences—made what Nissani in 1990 called a “conceptual shift.” (p. 1385) [15] The shift involves actually being cognizant of the harmful actions that were requested (or demanded), wherein the subjects mentally shifted into accepting that the experimenter had apparently become a “malevolent” figure. The institution was thereby discredited, at least if the shocks were real. After all, no person attuned to self-responsibility would be reckless enough to call their bluff, speculating that the shocks were not real.

The internal mechanism of self-responsibility relies on the conviction that one is both the voluntary creator and inhibitor of one’s own actions and thus, concomitantly, one is fully accountable for them, whether or not others are harmed. Unfortunately, we live in a culture in which obedience to “authority” in some form or fashion has been the mainstay. As we’ll be exploring on both the political and familial levels, we are trained from an early age to defer to certain adults and to comply or suffer punishment at their hands, which doesn’t honor our voluntary choices and actions and thus doesn’t honor self-responsibility. Despite the variety of reasons for requiring obedience, embracing self-responsibility directly challenges the paradigm of giving orders and blindly following them.

Surrendering to systems of domination

We’ve just examined some well-researched aspects of how autonomous functioning is surrendered by otherwise autonomous persons in controlled conditions; replications of such experiments in recent history have produced similar results, by the way. These results indicate that there’s something really damaging happening in American culture, not to mention other cultures. We are trained to be “good” boys and girls, which typically means to be conformists to adults’ desires—rather than free persons who are responsible for our own choices. As a result, we tend to lose sight of our intrinsic motivation to handle our needs and others’ needs with care.

Of course, a fairly obvious and understandable reason exists for our conformity as children, and then later as adults: to be accepted in the group in order to survive. Without early conformity to what adults in our world want from us, we can jeopardize our place in the family and seemingly jeopardize even our lives. This primarily explains why, for thousands of years, humans have repeated the same obedient, ritualistic patterns. Dissent can be a risky activity in a social group, especially when the adults in it were themselves subjected to memes of conformity during their early years—as opposed to being encouraged to become independent thinkers and rights-respecting individuals.

Consider what typically happens when members of the group disapprove of our choices. Perhaps a blaming and shaming process occurs, which leads to some sort of punishment, either in the form of aggression or ostracism. Adherence to spoken and unspoken group norms and rules thus can lead us to sacrifice our needs for autonomy, choice, and self-expression. “Don’t rock the boat” can become a major guideline for our emotions and behaviors among others in groups. Consequently, few of us learn how to deal effectively and healthily with upsetting emotions; instead, our feelings are oftentimes disregarded and not voiced.

As we probably know, we tend to pay a steep personal price for such a strategy: We don’t get to freely express our genuine selves, and we’re discouraged from believing that such honesty can ultimately lead to better things. After all, more harmonious relationships and a more meaningful society entail the fulfillment of such needs as acceptance, trust, consideration, empathy, cooperation, and support.

A primary theme of this book is that systems tend to have major influences on human beings, and of course humans are main factors in systems. We in fact create systems, so we can alter and dissolve them as well. The domination systems that prevail in our culture today have so many harmful aspects that we can be thoroughly desensitized to them, to the point of normalizing them and surrendering to them.

A system has various definitions, of course, but here’s a dictionary one that’s germane: a set of principles or procedures according to which something is done; an organized scheme or method. Moreover, a system entails a way of interacting that tends to maintain itself based on agreed-upon beliefs, either explicit or implicit. Thomas D’Ansembourg wrote the following about this in his book Being Genuine: Stop Being Nice, Start Being Real:

“In systemics, the science of systems, we learn that any system tends first of all to perpetuate itself, to maintain its existence. This is the law of homeostasis. In such systems as the family, the couple, or a range of other relationships, difference and divergence produce fear because they represent a risk of compromising the system by destabilizing it. Faced with such fear, the trend is often to endeavor to reestablish unanimity as a matter of urgency, either through control or through submission. Thus, to regain equilibrium in our family, marital, or other relationships, that is, the homeostasis or stability of our system, we often impose solutions compelling everyone to agree, or we submit without a word of discussion. What you get is fight or flight, and there is no real encounter.” (p. 190) [16]

What we’ve been exploring about self-responsibility and conformity, and self-expression and obedience, pertains to our beliefs about systems and how they influence us. Few of us were informed in much coherent psychological detail of the rationale for adhering to social systems. As D’Ansembourg noted, fear of risking a destabilization of the system, of upsetting the perceived equilibrium, plays a major role. Commonly, the system is implicitly understood to be permanent and taken as “the given,” like the enduring nature of gravity. Yet systems are human constructs, once again, and to the extent that we don’t scrutinize them, we become trapped in their gravitational pull, in tragic and needless perpetuity.

“Because that’s the way it is!” was a phrase all of us probably heard as children. “Because I said so!” was likely another. Such utterances can be heard sometimes in Walmart shopping aisles, as parents assert aspects of the same systems that they themselves learned as children. Similar to the heuristics, or rules of thumb, that we discussed earlier, we tend to gravitate to what’s most familiar, comfortable, and safe for us and what appears to serve our interests, given the prevailing belief systems.

A belief system seeks to organize aspects of systems into something understandable or at least mentally manageable. Like other mental constructs we can create, it may or may not accord with the facts of reality and what truly serves our interests. Nonetheless, a chain of inferences or assumptions often culminates in acceptance of “a set of principles or procedures according to which something is done.”

If we explore some of the main assumptions that lead to systems of domination—i.e., of some human minds ruling over other human minds, or beliefs and behaviors (memes) endeavoring to rule all human minds—then we can grasp the rationale for parental statements that basically deter inquisitiveness and understanding. Another adult statement that we’re all too familiar with in the political realm is: “Because it’s the law!” The main rationale here perhaps involves yearnings for cooperation, order, safety, and stability, unfortunately coming at the expense of choice, respect, and self-responsibility.

Children are taught from a very early age that, if left to their own “selfish” desires, they can’t be trusted, and this message leads to quite tragic results—for them, for the family, and for society at large. Given that each person is motivated biologically to improve his or her lot in life, to make life better for him or herself, using the word “selfish” in a disdainful way fosters further psychological confusion and self-alienation. It essentially puts a conceptual organism in conflict with itself, particularly in relation to other selves. While “self-interest” might be a more accurate term, which takes into account our biology, obviously it too can carry negative connotations in a culture of self-sacrifice.

Since we’re in the ethical realm now, the following questions arise: 1) Can an individual actually determine what’s in his or her own interest (which could also be called rational self-interest, or enlightened selfishness) and thus what’s not in his or her interest? and 2) If so, can an individual accomplish various self-interested tasks to a satisfactory degree of trustworthiness?

Devastatingly, our culture tends to express serious doubts about both 1 and 2. Negative responses to these fundamental ethical and psychological questions reflect the domination systems in which we’ve been immersed from a very early age. We are essentially discouraged from developing confidence in our natural ability to serve our own lives and well-being. From a domination-system perspective, we’re trained to turn our doubts about self-help and confidence into self-fulfilling prophecies. Because we’ve mostly been trained to believe that “people” (actually, ourselves) can’t be trusted, we simply surmise that everyone must be controlled (by others). Ironically, while those supposedly in charge of controlling us are left out of this lack-of-trust-in-humans formula, they are inescapably part of it. In such a system, we’re also not supposed to seriously question or challenge what’s going on here.

After all, domination systems entail the exercise of power over others. The assumption of these systems is that, without power-over strategies, people will not do what’s most wanted to serve human life. So, we are supposed to serve those using and advocating power-over strategies.

Now, there’s a lot of onion peeling that needs to be done about this belief system, especially in relation to the nature of extrinsic motivators, which nearly all of us have experienced from a young age. Since we’re subjected to a multiplicity of rewards and punishments (a.k.a. “behavior modification”), we tend to develop a distorted understanding of what we want and how to get it. And then, we tend to become mentally enslaved, prone to following the orders and assumptions of others to the nth degree.

Intrinsic motivation, in stark contrast, arises when we have mental freedom. It entails having a desire to learn and do something because of one’s authentic interest, curiosity, and creative spirit. Intrinsic motivation is really essential—it enables us to be mindful of our needs for choice, spontaneity, inspiration, genuineness, integrity, challenge, discovery, growth, and purpose (to name some salient ones). Inner trust is also a major component of intrinsic motivation—to have trust in our own ability to be in touch with and meet the above needs as well as rely on our own judgment, instead of the judgment or commands of some authority figure (who, by the way, has the same psychological needs).

Of course, intrinsic motivation gets squelched to a large extent because of domination systems, which administer extrinsic motivators in the forms of rewards and punishments, or carrots and sticks. As we’ve covered, these can be eerily similar to the kinds that researchers devised decades ago in various shock studies, as well as ones used with such non-reasoning, or non-conceptual, creatures as rats, pigeons, monkeys, and dogs. When others try to get us to do things that don’t interest us (i.e., that we aren’t self-interested in, in terms of meeting our needs), typically only a couple of choices occur to us: rebellion or submission.

The human mind has the ability to forecast future outcomes. We all know what happens when a child says some variation of “No!” to parents who have themselves grown up in a culture of domination systems. Typically, they react to defiance with bribes and/or punishments. As parents seek to resolve the situation in such a way, they might also believe that their actions are “for the child’s own good,” because parents are supposed to know best.

When children follow through with what they’re told to do or say (despite what they might think and feel), they’ve learned that less painful things, and maybe even some positive things, tend to happen in relation to parents. So, not rocking the family boat has its immediate benefits, although this is where the “boat” metaphor reveals its serious inadequacy. Neither the family nor society is floating in a vessel that requires no one to move too much, lest it capsize and everyone risks drowning. Systems of domination discourage us from believing otherwise; they falsely imply that humans can’t be authentic with their thoughts, feelings, and actions, and that we can’t voluntarily meet each other’s needs for equality and harmony.

We can directly question the notion that children must be told what to do (and even think and feel) and must unquestioningly obey orders from above, lest the family system (or society) devolve into harmful chaos and disorder. We’re going to explore the uplifting implications and marvelous results of fully honoring intrinsic motivation in younger family members in a later chapter. But for now, let’s examine the flaws in the inference found in domination systems that human beings can’t be trusted to enrich their lives, especially without sacrificing themselves or others in the process.

Who thinks your thoughts? Neuroscience instructs us that our thoughts emanate from our own brain processes. Put succinctly, the mind is what the brain does. No one can directly control your brain processes unless they, for instance, force-feed or inject you with some mind-altering drug. However, what others say or do in your proximity might trigger various cascades of thoughts and feelings in you, of which you can be mindful. This indicates that we are highly communicative and social animals. Still, each of us possesses a distinct neural system, physically separate from others (with the exception of some conjoined twins, of course), and therefore each of us has our own thoughts, feelings, value judgments, memories, etc.

The fact that each human brain is independent in this fundamental way gives rise to self-understanding, autonomy, choice, and all the amazing aspects that can make our lives and interactions with others so enjoyable, as well as sometimes upsetting. This latter aspect may give rise to desires to harness others’ independence to do one’s bidding, rather than letting them be in service to themselves as intrinsically motivated individuals.

To lack trust in one’s own condition of autonomy and the requirements for self-generated, self-sustained, and joyful actions can lead to a similar lack of trust in others. One’s fears, anxieties, and worries about lack of trust in self can lead to further upsetting emotions and distorted beliefs about others, who then might be expected to carry out one’s wishes without challenge. This leads to doubting the very human capacity to enrich one’s life in a win/win fashion with others.

Domination systems begin with parenting methods on children, then extend to schooling methods on learners, and finally include law-enforcement methods on adults. Since they foster self-doubt and lack of self-worth, which can lead to a host of compensatory defense mechanisms, domination systems perform a very tragic trick on us—somehow convincing us that we are not in charge of our own actions, but are instead coerced or forced to do things, thus greatly diminishing our sense of self-responsibility. This is usually directly coupled with the prospect of being punished for not conforming to the edicts of the domination system, which means not obeying the orders of others.

We of course saw the effects of this played out in the controlled experiments done by Milgram and others. We also see countless uncontrolled and ongoing “experiments” being done in our midst today. Culturally and politically speaking, we are essentially reaping what the domination systems have sown for us. Punishments become the “consequences” of not doing what’s desired or demanded by others; rewards or bribes try to minimize such disobedience; and diminished self-responsibility becomes the assumed norm. Let’s now examine how all this relates to our social and political predicament.

Liberty and other memes

Since our focus in this book is on the psychological side of complete liberty, we won’t venture as far into the realm of political philosophy as Complete Liberty did. Nevertheless, one of the unfortunate things about the word “liberty” in our culture is its ambiguity; it oftentimes doesn’t entail actual freedom. Sometimes the liberty that people speak of (in accordance with the U.S. Constitution) involves one group of people wielding power over a populace, to “protect” and “serve” them in various ways.

When reduced to essentials, true and complete liberty entails being fully responsible as an individual decision-maker and fully respectful of others (as fellow individual decision-makers) in a social context. This necessarily spells the end of “politics” and thus the end of any notion of government (a.k.a. statism), which has been and continues to be the ultimate “power-over” system devised and enacted by humans, in terms of the immense number of individuals who’ve died from it and who continue to suffer under its reign.

Each domination system uses power over others, essentially methods of manipulation and control, at times culminating in lethal force, in order to maintain itself. Each domination system seeks compliance from children, adolescents, and adults. While the conditional parenting model, which we’ll explore soon in more detail, doesn’t normally resort to lethal methods, as children we nonetheless perceive real threats to our functioning and our survival in abusive or neglectful circumstances. These traumatic experiences lay the psychological groundwork for their re-expression in other domination systems. At times a child is seriously physically injured or killed when he or she resists “arrest” by a punitive parent. Under statism, sometimes a person who disobeys is imprisoned in a metal and concrete cage for weeks, months, or years, essentially forced to live in a subhuman environment for arbitrary periods of time. These dreadful events are of course extreme examples of the power-over paradigm. Though they happen rarely in family systems, they tend to be much more common in governmental ones.

Given that our discussion here basically concerns the nature of some humans ruling over other humans, we can definitely see the painful, tragic similarities between the punishments enforced against children and the punishments enforced against adults in society. The latter tend to dramatically increase in intensity, because fully grown humans can exercise their full autonomy and resist forceful restraint (i.e., use self-defense measures), which can harm and even kill the persons trying to subdue them.

Many in our culture might say that the rewards and punishments administered by humans “in charge” on other humans are necessary aspects of living together in a safe, stable, and orderly manner. The parallels to our own upbringing tend to be quite noticeable. But without rewards and punishments, what would society look like? How can people deal effectively with those who are typically punished with fines and imprisonment? Without threats of aggression and coercion, how can humanity cope?

In order for these questions to be answered to a reasonable degree of clarity, understanding, and satisfaction, we need to recognize more fully the tragic and costly nature—the essential dysfunction—of the domination system that gives rise to such concerns in the first place. The calming phrase “Peace, Love, and Happiness” is very foreign to such a system, even though it’s commonly used as a hopeful gesture, a yearning to transform the status quo into something less troubled.

Even before children acquire language, they’re bathed in a culture of memes. Memes are ideas and behaviors that are easily replicated among humans. As genes replicate, so do memes, but arguably with far more dramatic effects. It oftentimes takes thousands of years for a significant genetic alteration to take hold in a population. Memes, however, being byproducts of our phenotypes (stemming from our genotypes), can change a population within weeks or months, even days for individual minds.

The general domination system we’ve been exploring is definitely a meme of the highest costly order. Within this system are ideas and behaviors, methods of functioning that yield particular attitudes and results. Rewards and punishments in childhood translate into incentive plans and prisons for adults. Spanking (a.k.a. hitting) and “time-outs” translate into assault, battery, kidnapping, and incarceration by cops.

What drives these memes, once again, is the fundamental distrust that we have in our own minds—in our capacity to meet our own needs and others’ needs without conflict. It’s as if conflict is the prophecy to be fulfilled by the use of power-over tactics that attempt to ensure compliance. The implicit thought is that with coercion some needs will get met, somehow.

In order for anyone to attempt to justify such tactics, thoughts of right-doing and wrongdoing are also instrumental. Ideas of good and evil provide fuel for the domination fire, so that rewards and punishments can be administered according to what individuals “deserve.” Perhaps notions of good and evil have been around since the dawn of civilization, similar to evaluations of civility and incivility. What would civilization be like without notions of “good” persons and “bad” persons, based on their respective actions? How would society function without deserve-oriented thinking about self and others?

Answers to these questions can be found by inspecting the mental process that gives rise to them: moralistic judgment. Here, we are at the nexus of philosophy and psychology. When we are taught from childhood onward that people are “selfish” and can’t really be trusted because they are “only in it for themselves,” it immediately puts us in a defensive posture regarding thoughts about our own nature. Unfortunately, philosophers and psychologists are mostly at a loss to clearly describe human nature, and our culture suffers immensely as a result. While they oftentimes note an array of interesting and useful particulars, they are as immersed cognitively and emotionally in the same domination system of civilization-with-governments and other power-over strategies as the rest of us. So, they typically overlook the factors that enable us to fully actualize our potential.

Despite the enormous problems of our present culture of memes, and partly because of these memes, we still seek to ensure our survival, and we still try to thrive with others. Communities and marketplaces arise and persevere. Along with them, moralistic judgment is the usual process by which we learn whether we ourselves and other humans are worth interacting with. Given our training in childhood, this seems like the easiest way to describe the nature of people. We’ve all heard, and probably ascribed, such assessments as the following: “He’s a good guy.” “She’s a nice person.” “He’s an asshole.” “That’s evil!” “Good job.”

One of the purposes of this book is to demonstrate that this type of judgment isn’t as useful and helpful as we’ve been taught or as it oftentimes seems. It can actually detract from our optimal functioning, which includes our happiness. The very system of domination in which we grew up, the one in which humans for thousands of years have been immersed, basically gives rise to moralistic judgment. Additionally, it hasn’t enabled us to gain an accurate understanding of the origins of our emotions, which arise from biologically based needs rather than strictly from our particular circumstances or simply from what other people say or do.

So, we’ll learn in subsequent chapters that we can relate to others in ways that get universal needs met consistently and helpfully. We’ll also see as we progress that, while moralistic judgment is quite dispensable, needs-based judgment is inescapable and necessary. In fact, every moralistic judgment of self and/or someone else can be translated into an accurate and helpful needs-based judgment. Of course, such a translation process doesn’t come very easily when we’ve been immersed in domination systems. But it can be done, and with very beneficial results.