By Andrew Lee
Although PF debaters already read frameworks, "framework debate" in LD is much more complex than its PF counterpart. PF frameworks are usually just impact-weighing mechanisms, e.g., "terrorism causes mass death, so the framework is preventing terrorism." LD frameworks, by contrast, operate not as weighing mechanisms for impacts but as filters that determine which impacts matter in the first place. For example, a deontological framework (one that bases morality on whether an action itself is right or wrong under a set of rules, rather than on the consequences of the action) like Kant's would say that we shouldn't care about any consequences and only care about the intentions and means behind actions, not their ends.
There are two main types of frameworks:
- Ends-based - frameworks that evaluate the consequences of actions, like utilitarianism (util) or structural violence. PFers are very familiar with these.
- Means-based - frameworks that evaluate the means behind actions, like Kant or Hobbes. These frameworks usually prescribe overall practices that people must follow, such as "don't lie" or "don't challenge the state." These practices are grounded in philosophy rather than justified by their ends - for example, Kant's justification for why people shouldn't lie isn't that lying makes people sad or mad, but a complex chain of logic that terminates in what is functionally the golden rule: "don't will maxims that you couldn't will universally." Debaters who read these arguments then argue that the opponent's advocacy violates those practices while their own advocacy abides by them.
Since there are literally thousands of frameworks, I won’t go over all of them here, but I will talk about the most common ones (in LD).
- Util - maximize well-being for the greatest number of people. Extremely simple; this is what is typically referred to as a "cost-benefit analysis."
- Structural violence - we should care most about stopping instances of violence that are "structural," such as racism, sexism, or homophobia. This is typically justified by cards indicating that we need to include all people in our moral calculus before anything else, or justified pre-fiat by arguments that the judge has an obligation to challenge instances of racism, sexism, homophobia, etc. in debate itself.
- Kant - this is really complicated, so bear with me.
- We should abide by what practical reason - the ability to set and pursue ends - tells us to do, because a] the external world is different for everyone - we each experience it in different ways - so we would have different obligations in different instances and thus can't normatively prescribe obligations from it, and b] reason is the only thing that is binding, since we can question anything else (e.g., "why do we have to follow our innate sense that happiness is good?"), but questioning reason would concede its authority, since asking for a reason for reason already requires reason.
- Everyone has the same capacity for practical reason, since we can all set and pursue ends.
- Given that same capacity to reason, we can't will "contradictory maxims." If I can set an end, then since everyone has the same ability to reason, everyone else can take that same action. If I were to will that I could kill people, I would simultaneously be willing that everyone could kill, since we have the same ability to set and pursue ends. But if everyone killed each other, no one would be able to kill anyone in the first place - the original goal of the action is contradicted, which makes it a contradictory maxim.
- Thus, the moral thing to do is to will only maxims that are universalizable - that is, non-contradictory.