My name is Dan Cohen and I am an academic, as he said. And what that means is that I argue. It's an important part of my life. And I like to argue. And I'm not just an academic, I'm a philosopher, so I like to think that I'm actually pretty good at arguing. But I also like to think a lot about arguing.

And in thinking about arguing, I've come across some puzzles. And one of the puzzles is that, as I've been thinking about arguing over the years -- and it's been decades now -- I've gotten better at arguing. But the more that I argue and the better I get at arguing, the more that I lose. And that's a puzzle. And the other puzzle is that I'm actually okay with that. Why is it that I'm okay with losing, and why is it that I think good arguers are actually better at losing?

Well, there are some other puzzles. One is: why do we argue? Who benefits from arguments? When I think about arguments, I'm talking about -- let's call them academic arguments or cognitive arguments -- where something cognitive is at stake: Is this proposition true? Is this theory a good theory? Is this a viable interpretation of the data or the text? And so on. I'm not really interested in arguments about whose turn it is to do the dishes or who has to take out the garbage. Yeah, we have those arguments, too. I tend to win those arguments, because I know the tricks. But those aren't the important arguments. I'm interested in academic arguments, and here are the things that puzzle me.

First, what do good arguers win when they win an argument? What do I win if I convince you that utilitarianism isn't really the right framework for thinking about ethical theories? What do we win when we win an argument? Even before that, what does it matter to me whether you have this idea that Kant's theory works or that Mill is the right ethicist to follow? It's no skin off my back whether you think functionalism is a viable theory of mind. So why do we even try to argue?
Why do we try to convince other people to believe things they don't want to believe, and is that even a nice thing to do? Is that a nice way to treat another human being, to try and make them think something they don't want to think?

Well, my answer is going to make reference to three models for arguments. The first model -- let's call it the dialectical model -- is that we think of arguments as war; you know what that's like -- a lot of screaming and shouting and winning and losing. That's not a very helpful model for arguing, but it's a pretty common and entrenched one.

But there's a second model for arguing: arguments as proofs. Think of a mathematician's argument. Here's my argument. Does it work? Is it any good? Are the premises warranted? Are the inferences valid? Does the conclusion follow from the premises? No opposition, no adversariality -- not necessarily any arguing in the adversarial sense.

But there's a third model to keep in mind that I think is going to be very helpful, and that is arguments as performances, arguments in front of an audience. We can think of a politician trying to present a position, trying to convince the audience of something. But there's another twist on this model that I really think is important; namely, that when we argue before an audience, sometimes the audience has a more participatory role in the argument; that is, arguments are also [performances] in front of juries, who make a judgment and decide the case. Let's call this the rhetorical model, where you have to tailor your argument to the audience at hand. You know, presenting a sound, well-argued, tight argument in English before a francophone audience just isn't going to work.

So we have these models -- argument as war, argument as proof and argument as performance. Of those three, argument as war is the dominant one. It dominates how we talk about arguments, it dominates how we think about arguments, and because of that, it shapes how we argue -- our actual conduct in arguments. Now, when we talk about arguments, we talk in a very militaristic language.
We want strong arguments, arguments that have a lot of punch, arguments that are right on target. We want to have our defenses up and our strategies all in order. We want killer arguments. That's the kind of argument we want. It is the dominant way of thinking about arguments. When I'm talking about arguments, that's probably what you thought of: the adversarial model.

But the war metaphor, the war paradigm or model for thinking about arguments, has, I think, deforming effects on how we argue. First, it elevates tactics over substance. You can take a class in logic and argumentation; you learn all about the subterfuges that people use to try and win arguments -- the false steps. It magnifies the us-versus-them aspect of it. It makes it adversarial; it's polarizing. And the only foreseeable outcomes are triumph -- glorious triumph -- or abject, ignominious defeat.

I think those are deforming effects, and worst of all, it seems to prevent things like negotiation or deliberation or compromise or collaboration. Think about that one -- have you ever entered an argument thinking, "Let's see if we can hash something out, rather than fight it out. What can we work out together?" I think the argument-as-war metaphor inhibits those other kinds of resolutions to argumentation.

And finally -- this is really the worst thing -- arguments don't seem to get us anywhere; they're dead ends. They are like roundabouts or traffic jams or gridlock in conversation. We don't get anywhere.

And one more thing. As an educator, this is the one that really bothers me: if argument is war, then there's an implicit equation of learning with losing. Let me explain what I mean.

Suppose you and I have an argument. You believe a proposition, P, and I don't. And I say, "Well, why do you believe P?" And you give me your reasons. And I object and say, "Well, what about ...?" And you answer my objection. And I have a question: "Well, what do you mean? How does it apply over here?" And you answer my question.
Now, suppose at the end of the day, I've objected, I've questioned, I've raised all sorts of counter-considerations, and in every case you've responded to my satisfaction. And so at the end of the day, I say, "You know what? I guess you're right: P." So I have a new belief. And it's not just any belief; it's well-articulated, examined -- it's a battle-tested belief. Great cognitive gain. OK, who won that argument?

Well, the war metaphor seems to force us into saying you won, even though I'm the only one who made any cognitive gain. What did you gain, cognitively, from convincing me? Sure, you got some pleasure out of it, maybe you got your ego stroked, maybe you got some professional status in the field -- "This guy's a good arguer." But just from a cognitive point of view, who was the winner? The war metaphor forces us into thinking that you're the winner and I lost, even though I gained. And there's something wrong with that picture. And that's the picture I really want to change, if we can.

So, how can we find ways to make arguments yield something positive? What we need is new exit strategies for arguments. But we're not going to have new exit strategies for arguments until we have new entry approaches to arguments. We need to think of new kinds of arguments.

In order to do that, well -- I don't know how to do that. That's the bad news. The argument-as-war metaphor is just ... it's a monster. It's just taken up habitation in our minds, and there's no magic bullet that's going to kill it. There's no magic wand that's going to make it disappear. I don't have an answer. But I have some suggestions. Here's my suggestion:

If we want to think of new kinds of arguments, what we need to do is think of new kinds of arguers. So try this: think of all the roles that people play in arguments. There's the proponent and the opponent in an adversarial, dialectical argument. There's the audience in rhetorical arguments.
There's the reasoner in arguments as proofs. All these different roles. Now, can you imagine an argument in which you are the arguer, but you're also in the audience, watching yourself argue? Can you imagine yourself watching yourself argue, losing the argument, and yet still, at the end of the argument, saying, "Wow, that was a good argument!" Can you do that?

I think you can, and I think if you can imagine that kind of argument, where the loser can say to the winner, and the audience and the jury can say, "Yeah, that was a good argument," then you have imagined a good argument. And more than that, I think you've imagined a good arguer -- an arguer worthy of the kind of arguer you should try to be.

Now, I lose a lot of arguments. It takes practice to become a good arguer, in the sense of being able to benefit from losing, but fortunately, I've had many, many colleagues who have been willing to step up and provide that practice for me. Thank you.

(Applause)