
The Problem of Higher-Order Vagueness


THE PROBLEM OF HIGHER-ORDER VAGUENESS

By

IVANA SIMIĆ

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS

UNIVERSITY OF FLORIDA

2004

Copyright 2004 by Ivana Simić

ACKNOWLEDGMENTS

I would like to thank Gene Witmer and Kirk Ludwig for helpful comments. I am particularly indebted to Greg Ray for very fruitful discussion, for reading and commenting on various versions of this thesis, and for very helpful advice in these matters.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT

CHAPTER

1 INTRODUCTION

2 FINE'S TREATMENT OF HIGHER-ORDER VAGUENESS
  2.1 Supervaluational Framework
  2.2 The Application of Supervaluational Strategy to the Soritical Argument
  2.3 Supervaluationism and Higher-Order Vagueness
  2.4 Resurrection of the Paradox
  2.5 Fine's Expected Reply
  2.6 Two Problems for Fine

3 THE DEGREE THEORY AND HIGHER-ORDER VAGUENESS
  3.1 The Basic Idea of a Degree Theory
  3.2 Meta-Language, Vague or Precise?

4 BURGESS'S ANALYSIS OF THE SECONDARY-QUALITY PREDICATES
  4.1 Burgess's Project
  4.2 The Circularity Problem in the Proposed Schema
  4.3 The Problem of the Unacknowledged Source of Vagueness in the Proposed Schema

5 HYDE'S RESPONSE TO THE PROBLEM OF HIGHER-ORDER VAGUENESS
  5.1 Paradigmatic vs. Iterative Conception of Vagueness and the Problem of Higher-Order Vagueness
  5.2 Hyde's Argument
  5.3 Sorensen's Argument
  5.4 The Circularity Problem in Hyde's Argument
  5.5 The Problem with the Strategy

6 IS HIGHER-ORDER VAGUENESS INCOHERENT?
  6.1 The No Sharp Boundaries Paradox
  6.2 The Higher-Order No Sharp Boundaries Paradox
  6.3 Wright's Argument
  6.4 Is Higher-Order Vagueness Really Incoherent?
  6.5 Heck's Reply
  6.6 Edgington's Reply

7 EPISTEMICISM AND HIGHER-ORDER VAGUENESS
  7.1 The Epistemic View
  7.2 A Margin for Error Principle
  7.3 Epistemic Higher-Order Vagueness
  7.4 Why and How KK Fails
  7.5 The Failure of KK Answers a Seeming Trouble with MEP
  7.6 But Williamson Is in Trouble Anyway
  7.7 The Problem of Epistemic Higher-Order Vagueness
  7.8 Further Reflection on MEP

8 CONCLUSION

REFERENCES

BIOGRAPHICAL SKETCH

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Arts

THE PROBLEM OF HIGHER-ORDER VAGUENESS

By

Ivana Simić

May 2004

Chair: Greg Ray
Major Department: Philosophy

According to the paradigmatic conception of vagueness, vague predicates admit borderline cases of their applicability, and they tolerate (to some extent) incremental changes along the relevant dimension of variation. However, given that vague predicates admit borderline cases of the first order, and that they are tolerant, they must be said to admit borderline cases of the second order, third order, and so on indefinitely. This feature of vague predicates constitutes the phenomenon of higher-order vagueness. I argue that all theorists who accept the paradigmatic conception of vagueness face the problem of higher-order vagueness, or some parallel problem, and fail to deal with it successfully. An important feature of this failure is that the views in question fail not for some accidental reason that would allow for a possible fix; rather, they fail for principled reasons, and there are no resources in this theoretical milieu for a satisfactory treatment of the problem of higher-order vagueness. If this is correct, then the conclusion that imposes itself is that there is a need to rethink the

basic vagueness phenomenon by reexamining the basic presuppositions of the paradigmatic conception of vagueness, which can no longer be taken for granted.

CHAPTER 1
INTRODUCTION

Consider predicates such as 'bald', 'heap', 'tall', 'red'. No doubt, these predicates are vague. Pretheoretically, there are three features that they exhibit. Firstly, vague predicates seem to admit borderline cases of their applicability. That is, there are cases in which the predicate seems to us to clearly apply, cases in which it seems to us that it clearly fails to apply, and cases in which it seems to us that the predicate neither clearly applies nor clearly fails to apply. Secondly, the predicates in question seem to admit at least one dimension of variation along the relevant scale of applicability, such that small changes along the relevant scale cannot make any difference as to whether the predicate applies or fails to apply. That is, vague predicates seem to be tolerant. Following the above-mentioned intuitions, it seems that vague predicates are at least first-order vague (i.e., they seem to admit at least first-order borderline cases of their applicability). By first-order borderline cases we mean that there is no sharp boundary between the kinds of cases to which the predicate seems to clearly apply and the kinds of cases to which it seems to clearly fail to apply. Now, given the tolerance intuition, we are intuitively forced to acknowledge another apparent feature of vague predicates. So, thirdly, there seems intuitively to be no sharp borderline between the kinds of cases to which the predicate seems to clearly apply and the kinds of cases that we call borderline cases. Similarly, there seems to be no sharp borderline between the kinds of cases to which the predicate seems to clearly fail to

apply, and the kinds of cases that seem to be borderline cases. So, it seems that there are cases that are (i) not cases where the predicate clearly applies, (ii) not cases where the predicate clearly fails to apply, but also (iii) not cases that are clearly borderline cases. Call such cases second-order borderline cases. Vague predicates seem typically to be second-order vague, because it is plausible to think (using this intuition) that if there are first-order borderline cases, then there are second-order borderline cases. By extension, we can describe what it would be for a predicate to be third-order vague, and so on indefinitely. Thus, intuitively, vague predicates exhibit vagueness of indefinitely high order. This is the phenomenon of higher-order vagueness.

The goal of this project is to show that the phenomenon of higher-order vagueness is an insuperable problem for theorists who accept the paradigmatic conception of vagueness in their attempt to give a semantics for vague predicates and to specify the conditions under which vague sentences (i.e., sentences that involve vague predicates) are true. By the paradigmatic conception of vagueness we mean the spectrum of views that attempt to tell a story about the semantic behavior of vague predicates and that take for granted the pretheoretical intuition that vague predicates either apply or fail to apply, and admit borderline cases of applicability. These views might differ in the way they characterize the notion of borderline cases (semantically or epistemically, for example), but they all accept a theoretical characterization of vagueness that co-opts into a theory our pretheoretical intuitions about how things seem to us, and they end up saying that vague predicates

either apply or fail to apply and have borderline cases. Typically, they also accept the intuition that vague predicates are tolerant, but aim to show that the theoretical version of the tolerance intuition needs some restriction (or must be denied) in order to keep the phenomenon of higher-order vagueness from being a problem for the proposed account of vague predicates.

It turns out, as we aim to show, that theorists who have accepted the paradigmatic conception of vagueness and the phenomenon of higher-order vagueness have been unable to deal with or avoid the problem of higher-order vagueness. We also see that theorists who have accepted the paradigmatic conception of vagueness, but who have argued against the genuineness of the phenomenon of higher-order vagueness, are also unable to avoid problems. This leads us to suggest that there is some tension in the paradigmatic conception of vagueness between its basic presuppositions, that vague predicates admit borderline cases and that they have application-conditions, on the one hand, and the phenomenon of higher-order vagueness on the other. Because the only thing that the different views sharing the paradigmatic conception of vagueness have in common is the characterization of vagueness by the presence of borderline cases (whether characterized semantically or epistemically, for example), and because these views attempt to reconcile the description of vague predicates as higher-order vague with the claim that they have application-conditions, we suspect that these presuppositions should be targeted as the generator of the trouble for these views.

The plan of the thesis goes as follows. In Chapter 2, we consider Kit Fine's (1975) treatment of higher-order vagueness by applying the supervaluational strategy. The

solution Fine proposes consists in respecting higher-order vagueness through a meta-language that is vague, so that the seemingly sharp boundaries set up by the theory are just the consequence of successive approximations. So long as one keeps moving one level up in the hierarchy of meta-languages, sharp boundaries are avoided. John Burgess (1990) challenges Fine's strategy by appealing to its inability to solve the sorites paradox. Since the sorites paradox is the symptom of vagueness for the predicates for which it can be constructed, one cannot but conclude that if Burgess is right, then Fine has not given a good account of vagueness. We have reason to think that Burgess has shown that Fine is not successful in dissolving the paradox. We also aim to show that Fine's truth-conditions for vague sentences cannot be met if he is to respect higher-order vagueness. Even worse, he cannot but end up with sharp boundaries anyway.

In Chapter 3 we briefly discuss the degree theory and its strategy of introducing a continuum-valued semantics for dealing with vague terms. One might think that the degree theory has a natural solution to the problem of higher-order vagueness, but short of simply denying the phenomenon of higher-order vagueness, the degree theory ends up facing just the same sort of problem Fine faces.

In Chapter 4 we discuss Burgess's thesis that higher-order vagueness terminates at a low finite level. Burgess aims to show that secondary-quality predicates admit of an analysis which shows that they are limitedly vague. We find his demonstration unsatisfactory on the grounds that it falls short of delivering on its promise and suffers from unavoidable circularity.

After examining these representative views on higher-order vagueness based on the paradigmatic conception, we come to conclude that none offers a satisfactory treatment of higher-order vagueness. Thus, we turn, in Chapter 5, to a slightly different approach, as presented by Dominic Hyde (1994). He acknowledges the phenomenon of higher-order vagueness, but emphasizes that the paradigmatic theorists need not do any extra work to modify their theory so as to accommodate higher-order vagueness. 'Vague' is vague, according to Hyde. Higher-order vagueness, he argues, is already present and respected in these theorists' meta-languages. We aim to show that Hyde's argument is not sound, and that it relies on a not-uncommon confusion regarding semantic predicates such as 'vague'. Also, after examination, Hyde's argument turns out to be question-begging.

This series of unsuccessful treatments of higher-order vagueness leads us to a view that responds to higher-order vagueness by denying it. The subject of Chapter 6 is Crispin Wright's (1992) argument that higher-order vagueness is not a problem, since it is incoherent. After we present Wright's argument, we present two related criticisms of it, namely Richard Heck's (1993) and Dorothy Edgington's (1993), which show that Wright's argument relies on the misapplication of a nonclassical rule of inference in a classical proof. We aim to show that, in light of Heck's and Edgington's criticisms, we must abandon Wright's view, and admit that the case of higher-order vagueness is left unanswered.

In Chapter 7 we turn to the epistemic treatment of higher-order vagueness. Although epistemicism does not have a problem of semantic higher-order vagueness (since borderline cases are characterized epistemically), we aim to show that it still has a

parallel problem, namely the problem of epistemic higher-order vagueness. Epistemicism, as championed by Timothy Williamson (1994), has exchanged one problem, the problem of semantic higher-order vagueness, for a parallel and equally vexed problem, the problem of epistemic higher-order vagueness. The exchange occurs by rejecting the tolerance principle as a semantic principle that governs vague predicates, and replacing it with an epistemic margin for error principle. However, we aim to show that just as the former gives us paradoxical results regarding the truth of certain claims, the latter does likewise regarding our knowledge of them. In the context of our discussion, some broader issues for Williamson's view come to light, which suggest more broadly that his epistemicism cannot hope to be a successful theory of vagueness.

It is worth noticing at the outset that the paradigmatic conception of vagueness is underwritten by the assumption that vague predicates have application conditions and that vague sentences have truth-values. No doubt, we do use these predicates in everyday practice and communication as if they in fact have the mentioned features. This might very well be just an idealization. If so, then the question is whether the theorists in question succumb to an idealization in theorizing about the practice; that is, the question is whether they translate our intuitive, idealized description of the phenomenon into a theory, which consequently leads to trouble, namely higher-order vagueness. This indicates that the assumption that underwrites the paradigmatic conception of vagueness cannot be taken for granted anymore, given that on critical reflection we come across an insuperable difficulty for it. The situation is aggravated by the fact that the specified difficulties are not ones that one could hope to fix so as to save the paradigmatic conception of

vagueness. The problem of higher-order vagueness is a serious obstacle to accepting the basic assumption of the paradigmatic conception of vagueness precisely because, as we aim to show, all the projects for dealing with higher-order vagueness have a principled problem with it, and one cannot hope to solve this problem by modifying any of these accounts of vagueness. We acknowledge that we do not have a positive story about the right conception of vagueness. That question could be the subject of a whole new project. Yet, if the discussion we pursue is successful, the central presuppositions of the paradigmatic conception of vagueness cannot be taken for granted and need reexamination, which amounts to rethinking the whole basic vagueness phenomenon.

CHAPTER 2
FINE'S TREATMENT OF HIGHER-ORDER VAGUENESS

Overview. In this chapter, we will present and critically examine Kit Fine's (1975)[1] treatment of higher-order vagueness and Burgess's (1990)[2] criticism. Fine acknowledges higher-order vagueness and aims to accommodate it in his proposed account of vagueness based on a supervaluational framework. The plan of the chapter goes as follows: in Section 2.1, we will give a description of the basic supervaluational idea. In Section 2.2, we will present an application of this idea to the sorites paradox. In Section 2.3, we will present Fine's treatment of higher-order vagueness. Section 2.4 presents Burgess's challenge that Fine has not resolved the paradox. In Section 2.5, we try to give a possible response that Fine could make to this challenge. In Section 2.6 we will pursue a line of criticism akin to Burgess's, one which also aims to make a further point about Fine's treatment of higher-order vagueness. These considerations should yield the conclusion that higher-order vagueness presents an insuperable difficulty for Fine, and that there are no resources in Fine's strategy to account for the problems that we are concerned with.

2.1 Supervaluational Framework

The central project that Fine undertakes in "Vagueness, Truth and Logic" consists in attempting to specify truth-conditions for vague sentences. In order to implement this

[1] For all references to Fine in the thesis see (Fine 1975).
[2] For all references to Burgess in the thesis see (Burgess 1990).

project, he introduces a supervaluational framework that is supposed to accommodate two essential features of vague predicates: higher-order vagueness and what Fine calls penumbral connections. The main idea of the supervaluational approach consists in considering not only the truth-values that vague sentences actually admit, but also the truth-values that they could admit after being made more precise. The underlying idea of the supervaluational framework is that vague sentences have truth-values. However, we evaluate vague sentences not just according to the actual truth-values that they might have, but according to the truth-values that they could have after precisifying the vague terms that they involve. Within this framework a vague sentence is true just in case it is true for all ways of making it completely precise (that is, supertrue), false just in case it is false for all ways of making it completely precise (that is, superfalse), and neither true nor false otherwise. Success in this project is expected to lead to the dissolution of the sorites paradox, and consequently to an answer to the question of what has gone wrong with the soritical argument.

At the core of the proposed framework is the characterization of vagueness as a semantic phenomenon. Vagueness is, as Fine puts it, deficiency of meaning. That is, the meaning of vague predicates, and hence the meaning of vague sentences, is underdetermined by the rules of the language. The meanings can, however, be made more complete, but there are constraints on what the possible completions of vague meanings can be. Such constraints include, for example, that what was true before making the meaning more precise must remain true after the meaning is completed.
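The quantification over precisifications just described can be illustrated with a small computational sketch. Everything in it is an invented toy model: the predicate 'tall' and the particular range of admissible cutoffs are assumptions made up for the example, and Fine of course gives no such numbers. The point is only the shape of the definition: supertrue means true on every admissible precisification.

```python
# Toy model of supervaluational evaluation. Each admissible precisification
# of the vague predicate 'tall' is modeled as a sharp height cutoff; the
# cutoff range 170-190 cm is an invented assumption for illustration.

ADMISSIBLE_CUTOFFS = range(170, 191)  # each cutoff = one complete precisification

def supervaluate(height_cm):
    """Classify 'x is tall' as supertrue, superfalse, or indefinite."""
    verdicts = {height_cm >= cutoff for cutoff in ADMISSIBLE_CUTOFFS}
    if verdicts == {True}:
        return "supertrue"    # true on every admissible precisification
    if verdicts == {False}:
        return "superfalse"   # false on every admissible precisification
    return "indefinite"       # true on some precisifications, false on others

print(supervaluate(200))  # supertrue: a clear positive case
print(supervaluate(150))  # superfalse: a clear negative case
print(supervaluate(180))  # indefinite: a borderline case

# A penumbral truth in miniature: 'x is tall and x is not tall' is false
# on EVERY precisification, even when x is a borderline case, so it is
# superfalse although its conjuncts are indefinite.
borderline = 180
assert all(not ((borderline >= c) and not (borderline >= c))
           for c in ADMISSIBLE_CUTOFFS)
```

Note that the indefinite verdict is computed from the spread of verdicts across precisifications rather than being a third truth-value assigned directly, matching Fine's insistence that 'indeterminate' is not a further semantic category.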

The main motivation for the supervaluational framework, and for this approach to the problem of the sorites paradox, lies in dissatisfaction with the truth-functional approach to the logical connectives, which presupposes the principle of bivalence. Such an approach, according to Fine, is not able to accommodate what he calls penumbral connections, for it leaves vague sentences without truth-value. This will become clearer once we say what, for Fine, a penumbral connection is. The notion of penumbral connection, and the corresponding notion of penumbral truth, are defined as the possibility that logical relations hold among predicates, and among sentences, which are, due to their vagueness, indefinite in truth-value. The best way to see what Fine has in mind is via an example, and he himself introduces this notion partly by example. Fine takes, for instance, a vague sentence P which says that a certain blob is red. He points out that "P and not-P" is always false, even when P is indeterminate in truth-value (i.e., when the blob is a borderline case of the predicate 'red'). The truth of the sentence "It is always false that P and not-P" is a penumbral truth, according to Fine. The sentence in question always has a determinate truth-value even though P is vague, and hence indeterminate in truth-value. Let us now take, following Fine, another vague sentence, R, which says that the blob is pink. The conjunction of P and R is indefinite, due to the vagueness of both P and R. One might wonder how this could be: how can the truth-value of a conjunction sometimes depend on the truth-values of its conjuncts, and sometimes not? Fine has a ready rationale for the difference in truth-value between "P and not-P" and "P and R". The difference in truth-value between these two conjunctions, according to Fine, corresponds to a difference in how the sentences in question can be made more precise by sharpening the vague

predicates that they contain. The sentence "P & ~P" is always false, no matter how we sharpen 'red', while "P & R" is true under some sharpenings of what P and R say, and false under others, and hence is neither true nor false. To illustrate this, Fine brings in the vague predicate 'small' as an example. The sentence "This blob is red and this blob is not red" is always false, according to what has been said above, for no matter which sharpening of 'red' we take, a blob cannot satisfy both 'red' and 'not red'; that is, there is no sharpening under which the blob can be made a clear case of both. Contrary to the case of 'red' and 'not red', the sentence "This blob is small and red" is neither true nor false: on some sharpenings of 'small' and 'red' it is going to be true, on others false, and hence the sentence is indeterminate in truth-value. A salient feature of the sentence "This blob is red and small" is that it could sometimes be true, namely when the blob is a clear case of both 'red' and 'small'.

Now, one might think that to say that a sentence is indeterminate in truth-value is to introduce another semantic category, namely the indeterminate. Fine's response to this is that 'indeterminate' has a peculiar status, one which is not the status of a semantic category. Fine emphasizes that a vague sentence can lack a (super)truth-value while still having a truth-value on every so-called precisification.

The framework for evaluating vague sentences that Fine develops is based on the notion of admissible precisification. According to Fine, a precisification of a predicate is admissible so long as it (i) includes all the clear positive cases for the predicate, and (ii) excludes all the clear negative cases for the predicate.

According to Fine, a vague sentence is true just in case it is true for all ways of making it completely precise, that is, under all admissible precisifications. Fine coins the

term 'supertruth' for sentences that meet this condition. Thus, a vague sentence is said to be true just in case it is true on all admissible precisifications of the vague terms in it.

2.2 The Application of Supervaluational Strategy to the Soritical Argument

Let us turn now to the application of the supervaluational strategy to the sorites paradox, and to Fine's answer to the question of what has gone wrong with the soritical argument. Consider a series of people starting with a clearly tall person and ending with a clearly short person, where the difference between subsequent members of the series is negligible (say, less than a millimeter). This series is a soritical series, and we can construct the following soritical argument:

1. X_1 is tall.
2. For all X_i, if X_i is tall, then X_(i+1) is tall.
3. Therefore, X_n is tall,

where X_n is of height 1.5 m, which clearly contradicts the supposition that the last member of the series is clearly short.

How does Fine's approach shed light on the sorites paradox? The answer that Fine provides consists in the claim that the major premise of the soritical argument is false, and hence that the argument is unsound. This is so because there is a sharpening of 'tall', say 'tall*', which is such that 'tall*' applies to X_i and does not apply to X_(i+1). In other words, there will be a greatest i such that X_i satisfies the predicate in question and its successor does not.

2.3 Supervaluationism and Higher-Order Vagueness

A natural response to this approach to the sorites paradox consists in the charge that, as it stands, Fine's supervaluational strategy of sharpening vague predicates (and the

notion of admissible precisification in particular) would seem to presuppose that there is a clear semantic demarcation between the cases to which a vague predicate applies, the cases to which it fails to apply, and the borderline cases. If that is right, Fine fails to account for the phenomenon of higher-order vagueness.

Fine, however, has a ready answer to the problem of higher-order vagueness, which he thinks, besides penumbral connections, is an essential feature of vague predicates. In fact, he thinks that it is necessary for a predicate to be higher-order vague in order to count as a vague predicate at all. His response to the charge that supervaluationism sets sharp boundaries to vague predicates is to say that the notion of admissible precisification is itself vague. That in turn implies that the notion of supertruth is vague too, since it is defined in terms of the notion of admissible precisification. Since the notion of supertruth belongs to the meta-language, and admissibility of precisification is central to it, the meta-language must be vague too, rather than precise. Thus, it turns out that the truth predicate is vague, due to the vagueness of the notion central to its analysis, and hence higher-order vague. Thus, the strategy of supervaluations respects higher-order vagueness by being applied to the object-language, which is precisified and whose boundaries are fixed at the object level, while at the same time higher-order vagueness is respected by going one level up in the hierarchy of meta-languages. In other words, vagueness is reflected in a vague meta-language through the vagueness of the truth predicate. However, the story of sharpening does not end here, for the meta-language, in which the analysis of the object-language is given, is itself vague and needs to be precisified, while its vagueness is reflected in the meta-meta-language, and so on indefinitely.
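The first level of this picture, Fine's diagnosis of the inductive sorites, can be made concrete with a toy model. The heights and the set of admissible cutoffs below are invented for illustration (Fine supplies no such numbers): under every admissible sharpening the inductive major premise comes out false, so it is not supertrue; yet different sharpenings locate the cutoff in different places, so, to a first approximation, no single boundary is privileged by the theory.

```python
# Toy sketch of Fine's diagnosis of the inductive sorites. The heights
# and the range of admissible cutoffs are invented for the example.

heights = [190 - i for i in range(0, 41)]  # 190 cm down to 150 cm, 1 cm steps
cutoffs = range(165, 186)                  # assumed admissible precisifications

def inductive_premise_holds(cutoff):
    """'For all i: if X_i is tall then X_(i+1) is tall' under one sharpening."""
    return all(not (h >= cutoff) or (nxt >= cutoff)
               for h, nxt in zip(heights, heights[1:]))

# Under every admissible sharpening the major premise comes out false,
# so it is not supertrue -- this is why Fine calls the argument unsound.
assert not any(inductive_premise_holds(c) for c in cutoffs)

# But different sharpenings put the last 'tall' member at different places,
# so no one cutoff is singled out at this level of approximation.
last_tall_index = {min(i for i, h in enumerate(heights) if h < c) - 1
                   for c in cutoffs}
print(len(last_tall_index) > 1)  # True: the cutoff varies with the sharpening
```

On Fine's story, the residual appearance of a boundary (between the cutoffs that are admissible and those that are not) is then deferred to the vagueness of 'admissible' itself, one level up in the meta-language.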

The upshot of the approach sketched by Fine is to allow one to say that the major premise of the soritical argument is not true, because it is not supertrue, without imposing any sharp boundaries between different semantic categories. Indeed, it will be no surprise that there is some sharpening of the predicate 'tall', say 'tall*', such that there is some X_i which is the last object in the soritical series to which 'tall*' applies, while it does not apply to its successor X_(i+1). This, according to Fine, does not presuppose sharp boundaries, for it is true just to a first approximation. By reapplying the strategy we get the result for the second approximation, and so on indefinitely. Thus, supervaluationism is said not to presuppose sharp boundaries, and hence to respect higher-order vagueness.

2.4 Resurrection of the Paradox

We have seen what Fine's response to the sorites paradox is when the soritical argument has a general inductive premise as its major premise. Yet if Fine has resolved the paradox, his strategy has to be applicable to the soritical argument when it is given in a different fashion. So, consider again our soritical series of people ordered according to height, starting with a clearly tall person and ending with a clearly short person (where the difference between any two subsequent members of the series is less than a millimeter). We can write the soritical argument as follows:

1. X_1 is tall.
2. If X_1 is tall, then X_2 is tall.
3. If X_2 is tall, then X_3 is tall.
...
n. X_n is tall,

where X_n is of height 1.5 m, which contradicts the original supposition that X_n is clearly short. In this form, the argument has no general inductive premise, but only a stepwise series of conditionals, where each conditional has the form "if X_n is tall, then X_(n+1) is tall". If we write the soritical argument in this form, that is, as a series of conditionals instead of using the general inductive premise, then, with the help of a finite number of applications of Modus Ponens, we get the same paradoxical result: that someone whose height is only 1.5 m is tall, for example.

Burgess has challenged Fine's approach on the grounds that it does not yield a satisfactory solution to the sorites argument when it has the form of the stepwise series of conditionals instead of a general inductive premise. Burgess explicates the difficulty for Fine's, and any supervaluational, approach by pointing out that if the supervaluational story is applied to the stepwise soritical argument, in which there is only a finite series of conditionals, there will be a first conditional which is not supertrue. However, taking any nth conditional as the first one which is not supertrue implies that there is a sharp boundary for the vague predicate. The upshot of running the soritical argument with the stepwise series of conditionals instead of the general inductive premise is to show that the first-level supervaluational story fails to solve the sorites paradox. If the strategy really worked, it would be equally applicable to the second form of the soritical argument and not only to the argument with the generalized inductive premise.
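Burgess's point can be made concrete with a toy computational model (the heights and the range of admissible cutoffs are invented for the example, as is the particular index the model prints): because the series of conditionals is finite, evaluating each conditional for supertruth must yield a first conditional that is not supertrue, which looks like exactly the sharp boundary the theory disavows.

```python
# Toy sketch of Burgess's objection to the first-level supervaluational
# story. Heights and admissible cutoffs are invented for illustration.

heights = [190 - i for i in range(0, 41)]  # 190 cm down to 150 cm, 1 cm steps
cutoffs = range(165, 186)                  # assumed admissible precisifications

def conditional_supertrue(i):
    """Is 'if X_i is tall then X_(i+1) is tall' true on every precisification?"""
    return all(not (heights[i] >= c) or (heights[i + 1] >= c) for c in cutoffs)

# Because there are only finitely many conditionals, there is a FIRST one
# that fails to be supertrue -- a sharp semantic demarcation after all.
first_failure = next(i for i in range(len(heights) - 1)
                     if not conditional_supertrue(i))
print(first_failure)  # 5 in this toy model
```

The boundary falls at a different index for a different choice of admissible cutoffs, but on any fixed first-level choice some first non-supertrue conditional exists, which is Burgess's complaint.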

2.5 Fine's Expected Reply

What could Fine say about the soritical argument in this form? We can extrapolate from Fine's treatment of the inductive soritical argument that he will want to claim that the stepwise soritical argument is also unsound, while at the same time denying that any nth conditional is the first one which is not supertrue. In short, Fine will want to appeal to the vague meta-language. He will probably think that just reapplying the strategy employed for the first form of the soritical argument will help with the stepwise form, for the reapplication of the strategy is thought to be capable of doing the trick of not picking out any nth conditional as the first one which is not supertrue.

Now, the reason why one would think that the reapplication of the strategy would help with the stepwise form of the soritical argument is that one might think, following the supervaluationists, that the approach in question only seems, on the face of it, to impose sharp boundaries between the two semantic categories, supertrue and superfalse. The worry that the supervaluational approach sets precise boundaries neglects the fact that the notion of admissible specification is vague. Generating sharp boundaries would mean that the notion of admissible specification is precise, which is clearly not the case in Fine's story. The first-level story that supervaluationism offers seems to be committed to a sharp boundary between supertrue and superfalse only because it is an approximation. As an approximation it does seem to set sharp boundaries, but they are at the same time avoided, since we do not stop applying the strategy. If we do not stop reapplying the strategy, we are safe from sharp boundaries.

2.6 Two Problems for Fine

An immediate worry that arises with the commitment not to stop applying the supervaluational strategy is that there is a tension between this commitment and the fact that there is only a finite number of conditionals in the series. It seems that the reapplication of the strategy must stop somewhere, since there are only so many conditionals, and only so many things in the soritical series. Now, given that there is only a finite number of conditionals, the question is how Fine can both maintain the view that there are no sharp boundaries and avoid picking any nth conditional as the first one in the soritical series which is not supertrue. For, by reapplying the strategy, at every next level fewer and fewer conditionals are going to meet the criterion of being supertrue. The nature of admissible sharpening is such that not all the cases that were true all the way up at some level must be counted in at every further level of approximation. So, superpositive cases can lose their status as we go up in the hierarchy. But since the sorites series is finite, the iteration of the strategy must give out at some finite stage. If it does not, there is a worry that nothing is going to be counted as supertrue, for reapplication of the strategy at every higher level is going to remove more and more cases that were originally counted in.

Burgess pushes this critical point against the supervaluational higher-order vagueness strategy by emphasizing that at least the first sample in the soritical series does absolutely definitely satisfy the vague predicate. This means that the vague sentence containing the predicate in question is supertrue not just to some approximation; it is true on all admissible precisifications all the way up. We also accept that not all the cases are like this. There are some clear negative cases, the cases that fall out all the

way up. Thus, in the series of conditionals, some of them (at least the first one) are true all the way up, and not all of them are like that. Thus, there will be a first conditional that is something other than absolutely definitely true. Also, there is nothing in Fine's strategy, or in the supervaluational strategy in general, that would make 'absolutely definitely' vague. For there is no vagueness of the matter in 'absolutely definitely true', and hence no further vagueness.

It seems that Burgess' complaint against the supervaluationist is right, and he has offered a compelling argument against the supervaluational story when we are presented with the soritical argument as a stepwise series of conditionals instead of a generalized inductive premise. It is not at all clear that Fine's approach has any resources to answer this complaint. So, Fine's attempt to handle higher-order vagueness does not look promising.

Not only has Fine not resolved the paradox, but it also seems that sharp boundaries appear after all. For take again into account the supervaluationist's story about the sorites argument given in terms of the series of conditionals. Fine would want to say that there are some instances of the general inductive premise, that is, some conditionals, which are not true. However, they are not false either; they are neither true nor false. But if higher-order vagueness is to be respected, then there cannot be a sharp boundary between the conditionals that are true, those that are neither true nor false, and the conditionals that are false. If this is so, then the range of borderline cases is going to get bigger, and each sharpening reduces the number of clear positive cases, until none is left. This is clearly a problem for Fine, for every case ends up either positive or not, and hence sharp

boundaries emerge after all. Worse yet, it looks as if nothing is going to be supertrue in this picture, for the criterion for being supertrue cannot ever be met.

In what follows we attempt to give a careful formulation of the structure of the reapplication strategy, in order to corroborate Burgess' criticism and to secure this further point. Consider a series of objects, a1, a2, ..., an, ..., am, ordered by height in such a way that the first member of the series is the tallest, and hence clearly tall, and the height of the objects decreases as we move along the series. Then we can define possible extension sets for 'tall': tn = {ai : i ≤ n}. To represent the notion of admissibility formally, we can use the following symbolism:3

Adm1[A] iff_df A ⊆ {ti}i≤m,

where 'Adm1[A]' says that A is a possible first-level set of admissible sharpenings; and the same holds for higher language levels, namely

Admk+1[A] iff_df A ⊆ {B : Admk[B]}.

Now, Fine's supervaluational truth-conditions for the vague sentence 'n is tall' commit him to the following:

There is an A1 such that (i) clearly Adm1[A1], and (ii) to a first approximation, 'n is tall' is supertrue iff ∀ti ∈ A1, n ∈ ti.

3 The definition is undoubtedly too broad, but it does not matter for our critical points in what follows.

In virtue of the reapplication strategy, however, Fine is also committed to there being at least one such set at level two, that is:

There is an A2 such that (i) clearly Adm2[A2], and (ii) to a second approximation, 'n is tall' is supertrue iff ∀A1 ∈ A2, ∀ti ∈ A1, n ∈ ti.

And so on for every level:

There is an An such that (i) clearly Admn[An], and (ii) to an nth approximation, 'n is tall' is supertrue iff ∀An-1 ∈ An, ∀An-2 ∈ An-1, ..., ∀A1 ∈ A2, ∀ti ∈ A1, n ∈ ti.

Thus, there is at least one sequence, ⟨A1, A2, A3, ...⟩, meeting the above conditions. Also, since each admissible sharpening An+1 is a clear case of admissibility at level n+1, it should include all clear cases of admissible sharpening at level n. So we should have A1 ∈ A2, A2 ∈ A3, etc. This implies that negative judgments about what is to be counted in at the previous levels do not ever go positive as we go up in the hierarchy. So, in the end, 'n is tall' is supertrue just in case it is positive all the way up, and false otherwise. One can imagine Fine complaining, however, that we have just redefined the notion of supertruth. Our rejoinder is that Fine is committed to this notion of supertruth, for it comes in the same package with his reapplication strategy, if he is to respect higher-order vagueness.

Now, it looks as if all this allows the re-emergence of boundaries, and hence higher-order vagueness is not respected after all. For each integer, either that integer is counted as positive all the way up, or not, and there is no vagueness about this; nothing in the supervaluational account suggests otherwise. Moreover, if an goes all the way up, then all am, m ≤ n, do as well. So, each case either goes all the way up or fails to go all the way up. Thus, there is a greatest n that does go all the way up: all of a1, ..., an are supertruly tall, but an+1 is not. Clearly, sharp boundaries emerge after all.
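The shrinking of the supertrue cases under iteration can be pictured with a small toy model. This is our own construction under stated assumptions (the series length, the admissible set, and the rule that each higher level demotes the current last positive case are all illustrative), not Fine's formalism.

```python
# Level-1 sharpenings for a 10-member series: t_i = {a_0, ..., a_i}.
m = 10
t = [set(range(i + 1)) for i in range(m)]

def supertrue1(n, A1):
    """'a_n is tall' is supertrue to a first approximation iff a_n belongs
    to every sharpening t_i in the admissible set A1."""
    return all(n in ti for ti in A1)

A1 = t[4:8]  # an assumed admissible set of sharpenings (t_4, ..., t_7)
positives = {n for n in range(m) if supertrue1(n, A1)}  # {0, 1, 2, 3, 4}

# Iterating the strategy: to keep the current greatest positive case from
# marking a sharp boundary, each higher approximation demotes it.
levels = 0
while positives:
    positives.discard(max(positives))
    levels += 1
# After finitely many levels (here 5) nothing is counted supertrue.
```

On these assumptions the erosion gives out only when the set of positive cases is exhausted, which is the dilemma pressed above: either the iteration stops at a finite level, yielding a sharp boundary, or nothing ends up supertrue.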

The further point that the formal structure of the reapplication strategy reveals is that the emergence of sharp boundaries is not the worst result we get by reapplying the supervaluational strategy. What looks even worse on this account is that all that the further approximations can be doing is taking out a few more cases which were positive at the lower levels. Unless higher-order vagueness gives out at some finite level, we get:

If ⟨A1, ..., Ak⟩ counts a1, ..., an as supertruly tall, then there is some k' > k such that ⟨A1, ..., Ak'⟩ does not count an as supertruly tall.

That is, if we have for some n that an+1 is not supertruly tall, that is, if an+1 does not go all the way up, then, if an is not to be a sharp boundary, an must not be supertruly tall either. But if this is so, then all am, m ≤ n, must not be supertruly tall. If this is correct, then, assuming that our sorites series has only a countable number of items, the full sequence must count no one as supertruly tall; that is, there will be no positive cases. Thus, given that the account is correct, nothing is going to be counted as supertrue, for nothing can meet the condition for being supertrue. A parallel argument can also be constructed following this chain of reasoning, with the result that nothing is going to be superfalse either.

Conclusion. In light of the foregoing discussion we can conclude that Fine's treatment of higher-order vagueness is not satisfactory. The supervaluational strategy cannot resolve the problem of higher-order vagueness. Moreover, the basic first-level supervaluational story does not work, and it leaves us short of a solution to the problem of vagueness. It turns out that the problem of higher-order vagueness is an insuperable difficulty for supervaluationism, as presented by Fine. If the foregoing discussion is correct, we have learned that sharp boundaries emerge after all. Also,

another unresolved difficulty for Fine is that it looks as if, on this account, nothing is going to be supertrue. Now, before we turn to Burgess' positive story about higher-order vagueness, we want to take a brief look at another strategy based on the paradigmatic conception of vagueness that fails to give a satisfactory treatment of higher-order vagueness, for reasons similar to those on which Fine's strategy fails. We turn to the degree theory.

CHAPTER 3
THE DEGREE THEORY AND HIGHER-ORDER VAGUENESS

Overview. In what follows we focus on the degree theory of vagueness, which approaches the phenomenon of vagueness by introducing a continuum-valued semantics. The degree theory also accepts the paradigmatic conception of vagueness insofar as it treats vague terms as characteristically giving rise to borderline cases. We discuss it here not because one might hope to find something illuminating in the degree theory itself, but only to show that this strategy also fails to reconcile the paradigmatic conception of vagueness with the problem of higher-order vagueness. After a brief description of the basic idea of the degree theory and its continuum-valued approach (Section 3.1), we turn to a criticism that establishes this (Section 3.2).

3.1 The Basic Idea of a Degree Theory

The basic idea of the degree theory is to give a continuum-valued semantics for vague predicates. The argument for the degree theory goes roughly as follows. Consider a vague predicate, 'heap'. A thing can be more or less of a heap. So, we can naturally think of heapness as coming in degrees. Consequently, the truth of the sentence 'x is a heap' comes in degrees too. The degrees of truth that a sentence could have are represented by the closed interval of real numbers [0, 1]. The sentence 'x is a heap' could admit an uncountable infinity of values, corresponding to the uncountable infinity of numbers in this interval. This is supposed to secure that the boundary between the positive cases and the negative cases of the application of the predicate is defused. Admittedly, both x and y can be heaps, yet x can be more of a heap than y, depending on where on the scale it is. This

in turn means that neither 'y is a heap' nor 'y is not a heap' is true, if y is a heap to the degree 0.412. Rather, 'y is a heap' is true to the degree 0.412, and 'y is not a heap' is true to the degree 0.588. Sharp boundaries between the two semantic categories, true and false, have been avoided, since there is an uncountable infinity of numbers between 0 and 1, corresponding to the degrees of heapness that an object could exhibit. Consider again the sentence 'x is a heap'. If the object in question exhibits heapness to the degree 0.412, then the sentence 'x is a heap' is true to the degree 0.412.

An appeal to the interval of numbers between 0 and 1 is motivated by an attempt to avoid the arbitrariness of a semantics given in finitely many values. Introducing the continuum of values is supposed to do the trick of avoiding the choice of any particular segment in the series as the exact place where a non-heap converts into a heap, in a series of objects that are continuously transforming from a non-heap to a heap. Degree theory is thus motivated by an attempt to keep the boundaries unsharp, and yet to avoid arbitrariness. But sharp boundaries and/or arbitrariness seem to come with the meta-language.

3.2 Meta-Language, Vague or Precise?

Although there is no sharp boundary within the interval between 0 and 1, there is still a sharp boundary between the cases that have degree 0 and those that have some nonzero degree. This conflicts with the intuition that vague predicates are at least second-order vague. Thus, it looks like the degree theory has accommodated only one part of the intuitive story about vague predicates, namely the intuition that they are first-order vague, but has not accommodated the phenomenon of higher-order vagueness.
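The continuum-valued assignments under discussion can be sketched with the standard degree-theoretic (fuzzy) connectives. The particular degree function for 'heap' is an illustrative assumption of ours; only the clauses for negation and conjunction follow the standard degree theory.

```python
def degree_heap(grains: int) -> float:
    """Toy degree of truth of 'x is a heap', growing from 0 to 1 with the
    number of grains (the 10000-grain saturation point is an assumption)."""
    return min(1.0, grains / 10000)

def neg(v: float) -> float:
    """Degree of 'not p' is 1 minus the degree of p."""
    return 1.0 - v

def conj(v: float, w: float) -> float:
    """Degree of 'p and q' is the minimum of the two degrees."""
    return min(v, w)

v = degree_heap(4120)   # 'y is a heap' true to degree 0.412
w = neg(v)              # 'y is not a heap' true to degree 0.588
```

Note that the sketch also exposes the criticism that follows: the function assigns exact real values, so 'is true to the degree 0.412' behaves as a perfectly precise metalinguistic predicate.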

Consider again the sentence ''x is a heap' is true to the degree 0.412'. Now, one might ask what the truth-value of this sentence is, that is, whether it is true or false. This question corresponds to the general question whether the metalanguage of the degree theory is vague or precise, that is, whether the complex predicate 'is true to the degree 0.412' is vague or precise. Since a simple denial of higher-order vagueness, and appeal to a precise meta-language, is not an available option, it looks like the degree theory should apply to the metalanguage too. If a vague language requires a continuum-valued semantics, that should apply in particular to a vague metalanguage. The vague metalanguage will in turn have a vague meta-meta-language, with a continuum-valued semantics, and so on all the way up the hierarchy of meta-languages.1 We have already shown in Chapter 2 the principal difficulty with the strategy of progressing up the hierarchy of metalanguages. A degree theorist who would like to say that the metalanguage is vague, and that it itself requires a continuum-valued semantics, is no better off than Fine with respect to the problem of higher-order vagueness.

One might suggest not taking the numbers too seriously, but treating them just as a useful approximation for modeling vague predicates. One might very well grant the usefulness of the approximation, but the question is then whether we have been told when 'x is a heap' is true at all. It seems clearly not. Also, if the proposed theory is just a useful modeling of vague predicates, then there are some competing modelings that are far superior to this one, in terms of consistency with some independently plausible

1 This style of criticism has been offered in (Williamson 1994, p. 128).

principles, such as the principles of classical logic. So, even in the game of usefulness, the degree theory loses.

Conclusion. We are not surprised that the foregoing discussion, if correct, yields the conclusion that there are no resources in the degree theory that could give a satisfactory treatment of higher-order vagueness. The reason one might have hoped to find in the degree theory a promising way to go regarding the problem of higher-order vagueness is, as Williamson suggests, that one is misled by the view that the infinity of numbers defuses the sharp boundaries between the two semantic categories, true and false. However, the continuum-valued strategy suffers from the same defect as Fine's reapplication strategy, and criticisms analogous to those that apply to Fine's strategy can be extended to the continuum-valued strategy. The difficulties of the two strategies discussed, which have in common an attempt to accommodate higher-order vagueness that runs all the way up the hierarchy of borderline cases, lead us to move to a different treatment, one that attempts to deny that vagueness runs all the way up. We turn to Burgess' attempt to deny infinite higher-order vagueness.

CHAPTER 4
BURGESS' ANALYSIS OF THE SECONDARY-QUALITY PREDICATES

Overview. In the foregoing discussion we have seen how the problem of higher-order vagueness presents an insuperable difficulty both for Fine and his supervaluational strategy and for a continuum-valued strategy. Now we turn to another project, namely Burgess' (1990) treatment of secondary-quality predicates, for which Burgess aims to provide an analysis showing that they are only limitedly vague, and that their higher-order vagueness gives out at a fairly low finite stage. Respecting higher-order vagueness turned out to be problematic (for theorists like Fine) only because it was assumed that higher-order vagueness has no upper bound. So, success in the boundary-specifying project would have the effect of resolving an outstanding problem for various proposals, such as the ones that we have already presented. In the present chapter we will present Burgess' project and the proposed schema for the analysis of the secondary-quality predicates (Section 4.1). In subsequent sections we will describe two problems for Burgess' analysis of the secondary-quality predicates. Section 4.2 introduces the circularity problem of Burgess' schema, and Section 4.3 introduces the problem of the unacknowledged source of vagueness in the schema. If we are right, the problems that we specify for Burgess show that his analysis falls short of its goal; the analysis fails to support his central thesis that higher-order vagueness terminates at a low finite level.

4.1 Burgess' Project

Burgess' central thesis about higher-order vagueness is the claim that it terminates at a low finite order. This means that it is possible to spell out truth-conditions for a vague predicate that specify a boundary for the vague predicate. The central project of Burgess' essay is to provide a nonarbitrary, nonidealized, boundary-specifying analysis of secondary-quality predicates that proves this central thesis. Burgess proposes the following schema for the analysis of a secondary-quality predicate, F:

(A*) x ∈ Ext_Lt(F) iff For most u (u is normal at t & u is competent at t: ∀C(C is F-suitable for x at t → (u observes x in C at t □→ x seems F to u at t))). (p. 438)

The proposed schema is supposed to fix the extension of the vague secondary-quality predicate. 'x ∈ Ext_Lt(F)' says that x is a member of the extension of the predicate F, and '(u observes x in C at t □→ x seems F to u at t)' is a counterfactual conditional, which is true just in case the consequent is true in all the closest worlds in which the antecedent is true. What Burgess needs to establish about the proposed analytic schema is that a boundary-specifying analysis can be given for all the elements of the schema that are possible sources of vagueness. These elements, according to Burgess, are the following expressions: 'u is normal at t', 'u is competent at t', 'most', the counterfactual construction, and 'F-suitable'. Succeeding in this project enables Burgess to calculate the order of vagueness that secondary-quality predicates exhibit, to show that these predicates are bounded, and to show where those boundaries lie.
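The quantificational structure of (A*), with 'for most' ranging over observers and a universal quantifier over suitable conditions, can be mimicked in a toy computation. Everything here (the observers, the conditions, the sample patches, and the lookup table standing in for the counterfactual 'would seem F') is an illustrative assumption of ours, not Burgess' own apparatus; in particular, the counterfactual is flattened to a simple table, which loses its modal force.

```python
# Toy stand-ins: all names and the 'seems' table are assumptions.
observers = ["u1", "u2", "u3"]          # all assumed normal and competent
conditions = ["daylight", "lamplight"]  # assumed F-suitable conditions
samples = ["patch_red", "patch_border"]

# seems[(u, x, C)] stands in for 'were u to observe x in C, x would seem F
# to u'.
seems = {(u, x, C): x == "patch_red"
         for u in observers for x in samples for C in conditions}

def in_extension(x):
    """(A*), toy version: x is in Ext(F) iff for most (here: a strict
    majority of) observers, x would seem F in every F-suitable condition."""
    agreeing = sum(all(seems[(u, x, C)] for C in conditions)
                   for u in observers)
    return agreeing > len(observers) / 2
```

The sketch makes vivid why everything turns on how 'normal', 'competent', 'most', and 'F-suitable' are themselves specified: each is hard-coded here, and each is a locus of vagueness in the real schema.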

In order to achieve a boundary-specifying analysis of secondary-quality predicates, Burgess not only needs to establish that a boundary-specifying analysis can be given for all the constituents of (A*); the proposed schema must also not be viciously circular. That means that the constituents of the analysis in (A*) must not explicitly or implicitly appeal to the notion that we want to give an analysis for, in this case the secondary-quality predicate in question. So, the main purpose of the analysis is to break down, and bring to light in limitedly vague terms, what it is for x to be red, for example.

4.2 The Circularity Problem in the Proposed Schema

Now, it seems immediately obvious that the proposed analysis will be circular once we try to spell out the notion of suitable conditions that figures in the analysans of (A*). Burgess acknowledges that (A*) suffers from a kind of circularity, but he thinks that this is not a vicious circularity, and hence that it is not a problematic feature of the proposed analysis of vague predicates, but is, in fact, essential to it. The circularity Burgess acknowledges comes from the analysis of the notion of F-suitability, and Burgess argues that this circularity is crucial in order for the notion of F-suitability to perform the function required of it, which is tracking F-ness closely. Since the analysis does not purport to be a reductive analysis, he claims, this much circularity is not a problem. The analysis of the notion of F-suitability goes as follows:

(C*) Conditions C are F-suitable for x at t iff For most u (u is normal and competent at t: (u observes x in C at t □→ (x seems F to u at t ↔ x ∈ Ext_Lt(F)))). (p. 453)

The charge of circularity seems to be fully appropriate, however. Burgess uses the notion of F-suitability to analyze an object's being F, and he appeals to the notion of

being F in the characterization of F-suitability. So, when we do the substitution in (A*) according to the proposed analysis of F-suitability, we get:

x ∈ Ext_Lt(F) iff For most u (u is normal at t & u is competent at t: ∀C(For most u (u is normal and competent at t: (u observes x in C at t □→ (x seems F to u at t ↔ x ∈ Ext_Lt(F)))) → (u observes x in C at t □→ x seems F to u at t))).

Clearly, we have 'x ∈ Ext_Lt(F)', which is only a different way of saying that x is F, in both the analysandum and the analysans. This seems to be a problem for Burgess' project, since he not only promises to give an analysis of a secondary-quality predicate; he also wants to break down the higher-order vagueness of the secondary-quality predicates. The predicate 'is F' is the vague secondary-quality predicate for which we want to give an analysis, and moreover an analysis in terms which are shown to be limitedly vague, if we are to calculate its order of vagueness. However, given Burgess' analysis, we cannot do that. For if 'is F' appears in the analysans, then we need to give an analysis for it, for it needs to be shown to be a limitedly vague secondary-quality predicate. That is, we need to carry the analysis further for 'is F'. But the analysis for 'is F' is supposed to be given by (A*), so we have (A*) figuring in the analysans for the predicate 'is F'. That is, part of the analysis of (A*) is (A*) itself.

This seems to be far from a benign circularity, for the original motivation for giving an analysis of the secondary-quality predicates was not to give an analysis for its own sake; the idea was to specify a way to calculate the order of vagueness by showing that the predicate in question is analyzable in limitedly vague terms. To show that this circularity is not vicious in character, but a welcome feature of the offered analysis, Burgess defends only its status as a non-reductive analysis of the proposed schema, (A*). But it looks as if he has forgotten what the analysis is supposed

to do: he has forgotten that the upshot of giving the analysis is to show that secondary-quality predicates are limitedly vague, and merely establishing that this is an analysis of some kind is irrelevant to the announced goal of the project. The result is that 'is F' in the analysans of (A*) has not been shown to be limitedly vague. This, apparently, makes Burgess' project of a boundary-specifying analysis of the secondary-quality predicates fail, for it does not enable us to calculate the order of vagueness of the predicates in question. This is what makes the circularity in Burgess' analysis vicious rather than benign.

4.3 The Problem of the Unacknowledged Source of Vagueness in the Proposed Schema

Another difficulty with (A*) as a boundary-specifying analysis of secondary-quality predicates is the use of 'seems F'1 in the analysans of (A*), which was not acknowledged as a relevant source of vagueness by Burgess, although it appears to be vague in precisely the same way as 'is F'. Since Burgess aims for a boundary-specifying analysis of the secondary-quality predicates, all the constituents of the analysis of (A*) must be shown to be at most limitedly vague. Yet he fails to identify 'seems F' as a possible source of vagueness. Now the question is why Burgess fails to acknowledge 'seems F' as a possible source of vagueness. One can only suppose that Burgess thinks that 'seems F' is clearly not vague. However, this is a mistake, which is a consequence of a not uncommon confusion regarding this expression.

1 Here one might be worried that 'seems F' involves a family of notions and that our criticism hinges on how the notion of 'seems F' is spelled out. So, one might say that we use 'seems F' as a phenomenal notion as opposed to an epistemic notion, for example.
However, even the epistemic notion, spelled out so that x seems F to one just in case one believes, or is inclined to believe, that x is F, falls under our criticism for the same reason that the phenomenal notion is said to be vague. That is, the relevant vague term, namely 'is F', is used in both of them.

In order to illustrate this confusion, take for example different patches that are all some shade or other of red. Now imagine an observer who is presented with these different patches. Each patch looks different to the observer, so she can discriminate between them. No doubt, there is a way that each of the patches appears to the observer color-wise, and each of the patches looks different to her color-wise. Not only can our observer discriminate between the patches, she can even invent a particular color term for each way that a patch looks to her color-wise. So, she introduces the term 'R007' in this way. Now, while the observer cannot be mistaken that the patch looks to her color-wise however it does, she can certainly be mistaken about whether it looks R007 to her. If we ask the observer whether a particular shade we are presenting her with is R007, what the observer is being asked to do is to categorize, although the category R007 is a very peculiar one. She can be mistaken about this; for example, if the light is not normal, then, when presented with a patch which is in fact not R007, she can judge that it is, because of the light. However, she cannot be mistaken that the patch looks to her the way that it does color-wise.

Now take the question whether each patch she is presented with seems red to her. In the clear cases of red, the observer has no trouble answering our question whether the patch seems red to her. In borderline cases, however, our observer expresses uncertainty about what to answer, and hesitates. In contrast to the case in which the observer does not hesitate and cannot be wrong that the patch looks to her that way color-wise, in this case, when asked whether the way the patch seems to her color-wise should be called 'red', she can be at a loss what to answer. Thus, although there is a particular way that the patch looks to the observer color-wise, when it comes down to the question

whether it seems red, she can be at a loss what to say. For what she has been asked to do is to categorize. Both 'seems F' and 'is F' work as categorical terms, and the only difference between them is with respect to what one has been asked to categorize. In both cases there is a color categorization, using the category red; the difference is that in the case of 'is F' one is asked to categorize patches, and in the case of 'seems F' one is asked to categorize not patches, but visual impressions. Clearly, since the relevant category in the case of 'seems red' is red, and it is vague, 'seems red' inherits vagueness from its sortal, and hence what is offered as an analysis of a secondary-quality term has not been shown to be less vague than what we aim to analyze. So, Burgess' failure to acknowledge 'seems F' as a relevant source of vagueness may be a consequence of this common confusion about 'seems F': confusing seeming F with seeming that way color-wise. He apparently takes 'seems F' as if it does not involve any categorization, which is a mistake.

Conclusion. If the charges of vicious circularity and of vagueness in the analysans are correct, then we can conclude that Burgess' analysis falls short of its goal: it fails to support his central thesis. That is, the difficulties in his analysis of secondary-quality predicates prevent it from being shown to be a boundary-specifying analysis that uses limitedly vague terms in the analysans, which is, of course, necessary if we are to calculate the order of vagueness of a secondary-quality predicate.

In the foregoing discussion, I specified two unresolved problems for the analysis that Burgess offers, namely the problem of the circularity of the schema (A*), and the problem of the vagueness of 'seems F' in the analysans of (A*). In virtue of these complaints we cannot but conclude that, as it stands, the analysis fails to prove Burgess' central thesis

that higher-order vagueness terminates at a low finite level. One might wonder, however, whether those problems are fixable by removing the sources of vagueness, or the circularity, from the analysans of (A*). Our answer to this question is No; and not only that, but it would also be Burgess' answer, in light of his commitments. The only reason Burgess thinks that the circularity is not a problem, and that the offered analysis is minimally plausible, is that he hopes that 'is F' can be cashed out in terms of 'seems F'. That looks like a good way to go, however, only because of the confusion about 'seems F' and its categorical role, to which we have already pointed. Thus, the answer to the question whether the analysis is hopelessly circular is, even according to Burgess, Yes. For if the hope of cashing out 'is F' in terms of 'seems F' breaks down, then we have hopeless circularity, given that 'seems F' is essential to the analysis of 'is F'. According to Burgess, 'seems F' is indispensable in the analysis of 'is F', and hence the analysis is unamendable, not for accidental reasons but for principled reasons. Thus, in virtue of Burgess' commitments, these problems are not fixable.

And certainly there is a commitment to some circular vagueness in the analysis he proposes; that was expected, given that we must use some language to talk about what is the object of analysis, and that language must be vague. However, there are two options: the language can be either limitedly vague or non-limitedly vague. But given that Burgess commits himself to the use of 'seems F' in the analysans of the proposed schema, which is dangerously close to 'is F' and essentially dependent on 'is F', we still have not been shown that higher-order vagueness terminates, and consequently we do not have the promised recipe for how to calculate the order of vagueness of secondary-quality predicates.


Accordingly, since Burgess' analysis fails even for the secondary-quality predicates, which are presumably the simplest case, we have good reason to think that Burgess did not show that the vagueness of other vague predicates terminates at a low finite order.

So far, we have seen that different attempts to give a satisfactory treatment of higher-order vagueness have failed. Besides the fact that they all share the paradigmatic conception of vagueness, these theories share the same dialectical situation with respect to the challenge of dealing with the problem of higher-order vagueness. Namely, they all aim to deal with the problem within the paradigmatic conception, without challenging its basic presupposition. We have seen that they are unable to deal with this problem, which motivates us to turn to an approach that occupies a different dialectical position: it aims to give a reason why one should hold on to the paradigmatic conception of vagueness and at the same time not worry about the problem of higher-order vagueness.


CHAPTER 5
HYDE'S RESPONSE TO THE PROBLEM OF HIGHER-ORDER VAGUENESS

Overview. Hyde's (1994)[1] approach to higher-order vagueness differs from the approaches that we have been discussing, or will discuss, in that his project is meta-theoretical. Since the paradigmatic conception of vagueness is not a single view about vagueness, but rather a generic name for different theories that have in common that they characterize the phenomenon of vagueness by the presence of borderline cases, it looks as if Hyde's approach is neutral between different theories inside the paradigmatic conception of vagueness. If this is correct, then even if Hyde's argument is successful, it does not provide sufficient grounds for deciding between different theories inside the paradigmatic conception; that is, it would not give us a criterion for deciding whether epistemicism is true, or supervaluationism, for example.

According to Dominic Hyde, higher-order vagueness is a real phenomenon, but he argues that it does not present a problem for the paradigmatic conception of vagueness. By showing that the problem of higher-order vagueness is a pseudo-problem, he aims to save the paradigmatic conception of vagueness as an adequate conception against the charge that what he calls the iterative conception of vagueness is both inescapable on the paradigmatic approach and misguided. So, how Hyde's treatment is supposed to help a theorist such as Fine consists in saying that the challenge one might put to Fine, that he must reapply his supervaluational strategy, is out of place, and the

[1] For all references to Hyde in the thesis see (Hyde 1994).


theorist such as Fine should do nothing to respond to this challenge, since it neglects that higher-order vagueness is already present in the language that the theorist has used in order to give a theory of vagueness. Since this language is vague, rather than precise, higher-order vagueness is already respected, and nothing needs to be done to accommodate it.

The crucial point that Hyde makes is that 'vague' is vague, that is, 'vague' is a homological term. That claim is supposed to lead to the conclusion that 'has borderline cases' is vague, and that consequently borderline cases have borderline cases. This feature of borderline cases does not, however, need to be explicitly stated in the analysis of vague predicates, and the paradigmatic conception does not need to end up in what Hyde calls the iterative conception of vagueness. This is the main theme of Section 5.1. In establishing the conclusion of the argument, Hyde relies on Sorensen's (1985)[2] argument for the vagueness of 'vague'. In Sections 5.2 and 5.3 we will sketch Hyde's and Sorensen's arguments respectively. Then we will specify two worries that Hyde's argument raises, and a related worry about Hyde's general argumentative strategy. The first worry concerns the question whether Sorensen's argument is sound. Our answer to this question is 'no'. Hyde's argument exploits Sorensen's argument, which does not seem to be good, since it relies on bad reasoning. The second worry concerns the circularity of the proposed argument, and we deal with it in Section 5.4. Hyde anticipates this worry and has a ready answer to it. We aim to show that the type of response he offers is not a good one.

[2] For all references to Sorensen in the thesis see (Sorensen 1985).


In Section 5.5 we raise a question about the general argumentative strategy. Hyde makes a strategic mistake in overlooking the asymmetry between the presuppositions that are available to those who work within the paradigmatic conception and the presuppositions that are available to one who is defending the paradigmatic conception of vagueness. By making his argument essentially dependent on the presuppositions of the paradigmatic conception, Hyde simply seems to presuppose that the conception he aims to defend is correct.

5.1 Paradigmatic vs. Iterative Conception of Vagueness and the Problem of Higher-Order Vagueness

The paradigmatic conception of vagueness is the view that the vagueness of a predicate can properly be characterized by the presence of borderline cases of the applicability of the vague predicate in question. The difficulty for views that characterize the phenomenon of vagueness by the presence of borderline cases is that these views cannot distinguish vague predicates from merely partially defined predicates. In order to make such a distinction, the paradigmatic view moves from talk about vagueness by way of appeal to the presence of borderline cases to talk about a hierarchy of borderline cases. This shift in talk about vagueness is supposed to be a recognition that not only do vague predicates not draw sharp boundaries, but that vague predicates fail to draw any boundaries within their range of significance. This is the leading intuition that underwrites the iterative conception of vagueness: namely, if there are borderline cases of the first order, then there are borderline cases of the second order, and so on indefinitely. This amounts to saying that if a predicate suffers from vagueness of the first order, then it suffers from vagueness of every order, and this feature of vague predicates, namely the


feature of being higher-order vague, distinguishes them from merely partially defined predicates.

It looks, then, as if the paradigmatic conception of vagueness inevitably ends up with the iterative conception of vagueness; that is, the phenomenon of higher-order vagueness needs to be accommodated if the conception claims to be the correct one. We have seen earlier, in the discussion of the supervaluationist view of vagueness, that one might worry that the iterative conception of vagueness is inadequate, since either it does not respect higher-order vagueness after all (if the strategy is applied just finitely many times, the limited iterative conception), or it is incapable of specifying the application conditions for vague predicates, and consequently the truth-conditions for vague sentences. Although Hyde does not mention these worries explicitly, it is plausible to think that these difficulties motivate his project. At the outset, Hyde points out that any criticism of a conception of vagueness centered on the question whether that conception accommodates higher-order vagueness presupposes not only that higher-order vagueness is a real phenomenon, but also that it presents a problem for the paradigmatic conception of vagueness. Hyde, however, gives an argument that is supposed to demonstrate that there is higher-order vagueness, and he gives an account of why it need not worry anyone who accepts the paradigmatic conception of vagueness.

5.2 Hyde's Argument

Hyde's central thesis is that the phenomenon of higher-order vagueness is real enough, and the paradigmatic conception of vagueness captures it, but without collapsing into the iterative conception of vagueness. That is, the phenomenon of higher-order


vagueness is not a problem for the paradigmatic conception of vagueness. The insistence on the iterative conception of vagueness is just a consequence of ignoring the ambiguity of 'borderline case'. Once we realize this ambiguity, and appreciate it, it becomes clear, Hyde argues, that the paradigmatic conception of vagueness does not need to end up committed to the iterative conception of vagueness. This allows one to avoid the difficulties of the iterative conception that we mentioned earlier.

The core of Hyde's argument is the premise that 'vague' is vague, which allows him to claim that the paradigmatic conception of vagueness need not be modified, nor is it challenged, by the attempt to accommodate the presence of higher-order vagueness. That is, different theorists need not worry that their semantic theory imposes sharp boundaries and that they need to do some extra work in order to respect higher-order vagueness. These theorists should do nothing; since 'vague' is vague, according to Hyde, and it is part of the meta-language in these theories, higher-order vagueness is respected. So higher-order vagueness is already present in the characterization of vagueness by appeal to the presence of borderline cases, for borderline cases themselves have borderline cases, but this need not be explicitly stated in the analysis of vague predicates. Hyde's argument can be reconstructed as follows:

1. 'Vague' is vague; it is a homological term. (by Sorensen)

2. Since 'vague' is vague, it cannot be defined in purely precise terms. (1)

3. The vagueness of a predicate is properly characterized by the presence of borderline cases. (Suppressed premise)

4. Therefore, 'has borderline cases' is vague. (2, 3)

5. And hence, borderline cases have borderline cases. (3, 4)


Hyde's argument is, no doubt, valid, but the question is whether it is sound. By examining the premises of the argument one might worry whether Hyde's argument is successful. The sources of the worries can be identified as premise (1) and premise (3) of his argument. Premise (1) is the conclusion of Sorensen's argument for the vagueness of 'vague'. Since Hyde's argument depends on the conclusion of Sorensen's argument, if Sorensen's argument were sound, Hyde's argument would be sound too, provided there were no other candidates that could undermine its soundness, such as premise (3), for example. However, Sorensen's argument is unsound, as we will explain later on, although its conclusion might very well be true. For even if the conclusion turns out to be true, the reasons he gives to support it are flawed. The second candidate for suspicion in Hyde's argument, as we pointed out, is premise (3). If we are right, (3) commits Hyde to something that makes his argument question-begging.

The two mentioned worries are not disconnected. For by adopting the conclusion of Sorensen's argument, Hyde subscribes to the presuppositions that Sorensen makes, which are not available to Hyde on pain of begging the question. These presuppositions, although benign for Sorensen's project, might be fatal for Hyde's project, for the two projects are different in character. As we have already pointed out, Hyde's project is meta-theoretical (i.e., it is supposed to be a defense of theories such as Sorensen's), while Sorensen's project aims to give an account of the problem of vagueness, and is not about theories of vagueness. The presupposition we particularly have in mind is (3), that is, that the vagueness of a predicate is properly characterizable by the presence of borderline cases, which needs to be established on independent grounds.


If these charges are correct, then they would clearly undermine Hyde's attempt to establish the thesis that borderline cases have borderline cases, which claim is crucial to the establishment of his central claim. We will discuss each suspicion in turn. Thus, let us begin with Sorensen's argument and the reasoning behind it.

5.3 Sorensen's Argument

Sorensen's argument goes as follows:

1. The vagueness of 'small' allows one to construct the following soritical argument:

(a) 0 is a small number.
(b) If n is a small number, then n+1 is a small number.
(c) One billion is a small number.

2. Numerical predicates such as 'n-small' can be used to construct a soritical argument for the predicate 'vague', where 'n-small' is a numerical disjunctive predicate defined as applying to only those integers that are either small or less than n.

(a') '0-small' is vague.
(b') If 'n-small' is vague, then '(n+1)-small' is vague.
(c') 'One billion-small' is vague.

3. A soritical argument is a symptom of the vagueness of the predicate for which it can be constructed.

4. Therefore, 'vague' is vague. (2, 3)

Now, the question is whether Sorensen's reasoning really establishes that 'vague' is vague. Take first a paradigmatically vague predicate, 'small'. 'Small' is typically vague, which is, on the paradigmatic conception, to say that it is tolerant and admits borderline cases, and hence one can construct a soritical argument as above. The soritical argument in question exploits this feature of 'small', namely its being tolerant with respect to


incremental changes along the relevant dimension of variation. By analogy one would expect that the soritical argument for 'vague' would exploit this feature of 'vague'. In other words, just as the vagueness of 'small' implies that 'small' has borderline cases, one would expect that the vagueness of 'vague' implies that 'vague' has borderline cases.

However, we want to argue that Sorensen's argument is not sound. There is no analogy between 'vague' and 'small' as Sorensen conceives it. The argument is not sound because it relies on fallacious reasoning. Even if we agreed with the conclusion that 'vague' is vague, which could be true, we could not assent to it for the reasons that Sorensen offers; the reason he offers is of the wrong sort. The soritical argument for 'vague' does not depend on some feature of 'vague' that is responsible for the paradox; it depends on a feature of 'small'. Now, either the view we are discussing overlooks that 'small' is responsible for the second soritical argument, or the view is that the semantic predicate that we use to talk about the predicate in question inherits vagueness from it. So we can run the sorites for 'vague' not because 'vague' is vague, but because 'small' is vague. Thus, the vagueness exhibited in premise (a') is a feature of 'vague' only if there is such a relation of inheritance between the predicate that is mentioned and the predicate that is used to talk about the referent of the mentioned predicate. For what the name of the predicate 'n-small' refers to is a vague predicate, namely the predicate 'small', rather than the predicate which is used to talk about it, namely 'is vague'. The sorites paradox for 'vague', thus, owes its existence to the vagueness of the predicate referred to by 'n-small', namely 'small'. Hyde explicitly refers to the inheritance principle (p. 38), IP.


(IP) If all the constituent phrases of a complex phrase are precise, then the complex phrase is precise.

Now, the trouble is that in (a') the subject term cannot be vague, and still one experiences the same sort of hesitance and faces the same difficulties in forming a judgment regarding the question whether 'vague' applies to 'n-small', for there are some values of n for which it is unclear whether 'is small' applies to them. He takes, for example, '0-small'. It is just as vague as 'small', because both predicates apply to 0, and apply in the same way to all other integers. The same holds for '1-small', '2-small', and so on. However, there are some cases where 'vague' does not apply, such as 'one billion-small', for the 'less than n' clause in the definition of 'n-small' takes care of this. But, according to Sorensen, it is not clear what the value of n is when the clear cases give out. Thus, Sorensen concludes that 'vague' must be vague, for there are no other candidates for the source of vagueness in (a') that could be blamed for the vagueness, and hence for the soritical argument in question.

The reasoning that Sorensen employs, and which Hyde adopts in his argument, can be roughly stated as follows:

1. As the series of numbers increases from n₁ to nⱼ, it becomes more difficult to answer the question whether 'small' applies to nᵢ.

2. To the same degree it is difficult to answer the question whether 'vague' applies to 'nᵢ-small'.

3. Thus, there is a series, 'n₁-small', 'n₂-small', ..., 'nⱼ-small', to which the application of 'vague' is essentially doubtful.

4. Therefore, 'vague' is vague to the same degree and in the same way in which 'small' is vague.[3]

[3] For a similar style of argument see (Ludwig & Ray 2002, p. 455).
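The structure of the disjunctive predicates at issue can be made explicit. Writing out the definition reported above formally (the notation is ours, not Sorensen's):

```latex
% Sorensen's disjunctive predicates, stated formally.
% For each non-negative integer n:
\[
  n\text{-small}(x) \;\leftrightarrow\; \mathrm{small}(x) \lor x < n .
\]
% Two limiting cases follow directly from the definition.
% (1) '0-small' is coextensive with 'small' on the non-negative
%     integers, since the disjunct x < 0 is never satisfied there;
%     so '0-small' is exactly as vague as 'small':
\[
  0\text{-small}(x) \;\leftrightarrow\; \mathrm{small}(x)
  \qquad (x \geq 0).
\]
% (2) For n large enough that no integer greater than or equal to n
%     is small (e.g., n = 10^9), the first disjunct is absorbed by
%     the second, and 'n-small' becomes the precise predicate x < n:
\[
  10^{9}\text{-small}(x) \;\leftrightarrow\; x < 10^{9}.
\]
```

This makes vivid why the series runs from clearly vague cases (low n) to clearly precise ones (high n), with no evident point at which the transition occurs.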


However, it is just a feature of 'small' in virtue of which what is referred to by the name 'n-small' is vague. Consequently it is a mistake to think that we can run the sorites because 'vague' is vague. We can run the sorites just because 'small' is vague. Now, one could suppose, as Hyde does, that there is an inheritance relation between 'small' and 'vague'. But it is a mistake to think that semantic predicates inherit vagueness from the predicates that they are used to talk about. Sorensen apparently overlooks that the paradox-generator in the second soritical argument he gives is 'small'. So even if 'vague' is vague, this reasoning does not establish it. The predicate 'is vague' belongs to a semantic category that we use to talk about something that is vague. What the predicate 'n-small' refers to is vague, no doubt, for it refers to a predicate of the form 'either x is small or x is less than n'. However, what we use a semantic category such as 'vague' to talk about is the vagueness of 'small'. It could be said that it is unclear whether 'vague' applies to 'n-small', for there are some values of n for which it is unclear whether 'small' applies to n. Sorensen, however, either sees the vagueness of 'small' as transferable to the semantic predicate that we use to talk about it, or he simply overlooks the role of 'small' in the soritical argument that is supposed to show that 'vague' is vague. But in any case it is a mistake to infer from the soritical argument for 'n-small' that 'vague' is vague too; that is, Sorensen illegitimately extends the vagueness of 'small' to 'vague' (i.e., he transfers the vagueness of the mentioned predicate to the predicate that is used). If we look at the two soritical arguments, for 'small' and for 'vague', we can notice that in the former the predicate 'small' is used, whereas in the latter it is not used but mentioned. The mistake lies in the inference from the hesitancy over whether to assert a sentence in which a vague word is mentioned,


which arises from the vagueness of the mentioned word, to the conclusion that another word in the sentence, one used to talk about the mentioned word, is vague too. If 'small' does not transfer its vagueness to 'vague', then we can say that Sorensen's argument fails to establish its conclusion that 'vague' is vague, for it is not due to the vagueness of 'vague' that the sorites runs as stated above. Although Sorensen does not explicitly subscribe to the inheritance of vagueness between semantic terms, such as 'vague', and the terms they are used to talk about, Hyde explicitly appeals to this principle, which is just a mistake, if the foregoing reasoning is right.

5.4 The Circularity Problem in Hyde's Argument

Another worry about Hyde's argument is that not only does Sorensen's argument fail to give good grounds for Hyde's premise (1), but also, by adopting Sorensen's argument and its conclusion, Hyde adopts along with it some presuppositions that Sorensen makes, and which he cannot adopt on pain of circularity in his argument. Hyde's argument essentially depends on the assumption that the predicate 'small' has borderline cases, or, more generally, that the vagueness of predicates is properly characterizable in terms of borderline cases. But that is precisely what is at issue here, and what Hyde's project is supposed to show. If we recall that the goal of Hyde's argument is to show that the characterization of vague predicates in terms of borderline cases needs no revision in order to accommodate higher-order vagueness, this cannot be done under the assumption that 'has borderline cases' is vague, for that is to assume that the characterization one wants to defend is correct, which is clearly question-begging. To see this worry clearly it is enough to see that (3) commits Hyde to (3*),


(3*) If X is vague, then X has borderline cases, and if X has borderline cases, then it is vague.

In order for (3*) to be true, 'has borderline cases' must be assumed to be vague, for if it is not, then the second conjunct of (3*) is false and the whole conjunction is false. Take for example a predicate 'child*', and say that it applies to individuals between 1 and 12 years of age, fails to apply to those who are 17 and up, and neither applies nor fails to apply to those between 13 and 16 years of age. Now, a 13-year-old individual is a borderline case of 'child*', but 'child*' is not vague, for the boundaries between these three categories are sharp. In the case of a paradigmatically vague predicate, such as 'child', these three categories are not sharp, and hence borderline cases themselves have borderline cases. For if a 13-year-old individual is a borderline case of 'child', so is a 12-year-old individual, for small differences cannot make a change in the application of the concept; that is, 'child' is tolerant.

Hyde explicitly commits himself to the assumption that 'borderline case' is vague. According to Hyde, there are two senses of 'borderline case', a precise and a vague sense, due to the ambiguity of the phrases 'indeterminate' and 'definitely'. In the case of partially defined predicates, for example, 'indeterminate' and 'definitely' have precise senses, and hence the line between borderline cases and positive (negative) cases of the application of the predicate is sharp, whereas in the case of vague predicates these phrases have vague senses, and consequently the demarcation between the borderline cases and the positive (negative) cases is not sharp. Thus, when applied to precise and partially defined predicates, it is presupposed that these terms are used in their precise sense, and when applied to vague terms it is presupposed that they have their vague sense. This, according to Hyde, gives a criterion for distinguishing vague predicates from


merely partially defined predicates, without running into trouble with higher-order vagueness. So to speak, partially defined predicates do not have borderline cases in the proper sense, for it seems that Hyde takes the vague sense of 'borderline case' to be the proper one, and hence he points out that it would be useful to use some other expression to designate the precise sense of 'borderline case'.

Relying on the inheritance principle, Hyde comes to think that if 'small' is vague and hence has borderline cases, then it has borderline borderline cases. Having borderline borderline cases is from the very beginning built into the predicate 'small', without the need to state this explicitly in the analysis of the vague predicate in question. The trouble is that Hyde did not show us that 'small' had borderline cases to begin with; he just assumed this. This assumption makes his account viciously circular, for the goal of his meta-theoretical enterprise is precisely to defend the theorists who characterize vagueness by the presence of borderline cases. To illustrate this point, we can easily imagine theorists who deny that there are borderline cases of the first order; would Hyde's argument be a successful defense of the paradigmatic view of vagueness against these theorists? The answer is clearly no. For it would provide support for the paradigmatic account simply by assuming that it is the right one. Clearly, if Hyde supposes that 'small' has borderline cases to begin with, then in virtue of (3) plus the inheritance principle he is committed to supposing that 'small' has borderline borderline cases. Thus, Hyde not only presupposes that the vagueness of 'small' entails borderline cases, but also that the vagueness of 'small' entails borderline borderline cases. So the theorists who have a problem with how to accommodate higher-order vagueness should do nothing only if we suppose that vagueness is correctly


characterizable in terms of borderline cases. But given that Hyde's project is to defend the theorists who advocate this characterization of predicates' vagueness, he cannot assume that they are simply right, which Hyde in fact does by assuming that 'borderline case' is vague, mistakenly thinking that vagueness is transferable from the mentioned term to the term that is used to talk about it.

Hyde anticipates the worry about circularity and has a ready answer to it: the charge of circularity is just a recognition of the homological aspect of 'vague'. Although there is some circularity in his account due to the homological nature of 'vague', this type of circularity is, according to Hyde, benign, for he does not use the word 'vague' in his argument; he just uses vague words. Here he resorts to an analogy with 'meaningful' and its homological nature. He argues that in the same way in which we characterize 'meaningful' using meaningful terms, we characterize 'vague' using vague terms. The major disanalogy that Hyde overlooks is that in the analysis of 'meaningful' there is no supposed inheritance relation between the terms used to talk about 'meaningful' and the term mentioned, namely 'meaningful', while in the case of 'vague' we get the result that the semantic predicate used to talk about it is vague only if we suppose that vagueness is transferable from the referent of the predicate that is mentioned to the predicate that is used to talk about it.

5.5 The Problem with the Strategy

In light of the discussion above we cannot but conclude that Hyde's argument is not successful, and for two reasons. First, it relies on bad reasoning that is underwritten by a false principle, namely the inheritance principle. Second, it already assumes what needs to be argued for, namely, that the paradigmatic conception of vagueness needs no


modification and does not have to end up endorsing the iterative conception of vagueness. To illustrate this, we can just recall for a moment Hyde's general argumentative strategy. He wants to show that the paradigmatic conception of vagueness is correct. Sorensen's theory fits Hyde's definition of the paradigmatic conception of vagueness, for Sorensen presupposes that the vagueness of a predicate is properly characterizable by the presence of borderline cases. Given that Hyde's project is meta-theoretical, he cannot presuppose what is the object of the defense. Hyde's argument is thus unsuccessful, for he fails to establish on independent grounds that 'small' has borderline cases, that is, that the borderline-case characterization of vagueness is correct. If these charges are correct, then we can conclude that Hyde's argument is question-begging, and not just unsound, and hence the problem of higher-order vagueness still presents a great difficulty for the paradigmatic conception of vagueness.

Conclusion. In the foregoing discussion, we learned that the type of response that Hyde proposes as a defense of the paradigmatic conception of vagueness is not a good one. We specified three unresolved problems: the problem of the soundness of Hyde's argument, the circularity problem, and the problem with the general argumentative strategy. If we are correct, then we have just seen another example of a failed attempt to defend the paradigmatic conception of vagueness. A distinctive feature of this endeavor was its attempt to use a meta-theoretical strategy in defending the paradigmatic conception. But since the strategy is essentially flawed, because of the adoption of some unwarranted presuppositions, and the reasoning deployed depends on some false principles, the whole project is unsuccessful.


In what follows we discuss a view that shares with the paradigmatic views of vagueness the characterization of vagueness by the presence of borderline cases, but which is also significantly different from the views that we have dealt with so far, for it does not allow the problem of higher-order vagueness to get off the ground. The response consists simply in rejecting higher-order vagueness as incoherent.


CHAPTER 6
IS HIGHER-ORDER VAGUENESS INCOHERENT?

Overview. In this chapter we will present Crispin Wright's (1992)[1] solution to the problem of higher-order vagueness. Higher-order vagueness turns out not to be a problem since, according to Wright, no one should take higher-order vagueness seriously; higher-order vagueness is incoherent. If this is correct, then one can calmly reject the tolerance principle (the major premise of the soritical argument), and avoid the charge that one thereby fails to respect higher-order vagueness. If higher-order vagueness is incoherent, then the charge is simply out of place.

The plan of the chapter is as follows. First, in Section 6.1 we will say something about what Wright calls the 'no sharp boundaries' paradox, and its relation to the tolerance principle, which Wright calls the characteristic sentence for vague predicates, and which generates the 'no sharp boundaries' paradox. Section 6.2 considers the higher-order paradox. Further, in Section 6.3, we will give an exposition of Wright's proof of the incoherency of higher-order vagueness. Then we will turn to some criticisms of Wright's argument, in Section 6.4. Section 6.5 discusses Richard Heck's (1993)[2] criticism and Section 6.6 Dorothy Edgington's (1993)[3] criticism, which seem to be on target concerning the problems with Wright's proof.

[1] For all references to Wright in the thesis see (Wright 1992).
[2] For all references to Heck in the thesis see (Heck 1993).
[3] For all references to Edgington in the thesis see (Edgington 1993).


6.1 The No Sharp Boundaries Paradox

A sorites paradox is a manifestation of the vagueness of the predicates for which it can be constructed. According to the paradigmatic conception of vagueness, the sorites paradox largely depends on what is taken to be a salient feature of vague predicates, namely their being tolerant. The idea of a predicate's being tolerant is typically expressed by saying something to the effect that small changes cannot make a difference in the application of the vague predicate. So, in a series of gradually changing objects, there is no object such that a certain predicate applies to it but does not apply to its successor. To say that there is such an object in the series is to impose sharp boundaries on vague predicates, contrary to what is said to be the essential feature of vague predicates, namely that they do not have sharp boundaries. The major premise of the soritical argument is said to express this intuition, and it is what Wright calls the characteristic sentence for vague predicates:

(i) ~(∃x)(Fx & ~Fx′), (where x′ is the immediate successor of x).

As it stands, (i) constitutes the 'no sharp boundaries' paradox, and it is not a proper expression of our intuitions about vague predicates. According to Wright, although this sentence meets our tolerance intuitions, it conflicts with our other intuitions about vague predicates, and hence cannot be the characteristic sentence that expresses all our intuitions about vague predicates. For what follows from it is that all the objects are F or none are, depending on how the series of gradually changing objects starts, which conflicts with our convictions about the existence of clear positive and clear negative cases. What is needed, according to Wright, is a definition of vagueness (a characteristic sentence) that meets all our intuitions about vague predicates. Thus, Wright aims to find a


definition of vague predicates that would express our tolerance intuitions, and also our intuitions about some clear positive and some clear negative cases of application of a vague predicate. According to Wright, the sorites paradox, which has as its major premise the tolerance principle, and which is said to constitute the no sharp boundaries paradox, can be resolved, and it is not the paradox of vagueness. The lesson that we can learn, though, is that when dealing with vague expressions, "it is essential to have the expressive resources afforded by an operator expressing definiteness or determinacy" (p. 130). Then, vagueness would simply consist in negating such definiteness. Wright emphasizes that the operator that would play such a role is not redundant, as one might think, for A and Def A do not always coincide in truth value. When A is true, then both A and Def A have the same truth value, but if A is not true, then A and Def A may differ in truth value, in such a way that Def A is false even though A is not false. ~Def(A) is not equivalent to Def(~A), since 'A is not true' is not equivalent to '~A' when A is indeterminate in truth value. Wright proposes the following sentence as a proper representative of the intuitions about vague predicates, and hence as the characteristic sentence: (ii) ~(∃x)(Def(Fx) & Def(~Fx′)), from which we get: (iii) Def(~Fx′) → ~Def(Fx), neither of which is paradoxical, for what they say is just that no definitely tall thing, for example, is succeeded by a definitely not tall thing.
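The contrast between (i) and (ii) can be made explicit by writing out their classical equivalents (a reconstruction of the reasoning in standard notation, not a formula from Wright's text):

```latex
% (i) is classically equivalent to the tolerance conditional, which chains:
\neg\exists x\,(Fx \wedge \neg Fx') \;\equiv\; \forall x\,(Fx \rightarrow Fx')
% Given a clear positive case F a_0, repeated modus ponens yields F a_n for
% every n; contraposing from a clear negative case \neg F a_n yields \neg F a_0.
% (ii), by contrast, is equivalent to (iii):
\neg\exists x\,(\mathrm{Def}(Fx) \wedge \mathrm{Def}(\neg Fx'))
\;\equiv\; \forall x\,(\mathrm{Def}(\neg Fx') \rightarrow \neg\mathrm{Def}(Fx))
% This chain does not propagate: the consequent \neg\mathrm{Def}(Fx) does not
% entail \mathrm{Def}(\neg Fx), the antecedent of the next instance, so modus
% ponens halts at the borderline region.
```

This is why (ii) and (iii) seem harmless at first order: the borderline region, where things are neither definitely F nor definitely not F, blocks the soritical chain.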


6.2 The Higher-Order No Sharp Boundaries Paradox As we have seen, the simple maneuver of introducing the Def operator is supposed to remove the paradox. Now, an immediate worry that arises is whether this definition of vagueness just fixes one problem by replacing it with a new problem that is also raised by the intuitions and commitments we have by virtue of defining vagueness in a certain way, namely by the presence of borderline cases. That is, if (ii) is supposed to negate sharp boundaries of the first order, then the question is whether that commits one to the view that there are no sharp boundaries of the second order, then of the third order, and so on indefinitely. The worry is that any strategy that deals just with first-order vagueness, instead of solving the problem, just postpones it, for higher-order vagueness presents an apparent challenge. If we only have a strategy for dealing with sorites paradoxes involving only first-order vague predicates, this amounts to having no strategy at all. All that we have then is that the first obstacle has been overcome; that is, a strategy may work for first-order borderlines, avoiding in that way the imposition of sharp boundaries of the first order, while imposing sharp boundaries on some higher level. This is widely recognized as incompatible with the characterization of the phenomenon of vagueness by the presence of borderline cases. A commitment to first-order borderlines commits one to second-order borderlines and so on indefinitely, since, according to the paradigmatic picture of vagueness, to be a genuinely vague predicate is to admit not only borderline cases, but also a hierarchy of borderline cases. Thus, in order to deal with the problem of vagueness, it is not sufficient to have a strategy that handles only the sorites paradox of the first order. On the one hand we have a sorites paradox of the first order, to which we


can apply the strategy of introducing a border area for the applicability of any vague predicate, in order to respond to the challenge that it presents. But this just postpones the resolution of the problem by shifting it to the next level. For there is the problem of the higher-order (strengthened) sorites paradox that looks exactly like the first-order paradox, except that in the former we have 'is definitely red' instead of 'is red', for example. Clearly, it looks as if by applying the strategy of introducing border cases of higher and higher order, we are driven into a vicious regress, which is only the manifestation of the predicament regarding the question of what, if anything, determines the boundaries of vague predicates. We can express higher-order vagueness intuitions in a fashion similar to that in which we express intuitions about first-order vagueness: (iv) ~(∃x)(Def(Fx) & ~Def(Fx′)), or (v) ~Def(Fx′) → ~Def(Fx), both of which constitute the No Sharp Boundaries paradox, for (iv) would be the major premise of the higher-order (strengthened) soritical argument. Following Wright, one can apply the trick of introducing the Def operator in order to resolve the strengthened paradox. So, we get from (iv): (vi) ~(∃x)(Def(Def(Fx)) & Def(~Def(Fx′))), which gives us: (vii) Def(~Def(Fx′)) → ~Def(Def(Fx)), and this should generalize for n iterations of the Def operator. According to Wright, however, we cannot iterate the Def operator to resolve the paradox in the strengthened argument, since there is an important asymmetry between (ii)


and (vi), or any sentence that is supposed to express vagueness of a higher order. While (ii) is not paradoxical, (vi) is, according to Wright. (vi) and its ilk look harmless only until we investigate the logic and semantics of the Def operator, which shows that (vi) is not as harmless as it looks, for it allows drawing the paradoxical conclusion (viii) Def(~Def(Fx′)) → Def(~Def(Fx)), which implies that there are no definite cases of F, and hence reintroduces the No Sharp Boundaries Paradox. 6.3 Wright's Argument Wright has argued that we cannot take higher-order vagueness seriously, for it is incoherent. He gives the argument for the incoherence of higher-order vagueness, taking (vi) as the characteristic sentence for higher-order vagueness and using the DEF rule of inference to derive (viii) via a reductio, the generalization of which will give us the conclusion that there are no definite cases of F if there is a definite first-order borderline case of F. The DEF principle is the following rule:
(DEF) A1, …, An ⊢ P
______________
A1, …, An ⊢ Def(P),
where {A1, …, An} are definitized4 propositions. What follows from it is the definitization rule, (DEF+) if Def(P), then Def(Def(P)), 4 To be definitized means that each member, Ai, of the set {A1, …, An} begins with Def.


and the rule of eliminating the Def operator, (DEF elimination, DEF−) if Def(P), then P. The problem is then that the formal equivalent of (vi) and DEF together lead to contradiction. That is supposed to show that higher-order vagueness is incoherent, given that DEF is a valid rule of inference. Wright's proof goes as follows:
{1} [1.] Def(~(∃x)(Def(Def(Fx)) & Def(~Def(Fx′)))) (premise)
{2} [2.] Def(~Def(Fx′)) (premise for C)
{3} [3.] Def(Fx) (premise for RAA)
{3} [4.] Def(Def(Fx)) (3, DEF+)
{2,3} [5.] (∃x)(Def(Def(Fx)) & Def(~Def(Fx′))) (4, 2, EG)
{1} [6.] ~(∃x)(Def(Def(Fx)) & Def(~Def(Fx′))) (1, DEF−)
{1,2} [7.] ~Def(Fx) (3, 5, 6, RAA)
{1,2} [8.] Def(~Def(Fx)) (7, DEF+)
{1} [9.] Def(~Def(Fx′)) → Def(~Def(Fx)) (2, 8, C)
{} [10.] (x)(Def(~Def(Fx′)) → Def(~Def(Fx))) (9, UG)
Now, the next stage is to prove that, given [10], F has no definite positive cases if it has definite borderline cases of the first order. The proof goes as follows:
{1} [1.] (x)(Def(~Def(Fx′)) → Def(~Def(Fx))) (premise)
{2} [2.] (∃x)Def(~Def(Fx′)) (premise for C)
{1,2} [3.] Def(~Def(Fx)) (1, 2)
{1,2} [4.] ~Def(Fx) (3, DEF−)
{1,2} [5.] (x)(~Def(Fx)) (4, UG)
{1} [6.] (∃x)Def(~Def(Fx′)) → (x)(~Def(Fx)) (2, 5, C)
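The definitization rule (DEF+), relied on at steps [4] and [8] above, is itself obtainable from DEF in one step (a reconstruction of the observation, not Wright's own presentation): start from the trivial sequent whose single premise, Def(P), is definitized, and apply DEF.

```latex
% DEF+ derived from DEF (sketch):
\mathrm{Def}(P) \vdash \mathrm{Def}(P)
\quad\text{(reflexivity; the premise set } \{\mathrm{Def}(P)\} \text{ is definitized)}
% Applying DEF attaches Def to the conclusion:
\mathrm{Def}(P) \vdash \mathrm{Def}(\mathrm{Def}(P))
```

This makes vivid why DEF carries the whole weight of the proof: it is the only rule that adds iterations of Def.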


It turns out then that the no sharp boundaries paradox is the paradox of higher-order vagueness. The susceptibility to this paradox is said to prove the incoherence of higher-order vagueness, for the paradox cannot be blocked successfully in the way it can be for first-order vagueness. Thus, there is an asymmetry between first-order vagueness and higher-order vagueness that is an obvious motivation for pointing to higher-order vagueness as the trouble. For the threat of incoherence is distinctively higher-order. The asymmetry lies in the fact that (viii), which is paradoxical, can be inferred from the characteristic sentence for higher-order vagueness, while nothing paradoxical follows from the characteristic sentence for first-order vagueness, according to Wright. 6.4 Is Higher-Order Vagueness Really Incoherent? There are two different strategies for responding to Wright's conclusion that higher-order vagueness is incoherent. Both of them focus on the DEF rule and its application. The strategy employed by Richard Heck is to argue that the rule is valid but has a restricted application, and that Wright's proof has nothing to do with higher-order vagueness, but only shows that DEF cannot be used with all the freedom of a classical rule. Another strategy, employed by Dorothy Edgington, consists in showing that the DEF rule is not valid, since it allows the derivation of a false conclusion from a single indeterminate premise. In what follows we present these lines of criticism. 6.5 Heck's Reply The first question that Richard Heck considers is the motivation for using the DEF rule of inference, rather than the alternative rule, DEF*:


(DEF*) A1, …, An ⊢ P
_____________
A1, …, An ⊢ Def(P),
in which the premises A1, …, An are not required to be definitized. The reason why Wright does not use this stronger version of the DEF principle is the worry that it would make the Def operator redundant, since DEF* would seem to validate (a) P → Def(P), which would destroy Wright's approach to first-order vagueness. A similar worry arises regarding DEF, for DEF seems to validate (b) Def(P) → Def(Def(P)). Wright uses (b) in his proof, for on his account Def(P) and Def(Def(P)) coincide. That, however, is to reject higher-order vagueness, which is precisely what is at issue here. The question is then why not abandon DEF if it leads to something unacceptable. Heck argues, however, that both DEF and the stronger DEF* are valid rules of inference. The troublesome (a) is not, in fact, validated by DEF*. The diagnosis locates the problem in the application of DEF* (and accordingly DEF) in subordinate deductions (conditional proof, reductio ad absurdum), where the distinction between A and Def(A) collapses. In the same way, DEF, when used in a subordinate deduction, collapses the distinction between Def(P) and Def(Def(P)). So, what Heck disputes is the application of both rules in subordinate deductions, which amounts to validating the deduction theorem for them, that is, the inference from P ⊢ Def(P) to P → Def(P), which is not correct. That shows that the DEF rule is not classical and cannot be used in classical proofs, for to do so is to collapse this distinction. Also, since both DEF and DEF* are valid rules of inference, if there are no restrictions on


their use, DEF* will give a paradoxical result taken in combination with (ii). This indicates that the incoherence is not distinctively higher-order, and that there is no asymmetry between first-order vagueness and second-order vagueness. 6.6 Edgington's Reply Dorothy Edgington's strategy in "Wright and Sainsbury on Higher-Order Vagueness" consists in disputing Wright's proof of the incoherence of higher-order vagueness by attacking the DEF rule, which she argues is invalid. The reasoning goes by first demonstrating how DEF makes trouble for higher-order vagueness, by inferring a contradiction from the supposition that there is some object which is definitely on the borderline between the definitely red things and the not definitely red things, for example.
1. Def(~Def(Def Red(x)) & ~Def(~Def Red(x))) (supposition)
1a. ~Def(Def Red(x)) [1, Def elimin., and & elim.]
1b. ~Def(~Def Red(x)) [1, Def elimin., and & elim.]
2a. ~Def Red(x) [1a, DEF−]
3a. Def(~Def Red(x)) [2a, DEF]
Now, 3a contradicts 1b. By looking at the proof, and the rules of inference that are used for drawing the conclusion, it becomes apparent that these rules allow inferring a false conclusion from an indeterminate premise. However, according to Edgington, when indeterminacy of truth value is at issue, it is necessary for a rule of inference, in order to be valid, not only to preserve truth from true premises, but also to exclude the derivation of a false conclusion from a single indeterminate premise. The DEF rule does not meet this criterion, according to Edgington, since it allows the derivation of a false conclusion from a single indeterminate premise, namely 2a. As we can see from a footnote in Edgington's text, Wright replied to this by saying that the DEF


rule preserves only polar (definite) truth. In the light of Wright's reply, Edgington's criticism comes down to a complaint similar to Heck's regarding Wright's proof and the use of the DEF rule. So, Wright emphasizes that his DEF rule is valid and preserves only polar truth. Polar truth is characterized by appeal to sentences' being true or false. The problem with Wright's proof, then, is, according to Edgington, in the use of the rule (DEF−) ~Def(Def(P)) entails ~Def(P), which relies on reductio. The proof goes as follows:
1. ~Def(Def(P)) [premise for conditionalization]
2. Def(P) [premise for reductio]
3. Def(Def(P)) [2, DEF+]
4. ~Def(P) [1, 3, RAA]
This complaint is similar to the one presented by Heck, for the trouble is in the transition from step (2) to (3), which is sanctioned by the DEF rule, but which does not do justice to the difference between Def(P) and Def(Def(P)). It seems natural then to conclude that the proof Wright gives in order to show that higher-order vagueness is incoherent is not satisfactory, for he illegitimately uses the non-classical DEF rule in a classical chain of inference. Conclusion. In light of the foregoing discussion, we cannot but conclude that the threat that higher-order vagueness is said to present is still unanswered. The diagnosis of what has gone wrong with Wright's argument, which consists in blaming the application of the non-classical DEF rule within classical proofs, can be taken to show that the result Wright gets is just an artifact of violating this restriction on the application of the DEF rule.
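Both complaints can be illustrated with a toy many-valued model (our own illustration, not Wright's, Heck's, or Edgington's semantics; the clause for Def is stipulated): truth values are reals in [0, 1], with 1 as polar truth and 0 as polar falsity. In this model, the rule "from P infer Def(P)" preserves polar truth, as Wright insists, yet the conditional P → Def(P) is not always fully true (so the deduction theorem fails, Heck's point), and a single step can take an indeterminate sentence to a polar-false one (Edgington's point).

```python
# Toy degree-valued model of the Def operator (illustration only).
# Truth values are reals in [0, 1]: 1 is polar truth, 0 is polar falsity.

def DEF(v):
    """Stipulated 'definitely': pushes every non-polar value down."""
    return max(0.0, 2 * v - 1)

def NOT(v):
    return 1 - v

def IMPLIES(p, q):
    """Lukasiewicz conditional: fully true iff q is at least as true as p."""
    return min(1.0, 1 - p + q)

# 1. 'From P infer Def(P)' preserves polar truth: if v(P) = 1, v(Def P) = 1.
assert DEF(1.0) == 1.0

# 2. Heck's point: the deduction theorem fails for such a rule.
#    P -> Def(P) is not fully true when P is indeterminate.
p = 0.75
assert IMPLIES(p, DEF(p)) < 1.0            # value 0.75, not 1

# 3. Edgington's point: a step can take an indeterminate sentence to a
#    polar-false one. Let Red(x) have value 0.75, so ~Def(Red(x)) is
#    indeterminate (0.5); then Def(~Def(Red(x))) is polar-false.
indeterminate_premise = NOT(DEF(0.75))     # 0.5
false_conclusion = DEF(indeterminate_premise)
assert 0 < indeterminate_premise < 1
assert false_conclusion == 0.0
```

The model deliberately collapses some higher-order structure, but it suffices to show how a rule can preserve polar truth while failing Edgington's stronger validity requirement.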


This leaves us with the problem of higher-order vagueness still unanswered. So far we have been dealing with views that, however much they differ in their proposed solutions to the problem of higher-order vagueness, have in common the description of vagueness as a semantic phenomenon. All these treatments, as we have seen, turned out to be unsuccessful in dealing with this problem, for different reasons. Now we turn to a view that also belongs to the paradigmatic view of vagueness, and which on the face of it does not have the problem of semantic vagueness. Thus, we turn to the epistemic view of vagueness, which is the subject of the next Chapter.


CHAPTER 7 EPISTEMICISM AND HIGHER-ORDER VAGUENESS Overview. In this Chapter we formulate and critically examine the epistemic view of vagueness, which is championed in Timothy Williamson's book Vagueness (1994).1 The motivation for the discussion is the question whether the epistemic view of vagueness shares the difficulty regarding the problem of higher-order vagueness with the other views that characterize the phenomenon of vagueness by appealing to the presence of borderline cases. As we have learned, once first-order vagueness is characterized by the presence of borderline cases, one is committed to a progression in the hierarchy of borderline cases, since it does not seem plausible to stop the hierarchy at any point without introducing arbitrariness into the account, or sharp boundaries of the sort that were originally rejected. We have seen in the previous chapters how this commitment to no sharp boundaries creates difficulty for the different views that share the paradigmatic conception of vagueness. On the face of it, it looks as if the epistemic view, by characterizing borderline cases in an epistemic fashion, and claiming that there are sharp semantic boundaries, does not have the problem of higher-order vagueness. The aim of this Chapter is to show that although the epistemic view of vagueness does not have a problem with semantic higher-order vagueness, still, by respecting the phenomenon of higher-order vagueness, epistemicism runs into trouble that parallels the problem of semantic higher-order 1 For all references to Williamson in the thesis see (Williamson 1994).


vagueness. So, it turns out, as we aim to show, that by respecting the phenomenon of higher-order vagueness, epistemicism faces the problem of epistemic higher-order vagueness. The plan of the Chapter goes as follows: firstly, we will formulate the epistemic view of vagueness as a conjunction of two theses, and secondly we will present some arguments to show that Williamson does not give the promised successful treatment of higher-order vagueness. A further point emerges, namely that epistemicism cannot hope to be a good theory of vagueness. In Section 7.1 we formulate the epistemic view of vagueness, as presented in Williamson, as a conjunction of two main theses. In Section 7.2 we introduce a principle that is supposed to do the explanatory work for the claim that vagueness is a type of ignorance. Section 7.3 introduces the notion of epistemic higher-order vagueness and its relation to what Williamson calls a margin for error principle, which leads into the discussion of the failure of the KK principle, presented in Section 7.4. In Section 7.5 we show how Williamson uses the alleged failure of the KK principle to answer a possible objection to his reliance on margin for error principles. In Section 7.6 we aim to show that Williamson is still in trouble, although KK might fail on independent grounds. In Section 7.7 we formulate an argument that is supposed to show what the trouble is, and which directs us to the culprit, namely a margin for error principle. The argument uses only Williamson's principle and some reasonable suppositions, which, taken together, are supposed to show that the principle gives us a surprising and implausible result. In Section 7.8 we will point out a difficulty with Williamson's argument by analogy for margin for error principles.


We will conclude that Williamson's view of vagueness has an insuperable problem of higher-order vagueness, similar to the alternative views that he criticizes precisely on the grounds of not being able to give a satisfactory treatment of higher-order vagueness. Williamson has exchanged one problem, namely the problem of semantic higher-order vagueness, for a parallel and equally vexed problem, namely the problem of epistemic higher-order vagueness. The former gives us paradoxical results regarding the truth of certain claims, the latter regarding our knowledge of them. 7.1 The Epistemic View The epistemic view of vagueness comes down to two major claims: 1. Vagueness is a type of ignorance: vague predicates have sharp boundaries, but we do not know where these boundaries lie. 2. The ignorance in borderline cases is the consequence of our limited powers of perceptual and conceptual discrimination. The first claim, the claim that there is a sharp boundary between the positive and the negative extension of a vague predicate, and that we, ordinary speakers, are ignorant of where the boundary lies, implies that in borderline cases a vague predicate either applies or fails to apply, and one does not know which. The uncertainty that one experiences in forming a judgment about an object which is a borderline case of the predicate is epistemic uncertainty. So, according to the epistemic view, there is a sharp cut-off point between heaps and non-heaps, and hence a smallest number of grains that constitutes a heap, whose predecessor does not. This implies that the major premise of a soritical argument is false. Cognitively limited as we are, the boundary between heaps and non-heaps is not knowable to us. We do have some knowledge, however. For some


sufficiently small, and for some sufficiently large, number of grains we certainly know whether 'heap' applies or not. As we go along the series of grains, and as the number of grains decreases (increases), we experience more and more difficulty in forming a judgment to the effect that the object in question is a heap or not. We hesitate over the answer to the question whether the object we are judging is a heap; moreover, we are completely at a loss what to say when asked whether 'heap' applies to that object or not. The area about which we hesitate and are at a loss what to say is the area wherein the boundary lies, though we do not know exactly where. Clearly, according to the epistemic view of vagueness there is no semantic vagueness. The advantage of such a view, Williamson argues, is that it is able to preserve all the laws of classical logic and semantics, one of which is the principle of bivalence. Now, this opens the question why one would be ignorant of the sharp boundaries of vague predicates. This question leads us to the second claim that, together with (1), constitutes the epistemic view of vagueness. The second claim is needed in order to bolster the credibility of the epistemic view, which is strained by its characterization of vagueness as a type of ignorance, implying as it does that there is a fact of the matter whether the vague predicate applies or fails to apply, and that one cannot know which. The proposed answer to the question why one would be ignorant of such a fact is that vagueness is part of a broader phenomenon, namely the phenomenon of inexact knowledge. This type of knowledge is governed by what Williamson calls a margin for error principle (MEP). This is, according to Williamson, the principle that governs vague predicates, and, contrary to the tolerance principle, MEP does not lead to paradox, and accounts for the phenomenon of higher-order vagueness.


In what follows, I will focus my attention on this second claim. However, the discussion of the second claim has consequences for the tenability of the first claim, although the motivation for concentrating on the second claim lies in the suspicion that Williamson might have a problem with epistemic higher-order vagueness parallel to the problem of semantic higher-order vagueness. 7.2 A Margin for Error Principle Vagueness is part of a broader phenomenon; it is part of the phenomenon of inexact knowledge. Inexact knowledge is a type of knowledge that necessarily involves some ignorance. So, for example, I can judge the number of people in the stadium, and for some numbers I know that there are not exactly n people, while for other numbers I do not know that there are not exactly n people in the stadium. The source of inexactness is my limited power of discrimination, and hence judging how many people there are in the stadium just on the basis of perception yields not exact but only rough knowledge. The exact knowledge that there are n people, or that there are not n-1 or n+1 people, is such that either it is not perceptual knowledge, or it is perceptual knowledge that is reliable enough. So, I know that there are not 0 people, and I know that there are not 100,001 people, since the latter number exceeds the capacity of the stadium. However, I do not know that there are 28,000 people in the stadium just by looking, even if there are 28,000. For were there 27,999 people I could easily still believe that there were 28,000, because the difference of one individual is too small to be detected by my limited perceptual apparatus. So, I do not know that there are 28,000 people, since I do not know that there are not 27,999 people just by taking a glance at the stadium. This means that inexact knowledge requires some buffer zone, which is going to allow that only safe


beliefs are counted as knowledge. Inexact knowledge is governed by a margin for error principle that explains ignorance in borderline cases. A margin for error principle for this case states that: (MEP) If I know that there are not exactly n people in the stadium, then there are not exactly n-1 people in the stadium. Similarly to the situation in the stadium, knowledge about heaps requires a MEP, according to Williamson. Consider the term 'heap', used in such a way that it is very vague. "Someone who asserts 'n grains make a heap' might very easily have made an assertion with that sentence even if our overall use had been slightly different in such a way as to assign the sentence the semantic status presently possessed by 'n-1 grains make a heap'. A small shift in the distribution of uses would not carry every individual use along with it. The actual assertion is the outcome of a disposition to be reliably right only if the counterfactual assertion would have been right. Thus the actual assertion expresses knowledge only if the counterfactual assertion would have expressed a truth. By hypothesis, the semantic status of 'n grains make a heap' in the counterfactual situation is the same as that of 'n-1 grains make a heap' in the actual situation; if the former expresses a truth, so does the latter. Hence, in the present situation, 'n grains make a heap' expresses knowledge only if 'n-1 grains make a heap' expresses a truth. In other words, a margin for error principle holds." (p. 232) (MEP*) If it is known that n grains make a heap, then n-1 grains make a heap. According to Williamson there is a least number of grains that constitutes a heap, and one cannot know what that number is, for given (MEP*), one cannot know a conjunction of the form 'n grains make a heap and n-1 grains do not make a heap'. One might wonder why this would be the case.
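The structure of inexact knowledge in the stadium case can be sketched in a small model (our own illustration, not Williamson's; the margin of 200 is an arbitrary stand-in for perceptual imprecision): the judgment "there are not exactly m people" counts as knowledge only if it would still be true in every situation indiscriminable from the actual one.

```python
# Toy model of margin-for-error knowledge (illustration only; the
# margin of 200 is an invented figure for perceptual imprecision).

ACTUAL = 28_000   # true number of people in the stadium
MARGIN = 200      # counts the observer cannot discriminate from the actual one

def knows_not_exactly(m):
    """One knows 'there are not exactly m people' only if that claim is
    true at every count within MARGIN of the actual count."""
    return all(n != m for n in range(ACTUAL - MARGIN, ACTUAL + MARGIN + 1))

assert knows_not_exactly(0)            # far enough from the truth: knowledge
assert knows_not_exactly(100_001)      # likewise, well outside the margin
assert not knows_not_exactly(27_999)   # within the margin: no knowledge
assert not knows_not_exactly(28_000)   # the claim is false, so not known
```

The buffer zone is exactly the range of counts within the margin: claims about numbers inside it are never safe enough to be known by looking.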
The grounds for this claim are supposed to be certain facts about knowledge, facts about which conditions are necessary in order to ascribe knowledge to someone or to oneself. Williamson appeals to a reliability condition that is necessary for knowledge. One's belief that a certain object is a heap, when the object in question is a borderline case, is not reliable enough to count as knowledge. This means


that the belief could be true just by luck, and surely everyone would be reluctant to count a belief that is true just by luck as knowledge. Here Williamson resorts to an argument by analogy. For no number n does one know that there are exactly n people in the stadium. There are many numbers, big enough or small enough, for which one is able to know that there are not exactly n people in the stadium. If one knows that there are n people in the stadium, that implies that one's belief meets the reliability condition. That is, the mechanism that is responsible for forming the belief that there are n people in the stadium would not produce that belief had there been n+1 people in the stadium. But we know that, in the given circumstances, this condition is surely not met, and hence one does not know that there are n people in the stadium. Similarly to the case of one's belief about the number of people in the stadium, one's belief about the heap is not reliable enough to count as knowledge. The belief-forming mechanism that produces the belief that n grains make a heap is such that there are counterfactual situations, suitably different from the actual one, in which the belief formed would not be true. The sorts of counterfactual situations Williamson has in mind are ones where 'heap' has a slightly different meaning, one that shifts the semantic borderline. Since I cannot discriminate between the actual situation and the counterfactual situation, I cannot be reliably right, and hence I cannot have knowledge. The claim that vagueness gives rise to (MEP*) is not supposed to depend on the epistemic view of vagueness, according to Williamson. It is supposed to be based on what is independently accepted as necessary for knowledge.


7.3 Epistemic Higher-Order Vagueness The discussion of (MEP*) and of how it accounts for one's ignorance of where the boundaries of vague predicates lie was motivated by our interest in the problem of higher-order vagueness, and by the question whether the epistemic view is immune to this problem. We have seen that the question about higher-order vagueness arises for all the views that characterize the phenomenon of vagueness by the presence of borderline cases and accept the tolerance intuition. As we have learned before, the paradigmatic conception of the phenomenon of vagueness carries a commitment to deny sharp boundaries of any kind (i.e., the paradigmatic conception of vagueness must accommodate and account for the phenomenon of higher-order vagueness). There is a question, then, whether the epistemic treatment of the phenomenon of higher-order vagueness faces a problem parallel to the one facing the semantic treatment of it. Williamson argues that the alternative theories have trouble giving an adequate treatment of higher-order vagueness. Higher-order vagueness is, in fact, the major weapon that Williamson uses to criticize alternative theories, while claiming immunity from these troubles for his own theory. With the help of (MEP*), Williamson argues that he is able to deny the major premise of the soritical argument and embrace sharp boundaries, while still respecting the basic vagueness phenomenon. Clearly, in the case of epistemicism, there is no problem with semantic higher-order vagueness. It does not even get off the ground. Yet epistemicism must somehow respect the phenomenon of higher-order vagueness, and it does this, of course, by portraying it as an epistemic phenomenon.


Our question now is whether this gives rise to epistemological problems for epistemicism, just as it gave rise to semantic problems for other views. Just like the phenomenon of first-order vagueness, the phenomenon of higher-order vagueness is supposed to be explained, on Williamson's account, by appealing to ignorance. The phenomenon of the first order is described as ignorance about where the boundary between the positive and the negative extension of the vague predicate lies. Accordingly, the phenomenon of higher-order vagueness is described as ignorance about this ignorance. Higher-order vagueness, Williamson argues, is manifested in the failure of the KK principle; that is, one can know something without being able to know that one knows it. Thus, Williamson has something that is supposed to parallel the phenomenon of higher-order vagueness in other theories, and which consists in a limited number of iterations of the knowledge operator. Epistemic higher-order vagueness consists in the vagueness of 'it is known that H'. A margin for error meta-principle accounts for our higher-order ignorance, just as (MEP*) accounts for our ignorance of the first order; that is, our second-order knowledge is inexact. This is how the epistemic view accounts for the phenomenon of higher-order vagueness. Let us turn now to Williamson's consideration of a possible argument based on the iteration of the K-operator, and his response that the iteration gives out at some point before we reach a paradoxical conclusion. Thus our question is why and how KK fails, which is crucial for Williamson's account if it is to accommodate the phenomenon of higher-order vagueness.


73 7.4 Why and How KK Fails In a nutshell, the reason why KK fails is because the second-order knowledge is supposed to be inexact, on Williamsons account, and hence it is governed by (MEP*), just as is the first order knowledge. But why does (MEP*) apply to first-order knowledge? The answer that Williamson champions is that first-order knowledge is inexact, and requires some margin for error. 2 So, for example, the reason why I cannot know that 369 grains is not a heap (where 369 is the last number of grains that constitute a heap) lies in that I cannot be justified in uttering the sentence I know that 369 grains is a heap. This is so because in order to be justified in uttering the sentence in question, one needs to have a belief and that belief needs to be reliable. The reliability rapidly decreases as we go along the soritical series and approaching to the borderline. A simple counterfactual test shows that one does not know that the given collection of grains is a heap, because that belief would not be reliable enough to count as knowledge. That is, one would still believe that the object in question is a heap, even if there were a slight shift in the use, and hence in the meaning of the predicate heap, so that the object that was originally in the actual extension of the predicate heap is not in the extension of the predicate in the counterfactual circumstances. Yet, in order to have knowledge, one needs to be reliably right in uttering the sentence This is a heap if heap had a slightly different meaning. However, the difference between actual and counterfactual meaning of heap is too small and indiscernible for an ordinary speaker. An agents belief forming mechanism is insensitive to the change in truth-value of the sentence This is a heap, and consequently she cannot have knowledge. For, in the counterfactual situation, where 2 There are many issues about Williamsons argument here, but I am passing over them for the sake of argument. 
For discussion of the problems with Williamson's argument, see Ray (2004).


'heap' had a slightly different meaning, the belief that 'heap' applies to the object in question would not be true, and yet the agent would still hold it. Vagueness of an expression, Williamson argues, consists in the semantic differences between it and other possible expressions that would be indiscernible by those who understood them (§8.5). Thus, it is of crucial importance that one's belief be reliable, and it is so only if one can discriminate between the actual and a counterfactual use of 'heap', for example. If one cannot do that, then one's belief about 'heap' is not reliable and does not constitute knowledge. It is clear from the foregoing discussion that Williamson's analysis of one's knowledge about heaps distinguishes two conditions that are necessary for ascribing knowledge to one: (i) the belief must be true, and (ii) the belief must be the product of a reliable belief-forming mechanism. The reliability of one's belief-forming mechanism varies along the series of grains. So, one is more reliable in some areas than in others, presumably less so in ones that are close to the borderline. As the number of grains increases (decreases), there is less and less reliability in the belief-forming mechanism, since the mechanism is not sensitive enough to the small differences and would produce a belief that could easily be false. This calls for some safety zone that is supposed to prevent forming a false belief. That is, only beliefs of a certain width (presumably the ones about cases that are far enough from borderline cases) count as reliable, namely the ones which are the product of a belief-forming mechanism that is sensitive enough to the variations along the series of grains. Now, condition (ii) can be spelled out by saying that a belief is a product of a reliable belief-forming mechanism if the belief is of a width which guarantees the truth of the belief in suitably different situations.
That is, a belief close to the borderline would not


count as knowledge since, in slightly different circumstances, were the borderline shifted, the belief would not be true, and hence it is not safe enough to count as knowledge in the actual situation. There is a need for a buffer zone. It is not only that my reliability varies for different numbers of grains; there is also another dimension of variation, the variation along the scale of reliability. The reliability dimension of variation itself, according to Williamson, requires a margin for error (p. 227). That means that the notion of knowledge is vague. Like our first-order knowledge, second-order knowledge is also inexact in this picture. This clearly opens room for the failure of the KK principle: although it is true that I know that y is a heap, there are suitably different circumstances in which it is false that I know that y is a heap, and hence 'I know that y is a heap' cannot itself be known. That is, knowledge of one's knowledge is also inexact and itself requires a margin for error, which in turn implies that KK is false. Each step higher in the hierarchy of knowledge requires a bigger and bigger buffer zone. This means that the width of a belief that is safe becomes smaller as one goes up in the hierarchy. According to Williamson, knowledge that one knows requires two buffer zones. Iteration of the knowledge operator narrows the width of the belief by introducing another buffer zone. Thus, third-order knowledge requires three buffer zones, and consequently widens the required margin for error. So, the width of the safe belief gradually decreases as one progresses up the hierarchy of knowledge, but there is an upper bound on the iteration of knowledge. That is, the iteration of the K-operator gives out at some point before we reach an absurd conclusion, such as that one does not know that a billion grains of sand is a heap.
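Williamson's buffer-zone picture lends itself to a toy model. The following sketch is our illustrative construction, not Williamson's own formalism; the margin width, the range of possible cutoffs, and the sample numbers are all assumptions. It treats the possible sharp cutoffs for 'heap' as worlds and counts a proposition as known at a cutoff just in case the proposition holds at every cutoff within the margin for error. Each iteration of K then demands one further buffer zone, so K can hold at a point where KK fails:

```python
# Toy fixed-margin model of inexact knowledge (illustrative assumptions:
# worlds are possible sharp cutoffs for 'heap'; MARGIN is the margin for error).

MARGIN = 1
CUTOFFS = set(range(0, 200))     # assumed range of admissible cutoffs

def K(prop):
    """The set of cutoffs at which prop is known: prop must hold
    at every admissible cutoff within MARGIN of the given cutoff."""
    return {c for c in CUTOFFS
            if all(c2 in prop for c2 in CUTOFFS if abs(c2 - c) <= MARGIN)}

# 'n grains is not a heap' holds at cutoff c exactly when n falls below c.
n = 98
not_heap_n = {c for c in CUTOFFS if n < c}

actual_cutoff = 100
print(actual_cutoff in K(not_heap_n))      # True:  one buffer zone fits
print(actual_cutoff in K(K(not_heap_n)))   # False: a second buffer zone does not
```

Here K holds at the actual cutoff because one buffer zone of width 1 fits between n = 98 and the cutoff 100, while KK demands two buffer zones and fails, which is the failure of the KK principle in miniature.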


7.5 The Failure of KK Answers a Seeming Trouble with MEP

The inexactness of metaknowledge gives rise to the failure of the KK principle. That is, metaknowledge is inexact and is governed by (MEP*) (§8.3). Securing this point is of special importance for Williamson, because he uses it to answer an anticipated objection against (MEP*). The objection fails, according to Williamson, because it relies on the KK principle. The objection Williamson considers (§8.2) is that my knowledge of some things, such as that zero grains of sand is not a heap, seems to be inconsistent with what we get when we apply the margin for error principle. He considers the following set of claims; call it the 'exhibit argument'. It goes as follows. Let n be the least number of grains such that I don't know that it is not a heap:

1. I know that n-1 grains is not a heap (empirical fact, by choice of n),
2. I do not know that n grains is not a heap (by choice of n),
3. I know that (if exactly n grains is a heap, then I do not know that n-1 grains is not a heap).

From (3) we can get

3'. I know that (if I know that n-1 grains is not a heap, then n grains is not a heap),

by contraposition within the scope of K, which together with K-elimination gives us the following:

4. n grains is not a heap (1, 3', modus ponens).

So far, so good. There is nothing problematic with this conclusion. The problem is supposed to arise when the imagined opponent moves from (4) to the conclusion

5. I know that n grains is not a heap (perhaps by reflection).

Apparently (5) contradicts (2), and the opponent no doubt relies on the principle that if I can deduce something from certain propositions, then, since the purpose of


arguing is to advance one's knowledge of the subject matter, I come to know the conclusion of the argument. This principle is clearly correct, and it is worth noticing that it is a metaprinciple. This result naturally raises the question which of the propositions (1)-(3) one needs to give up in order to restore consistency. Williamson's answer is: none, for they are not mutually inconsistent. Williamson, in his reconstruction of the opponent's reasoning, commits his opponent not to the argument above, which has the principle in question as its background, but to another argument that goes as follows:

A) I know (1),
B) I know (3'),
C) (4) follows from them,
D) If I know some propositions and (4) logically follows from those propositions, then I know (4),
E) So, I know (4), i.e., I know that n grains is not a heap,
F) But I don't know that n grains is not a heap (2, by choice of n).

Now, (E) and (F) apparently contradict each other. The paradoxical reasoning, Williamson stresses, relies on an inference that takes as one of its premises not (A) but what is in effect

A') I know that I know that exactly n-1 grains is not a heap,

and it is (A'), according to Williamson, that introduces the paradox, since it is (A') that is inconsistent with (2), and not (1), as possible critics could think. This, Williamson concludes, saves the consistency of (1)-(3), and explains why they seem to be inconsistent. The argument for their inconsistency relies on (A'), which is false, according to Williamson, since KK fails, and hence the argument is unsound. The failure of the KK principle, Williamson argues, manifests higher-order vagueness.
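The exhibit argument and the illicit premise can be set out schematically as follows (the formalization is ours, abbreviating 'exactly n grains is a heap' as H(n)):

```latex
\begin{align*}
(1)\;& K\,\neg H(n-1) && \text{empirical fact, by choice of } n\\
(2)\;& \neg K\,\neg H(n) && \text{by choice of } n\\
(3)\;& K\bigl(H(n) \rightarrow \neg K\,\neg H(n-1)\bigr) && \text{margin for error}\\
(3')\;& K\bigl(K\,\neg H(n-1) \rightarrow \neg H(n)\bigr) && \text{from (3), contraposition within } K\\
(4)\;& \neg H(n) && \text{from (1), (3$'$), $K$-elimination, MP}\\
(A')\;& K\,K\,\neg H(n-1) && \text{the illicit premise; fails if KK fails}
\end{align*}
```

On Williamson's diagnosis, only (A'), not (1), conflicts with (2), so (1)-(3) remain jointly consistent.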


7.6 But Williamson Is in Trouble Anyway

Although we have good reason to think that the KK principle is indeed false, the question is whether the KK principle fails for the right sort of reason, the one Williamson offers in order to respect the phenomenon of higher-order vagueness. That is, the independent plausibility of the thesis that KK is false is not sufficient for Williamson's purposes. The KK principle needs to fail in the right way, the way required for higher-order vagueness to be respected. Also, it is not at all clear that the imagined opponent is committed to the argument style that Williamson states in his condensed diagnosis of what has gone wrong with the opponent's reasoning. I will leave this line of criticism aside and turn to examining whether Williamson establishes that KK fails in the way in which he needs it to fail. By examining Williamson's argument for the failure of the KK principle, we come to suspect that the reason he offers is not a good one: it is irrelevant to the principle in question, and to higher-order vagueness, for the reliability dimension of variation that is central in the analysis of first-order knowledge seems to be irrelevant to second-order knowledge. In the case of second-order knowledge the reliability condition is surely met. Second-order knowledge meets the reliability condition by preserving a link to the first-order knowledge, which must meet this condition. Once the reliability dimension of variation is fixed in this way, one might very well argue that metaknowledge is not inexact. This, if correct, would have implications for Williamson's claim that epistemicism pays respect to higher-order vagueness. All this suggests that epistemicism might have a problem of epistemic higher-order vagueness.


Moreover, if Williamson is forced to give up (MEP*) in light of its inconsistency with some undeniable facts, then he cannot give the promised account of why one would be ignorant of one's ignorance about where the hidden boundaries are. Even worse, it seems that (MEP*) is paradoxical in a way analogous to the way in which the tolerance principle is paradoxical.

7.7 The Problem of Epistemic Higher-Order Vagueness

Now, we aim to show that (1)-(3) are inconsistent. We will use in the argument Williamson's margin for error principle, and we will suppose that we can iterate the K-operator enough times (supposing that KK is not universally false, that is, that it is true in at least some cases) so that the principle contradicts some empirical facts; for example, although I know that a billion grains is a heap, it will give us the result that I do not know that a billion grains is a heap. We will sketch the argument in several steps. The major challenge then will be to account for the enough-times K-ability of the premises of the argument. Williamson's discussion gives us resources to motivate the premise in question, and we will use his own commitments in order to show that we can iterate the K-operator sufficiently many times to cause trouble for epistemicism. As with the tolerance principle, we aim to show that (MEP*) leads to paradox, and that Williamson has a problem parallel to the one that other views on vagueness have. Supposing Williamson's margin for error principle to be K-able sufficiently many times will allow us to infer a clearly false conclusion, which Williamson tries to avoid by restricting the number of iterations of the knowledge operator. So, consider an argument such as the one below:

1. K^100 (n)[H(n) → ~K~H(n-1)]   (K-able premise)
2. K^101 [~H(0)]   (K premise)


3. K^100 [H(1) → ~K~H(0)]   (instantiating (1) at n = 1)
4. K^100 [K~H(0)]   (from (2))
5. K^100 [~H(1)]   (K-embedded MT, from 3 & 4)
6. K^99 (n)[H(n) → ~K~H(n-1)]   (K-elimination, from (1))
7. K^99 [K~H(1)]   (from (5))
8. K^99 [H(2) → ~K~H(1)]   (instantiating (6) at n = 2)
9. K^99 [~H(2)]   (K-embedded MT, from 7 & 8)
:
(i) K^1 [~H(100)]

For sufficiently many iterations of the K-operator we have, written in general form, the following:

1. K^i (n)[H(n) → ~K~H(n-1)]   (premise)
2. K^(i+1) [~H(0)]   (premise)
3. K [~H(i)]   (conclusion, from 1 and 2)

where i is the numeral that stands for a number of grains that certainly make a heap. This is an indubitably false conclusion, and it contradicts the fact that I know that i grains of sand do constitute a heap. If we are correct, this shows that something has gone wrong with the margin for error principle. For the above reasoning uses only the margin for error principle, and supposes that one can iterate the K-operator sufficiently many times to cause trouble, that is, to enable us to infer something that contradicts agreed facts, such as that I know that i grains of sand is a heap. Now, Williamson expects to avoid any such conclusion by restricting the number of iterations of the K-operator. So, the expected rejoinder to our argument would be that premise (2), or maybe (1), is false and that the argument is thus unsound. Knowledge is


supposed to give out (with the sort of cases Williamson considers) at some point before we reach the apparently false conclusion. But there is no plausibility at all in denying that one knows that one knows that zero grains is not a heap, or any iteration thereof. I know that zero grains is not a heap, and I know that I know, and I know^i that I know. For no matter how many iterations of the K-operator, I know that zero grains is not a heap. My belief that zero grains is not a heap is reliable, and no metabelief about it needs a further buffer zone that would prevent me from knowing^i that zero grains is not a heap. No subtle change in the grain requirement can make zero grains constitute a heap, or anything else for that matter. It is a conceptual truth that zero grains is not a heap, and hence knowledge of it is conceptual knowledge. Similarly, knowledge of a margin for error principle would be underwritten by a belief that does not require a further buffer zone in order to make it safe enough to count as knowledge. Williamson cannot simply deny that (1) is K-able, because of the type of knowledge it represents. The way Williamson arrives at the principle in question is by giving a philosophical argument. If (MEP*) is known at all, it must be known independently of experience and on the basis of reflection. This secures the point that the reliability condition is met in this case, because the reliability dimension of variation is parasitic on the content of the belief. Unlike first-order knowledge about heaps, where knowledge or nesting of the K-operator might fail because of variation in the core belief due to a shift of the borderline, a salient feature of (MEP*) is that it does not depend on where the borderline is. If (MEP*) is true at all, it must be a conceptual truth, and then the content of the first-order belief, which is about (MEP*), is secure, and the reliability dimension of variation gets fixed in that way.
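The point that iterating K on a conceptual truth never gives out can be illustrated with a toy fixed-margin model (our illustrative construction, not Williamson's formalism; the margin width, the cutoff range, and the sample numbers are assumptions). If no admissible cutoff makes zero grains a heap, then '~H(0)' holds at every world, and no number of buffer zones can erode it, whereas an empirical belief about a particular number of grains erodes with each iteration:

```python
# Toy fixed-margin model: iterated knowledge of a conceptual truth survives
# arbitrarily many iterations, while an empirical belief gives out.

MARGIN = 1
CUTOFFS = set(range(1, 200))   # cutoff 0 excluded: zero grains is never a heap

def K(prop):
    """Knowledge: prop must hold at every admissible cutoff within MARGIN."""
    return {c for c in CUTOFFS
            if all(c2 in prop for c2 in CUTOFFS if abs(c2 - c) <= MARGIN)}

not_heap_0 = {c for c in CUTOFFS if 0 < c}    # true at every admissible cutoff
empirical = {c for c in CUTOFFS if 98 < c}    # '98 grains is not a heap'

prop_c, prop_e = not_heap_0, empirical
for _ in range(150):                          # iterate K 150 times on each
    prop_c, prop_e = K(prop_c), K(prop_e)

print(prop_c == CUTOFFS)   # True:  K^150[~H(0)] still holds everywhere
print(100 in prop_e)       # False: the empirical belief eroded long ago
```

On this picture, restricting the number of K-iterations blocks the empirical belief but gains no purchase on the conceptual one, which is the sense in which premise (2) of our argument resists Williamson's expected rejoinder.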
What is central in revealing the security status of a belief is the


way in which we acquire and justify the belief in question. That is, since we come to believe and know (MEP*) by reflection, this way of coming to know tells us something about the reliability status of such a belief. If this is correct, then the first-order belief meets the reliability condition, and one can be said to have knowledge. Also, any further nesting of K-operators cannot be prohibited by appealing to the reason that the reliability condition is not met. In light of the foregoing discussion, it looks like Williamson has not given us a reason to think that he gets the kind of failure of the KK principle that he needs. One might wonder what the diagnosis of this failure is. We suspect that what has gone wrong in this story is that the theoretical notion of reliability that is co-opted by externalists, and which is a technical notion, is conflated, in Williamson's story, with an ordinary, broad notion of reliability that is vague. Williamson seems to think that the technical notion is just like the ordinary vague notion of reliability. Now, if Williamson has not given us a reason to think that he gets the kind of failure of the KK principle that he needs, then we do not have a reason to think that second-order knowledge is inexact. That is, we can justify nesting enough K-operators to show that (MEP*) is inconsistent with some undeniable facts.

7.8 Further Reflection on MEP

Another worry about the argument for (MEP*) is that it depends on whether there is an analogy between the stadium example and the heap example. For in the former case, though it is true that I cannot know just by looking exactly how many people are in the stadium, looking is not the only available means for obtaining knowledge. By


knowing some facts, such as the capacity of the stadium and the number of tickets sold, I can come to know exactly how many people there are in the stadium. Thus, although I do not know exactly how many people there are just on the basis of perception, which is inadequate to this task, I could come to know the exact number by using my power of reflection and making the relevant inferences from the information that is available. The analogy that Williamson exploits consists in this: the unreliability of a belief about heaps is similar to the unreliability of a belief about the number of people in the stadium. The major difference between these two cases, however, is that in the stadium case it is not plausible to think that the ignorance cannot be overcome in principle. If there is a fact of the matter, and if the appropriate intellectual resources are deployed, it seems that although it might not be known exactly how many people there are in the stadium, it is at least knowable. However, the boundaries of the predicate 'heap' are unknowable in principle. That is, it is not just the case that the boundary of the predicate 'heap' is not known; it is not knowable either. So if the argument by analogy is to be a successful one, then the two circumstances, namely the stadium example and the heap example, must be sufficiently similar. Williamson, however, has failed to persuade us that the situations in question are analogous. Namely, he has failed to argue that the unreliability associated with judging the number of people in the stadium by perception, and hence the lack of knowledge, is something that cannot be overcome just by employing some other intellectual capacities, such as reflection on the relevant facts and making the relevant inferences. There is an important asymmetry between 'not known' and 'not knowable'. The former is just a


contingent matter, ignorance that could be overcome, while the latter is ignorance that in principle cannot be overcome. On the other hand, one might wonder why Williamson did not choose another example that is appropriately analogous to the heap case. Surely, he could have done so. Take, for example, continuous values such as measurements. Measuring length requires a margin for error because no tool that one could use to measure length is ideally accurate. Thus, by measuring length we get a value and, depending on the tool, a margin for error that is specified for it. This means that one never has exact knowledge of length. So far it seems that we have a case perfectly analogous to 'heap'. Now, although Williamson could have taken an example such as the one we just described, he could not have done so without the pain of losing the generality of a margin for error principle and making it dependent on some contingent facts. This is, perhaps, why he has chosen not to use an example such as our measurement case. The trouble is that a margin for error principle cannot be generalized so as to apply to any sort of measurement. It needs to be restricted to a choice of measurement unit. This becomes apparent if we take an example. One cannot apply a single (MEP*) to a meter, a millimeter, and an inch, since what counts as a small difference varies greatly from unit to unit. Now, it looks like the analogy with 'heap' breaks down again, since (MEP*) in the heap case is a general principle, independent of the number of grains and of where the borderline is. In the light of our argument against (MEP*), one might want to restrict (MEP*) in the heap case to a choice of n (the number of grains, for example). This is, however, highly implausible. To relativize (MEP*) to a number of grains would have the result that some arbitrary choice


determines what the principle is. This would introduce arbitrariness into the proposed account of vagueness. Also, there is a question whether it is plausible to think that an arbitrary and contingent choice of n implies (MEP*), which is supposed to be a statement about the reliability of one's faculties, independent of the number of grains. It is worth noticing that in the case of measurements, all (MEP*)s are just statements of accidental barriers, such as the choice of measurement unit, for example. But this does not seem to be plausible for the heap case, for the reasons already mentioned. If the foregoing discussion is right, then we can conclude that Williamson's principle is not a good one. On the one hand, it looks as if it cannot be a general principle if the analogy with measurement cases works; but then the question is what is left of the epistemic account if the principle is restricted as it is in those cases. We suspect that the answer is: arbitrariness and implausibility. On the other hand, if one attempts to insist on a general version of (MEP*), then we have the problem that it is paradoxical in a way parallel to the way in which the tolerance principle is paradoxical. Further, even if we granted Williamson that first-order knowledge is inexact, there still remains the question why one would think that second-order knowledge must be inexact. One diagnosis would be a mistaken belief in the inheritance of vagueness, of the sort we talked about in the discussion of Hyde. One of the troubles with Hyde's argument was that it depended on the inheritance of vagueness from the vague predicates to the semantic predicates that we use to talk about them. It seems that something similar is going on in Williamson's discussion. In Williamson's case, the vagueness invades the epistemic predicates that he uses to talk about vagueness. First-order vagueness is described as a type of ignorance, and the same holds for


second-order vagueness. But it seems that the only reason Williamson thinks that metaknowledge is inexact is that our theoretical judgments about our reliability, according to him, do not meet the reliability condition. The justification for this claim is offered through the claim that knowledge that one knows requires two margins for error:

A special case of inexact knowledge is that in which the proposition A is itself of the form 'It is known that B'. As we are not perfectly accurate judges of the number in a crowd, so we are not perfectly accurate judges of the reliability of a belief. A margin for error principle for 'It is known that B' in place of A says that 'It is known that B' is true in all cases similar to cases in which 'It is known that it is known that B' is true. As usual, the required degree and kind of similarity depend on the circumstances, for example, on one's ability to judge reliability; 'It is known that B' and B may need margins for error of different widths. If 'It is known that B' is true but there are sufficiently similar cases in which it is false, then it is not available to be known. It cannot be known within its margin for error. Thus the failure of the KK principle is a natural consequence of inexactness of our knowledge of our knowledge. (pp. 227-8)

Now, it seems to be a mistake to claim that we cannot iterate K-operators without increasing the width of the buffer zone for the reliability of the belief, as we have seen in the earlier discussion of this matter. For the knowledge we get by iteration of Ks is metaknowledge and is independent of the initial reliability dimension of variation. If the possible inexactness of first-order knowledge is not inheritable, since the types of knowledge in question are different and reliability depends on the content of a belief, then we do not have a reason to think that metaknowledge is inexact.

Conclusion
If we are right, what we have learned from the foregoing discussion is that the epistemic view does have a problem with higher-order vagueness, although not a problem of the semantic sort. The target we have singled out for blame is (MEP*). There is an apparent tension between the attempt to restrict (MEP*) and the attempt to account for one's ignorance in borderline cases, given that one must opt for its restriction if one is to avoid its being paradoxical in the way in which the


tolerance principle is paradoxical. We fear, however, that one cannot sacrifice the generality of the principle either, without infecting the whole account with implausibility and arbitrariness. We have argued that the failure of the KK principle has not been shown to be responsible for the inconsistency of (1)-(3), and so we have good reason to think that something has gone wrong with (MEP*). We must conclude, in light of the foregoing discussion, that i) epistemicism faces the problem of higher-order vagueness too, and ii) Williamson fails to give a good reason to think that the boundaries postulated by epistemicism are unknowable. Thus, epistemicism cannot hope to be a good theory of vagueness.


CHAPTER 8
CONCLUSION

In the foregoing discussion we examined the views that share the paradigmatic conception of vagueness, as well as views about that conception. We distinguished between, on the one hand, the views that acknowledge the phenomenon of higher-order vagueness and attempt to give an account of it (i.e., they aim to accommodate higher-order vagueness in the theory) and, on the other hand, the views that deny higher-order vagueness. In the first category, we discussed Fine's (1975) treatment of higher-order vagueness, the degree theory, Burgess' (1990) attempt to show that higher-order vagueness does not go all the way up in the hierarchy of borderline cases, and then Hyde's (1994) proposal as to why higher-order vagueness should not worry theorists who share the paradigmatic conception of vagueness. Epistemicism (Williamson 1994) also belongs to this category, but it is important to emphasize that the higher-order vagueness that epistemicism acknowledges is of the epistemic sort. The denial of higher-order vagueness was discussed as presented in Wright (1992), with his argument that higher-order vagueness is incoherent. By examining these proposals, we learned that none of these views gives a satisfactory treatment of higher-order vagueness, and that higher-order vagueness is a serious problem for all of them. We showed that Fine's supervaluational strategy does not work, and we specified three unresolved and seemingly insoluble problems for Fine. First, following Burgess'


line of criticism, we saw that not even the first-level supervaluational story works. Secondly, sharp boundaries emerge after all. Thirdly, it looks like nothing counts as supertrue on that account. Similar criticism was articulated regarding the degree theory and its continuum-valued semantics, which shares Fine's predicament and has no special resources to handle higher-order vagueness. By examining Burgess' treatment of the problem, we discovered that his attempt to show that higher-order vagueness is finitely limited fails. His analysis of the secondary-quality predicates falls short of showing that vague secondary-quality predicates can be analyzed in limitedly vague terms. Further, Burgess' analysis is hopelessly circular. All these views have in common that they follow an intuitive approach to the phenomenon of vagueness. This means they all attempt to tell some story that accounts not just for vagueness of the first order but also for vagueness of any order, or they aim to show that the hierarchy of borderline cases terminates and does not run all the way up. Having shown that they have failed, we turned to discuss an attempt to deny higher-order vagueness. It is counterintuitive to deny higher-order vagueness, and we have shown that Wright's attempt to back up such a denial failed. He gets the conclusion that higher-order vagueness is incoherent only because of the violation of the restriction on the application of the DEF rule. This violation also gives some contradictory results. We also showed that epistemicism is not immune to the problem of higher-order vagueness. It has a problem parallel to semantic higher-order vagueness, namely epistemic higher-order vagueness. We came to conclude that Williamson's MEP, which is


supposed to allow for respecting the phenomenon of higher-order vagueness, is paradoxical in a way parallel to the way in which the tolerance principle is paradoxical. We also discussed an attempt to pay respect to the basic vagueness phenomenon without the pain of having the problem of higher-order vagueness. We examined Hyde's metatheoretical argument and showed that Hyde's argument is not a good one: it relies on fallacious reasoning. Moreover, the argument is question-begging, and does not respect the peculiarity of its dialectical position, namely its being a meta-theory which cannot take for granted what is taken for granted in the theory that it aims to defend. After close examination, we came to the conclusion that all views that share the paradigmatic conception of vagueness i) face the problem of higher-order vagueness, or some parallel problem, and ii) fail to deal successfully with it. An important feature of the kind of failure that these views exhibit is that they do not fail for some accidental reason, in such a way that some maneuver would fix the problem. Rather, they fail for principled reasons, and there seem to be no resources in the theoretical milieu discussed to deal with the problem. So, we come to conclude that the problem of higher-order vagueness is insoluble for the paradigmatic conception of vagueness. We are inclined to think that there is something in the paradigmatic picture that generates the problem and that is the common denominator of views which otherwise differ in many respects. A natural candidate to mark as the trouble-generator is the very characterization of the phenomenon of first-order vagueness by the presence of borderline cases, which, we suspect, rests on an idealization. If this diagnosis is correct, a take-home lesson of the foregoing discussion is that the underlying assumption of the


paradigmatic conception of vagueness should not be taken at face value anymore. The presupposition that generates the problem seems to be the presupposition that vague predicates have application conditions. Undoubtedly, vague predicates are used in this fashion in our everyday linguistic practice. Given the semantic role that predicates play in the language, one can be tempted to assume that vague predicates are assigned a classificatory role, and that they either apply or fail to apply. But vague predicates, unlike precise ones, are not well equipped to perform the job that predicates are assigned in the language, namely to classify and to categorize. So, there is some discrepancy between our expectations for vague predicates and their performance. This should not be so surprising if we think that they are semantically deficient. This deficiency essentially affects their performance. In everyday practice we neglect this feature of vague predicates and use them as if they were precise. However, any theory that translates this pragmatic feature of vague predicates into a theoretical account of them does so by translating this idealization into the proposed semantic story about them, and inevitably ends up in trouble.[1] Theorizing about vague predicates under an idealization seems to be the main culprit for the type of trouble that we identified as the problem of higher-order vagueness. All the theories that share the paradigmatic conception of vagueness are based on idealization; namely, they import our pragmatic idealization of vague predicates into the theory. So, if the idealization that infects an account of vagueness is correctly identified as the trouble-maker, then a take-home lesson from the failures of the paradigmatic conception of vagueness to deal successfully with the problem of higher-order vagueness is that there is no room for idealization in the

[1] For a broader discussion see Ludwig & Ray (2002).

PAGE 99

92 theory of vague predicates. If this is the correct diagnosis of what has gone wrong with the paradigmatic conception of vagueness, and if that conception inevitably ends up in the iterative conception, the question is what we are left to say about the prospects for a solution to the problem of higher-order vagueness. We cannot but conclude that the only promising way to go is not to let the problem get off the ground in the first place. This, however, seems attainable only if one abandons the temptation to theorize about vague predicates under an idealization that takes pretheoretical intuitions for granted. What the insuperability of the problem of higher-order vagueness for the paradigmatic theories of vagueness reveals, we suggest, is that the paradigmatic conception of vagueness and the theories that accept it call for a thorough rethinking of the basic presuppositions that we suspect are responsible for the common difficulty that all the discussed views have, namely that they are irreconcilable with higher-order vagueness.

PAGE 100

REFERENCES

Burgess, John Alexander (1990) "The Sorites Paradox and Higher-Order Vagueness," Synthese 85: pp. 417-474.

Edgington, Dorothy (1993) "Wright and Sainsbury on Higher-Order Vagueness," Analysis 53.4: pp. 193-200.

Fine, Kit (1975) "Vagueness, Truth and Logic," Synthese 30: pp. 265-300.

Heck, Richard (1993) "A Note on the Logic of (Higher-Order) Vagueness," Analysis 53.4: pp. 201-208.

Hyde, Dominic (1994) "Why Higher-Order Vagueness is a Pseudo-Problem," Mind 103.409: pp. 35-41.

Ludwig, Kirk & Ray, Greg (2002) "Vagueness and the Sorites Paradox," in Tomberlin, J. (ed.), Language and Mind, Philosophical Perspectives 16, Ridgeview Press, Atascadero, CA, pp. 419-461.

Ray, Greg (2004) "Williamson's Master Argument on Vagueness," Synthese 138: pp. 175-206.

Sorensen, Roy (1985) "An Argument for the Vagueness of 'Vague'," Analysis 45: pp. 134-137.

Williamson, Timothy (1994) Vagueness, London, Routledge.

Wright, Crispin (1992) "Is Higher-Order Vagueness Coherent?" Analysis 52.3: pp. 129-139.

93

PAGE 101

BIOGRAPHICAL SKETCH I received a BA in Philosophy at the University of Belgrade (Serbia) in 2001. After receiving an MA in philosophy, I intend to continue my studies in philosophy at the University of Florida. 94


Permanent Link: http://ufdc.ufl.edu/UFE0004887/00001

Material Information

Title: The Problem of Higher-Order Vagueness
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0004887:00001



THE PROBLEM OF HIGHER-ORDER VAGUENESS


By

IVANA SIMIC


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ARTS

UNIVERSITY OF FLORIDA


2004


Copyright 2004

by

Ivana Simic


ACKNOWLEDGMENTS

I would like to thank Gene Witmer and Kirk Ludwig for helpful comments. I am

particularly indebted to Greg Ray for very fruitful discussion, for reading and

commenting on various versions of this thesis, and for very helpful advice in these

matters.



TABLE OF CONTENTS


page

ACKNOWLEDGMENTS.................................................................................................iii

ABSTRACT.......................................................................................................................vi

CHAPTER

1 INTRODUCTION........................................................................................................1

2 FINE'S TREATMENT OF HIGHER-ORDER VAGUENESS...................................8

2.1 Supervaluational Framework..................................................................................8
2.2 The Application of Supervaluational Strategy to the Soritical Argument...........12
2.3 Supervaluationism and Higher-Order Vagueness................................................12
2.4 Resurrection of the Paradox..................................................................................14
2.5 Fine's Expected Reply..........................................................................................16
2.6 Two Problems for Fine.........................................................................................17

3 THE DEGREE THEORY AND HIGHER-ORDER VAGUENESS.........................23

3.1 The Basic Idea of a Degree Theory......................................................................23
3.2 Meta-Language, Vague or Precise?......................................................................24

4 BURGESS' ANALYSIS OF THE SECONDARY-QUALITY PREDICATES........27

4.1 Burgess' Project....................................................................................................28
4.2 The Circularity Problem in the Proposed Schema................................................29
4.3 The Problem of the Unacknowledged Source of Vagueness in the Proposed
Schema...................................................................................................................31

5 HYDE'S RESPONSE TO THE PROBLEM OF HIGHER-ORDER
VAGUENESS............................................................................................................36

5.1 Paradigmatic vs. Iterative Conception of Vagueness and the Problem of
Higher-Order Vagueness.....................................................................................38
5.2 Hyde's Argument..................................................................................................39
5.3 Sorensen's Argument.............................................................................................4
5.4 The Circularity Problem in Hyde's Argument.......................................................4
5.5 The Problem with the Strategy.............................................................................49

6 IS HIGHER-ORDER VAGUENESS INCOHERENT?............................................52

6.1 The No Sharp Boundaries Paradox........................................................................5
6.2 The Higher-Order No Sharp Boundaries Paradox................................................55
6.3 Wright's Argument...............................................................................................57
6.4 Is Higher-Order Vagueness Really Incoherent?....................................................5
6.5 Heck's Reply.........................................................................................................59
6.6 Edgington's Reply................................................................................................61

7 EPISTEMICISM AND HIGHER-ORDER VAGUENESS.......................................64

7.1 The Epistemic View.............................................................................................66
7.2 A Margin for Error Principle................................................................................68
7.3 Epistemic Higher-Order Vagueness.....................................................................71
7.4 Why and How KK Fails.........................................................................................7
7.5 The Failure of KK Answers a Seeming Trouble with MEP.................................76
7.6 But Williamson is in Trouble Anyway.................................................................78
7.7 The Problem of Epistemic Higher-Order Vagueness...........................................79
7.8 Further Reflection on MEP..................................................................................82

8 CONCLUSION.............................................................................................................8

REFERENCES...............................................................................................................93

BIOGRAPHICAL SKETCH..........................................................................................94



Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Arts

THE PROBLEM OF HIGHER-ORDER VAGUENESS

By

Ivana Simic

May 2004

Chair: Greg Ray
Major Department: Philosophy

According to the paradigmatic conception of vagueness, vague predicates admit

borderline cases of their applicability, and they tolerate (to some extent) incremental

changes along the relevant dimension of variation. However, given that vague predicates

admit borderline cases of the first order, and that they are tolerant, they must be said to

admit borderline cases of the second order, third order, and so on indefinitely. This

feature of vague predicates constitutes the phenomenon of higher-order

vagueness. I argued that all theorists who accepted the paradigmatic conception of

vagueness face the problem of higher-order vagueness or some parallel problem, and fail

to successfully deal with it. An important feature of the failure that these views exhibit is

that they fail not for some accidental reason that would allow for a possible fix, but they

rather fail for some principled reasons, and there are no resources in this theoretical

milieu to give a satisfactory treatment of the problem of higher-order vagueness. If this is

correct, then what imposes itself as a conclusion is that there is a need for rethinking the

basic vagueness phenomenon by reexamining the basic presuppositions of the

paradigmatic conception of vagueness that cannot be taken for granted anymore.


CHAPTER 1
INTRODUCTION

Consider predicates such as 'bald', 'heap', 'tall', 'red'. No doubt, these predicates

are vague. Pretheoretically, there are three features that they exhibit.

Firstly, vague predicates seem to admit borderline cases of their applicability. That

is, they give us cases in which the predicate seems to us to clearly apply, cases in which it

seems to us that it clearly fails to apply, and cases in which it seems to us that the

predicate neither clearly applies nor clearly fails to apply.

Secondly, the predicates in question seem to admit at least one dimension of

variation along the relevant scale of applicability, such that small changes along the

relevant scale cannot make any difference whether the predicate applies or fails to apply.

That is, vague predicates seem to be tolerant.

Following the above-mentioned intuitions, it seems that vague predicates are at

least first-order vague (i.e., they seem to admit at least first-order borderline cases of their

applicability). By first-order borderline cases we mean that there is no sharp boundary

between the kinds of cases to which the predicate seems to clearly apply, and the kinds of

cases to which it seems to clearly fail to apply. Now, given the tolerance intuition we are

intuitively forced to acknowledge another apparent feature of vague predicates.

So, thirdly, it is also the case that intuitively there seems not to be a sharp

borderline between the kinds of cases to which the predicate seems to clearly apply, and

the kinds of cases that we call borderline cases. Similarly, there seems not to be a sharp

borderline between the kinds of cases to which the predicate seems to clearly fail to










apply, and the kinds of cases that seem to be borderline cases. So, it seems that there are

cases that are i) not cases where the predicate clearly applies, ii) not cases where the

predicate clearly fails to apply, but are also iii) not cases that are clearly borderline cases.

Call such cases second-order borderline cases. Vague predicates seem typically to be

second-order vague, because it is plausible to think (using this intuition) that if there are

first-order borderline cases, then there are second-order borderline cases. By extension,

we can describe what it would be for a predicate to be third-order vague, and so on

indefinitely.

Thus, intuitively vague predicates exhibit vagueness of indefinitely high order. This

is the phenomenon of higher-order vagueness.
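The hierarchy of borderline cases just described can be pictured with a toy model. The sketch below is only an illustration under stipulated cutoffs and a stipulated margin, which no theory discussed here endorses: 'clearly' is modeled as requiring a whole margin of neighboring cases to fall under a category, and second-order borderline cases emerge as cases that are not clearly in any first-order category.

```python
# Toy model of first- and second-order borderline cases (an illustration,
# not any particular theory's formalism). 'tall' is given stipulated clear
# zones, and 'clearly' is modeled as a margin operator: a height is clearly
# in a category only if every height within MARGIN of it is in it.
MARGIN = 3  # cm; a stipulation for illustration

def category(h):
    # First-order classification with stipulated cutoffs (in cm).
    if h >= 185:
        return "tall"
    if h <= 170:
        return "not tall"
    return "borderline"

def clearly(cat, h):
    # 'clearly cat' requires the whole margin around h to fall under cat.
    return all(category(x) == cat for x in range(h - MARGIN, h + MARGIN + 1))

# First-order borderline: neither clearly tall nor clearly not tall.
first_order = [h for h in range(160, 201)
               if not clearly("tall", h) and not clearly("not tall", h)]

# Second-order borderline: not clearly in any of the three categories.
second_order = [h for h in range(160, 201)
                if not clearly("tall", h) and not clearly("not tall", h)
                and not clearly("borderline", h)]

print(first_order)   # one band spanning the cutoffs
print(second_order)  # two narrower bands, one around each first-order boundary
```

Notably, the toy model itself draws sharp boundaries for the second-order borderline zones, and iterating the construction only relocates the sharpness, which is a small-scale picture of the regress pressed in what follows.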

The goal of this project is to show that the phenomenon of higher-order vagueness

is an insuperable problem for theorists who accept the paradigmatic conception of

vagueness in their attempt to give semantics for vague predicates and to specify the

conditions under which vague sentences (i.e., sentences that involve vague predicates) are

true.

By paradigmatic conception of vagueness we mean the spectrum of views that

attempt to tell a story about the semantic behavior of vague predicates and which take for

granted the pretheoretical intuition that vague predicates either apply or fail to apply, and

admit borderline cases of applicability. These different views might, however, differ in

the way they characterize the notion of borderline cases (semantic characterization and

epistemic characterization, for example), but nevertheless, they all accept the theoretical

characterization of vagueness that rests on co-opting into the theory our intuitions about how

things seem to us at a pretheoretical level, and they end up saying that vague predicates










either apply or fail to apply and have borderline cases. Typically, they also accept the

intuition that they are tolerant, but aim to show that the theoretical version of the

tolerance intuition needs some restriction (or must be denied) in order to accommodate the

phenomenon of higher-order vagueness, or to avoid its being a problem for the proposed

account of vague predicates.

It turns out, as we aim to show, that theorists who have accepted the paradigmatic

conception of vagueness and phenomenon of higher-order vagueness have been unable to

successfully deal with or to avoid the problem of higher-order vagueness. We also see

that theorists who have accepted the paradigmatic conception of vagueness, but who have

argued against the genuineness of the phenomenon of higher-order vagueness, are also

unable to avoid problems. This leads us to suggest that there is some tension in the

paradigmatic conception of vagueness between its basic presuppositions that vague

predicates admit borderline cases and that they have application-conditions, on one hand,

and the phenomenon of higher-order vagueness on the other hand. Because the only

thing that these different views that share the paradigmatic conception of vagueness have

in common is the characterization of vagueness by the presence of borderline cases (no

matter whether they are characterized semantically or epistemically, for example), and

because these views attempt to reconcile the description of vague predicates as higher-

order vague with the claim that they have application-conditions, we suspect that these

presuppositions should be targeted as the generator of the trouble for these

views.

The plan of the thesis goes as follows. In Chapter 2, we consider Kit Fine's (1975)

treatment of higher-order vagueness by applying the supervaluational strategy. The










solution Fine proposes consists in respecting higher-order vagueness through a meta-

language that is vague, so that the seeming sharp boundaries set up by the theory are just

the consequence of successive approximations. So long as one keeps moving one level up

in the meta-language, sharp boundaries are avoided.

John Burgess (1990) challenges Fine's strategy by appealing to its inability to solve

the sorites paradox. Since the sorites paradox is the symptom of vagueness for the

predicates for which it can be constructed, one cannot but conclude that if Burgess is

right, then Fine has not given a good account of vagueness. We have a reason to think

that Burgess has shown that Fine is not successful in dissolving the paradox. We also aim

to show that Fine's truth-conditions for vague sentences cannot be met if he is to respect

higher-order vagueness. Even worse, he cannot but end up with sharp boundaries

anyway.

In Chapter 3 we briefly discuss the degree-theory and its strategy of introducing the

continuum-valued semantics for dealing with vague terms. One might think that the

degree theory had a natural solution to the problem of higher-order vagueness, but short

of simply denying the phenomenon of higher-order vagueness, degree theory ends up

facing just the same sort of problem Fine faces.

In Chapter 4 we discuss Burgess' thesis that higher-order vagueness terminates at a

low finite level. Burgess aims to show that secondary-quality predicates admit of an

analysis which shows that they are limitedly vague. We find his

demonstration unsatisfactory on the grounds that it falls short of delivering on its promise and

suffers from unavoidable circularity.










After examining these representative views on higher-order vagueness based on the

paradigmatic conception, we come to conclude that none offers a satisfactory treatment

of higher-order vagueness. Thus, we turn, in Chapter 5, to a slightly different approach,

as presented by Dominic Hyde (1994). He acknowledges the phenomenon of higher-

order vagueness, but emphasizes that the paradigmatic theorists need not do any extra

work in order to modify their theory so as to accommodate higher-order vagueness.

'Vague' is vague, according to Hyde. Higher-order vagueness, he argues, is already

present and respected in these theorists' meta-languages. We aim to show that Hyde's

argument is not sound, and that it relies on a not-uncommon confusion regarding

semantic predicates such as 'vague'. Also, after examination, Hyde's argument turns out

to be question-begging.

This series of unsuccessful treatments of higher-order vagueness leads us to a view

that responds to higher-order vagueness by denying it. The subject of Chapter 6 is Crispin

Wright's (1992) argument that higher-order vagueness is not a problem, since it is

incoherent. After we present Wright's argument, we present two related criticisms of it,

namely Richard Heck's (1993) and Dorothy Edgington's (1993), which show that Wright's

argument relies on the misapplication of a nonclassical rule of inference in a classical

proof. We aim to show that, in light of Heck's and Edgington's criticisms, we must

abandon Wright's view, and admit that the case of higher-order vagueness is left

unanswered.

In Chapter 7 we turn to the epistemic treatment of higher-order vagueness.

Although epistemicism does not have a problem of semantic higher-order vagueness

(since borderline cases are characterized epistemically) we aim to show that it still has a










parallel problem, namely the problem of epistemic higher-order vagueness.

Epistemicism, as championed by Timothy Williamson (1994), has exchanged one

problem, namely the problem of semantic higher-order vagueness, with a parallel and

equally vexed problem, namely the problem of epistemic higher-order vagueness. The

exchange occurs by rejecting the tolerance principle as a semantic principle that governs

vague predicates, and replacing it with an epistemic "margin for error principle".

However, we aim to show that just as the former gives us paradoxical results regarding

the truth of certain claims, the latter does likewise regarding our knowledge of them. In

the context of our discussion, some broader issues for Williamson's view come to light

which suggest that his epistemicism cannot hope to be a successful theory

of vagueness.

It is worth noticing at the outset that the paradigmatic conception of vagueness is

underwritten by the assumption that vague predicates have application conditions and

that vague sentences have truth-values. No doubt, we do use these predicates in everyday

practice and communication as if they in fact do have the mentioned features. This might

very well be just an idealization. If so, then the question is whether the theorists in

question succumb to an idealization in theorizing about the practice, that is, whether

they translate our intuitive, idealized description of the phenomenon into a

theory, which consequently leads to trouble, namely higher-order vagueness. This

indicates that the assumption that underwrites the paradigmatic conception of vagueness

cannot be taken for granted anymore, given that after critical reflection we come across

an insuperable difficulty for it. The situation is also aggravated by the fact that one could

not hope to fix all of the specified difficulties and so save the paradigmatic conception of










vagueness. The problem of higher-order vagueness is a serious obstacle to accepting the

basic assumption of the paradigmatic conception of vagueness precisely because, as we

aim to show, all the projects of dealing with higher-order vagueness have a principled

problem with higher-order vagueness and one cannot hope to solve this problem by

modifying any of these accounts of vagueness.

We acknowledge that we do not have a positive story about the right conception of

vagueness. That question could be the subject of a whole new project. Yet, if the

discussion we pursue is successful, the central presuppositions of the paradigmatic

conception of vagueness cannot be taken for granted and need reexamination, which

amounts to rethinking the whole basic vagueness phenomenon.


CHAPTER 2
FINE'S TREATMENT OF HIGHER-ORDER VAGUENESS

Overview. In this Chapter, we will present and critically examine Kit Fine's

(1975)1 treatment of higher-order vagueness and Burgess' (1990)2 criticism. Fine

acknowledges higher-order vagueness and aims to accommodate it in his proposed

account of vagueness based on a supervaluational framework.

The plan of the Chapter goes as follows: in Section 2.1, we will give a description

of the basic supervaluational idea. In Section 2.2, we will present an application of this

idea to the sorites paradox. In Section 2.3, we will present Fine's treatment of higher-

order vagueness. Section 2.4 presents Burgess' challenge that Fine has not resolved the

paradox. In Section 2.5, we try to give a possible response that Fine could make to this

challenge. In Section 2.6 we will pursue a line of criticism akin to Burgess', one which

also aims to make a further point about Fine's treatment of higher-order vagueness. These

considerations should have as a result the conclusion that higher-order vagueness

presents an insuperable difficulty for Fine, and that there are no resources in Fine's

strategy to account for the problems that we are concerned with.

2.1 Supervaluational Framework

The central project that Fine undertakes in 'Vagueness, Truth and Logic' consists

in attempting to specify truth-conditions for vague sentences. In order to implement this



1 For all references to Fine in the thesis see (Fine 1975).

2 For all references to Burgess in the thesis see (Burgess 1990).










project, he introduces a supervaluational framework that is supposed to accommodate

two essential features of vague predicates: higher-order vagueness and what Fine calls

"penumbral connections".

The main idea of the supervaluational approach consists in considering not only the

truth-values that vague sentences actually admit, but also truth-values that they could

admit after making them more precise. The underlying idea of the supervaluational

framework is that vague sentences have truth-values. However, we evaluate vague

sentences not just according to the actual truth-values that they might have, but according

to the truth-values that they could have after precisifying the vague terms that they

involve. Within this framework, a vague sentence is true just in case it is true on all ways

of making it completely precise (that is, supertrue), false just in case it is false on all

ways of making it completely precise (that is, superfalse), and neither true nor false

otherwise. Success in this project is expected to lead to the

dissolution of the sorites paradox, and consequently to an answer to the question of what has

gone wrong with the soritical argument.

At the core of the proposed framework is the characterization of vagueness as a

semantic phenomenon. Vagueness is, as Fine puts it, a deficiency of meaning. That is,

the meaning of vague predicates, and hence of vague sentences, is underdetermined

by the rules of the language. The meanings, however, can be made more complete, but

there are constraints on what the possible completions of vague meanings can be. Such

constraints include, for example, that what was true before making the meaning more

precise must remain true after the process of meaning completion.










The main motivation for the supervaluational framework and for this approach to

the problem of the sorites paradox lies in dissatisfaction with the truth-functional

approach to logical connectives which presupposes the principle of bivalence. Such an

approach, according to Fine, is not able to accommodate what he calls "penumbral

connections", for it leaves vague sentences without truth-value. This will become clearer

after we say what, for Fine, a "penumbral connection" is.

The notion of penumbral connection and the corresponding notion of penumbral

truth are defined as the possibility that logical relations hold among the predicates and

among the sentences, which are, due to their vagueness, indefinite in truth-value. The

best way to see what Fine has in mind is via an example, and he himself introduces this

notion partly by an example. Fine takes, for example, a vague sentence 'P' which says

that this blob is red. He points out that 'P and not-P' is always false, even when 'P' is

indeterminate in truth-value (i.e., when the blob is the borderline case of the predicate

'red'). The truth of the sentence 'It is always false that P and not-P' is a penumbral truth,

according to Fine. The sentence in question always has a determinate truth-value even

though 'P' is vague, and hence indeterminate in truth-value. Let us take now, following

Fine, another vague sentence, 'R' that says that the blob is pink. The conjunction of 'P'

and 'R' is indefinite, due to vagueness of both 'P' and 'R'. One might wonder how this

could be--namely, how the truth-value of the conjunction sometimes depends on the

truth-value of its conjuncts, and sometimes it does not. Fine has a ready rationale for the

difference in truth-values between 'P and not-P', and 'P and R'. The difference in truth-

value between these two conjunctions, according to Fine, corresponds to the difference in

how the sentences in question can be made more precise by sharpening the vague










predicates that they contain. The sentence 'P & ~P' is always false, no matter how we

sharpen 'red', while 'P & R' is true under some sharpenings of what 'P' and 'R' say, and

false on others, and hence neither true nor false. To illustrate this Fine takes into account

the vague predicate 'small' as an example. The sentence 'This blob is red and this blob is

not red' is always false, according to what has been said above, for no matter which

sharpening of 'red' we take, a blob cannot satisfy both predicates 'red' and 'not red', that

is, there is no sharpening under which the blob can be made a clear case of both. Contrary

to the case of 'red' and 'not red', the sentence 'This blob is small and red' is neither true

nor false; for some sharpenings of 'small' and 'red', it is going to be true, on some

sharpenings false, and hence the sentence is indeterminate in truth-value. A salient

feature of the sentence 'This blob is red and small' is that it could sometimes be true if

the blob is a clear case of both predicates 'red' and 'small'. Now, to say that a sentence is

indeterminate in truth-value is not to introduce another semantic category, namely the

indeterminate, one might think. Fine's response to this is that 'indeterminate' has a

peculiar status, one which is not the status of a semantic category. Fine

emphasizes that although a vague sentence can lack a (super)truth-value, it has a truth-

value on every so-called precisification.

The framework for evaluating vague sentences that Fine develops is based on the

notion of admissible precisification. According to Fine, a precisification of a predicate is

admissible as long as it i) includes all the clear positive cases for the predicate, and

ii) excludes all the clear negative cases for the predicate.

According to Fine, a vague sentence is true just in case it is true for all ways of

making it completely precise, that is under all admissible precisifications. Fine coins the










term 'supertruth' for sentences that meet this condition. Thus, the vague sentence is said

to be true just in case it is true on all admissible precisifications of the vague terms in it.
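Fine's definitions of supertruth and superfalsity can be made concrete in a toy model. The sketch below is our illustration, not Fine's own formalism: a precisification of 'tall' is modeled as a cutoff height, and the range of admissible cutoffs is stipulated purely for the example. The model also exhibits a penumbral truth: a contradiction involving a borderline case comes out superfalse, and the corresponding instance of excluded middle comes out supertrue.

```python
# Toy supervaluational model for the vague predicate 'tall'.
# An admissible precisification is modeled as a cutoff height (in cm):
# on that precisification, anything at or above the cutoff counts as tall.
# The range of admissible cutoffs is a stipulation for illustration only.
ADMISSIBLE_CUTOFFS = range(170, 191)  # 170, 171, ..., 190

def evaluate(sentence):
    """Supervaluate: 'true' iff true on all admissible precisifications
    (supertruth), 'false' iff false on all (superfalsity), else 'indefinite'."""
    values = {sentence(cutoff) for cutoff in ADMISSIBLE_CUTOFFS}
    if values == {True}:
        return "true"
    if values == {False}:
        return "false"
    return "indefinite"

def tall(height):
    # A parameterized sentence: 'a person of this height is tall',
    # awaiting a precisification (cutoff) to be evaluated.
    return lambda cutoff: height >= cutoff

h = 180  # a borderline case: some admissible cutoffs include it, some exclude it

print(evaluate(tall(h)))                                    # indefinite
print(evaluate(lambda c: tall(h)(c) and not tall(h)(c)))    # false (penumbral)
print(evaluate(lambda c: tall(h)(c) or not tall(h)(c)))     # true (penumbral)
```

The two penumbral verdicts fall out of the definition: even though 'this person is tall' is indefinite, its conjunction with its negation is false on every precisification, and the disjunction is true on every one.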

2.2 The Application of Supervaluational Strategy
to the Soritical Argument

Let us turn now to the application of the supervaluational strategy to the sorites

paradox, and to Fine's answer to the question what has gone wrong with the soritical

argument. Consider a series of people starting with a clearly tall person and ending with

a clearly short person, where the difference between successive members of the series is

negligible (say, less than a millimeter). This series is a soritical series, and we can

construct the following soritical argument:

1. X1 is tall.

2. For all Xi, if Xi is tall, then Xi+1 is tall.

3. Xn is tall,

when Xn is of height 1.5 m, which clearly contradicts the supposition that the last

member of the series is clearly short.

How does Fine's approach shed light on the sorites paradox? The answer that Fine

provides to this problem consists in the claim that the major premise of the soritical

argument is false and hence that the argument is unsound. This is so because there is a

sharpening of 'tall', say 'tall*', which is such that 'tall*' applies to Xi, and it does not

apply to Xi+1. In other words, there will be a greatest i such that Xi satisfies the

predicate in question, and its successor does not.

2.3 Supervaluationism and Higher-Order Vagueness

A natural response to this approach to the sorites paradox consists in the charge

that, as it stands, Fine's supervaluational strategy of sharpening vague predicates (and the










notion of admissible precisification in particular) would seem to presuppose that there is

a clear semantic demarcation between cases to which a vague predicate applies, cases to

which it fails to apply, and borderline cases. If that is right, Fine fails to account for the

phenomenon of higher-order vagueness.

Fine, however, has a ready answer to the problem of higher-order vagueness, which

he thinks, besides penumbral connections, is an essential feature of vague predicates. In

fact, he thinks that it is necessary to be higher-order vague in order to count as a vague

predicate at all. His response to the charge that supervaluationism sets sharp boundaries

to vague predicates is to say that the notion of admissible precisification is itself vague.

That in turn implies that the notion of supertruth is vague too, since it is defined in terms

of the notion of admissible precisification. Since the notion of supertruth belongs to the meta-

language, and admissibility of precisification is central to it, the meta-language must

be vague too, rather than precise. Thus, it turns out that the truth predicate is vague due to

the vagueness of the notion central to its analysis and, hence, higher-order vague.

Thus, the strategy of supervaluations respects higher-order vagueness by being

applied to the object-language, which is precisified and whose boundaries are fixed at the

object-level, while at the same time higher-order vagueness is respected by going one level

up in the hierarchy of meta-languages. In other words, vagueness is reflected in a vague

meta-language through the vagueness of the truth predicate. However, the story of

sharpening does not end here, for the meta-language, in which the analysis of the object-

language is given, is itself vague, and needs to be precisified, while vagueness is reflected

in the meta-meta-language, and so on indefinitely.










The upshot of the approach sketched by Fine is to allow one to say that the major

premise of the soritical argument is not true, because it is not supertrue, without imposing

any sharp boundaries between different semantic categories. Indeed, it will be no

surprise that there will be some sharpening of the predicate 'tall', say 'tall*', such

that there will be some Xi which is the last object in the soritical series to which

'tall*' applies, while it does not apply to its successor Xi+1. This, according to Fine, does not

presuppose sharp boundaries, for it is true just to a first approximation. By reapplying the

strategy we get the result for the second approximation, and so on indefinitely. Thus,

supervaluationism is said not to presuppose sharp boundaries and hence respects higher-

order vagueness.

2.4 Resurrection of the Paradox

We have seen what Fine's response to the sorites paradox is when the soritical argument has a general inductive premise as its major premise. Yet if Fine has resolved the paradox, the strategy has to be applicable to the soritical argument when it is given in a different fashion. So, consider again our soritical series of people ordered according to height, starting with a clearly tall person and ending with a clearly short person (where the difference in height between any two adjacent members of the series is less than a millimeter). We can write the soritical argument as follows:

1. X1 is tall

2. If X1 is tall, then X2 is tall

3. If X2 is tall, then X3 is tall

...
n. Xn is tall,

where Xn is 1.5 m in height, which contradicts the original supposition that Xn is clearly short.

In this form, the argument has no general inductive premise, but only a stepwise series of conditionals, where each conditional has the form 'if Xn is tall, then Xn+1 is tall'. If we write the soritical argument in this form, that is, as a series of conditionals instead of a general inductive premise, then, with the help of a finite number of applications of Modus Ponens, we get the same paradoxical result: that someone whose height is only 1.5 m is tall.
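The mechanics of this collapse can be made vivid with a small illustration of our own (the numbers and the Python rendering are ours, not part of the argument): classical modus ponens, applied to the tolerance conditionals one by one, drags 'tall' down the entire series.

```python
# Our toy rendering of the stepwise sorites: a 2.0 m person, 500
# conditionals of the form 'if X_i is tall then X_(i+1) is tall',
# and modus ponens applied at each step.
heights_mm = [2000 - i for i in range(501)]   # 2000 mm down to 1500 mm

tall = {0}                                    # premise 1: X_1 is tall
for i in range(len(heights_mm) - 1):
    if i in tall:                             # antecedent holds ...
        tall.add(i + 1)                       # ... so detach the consequent

# paradoxical conclusion: the 1.5 m person at the end counts as tall
print(500 in tall, heights_mm[500])           # True 1500
```

Nothing in the code is vague, of course; the point is only that the paradox needs no general inductive premise, just finitely many conditionals and modus ponens.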

Burgess has challenged Fine's approach on the grounds that it does not yield a satisfactory solution to the sorites argument when it has the form of a stepwise series of conditionals instead of a general inductive premise. Burgess explicates the difficulty for Fine's, and any, supervaluational approach as follows: if the supervaluational story is applied to the stepwise soritical argument, in which there is only a finite series of conditionals, there will be a first conditional which is not supertrue. However, taking any nth conditional as the first one which is not supertrue implies that there is a sharp boundary for the vague predicate.

The upshot of running the soritical argument with the step-wise series of

conditionals instead of the general inductive premise is to show that the first-level

supervaluational story fails to solve the sorites paradox. If the strategy really worked, it

would be equally applicable to the second form of the soritical argument and not only to

the argument with the generalized inductive premise.


2.5 Fine's Expected Reply

What could Fine say about the soritical argument in this form? We can extrapolate from Fine's treatment of the inductive soritical argument that he will want to claim that the stepwise soritical argument is also unsound, while at the same time denying that any nth conditional is the first one which is not supertrue. In short, Fine will want to appeal to the vague meta-language. He will probably think that reapplication of the strategy employed for the first form of the soritical argument will help with the stepwise form as well, for the reapplication of the strategy is thought capable of avoiding the selection of any nth conditional as the first one which is not supertrue.

Now, the reason why one would think that the reapplication of the strategy would help with the stepwise form of the soritical argument is that one might think, following the supervaluationists, that the approach in question only seems, on the face of it, to impose a sharp boundary between the two semantic categories, supertrue and superfalse. The worry that the supervaluational approach sets precise boundaries neglects the fact that the notion of admissible specification is vague. Generating sharp boundaries would mean that the notion of admissible specification is precise, which is clearly not the case in Fine's story. The first-level story that supervaluationism offers seems to be committed to a sharp boundary between the supertrue and the superfalse only because it is an approximation. As an approximation it does seem to set sharp boundaries, but these are at the same time avoided, since we do not stop applying the strategy. If we do not stop reapplying the strategy, we are safe from sharp boundaries.


2.6 Two Problems for Fine

An immediate worry that arises with the commitment not to stop applying the supervaluational strategy is that there is a tension between this commitment and the fact that there is only a finite number of conditionals in the series. So, it seems that the reapplication of the strategy must stop somewhere, since there are only so many conditionals, and only so many things in the soritical series. Now, given that there is only a finite number of conditionals, the question is how Fine can both maintain the view that there are no sharp boundaries and avoid picking some nth conditional as the first one in the soritical series which is not supertrue. For, by reapplying the strategy, fewer and fewer conditionals are going to meet the criterion of being supertrue at each successive level. The nature of admissible sharpening is such that not all the cases that were counted in at some level must be counted in at every further level of approximation. So, superpositive cases can lose their status as we go up in the hierarchy. But since the sorites series is finite, the iteration of the strategy must give out at some finite stage. If it does not, there is a worry that nothing is going to be counted as supertrue, for the reapplication of the strategy at every higher level is going to remove more and more cases that were originally counted in.

Burgess pushes this critical point against the supervaluational higher-order vagueness strategy by emphasizing that at least the first sample in the soritical series absolutely definitely satisfies the vague predicate. This means that the vague sentence containing the predicate in question is supertrue not just to some approximation, but true on all admissible precisifications all the way up. We also accept that not all the cases are like this. There are some clear negative cases, cases that fall out all the

way up. Thus, in the series of conditionals, some of them (at least the first one) are true all the way up, and not all of them are like that. Thus, there will be a first conditional that is something other than absolutely definitely true. Also, there is nothing in Fine's, or the supervaluational, strategy in general that would make 'absolutely definitely' vague. For there is no vagueness in 'absolutely definitely true', and hence no further vagueness.

It seems that Burgess' complaint against the supervaluationist is right, and he has offered a compelling argument against the supervaluational story when we are presented with the soritical argument as a stepwise series of conditionals instead of a generalized inductive premise. It is not at all clear that Fine's approach has any resources to answer this complaint. So, Fine's attempt to handle higher-order vagueness does not look promising.

Not only has Fine not resolved the paradox, but it also seems that sharp boundaries appear after all. For consider again the supervaluationist's story about the sorites argument given in terms of the series of conditionals. Fine would want to say that some instances of the general inductive premise, that is, some conditionals, are not true. However, they are not false either; they are neither true nor false. But if higher-order vagueness is to be respected, then there cannot be sharp boundaries between the conditionals that are true, those that are neither true nor false, and those that are false. If this is so, then the range of borderline cases is going to get bigger, and each sharpening reduces the number of clear positive cases, until none is left. This is clearly a problem for Fine, for each case comes out either positive or not, and hence sharp

boundaries emerge after all. Worse yet, it looks as if nothing is going to be supertrue in this picture, for the criterion for being supertrue can never be met.

In what follows we attempt to give a careful formulation of the structure of the

reapplication strategy in order to corroborate Burgess' criticism and to secure this further

point.

Consider a series of objects,

a1, a2, ..., an, ..., am,

which are ordered according to height in such a way that the first member of the series is the tallest, and hence clearly tall, and the height of the objects decreases as we move along the series.

Then, we can define possible extension sets for 'tall',

tn = {ai : i < n}.

To represent the notion of admissibility formally, we can use the following symbolism:3

Adm1[A] iff_df A ⊆ {t1, ..., tm},

where 'Adm1[A]' says that A is an admissible first-level sharpening, that is, a set of possible extensions counting as admissible sharpenings; and the same holds for higher language levels, namely

Admk+1[A] iff_df A ⊆ {B : Admk[B]}.

Now, Fine's supervaluational truth-conditions for the vague sentence 'n is tall' commit him to the following:

There is an A1 such that (i) clearly Adm1[A1], and (ii) to a first approximation, 'n is tall' is supertrue iff ∀ti ∈ A1, n ∈ ti.


3 The definition is undoubtedly too broad, but it does not matter for our critical points in what follows.

In virtue of the reapplication strategy, however, Fine is also committed to there being at least one such set at level two, that is:

There is an A2 such that (i) clearly Adm2[A2], and (ii) to a second approximation, 'n is tall' is supertrue iff ∀A1 ∈ A2, ∀ti ∈ A1, n ∈ ti.

And so on for every level:

There is an An such that (i) clearly Admn[An], and (ii) to an nth approximation, 'n is tall' is supertrue iff ∀An-1 ∈ An, ∀An-2 ∈ An-1, ..., ∀A1 ∈ A2, ∀ti ∈ A1, n ∈ ti.

Thus, there is at least one sequence, ⟨A1, A2, ...⟩, meeting the above conditions.

Also, since each admissible sharpening An+1 is a clear case of admissibility at level n+1, it should include only clear cases of admissible sharpening at level n. So, we should have A1 ∈ A2, A2 ∈ A3, etc. This implies that negative judgments about what is to be counted in at previous levels never go positive as we go up the hierarchy. So, in the end, 'n is tall' is supertrue just in case it is positive all the way up, and false otherwise. One can imagine, however, Fine complaining that we have just redefined the notion of supertruth. Our rejoinder is that Fine must subscribe to this notion of supertruth, because it comes in the same package with his reapplication strategy, if he is to respect higher-order vagueness. Now, it looks as if all this allows the re-emergence of boundaries, and hence higher-order vagueness is not respected after all. For each integer, either that integer is counted as positive all the way up or it is not, and there is no vagueness about this, and nothing in the supervaluational account suggests otherwise. Moreover, if n goes all the way up, then all m, m < n, do as well. So, each case either goes all the way up or fails to go all the way up. Thus, there is a greatest n that does go all the way up: a1, ..., an are supertruly tall, but an+1 is not. Clearly, sharp boundaries emerge after all.

The further point that the formal structure of the reapplication strategy reveals is that the emergence of sharp boundaries is not the worst result we get by reapplying the supervaluational strategy. What looks even worse on this account is that all the further approximations seem able to do is take out a few more n's which were positive at lower levels. Unless higher-order vagueness is to give out at some finite level, we get:

If ⟨A1, ..., Aj⟩ counts a1, ..., an as supertruly tall, then there is a k > j such that ⟨A1, ..., Ak⟩ does not count an as supertruly tall.

That is, if for some n, an+1 is not supertruly tall, that is, if an+1 does not go all the way up, then if an is not to mark a sharp boundary, an must not be supertruly tall either. But if this is so, then all am, such that m < n, must not be supertruly tall. If this is correct, then, assuming that our sorites series has only a countable number of items, the full sequence ⟨A1, A2, ...⟩ must count no one as supertruly tall; that is, there will be no positive cases. Thus, given that the account is correct, nothing is going to be counted as supertrue, for nothing can meet the condition for being supertrue. A parallel argument can be constructed following this chain of reasoning, with the result that nothing is going to be superfalse either.
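The structure just described can be put into a toy model (the model is our construction for illustration; Fine offers no such model): at each new approximation level some previously excluded, less inclusive extension of 'tall' is admitted, and being supertrue 'all the way up' requires membership in every extension admitted at any level, so the supertrue cases can only shrink.

```python
# Candidate extensions of 'tall' are initial segments {a_1, ..., a_k}
# of the series, written here as sets of indices.  Each level of
# approximation admits a smaller extension that was excluded before.
levels = [
    [{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 5, 6, 7}],  # level 1
    [{1, 2, 3, 4}],                                                 # level 2 admits t_4
    [{1, 2, 3}],                                                    # level 3 admits t_3
]

admitted, supertrue_by_level = [], []
for new_extensions in levels:
    admitted.extend(new_extensions)
    # supertrue so far: member of every extension admitted at any level
    supertrue_by_level.append(sorted(set.intersection(*admitted)))

print(supertrue_by_level)   # [[1, 2, 3, 4, 5], [1, 2, 3, 4], [1, 2, 3]]
```

If the trimming never gives out, the intersection is eventually empty: nothing is supertrue, which is exactly the difficulty pressed above.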

Conclusion. In light of the foregoing discussion, we can conclude that Fine's treatment of higher-order vagueness is not satisfactory. The supervaluational strategy cannot resolve the problem of higher-order vagueness. Moreover, the basic first-level supervaluational story does not work, and it leaves us short of a solution to the problem of vagueness. It turns out that the problem of vagueness, namely higher-order vagueness, is an insuperable difficulty for supervaluationism as presented by Fine. If the foregoing discussion is correct, we have learned that sharp boundaries emerge after all. Also,

another unresolved difficulty for Fine is that it looks as if, on this account, nothing is going to be supertrue. Now, before we turn to Burgess' positive story about higher-order vagueness, we take a brief look at another strategy based on the paradigmatic conception of vagueness, one that fails to give a satisfactory treatment of higher-order vagueness for reasons similar to those on which Fine's strategy fails. We turn to the degree theory.


CHAPTER 3
THE DEGREE THEORY AND HIGHER-ORDER VAGUENESS

Overview. In what follows we focus on the degree theory of vagueness, which approaches the phenomenon of vagueness by introducing a continuum-valued semantics. The degree theory also accepts the paradigmatic conception of vagueness insofar as it treats vague terms as characteristically giving rise to borderline cases. We discuss it here not because one might hope to find something illuminating in the degree theory itself, but only to show that this strategy also fails to reconcile the paradigmatic conception of vagueness with the problem of higher-order vagueness. After a brief description of the basic idea of the degree theory and its continuum-valued approach (Section 3.1), we turn to a criticism that establishes this (Section 3.2).

3.1 The Basic Idea of a Degree Theory

The basic idea of the degree theory is to give a continuum-valued semantics for vague predicates. The argument for the degree theory goes roughly as follows. Consider a vague predicate, 'heap'. A thing can be more or less of a heap. So, we can naturally think of heapness as coming in degrees. Consequently, the truth of the sentence 'x is a heap' comes in degrees too. The degrees of truth that a sentence could have are represented by the closed interval of real numbers [0, 1]. The sentence 'x is a heap' can take an uncountable infinity of values, corresponding to the uncountable infinity of numbers in this interval. This is supposed to ensure that the boundary between the positive cases and the negative cases of the application of the predicate is defused. Admittedly, both x and y can be heaps, yet x can be more of a heap than y, depending on where each falls on the scale. This in turn means that if y is a heap to the degree 0.412, then neither 'y is a heap' nor 'y is not a heap' is simply true: 'y is a heap' is rather true to the degree 0.412, while 'y is not a heap' is true to the degree 0.588. Sharp boundaries between

the two semantic categories, true and false, have been avoided, since there is an uncountable infinity of numbers between 0 and 1, corresponding to the degrees of heapness that an object could exhibit.

Consider again the sentence 'x is a heap'. If the object in question exhibits heapness to the degree 0.412, then the sentence 'x is a heap' is true to the degree 0.412. The appeal to the interval of numbers between 0 and 1 is motivated by an attempt to avoid the arbitrariness of a semantics given in finitely many values. Introducing the continuum of values is supposed to avoid any choice of a particular segment of the series as the exact place where a non-heap converts into a heap, in a series of objects continuously transforming from a non-heap to a heap. The degree theory is thus motivated by an attempt to keep the boundaries unsharp and yet to avoid arbitrariness. But sharp boundaries and/or arbitrariness seem to come with the meta-language.
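For concreteness, here is a minimal sketch of such a semantics (the connective clauses are the standard Lukasiewicz ones, and the degree function for 'heap' is a toy of ours; the degree theorist is committed only to the continuum of values, not to these particular choices):

```python
def v_not(p):
    """Degree of a negation: 1 minus the degree of the negated sentence."""
    return 1.0 - p

def v_if(p, q):
    """Lukasiewicz conditional: fully true when the consequent's degree
    is at least the antecedent's, slightly less than true otherwise."""
    return min(1.0, 1.0 - p + q)

def heap_degree(grains, lo=10, hi=10_000):
    """Toy degree to which 'x is a heap' is true, by grain count."""
    return min(1.0, max(0.0, (grains - lo) / (hi - lo)))

p = 0.412                              # 'y is a heap' is true to degree 0.412
print(round(v_not(p), 3))              # 0.588: degree of 'y is not a heap'
print(round(v_if(0.9, 0.899), 3))      # 0.999: a sorites conditional, nearly true
```

On this picture each sorites conditional is almost, but not quite, fully true, which is how the degree theorist diagnoses the argument's spurious plausibility.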

3.2 Meta-Language, Vague or Precise?

Although there is no sharp boundary within the interval between 0 and 1, there is still a sharp boundary between being true to degree 0 and being true to some positive degree. This conflicts with the intuition that vague predicates are at least second-order vague. Thus, it looks like the degree theory has accommodated only one part of the intuitive story about vague predicates, namely the intuition that they are first-order vague, but has not accommodated the phenomenon of higher-order vagueness.


Consider again the sentence ''x is a heap' is true to the degree 0.412'. Now, one might ask what the truth-value of this sentence is, that is, whether it is true or false. This question corresponds to the general question whether the meta-language of the degree theory is vague or precise, that is, whether the complex predicate 'is true to the degree 0.412' is vague or precise. Since a simple denial of higher-order vagueness, and an appeal to a precise meta-language, is not an available option, it looks like the degree theory should apply to the meta-language too.

If a vague language requires a continuum-valued semantics, that should apply in particular to a vague meta-language. The vague meta-language will in turn have a vague meta-meta-language, with a continuum-valued semantics, and so on all the way up the hierarchy of meta-languages.1

We have already shown in Chapter 2 the principal difficulty with the strategy of progressing up the hierarchy of meta-languages. A degree theorist who would like to say that the meta-language is vague, and that it itself requires a continuum-valued semantics, is no better off than Fine with respect to the problem of higher-order vagueness.

One might suggest not taking the numbers too seriously, but treating them just as a useful approximation for modeling vague predicates. One might very well grant the usefulness of the approximation, but the question is then whether we have been told when 'x is a heap' is true at all. It seems clearly not. Also, if the proposed theory is just a useful modeling of vague predicates, then there are competing modelings that are far superior to this one in terms of consistency with some independently plausible

1 This style of criticism has been offered in (Williamson 1994, p. 128).


principles, such as the principles of classical logic, for example. So, even in the game of usefulness, the degree theory loses.

Conclusion. We are not surprised that the foregoing discussion, if correct, yields the conclusion that there are no resources in the degree theory that could give a satisfactory treatment of higher-order vagueness. The reason one might have hoped to find in the degree theory a promising way to go regarding the problem of higher-order vagueness is, as Williamson suggests, that one is misled by the view that the infinity of numbers defuses the sharp boundaries between the two semantic categories, true and false. However, the strategy of continuum-values suffers from the same defect as Fine's reapplication strategy, and criticisms analogous to those that apply to Fine's strategy can be extended to a continuum-valued strategy.

The difficulties of the two discussed strategies, which have in common an accommodation of higher-order vagueness that runs all the way up the hierarchy of borderline cases, lead us to a different treatment, one that attempts to deny that the vagueness runs all the way up. We turn to Burgess' attempt to deny infinite higher-order vagueness.


CHAPTER 4
BURGESS' ANALYSIS OF THE SECONDARY-QUALITY PREDICATES

Overview. In the foregoing discussion we have seen how the problem of higher-order vagueness presents an insuperable difficulty both for Fine's supervaluational strategy and for a continuum-valued strategy. Now we turn to another project, namely Burgess' (1990) treatment of secondary-quality predicates, for which Burgess aims to provide an analysis showing that they are only limitedly vague, and that their higher-order vagueness gives out at a fairly low finite stage. Respecting higher-order vagueness turned out to be problematic (for theorists like Fine) only because it was assumed that higher-order vagueness has no upper bound. So, success in the boundary-specifying project would have the effect of resolving an outstanding problem for various proposals, such as the ones we have already presented.

In the present chapter we present Burgess' project and the proposed schema for the analysis of secondary-quality predicates (Section 4.1). In subsequent sections we describe two problems for Burgess' analysis of secondary-quality predicates. Section 4.2 introduces the circularity problem for Burgess' schema, and Section 4.3 introduces the problem of an unacknowledged source of vagueness in the schema. If we are right, the problems that we specify show that Burgess' analysis falls short of its goal; the analysis fails to support his central thesis that higher-order vagueness terminates at a low finite level.

4.1 Burgess' Project

Burgess' central thesis about higher-order vagueness is the claim that it terminates at a low finite order. This means that it is possible to spell out truth-conditions for a vague predicate that specify a boundary for that predicate. The central project of Burgess' essay is to provide a nonarbitrary, nonidealized, boundary-specifying analysis of secondary-quality predicates that proves this central thesis. Burgess proposes the following schema for the analysis of a secondary-quality predicate, F:

(A*) x ∈ ExtL,t(F) iff
For most u (u is normal at t & u is competent at t: ∀C(C is F-suitable for x at t → (u observes x in C at t □→ x seems F to u at t))). (p. 438)

The proposed schema is supposed to fix the extension of the vague secondary-quality predicate. 'x ∈ ExtL,t(F)' says that x is a member of the extension of the predicate 'F', and '(u observes x in C at t □→ x seems F to u at t)' is a counterfactual conditional, which is true just in case the consequent is true in all the closest worlds in which the antecedent is true.
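To fix ideas, (A*) can be given a crude operational rendering (ours, for illustration only: it collapses the counterfactual '□→' into an ordinary function and reads 'most' as a bare majority, thereby flattening exactly the modal and vague elements whose analysis is at issue):

```python
from dataclasses import dataclass

@dataclass
class Subject:
    normal: bool
    competent: bool

def in_extension(x, subjects, suitable_conditions, seems_F):
    """x is in Ext(F) iff most normal, competent subjects would be
    appeared-F by x when observing it in every F-suitable condition."""
    qualified = [u for u in subjects if u.normal and u.competent]
    agreeing = [u for u in qualified
                if all(seems_F(u, x, C) for C in suitable_conditions)]
    return len(agreeing) > len(qualified) / 2    # 'most' read as a majority

observers = [Subject(True, True), Subject(True, True), Subject(False, True)]
print(in_extension("patch", observers, ["daylight"],
                   lambda u, x, C: True))        # True: both qualified observers agree
```

Even this flattened version makes vivid where vagueness can enter: in 'normal', 'competent', 'most', the counterfactual, and 'F-suitable'.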

What Burgess needs to establish about the proposed analytic schema is that a boundary-specifying analysis can be given for every element of the schema that is a possible source of vagueness. These elements are, according to Burgess, the following expressions: 'u is normal at t', 'u is competent at t', 'most', the counterfactual construction, and 'F-suitable'. Succeeding in this project would enable Burgess to calculate the order of vagueness that secondary-quality predicates exhibit, and to show that these predicates are bounded and where those boundaries lie.

In order to achieve a boundary-specifying analysis of secondary-quality predicates, Burgess must not only establish that a boundary-specifying analysis can be given for all the constituents of (A*); the proposed schema must also not be viciously circular. That means that the constituents of the analysans in (A*) must not explicitly or implicitly appeal to the notion for which we want to give an analysis--in this case, the secondary-quality predicate in question. So, the main purpose of the analysis is to break down, and bring to light in limitedly vague terms, what it is for x to be red, for example.

4.2 The Circularity Problem in the Proposed Schema

Now, it seems immediately obvious that the proposed analysis will be circular once we try to spell out the notion of suitable conditions that figures in the analysans of (A*). Burgess acknowledges that (A*) suffers from a kind of circularity, but he thinks that this circularity is not vicious, and hence that it is not a problematic feature of the proposed analysis of vague predicates but is, in fact, essential to it. The circularity Burgess acknowledges comes from the analysis of the notion of F-suitability, and Burgess argues that this circularity is crucial if the notion of F-suitability is to perform the function required of it, namely, tracking F-ness closely. Since the analysis does not purport to be a reductive analysis, he claims, this much circularity is not a problem.

The analysis of the notion of F-suitability goes as follows:

(C*) Conditions C are F-suitable for x at t iff
For most u (u is normal and competent at t: (u observes x in C at t □→ (x seems F to u at t ↔ x ∈ ExtL,t(F)))). (p. 453)

The charge of circularity seems to be fully appropriate, however. Burgess uses the notion of F-suitability to analyze an object's being F, and he appeals to the notion of

being F in the characterization of F-suitability. So, when we perform the substitution in (A*) according to the proposed analysis of F-suitability, we get:

(A**) x ∈ ExtL,t(F) iff
For most u (u is normal at t & u is competent at t: ∀C(For most u (u is normal and competent at t: (u observes x in C at t □→ (x seems F to u at t ↔ x ∈ ExtL,t(F)))) → (u observes x in C at t □→ x seems F to u at t))).

Clearly, we have 'x ∈ ExtL,t(F)', which is only a different way of saying 'x is F', both in the analysandum and in the analysans. This is a problem for Burgess' project, since he not only promises to give an analysis of a secondary-quality predicate; he also wants to break down the higher-order vagueness of secondary-quality predicates. The predicate 'is F' is the vague secondary-quality predicate for which we want not only an analysis, but an analysis in terms shown to be limitedly vague, if we are to calculate its order of vagueness. However, given Burgess' analysis we cannot do that. For if 'is F' appears in the analysans, then we need to give an analysis of it, for it needs to be shown to be a limitedly vague secondary-quality predicate. That is, we need to carry the analysis further for 'is F'. But the analysis of 'is F' is supposed to be given by (A*), so we have (A*) figuring in the analysans for the predicate 'is F'. That is, part of the analysis of (A*) is (A*) itself. This is far from being a benign circularity, for the original motivation for giving an analysis of secondary-quality predicates was not to give an analysis for its own sake; the idea was to specify how to calculate the order of vagueness by showing that the predicate in question is analyzable in limitedly vague terms.
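The vicious character of the circularity can be dramatized by rendering the substituted schema naively as a definition (a deliberately crude illustration of ours): since 'is F' recurs in its own analysans, evaluation never bottoms out.

```python
def is_F(x):
    # analysandum: 'x is F' is analyzed via F-suitability ...
    return suitable_and_seems_F(x)

def suitable_and_seems_F(x):
    # ... but F-suitability is itself spelled out by appeal to 'x is F'
    return is_F(x)

try:
    is_F("patch")
except RecursionError:
    # the "analysis" provides no non-circular ground for application
    print("evaluation never terminates")
```

A benign, non-reductive circle would at least have to terminate in independently limitedly vague materials; this one does not, which is the point of the complaint.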

To show that this circularity is not vicious in character but a welcome feature of the offered analysis, Burgess defends only the status of the proposed schema, (A*), as a non-reductive analysis. But it looks as if he has forgotten what the analysis is supposed

to do--he has forgotten that the point of giving the analysis is to show that secondary-quality predicates are limitedly vague, and merely establishing that this is an analysis of some kind is irrelevant to the announced goal of the project.

The result is that 'is F' in the analysans of (A*) has not been shown to be limitedly vague. This makes Burgess' project of a boundary-specifying analysis of secondary-quality predicates fail, for it does not enable us to calculate the order of vagueness of the predicates in question. This is what makes the circularity in Burgess' analysis vicious rather than benign.

4.3 The Problem of the Unacknowledged Source of Vagueness in the Proposed Schema

Another difficulty with (A*) as a boundary-specifying analysis of secondary-quality predicates is the use of 'seems F' in the analysans of (A*), which Burgess does not acknowledge as a relevant source of vagueness, although it appears to be vague in precisely the same way as 'is F'. Since Burgess aims for a boundary-specifying analysis of secondary-quality predicates, all the constituents of the analysans of (A*) must be shown to be at most limitedly vague. Yet he fails to identify 'seems F' as a possible source of vagueness. The question, then, is why. One can only suppose that Burgess thinks that 'seems F' is clearly not vague. However, this is a mistake, and a consequence of a not uncommon confusion regarding this expression.




1 Here one might be worried that 'seems F' involves a family of notions and that our criticism hinges on how the notion of 'seems F' is spelled out. So, one might say that we use 'seems F' as a phenomenal notion as opposed to an epistemic notion, for example. However, even the epistemic notion, spelled out as: x seems F to one just in case one believes, or is inclined to believe, that x is F, falls under our criticism for the same reason for which the phenomenal notion is said to be vague. That is, the relevant vague term, namely 'is F', is used in both of them.

In order to illustrate this confusion, take for example different patches that are all some shade or other of red. Now imagine an observer who is presented with these different patches. Each patch looks different to the observer, so she can discriminate between them. No doubt there is a way that each of the patches appears to the observer color-wise, and each of the patches looks different to her color-wise. Not only can our observer discriminate between the patches, she can even invent a particular color term for each way that a patch looks to her color-wise. Suppose she introduces the term 'ROO7' in this way. Now, while the observer cannot be mistaken that the patch looks to her color-wise however it does, she can certainly be mistaken about whether it looks ROO7 to her. If we ask the observer whether a particular shade we are presenting her with is ROO7, what the observer is being asked to do is to categorize, although the category 'ROO7' is a very peculiar one. She can be mistaken about this if, for example, the light is not normal: presented with a patch which, in fact, is not ROO7, she can judge that it is, because of the light. However, she cannot be mistaken that the patch looks to her the way that it does color-wise.

Now consider the question whether each patch she is presented with seems red to her. In the clear cases of red, the observer has no trouble answering our question whether the patch seems red to her. In borderline cases, however, our observer is uncertain what to answer and hesitates. In contrast to the case in which the observer does not hesitate, and cannot be wrong, about the patch's looking to her that way color-wise, in this case, when asked whether the way the patch seems to her color-wise should be called 'red', she can be at a loss what to answer. Thus, although there is a particular way that the patch looks to the observer color-wise, when it comes down to the question


whether it seems red, then she can be at a loss what to say. For, what she has been asked

to do is to categorize. Both 'seems F' and 'is F' work as categorical terms and the only

difference between them is with respect to what one has been asked to categorize. In both

cases there is a color categorization, using the category 'red', and the difference is that

in the case of 'is F' one is asked to categorize patches, and in the case of 'seems F'

one is asked to categorize not patches, but visual impressions. Clearly, since the relevant

category in the case of 'seems red' is 'red', and it is vague, 'seems red' inherits

vagueness from its sortal, and hence what is offered as an analysis of a secondary quality

term hasn't been shown to be less vague than what we aim to analyze.

So, Burgess' failing to acknowledge 'seems F' as a relevant source of vagueness

may be a consequence of this common confusion about 'seems F': confusing 'seeming

F' with 'seeming that way color-wise'. He apparently takes 'seems F' as if it does not

involve any categorization, which is a mistake.

Conclusion. If the charges of vicious circularity and vagueness in the analysans

are correct, then we can conclude that Burgess' analysis falls short of its goal: it fails to

support his central thesis; that is, the difficulties in his analysis of secondary quality

predicates prevent it from being shown to be a boundary-specifying analysis that uses

limitedly vague terms in the analysans, which is, of course, necessary if we are to

calculate the order of vagueness of a secondary quality predicate.

In the foregoing discussion, I specified two unresolved problems for the analysis that

Burgess offers; namely, the problem of circularity of the schema (A*), and the problem

of vagueness of 'seems F' in the analysans of (A*). In virtue of these complaints we

cannot but conclude that, as it stands, the analysis fails to prove Burgess' central thesis










that higher-order vagueness terminates at a low finite level. One might wonder, however,

whether those problems are fixable by removing the sources of vagueness or circularity

from the analysans of (A*). Our answer to this question is 'No'; and not only that, but it

would also be Burgess' answer in the light of his commitments.

The only reason why Burgess thinks that the circularity is not a problem, and that

the offered analysis is minimally plausible is that he hopes that 'is F' can be cashed out in

terms of 'seems F'. That looks like a good way to go, however, only because of the confusion

about 'seems F' and its categorical role to which we have already pointed. Thus, the

answer to the question whether the analysis is hopelessly circular is, even by

Burgess' own lights, 'Yes'. For if the hope of cashing out 'is F' in terms of 'seems F' breaks down, then

we have hopeless circularity, given that 'seems F' is essential to the analysis of 'is F'.

According to Burgess, 'seems F' is indispensable in the analysis of 'is F', and hence the

analysis is unamendable, and not for accidental reasons but for principled reasons.

Thus, in virtue of Burgess' commitments, these problems are not fixable. And certainly

there is a commitment to some vagueness in the analysis he proposes, and that

was expected, given that we must use some language to talk about what is the object of

analysis. Now, that language must be vague. However, there are two options: the

language can be either limitedly vague or non-limitedly vague. But given that Burgess

commits himself to the use of 'seems F' in the analysans of the proposed schema, which

is dangerously close to 'is F' and essentially dependent on 'is F', we still have not

been shown that higher-order vagueness terminates and consequently do not have the

promised recipe for how to calculate the order of vagueness for secondary-quality predicates.










Accordingly, since Burgess' analysis fails even for the secondary quality

predicates, which are presumably the simplest case, we have a good reason to think

that Burgess did not show that the vagueness of the other vague predicates terminates at a

low finite order.

So far, we have seen that different attempts to give a satisfactory treatment of

higher-order vagueness have failed. Besides the fact that they all share the paradigmatic

conception of vagueness, these theories share the dialectical situation with respect to the

challenge to deal with the problem of higher-order vagueness. Namely, they all aim to

deal with the problem within the paradigmatic conception, without challenging the basic

presupposition of that conception. We have seen that they are

unable to deal with this problem, which motivates us to turn to an approach that has a

different dialectical position; namely, it aims to give a reason why one should hold on to

the paradigmatic conception of vagueness while not worrying about the

problem of higher-order vagueness.















CHAPTER 5
HYDE'S RESPONSE TO THE PROBLEM OF HIGHER-ORDER VAGUENESS

Overview. Hyde's (1994)1 approach to higher-order vagueness differs from the

approaches that we have been discussing, or will discuss, in that his project is meta-

theoretical. Since the paradigmatic conception of vagueness is not a single view about

vagueness, but rather a generic name for different theories that have in common that they

characterize the phenomenon of vagueness by the presence of borderline cases, it looks as

if Hyde's approach is neutral between different theories inside the paradigmatic

conception of vagueness. If this is correct, then even if Hyde's argument is successful, it still

does not provide sufficient grounds for deciding between different theories inside the

paradigmatic conception; that is, it would not give us a criterion to decide whether

epistemicism is true, or supervaluationism, for example.

According to Dominic Hyde, higher-order vagueness is a real phenomenon, but he

argues that it does not present a problem for the paradigmatic conception of vagueness.

By showing that the problem of higher-order vagueness is a pseudo-problem he aims to

save the paradigmatic conception of vagueness as an adequate conception against the

charge that what he calls 'the iterative conception of vagueness' is inescapable on the

paradigmatic approach, and also misguided. So, the way Hyde's treatment is supposed to

help a theorist such as Fine consists in saying that the challenge one might put

forward to Fine, that he must reapply his supervaluational strategy, is out of place, and the



1 For all references to Hyde in the thesis see (Hyde 1994).










theorist such as Fine should do nothing to respond to this challenge since it neglects that

higher-order vagueness is already present in the language that the theorist has used in

order to give a theory of vagueness. Since this language is vague, rather than precise,

higher-order vagueness is already respected and nothing needs to be done to

accommodate it.

The crucial point that Hyde makes is that 'vague' is vague; that is, 'vague' is a

homological term. That claim is supposed to lead to the conclusion that 'has borderline

cases' is vague, and that consequently borderline cases have borderline cases. This

feature of borderline cases does not need, however, to be explicitly stated in the analysis

of vague predicates and the paradigmatic conception does not need to end up in what

Hyde calls the iterative conception of vagueness. This is the main theme of Section 5.1.

In establishing the conclusion of the argument, Hyde relies on Sorensen's (1985)2

argument for the vagueness of 'vague'.

In Sections 5.2 and 5.3 we will sketch Hyde's and Sorensen's

arguments, respectively. Then, we will specify two worries that Hyde's argument raises,

and a related worry about Hyde's general argumentative strategy.

The first worry is related to the question whether Sorensen's argument is sound.

Our answer to this question is 'no'. Hyde's argument exploits Sorensen's argument, which

does not seem to be good since it relies on bad reasoning.

The second worry is related to the circularity of the proposed argument and we deal

with it in Section 5.4. Hyde anticipates this worry, and has a ready answer to it. We aim

to show that the type of response he offers is not a good one.


2 For all references to Sorensen in the thesis see (Sorensen 1985).









In Section 5.5 we raise a question about the general argumentative strategy. Hyde

makes a strategic mistake in overlooking the asymmetry in the presuppositions that are

available to those who work within the paradigmatic conception, and the presuppositions that

are available to one who is defending the paradigmatic conception of vagueness. By

making his argument essentially dependent on the presuppositions of the paradigmatic

conception, Hyde simply seems to presuppose that the conception that he aims to defend

is correct.

5.1 Paradigmatic vs. Iterative Conception of Vagueness
and the Problem of Higher-Order Vagueness

The paradigmatic conception of vagueness is the view that vagueness of a predicate

can properly be characterized by the presence of borderline cases of the applicability of

the vague predicate in question. The difficulty for views that characterize the

phenomenon of vagueness by the presence of borderline cases is that these views cannot

distinguish vague predicates from merely partially defined predicates. In order to make

such a distinction, the paradigmatic view shifts its talk about vagueness from an appeal

to the presence of borderline cases to talk about a hierarchy of borderline

cases. This shift in talk about vagueness is supposed to be a recognition that not only do

vague predicates not draw sharp boundaries, but that vague predicates fail to draw any

boundaries within their range of significance. This is the leading intuition that

underwrites the iterative conception of vagueness--namely, if there are borderline cases

of the first order, then there are borderline cases of the second order, and so on indefinitely.

This amounts to saying that if the predicate suffers from vagueness of the first order, then

it suffers from vagueness of every order, and this feature of vague predicates, namely the










feature of being higher-order vague, distinguishes them from merely partially defined

predicates.

It looks then as if the paradigmatic conception of vagueness inevitably ends up with

the iterative conception of vagueness; that is, the phenomenon of higher-order vagueness

needs to be accommodated if the conception claims to be the correct one. We have seen

earlier, in the discussion of the supervaluationist view of vagueness, that one might be

worried that the iterative conception of vagueness might be inadequate since it either

does not respect higher-order vagueness after all (if the strategy is applied just finitely

many times, the limited iterative conception), or it is incapable of specifying the

application conditions for vague predicates, and consequently the truth-conditions for

vague sentences. Although Hyde does not mention these worries explicitly, it is plausible

to think that these difficulties motivate his project.

At the outset, Hyde points out that any criticism of any conception of vagueness

centered on the question whether the conception of vagueness accommodates higher-

order vagueness presupposes that higher-order vagueness is not only a real

phenomenon, but also that it presents a problem for the paradigmatic conception of

vagueness. Hyde, however, gives an argument that is supposed to demonstrate that there

is higher-order vagueness, and he gives an account of why it need not worry anyone who

accepts the paradigmatic conception of vagueness.

5.2 Hyde's Argument

Hyde's central thesis is that the phenomenon of higher-order vagueness is real

enough, and the paradigmatic conception of vagueness captures it, but without collapsing

into the iterative conception of vagueness. That is, the phenomenon of higher-order










vagueness is not a problem for the paradigmatic conception of vagueness. The insistence

on the iterative conception of vagueness is just the consequence of ignoring the

ambiguity of 'borderline case'. Once we realize this ambiguity, and appreciate it, it

becomes clear, Hyde argues, that the paradigmatic conception of vagueness does not need

to end up committed to the iterative conception of vagueness. This allows one to avoid

the difficulties of the iterative conception that we mentioned earlier.

The core of Hyde's argument is the premise that 'vague' is vague, which allows him

to claim that the paradigmatic conception of vagueness need not be modified and is not

challenged by attempting to accommodate the presence of higher-order vagueness. That

is, different theorists need not worry that their semantic theory imposes sharp

boundaries and that they need to do some extra work in order to respect higher-order

vagueness. These theorists should do nothing; since 'vague' is vague, according to Hyde,

and it is part of the meta-language in these theories, higher-order vagueness is respected.

So, higher-order vagueness is already present in the characterization of vagueness by

appealing to the presence of borderline cases, for borderline cases have themselves

borderline cases, but this need not be explicitly stated in the analysis of vague

predicates. Hyde's argument can be reconstructed as follows:

1. 'Vague' is vague--it is a homological term. (by Sorensen)

2. Since 'vague' is vague, it cannot be defined in purely precise terms. (1)

3. The vagueness of a predicate is properly characterized by the presence of
borderline cases. (Suppressed premise)

4. Therefore, 'has borderline cases' is vague. (2, 3)

5. And hence, borderline cases have borderline cases. (3, 4)










Hyde's argument is, no doubt, valid, but the question is whether it is sound. By

examining the premises of the argument one might worry whether Hyde's argument

is successful. The sources of worry can be identified as premise (1) and

premise (3) of his argument. Premise (1) is the conclusion of Sorensen's argument for the

vagueness of 'vague'. Since Hyde's argument depends on the conclusion of Sorensen's

argument, if Sorensen's argument were sound, Hyde's argument would be sound too, if there

were no other candidates that could undermine its soundness, such as premise (3), for

example. However, Sorensen's argument is unsound, as we will explain later on,

although the conclusion might very well be true. For even if the conclusion turns out to

be true, the reasons he gives to support it are flawed. The second candidate

for suspicion in Hyde's argument, as we pointed out, is premise (3). If we are right,

(3) commits Hyde to something that makes his argument question-begging. The two

mentioned worries are not disconnected. For by adopting the conclusion of Sorensen's

argument, Hyde subscribes to the presuppositions that Sorensen makes, which are not

available to Hyde on pain of begging the question. These presuppositions, although

benign for Sorensen's project, might be fatal for Hyde's project; for these two projects are

different in character. As we have already pointed out, Hyde's project is meta-theoretical

(i.e., it is supposed to be a defense of theories such as Sorensen's), while Sorensen's

project aims to give an account of the problem of vagueness, and is not about theories of

vagueness. The presupposition we particularly have in mind is (3); that is, that the

vagueness of a predicate is properly characterizable by the presence of borderline cases,

which needs to be established on independent grounds.










If these charges are correct, then they would, clearly, undermine Hyde's attempt to

establish the thesis that borderline cases have borderline cases, a claim that is crucial to

the establishment of his central claim. We will discuss each suspicion in turn. Thus, let us

begin with Sorensen's argument and the reasoning behind it.

5.3 Sorensen's Argument

Sorensen's argument goes as follows:

1. The vagueness of 'small' allows one to construct the following soritical argument:

(a) 0 is a small number.

(b) If n is a small number, then n+1 is a small number.

(c) One billion is a small number.

2. Numerical predicates such as 'n-small' can be used to construct a soritical
argument for the predicate 'vague',

where 'n-small' is a numerical disjunctive predicate defined as applying to only those

integers that are either small or less than n.

(a') '1-small' is vague.

(b') If 'n-small' is vague, then 'n+1-small' is vague.

(c') 'One billion-small' is vague.

3. The soritical argument is the symptom of vagueness of the predicate for which it
can be constructed.

4. Therefore, 'vague' is vague. (2, 3)
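The relation between the two sorites can be made concrete with a toy model. The sketch below is an illustration only: it deliberately replaces the genuinely vague 'small' with a sharp cutoff (the names small and n_small and the cutoff value are stand-ins of our own, not anything in Sorensen's text), which distorts the phenomenon of vagueness but makes the behavior of the disjunctive predicate 'n-small' easy to inspect.

```python
# Toy model only: a sharp cutoff stands in for the vague 'small'.
# (Any such cutoff is a distortion; real vagueness admits no threshold.)
SMALL_CUTOFF = 100  # hypothetical choice; this is where the model cheats

def small(k):
    return k < SMALL_CUTOFF

def n_small(n, k):
    # Sorensen's disjunctive predicate: k is n-small iff k is small or k < n.
    return small(k) or k < n

# For n = 1, 'n-small' classifies every integer exactly as 'small' does,
# which is why (a') says '1-small' is as vague as 'small':
assert all(n_small(1, k) == small(k) for k in range(10_000))

# For n = one billion, 'n-small' coincides with the precise 'k < n',
# which is why (c') denies that 'one billion-small' is vague:
BILLION = 10**9
for k in (0, 99, 100, BILLION - 1, BILLION, BILLION + 1):
    assert n_small(BILLION, k) == (k < BILLION)
```

On this model, any unclarity about the value of n at which 'n-small' stops being vague traces back to the stand-in cutoff for 'small'; that is just the diagnosis pressed in what follows: the sorites for 'vague' trades on the vagueness of 'small', not of 'vague'.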

Now, the question is whether Sorensen's reasoning really establishes that 'vague' is

vague. Take first a paradigmatically vague predicate, 'small'. 'Small' is typically vague,

which, on the paradigmatic conception, is to say that it is tolerant and admits borderline cases,

and hence one can construct a soritical argument as above. The soritical argument in

question exploits this feature of 'small', namely its being tolerant with respect to










incremental changes along the relevant dimension of variation. By analogy one would

expect that the soritical argument for 'vague' would exploit this feature of 'vague'. In

other words, similarly as the vagueness of 'small' implies that 'small' has borderline

cases, one would expect that the vagueness of 'vague' implies that 'vague' has borderline

cases.

However, we want to argue that Sorensen's argument is not sound. There is no

analogy between 'vague' and 'small' as Sorensen conceives it. The argument is not sound

because it relies on fallacious reasoning. Even if we agreed with the conclusion that

'vague' is vague, which could be true, we could not assent to it for the reasons that

Sorensen offers; the reason he offers is of the wrong sort.

The soritical argument for 'vague' does not depend on some feature of 'vague' that

is responsible for the paradox, but it depends on a feature of 'small'. Now, either the view

we are discussing overlooks that 'small' is responsible for the second soritical argument,

or the view is that the semantic predicate that we use to talk about the predicate in

question inherits vagueness from it. So, we can run the sorites for 'vague' not because

'vague' is vague, but because 'small' is vague. Thus, vagueness exhibited in the premise

(a') is a feature of 'vague' only if there is such a relation of inheritance between the

predicate that is mentioned and the predicate that is used to talk about the referent of the

mentioned predicate. For what the name of the predicate 'n-small' refers to is a vague

predicate, namely the predicate 'small', rather than the predicate which is used to talk

about it, namely 'is vague'. The sorites paradox for 'vague', thus, owes its existence to

the vagueness of the predicate referred to by 'n-small', namely 'small'. Hyde

explicitly refers to the inheritance principle (p. 38), IP.










(IP) If all the constituent phrases of a complex phrase are precise, then the complex
phrase is precise.

Now, the trouble is that in (a') the subject term cannot be vague, and still one

experiences the same sort of hesitance and faces the same difficulties in forming a

judgment regarding the question whether 'vague' applies to 'n-small', for there are some

values of n for which it is unclear whether 'is small' applies to them or not. He takes, for

example, '1-small'. It is just as vague as 'small', because both predicates apply to 0, and

apply in the same way to all other integers. The same holds for '2-small', '3-small', and

so on. However, there are some cases when 'vague' does not apply, such as 'one billion-

small', for the 'less than n' clause in the definition of 'n-small' takes care of this. But,

according to Sorensen, it is not clear what the value of n is when the clear cases give out.

Thus, Sorensen concludes that 'vague' must be vague, for there are no other candidates

for the source of vagueness in (a') that could be blamed for the vagueness, and hence for

the soritical argument in question.

The reasoning that Sorensen employs, and which Hyde adopts in his argument can

be roughly stated as follows:

1. As the series of numbers increases from n1 to nj it becomes more difficult to answer
the question whether 'small' applies to ni.

2. To the same degree it is difficult to answer the question whether 'vague' applies to
'ni-small'.

3. Thus, there is a series '1-small', '2-small', ..., 'nj-small' to which the application
of 'vague' is essentially doubtful.

4. Therefore, 'vague' is vague to the same degree and in the same way in which
'small' is vague.3


3 For a similar style of argument see (Ludwig & Ray 2002, p.455).










However, it is just a feature of 'small' in virtue of which what is referred to by

the name 'n-small' is vague. Consequently, it is a mistake to think that we can run the

sorites because 'vague' is vague. We can run the sorites just because 'small' is vague.

Now, one could suppose, as Hyde does, that there is an inheritance relation in question

between 'small' and 'vague'. But it is a mistake to think that the semantic predicates

inherit vagueness from the predicates that they are used to talk about. Sorensen

apparently overlooks that the paradox-generator in the second soritical argument that he

gives is 'small'. So, even if 'vague' is vague, this reasoning does not establish it. The

predicate 'is vague' belongs to a semantic category that we use to talk about something

that is vague. What the predicate 'n-small' refers to is vague, no doubt, for it applies to a

predicate such as 'either n is small or less than n'. However, what we use the semantic

category, such as 'vague', to talk about is the vagueness of 'small'. It could be said that it

is unclear whether 'vague' applies to 'n-small', for there are some values of n for

which it is unclear whether 'small' applies to n. Sorensen, however, either sees the

vagueness of 'small' as transferable to the semantic predicate that we use to talk about it

or he simply overlooks the role of 'small' in the soritical argument that is supposed to

show that 'vague' is vague. But, in any case, it is a mistake to infer from the soritical

argument for 'n-small' that 'vague' is vague too; that is, Sorensen illegitimately extends

the vagueness of 'small' to 'vague' (i.e., he transfers the vagueness of the mentioned

predicate to the predicate that is used). If we look at the two soritical arguments for

'small' and for 'vague', we can notice that in the former, the predicate 'small' is used,

whereas in the latter it is not used, but mentioned. The mistake lies in the inference from

the hesitancy over whether to assert a sentence in which a vague word is mentioned,









which arises from the vagueness of the mentioned word, to the conclusion that another

word in the sentence that is used to talk about the mentioned word is vague too.

If 'small' does not transfer its vagueness to 'vague', then we can say that

Sorensen's argument fails to establish his conclusion that 'vague' is vague, for it is not

due to the vagueness of 'vague' that the sorites runs as stated above.

Although Sorensen does not explicitly subscribe to the inheritance of vagueness

between the semantic terms, such as 'vague', for instance, and the terms that they are

used to talk about, Hyde explicitly appeals to this principle, which is just a mistake, if the

foregoing reasoning is right.

5.4 The Circularity Problem in Hyde's Argument

Another worry about Hyde's argument is that not only does Sorensen's argument

fail to give a good ground for Hyde's premise (1), but also that, by adopting

Sorensen's argument and its conclusion, Hyde adopts together with it some

presuppositions that Sorensen makes, which he cannot adopt on pain of

circularity in his argument. Hyde's argument essentially depends on the assumption that

the predicate 'small' has borderline cases, or more generally that the vagueness of

predicates is properly characterizable in terms of borderline cases. But that is precisely

what is at issue here, and what Hyde's project is supposed to show. If we recall that the

goal of Hyde's argument is to show that the characterization of vague predicates in terms

of borderline cases needs no revision in order to accommodate higher-order vagueness,

this cannot be done under the assumption that 'has borderline cases' is vague, for it is to

assume that the characterization one wants to defend is correct, which is clearly question-

begging. To see this worry clearly it is enough to see that (3) commits Hyde to (3*),










(3*) If X is vague, then X has borderline cases, and if X has borderline cases, then
it is vague.

In order for (3*) to be true, 'has borderline cases' must be assumed to be vague, for

if it is not, then the second conjunct of (3*) is false and the whole conjunction is false.

Take for example a predicate 'child*', and say that it applies to individuals between 1

and 12 years of age, fails to apply to those who are 17 and up, and neither

applies nor fails to apply to those between 13 and 16 years of age. Now, a 13-year-old

individual is a borderline case of 'child*', but 'child*' is not vague, for the boundaries

between these three categories are sharp. Now, in the case of a paradigmatically vague

predicate, such as 'child' these three categories are not sharp, and hence borderline cases

have themselves borderline cases. For if a 13-year-old individual is a borderline case of

'child', so is a 12-year-old individual, for small differences cannot make a change in the

application of the concept; that is, 'child' is tolerant. Hyde explicitly commits himself to

the assumption that 'borderline case' is vague.
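The contrast between 'child*' and 'child' can be put in quasi-formal terms. The following three-valued classifier is a sketch of our own (the function name and the True/False/None convention are illustrative assumptions, not anything in Hyde's text): it models the partially defined 'child*', which has borderline cases yet no borderline borderline cases, since each age falls determinately into exactly one of the three sharply bounded categories.

```python
def child_star(age):
    """Toy model of the partially defined predicate 'child*': the predicate
    applies (True), fails to apply (False), or neither (None), and the
    boundaries between the three categories are sharp."""
    if 1 <= age <= 12:
        return True       # determinately a child*
    if age >= 17:
        return False      # determinately not a child*
    if 13 <= age <= 16:
        return None       # borderline: neither applies nor fails to apply
    raise ValueError("ages under 1 are left unspecified, as in the text")

# 'child*' has borderline cases...
assert child_star(13) is None
# ...but the borderline region itself has sharp edges, so there are no
# borderline borderline cases:
assert child_star(12) is True
assert child_star(17) is False
```

No such sharp classifier could be written for the vague 'child': if a 13-year-old is a borderline case, then by tolerance so is a 12-year-old, and every boundary drawn above would misrepresent the predicate.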

According to Hyde, there are two senses of 'borderline cases': a precise and a

vague sense, due to the ambiguity of the phrases 'indeterminate' and 'definitely'. In the

case of partially defined predicates, for example, 'indeterminate' and 'definitely' have

precise senses, and hence the line between borderline cases and positive (negative) cases

of the application of the predicate is sharp, whereas in the case of vague predicates these

phrases have vague senses, and consequently the demarcation between the borderline

cases and positive (negative) cases is not sharp. Thus, when applied to precise and

partially defined predicates, it is presupposed that these terms are used in their precise

sense, and when applied to vague terms it is presupposed that they have a vague sense. This,

according to Hyde, gives a criterion for distinguishing vague predicates from










merely partially defined predicates, without running into the trouble with higher-order

vagueness. So to speak, partially defined predicates do not have borderline cases in the

proper sense, for it seems that Hyde takes the vague sense of 'borderline case' to be the

proper one, and hence he points out that it would be useful to use some other expressions

to designate the precise sense of 'borderline case'.

Relying on the inheritance principle, Hyde comes to think that if 'small' is vague

and hence has borderline cases, then this would imply that it has borderline borderline

cases. Having borderline borderline cases is from the very beginning built into the predicate

'small', without the need to explicitly state this in the analysis of the vague predicate in

question. The trouble is that Hyde did not show us that 'small' had borderline cases to

begin with, but he just assumed this. This assumption makes his account viciously

circular, for the goal of his meta-theoretical enterprise is precisely to defend the theorists

who characterize vagueness by the presence of borderline cases. To illustrate this point

we can easily imagine theorists who deny that there are borderline cases of the

first order; would Hyde' s argument be a successful defense of the paradigmatic view of

vagueness against these theorists? The answer is clearly 'no'. For it would provide

support for the paradigmatic account simply by assuming that it is the right one.

Clearly, if Hyde supposes that 'small' has borderline cases to begin with, then in

virtue of (3) plus the inheritance principle, he is committed to suppose that 'small' has

borderline borderline cases. Thus, Hyde not only presupposes that vagueness of 'small'

entails borderline cases, but also that the vagueness of 'small' entails borderline borderline cases.

So, the theorists who have a problem with how to accommodate higher-order

vagueness should do nothing only if we suppose that vagueness is correctly










characterizable in terms of borderline cases. But given that Hyde's project is to defend

the theorists who advocate this characterization of predicates' vagueness, he cannot

assume that they are simply right, which Hyde in fact does by assuming that 'borderline case'

is vague, mistakenly thinking that vagueness is transferable from the mentioned term to

the term that is used to talk about it.

Hyde anticipates the worry about the circularity and has a ready answer to it. The

charge of circularity, he says, is just a recognition of the homological aspect of 'vague'. Although

there is some circularity in his account due to the homological nature of 'vague', this type

of circularity is, according to Hyde, benign, for he does not use the word 'vague' in his

argument, but he just uses vague words. Here, he resorts to an analogy with 'meaningful'

and its homological nature. He argues that in the same way in which we characterize

'meaningful' using meaningful terms, we characterize 'vague' using vague terms.

The major disanalogy that Hyde overlooks is that in the analysis of 'meaningful',

there is no supposed inheritance relation between the terms used to talk about

'meaningful', and the term mentioned, namely 'meaningful', while in the case of 'vague'

we get the result that the semantic predicate used to talk about 'vague' is vague only if we suppose

that vagueness is transferable from the referent of the predicate that is mentioned to the

predicate that is used to talk about it.

5.5 The Problem with the Strategy

In light of the discussion above we cannot but conclude that Hyde's argument is

not successful, for two reasons. First, it relies on bad reasoning that is underwritten

by a false principle, namely, the inheritance principle. Second, it already assumes what

needs to be argued for, namely, that the paradigmatic conception of vagueness needs no










modification and does not have to end up endorsing the iterative conception of

vagueness.

To illustrate this, we can just recall for a moment Hyde's general argumentative

strategy. He wants to show that the paradigmatic conception of vagueness is correct.

Sorensen's theory fits Hyde's definition of the paradigmatic conception

of vagueness, for Sorensen presupposes that the vagueness of a predicate is properly

characterizable by the presence of borderline cases. Given that Hyde's project is meta-

theoretical, he cannot presuppose the correctness of what is the object of the defense. Hyde's argument is

still unsuccessful, for he fails to establish on independent grounds that 'small' has

borderline cases, that is, that the borderline-case characterization of vagueness is correct.

If these charges are correct, then we can conclude that Hyde's argument is

question-begging, and not just unsound, and hence the problem of higher-order

vagueness still presents a great difficulty for the paradigmatic conception of vagueness.

Conclusion. In the foregoing discussion, we learned that the type of response that

Hyde proposes as a defense of the paradigmatic conception of vagueness is not a good

one. We specified three unresolved problems: the problem of the soundness of Hyde' s

argument, the circularity problem, and the problem with the general argumentative

strategy. If we are correct, then we have just seen another example of an attempt to

defend the paradigmatic conception of vagueness which fails. A distinctive feature of this

endeavor was in an attempt to use a meta-theoretical strategy in defending the

paradigmatic conception. But, since the strategy is essentially flawed because of adoption

of some unwarranted presuppositions, and the reasoning deployed depends on some false

principles, the whole proj ect is unsuccessful.










In what follows we discuss a view that shares with the paradigmatic views of vagueness the characterization of vagueness by the presence of borderline cases, but which is also significantly different from the views that we have dealt with so far, for it does not allow the problem of higher-order vagueness to get off the ground. The response consists simply in rejecting higher-order vagueness as incoherent.















CHAPTER 6
IS HIGHER-ORDER VAGUENESS INCOHERENT?

Overview. In this Chapter we will present Crispin Wright's (1992)1 solution to the problem of higher-order vagueness. Higher-order vagueness turns out not to be a problem since, according to Wright, no one should take higher-order vagueness seriously; higher-order vagueness is incoherent. If this is correct, then one can calmly reject the tolerance principle (the major premise of the soritical argument), and avoid the charge that one does not respect higher-order vagueness by doing so. If higher-order vagueness is incoherent, then the charge is simply out of place.

The plan of the Chapter is as follows. First, in Section 6.1 we will say something about what Wright calls "the no sharp boundaries paradox", and its relation to the tolerance principle, which Wright calls "the characteristic sentence" for vague predicates, and which generates the no sharp boundaries paradox. Section 6.2 considers the 'higher-order paradox'. Further, in Section 6.3, we will give an exposition of Wright's proof for the incoherency of higher-order vagueness. Then we will turn to some criticisms of Wright's argument, in Section 6.4. Section 6.5 discusses Richard Heck's (1993)2 criticism and, Section 6.6, Dorothy Edgington's (1993)3 criticism, both of which seem to be on target concerning the problems with Wright's proof.



1 For all references to Wright in the thesis see (Wright 1992).

2 For all references to Heck in the thesis see (Heck 1993).

3 For all references to Edgington in the thesis see (Edgington 1993).









6.1 The No Sharp Boundaries Paradox

A sorites paradox is a manifestation of the vagueness of the predicates for which it can be constructed. According to the paradigmatic conception of vagueness, the sorites paradox largely depends on what is taken to be a salient feature of vague predicates, namely their being tolerant. The idea of a predicate's being tolerant is typically expressed by saying something to the effect that small changes cannot make a difference in the application of the vague predicate. So, in a series of gradually changing objects, there is no object such that a certain predicate applies to it but does not apply to its successor. To say that there is such an object in the series is to impose sharp boundaries on vague predicates, contrary to what is said to be the essential feature of vague predicates, namely that they do not have sharp boundaries. The major premise of the soritical argument is said to express this intuition, and it is what Wright calls "the characteristic sentence for vague predicates":

(i) ~(∃x)(Fx & ~Fx'),

(where x' is the immediate successor of x).
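To see concretely why (i), read as the major premise of a soritical argument, is disastrous, here is a minimal computational sketch (the series length and the single clear case are hypothetical, chosen purely for illustration): closing a series under the tolerance step 'F(x) → F(x+1)' classifies every object as F.

```python
def closure_under_tolerance(series_len, clear_cases):
    """Exhaustively apply the tolerance step 'F(x) -> F(x+1)'
    (one small change never makes a difference to F), starting
    from the given clear positive cases."""
    f = set(clear_cases)
    for x in range(series_len - 1):
        if x in f:
            f.add(x + 1)
    return f

# A series of 1,000 gradually changing objects; object 0 is clearly F.
result = closure_under_tolerance(1000, {0})
assert result == set(range(1000))  # every object ends up counted as F
```

Run with no clear positive case at the start, the same closure leaves nothing F; either way the series is classified uniformly, with no room for borderline cases.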

As it stands, (i) constitutes the No Sharp Boundaries Paradox, and it is not a proper expression of our intuitions about vague predicates. According to Wright, although this sentence meets our tolerance intuitions, it conflicts with our other intuitions about vague predicates, and hence cannot be the characteristic sentence that expresses all our intuitions about vague predicates. For what follows from it is that all the objects are Fs or none are, depending on how the series of gradually changing objects starts, which conflicts with our convictions about the existence of clear positive and clear negative cases. What is needed, according to Wright, is a definition of vagueness (a characteristic sentence) that meets all our intuitions about vague predicates. Thus, Wright aims to find a definition of vague predicates that would express our tolerance intuitions, and also our intuitions about there being some clear positive and some clear negative cases of application of a vague predicate.

According to Wright, the sorites paradox, which has the tolerance principle as its major premise and which is said to constitute the no sharp boundaries paradox, can be resolved, and it is not the paradox of vagueness. The lesson that we can learn, though, is that "when dealing with vague expressions, it is essential to have the expressive resources afforded by an operator expressing definiteness or determinacy" (p. 130). Vagueness would then simply consist in negating such definiteness. Wright emphasizes that the operator that would play such a role is not redundant, as one might think, for 'A' and 'Def A' do not always coincide in truth-value. When 'A' is true, then both 'A' and 'Def A' are true, but if 'A' is not true, then 'A' and 'Def A' might differ in truth-value, in such a way that 'Def A' is false even though 'A' is not false. '~Def(A)' is not equivalent to 'Def(~A)', since ''A' is not true' is not equivalent to '~A' when 'A' is indeterminate in truth-value.
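Wright's point about the non-redundancy of 'Def' can be checked in a toy three-valued model (an illustrative assumption of ours, not Wright's own semantics), where 'Def(A)' is true when 'A' is true and false otherwise:

```python
T, I, F = "true", "indeterminate", "false"

def Def(v):
    # 'Def(A)' is true just when 'A' is (definitely) true; otherwise false.
    return T if v == T else F

def Not(v):
    # Negation leaves an indeterminate value indeterminate.
    return {T: F, F: T, I: I}[v]

# When 'A' is indeterminate, '~Def(A)' is true while 'Def(~A)' is false,
# so the two are not equivalent -- just as Wright says.
assert Not(Def(I)) == T
assert Def(Not(I)) == F

# And 'A' and 'Def A' coincide except when 'A' is indeterminate.
assert Def(T) == T and Def(F) == F and Def(I) != I
```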

Wright proposes the following sentence as a proper representative of the intuitions about vague predicates, and hence as the characteristic sentence:

(ii) ~(∃x)(Def(Fx) & Def(~Fx')),

from which we get:

(iii) Def(~Fx') → ~Def(Fx),

neither of which is paradoxical, for what they say is just that no definitely tall thing, for example, is succeeded by a definitely not tall thing.









6.2 The Higher-Order No Sharp Boundaries Paradox

As we have seen, the simple maneuver of introducing the 'Def' operator is supposed to remove the paradox. Now, an immediate worry that arises is whether this definition of vagueness just fixes one problem by replacing it with a new problem that is also raised by some intuitions and commitments that we have by virtue of defining vagueness in a certain way, namely by defining vagueness by the presence of borderline cases. That is, if (ii) is supposed to negate sharp boundaries of the first order, then the question is whether that commits one to the view that there are no sharp boundaries of the second order, then of the third order, and so on indefinitely.

The worry is that any strategy that deals just with first-order vagueness, instead of solving the problem, just postpones it, for higher-order vagueness presents an apparent challenge. If we only have a strategy for dealing with sorites paradoxes involving only first-order vague predicates, this amounts to having no strategy at all. All that we have then is that the first obstacle has been overcome; that is, a strategy may work for the first-order borderlines, thereby avoiding the imposition of sharp boundaries of the first order, while imposing sharp boundaries at some higher level. This is widely recognized as incompatible with the characterization of the phenomenon of vagueness by the presence of borderline cases. A commitment to first-order borderlines commits one to second-order borderlines and so on indefinitely, since, according to the paradigmatic picture of vagueness, to be a genuinely vague predicate is to admit not only borderline cases, but also a hierarchy of borderline cases. Thus, in order to deal with the problem of vagueness, it is not sufficient to have a strategy that handles only the sorites paradox of the first order. On the one hand we have a sorites paradox of the first order, to which we









can apply the strategy of introducing a border area for the applicability of any vague predicate, in order to respond to the challenge it presents. But this just postpones the resolution of the problem by shifting it to the next level. For there is the problem of the higher-order (strengthened) sorites paradox, which looks exactly like the first-order paradox, except that in the former we have 'is definitely red' instead of 'is red', for example.

Clearly, it looks as if by applying the strategy of introducing border cases of higher and higher order, we are driven into a vicious regress, which is only a manifestation of the predicament regarding the question of what, if anything, determines the boundaries of vague predicates.

We can express higher-order vagueness intuitions in a fashion similar to that in which we express intuitions about first-order vagueness:

(iv) ~(∃x)(Def(Fx) & ~Def(Fx')), or

(v) ~Def(Fx') → ~Def(Fx),

both of which constitute the No Sharp Boundaries paradox, for (iv) would be the major premise of the higher-order (strengthened) soritical argument.

Following Wright, one can apply the trick of introducing the 'Def' operator in order to resolve the strengthened paradox. So, we get from (iv):

(vi) ~(∃x)(Def(Def(Fx)) & Def(~Def(Fx'))),

which gives us:

(vii) Def(~Def(Fx')) → ~Def(Def(Fx)),

and this should generalize for n iterations of the 'Def' operator.

According to Wright, however, we cannot iterate the 'Def operator to resolve the

paradox in the strengthened argument since there is an important asymmetry between (ii)









and (vi), or any sentence that is supposed to express vagueness of a higher-order. While

(ii) is not paradoxical, (vi) is, according to Wright. (vi) and its ilk look harmless only

until we investigate the logic and semantics of the 'Def operator. Investigation into the

logic and semantics of the operator 'Def shows that (vi) is not as harmless as it looks, for

it allows drawing the paradoxical conclusion,

(viii) Def(~Def (Fx')) 4 Def(~Def(Fx)),

which says is that there are no definite cases of F, and hence reintroduces the No Sharp

Boundaries Paradox.

6.3 Wright's Argument

Wright has argued that we cannot take higher-order vagueness seriously, for it is incoherent. He gives the argument for the incoherence of higher-order vagueness, taking (vi) as the characteristic sentence for higher-order vagueness and using the DEF rule of inference to get (vii) from a reductio, the generalization of which will give us the conclusion that there are no definite cases of F if there is a definite first-order borderline case of F.

The DEF principle is the following rule:

A1, ..., An ⊢ P

A1, ..., An ⊢ Def(P),

where A1, ..., An are definitized4 propositions.

What follows from it is the definitization rule,

(DEF+) if Def(P), then Def(Def(P)),



4 To be definitized means that each member, Ai, of the set (A1, ..., An) begins with 'Def'.









and the rule of eliminating the 'Def' operator,

(DEF elimination) If Def(P), then P.

The problem is then that the formal equivalent of (vi), that is (vii), together with DEF, leads to contradiction. That is supposed to show that higher-order vagueness is incoherent, given that DEF is a valid rule of inference.

Wright's proof goes as follows:

{1}    [1.] Def(~(∃x)(Def(Def(Fx)) & Def(~Def(Fx'))))   (Premise)
{2}    [2.] Def(~Def(Fx'))   (Premise for C)
{3}    [3.] Def(Fx)   (Premise for RAA)
{3}    [4.] Def(Def(Fx))   (3, DEF+)
{2,3}  [5.] (∃x)(Def(Def(Fx)) & Def(~Def(Fx')))   (4,2 EG)
{1}    [6.] ~(∃x)(Def(Def(Fx)) & Def(~Def(Fx')))   (1, DEF-)
{1,2}  [7.] ~Def(Fx)   (3,5,6, RAA)
{1,2}  [8.] Def(~Def(Fx))   (7, DEF+)
{1}    [9.] Def(~Def(Fx')) → Def(~Def(Fx))   (2,8 C)
{1}    [10.] (∀x)(Def(~Def(Fx')) → Def(~Def(Fx)))   (9, UG)

Now, the next stage is to prove that, given [10], 'F' has no definite positive cases if it has definite borderline cases of the first order. The proof goes as follows:

{1}    [1.] (∀x)(Def(~Def(Fx')) → Def(~Def(Fx)))   (Premise)
{2}    [2.] (∃x)Def(~Def(Fx'))   (Premise for C)
{1,2}  [3.] Def(~Def(Fx))   (1,2)
{1,2}  [4.] ~Def(Fx)   (3, DEF-)
{1,2}  [5.] (∀x)(~Def(Fx))   (4, UG)
{1}    [6.] (∃x)Def(~Def(Fx')) → (∀x)(~Def(Fx))   (2,5 C)










It turns out, then, that the no sharp boundaries paradox is the paradox of higher-order vagueness. Susceptibility to this paradox is said to prove the incoherence of higher-order vagueness, for the paradox cannot be blocked successfully, as it is possible to do for first-order vagueness. Thus, there is an asymmetry between first-order vagueness and higher-order vagueness, which is an obvious motivation for pointing to higher-order vagueness as the trouble. For the threat of incoherence is distinctively higher-order. The asymmetry lies in the fact that (viii), which is paradoxical, can be inferred from the characteristic sentence for higher-order vagueness, while from the characteristic sentence for vagueness of the first order nothing paradoxical follows, according to Wright.

6.4 Is Higher-Order Vagueness Really Incoherent?

There are two different strategies for responding to Wright's conclusion that higher-order vagueness is incoherent. Both of them focus on the DEF rule and its application.

The strategy employed by Richard Heck is to argue that the rule is valid but has a restricted application, and that Wright's proof has nothing to do with higher-order vagueness, but only shows that DEF cannot be used with all the freedom of a classical rule.

Another strategy, employed by Dorothy Edgington, consists in showing that the DEF rule is not valid, since it allows the derivation of a false conclusion from a single indeterminate premise. In what follows we will present these lines of criticism.

6.5 Heck's Reply

The first question that Richard Heck considers is the motivation for using the DEF rule of inference rather than the alternative rule, DEF*:

(DEF*) A1, ..., An ⊢ P

A1, ..., An ⊢ Def(P).

The reason why Wright does not use this stronger version of the DEF principle is the worry that it would make the 'Def' operator redundant, since DEF* would seem to validate

(a) P → Def(P),

which would destroy Wright's approach to first-order vagueness.

A similar worry arises regarding DEF. For DEF seems to validate:

(b) Def(P) → Def(Def(P)).

Wright uses (b) in his proof, for 'Def(P)' and 'Def(Def(P))' coincide. That is, however, to reject higher-order vagueness, which is precisely what is at issue here. The question is then why not abandon DEF if it leads to something unacceptable.

Heck argues, however, that both DEF and the stronger DEF* are valid rules of inference. The troublesome (a) is not, in fact, validated by DEF*. The diagnosis locates the problem in the application of DEF*, and accordingly DEF, in subordinate deductions (conditional proof, reductio ad absurdum), where the distinction between 'A' and 'Def(A)' collapses. In the same way, DEF, when used in a subordinate deduction, collapses the distinction between 'Def(P)' and 'Def(Def(P))'.

So, what Heck disputes is the application of both rules in subordinate deductions, which amounts to validating the deduction theorem for them, that is, the inference from P ⊢ Def(P) to ⊢ P → Def(P), which is not correct. That shows that the DEF rule is not classical and cannot be used in classical proofs, for to do so is to collapse this distinction.
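Heck's point can be illustrated in a toy three-valued model (an illustrative assumption of ours, not Heck's own apparatus): the rule 'from P infer Def(P)' preserves polar truth, yet the corresponding conditional 'P → Def(P)' is not a logical truth, so conditionalizing on the rule is illegitimate.

```python
T, I, F = "true", "indeterminate", "false"

def Def(v):
    # 'Def(P)' is true just when 'P' is (polar) true; otherwise false.
    return T if v == T else F

def implies(a, b):
    # A strong-Kleene conditional, used here only for illustration.
    if a == F or b == T:
        return T
    if a == T and b == F:
        return F
    return I

# Rule form: whenever P takes the polar value true, so does Def(P),
# so 'P |- Def(P)' never leads from polar truth to falsehood.
assert Def(T) == T

# Theorem form: '|- P -> Def(P)' would require the conditional to be
# true under every valuation, but it is merely indeterminate when P is.
assert implies(I, Def(I)) == I

# So moving from the rule to the theorem (the deduction-theorem step
# that Heck disputes) collapses the distinction between P and Def(P).
```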

Also, since both DEF and DEF* are valid rules of inference, if there are no restrictions on their use, DEF* will give a paradoxical result taken in combination with (ii). This indicates that the incoherence is not distinctively higher-order, and that there is no asymmetry between first-order vagueness and second-order vagueness.

6.6 Edgington's Reply

Dorothy Edgington's strategy in 'Wright and Sainsbury on Higher-Order Vagueness' consists in disputing Wright's proof for the incoherence of higher-order vagueness by attacking the DEF rule, which she argues is invalid. The reasoning goes by first demonstrating how DEF makes trouble for higher-order vagueness, by inferring a contradiction from the supposition that there is some object which is definitely on the borderline between the definitely red things and the not definitely red things, for example.

1. Def(~Def(Def Red(x))) & ~Def(~Def Red(x))
1a. ~Def(Def Red(x))   [1, & elim., Def elim.]
1b. ~Def(~Def Red(x))   [1, & elim.]
2a. ~Def Red(x)   [1a, DEF-]
3a. Def(~Def Red(x))   [2a, DEF]

Now, 3a contradicts 1b. By looking at the proof and the rules of inference that are used for drawing the conclusion, it becomes apparent that these rules allow inferring a false conclusion from an indeterminate premise. However, according to Edgington, when indeterminacy of truth-value is at issue, it is necessary for a rule of inference, in order to be valid, not only to preserve truth from true premises, but also to exclude the derivation of a false conclusion from a single indeterminate premise. The DEF rule does not seem to meet this criterion, according to Edgington, since it allows the derivation of a false conclusion from a single indeterminate premise, namely 2a. As we can see from a footnote in Edgington's text, Wright replied to this by saying that the DEF









rule preserves only polar (definite) truth. In the light of Wright's reply, Edgington's criticism comes down to a complaint similar to Heck's regarding Wright's proof and the use of the DEF rule. So, Wright emphasizes that his DEF rule is valid and preserves only polar truth. Polar truth is characterized by appeal to a sentence's being true or false. The problem with Wright's proof, then, is, according to Edgington, in the use of the rule

(DEF-) ~Def(Def(P)) entails ~Def(P),

which relies on a reductio. The proof goes as follows:

1. ~Def(Def(P))   [premise for conditionalization]
2. Def(P)   [premise for reductio]
3. Def(Def(P))   [2, DEF+]
4. ~Def(P)   [1,3, RAA]

This complaint is similar to the one presented by Heck, for the trouble is in the transition from step (2) to step (3), which is sanctioned by the DEF rule but which does not do justice to the difference between Def(P) and Def(Def(P)). It seems natural, then, to conclude that the proof Wright gives in order to show that higher-order vagueness is incoherent is not satisfactory, for he illegitimately uses the non-classical DEF rule in a classical chain of inference.

Conclusion. In light of the foregoing discussion, we cannot but conclude that the threat that higher-order vagueness is said to present is still unanswered. The diagnosis of what has gone wrong with Wright's argument, which consists in blaming the application of the non-classical DEF rule within classical proofs, can be taken to show that the result Wright gets is just a violation of this restriction on the application of the DEF rule.









This leaves us with the problem of higher-order vagueness still unanswered. So far we have been dealing with views that, however much they differ in their proposed solutions to the problem of higher-order vagueness, have in common the description of vagueness as a semantic phenomenon. All these treatments, as we have seen, turned out to be unsuccessful in dealing with the problem, for different reasons. Now we turn to a view that also belongs to the paradigmatic view of vagueness, but which on the face of it does not have the problem of semantic vagueness. Thus, we turn to the epistemic view of vagueness, which is the subject of the next Chapter.















CHAPTER 7
EPISTEMICISM AND HIGHER-ORDER VAGUENESS

Overview. In this Chapter we formulate and critically examine the epistemic view of vagueness, which is championed in Timothy Williamson's book Vagueness (1994).1 The motivation for the discussion is the question whether the epistemic view of vagueness shares with the other views that characterize the phenomenon of vagueness by appeal to the presence of borderline cases the difficulty regarding the problem of higher-order vagueness. As we have learned, once first-order vagueness is characterized by the presence of borderline cases, one is committed to progressing up the hierarchy of borderline cases, since it does not seem plausible to stop the hierarchy at any point without introducing into the account either arbitrariness or the sharp boundaries that were originally rejected. We have seen in the previous chapters how this commitment to no sharp boundaries creates difficulty for the different views that share the paradigmatic conception of vagueness.

On the face of it, it looks as if the epistemic view, by characterizing borderline cases in an epistemic fashion and claiming that there are sharp semantic boundaries, does not have the problem of higher-order vagueness. The aim of this Chapter is to show that although the epistemic view of vagueness does not have a problem with semantic higher-order vagueness, still, by respecting the phenomenon of higher-order vagueness, epistemicism runs into trouble that parallels the problem of semantic higher-order



1 For all references to Williamson in the thesis see (Williamson 1994).










vagueness. So, it turns out, as we aim to show, that by respecting the phenomenon of higher-order vagueness, epistemicism faces the problem of epistemic higher-order vagueness.

The plan of the Chapter goes as follows: firstly, we will formulate the epistemic view of vagueness as a conjunction of two theses, and secondly, we will present some arguments to show that Williamson does not give the promised successful treatment of higher-order vagueness. A further point emerges, namely that epistemicism cannot hope to be a good theory of vagueness.

In Section 7.1 we formulate the epistemic view of vagueness, as presented in Williamson, as a conjunction of two main theses. In Section 7.2 we introduce a principle that is supposed to do the explanatory work for the claim that vagueness is a type of ignorance. Section 7.3 introduces the notion of epistemic higher-order vagueness and its relation to what Williamson calls 'a margin for error principle', which leads into the discussion of the failure of the KK principle presented in Section 7.4. In Section 7.5 we show how Williamson uses the alleged failure of the KK principle to answer a possible objection to his reliance on margin for error principles. In Section 7.6 we aim to show that Williamson is still in trouble even though KK might fail on independent grounds. In Section 7.7 we formulate an argument that is supposed to show what the trouble is and which directs us to its culprit, namely a margin for error principle. The argument uses only Williamson's principle and some reasonable suppositions which, when taken together, are supposed to show that the principle gives us a surprising and implausible result. In Section 7.8 we will point out a difficulty with Williamson's argument by analogy for margin for error principles.










We will conclude that Williamson's view of vagueness has an insuperable problem of higher-order vagueness, similarly to the alternative views that he criticizes precisely on the grounds of their not being able to give a satisfactory treatment of higher-order vagueness. Williamson has exchanged one problem, namely the problem of semantic higher-order vagueness, for a parallel and equally vexed problem, namely the problem of epistemic higher-order vagueness. The former gives us paradoxical results regarding the truth of certain claims, the latter regarding our knowledge of them.

7.1 The Epistemic View

The epistemic view of vagueness comes down to two major claims:

1. Vagueness is a type of ignorance--vague predicates have sharp boundaries, but we do not know where these boundaries lie.

2. The ignorance in borderline cases is the consequence of our limited powers of perceptual and conceptual discrimination.

The first claim, the claim that there is a sharp boundary between the positive and the negative extension of a vague predicate, and that we, ordinary speakers, are ignorant of where the boundary lies, implies that in borderline cases a vague predicate either applies or fails to apply, and one does not know which. The uncertainty that one experiences in forming a judgment about an object which is a borderline case of the predicate is epistemic uncertainty. So, according to the epistemic view, there is a sharp cut-off point between heaps and non-heaps, and hence a smallest number of grains that constitutes a heap while its predecessor does not. This implies that the major premise of a soritical argument is false. Cognitively limited as we are, the boundary between heaps and non-heaps is not knowable to us. We do have some knowledge, however. For some









sufficiently small, and for some sufficiently large, number of grains we certainly know whether 'heap' applies or not. As we go along the series of grains, and as the number of grains decreases (increases), we experience more and more difficulty in forming a judgment to the effect that the object in question is or is not a heap. We hesitate over the answer to the question whether the object we are judging is a heap; moreover, we are completely at a loss what to say when asked whether 'heap' applies to that object or not. The area over which we hesitate and are at a loss what to say is the area wherein the boundary lies, but we do not know exactly where.

Clearly, according to the epistemic view of vagueness there is no semantic vagueness. The advantage of such a view, Williamson argues, is that it is able to preserve all the laws of classical logic and semantics, one of which is the principle of bivalence. Now, this opens the question why one would be ignorant of the sharp boundaries of vague predicates. This question leads us to the second claim that, together with (1), constitutes the epistemic view of vagueness.

The second claim is needed in order to bolster the credibility of the epistemic view, which is strained by its characterization of vagueness as a type of ignorance, implying as it does that there is a fact of the matter whether a vague predicate applies or fails to apply, and that one cannot know which. The proposed answer to the question why one would be ignorant of such a fact is that vagueness is part of a broader phenomenon, namely the phenomenon of inexact knowledge. This type of knowledge is governed by what Williamson calls 'a margin for error principle' (MEP). This is, according to Williamson, the principle that governs vague predicates, and, contrary to the tolerance principle, MEP does not lead into paradox, and accounts for the phenomenon of higher-order vagueness.









In what follows, I will focus my attention on this second claim. However, the discussion of the second claim has consequences for the tenability of the first, although the motivation for concentrating on the second claim lies in the suspicion that Williamson might have a problem with epistemic higher-order vagueness parallel to the problem of semantic higher-order vagueness.

7.2 A Margin for Error Principle

Vagueness is part of a broader phenomenon: the phenomenon of inexact knowledge. Inexact knowledge is a type of knowledge that necessarily involves some ignorance. So, for example, I can judge the number of people in the stadium, and for some numbers I know that there are not exactly n people, while for other numbers I do not know that there are not exactly n people in the stadium. The source of inexactness is my limited power of discrimination, and hence judging just on the basis of perception how many people there are in the stadium yields not exact but only rough knowledge. Exact knowledge that there are n people, or that there are not n-1 or n+1 people, is such that either it is not perceptual knowledge, or it is perceptual knowledge that is reliable enough. So, I know that there are not 0 people, and I know that there are not 100,001 people, since the latter number exceeds the capacity of the stadium. However, I do not know that there are 28,000 people in the stadium just by looking, even if there were 28,000. For were there 27,999 people I could easily still believe that there were 28,000, because the difference of one individual is too small to be detected by my limited perceptual apparatus. So, I do not know that there are 28,000 people, since I do not know that there are not 27,999 people just by taking a glance at the stadium. This means that inexact knowledge requires some buffer zone, which is going to allow that only 'safe' beliefs are counted as knowledge. Inexact knowledge is governed by a margin for error principle that explains ignorance in borderline cases. A margin for error principle for this case states that:

(MEP) If I know that there are not exactly n people in the stadium, then there are not exactly n-1 people in the stadium.
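The stadium case can be sketched computationally (the crowd size and the margin of one person are hypothetical, chosen only for illustration): a claim of the form 'there are not exactly n people' is knowable by looking only if it would remain true throughout the margin for error.

```python
ACTUAL = 28_000   # hypothetical actual crowd size
MARGIN = 1        # perception cannot discriminate a difference of one

def knowable_not_exactly(n, actual=ACTUAL, margin=MARGIN):
    """'There are not exactly n people' is knowable by looking only if
    n differs from every count within the margin of the actual one."""
    return all(n != actual + d for d in range(-margin, margin + 1))

assert knowable_not_exactly(0)            # a clearly wrong count: knowable
assert knowable_not_exactly(100_001)      # exceeds the stadium's capacity
assert not knowable_not_exactly(27_999)   # true, but within the margin
assert not knowable_not_exactly(28_000)   # not even true, so not knowable
```

The counts inside the buffer zone are exactly those about which the belief, even if true, would be true only by luck.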

Similarly to the situation in the stadium, knowledge about heaps requires a MEP, according to Williamson.

Consider the term 'heap', used in such a way that it is very vague. Someone who
asserts 'n grains make a heap' might very easily have made an assertion with that
sentence even if our overall use had been slightly different in such a way as to
assign the sentence the semantic status presently possessed by 'n-1 grains make a
heap'. A small shift in the distribution of uses would not carry every individual use
along with it. The actual assertion is the outcome of a disposition to be reliably
right only if the counterfactual assertion would have been right. Thus the actual
assertion expresses knowledge only if the counterfactual assertion would have
expressed a truth. By hypothesis, the semantic status of 'n grains make a heap' in
the counterfactual situation is the same as that of 'n-1 grains make a heap' in the
actual situation; if the former expresses a truth, so does the latter. Hence, in the
present situation, 'n grains make a heap' expresses knowledge only if 'n-1 grains
make a heap' expresses a truth. In other words, a margin for error principle
holds. ... (p. 232)

(MEP*) If it is known that n grains is a heap, then n-1 grains is a heap.

According to Williamson there is a least number of grains that constitutes a heap, and one cannot know what that number is, for given (MEP*), one cannot know a conjunction of the form 'n grains is a heap and n-1 is not a heap'. One might wonder why this should be the case. The grounds for this claim are supposed to be some facts about knowledge, namely the facts about which conditions are necessary in order to ascribe knowledge to someone or to oneself. Williamson appeals to a reliability condition that is necessary for knowledge. One's belief that a certain object is a heap, when the object in question is a borderline case, is not reliable enough to count as knowledge. This means that the belief could be true just by luck, and surely everyone would be reluctant to count a belief that is true just by luck as knowledge.

Here Williamson resorts to an argument by analogy. For no number n, does one

know that there are exactly n people in the stadium. There are many numbers, such that

they are either big enough or small enough for which one is able to know that there are

not exactly n people in the stadium. If one knows that there are n people in the stadium

that implies that one' s belief meets the reliability condition. That is, the mechanism that

is responsible for forming the belief that there are n people in the stadium, would not

produce that belief had there been n+1 people in the stadium. But we know that, in the given circumstances, this condition is surely not met. And hence, one does not know that

there are n people in the stadium.

Similarly to the case of one's belief about the number of people in the stadium,

one's belief about the heap is not reliable enough to count as knowledge. The belief

forming mechanism that produces the belief that n grains make a heap is such that there

are counterfactual situations that are suitably different from the actual one in which the

belief formed would not be true. The sorts of counterfactual situations Williamson has in

mind are ones where 'heap' has a slightly different meaning, one that shifts the semantic

borderline. Since I cannot discriminate between the actual situation and the counterfactual situation, I cannot be reliably right, and hence I cannot have knowledge.

The claim that vagueness gives rise to (MEP*) is not supposed to depend on the

epistemic view of vagueness, according to Williamson. It is supposed to be based on

what is independently accepted as necessary for knowledge.

7.3 Epistemic Higher-Order Vagueness

Our discussion of (MEP*), and of how it accounts for one's ignorance of where the boundaries of vague predicates lie, was motivated by the problem of higher-order vagueness, and by the question whether the epistemic view is

immune to this problem. We have seen that the question about higher-order vagueness

arises for all the views that characterize the phenomenon of vagueness by the presence of

borderline cases, and accept the tolerance intuition. As we have learned before, the

paradigmatic conception of the phenomenon of vagueness has a commitment to deny

sharp boundaries of any kind (i.e., the paradigmatic conception of vagueness must

accommodate and account for the phenomenon of higher-order vagueness). There is a

question, then, whether the epistemic treatment of the phenomenon of higher-order vagueness faces a problem parallel to the one faced by semantic treatments. Williamson argues that the alternative theories have trouble giving an adequate

treatment of higher-order vagueness. Higher-order vagueness is, in fact, the major weapon that Williamson uses to criticize alternative theories, while claiming immunity from these troubles for his own theory. With the help of (MEP*), Williamson argues that he is able to deny the major premise of the soritical argument, embrace sharp boundaries,

but still respect the basic vagueness phenomenon.

Clearly, in the case of epistemicism, there is no problem with semantic higher-

order vagueness. It does not even get off the ground. Yet, epistemicism must somehow

respect the phenomenon of higher-order vagueness, and it does this, of course, by

portraying it as an epistemic phenomenon.

Our question now is whether this gives rise to epistemological problems for

epistemicism just like it gave rise to semantic problems for other views.

Just like first-order vagueness, higher-order vagueness is supposed to be explained, on Williamson's account, by appealing to ignorance. First-order vagueness is described as ignorance about where the

boundary between the positive and the negative extension of the vague predicate lies.

Accordingly, the phenomenon of the higher-order vagueness is described as ignorance

about this ignorance. Higher-order vagueness, Williamson argues, is manifested in the

failure of the KK principle; that is, one can know something without being able to know

that one knows it. Thus, Williamson has something that is supposed to parallel the

phenomenon of higher-order vagueness in other theories, and which consists in a limited

number of iterations of the knowledge operator. Epistemic higher-order vagueness

consists in the vagueness of 'it is known that H'. A margin for error meta-principle accounts

for our higher-order ignorance, just as (MEP*) accounts for our ignorance of the first-

order--that is, our second-order knowledge is inexact. This is how the epistemic view

accounts for the phenomenon of higher-order vagueness.

Let us turn now to Williamson's consideration of a possible argument based on the

iteration of the K-operator, and his response that the iteration of it gives out at some point

before we reach a paradoxical conclusion. Thus our question is why and how KK fails,

which is crucial in Williamson's account if he is to account for the phenomenon of

higher-order vagueness.

7.4 Why and How KK Fails

In a nutshell, the reason why KK fails is that second-order knowledge is supposed to be inexact, on Williamson's account, and hence governed by (MEP*), just as first-order knowledge is. But why does (MEP*) apply to first-order

knowledge? The answer that Williamson champions is that first-order knowledge is

inexact, and requires some margin for error.2 So, for example, the reason why I cannot know that 369 grains is a heap (where 369 is the least number of grains that constitutes a heap) lies in the fact that I cannot be justified in uttering the sentence 'I know that 369 grains is a heap'. This is so because in order to be justified in uttering the sentence in question, one

needs to have a belief and that belief needs to be reliable. The reliability rapidly decreases as we go along the soritical series and approach the borderline. A simple

counterfactual test shows that one does not know that the given collection of grains is a

heap, because that belief would not be reliable enough to count as knowledge. That is,

one would still believe that the object in question is a heap, even if there were a slight

shift in the use, and hence in the meaning of the predicate 'heap', so that the object that

was originally in the actual extension of the predicate 'heap' is not in the extension of the

predicate in the counterfactual circumstances. Yet, in order to have knowledge, one needs

to be reliably right in uttering the sentence 'This is a heap' even if 'heap' had a slightly

different meaning. However, the difference between actual and counterfactual meaning of

'heap' is too small and indiscernible for an ordinary speaker. An agent's belief-forming

mechanism is insensitive to the change in truth-value of the sentence 'This is a heap', and

consequently she cannot have knowledge. For, in the counterfactual situation, where

2 There are many issues about Williamson's argument here, but I am passing over them for the sake of
argument. For a discussion about the problems with Williamson's argument see (Ray 2004).

'heap' had a slightly different meaning, a belief that 'heap' applies to the object in

question would not be true, and the agent would still believe it. Vagueness of an

expression, Williamson argues, consists in the semantic differences between it and other

possible expressions that would be indiscernible by those who understood them (§8.5).

Thus, it is of crucial importance that one's belief be reliable, and that is so only if

one can discriminate between actual and counterfactual use of 'heap', for example. If one

cannot do that, then her belief about 'heap' is not reliable and does not constitute knowledge. It is clear from the foregoing discussion that Williamson's analysis of one's

knowledge about heaps distinguishes two conditions that are necessary for ascribing

knowledge to someone. These are: i) the belief must be true, and ii) the belief must be the product of a reliable belief-forming mechanism. The reliability of one's belief-forming mechanism varies along the series of grains. So, one is more reliable in some

areas than in others, presumably less so in ones close to the borderline. As the number of grains increases (or decreases) toward the borderline, there is less and less reliability in the belief-

forming mechanism, since the mechanism is not sensitive enough to the small differences

and it would produce a belief that could easily be false. This calls for some 'safety' zone

that is supposed to prevent forming a false belief. That is, only beliefs of a certain width

(presumably the ones that are about the cases that are far enough from borderline cases)

count as reliable, namely the ones which are the product of the belief-forming mechanism

which is sensitive enough to the variations along the series of grains.

Now, condition (ii) can be spelled out by saying that a belief is the product of a reliable belief-forming mechanism if it is of a width that guarantees its truth in suitably different situations. That is, a belief close to the borderline would not

count as knowledge, since in slightly different circumstances, were the borderline shifted, the belief would not be true, and hence it is not 'safe' enough to count as

knowledge in the actual situation. There is a need for a buffer zone.

It is not only that my reliability varies for different numbers of grains; there is also another dimension of variation, the variation along the scale of reliability. The reliability dimension of variation itself, according to Williamson, requires a margin for error (p. 227). That means that the notion of knowledge is vague. Similarly to our first-order

knowledge, the second-order knowledge is also inexact in this picture. This clearly opens

room for the failure of the KK principle, because although it is true that I know that y

is a heap, there are suitably different circumstances, in which it is false that I know that y

is a heap, and hence 'I know that y is a heap' cannot itself be known. That is, knowledge of one's knowledge is also inexact and itself requires a margin for error, which in turn

implies that KK is false. Each step higher in the hierarchy of knowledge requires a bigger

and bigger buffer zone. This means that the width of a belief that is 'safe' becomes

smaller as one goes up in the hierarchy. According to Williamson, knowledge that one

knows requires two buffer zones. Iteration of the knowledge operator narrows the width

of the belief by introducing another buffer zone. Thus, third-order knowledge requires three buffer zones, and consequently widens the required margin for error. So, the width of the 'safe' belief gradually decreases as one progresses up the hierarchy of knowledge, but there is an upper bound on the iteration of knowledge. That is, the iteration of the

K-operator gives out at some point before we reach the absurd conclusion, such as that

one does not know that a billion grains of sand is a heap.
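The buffer-zone picture sketched above can be made vivid with a toy model. The following sketch is our own illustration, not Williamson's formalism: the boundary value B and the margin width M are hypothetical numbers chosen purely for the example.

```python
# Toy model of the buffer-zone story (illustrative only; B and M are
# hypothetical values, not anything Williamson himself states).
B = 369  # hypothetical least number of grains that makes a heap
M = 5    # hypothetical width of one margin for error

def heap(n):
    # n grains make a heap iff n clears the hidden boundary
    return n >= B

def known_heap(n, order=1):
    # In this toy model, k-th order knowledge that n grains make a heap
    # requires n to clear the boundary by k buffer zones: each iteration
    # of the K operator adds one margin for error.
    return n >= B + order * M

n = B + M  # a case exactly one margin past the boundary
print(known_heap(n, order=1))  # True: I know that n grains make a heap
print(known_heap(n, order=2))  # False: I do not know that I know it
```

In this model KK fails at n = B + M: first-order knowledge holds but second-order knowledge does not, because the second K demands a second buffer zone. The iteration also 'gives out': for any fixed n, known_heap(n, k) becomes false once k exceeds (n - B) / M.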

7.5 The Failure of KK Answers a Seeming Trouble with MEP

Inexactness of metaknowledge gives rise to the failure of the KK principle. That is,

metaknowledge is inexact and it is governed by (MEP*) (§8.3). Securing this point is of

special importance for Williamson, because he uses it to answer an anticipated objection against (MEP*). The objection fails, according to Williamson, because it relies on the KK

principle.

The objection Williamson considers (§8.2) is that my knowledge of some things, such as that zero grains of sand is not a heap, seems to be inconsistent with what we get when we apply the margin for error principle. He considers the following set of

claims; call it the exhibit argument. It goes as follows. Let n be the least number of grains

such that I don't know that it is not a heap,

1. I know that n-1 grains is not a heap (empirical fact by choice of n),

2. I do not know that n grains is not a heap (by choice of n),

3. I know that (if exactly n grains is a heap, then I do not know that n-1 grains is not a
heap).

We can get from (3)

3'. I know that (if I know that n-1 grains is not a heap, then n grains is not a heap),

by contraposition within the scope of 'I know'. Applying K-elimination to (3') and modus ponens with (1) then gives us the following:

4. n grains is not a heap (from 1 and 3', by Modus Ponens).

So far, so good. There is nothing problematic with this conclusion. The problem is

supposed to arise when the imagined opponent moves from (4), to the conclusion

5. I know that n grains is not a heap (perhaps by reflection).

Apparently (5) contradicts (2), and the opponent no doubt relies on the principle

that if I can deduce something from certain propositions, then since the purpose of

arguing is to advance one's knowledge of the subject matter, I come to know what the conclusion of the argument is. This principle is clearly correct, and it is worth noticing that it is a metaprinciple. This result naturally raises the question which one of the

propositions 1-3 one needs to give up in order to restore consistency. Williamson's

answer is 'none', for they are not mutually inconsistent. In his reconstruction of the opponent's reasoning, Williamson commits his opponent not to the argument above, which has the principle in question in its background, but to another argument that goes as follows:

A) Know (1),

B) Know (3'),

C) (4) follows from them,

D) If I know some propositions, and (4) logically follows from them, then I know (4),

E) So, I know (4); that is, I know that n grains is not a heap,

F) But I don't know that n grains is not a heap (2, by choice of n).

Now, (E) and (F) apparently contradict each other.

The paradoxical reasoning, Williamson stresses, relies on an inference that takes as one of its premises not (1) but rather (A), which is in effect

(A) I know that I know that exactly n-1 grains is not a heap,

and it is (A), according to Williamson, that introduces the paradox, since it is (A) that is inconsistent with (2), and not (1), as possible critics could think. This, Williamson

concludes, saves the consistency of 1-3, and explains why they seem to be inconsistent. The

argument for their inconsistency relies on (A), which is false according to Williamson since KK fails, and hence the argument is unsound. The failure of the KK principle,

Williamson argues, manifests higher-order vagueness.

7.6 But Williamson is in Trouble Anyway

Although we have a good reason to think that the KK principle is indeed false, the

question is whether the KK principle fails for the right sort of reason that Williamson

offers in order to respect the phenomenon of higher-order vagueness. That is, the

independent plausibility of the thesis that KK is false is not sufficient for Williamson's

purposes. The KK principle needs to fail in the way required in order for higher-order vagueness to be respected.

Also, it is not clear at all that the imagined opponent is committed to the argument style that Williamson states in his condensed diagnosis of what has gone wrong with the opponent's reasoning. I will leave this line of criticism aside, and turn to examining

whether Williamson establishes that KK fails in the way in which he needs it to fail.

By examining Williamson's argument for the failure of the KK principle, we come to suspect that the reason he offers for that failure is not a good one: it is irrelevant to the principle in question, and to higher-order vagueness, for the reliability dimension of variation that is central in the analysis of first-order knowledge seems to be irrelevant to second-order knowledge. In the case of second-

order knowledge, the reliability condition is surely met. Second-order knowledge meets the reliability condition by preserving its link to the first-order knowledge, which must meet this condition. Once the reliability dimension of variation is fixed in this way, one

might very well argue that metaknowledge is not inexact. This, if correct, would have

implications for Williamson's claim that epistemicism pays respect to higher-order

vagueness. This all suggests that epistemicism might have the problem of epistemic

higher-order vagueness.

Moreover, if Williamson is forced to give up (MEP*) in the light of its

inconsistency with some undeniable facts, then he cannot give a promised account of why

one would be ignorant of one's ignorance about where the hidden boundaries are. Even

worse, it seems that (MEP*) is paradoxical in a way analogous to that in which the tolerance

principle is paradoxical.

7.7 The Problem of Epistemic Higher-Order Vagueness

Now, we aim to show that 1-3 are inconsistent. We will use Williamson's margin for error principle in the argument, and we will suppose that we can iterate the K-operator enough times (supposing that KK is not universally false, that is, that it is true at least in some cases) that the result contradicts some empirical facts: although I know that a billion grains is a heap, the argument will yield the result that I do not know that a billion grains is a heap. We will sketch the argument in several steps. The major

challenge then will be to account for enough-times K-ability of the premises of the

argument. Williamson's discussion gives us resources to motivate the premise in

question, and we will use his own commitments in order to show that we can iterate the

K-operator sufficiently many times so as to cause trouble for epistemicism. Similarly to

the tolerance principle, we aim to show that (MEP*) leads to paradox and Williamson has

the trouble parallel to the trouble that other views on vagueness have.

Supposing Williamson's margin for error principle to be K-able sufficiently many times, it will allow us to infer a clearly false conclusion, which Williamson tries to avoid

by restricting the number of iterations of the knowledge operator. So, consider an

argument such as the one below:

1. K100[(n)(H(n) → ~K~H(n-1))] (K-able premise)

2. K101[~H(0)] (K premise)

3. K100[H(1) → ~K~H(0)] (instantiating (1) to n=1)

4. K100[K~H(0)] (from (2))

5. K100[~H(1)] (K-embedded MT, from 3 & 4)

6. K99[(n)(H(n) → ~K~H(n-1))] (K-elimination, from (1))

7. K99[K~H(1)] (from (5))

8. K99[H(2) → ~K~H(1)] (instantiating (6) to n=2)

9. K99[~H(2)] (K-embedded MT, from 7 & 8)

...

(i) K1[~H(100)]

For sufficiently many iterations of the K operator we have, written in general form, the following:

1. Ki[(n)(H(n) → ~K~H(n-1))] (premise)

2. Ki+1[~H(0)] (premise)

3. K[~H(i)] (conclusion, from 1 and 2)

where 'i' is the numeral that stands for the number of grains that certainly make a heap.

This is an indubitably false conclusion, and it contradicts the fact that I know that i grains of sand do constitute a heap. If we are correct, this shows that something has gone wrong with the margin for error principle. For the above reasoning uses only the margin for error principle, and supposes that one can iterate the K-operator sufficiently many times to cause trouble; that is, to enable us to infer something that contradicts agreed facts, such as that I know that i grains of sand is a heap.
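The depth bookkeeping in the argument above can be checked mechanically. The following sketch is our own illustration, not part of the thesis's formal apparatus: it tracks only the number of nested K operators, granting premise (1) at depth i and premise (2) at depth i+1, and applies one step of K-embedded modus tollens per grain.

```python
def conclusion_depth(i):
    """Track K-depths in the iterated margin-for-error argument.

    Premise 1: K^i [(n)(H(n) -> ~K~H(n-1))]   (the MEP schema)
    Premise 2: K^(i+1) [~H(0)]
    Each step turns K^d [~H(n-1)], i.e. K^(d-1)[K~H(n-1)], together with
    the MEP instance H(n) -> ~K~H(n-1) at depth d-1, into K^(d-1)[~H(n)].
    Returns the K-depth at which ~H(i) has been derived.
    """
    depth = i + 1  # premise 2: ~H(0) is known at depth i+1
    for n in range(1, i + 1):
        # The MEP instance is available at depth - 1 by K-elimination
        # from premise 1, since depth - 1 <= i throughout the loop.
        assert depth - 1 <= i
        depth -= 1  # one application of K-embedded modus tollens
    return depth

# With i = 100 the bookkeeping reaches depth 1, i.e. K1[~H(100)]:
# plain knowledge that 100 grains is not a heap, contradicting the
# agreed fact if 100 is the chosen 'certainly a heap' numeral.
print(conclusion_depth(100))  # 1
```

The sketch makes the trade-off explicit: to derive ~H(i) at depth 1, premise (2) must be K-able i+1 times; restricting the number of iterations, as Williamson does, blocks the derivation only by denying one of these iterated premises.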

Now, Williamson expects to avoid any such conclusions by restricting the number

of iterations of the K operator. So, the expected rejoinder to our argument would be that

premise (2) or maybe (1) is false and that the argument is thus unsound. Knowledge is

supposed to give out (with the sort of cases Williamson considers) at some point before

we reach the apparently false conclusion. But there is no plausibility at all in denying that

one knows that one knows that zero grains is not a heap, or any iteration thereof. I know that zero grains is not a heap, and I know that I know, and I know that I know that I know, and so on. For no matter how many iterations of the K operator there are, I know that zero grains is not a heap. My belief that zero grains is not a heap is reliable, and no metabelief needs a further buffer zone that would prevent me from knowing that zero grains is not a heap. No subtle

change in the grain requirement can make zero grains constitute a heap or anything else

for that matter. It is a conceptual truth that zero grains is not a heap, and hence

knowledge about it is conceptual knowledge. Similarly, knowledge of a margin for error

principle would be underwritten by a belief that does not require a further buffer zone in

order to make it safe enough to be counted as knowledge. Williamson cannot simply

deny that (1) is K-able, given the type of knowledge it represents. The way Williamson arrives at the principle in question is via a philosophical argument. If

(MEP*) is known at all it must be known independently of experience and on the basis of

reflection. This secures the point that the reliability condition is met in this case because

the reliability dimension of variation is parasitic on the content of a belief. Unlike the

first-order knowledge about heaps, where the first-order knowledge or nesting of the K

operator might fail because of variation in the core belief due to the shift of the

borderline, a salient feature of (MEP*) is that it does not depend on where the borderline

is. If (MEP*) is true at all, it must be a conceptual truth, and then the content of the first-order belief, which is about (MEP*), is secure, and the reliability dimension of variation

gets fixed in that way. What is central in revealing the security status of a belief is the

way in which we acquire and justify the belief in question. That is, since we come to

believe and know (MEP*) by reflection, this way of coming to know tells us something

about the reliability status of such a belief. If this is correct, then the first-order belief meets

the reliability condition, and one can be said to have knowledge. Also, any further nesting

of the K operators cannot be prohibited by appealing to the reason that the reliability

condition is not met.

In light of the foregoing discussion, it looks like Williamson has not given us a reason to think that he gets the kind of failure of the KK principle that he needs. One might wonder what the diagnosis of this failure is. We suspect that what has gone wrong in this story is a conflation: the theoretical notion of reliability co-opted by externalists, which is a technical notion, is conflated, in Williamson's story, with an ordinary, broad notion of reliability that is vague. Williamson treats the technical notion of reliability as if it were just like the ordinary vague notion.

Now, if Williamson has not given us a reason to think that he gets the kind of

failure of the KK principle that he needs, then we do not have a reason to think that the

second-order knowledge is inexact. That is, we can justify nesting enough K-operators so

as to show that (MEP*) is inconsistent with some undeniable facts.

7.8 Further Reflection on MEP

Another worry about the argument for (MEP*) is that it depends on whether there

is an analogy between the stadium example and the 'heap' example. For in the former

case, though it is true that I cannot know, just by looking, the exact number of people in the stadium, perception is not the only available means for obtaining knowledge. By

knowing some facts, such as the capacity of the stadium and the number of tickets sold, I can come to know exactly how many people there are in the stadium. Thus,

although I do not know exactly how many people there are just on the basis of

perception, which is inadequate to this task, I could come to know the exact number by

using my power of reflection and making the relevant inferences from the information

that is available. The analogy that Williamson exploits is that the unreliability of a belief about heaps is similar to the unreliability of a belief about the number of people in the stadium. The major difference between these two cases, however, is that in the stadium case,

it is not plausible to think that the ignorance cannot be overcome in principle. If there is a

fact of the matter, and if the appropriate intellectual resources are deployed, it seems that

although it might not be known exactly how many people there are in the stadium, it is at

least knowable. However, the boundaries of the predicate 'heap' are unknowable in

principle. That is, it is not just the case that the boundary of the predicate 'heap' is not

known, but it is not knowable either. So if the argument by analogy is to be a successful

one, then the two circumstances, namely the stadium example and the 'heap' example

must be sufficiently similar.

Williamson, however, failed to persuade us that the situations in question are

analogous. Namely, he failed to argue that the unreliability associated with judging the number of people in the stadium by perception, and hence the lack of knowledge, is something that cannot be overcome just by employing some other intellectual capacity, such as reflection on the relevant facts and the making of relevant inferences. There

is an important asymmetry between 'not known' and 'not knowable'. The former is just a

contingent matter, ignorance that could be overcome, while the latter is ignorance that in

principle cannot be overcome.

On the other hand, one might wonder why Williamson did not choose another

example that is appropriately analogous to the 'heap' case. Surely, he could have done so.

Take, for example, continuous values such as measurements. Measuring length requires a margin for error because none of the tools that one could use to measure length is ideally accurate. Thus, by measuring length we get a value and, depending on the tool, a margin for error specified for that tool. This means that one never has exact knowledge of length.

So far it seems that we have a case perfectly analogous to 'heap'. Now, although

Williamson could have taken an example such as the one we just described, he could not have done so without losing the generality of a margin for error principle and making it dependent on some contingent facts. This is, perhaps, why he has chosen not to

use an example such as our measurement case.
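The measurement case can be put concretely as follows. This is our own sketch with made-up margin values: each instrument carries its own margin for error, so all one can know from a reading is an interval, and the margin, and hence the principle, is relative to the tool.

```python
# Sketch of tool-relative margins for error (the margin values below
# are hypothetical, chosen only for illustration).
def knowable_interval(reading, margin):
    # From a reading, all one can know is that the true length lies
    # within the instrument's margin for error on either side.
    return (reading - margin, reading + margin)

ruler_margin_mm = 1.0      # a ruler read to about a millimetre
caliper_margin_mm = 0.02   # a caliper read to a few hundredths

print(knowable_interval(150.0, ruler_margin_mm))  # (149.0, 151.0)
print(knowable_interval(150.0, caliper_margin_mm))
```

Because the margin is a parameter of the instrument (or unit), there is no single margin for error principle covering the ruler, the caliper, a meter, and an inch at once; each such principle states only an accidental barrier fixed by the choice of tool.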

The trouble is that a margin for error principle cannot be generalized so as to apply to any sort of measurement. It needs to be restricted to a choice of measurement unit. This becomes apparent if we take an example. One cannot apply a single (MEP*) to a meter, a millimeter, and an inch, since what counts as a small difference varies greatly from unit to unit. Now, it looks like the analogy with 'heap' breaks down again, since (MEP*) in the case of 'heap' is a general principle, independent of the

number of grains and where the borderline is. In the light of our argument against

(MEP*), one might want to restrict (MEP*) in the 'heap' case to a choice of n (i.e., the

number of grains, for example). This is, however, highly implausible. To relativize

(MEP*) to a number of grains would have as a result that some arbitrary choice

determines what the principle is. This would introduce arbitrariness into the proposed

account of vagueness. Also, there is a question whether it is plausible to think that an

arbitrary and contingent choice of n implies (MEP*), which is supposed to be a statement about the reliability of one's faculties and is independent of the number of grains. It is worth noticing that in the case of measurements, all (MEPs*) are just statements of accidental barriers, such as the choice of measurement unit, for example. But this does not

seem to be plausible for the 'heap' case for the reasons already mentioned.

If the foregoing discussion is right, then we can conclude that Williamson's principle is not a good one. On one hand, it looks as if it cannot be a general principle if the analogy with the measurement cases works. But the question is what is left of the epistemic account if the principle is restricted as it is in those cases; we suspect that the answer is arbitrariness and implausibility. On the other hand, if one attempts to insist on a general version of (MEP*), then we have the problem that it is paradoxical in a way parallel to that in which the tolerance principle is paradoxical.

Further, even if we granted Williamson that first-order knowledge is inexact, there still remains the question why one should think that second-order knowledge must be inexact. One diagnosis would be a mistaken belief in the inheritance of vagueness

that we have talked about in the discussion of Hyde. One of the troubles with Hyde's argument was that it depended on the inheritance of vagueness from the vague

predicates to the semantic predicates that we use to talk about them. It seems that

something similar is going on in Williamson's discussion. In Williamson's case, the

vagueness invades the epistemic predicates that he uses to talk about vagueness. First-order vagueness is described as a type of ignorance, and the same holds for

second-order vagueness. But it seems that the only reason why Williamson thinks that

metaknowledge is inexact is that our theoretical judgments about our reliability, according to him, do not meet the reliability condition. The justification for this claim is

offered through the claim that knowledge that one knows requires two margins for error.

A special case of inexact knowledge is that in which the proposition 'A' is itself of
the form 'It is known that B'. As we are not perfectly accurate judges of the
number in a crowd, so we are not perfectly accurate judges of the reliability of a
belief. A margin for error principle for 'It is known that B' in place of 'A' says that
'It is known that B' is true in all cases similar to cases in which 'It is known that it
is known that B' is true. As usual, the required degree and kind of similarity depend
on the circumstances, for example, on one's ability to judge reliability; 'It is known
that B' and 'B' may need margins for error of different widths. If 'It is known that
B' is true but there are sufficiently similar cases in which it is false, then it is not
available to be known. It cannot be known within its margin for error. Thus the
failure of the KK principle is a natural consequence of inexactness of our
knowledge of our knowledge. (pp. 227-8)

Now, it seems to be a mistake to claim that we cannot iterate K operators without

increasing the width of the buffer zone for the reliability of the belief, as we have seen in

the earlier discussion on this matter. For the knowledge we get by iteration of K's is

metaknowledge and is independent of the initial reliability dimension of variation. If the possible inexactness of first-order knowledge is not inheritable (for the types of knowledge in question are different, and reliability depends on the content of a belief), then we do not have a reason to think that metaknowledge is inexact.

Conclusion. If we are right, what we have learned from the foregoing discussion is that the epistemic view does have a problem with higher-order vagueness, although it does not have a problem of the semantic sort. The target that we have singled out to blame for the trouble is (MEP*). There is an apparent tension between the attempt to restrict (MEP*) and the attempt to account for one's ignorance in borderline cases, given that one must opt for its restriction if we are to avoid its being paradoxical in the way in which the

tolerance principle is paradoxical. We fear, however, that one cannot sacrifice generality

of the principle either without infecting the whole account with implausibility and

arbitrariness.

We have argued that the failure of the KK principle has not been shown to be responsible for the inconsistency of 1-3, and so we have good reason to think that something has gone wrong with (MEP*). We must conclude, in the light of the foregoing

discussion that epistemicism i) faces the problem of higher-order vagueness too, and

ii) Williamson fails to give a good reason to think that the boundaries postulated by

epistemicism are unknowable.

Thus, epistemicism cannot hope to be a good theory of vagueness.















CHAPTER 8
CONCLUSION

In the foregoing discussion we examined the views that share the paradigmatic conception of vagueness, as well as a view about the paradigmatic conception itself. We distinguished between, on the one hand, the views that acknowledge the phenomenon of higher-order vagueness and attempt to give an account of it (i.e., they aim to accommodate higher-order vagueness in the theory) and, on the other hand, the views that deny higher-order vagueness.

In the first category, we discussed Fine's (1975) treatment of higher-order vagueness, degree theory, Burgess' (1990) attempt to show that higher-order vagueness does not go all the way up in the hierarchy of borderline cases, and then Hyde's (1994) proposal as to why higher-order vagueness should not worry theorists who share the paradigmatic conception of vagueness. Epistemicism (Williamson 1994) also belongs to this category, but it is important to emphasize that the higher-order vagueness that epistemicism acknowledges is of the epistemic sort. The denial of higher-order vagueness was discussed as presented in Wright (1992), with his argument that higher-order vagueness is incoherent.

By examining these proposals, we learned that none of these views gives a satisfactory treatment of higher-order vagueness, and that higher-order vagueness is a serious problem for all of them.

We showed that Fine's supervaluational strategy does not work, and we specified three unresolved and seemingly insoluble problems for Fine. First, following Burgess' line of criticism, we saw that not even the first-level supervaluational story works. Second, sharp boundaries emerge after all. Third, it looks like nothing counts as supertrue on that account.

Similar criticism was articulated regarding the degree theory and its continuum-

valued semantics, which shares Fine's predicament and has no special resources to

handle higher-order vagueness.

By examining Burgess' treatment of the problem we discovered that his attempt to show that higher-order vagueness is finitely limited fails. His analysis of the secondary-quality predicates falls short of showing that vague secondary-quality predicates can be analyzed in limitedly vague terms. Further, Burgess' analysis is hopelessly circular.

All these views have in common that they follow an intuitive approach to the phenomenon of vagueness. This means they all attempt to tell some story that accounts not just for vagueness of the first order but also for vagueness of any order, or they aim to show that the hierarchy of borderline cases terminates and does not run all the way up. Having shown that they have failed, we turned to discuss an attempt to deny higher-order vagueness. It is counterintuitive to deny higher-order vagueness, and we have shown that Wright's attempt to back up such a denial fails. He gets the conclusion that higher-order vagueness is incoherent only by violating the restriction on the application of the DEF rule, a violation that also yields contradictory results.

We also showed that epistemicism is not immune to the problem of higher-order vagueness. It faces a problem parallel to semantic higher-order vagueness, namely epistemic higher-order vagueness. We came to conclude that Williamson's MEP, which is supposed to allow for respecting the phenomenon of higher-order vagueness, is paradoxical in a way parallel to that in which the tolerance principle is paradoxical.

We also discussed an attempt to respect the basic vagueness phenomenon without incurring the problem of higher-order vagueness. We examined Hyde's metatheoretical argument and showed that it is not a good one: it relies on fallacious reasoning. Moreover, the argument is question-begging and does not respect the peculiarity of its dialectical position, namely its being a meta-theory, which cannot take for granted what is taken for granted in the theory that it aims to defend.

After close examination, we came to the conclusion that all views that share the paradigmatic conception of vagueness i) face the problem of higher-order vagueness, or some parallel problem, and ii) fail to deal successfully with it.

An important feature of the kind of failure that these views exhibit is that they do not fail for some accidental reason, in such a way that some maneuver would fix the problem. Rather, they fail for principled reasons, and there seem to be no resources in the discussed theoretical milieu to deal with the problem. So we come to conclude that the problem of higher-order vagueness is insoluble for the paradigmatic conception of vagueness.

We are inclined to think that there is something in the paradigmatic picture that generates the problem and that is the common denominator of views that otherwise differ in many respects. A natural candidate to mark as the trouble-generator is the very characterization of the phenomenon of first-order vagueness by the presence of borderline cases, which, we suspect, rests on an idealization. If this diagnosis is correct, a take-home lesson of the foregoing discussion is that the underlying assumption of the









paradigmatic conception of vagueness should not be taken at face value anymore. The presupposition that generates the problem seems to be the presupposition that vague predicates have application conditions. Undoubtedly, vague predicates are used in this fashion in our everyday linguistic practice. Given the semantic role that predicates play in the language, one can be tempted to assume that vague predicates are assigned a classificatory role, and that they either apply or fail to apply. But vague predicates, unlike precise ones, are not well equipped to perform the job that predicates are assigned to do in the language, namely to classify and to categorize. So there is some discrepancy between our expectations for vague predicates and their performance. This should not be so surprising if we think that they are semantically deficient; this deficiency essentially affects their performance. In everyday practice we neglect this feature of vague predicates and use them as if they were precise. However, any theory that translates this pragmatic feature of vague predicates into a theoretical account of them does so by translating this idealization into the proposed semantical story about them, and inevitably ends up in trouble.¹ Theorizing about vague predicates under an

idealization seems to be the main culprit for the type of trouble that we identified as the problem of higher-order vagueness. All the theories that share the paradigmatic conception of vagueness are based on idealization; namely, they move our pragmatic idealization of vague predicates into the theory. So, if the idealization that infects an account of vagueness is correctly identified as a trouble-maker, then a take-home lesson from the failures of the paradigmatic conception of vagueness to deal successfully with the problem of higher-order vagueness is that there is no room for idealization in the


¹For a broader discussion see (Ludwig & Ray 2002).









theory about vague predicates. If this is the correct diagnosis of what has gone wrong with the paradigmatic conception of vagueness, and if that conception inevitably ends up in an iterative conception, the question is what we are left to say about the prospects for a solution to the problem of higher-order vagueness.

We cannot but conclude that the only promising way to go regarding the solution of the problem of higher-order vagueness is not to let it even get off the ground. This, however, seems attainable only if one abandons the temptation to theorize about vague predicates under an idealization by taking pretheoretical intuitions for granted. What the insuperability of the problem of higher-order vagueness for the paradigmatic theories of vagueness reveals, we suggest, is that the paradigmatic conception of vagueness and the theories that accept it call for a thorough rethinking of the basic presuppositions that we suspect are responsible for the common difficulty that all the discussed views share, namely the difficulty that they are irreconcilable with higher-order vagueness.
















REFERENCES

Burgess, John Alexander (1990) 'The Sorites Paradox and Higher-Order Vagueness', Synthese 85: pp. 417-474.

Edgington, Dorothy (1993) 'Wright and Sainsbury on Higher-Order Vagueness', Analysis 53.4: pp. 193-200.

Fine, Kit (1975) 'Vagueness, Truth and Logic', Synthese 30: pp. 265-300.

Heck, Richard (1993) 'A Note on the Logic of (Higher-Order) Vagueness', Analysis 53.4: pp. 201-208.

Hyde, Dominic (1994) 'Why Higher-Order Vagueness is a Pseudo-Problem', Mind 103.409: pp. 35-41.

Ludwig, Kirk & Ray, Greg (2002) 'Vagueness and the Sorites Paradox', in Tomberlin, J. (ed.), Language and Mind, Philosophical Perspectives 16, Ridgeview Press, Atascadero, CA, pp. 419-461.

Ray, Greg (2004) 'Williamson's Master Argument on Vagueness', Synthese 138: pp. 175-206.

Sorensen, Roy (1985) 'An Argument for the Vagueness of "Vague"', Analysis 45: pp. 134-137.

Williamson, Timothy (1994) Vagueness, London, Routledge.

Wright, Crispin (1992) 'Is Higher-Order Vagueness Coherent?', Analysis 52.3: pp. 129-139.