What do we mean when we speak of pseudoscience? The development of a demarcation criterion based on the analysis of twenty-one previous attempts

Angelo Fasce (University of Valencia) – angelofasce@hotmail.com

A critical analysis of twenty-one demarcation criteria is carried out, resulting in a demarcating tool that allows appropriate screening between science and pseudoscience. After an introduction that will emphasize the scientific and social relevance of the demarcation problem and the need for an adequate approach to face it, the specific problems of multicriterial attempts will be examined, such as their lack of theoretical foundations and the presence of dispensable and contradictory items. On the basis of this first analysis, a metacriterion, the necessary general requirements for a demarcation criterion, will be established. The data analysis will show a lack of progress among demarcation criteria from 1964 to date, and will provide a demarcation criterion partially based on the items with the greatest support.

The problem of demarcation is the problem regarding the limits of science; the problem of where to draw the boundaries between science and religion, paranormal thought or the humanities, with the distinction between science and pseudoscience having special historical relevance (Nickles, 2006; 2013). In this article, a criterion to demarcate between science and pseudoscience will be developed on the basis of a critical analysis of twenty-one demarcation criteria. In the first place, the problem will be introduced, discussing some of its historical complexities and some points of view regarding it ― for example, the ideas of Popper, Laudan or Pigliucci. After this introduction, the specific problems of the multicriterial approach will be analysed, finishing with the establishment of three necessary requirements for a demarcation criterion. Subsequently, we will analyse a data matrix elaborated with twenty-one multicriterial criteria published between 1964 and 2016. As a result of this analysis, a demarcation criterion will be proposed. In the last section, the relationship between this criterion and (Hansson, 2009) will be discussed.

The definition of pseudoscience, besides being a philosophical problem, is also a scientific issue, since without a demarcation criterion the study of the phenomenon can suffer from a lack of reliability due to the inability of researchers to define a homogeneous domain. The social implications of pseudoscience are also a reason to develop a demarcation criterion, since it has a high social prevalence and is especially harmful in clinical and educational contexts ― although the study of pseudoscience can also be a very useful tool to understand the nature of science (Lilienfeld, Lohr, and Morier, 2004; Afonso and Gilbert, 2009). Nevertheless, in spite of the fact that there have been various attempts, none of them has ever proved satisfactory to the demanding community of philosophers of science.

Some of the main problems related to the demarcation of science (Hansson, 2017b; Mahner, 2013), which should serve as a guideline for the development and evaluation of demarcation criteria, are:

  1. The problem of demarcation is generalized and not only related to pseudoscience. It is a distinction between science and non-science: a heterogeneous set of practices and ideas that includes religion, the paranormal, conspiracy theories, the humanities, art and so on. It is necessary to choose what to demarcate.

  2. There is a distinction between good science and bad science (Goldacre, 2008; Elliott, 2016). This distinction has been described as ambiguous, as a continuum (Pigliucci, 2013), but we must take into account that bad science is still science, so the demarcation criterion between good/bad science and pseudoscience may be the same. In any case, this issue has been barely discussed.

  3. Another decision for the philosopher is whether the demarcative proposal will be monocriterial or multicriterial. That is, whether to bet on a silver bullet, as is the case of falsificationism, or whether, on the contrary, there will be several necessary and/or sufficient conditions for being science or pseudoscience.

  4. Another decision of great relevance is the one concerning the units of demarcation: which aspects of science we are going to pay attention to when demarcating. Proposals in this regard are many: we can demarcate propositions (Popper, 1963), fields of knowledge (Bunge, 1982; Thagard, 1988), theories (Kitcher, 1982; Lugg, 1987), research programs (Lakatos, 1978a), or focus our attention on the logical-methodological level (Wilson, 2000), among other options.

  5. The fifth problem is whether the criterion will be historical or ahistorical; whether it will consider the demarcating unit or units from a synchronic or diachronic standpoint. Falsifiability is an ahistorical criterion; nevertheless, other authors have considered theoretical progress over time to be relevant (Lakatos, 1978a; Thagard, 1978; Grove, 1985; Hansson, 2009; Lilienfeld, Ammirati, and David, 2012).

  6. The unity or disunity of science is another theoretical problem to be faced by any demarcative attempt (Cat, 2006). Nowadays, once the physicalist project of the neopositivists has been abandoned, the concept of consilience is often appealed to: the idea that science as a whole offers us a worldview (Reisch, 1998; Wilson, 1998). In this regard, external congruence can be stated as necessary or merely desirable.

  7. Finally, there is the problem concerning the sense or necessity of a criterion. Is it worth developing a demarcation criterion, or can we just get rid of the problem? Although this question is valid, the problem of demarcation plays a key role in other questions such as: Should our public health systems include alternative medicine? Should the funding system of science finance research on homeopathy or on reiki? Should biology teachers teach the theory of evolution and intelligent design as alternatives? Should we hire dowsers, neurolinguistic programmers or biodynamic farmers to improve our management of public resources? Our answers to these questions are momentous, since they will directly affect our quality of life.

The most famous demarcation criterion has been falsifiability, developed by Popper as a response to the verificationism of the Vienna Circle ― a doctrine which should not be understood as a criterion between science and pseudoscience, but as a criterion of epistemic meaning that sought to demarcate between physics and metaphysics. According to falsifiability, a proposition is scientific if and only if it can be subjected to a process of falsification (Popper, 1963). This idea has been, and still is, very popular (Ruse, 1982; Lilienfeld, Ammirati, and David, 2012; Lack and Rousseau, 2016), although as a criterion and as an interpretative framework for scientific activity it has been widely rejected by the community of philosophers of science (Kuhn, 1970; Agassi, 1991; Lakatos, 1974a; Thagard, 1978; 1988; Kitcher, 1982; Laudan, 1983; Siitonen, 1984; Lugg, 1987; Rothbart, 1990; Derksen, 1993; Resnik, 2000; Mahner, 2007).

It has several problems, three of which are especially relevant. In the first place, falsificationism denies any importance to the confirmation of theories, an idea that does not correspond to the reality of scientific inquiry (Hansson, 2006). Secondly, it assumes the existence of an instantaneous rationality in falsifications, so that they are understood as a final verdict, which characterizes as irrational several events in the history of science in which a theory overcame disconfirmations (Lakatos, 1978a). Finally, the biggest problem of falsifiability is simple: falsifiable pseudoscience exists and is common. “Homeopathy cures the flu” or “repressed memories are a psychological phenomenon” are falsifiable claims (McNally, 2007; Ernst, 2010). This is why, as a criterion between science and pseudoscience, it turns out to be an imprecise tool.

After Popper, there have been multiple attempts to avoid all these problems. Kuhn proposed a demarcation criterion based on the concept of puzzle-solving (Kuhn, 1974), typical of what he called “normal science” as opposed to moments of scientific revolution. A scientist’s activity would consist of solving puzzles rather than testing fundamental theories; puzzles that would be defined within these fundamental theories, which, in turn, would be part of a “paradigm” in which scientists would be immersed. Nevertheless, Kuhn’s criterion is at least partially relativistic since, although we could demarcate between science and pseudoscience within a paradigm in which, for example, astrology is not carrying out puzzle-solving (Kuhn, 1974, p. 804), standards of rationality would be defined only within that particular paradigm. Thus, despite the existence of certain agreed values (Kuhn, 1977), the pseudoscience of one paradigm could be the science of another, given that they are “incommensurable” with one another. According to the interpretation of Popper and Lakatos (Worrall, 2003), the acceptance of Kuhn’s criterion would mean the “replacement of a rational criterion of science by a sociological one” (Popper, 1974, p. 1146).

The sociological approach to demarcation based on consensual values, which we could call “axiological demarcationism” ― although Lakatos, who proposed his own criterion of progressivity (Lakatos, 1974b), calls it “elitism” (Lakatos, 1978b) ― has had many defenders (Merton, [1942] 1973; Laudan, 1984; Lacey, 2004), with Shermer as its most famous backer within circles of skeptical thinkers (Shermer, 2013). According to this approach to the demarcation problem, science is what scientists do based on some consensual values shared by all of them. The main problems of axiological demarcationism are that it involves circular reasoning ― (1) scientific values are those on which scientists agree; (2) scientists are scientists because they do science; (3) science is what meets scientific values; and again (1) ― and that it leaves the decision on the nature of science in the hands of a community in which values are not homogeneously distributed, something that is observable in problems such as scientific fraud (Fanelli, 2009) and pseudoscience offered by healthcare professionals (Garb and Boyle, 2003; CAMbrella, 2012).

Laudan’s announcement of the demise of the demarcation problem, labeling it as a philosophical pseudoproblem (Laudan, 1983), has been especially influential as a reaction to all these failed attempts. In his famous article, Laudan states that “The (demarcation) question is both uninteresting and, judging by its checkered past, intractable. If we would stand up and be counted on the side of reason, we ought to drop terms like ‘pseudo-science’ and ‘unscientific’ from our vocabulary; they are just hollow phrases which do only emotive work for us. As such, they are more suited to the rhetoric of politicians and Scottish sociologists of knowledge than to that of empirical researchers” (Laudan, 1983, p. 125). Laudan holds a pessimistic point of view about the problem on the basis of his analysis of its history: since no philosopher of science has managed to develop a well-grounded and functional criterion, able to fulfill Laudan’s metaphilosophy (Laudan, 1983, p. 122), the problem must lie in the irrational nature of terms like “pseudoscience” and, therefore, its demarcation is a philosophical blind alley.

His denial of the demarcation problem, nonetheless, is not as radical as one might think: Laudan, in fact, offers a demarcation criterion for what is commonly called “pseudoscience” based on the “empirical and conceptual credentials for claims about the world. The ‘scientific’ status of those claims is altogether irrelevant” (Laudan, 1983, p. 125). Thus, Laudan considers that the problem of demarcation belongs to general epistemology. This implies that he is not a radical denialist of the demarcation problem; he is, rather, a denialist of pseudoscience and of a priori demarcation. But, despite his pessimism, his expectations have not been fulfilled (Mahner, 2013). In the first place, philosophy of science has not been replaced by general epistemology and is still a very lively field. Secondly, during the last decades the development of demarcation criteria has continued to take place, showing that the demarcation problem is still interesting for philosophers of science. And thirdly, the study of the rhetorical, psychological and sociological mechanisms of pseudoscience as a specific and delimited set of epistemologically unwarranted beliefs has been progressing (Lewandowsky, Gignac and Oberauer, 2013; Lobato et al., 2014; Blancke, Boudry, and Pigliucci, 2016; Hansson, 2017a; Fasce, 2017).

Pigliucci (2013) has criticized Laudan’s ideas, especially his metaphilosophy. Pigliucci argues against his requirement of necessary and sufficient conditions, accusing him of being out of fashion, and appeals, following Dupré (1993), to the Wittgensteinian concept of family resemblance to point out the fuzzy limits of concepts such as science and pseudoscience. But, despite being an attractive idea based on cluster techniques of classification, Pigliucci’s proposal is insufficient, since pseudoscience is a concept sufficiently delineated to be differentiated in a much more detailed way from other types of unwarranted beliefs (Lobato et al., 2014; Bensley, Lilienfeld, and Powell, 2014; Fasce and Picó, 2018a; 2018b) ― the key factor here is that pseudoscience is non-science disguised as science. The appeal to intuitive resemblance is a problematic strategy for developing a criterion, since intuitive resemblance may overlook some key features. For example, someone might think that acupuncture and shamanism seem sufficiently similar to be considered under the same concept, or that vaccine or AIDS denialism is a conspiracy theory of the same kind as the one defended by the 9/11 truth movement. Pigliucci’s proposal makes explicit a basic thought that underlies any demarcationist intention ― that the entities to be demarcated are similar in some respect ― but the justification of our decision must go beyond intuitive resemblance. Why do we call all these practices and ideas “pseudoscience”? Pigliucci tentatively appeals to “theoretical understanding” and to “empirical knowledge”, but this does not offer us a detailed and well-founded answer to the question, something that Laudan (justly) demands.

As a simple idea that looks promising, the concept of family resemblance has been very influential in recent years (Irzik and Nola, 2011), but sometimes its use has been a problem. For example, in the scale used in (Majima, 2015), belief in God, which is a paranormal belief (Tobacyk, 2004), is categorized as pseudoscience, as are myths such as the anomalous behavior of animals before an earthquake. Lundström and Jakobsson (2009) consider telepathy or the use of pendulums to choose the sex of babies to be pseudoscientific beliefs. These non-pseudoscientific kinds of beliefs amount to 40% of their scale. In (Tseng et al., 2013) we find a similar problem, with 60% of the scale pertaining to the paranormal ― for example, lucky numbers, precognition or telekinesis. Franz and Green (2013) use a scale of pseudoscience in which 100% of the items concern paranormal or spiritual thinking. Johnson and Pigliucci (2004) include 80% of items that refer to paranormal or conspiracy ideation ― aliens in Area 51, the ability of animals to detect ghosts, voodoo or seven years of bad luck after breaking a mirror. Sound psychometric measurement of pseudoscience is unusual, since its family resemblance can be a double-edged sword. In fact, the only validated scale for the measurement of pseudoscientific beliefs is based on the criterion developed in this article, which rejects the use of the concept of family resemblance to assess pseudoscience (Fasce and Picó, 2018c).

Problems of multicriterial attempts

The multicriterial approach to demarcation, although it has precedents (Langmuir, [1953] 1989), has been booming since the 1980s. It consists of the development of lists of features of science and/or pseudoscience ― depending on which of the two the criterion focuses on ― that would allow us to define them. Whether these characteristics are necessary or sufficient conditions depends on each author, since in many cases the transgression of some items is allowed (Park, 2003; Lilienfeld, Ammirati, and David, 2012) and in others the criterion is completely strict (Bunge, 1982; Mahner, 2007; Hansson, 2009). For example, Lack and Rousseau define pseudoscience as “Any claim, hypothesis, or theory that is presented in the language and manner typical of scientific claims, but that fails to conform to accepted standards in science regarding openness to peer review, replicability, transparent methodology, and the potential for falsifiability is highly likely to be a pseudoscientific claim, hypothesis, or theory” (Lack and Rousseau, 2016, p. 39). In this case, we have as a first premise that the assertion, hypothesis or theory is presented as science, something that is essential in any definition of pseudoscience (Hansson, 1996), and the remaining conditions try to define when the first one is a fraud ― when something is non-science.

Nevertheless, we are not told whether these criteria are all necessary or whether one of them is enough for something to be pseudoscience. Given that Lack and Rousseau later offer a list of characteristics that are merely indicative of pseudoscience (Lack and Rousseau, 2016, p. 42), we may think that all these requirements are what they consider to be necessary; but if so, and even if not, the list is problematic. For example, replicability is required, a feature shared with other proposals (Gruenberger, 1964; Hansson, 1983; Beyerstein, 1995) that would leave out much of the social sciences, and even parts of psychology would have serious problems avoiding the label of pseudoscience (Open Science Collaboration, 2015). As other authors have argued (Vollmer, 1993; Norton, 2015), replicability should not be considered part of a demarcation criterion for this reason, being merely a positive value. Another obvious problem of this criterion is the inclusion of Popperian falsifiability which, as argued above, has already been ruled out as a demarcation criterion between science and non-science. The openness to peer review is also a problem. A great deal of scientific research is not open to peer review and does not even make its data and results public ― for example, private or military research ― and would thus be characterized as pseudoscience. In spite of this, the item is present in other criteria (Gruenberger, 1964; Tuomela, 1985; Beyerstein, 1995; Park, 2003; Lilienfeld, Ammirati, and David, 2012).

These types of theoretical problems have been a constant in multicriterial criteria since their origins. The forerunner of multicriterial demarcation was Gruenberger, with his article A Measure for Crackpots (1964). In it, Gruenberger tries to develop a way to measure how much of a crackpot the assessed pseudoscientist is. He assigns a particular value to each item of his criterion, which would allow us to calculate a final score going from zero to one hundred, zero being very scientific and one hundred very pseudoscientific ― it is the only criterion that carries out an explicit and numerical measurement of pseudoscience ― and without postulating any characteristic as necessary or sufficient. Gruenberger lists thirteen items, among which are predictability, the use of controlled experiments, parsimony, humility, open-mindedness or paranoia against the system. But there are two basic problems with this criterion which the multicriterial tradition has not been able to solve. In the first place: “What follows, then, is a check list of some significant items which we think are among the main attributes of the scientist” (italics are mine) (p. 6). The selection of items usually has weak foundations. The second problem, of which Gruenberger is also aware: “The scores are personal, arbitrary, and biased. The reader is urged to fill his own values, rather than waste time quibbling over mine” (p. 12). This is the usual problem of hierarchy between items. Why are some of them more important than others ― if indeed some of them are more important than others?
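To make the arithmetic of Gruenberger's checklist explicit, the following Python sketch reproduces the general scoring scheme just described; the item labels and point weights are hypothetical placeholders, not Gruenberger's original values (which are, by his own admission, arbitrary).

```python
# A minimal sketch of a Gruenberger-style "crackpot" score. The checklist and
# weights below are hypothetical placeholders, not Gruenberger's original values.
CHECKLIST = {
    "no controlled experiments": 15,
    "inability to predict": 15,
    "lack of parsimony": 15,
    "lack of humility": 15,
    "lack of open-mindedness": 20,
    "paranoia against the system": 20,
}  # the weights sum to 100, so total scores range from 0 to 100

def crackpot_score(violations: dict) -> float:
    """Sum the weighted violations (each graded from 0.0 to 1.0) into a score:
    0 = very scientific, 100 = very pseudoscientific."""
    return sum(CHECKLIST[item] * degree for item, degree in violations.items())

# Example: one item fully violated, another only partially.
print(crackpot_score({"no controlled experiments": 1.0, "lack of humility": 0.5}))  # 22.5
```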

Demarcation criteria have been published in academic articles as well as in books and websites related to skepticism and to the study of pseudoscience. They have also been used by courts. One example is the famed trial between Tammy Kitzmiller and the Dover Area School District, in which a group of parents filed a lawsuit against the teaching of intelligent design in the public schools of Pennsylvania, claiming that, as a type of creationism, its presence in the classrooms violated the First Amendment of the United States Constitution. Judge Jones appealed to multicriterial demarcation (Jones, 2005) in order to decide whether or not intelligent design was a religious pseudoscience ― as expected, the ruling concluded that it is a pseudoscientific form of creationism. His criterion included the presentation of incomplete explanations, the absence of causal relations and teleological thinking, but there are scientific studies that do not establish causal relations ― for example, some interpretations of quantum mechanics or correlational studies ― or that give incomplete explanations ― physics as a whole is incomplete ― and functional explanations, despite having detractors (Hempel, [1959] 1994), are often used by scientists (Godfrey-Smith, 1993).

Other judges prior to Judge Jones also adopted criteria of this kind, such as Judge Overton, who headed the litigation between William McLean and the Arkansas Board of Education. Judge Overton adopted in his decision (Overton, 1982) a criterion provided by Ruse, who later published a revised version of it (Ruse, 1982). Ruse’s criterion includes as essential characteristics of science that it (1) must refer to natural laws; (2) must explain these natural laws; (3) must be empirically testable; (4) its conclusions must be tentative; and (5) it must be falsifiable ― all these characteristics being necessary, and sufficient in conjunction. Despite being officially adopted in a court and having passed reviews, Ruse’s criterion has been the target of strong criticism, especially by Laudan (1988). Laudan argues that (1) and (2) are too strong, while (3), (4) and (5) are too weak, so that Ruse’s criterion would leave out scientific theories and fields because of the first two items and would let in pseudosciences through its last three. For example, with regard to (1) it is debatable that all science uses nomological explanations, with several authors against this idea (Cartwright, 1989; van Fraassen, 1989). With regard to (2), there are historical examples that would refute this item, such as the postulation of gravity by Newton, as well as many purely predictive, correlational studies that do not formulate explanations; furthermore, there are cases of pseudosciences that appeal to supposed laws, such as new German medicine or the law of attraction. On (3), (4) and (5) Laudan claims that creation science is ― or at least could be ― testable, tentative and falsifiable.

In view of all these problems, and in order for a demarcation criterion to be solid and functional, it will have to fulfill these three metaphilosophical requirements:

(X) All items must be discriminative, but not all have to be necessary. The fulfillment of each of them must be crucial, and their presence must respond to careful reasoning based on counterexamples. The criterion must be focused on logical needs rather than on sociological values. In this regard, when demarcating pseudoscience we have to ask ourselves: “If science fulfilled this item, would it cease to be science?”

(Y) Being presented as science must be mandatory for pseudoscience, since this is the very nature of pseudoscience. This item would be sufficient in conjunction with the fulfillment of one of the items of (X) ― the items regarding being non-science.

(Z) It must be supported by the cumulative work of many authors, acknowledging their errors but also incorporating their successes. Theoretical foundations capable of achieving consensus are very important.

It is necessary to carry out an in-depth analysis of the multicriterial tradition in order to reveal whether these three requirements can be satisfied. The two general questions that should guide this analysis are: Has there been progress in the development of demarcation criteria? And is it possible to develop a demarcation criterion able to meet these three requirements?

An analysis of twenty-one demarcation criteria

An analysis of the following twenty-one criteria will be carried out (Gruenberger, 1964; Dutch, 1982; Bunge, 1982; Kitcher, 1982; Hansson, 1983; 2009; Grove, 1985; Tuomela, 1985; Thagard, 1988; Glymour and Stalker, 1990; Derksen, 1993; Vollmer, 1993; Beyerstein, 1995; Schick and Vaughn, 1995; Ruse, 1996; Coker, 2001; Park, 2003; Jones, 2005; Skelton, 2011; Lilienfeld, Ammirati, and David, 2012; Lack and Rousseau, 2016). Criteria that have not passed a minimum review, either by peer review or by a publisher, that do not have some official status, and that have not been written by a recognized expert in the field (this last condition covers, for example, Beyerstein, 1995) have been left out. Some borderline proposals (Thagard, 1978; Giere, 1979; Rothbart, 1990; Reisch, 1998), which are not clearly multicriterial or which are not considered as such in the available bibliography on the topic (Hansson, 2017b), have been left out as well in order to increase consensus regarding the sample. Finally, only demarcation criteria published in English have been selected, to avoid misunderstandings when translating terms belonging to a field with as many semantic nuances as philosophy of science. For example, the subtle differences between verifiability, testability, confirmability, disconfirmability and falsifiability, or between progressiveness and fruitfulness, in a language that this author does not master could bias the sample. At any rate, this author is not aware of any criterion that meets the selection criterion and has not been published in English ― considering that (Hansson, 1983) has been translated by its own author (Hansson, 2017b) and that (Vollmer, 1993) has been translated in (Mahner, 2007).

These twenty-one criteria have in total seventy items:

1 Arguments from ignorance; 2 Non-replicable; 3 Appeal to tradition; 4 Different social support than offered to science; 5 Lack of attention to contrary evidence; 6 Lack of explanatory power; 7 Lack of progress; 8 Creation of mysteries; 9 Circular arguments; 10 Lack of assessment of alternative theories; 11 Non-cumulative knowledge; 12 Lack of credentials among its defenders; 13 Misuse of scientific data; 14 Inability to predict; 15 Lack of fruitfulness; 16 Rejection by the scientific community; 17 Publications without peer review; 18 Cherry picking; 19 Extraordinary claims; 20 Hypertechnical language; 21 Lack of boundary conditions; 22 Invention of facts; 23 Appeal to subjective or exceptional evidence; 24 Dependence on cultural facts; 25 Internal incongruity; 26 Lack of evidence; 27 Use of rhetoric and propaganda; 28 Authoritarianism; 29 Appeal to emotions; 30 False authorities; 31 Does not appeal to laws; 32 Paradoxical relationship with scientific methodology; 33 Teleological thinking; 34 Incomplete and non-causal explanations; 35 Conspiracy theories; 36 Work in solitude; 37 External incongruity; 38 Pretends to be new and old at the same time; 39 Magical thinking; 40 Conflicts of interest; 41 Non-existence of theories; 42 Is presented as science; 43 Lack of systematicity; 44 Non-falsifiable; 45 Alien to the domain of science; 46 Metaphysical ideas; 47 Claims to be consistent with facts, but only superficially; 48 Abuse of ad hoc hypotheses; 49 Appeal to paranormal abilities; 50 Avoidance of logic and mathematics; 51 Appeal to mythology; 52 Focuses on practical problems; 53 Lack of openness to criticism; 54 Is within the domain of science; 55 Is not reliable; 56 Presence of spurious correlations; 57 “Timid” phenomena; 58 Lack of parsimony; 59 Reluctance to test; 60 Lack of humility; 61 Abuse of statistics; 62 Ambiguous language; 63 Community of believers, not of researchers; 64 Poor approach to problems; 65 Deficient methodology; 66 Excessive pretensions; 67 Unnecessarily complex theories; 68 Unnecessary use of mathematical language; 69 “Many geniuses have been despised”; 70 Quote mining.

Table 1. Data matrix.
Note: Criteria are sorted from oldest to newest (from top to bottom) and items are ranked from least to most supported (from left to right). At the bottom, the total number of authors who support each item; on the right, the number of items of each criterion.

The elaboration of this list of items has also required some methodological decisions. In the first place, in some cases different criteria call the same item by different names. For example, item 17 ― “publications without peer review” ― is referred to in various ways: “public scrutiny” (Beyerstein, 1995), “peer review” (Lilienfeld, Ammirati, and David, 2012) or “the discoverer pitches the claim directly to the media” (Park, 2003); or item 46 ― “metaphysical ideas” ― which is called “ontology that allows the existence of immaterial entities and processes” (Bunge, 1982) or (lack of) “testability” (Schick and Vaughn, 1995). In these cases, the decision has been to group them under the same label, since the idea they intended to express is equivalent. Nevertheless, in cases of slight semantic differences the items have been kept separate. This is the case of, for example, items 7 and 11 ― “Lack of progress” / “Non-cumulative knowledge”; 23, 26 and 59 ― “Appeal to subjective or exceptional evidence” / “Lack of evidence” / “Reluctance to test”; or 39 and 49 ― “Magical thinking” / “Appeal to paranormal abilities”.

On the other hand, other items have been split, since they express several ideas at the same time. Examples of such composite items are: “Lack of falsifiability and overuse of ad hoc hypotheses” (Lilienfeld, Ammirati, and David, 2012), “Pseudoscientific concepts tend to be shaped by individual egos and personalities, almost always by individuals who are not in contact with mainstream science. They often invoke authority for support” (Skelton, 2011), or “Pseudoscience appeals to false authority, to emotion, sentiment, or distrust of established fact” (Coker, 2001). Another methodological decision related to the classification of items has been to reverse the meaning of some of them, so that they are now always negative and, therefore, point to pseudoscience. Some of these items were originally expressed as pointing to science (e.g. Kitcher, 1982; Vollmer, 1993), others to pseudoscience (e.g. Hansson, 2009; Lack and Rousseau, 2016), and others include both the positive and the negative expression of the same idea (e.g. Bunge, 1982; Thagard, 1988; Skelton, 2011). The only item that could not be affected by this decision is number 42 ― “is presented as science” ― since it is exclusive to pseudoscience, being included only by criteria that choose a negative expression of their items.

It is worthwhile to make an additional comment on the presence of dispensable and contradictory items, evidence of a lack of theoretical foundations. The recognition of dispensable items has been carried out based on the question included in requirement (X). For example, with regard to item 14 ― “inability to predict” ― some authors, such as Reichenbach, Popper or Lakatos, would consider predictive capacity to be mandatory for science, but it is debatable to what extent we can demand predictive capacity of all scientific fields (Rescher, 1998) ― for example, history, economics and sociology have difficulties performing predictions. Another example is item 62 ― “ambiguous language” ― since the presence of some linguistic ambiguity is tolerated within science. Although semantic elucidation has been a classic aspiration of philosophy of science (Carnap, 1950), there is still ambiguity in some scientific concepts, even in such basic ones as “gene” (Dietrich, 2000) or “species” (Hey, 2001). Nevertheless, some other items can be eliminated with no theoretical resistance, such as the lack of humility or work in solitude ― the complete list is: 8, 12, 17, 20, 24, 36, 40, 50, 52, 57, 58, 60, 62, 67, 68. Another problem is the presence of contradictory items. For example, items 45 and 54 tell us that pseudoscience makes statements both inside and outside the domain of science. Other cases are 20/62 and 22/47.

Table 2. Data analysis.
Note: AVI = Average number of voted items per criterion; ASI = Average support per item; MAI = Maximum agreement per item; IVAASI = Proportion of items voted above the ASI; MVI = Most voted items (above the ASI).

The first thing to ask about these data is whether they show some kind of theoretical progress during the last decades. Given that we can hypothesize that the selection of items responds to continuous reflection based on the study of previous proposals, it would be possible and desirable for these criteria to tend toward consensus based on the accumulation of theoretical successes. Looking at the data matrix (Table 1), this would be visible if, as we go down ― towards the most recent criteria ― the items these newer criteria choose accumulate in the lower-right corner of the table. Nonetheless, at first glance this does not seem to occur. A detailed analysis of the data (Table 2) reveals that the average support per item (ASI = the average of the votes divided by the number of authors), including the 21 authors in the calculation, is 12% ― this means that on average 2.52 authors support each of the items. The item with the most support (MAI) reaches 42% ― no item has majority agreement ― and 35% of the items have support above the ASI (IVAASI). An IVAASI rate of 35% together with a MAI of only 42% tells us that the distribution is quite unequal, with a large part of the 65% of items below the ASI having marginal support of just one or two authors ― in fact, 62% of the items do not exceed two votes. These data reveal a great disagreement about the chosen items. However, the total numbers could be hiding a greater consensus in the most recent proposals, something that would constitute evidence of theoretical progress.
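For readers who wish to reproduce these aggregate statistics, the following Python sketch computes ASI, MAI, IVAASI and AVI from an author-by-item binary matrix; since Table 1 is not reproduced here, the matrix below is random placeholder data at roughly the reported 12% density, not the actual data.

```python
import numpy as np

# Placeholder 21x70 binary matrix: M[a, i] = 1 if author a endorses item i.
rng = np.random.default_rng(0)
M = (rng.random((21, 70)) < 0.12).astype(int)

votes = M.sum(axis=0)                    # number of authors supporting each item
ASI = votes.mean() / M.shape[0]          # average support per item, as a share of the 21 authors
MAI = votes.max() / M.shape[0]           # maximum agreement reached by any single item
IVAASI = (votes > votes.mean()).mean()   # proportion of items voted above the average
AVI = M.sum(axis=1).mean()               # average number of items per criterion

print(f"ASI={ASI:.2f}  MAI={MAI:.2f}  IVAASI={IVAASI:.2f}  AVI={AVI:.2f}")
```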

To measure whether there has been greater agreement over the decades, the matrix should be divided into three groups: the seven oldest criteria (1964-1985 = (A)), the seven newest (2001-2016 = (C)) and the seven in between (1988-1996 = (B)). Evidence of theoretical progress would be: fewer total items, a higher ASI, a higher MAI or a higher IVAASI in (C). Nevertheless, what we observe does not denote the slightest progress. The number of items is the same in the three groups ― one more in (A), something that is not statistically relevant. The ASI has been maintained over time, standing at around 22% within each band, just like the MAI, which remains at 0.57 in all three. If we look at the IVAASI, we can also see a homogeneous distribution, with fluctuations of just around 2 percentage points ― although the highest IVAASI is that of (A), it is not a relevant difference. These numbers denote that demarcation criteria are theoretically stagnant, since their total number of items and the support for those items are almost identical throughout their historical evolution, showing no general progress. These numbers are evidence against the theoretical defense of philosophical progress regarding demarcation carried out by authors like Pigliucci (2013), justifying, at least partially, Laudan’s historical pessimism.
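The generational comparison can be sketched along the same lines. The snippet below assumes the rows of the placeholder matrix are ordered from the oldest criterion (1964) to the newest (2016), so that consecutive blocks of seven rows correspond to (A), (B) and (C); the exact definitions used for the within-band statistics are not spelled out in the text, so this is one reasonable reading rather than the original computation.

```python
import numpy as np

# Placeholder author-by-item matrix; rows are assumed to be ordered from the
# oldest criterion (1964) to the newest (2016).
rng = np.random.default_rng(0)
M = (rng.random((21, 70)) < 0.12).astype(int)

def generation_stats(G: np.ndarray) -> dict:
    """Within-generation statistics, computed over the items voted at least once."""
    votes = G.sum(axis=0)
    used = votes > 0
    return {
        "total items": int(used.sum()),
        "ASI": round(float(votes[used].mean()) / G.shape[0], 2),
        "MAI": round(float(votes.max()) / G.shape[0], 2),
        "IVAASI": round(float((votes[used] > votes[used].mean()).mean()), 2),
    }

# Blocks of seven consecutive criteria: (A) 1964-1985, (B) 1988-1996, (C) 2001-2016.
for name, G in zip("ABC", (M[:7], M[7:14], M[14:])):
    print(name, generation_stats(G))
```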

In order to gain a deeper insight into the theoretical development of the multicriterial tradition, it is appropriate to analyse the relationship between (A) and (C); that is, the relationship between the most recent and the oldest criteria, avoiding the possible influence of (B) on the results. What we observe after carrying out this analysis is that the increase in total items between (A) and (C) is very pronounced ― 52.5% ― and thus (A) and (C) explain 87% of the variability of items ― (A) and (B) explain 80%, and (B) and (C) 81%. In fact, 83% of the items supported by (C) are different from those supported by (A), with 27% of them having at least one vote in each generation ― a number that is reduced to 9% if we demand at least two votes in each generation. These new items cannot be explained by a greater length of the newer criteria, because the criteria of (A) are slightly longer than those of (C) (AVI). Moreover, taking the ASI into account, the new items have done nothing to produce agreement. Given that the two generations consider very different items, the ASI is 14% and, together with the IVAASI, which does not have the leveling effect of (B), they are reduced by 51% compared to the global data. This is because many items have been introduced with minimal support. Instead of understanding demarcation as joint work, these authors have increased their theoretical isolation with the passing of the decades.

Nevertheless, it is possible to recognize some items as the most popular, so that it is possible to define an average criterion composed of items 5, 17, 46, 53, 44, 7, 65 and 37 ― eight items have been selected since the overall AVI is 8.76. If we analyse this criterion (TOP 8), we find numbers that, even though they are far from achieving consensus, improve on those of the complete list of items. We find an AVI of 2.9 and an ASI of 0.36. The items with the most agreement reach 42%, with items 7, 65 and 37 at this level of support. In addition, the IVAASI rises to 62%, so there are now no items with marginal support. But even in TOP 8 there is still a high level of disagreement about the nature of science, even regarding its most essential features, something that has already been measured among philosophers in a broad sense (Alters, 1997), but never among experts in demarcation. The ASI of TOP 8 triples that of the general data. It is still far from a majority, but it is the best we can obtain regarding support for a demarcation criterion.

As established in the first requirement that a demarcation criterion must fulfill, all items must be discriminative. Thus, it is worth asking ourselves whether science could fulfill some of these eight features and maintain its status. In this regard, two items are problematic: 17 and 37. Item 17 has already been discussed above, noting that, although peer review is a tool of great interest for maintaining the reliability of scientific publications, not going through such a process does not necessarily invalidate the results of an investigation. Because a considerable part of science does not go through peer review, and thus meets item 17, this item must be removed from the criterion.

On the other hand, item 37 labels as unscientific heterodoxy and progress based on the criticism of well-established theories. To give two classic examples: the theory of relativity violated some of the accepted principles of Newtonian mechanics, and Alfred Wegener’s theory of continental drift was initially rejected; both theories conflicted with the beliefs of the scientific community of their time. To label something as pseudoscientific simply because it is not orthodox is something that should be avoided in the light of the history of science (Toulmin, 1985), since doing so could suffocate the freedom of thought and criticism within science, impeding its progress. Although it is true that the Popperian model of scientific progress, based on increasing explanatory power with a new theory that includes the previous one, would be ideal, scientific progress often takes place by defending new ideas that are partially incongruent with current knowledge. After all, the epistemological problem with homeopathy is not that it violates the principles of chemistry; the problem is that it violates them without offering in return a theory based on evidence.

The thematic spectrum of the remaining items is: (1) Scientific domain: 46, 44; (2) Method: 65; (3) Evidence: 5, 53, 7. Beyond the specific details of these items, these seem to be the three essential indicators of pseudoscience. They are also inviolable: there can be no science outside the scientific domain, with a method that is too biased, or not based on evidence. No philosopher of science has advocated a science that lacks these characteristics. A scientific theory or hypothesis on the embryology of unicorns is not valid, a scientific theory without a good methodology is not reliable, and a scientific theory without confirmatory evidence is, at best, no more than a mere hypothesis. To these three essential features we must add item 42 ― “is presented as science”. This is due to the prefix “pseudo-” of “pseudoscience”: its nature is to be non-science presented as science. With this, pseudoscience will be defined as follows:

(Pseudoscience) (1 and/or 2 and/or 3) and (4).

    1. Refers to entities and/or processes outside the domain of science.

    2. Makes use of a deficient methodology.

    3. Is not supported by evidence.

    4. Is presented as scientific knowledge.

Fulfilling any of the first three items is necessary, and sufficient in conjunction with (4), for something to be pseudoscience. Taking into account that pseudoscience is defined on the basis of current knowledge about methodologies and evidence, this demarcation criterion meets the three requirements, demarcating between science and pseudoscience any epistemological product ― theories, hypotheses, propositions, etc. Nevertheless, it could be even more explicit. Concepts such as “scientific domain” should be elucidated ― which, based on items 46 and 44, would be defined as non-metaphysical and disconfirmable semantic content ― as well as “deficient methodology”, which will require an analysis of the epistemological foundations that unify all methods and methodologies of science, and “evidence”, since it is scientific evidence that is appealed to here. The elucidation of these concepts is not the goal of this paper, although it is a necessary continuation for an in-depth foundation of this demarcation criterion.

Nevertheless, at its current level of completion it already allows a fairly solid and functional demarcation. For example, paranormal thinking meets (1), since by definition it appeals to phenomena outside the domain of science (Broad, 1953; Tobacyk, 2004), although it would not be pseudoscience because it does not meet (4). On the contrary, parapsychology can present an optimal methodology and does not necessarily fulfill (3), but it meets (1) ― since the existence of the so-called “psi phenomena” has never been demonstrated ― and (4), so it is pseudoscience. There are many cases of fulfillment of (1): pseudoscientific ideas that appeal to metaphysical concepts, such as acupuncture and its “Qi”, reiki, or chiropractic and its “subluxations”. Others fail to avoid (2). A case in point is EMDR (Herbert et al., 2000), a technique that avoids (1) and (3) but not (2), since when studies are done using a triple-blind methodology ― EMDR without imitation of saccadic movements ― the results suggest that it works through covert exposure by visualization and not because of the specific technique offered by EMDR (Davidson and Parker, 2001; Cusack et al., 2016). Conspiracy theories could avoid (1) but not (2), and depending on whether they meet (4) they will or will not be pseudoscience ― for example, science denialism would be a pseudoscience, given that it fakes scientific controversies using conspiracist ideation (Hansson, 2017a; Fasce and Picó, 2018d). It would be a great contribution to the discussion if the reader endeavored to find counterexamples to this criterion.
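As a compact check of the logical structure just applied, the following sketch encodes the criterion as a Boolean predicate (pseudoscience if and only if something is presented as science and meets at least one of items (1)-(3)) and reproduces the judgments of the examples discussed above; the encoding of each case is my own shorthand, not a formal assessment instrument.

```python
# Toy encoding of the proposed criterion: pseudoscience = presented as science
# (item 4) together with at least one of items (1)-(3).
def is_pseudoscience(outside_domain: bool, deficient_method: bool,
                     lacks_evidence: bool, presented_as_science: bool) -> bool:
    return presented_as_science and (outside_domain or deficient_method or lacks_evidence)

# Judgments taken from the examples in the text.
print(is_pseudoscience(True,  False, False, False))  # paranormal thinking: meets (1) but not (4) -> False
print(is_pseudoscience(True,  False, False, True))   # parapsychology: meets (1) and (4) -> True
print(is_pseudoscience(False, True,  False, True))   # EMDR: meets (2) and (4) -> True
```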

A comment on the relationship between this criterion and Hansson’s. Revolution or progress?

The result of the analysis carried out, a demarcation criterion constituted by four items, has a direct relationship with (Hansson, 2009). His criterion is the only one in the whole data matrix that did not fulfill any of the items of TOP 8, a fact that is surprising only if Hansson’s proposal is misinterpreted. In spite of the fact that in recent years it has gained notoriety by being used as a demarcation criterion in a large number of studies, Hansson’s proposal lies between a metacriterion and a criterion: it is the general structure that any criterion, whether between science and pseudoscience, philosophy and pseudophilosophy, or good science and bad science, should present. This general structure is as follows (Hansson, 2009, p. 240):

1) It pertains to an issue within the domains of science in the broad sense (the criterion of scientific domain).

2) It suffers from such a severe lack of reliability that it cannot at all be trusted (the criterion of unreliability).

3) It is part of a doctrine whose major proponents try to create the impression that it represents the most reliable knowledge on its subject matter (the criterion of deviant doctrine).

Consequently, a proposal, whether theoretical or practical, will be pseudoscience if and only if it meets the first two items and, moreover, is presented as science. Nevertheless, it presents serious problems if we take it as a demarcation criterion. In the first place, it presents a problem of indefiniteness, something that the author is aware of and that is caused by the theoretical foundations of his work. Hansson defines the fields he considers scientific based on the German concept of Wissenschaft, claiming that “their very raison d’etre is to provide us with the most epistemically warranted statements that can be made, at the time being, on the subject matter within their respective domains. Together, they form a community of knowledge disciplines characterized by mutual respect for each other’s results and methods” (Hansson, 2013, p. 63). He even asserts that “Philosophy, of course, is a science in this broad sense of the word” (Hansson, 2013, p. 63). Hansson aims to conceptualize pseudophilosophy as pseudoscience, and this compelled him to maintain his ideas at a maximum level of indefiniteness, given that the domain of the humanities, as well as their methodology and evidence, are so different from those of science that any concreteness would lead him to reintroduce the concept of pseudohumanities ― “The rationale for choosing a criterion that is not directly applicable to concrete issues of demarcation is that such direct applicability comes at a high price: it is incompatible with the desired exhaustiveness of the definition.” (Hansson, 2013, p. 73). This high indefiniteness turns his ideas into a basic, but empty, structure.

The demarcation criterion presented in this paper represents progress with respect to Hansson’s schema. It accepts his general idea as valid but gives more meaning to its key concepts ― a meaning that needs later elucidation, but that already gives functionality to this criterion by indicating the non-negotiable characteristics of science and pseudoscience. Thereby, Hansson’s criterion of scientific domain is now defined in relation to the debate on the nature of metaphysics, with consideration of disconfirmability. His criterion of unreliability is now split into two items, one related to methodological problems and another related to lack of evidence. And finally, his criterion of deviant doctrine remains as an item of necessary compliance. Nevertheless, and this solves a serious problem of Hansson’s general schema, it is no longer necessary for pseudoscience to have a discourse that is within the scientific domain, since there are several cases of pseudoscientific ideas that are outside this domain ― parts of reiki, parapsychology and chiropractic have been mentioned. In addition, this is progress that, as I hope to have shown, enjoys the greatest possible consensus. In this regard, it is desirable that this demarcation criterion, initiated by Hansson, continue progressing in the future.

Afonso, A.; Gilbert, J. (2009). Pseudo-science: A Meaningful Context for Assessing Nature of Science. International Journal of Science Education, 32(3), 329-48.

Agassi, J. (1991). Popper’s demarcation of science refuted, Methodology and Science, 24, 1-7.

Alters, B. (1997). Whose Nature of Science?. Journal of Research in Science Teaching, 34, 39-55.

Bensley A.; Lilienfeld, S.; Powell, L. (2014). A new measure of psychological misconceptions: Relations with academic background, critical thinking, and acceptance of paranormal and pseudoscientific claims. Learning and Individual Differences, 36, 9-18.

Beyerstein, B. (1995). Distinguishing Science from Pseudoscience. Consulted in: http://www.sld.cu/galerias/pdf/sitios/revsalud/beyerstein_cience_vs_pseudoscience.pdf.

Blancke, S.; Boudry M.; Pigliucci, M. (2016). Why Do Irrational Beliefs Mimic Science? The Cultural Evolution of Pseudoscience. Theoria, 83(1), 78-97.

Broad, C. (1953). The relevance of psychical research to philosophy. In J. Ludwig (Ed.), Philosophy and parapsychology (pp. 43- 63). Buffalo, NY: Prometheus.

Brotherton, R.; French, C.; Pickering, A. (2013). Measuring Belief in Conspiracy Theories: The Generic Conspiracist Beliefs Scale. Frontiers in Psychology, 4, 1-15.

Bunge, M. (1982). Demarcating Science from Pseudoscience. Fundamenta Scientiae, 3, 369-388.

CAMbrella (2012). CAMbrella Documents and Reports. Consulted in: http://www.cambrella.eu/home.php?il=203&l=deu.

Carnap, R. (1950). Logical Foundations of Probability. Chicago: University of Chicago Press.

Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.

Cat, J. (2006). Unity and Disunity of Science. In S. Sarkar and J. Pfeifer (Eds.), The Philosophy of Science: An Encyclopedia (pp. 842-47). New York: Routledge.

Coker, R. (2001). Distinguishing Science and Pseudoscience. Quackwatch. Consulted in: http://hep.physics.utoronto.ca/~orr/wwwroot/JPH441/Pseudoscience.pdf.

Cusack, K., et al. (2016). Psychological treatments for adults with posttraumatic stress disorder: A systematic review and meta-analysis. Clin Psychol Rev., 43, 128-141.

Davidson, P.; Parker, K. (2001). Eye movement desensitization and reprocessing (EMDR): A meta-analysis. Journal of Consulting and Clinical Psychology, 69(2), 305-316.

Derksen, A. (1993). The seven sins of pseudo-science. Journal for General Philosophy of Science, 24(1), 17-42.

Dietrich, M. (2000) The problem of the gene. C R Acad Sci III., 323(12), 1139-46.

Dupré, J. (1993). The Disorder of Things: Metaphysical Foundations of the Disunity of Science. Cambridge, MA: Harvard University Press.

Dutch, S. (1982). Notes on the nature of fringe science. Journal of Geological Education, 30, 6-13.

Elliott, S. (2016). Bad Science: Cause and Consequence. J Pharm Sci., 105(4), 1358-61.

Ernst, E. (2010). Homeopathy: what does the “best” evidence tell us? Med J Aust, 192(8), 458-60.

Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE, 4(5), e5738.

Fasce, A. (2017). Los parásitos de la ciencia. Una caracterización psicocognitiva del engaño pseudocientífico. Theoria. An International Journal for Theory, History and Foundations of Science, 32(3), 347-365.

Fasce, A.; Picó, A. (2018a). Sociodemographic, Personality and Cognitive Differences Among Believers in Pseudoscience, the Paranormal and Conspiracy Theories. Submitted manuscript.

Fasce, A.; Picó, A. (2018b). Science as a Vaccine. The Impact of Scientific Literacy on Unwarranted Beliefs. Submitted manuscript.

Fasce, A.; Picó, A. (2018c). Conceptual Foundations and Validation of the Pseudoscientific Belief Scale. Submitted manuscript.

Fasce, A.; Picó, A. (2018d). Two of a Kind. More Similarities than Differences Among Science Deniers and Pseudo-Theory Promoters. Submitted manuscript.

Franz, T.; Green, K. (2013). The impact of an interdisciplinary learning community course on pseudoscientific reasoning in first-year science students. Journal of the Scholarship of Teaching and Learning, 13(5), 90-105.

Garb, H.; Boyle, P. (2003). Understanding Why Some Clinicians Use Pseudoscientific Methods: Findings from Research on Clinical Judgment. In S. Lilienfeld, S. Lynn, and J. Lohr (Eds.), Science and pseudoscience in clinical psychology. New York: The Guilford Press.

Giere, R. (1979). Understanding scientific reasoning. New York: Holt, Rinehart and Winston.

Glymour, C.; Stalker, D. (1990). Winning through Pseudoscience. In Patrick Grim (Ed.) Philosophy of Science and the Occult (pp. 92-103). Albany: State University of New York Press.

Godfrey-Smith, P. (1993) Functions: Consensus Without Unity. Pacific Philosophical Quarterly 74, 196-208.

Goldacre, B. (2008). Bad Science. London: Fourth Estate.

Grove, J. (1985). Rationality at Risk: Science against Pseudoscience. Minerva 23, 216-240.

Gruenberger, F. (1964). A measure for crackpots. Science, 145, 1413-1415.

Hansson, S.O. (1996). Defining Pseudoscience. Philosophia Naturalis, 33, 169-176.

— (1983). Vetenskap och ovetenskap. Stockholm: Tiden.

— (2006). Falsificationism Falsified. Foundations of Science, 11(3), 275-286.

— (2007). Values in Pure and Applied Science. Foundations of Science, 12, 257-268.

— (2009). Cutting the Gordian Knot of Demarcation. International Studies in the Philosophy of Science, 23(3), 237-243.

— (2013). Defining pseudoscience and science. In M. Pigliucci and M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (pp. 61-77). Chicago: University of Chicago Press.

— (2017a). Science denial as a form of pseudoscience. Studies in History and Philosophy of Science, 63, 39-47.

— (2017b). Science and Pseudo-Science. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. Consulted in: http://plato.stanford.edu/archives/sum2017/entries/pseudo-science.

Hempel, C. [1959] (1994). The Logic of Functional Analysis. Symposium on Sociological Theory. In M. Martin and L. McIntyre (Eds.), Readings in the Philosophy of Social Science. Cambridge, MA: The MIT Press.

Herbert, J., et al. (2000). Science and pseudoscience in the development of Eye Movement Desensitization and Reprocessing: Implications for clinical psychology. Clinical Psychology Review, 20, 945-971.

Hey, J. (2001). The mind of the species problem. Trends Ecol Evol., 16(7), 326-329.

Irzik, G.; Nola, R. (2011) A Family Resemblance Approach to the Nature of Science for Science Education. Science & Education, 20(7-8), 591-607.

Johnson, M.; Pigliucci, M. (2004). Is Knowledge of Science Associated with Higher Skepticism of Pseudoscientific Claims?. The American Biology Teacher, 66(8), 536-548.

Jones, J. (2005). Memorandum opinion. Consulted in: https://upload.wikimedia.org/wikipedia/commons/8/8d/Kitzmiller_v._Dover_Area_School_District.pdf.

Kitcher, P. (1982). Abusing Science: The Case Against Creationism. Cambridge, MA: MIT Press.

Kuhn, T. (1974). Logic of Discovery or Psychology of Research?. In P. Schilpp (Ed.), The Philosophy of Karl Popper, The Library of Living Philosophers, vol XIV, book II (pp. 798-819). La Salle: Open Court.

— (1977). The essential tension: selected studies in scientific tradition and change. Chicago: University of Chicago Press.

Lacey, H. (2004). Is Science Value Free?: Values and Scientific Understanding. New Jersey: Routledge.

Lack, C.; Rousseau, J. (2016). Critical thinking, science, and pseudoscience: Why we can’t trust our brains. New York: Springer Publishing Company.

Lakatos, I. (1970). Falsification and the Methodology of Scientific Research Programmes. In I. Lakatos and A. Musgrave (Eds.), Criticism and the Growth of Knowledge (pp. 91-197). Cambridge: Cambridge University Press.

— (1974a). Popper on Demarcation and Induction. In P.A. Schilpp (Ed.), The Philosophy of Karl Popper, The Library of Living Philosophers, vol. XIV, book I (pp. 241–273). La Salle: Open Court.

— (1974b). Science and pseudoscience. Conceptus, 8, 5-9.

— (1978a). The Methodology of Scientific Research Programmes. Cambridge: Cambridge University Press.

— (1978b). Mathematics, Science, and Epistemology. Cambridge: Cambridge University Press.

Langmuir, I. [1953] (1989). Pathological Science. Physics Today, 42(10), 36-48.

Laudan, L. (1983). The Demise of the Demarcation Problem. In R. Cohen and L. Laudan (Eds.), Physics, Philosophy and Psychoanalysis (pp. 111-127). Dordrecht: D. Reidel.

— (1984). Science and Values: The Aims of Science and Their Role in Scientific Debate. California: University of California Press.

— (1988). Science at the Bar – Causes for Concern. In M. Ruse (Ed.), But Is It Science? Buffalo: Prometheus.

Lewandowsky, S.; Gignac, G.; Oberauer, K. (2013). The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS One, 8(10), e75637.

Lilienfeld, S.; Ammirati, R.; David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50(1), 7-36.

Lilienfeld, S.; Lohr, J.; Morier, D. (2004). The Teaching of Courses in the Science and Pseudoscience of Psychology: Useful Resources. Teaching of Psychology, 28(3), 182-191.

Lobato, E., et al. (2014). Examining the Relationship Between Conspiracy Theories, Paranormal Beliefs, and Pseudoscience Acceptance Among a University Population. Applied Cognitive Psychology, 28, 617-625.

Lugg, A. (1987). Bunkum, Flim-Flam and Quackery: Pseudoscience as a Philosophical Problem. Dialectica, 41, 221-230.

Lundström, M.; Jakobsson, A. (2009). Students’ Ideas Regarding Science and Pseudo-science in Relation to the Human Body and Health. Nordina, 5(1), 3-17.

Mahner, M. (2007). Demarcating Science from Non-science. In T. Kuipers (Ed.), Handbook of the Philosophy of Science Vol. 1, General Philosophy of Science—Focal Issues (pp. 515–575). Amsterdam: North Holland.

(2013). Science and Pseudoscience. How to Demarcate after the (Alleged) Demise of the Demarcation Problem. In M. Pigliucci and M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (pp. 29-45). Chicago: University of Chicago Press.

Majima, Y. (2015). Belief in Pseudoscience, Cognitive Style and Science Literacy. Applied Cognitive Psychology, 29, 552-559.

McNally, R. (2007). Dispelling confusion about traumatic dissociative amnesia. Mayo Clinic Proceedings, 82(9), 1083-1090.

Merton, R. [1942] (1973). The Normative Structure of Science. In R. Merton (Ed.), The Sociology of Science: Theoretical and Empirical Investigations (pp. 267-278). Chicago: University of Chicago Press.

Nickles, T. (2006). The Problem of Demarcation. In S. Sarkar and J. Pfeifer (Eds.), The Philosophy of Science: An Encyclopedia, Vol. 1 (pp. 188-197). New York: Routledge.

(2013). The Problem of Demarcation: History and Future. In M. Pigliucci and M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (pp. 101-121). Chicago: University of Chicago Press.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Overton, W. (1982). Creationism in Schools: The Decision in McLean vs. the Arkansas Board of Education. Science, 215(4535), 934-943.

Park, R. (2003). Seven Warning Signs of Bogus Science. The Chronicle of Higher Education. Consulted in: http://www.unl.edu/rhames/park-seven-signs.pdf.

Pigliucci, M. (2013). The Demarcation Problem. A (Belated) Response to Laudan. In M. Pigliucci and M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (pp. 9-28). Chicago: University of Chicago Press.

Popper, K. (1963). Conjectures and Refutations. New York: Basic Books.

(1974). Reply to my critics. In P. Schilpp (Ed.), The Philosophy of Karl Popper, The Library of Living Philosophers, vol. XIV, book II (pp. 961-1197). La Salle: Open Court.

Reisch, G. (1998). Pluralism, Logical Empiricism, and the Problem of Pseudoscience. Philosophy of Science, 65, 333-348.

Rescher, N. (1998). Predicting the Future: An Introduction to the Theory of Forecasting. New York: SUNY Press.

Resnik, D. (2000). A Pragmatic Approach to the Demarcation Problem. Studies in History and Philosophy of Science, 31, 249-267.

Rothbart, D. (1990). Demarcating Genuine Science from Pseudoscience. In P. Grim (Ed.), Philosophy of Science and the Occult (pp. 111-122). Albany: State University of New York Press.

Ruse, M. (1982). Creation-Science is Not Science. Science, Technology, and Human Values, 7(40), 72-78.

Schick, T.; Vaughn, L. (1995). How to Think About Weird Things: Critical Thinking for a New Age. New York: McGraw-Hill.

Shermer, M. (2013). Science and Pseudoscience. The Difference in Practice and the Difference It Makes. In M. Pigliucci and M. Boudry (Eds.), Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (pp. 203-225). Chicago: University of Chicago Press.

Siitonen, A. (1984). Demarcation of Science from the Point of View of Problems and Problem-Stating. Philosophia naturalis, 21, 339-353.

Skelton, R. (2011). A Survey to the Forensic Sciences. Raleigh: lulu.com.

Thagard, P. (1978). Why Astrology Is a Pseudoscience. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1, 223-234.

(1988). Computational Philosophy of Science. Cambridge, MA: The MIT Press.

Tobacyk, J. (2004). A Revised Paranormal Belief Scale. International Journal of Transpersonal Studies, 23(1), 94-98.

Toulmin, S. (1984). The new philosophy of science and the “paranormal”. Skeptical Inquirer, 9, 48-55.

Tseng, Y., et al. (2014). The Relationship Between Exposure to Pseudoscientific Television Programmes and Pseudoscientific Beliefs among Taiwanese University Students. International Journal of Science Education, Part B, 4, 107-122.

Tuomela, R. (1985). Science, Action and Reality. Dordrecht: D. Reidel.

van Fraassen, B. (1989). Laws and Symmetry. Oxford: Clarendon Press.

Vollmer, G. (1993). Wissenschaftstheorie im Einsatz: Beiträge zu einer selbstkritischen Wissenschaftsphilosophie. Stuttgart: Hirzel Verlag.

Wilson, E.O. (1998). Consilience: The Unity of Knowledge. New York: Vintage Books.

Wilson, F. (2000). The Logic and Methodology of Science and Pseudoscience. Toronto: Canadian Scholars’ Press.

Worrall, J. (2003). Normal Science and Dogmatism, Paradigms and Progress: Kuhn ‘versus’ Popper and Lakatos. In T. Nickles (Ed.), Thomas Kuhn. Cambridge: Cambridge University Press.


