Thursday, September 5, 2013

MIRACLES AND SCIENCE (PART 2)

Use of vague, exaggerated or untestable claims

Assertion of scientific claims that are vague rather than precise, and that lack specific measurements

Failure to make use of operational definitions (i.e. publicly accessible definitions of the variables, terms, or objects of interest, so that persons other than the definer can independently measure them)

Failure to make reasonable use of the principle of parsimony, i.e. failing to seek the explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible

Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science

Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.

Lack of effective controls, such as placebo and double-blind, in experimental design

Lack of understanding of basic and established principles of physics and engineering

Over-reliance on confirmation rather than refutation

Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: Falsifiability)

Assertion of claims that a theory predicts something that it has not been shown to predict

Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience".

Assertion that claims which have not been proven false must be true, and vice versa

Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e. hypothesis generation), but should not be used in the context of justification (e.g. statistical hypothesis testing).

Presentation of data that seems to support its claims while suppressing or refusing to consider data that conflict with those claims: This is an example of selection bias, a distortion of evidence or data that arises from the way the data are collected. It is sometimes referred to as the selection effect.

Reversed burden of proof: In science, the burden of proof rests on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g. an assertion regarding the efficacy of a novel therapeutic technique) is false. Since it is essentially impossible to prove a universal negative, this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.
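
To see how selection bias can manufacture apparent support from pure noise, here is a minimal sketch in Python (an illustration added for this post; all names are made up): it simulates trials of a "treatment" with no real effect, then "reports" only the favourable ones.

```python
import random

random.seed(1)

# Simulate 20 small trials of a treatment with NO real effect:
# each trial's "success rate" is the mean of 10 fair coin flips.
trials = [sum(random.randint(0, 1) for _ in range(10)) / 10 for _ in range(20)]

overall_mean = sum(trials) / len(trials)

# Selection bias: publish only the trials that "support" the claim
# (success rate above 0.5) and quietly drop the rest.
reported = [t for t in trials if t > 0.5]
reported_mean = sum(reported) / len(reported)

print(f"all {len(trials)} trials:   mean success rate = {overall_mean:.2f}")
print(f"cherry-picked {len(reported)}: mean success rate = {reported_mean:.2f}")
```

The full set of trials averages out near chance level, while the cherry-picked subset looks like a real effect; the distortion comes entirely from how the data were selected, not from the phenomenon itself.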

Appeals to holism as opposed to reductionism: Proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.

Lack of openness to testing by other experts

Evasion of peer review before publicizing results (called "science by press conference"):  Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.
Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.

Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.

Absence of progress

Failure to progress towards additional evidence of its claims: Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.
Lack of self-correction: Scientific research programmes make mistakes, but they tend to eliminate these errors over time. By contrast, ideas may be accused of being pseudoscientific because they have remained unaltered despite contradictory evidence. The work Scientists Confront Velikovsky (1976, Cornell University Press) delves into these features in some detail, as does the work of Thomas Kuhn, e.g. The Structure of Scientific Revolutions (1962), which discusses some of the items on this list of characteristics of pseudoscience.

Statistical significance of supporting experimental results that does not improve over time and is usually close to the cutoff for statistical significance: Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows that the experiments have simply been repeated until a success occurred due to chance variation.
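
The point about significance hovering near the cutoff can be illustrated with a small simulation (a sketch using only Python's standard library; the function names are illustrative): if a null experiment is simply rerun until chance alone yields p < 0.05, the "successful" p-values cluster just under the threshold, instead of shrinking the way genuine effects do as evidence accumulates.

```python
import math
import random

random.seed(0)

def p_value(successes, n=100):
    """One-sided normal-approximation p-value for 'successes' heads
    out of n fair-coin flips (null hypothesis: probability 0.5)."""
    z = (successes - n / 2) / math.sqrt(n / 4)
    return 0.5 * math.erfc(z / math.sqrt(2))

def repeat_until_significant(alpha=0.05):
    """Rerun a null experiment (no real effect) until chance alone
    produces p < alpha, and report only that p-value."""
    while True:
        p = p_value(sum(random.randint(0, 1) for _ in range(100)))
        if p < alpha:
            return p

reported = [repeat_until_significant() for _ in range(200)]

# The cherry-picked "successes" sit just under the 0.05 cutoff;
# a real effect studied with growing data would give ever smaller p-values.
near_cutoff = sum(1 for p in reported if p > 0.01) / len(reported)
print(f"fraction of reported p-values in (0.01, 0.05): {near_cutoff:.2f}")
```

Because a null p-value is roughly uniform, conditioning on p < 0.05 leaves most "successes" in the upper part of that range, which is exactly the tell-tale pattern described above.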

Personalization of issues

Tight social groups and authoritarian personality, suppression of dissent, and groupthink can enhance the adoption of beliefs that have no rational basis. In attempting to confirm their beliefs, the group tends to identify their critics as enemies.

Assertion of claims of a conspiracy on the part of the scientific community to suppress the results

Attacking the motives or character of anyone who questions the claims (see Ad hominem fallacy)

Use of misleading language

Creating scientific-sounding terms to add weight to claims and persuade nonexperts to believe statements that may be false or meaningless: For example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions, to show how easily the general public can be misled.

Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.
