On the evening of July 19, 1759, Emanuel Swedenborg arrived at a dinner party in Gothenburg, Sweden. He was seventy-one years old, a former mining engineer and anatomist who had published respected work on metallurgy and the structure of the brain, a member of the Swedish House of Nobles, and—increasingly, in ways that discomfited his admirers—a mystic whose claims of conversing with angels and traversing the landscapes of heaven and hell had made him a figure of controversy across the learned circles of Europe. The host was William Castel, a wealthy merchant. Fifteen guests were present.[1]
At approximately six o'clock, Swedenborg grew pale. He excused himself from the table and left the room. When he returned, he was visibly agitated. He told the assembled company that a fire had broken out in Stockholm—roughly three hundred miles to the northeast—and that it was spreading rapidly through the Södermalm district, the southern island where his own house stood.[2]
Over the next two hours Swedenborg left the room repeatedly, each time returning with fresh reports: the fire had consumed a friend's home; it was advancing toward his own; the wind was driving it westward toward the water. The other guests listened with what one imagines was a mixture of concern for their agitated companion and skepticism about what he claimed to perceive, since Stockholm was days away by the fastest courier, and there existed no mechanism—none conceivable within the physics of the age—by which news could travel faster than a horse.
At eight o'clock, Swedenborg announced with evident relief that the fire had been extinguished. It had stopped, he said, three houses from his own.[3]
Word of his performance reached the provincial governor that same evening. Swedenborg was summoned to provide a detailed account of what he had seen, or claimed to have seen, and the governor recorded his statements and waited for whatever confirmation or refutation the post might eventually deliver.
Two days later, a messenger arrived from Stockholm. A great fire had indeed broken out on the evening of July 19, in the Södermalm district. It had consumed approximately three hundred houses and rendered two thousand persons homeless before being brought under control. The Maria Magdalena Church had been gutted, its tower collapsed, its interior reduced to ash. The fire had stopped—as Swedenborg had announced at precisely eight o'clock that evening, while seated among fifteen witnesses in a merchant's dining room three hundred miles away—three doors from his residence on Hornsgatan.[4]
The incident would become the most extensively documented case of apparent clairvoyance in the eighteenth century, and it would draw the sustained attention of the greatest philosopher of the age, whose response to the evidence—a response characterized first by rigorous investigation, then by private acknowledgment, and finally by public betrayal—established a template for scientific engagement with such phenomena that has persisted, with remarkable fidelity to its original form, for nearly three hundred years.
II. The Philosopher's Betrayal
Immanuel Kant was thirty-five years old when reports of Swedenborg's vision began circulating through the coffeehouses and salons of Königsberg. He was not yet the author of the three Critiques—those works would emerge over the following decades to remake the landscape of Western philosophy—but he was already a respected lecturer in philosophy and natural science at the university, a man of considerable reputation, and a figure committed to what we now call the Enlightenment project: the systematic replacement of superstition with reason, of tradition with inquiry, of revelation with the patient accumulation of empirical evidence.
The Swedenborg reports posed a problem of the most fundamental kind. A man could not perceive events occurring three hundred miles distant from his body. The mind did not operate outside the confines of the skull. These were, or seemed to be, necessary conditions for a rational universe, conditions without which the entire mechanistic worldview that Kant and his contemporaries were laboring to construct would collapse into the chaos of medieval superstition, of demons and angels and arbitrary divine intervention. And yet the reports from Gothenburg were specific, multiply attested, and documented by witnesses of unimpeachable reputation.
Kant did what a philosopher, confronted with evidence that threatens his most fundamental assumptions, ought to do: he investigated. He commissioned an English merchant named Joseph Green—a man of such intelligence and probity that Kant would later claim never to have written a sentence in the Critique of Pure Reason without first reading it to Green and subjecting it to his judgment—to travel to Sweden and interview the witnesses directly.[5] Green spent months on the inquiry, visiting both Gothenburg and Stockholm, speaking with those who had been present at the dinner and those who had experienced the fire, assembling a dossier of testimony that he transmitted to Kant upon his return.
Green's original report has been lost to history, but Kant's response to it survives in a letter to Charlotte von Knobloch, written in 1763:
"The following occurrence appears to me to have the greatest weight of proof, and to place the assertion respecting Swedenborg's extraordinary gift beyond all possibility of doubt."[6]
Beyond all possibility of doubt. The words are Kant's own. The philosopher who would go on to establish the limits of human knowledge as the central problem of metaphysics, who would argue that space and time are forms of intuition rather than features of things-in-themselves, who would insist that reason must be brought before the tribunal of critique before its claims could be credited—this Kant, examining evidence compiled by his most trusted associate, found the case for Swedenborg's clairvoyance beyond dispute.
Three years later, in 1766, Kant published Träume eines Geistersehers, erläutert durch Träume der Metaphysik—Dreams of a Spirit-Seer, Elucidated by Dreams of Metaphysics—a work that subjected Swedenborg's theological writings to satirical critique and whose surface invited readers to dismiss him as a fantasist and "spook hunter." The book is stranger than simple mockery—it uses Swedenborg as occasion to question whether rationalist metaphysics possesses any better grounding than the visions it derides, raising questions about the limits of knowledge that would occupy Kant for the rest of his career—but the satire is what circulated, and the satire is what was remembered.[7]
What happened? In 1763 Kant acknowledged evidence that placed Swedenborg's abilities beyond all possibility of doubt. In 1766 he published a work whose philosophical substance was genuine—Moses Mendelssohn, reading it, could not determine "whether Dreams was meant to make metaphysics laughable or spirit-seeking credible"—but whose satirical register served purposes Kant would later admit had as much to do with self-protection as with truth. No intervening discovery had discredited Green's report. No new evidence had emerged to cast doubt on the testimony of the Gothenburg witnesses.
He explained himself with unusual candor in a letter to Moses Mendelssohn:
"I realized that I would have no peace from incessant inquiries until I had rid myself of my suspected knowledge of all these anecdotes... The best way of forestalling any attacks of mockery directed against my person would be to mock at myself."[8]
The confession deserves attention because it establishes, at the very dawn of the critical philosophy, the template that would govern scientific engagement with anomalous cognition for the next three centuries. Kant did not claim that further investigation had changed his assessment. He did not report having discovered fraud or error in Green's inquiry. He admitted, with the frankness of a man writing to a friend rather than for publication, that he had attacked Swedenborg to protect his standing among colleagues—to inoculate himself against charges of credulity by demonstrating that he was capable of derision, that he could not be suspected of taking such matters seriously, whatever he might privately believe.
Mendelssohn, having read Dreams of a Spirit-Seer, wrote to Kant expressing bewilderment at precisely this ambiguity of tone. Kant's reply was blunt: "My own mind was in a state of paradox."[9]
Investigate privately, dismiss publicly, protect reputation at the cost of intellectual honesty—the greatest philosopher of the Enlightenment established this precedent. His descendants—and they are legion, comprising the great majority of scientists and philosophers who have engaged with this evidence over the subsequent centuries—have maintained the tradition with a fidelity that would be admirable were it not so contrary to the project to which Kant claimed to have devoted his life.
III. The Long Accumulation
The pattern Kant established drove investigation underground, to the margins, or into institutions sufficiently remote from mainstream academic life that their findings could be safely ignored. The Society for Psychical Research, founded at Cambridge in 1882, was one such institution: serious scholars conducting rigorous research whose results were greeted with dismissal by colleagues who had not troubled to examine them.
The founding members were heavyweight intellectuals of their era, men whose reputations in other domains remained unassailable even as their interest in psychical research invited condescension. Henry Sidgwick, the Society's first president, was Professor of Moral Philosophy at Cambridge and one of the most respected ethicists of his generation. Frederic Myers, who coined the term "telepathy" and whose theory of the subliminal self would influence Jung's conception of the unconscious, brought to the inquiry a combination of classical learning and psychological sophistication rare in any age. William James, who established the American branch of the Society and served as its president, was not only the most important psychologist in America but arguably its most important philosopher, the founder of pragmatism, a figure whose Principles of Psychology remained the standard text for decades.[10]
Their approach was empirical, statistical, painstaking. The Census of Hallucinations, published in 1894, surveyed seventeen thousand individuals across Britain and found that approximately ten percent reported experiencing hallucinations while awake and in good health. The critical finding concerned what the researchers termed "crisis apparitions"—hallucinations of persons occurring at or near the moment of their death, often at great distance. Statistical analysis, comparing the observed frequency of such coincidences against what mortality tables and chance alone would predict, demonstrated that they occurred far more often than any normal explanation could accommodate.[11]
The work was methodologically sophisticated for its era and it was ignored.
In the 1930s, Joseph Banks Rhine established the first university-based laboratory for parapsychological research at Duke University, bringing to the field the apparatus of experimental psychology: standardized stimuli, controlled conditions, statistical analysis of results. The Zener cards—five simple symbols repeated in a deck of twenty-five—permitted thousands of trials under conditions where sensory leakage could be progressively eliminated and chance expectation precisely calculated.[12]
The results, accumulated over decades and across thousands of subjects, showed small but persistent deviations from chance. In one extensively documented series, a divinity student named Hubert Pearce sat in a library cubicle while the experimenter, J. Gaither Pratt, handled cards in a separate building a hundred yards away. Neither man saw the faces of the cards during the trials; both recorded their responses independently; the records were sealed before comparison. Across 1,850 trials, Pearce achieved 558 hits against an expected 370—a hit rate exceeding thirty percent where chance predicted twenty, with calculated odds against coincidence of twenty-two billion to one.[13]
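The arithmetic behind such figures is not esoteric. A minimal sketch in Python, using the trial counts reported above; the exact odds quoted in published accounts of the series vary with the corrections applied.

```python
# A minimal sketch of the chance calculation behind the Pearce-Pratt figures.
# The counts (1,850 trials, 558 hits, one-in-five chance per guess) come from
# the account above; published odds for the series vary with the corrections used.
from scipy import stats

trials = 1850        # recorded guesses across the series
hits = 558           # matches between guess and card order
p_chance = 1 / 5     # five Zener symbols, so 20 percent expected by guessing

expected = trials * p_chance   # 370 hits expected by chance
hit_rate = hits / trials       # just over 30 percent observed

# One-sided exact binomial tail: probability of 558 or more hits under pure guessing.
p_value = stats.binom.sf(hits - 1, trials, p_chance)

print(f"expected {expected:.0f} hits, observed {hits} ({hit_rate:.1%})")
print(f"one-sided binomial p-value: {p_value:.1e}")
```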
The methodology was attacked on every conceivable ground: inadequate randomization, possible sensory cues, recording errors, statistical naivety. Rhine responded to each criticism by tightening his protocols. Cards were handled behind screens, then not handled at all until after guesses were recorded. Subjects were tested at increasing distances, then in separate buildings, then in separate cities. The statistical methods were reviewed by professional mathematicians and found sound.[14]
Nevertheless, these effects persisted. They diminished somewhat under the most stringent conditions—a pattern that critics attributed to the elimination of artifacts and that proponents attributed to the well-documented tendency of psi performance to decline under conditions of heightened scrutiny—but they did not vanish. Decade after decade, the database grew, but the dismissal continued.
Charles Honorton, working in the 1970s, introduced what would become the most extensively studied protocol in the field: the ganzfeld procedure, adapted from Gestalt psychology's research on perception. The theoretical premise was that telepathic signals, if they existed, were weak—easily overwhelmed by the noise of ordinary sensory processing. By reducing that noise through mild sensory deprivation—halved ping-pong balls taped over the eyes to create an undifferentiated visual field, white noise played through headphones to mask auditory variation—the signal might become detectable.[15]
The procedure was standardized with a precision unusual in psychology. A "receiver" entered the ganzfeld state while a "sender" in a separate, acoustically isolated room viewed a randomly selected target—typically an art print or video clip—and attempted to transmit its content mentally. After the session, the receiver examined four potential targets, including the actual one, and ranked them by correspondence to the imagery experienced during the ganzfeld period. With four options, chance performance was twenty-five percent. Across the accumulated studies, receivers selected the correct target approximately thirty-three percent of the time—an eight-point deviation from chance that, compounded across thousands of trials, reached levels of statistical significance that could not plausibly be attributed to luck.
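A deviation of this size is invisible in any single small study and overwhelming in aggregate. A brief sketch of that compounding follows, with session counts chosen for illustration rather than taken from the actual database.

```python
# Illustration only (not the composition of the actual database): how a 33 percent
# hit rate against a 25 percent chance baseline compounds with the number of sessions.
from math import sqrt
from scipy import stats

p0, p1 = 0.25, 0.33            # chance expectation vs. observed hit rate

for n in (40, 400, 4000):      # illustrative session counts
    se = sqrt(p0 * (1 - p0) / n)   # standard error of the hit rate under chance
    z = (p1 - p0) / se             # the standardized deviation grows with sqrt(n)
    p = stats.norm.sf(z)           # one-sided tail probability
    print(f"n={n:5d}  z={z:5.2f}  one-sided p={p:.1e}")
```

The same eight-point gap that cannot be distinguished from luck in forty sessions becomes unambiguous across several thousand.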
By the mid-1980s, the database had grown large enough to attract serious methodological scrutiny. Ray Hyman, a psychologist and prominent skeptic, published a detailed critique identifying potential flaws in the accumulated studies—inadequate randomization, insufficient documentation, possibilities for sensory leakage between sender and receiver.[16] Honorton responded with his own analysis, arguing that the methodological variations Hyman identified did not correlate with effect sizes, and that the overall pattern could not be explained by selective reporting or statistical artifact.[17]
What happened next was unprecedented in the history of this controversy. Hyman and Honorton, skeptic and proponent, entered into extended correspondence, examined each other's analyses, and in 1986 issued a joint statement. The key passage:
"There is an overall significant effect in this database that cannot reasonably be explained by selective reporting or multiple analysis."[18]
They disagreed about interpretation. Hyman believed that some as-yet-unidentified artifact would eventually account for the results. Honorton believed the effect was genuine. But both acknowledged, in a document signed by both parties and published in peer-reviewed journals, that something was happening—something statistically robust, something that resisted the standard explanations by which anomalous findings are typically dissolved.
The joint communiqué should have marked a turning point, but denial continued.
IV. What the Numbers Say
The ganzfeld database has now grown to encompass seventy-eight studies conducted between 1974 and 2020, involving forty-six different principal investigators working in laboratories across multiple continents. In 2021, a registered report meta-analysis—meaning the analytical methods were specified and peer-reviewed before the data were examined, eliminating any possibility that researchers had massaged their statistics to achieve desired results—synthesized the accumulated evidence.[19]
The numbers: a hit rate approximately seven percentage points above chance expectation, an effect size of 0.08 in standard units, a Bayes Factor of 89.5. An effect size of 0.08 is small—roughly one-twelfth of a standard deviation, a shift invisible in any individual experiment but compounding across thousands of trials into a pattern too consistent to attribute to noise. A Bayes Factor of 89.5 means the data are approximately ninety times more likely under the hypothesis that the effect is real than under the hypothesis that it results from chance alone; by convention, anything above ten constitutes strong evidence, and ninety approaches what methodologists term "decisive."[20]
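What a Bayes Factor of this size means in practice depends on the prior odds a reader brings to the question. A small sketch of that arithmetic, assuming the 89.5 figure from the meta-analysis and a range of illustrative priors:

```python
# What a Bayes Factor of 89.5 does and does not establish: it multiplies whatever
# prior odds a reader brings. The BF value is taken from the meta-analysis reported
# above; the prior probabilities below are illustrative assumptions.
def posterior_probability(prior_prob, bayes_factor):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = bayes_factor * prior_odds      # Bayes' rule in odds form
    return posterior_odds / (1 + posterior_odds)

BF = 89.5
for prior in (0.5, 0.10, 0.01, 0.001):              # from agnostic to deeply skeptical
    post = posterior_probability(prior, BF)
    print(f"prior P(effect) = {prior:5.3f}  ->  posterior = {post:.3f}")
```

A reader who begins at even odds ends near certainty; one who assigns the hypothesis a single chance in a hundred ends close to even odds; only a prior vanishingly close to zero keeps the posterior small.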
The obvious objection concerns publication bias—the tendency of journals to publish positive findings while negative results languish in file drawers, creating a literature that shows effects which do not exist in the underlying population of experiments. The meta-analysis addressed this with seven distinct statistical tests, each designed to detect different signatures of selective reporting. Six of the seven found no significant evidence of bias. The seventh detected a pattern consistent with mild publication bias but insufficient to account for the observed effect.[21]
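The report's own battery of tests is not reproduced here, but the logic of such checks can be illustrated with one standard example—an Egger-style regression, run on hypothetical study-level data rather than the actual seventy-eight studies.

```python
# One standard publication-bias check, an Egger-style regression, shown on
# hypothetical study-level data; the registered report used its own battery of
# seven tests, which are not reproduced here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_studies = 78
se = rng.uniform(0.05, 0.25, n_studies)    # hypothetical per-study standard errors
effects = rng.normal(0.08, se)             # simulated effects around 0.08, no bias built in

z = effects / se                           # standardized effects
precision = 1.0 / se
fit = sm.OLS(z, sm.add_constant(precision)).fit()

# With no small-study bias the intercept should sit near zero; a clearly nonzero
# intercept is the signature of funnel-plot asymmetry (small studies reporting
# systematically larger effects).
print(f"intercept = {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```

The test asks whether small, imprecise studies report systematically larger effects than large ones—the trace a file drawer of suppressed null results would leave on the literature.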
A separate consideration argues against selective reporting: only 26.5 percent of the studies in the database achieved conventional statistical significance. If researchers were selectively publishing successes and suppressing failures, the proportion would be far higher—closer to the eighty or ninety percent that characterizes literatures known to be contaminated by bias. The ganzfeld database looks like what an honest literature looks like (a rare thing in psychology): mostly null results punctuated by occasional successes, with successes occurring more often than chance alone would produce by a margin too large to dismiss.[22]
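The point can be made quantitative. If the true hit rate were around thirty-three percent, a study of typical size would reach conventional significance only a minority of the time. A sketch follows, in which the per-study session count and the true hit rate are assumptions chosen for illustration, not values taken from the database.

```python
# If the true hit rate were around 33 percent, how often would a study of typical
# size reach p < .05? The per-study session count and the true rate are assumptions
# chosen for illustration, not values taken from the database.
from scipy import stats

n_sessions = 40      # assumed size of a typical ganzfeld study
p_chance = 0.25
p_true = 0.33

# Smallest hit count that is significant at the 5 percent level under chance.
k = 0
while stats.binom.sf(k - 1, n_sessions, p_chance) > 0.05:
    k += 1

power = stats.binom.sf(k - 1, n_sessions, p_true)
print(f"threshold: {k} hits of {n_sessions}; expected share of significant studies: {power:.0%}")
```

Under these assumptions the expected share comes out at roughly a fifth to a quarter—nowhere near the eighty or ninety percent that a literature built on selective publication would show.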
Cumulative meta-analysis permits the tracking of effect size over time. The ganzfeld effect stabilized around 1997 and has remained essentially constant since—no decline as methodology improved, no attenuation as skeptics introduced more stringent controls. If the effect were an artifact of sloppy methods, tightening those methods should have diminished it. The opposite is observed: effect sizes from methodologically superior studies exceed those from earlier, less rigorous work.[23]
How large must an effect be to matter? Jacob Cohen, whose conventional benchmarks (small = 0.2, medium = 0.5, large = 0.8) have shaped a generation of psychological research, acknowledged that these categories are arbitrary—that practical significance depends on consequences, not abstract thresholds.[24] The Physicians' Health Study, which established aspirin as a standard intervention for preventing heart attacks and was terminated early due to what investigators termed "conclusive evidence," found an effect size of approximately 0.03—less than half the ganzfeld effect, explaining less than one-tenth of one percent of variance.[25] Millions take aspirin daily on that evidence. The effect is real; the effect is tiny; the effect saves lives. No one dismisses aspirin research as pseudoscience because its effects are too small to see with the naked eye.
Jessica Utts, who would later serve as president of the American Statistical Association—the world's largest community of professional statisticians, the body responsible for setting standards of evidence across the sciences—conducted the official government evaluation of remote viewing research in 1995. Her conclusion:
"Using the standards applied to any other area of science, it is concluded that psychic functioning has been well established. The statistical results of the studies examined are far beyond what is expected by chance. Arguments that these results could be due to methodological flaws in the experiments are soundly refuted."[26]
The phrasing bears emphasis: "using the standards applied to any other area of science." Utts was not making a claim about the paranormal. She was making a claim about consistency. The evidence for anomalous cognition, assessed by the same criteria that govern research in medicine and psychology and economics, meets or exceeds the threshold for acceptance. It is rejected not because it fails those criteria but because it threatens assumptions most scientists prefer not to question.
The rejection is sometimes frank about its premises. Arthur Reber and James Alcock, responding to a comprehensive 2018 review of parapsychological evidence in American Psychologist, wrote: "Claims made by parapsychologists cannot be true... Hence, data that suggest that they can are necessarily flawed and result from weak methodology or improper data analyses."[27]
The argument is circular but honest: we know the conclusion is impossible; therefore we know the evidence is wrong; the task is not to examine the evidence but to identify the flaw that must exist because the conclusion cannot be true. This bears no resemblance to science as understood by Popper or Kuhn or Lakatos or any philosopher who has reflected on what distinguishes empirical inquiry from theology. It is theology dressed in the language of methodology—a commitment to prior belief that no evidence is permitted to disturb.
V. The Government's Twenty Million Dollars
Between 1972 and 1995, the United States government spent approximately twenty million dollars investigating what its internal documents termed "anomalous cognition"—the apparent ability to perceive distant locations, future events, or hidden information through means that defy conventional explanation. The program operated under a succession of codenames—SCANATE, GONDOLA WISH, GRILL FLAME, CENTER LANE, SUN STREAK, and finally STARGATE—and was housed first at Stanford Research Institute, later at Science Applications International Corporation. It was funded by the Defense Intelligence Agency and later the Central Intelligence Agency, organizations not known for indulging claims that cannot be verified through conventional surveillance.[28]
The program's origins lay in Cold War anxiety. American intelligence had obtained reports suggesting that the Soviet Union was investing heavily in what Soviet documents called "psychotronics"—research into telepathy, remote viewing, and psychokinesis as potential tools of espionage and warfare. Estimates of Soviet expenditure ranged as high as three hundred million rubles annually by the mid-1970s, with dozens of research institutes reportedly devoted to the work.[29] The Americans, whatever their private skepticism, were unwilling to cede potential strategic advantage in a domain they did not understand. If the Soviets were developing psychic spies, the United States needed its own—or at minimum, needed to understand what was possible.
The methodology was demanding. In coordinate remote viewing, subjects received only latitude and longitude—two numbers specifying a location on Earth's surface—and were asked to describe what occupied that location. In blind judging protocols, independent evaluators who had no knowledge of the actual target ranked photographs against the viewer's descriptions. Remote viewers reportedly located downed aircraft, identified hostages, described foreign military installations, and sketched weapons systems—claims documented in declassified files though difficult to verify independently.[30]
Over two decades the program accumulated 154 formal experiments and more than 26,000 individual trials. When Jessica Utts evaluated the statistical evidence in 1995, she found combined significance that defied easy expression—p-values below 10⁻²⁰, odds against chance exceeding one hundred billion billion to one. Effect sizes ranged from modest to substantial, with some individual studies achieving correlations of 0.3 to 0.5—medium to large by any standard.[31]
Even Ray Hyman, the skeptic appointed to provide the adversarial evaluation the government required, could not dismiss the findings. "The case for psychic functioning seems better than it ever has been," he acknowledged. "I do not have a ready explanation for these observed effects."[32] He maintained that some artifact must exist—some methodological flaw, some pattern of sensory leakage—but he could not identify it, and he conceded that the hypothesis of flawed methodology had not been demonstrated.
The CIA terminated the program anyway, ostensibly because operational utility remained difficult to demonstrate. Intelligence work requires precision—coordinates, names, timetables. Remote viewing produced impressions accurate in general character but often vague in particulars: useful perhaps for generating leads, insufficient for targeting packages.[33]
Some operational cases remain in the record, neither confirmed nor debunked. In July 1974, a remote viewer named Pat Price was given only the coordinates of a Soviet facility at Semipalatinsk and asked to describe what he perceived. He reported—and sketched—a large building with a distinctive gantry crane, rail-mounted and massive, of the sort used to move extremely heavy objects. The sketch was specific enough that it triggered a congressional security investigation: how had a civilian psychic in California obtained what appeared to be classified information about Soviet nuclear weapons infrastructure?[34]
Joe McMoneagle, designated Remote Viewer #001 and one of the program's most consistent performers, received the Legion of Merit upon his retirement from the Army—a decoration not typically awarded for contributions to programs that accomplished nothing. The citation, worded to avoid explicit reference to classified activities, noted that he had provided intelligence information "unavailable from other sources."[35]
What the STARGATE program demonstrated was not that remote viewing works reliably enough to replace satellite reconnaissance—it does not—but that the phenomenon cannot be dismissed as noise. Something was happening that exceeded chance by margins too large for coincidence and too consistent across experimenters and protocols for isolated fraud. The evidence met standards that would be accepted anywhere else. It was rejected because the domain was forbidden.
VI. The Pseudoscience That Wasn't
There is an irony here that deserves attention, though it is rarely noted and never dwelt upon by those who dismiss parapsychology as methodologically naive. The field that mainstream science derides as pseudoscience was, in fact, methodologically ahead of the sciences that deride it.
Pre-registration—the practice of specifying hypotheses, methods, and analytical approaches before data collection, now recognized as essential for distinguishing genuine findings from artifacts of researcher flexibility—was required by the European Journal of Parapsychology beginning in 1975.[36] Mainstream psychology did not widely adopt pre-registration until after the replication crisis erupted in 2011, when it became undeniable that the field's published literature was contaminated by selective reporting, p-hacking, and the systematic suppression of null results. Parapsychologists were pre-registering their studies four decades before the rest of psychology was forced to confront the consequences of not doing so.
The Hyman-Honorton joint communiqué of 1986, in which skeptic and proponent agreed on methodological standards for ganzfeld research, established requirements—computer-automated target selection, complete sender-receiver isolation, two-experimenter designs, pre-specified statistical analyses—that anticipated by decades the reforms psychology would eventually implement under duress.[37] The "pseudoscience" was practicing methodological hygiene while the "real" science was producing literatures so contaminated that major findings—ego depletion, social priming, the facial feedback hypothesis—would later collapse under attempted replication.
When Daryl Bem published "Feeling the Future" in 2011, reporting experimental evidence for precognition in the Journal of Personality and Social Psychology, the psychology community responded with outrage.[38] How could a major journal publish evidence for the impossible? Critics dissected Bem's methods and found them wanting—but the methods were standard for psychology at the time. His statistics were conventional. His effect sizes were comparable to published work in social cognition. If his findings were invalid due to methodological weakness, then so was most of the literature his methods mirrored. The replication crisis confirmed exactly this: the problem was that Bem had done what was standard and expected, and the status quo was broken.
Bem's paper became the catalyst for methodological reform because the implications of finding evidence for precognition made the community willing to scrutinize methods it had accepted without question when the conclusions were congenial. The same statistical flexibility that had produced decades of "findings" in social psychology—findings now known to be largely spurious—had produced evidence for telepathy, and this was intolerable in a way that false findings about priming or stereotype threat were not.
Pre-registered presentiment studies—experiments measuring physiological arousal before randomly selected emotional stimuli—show larger effect sizes than non-pre-registered studies.[39] This is the opposite of what publication bias or questionable research practices would produce. Stricter methodology, in this domain, appears to clarify the signal rather than eliminate it. The pattern suggests either that the effect is genuine or that some artifact exists which grows stronger under tighter controls—an artifact no critic has yet identified.
With that said, the Transparent Psi Project, a 2023 adversarial collaboration with methodology approved by both believers and skeptics before any data were collected, found no significant precognition effect across nine laboratories and 2,097 participants.[40] A single null result does not negate decades of positive findings, but neither can it be dismissed. The honest assessment is that we do not know what is happening—that the data show something, and that something has not yet been explained.
What we can say with confidence is that the charge of methodological naivety, so often leveled at parapsychology by its critics, is projection. The field that pioneered pre-registration, that subjected itself to adversarial collaboration, that published its null results alongside its successes, that agreed to joint statements with its most vocal skeptics—this field has been dismissed as unscientific by disciplines whose own practices could not survive equivalent scrutiny. The pseudoscience, it turns out, was more rigorous than the science that mocked it.
VII. The Machine That Cannot Think
In 1950, the mathematician Alan Turing published "Computing Machinery and Intelligence" in the journal Mind—a paper that would become the founding document of artificial intelligence, the text that introduced what we now call the Turing Test and framed the questions that would occupy researchers in machine cognition for seventy-five years. The paper is canonical: required reading in computer science curricula, cited in thousands of subsequent works, analyzed in countless discussions of minds and machines.
In Section 6 of the paper, Turing considers objections to his proposal that machines might think. He addresses theological arguments, arguments from consciousness, arguments from the informality of human behavior, Lady Lovelace's objection that machines can only do what they are programmed to do. The ninth and final objection he considers is "The Argument from Extra-Sensory Perception." His treatment of it:
"I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies."[41]
Turing does not hedge. He does not speak of suggestive evidence or anomalous results that might reward further investigation. He states flatly that the statistical evidence for telepathy is "overwhelming"—a word that admits no qualification—and acknowledges that accepting this evidence would require rearranging one's scientific ideas in fundamental ways. He notes the psychological resistance ("how we should like to discredit them!") but does not permit that resistance to override his assessment of the data.
More remarkably, he takes the objection seriously enough to propose a solution. If telepathy exists—if human minds can access information through non-sensory channels—then his Imitation Game would be compromised. A human participant might read the interrogator's mind, gaining an advantage no machine could match. Turing's remedy: conduct the test in a "telepathy-proof room," whatever that might mean, to ensure that the comparison between human and machine remained fair.[42]
The response of subsequent commentators has been revealing. The passage is, as one chapter of The Turing Guide puts it, "so out of step with the rest of the paper that most writers on Turing (myself included) have tended to ignore it or gloss over it, while some editions omit it altogether."[43] The heresy is excised, as though Turing had never written it, as though the founding document of artificial intelligence did not contain an explicit acknowledgment that the evidence for telepathy is overwhelming and that accepting it would transform our understanding of mind.
Turing invented the conceptual framework within which machines are understood to process information—and in the same document acknowledged phenomena suggesting that information processing is not confined to physical systems operating through known causal mechanisms. He proposed the test by which machine intelligence would be evaluated—and recognized that the test would fail if human intelligence included capacities that machines could not, by their nature, replicate. He stands at the origin of every attempt to reduce mind to computation—and his own assessment of the evidence contradicted that reduction before the project began.
The pattern is Kant's pattern: acknowledge privately or in unguarded moments, suppress publicly, protect the coherence of the intellectual program against evidence that would disturb it. But where Kant at least had the honesty to confess his cowardice to Mendelssohn, modern commentators have sublimated the betrayal into something cleaner—editorial erasure, a quiet excision of inconvenient text, as though the history of ideas could be tidied by removing the passages that embarrass current orthodoxy.
What Turing saw, and what his successors have preferred not to see, is that the question of mind cannot be settled by stipulating what minds are permitted to do. If the evidence suggests that human cognition operates through channels that physical theory does not accommodate, then the evidence must be engaged, not erased. The alternative is to win the argument by refusing to read it—a victory indistinguishable from defeat.
The pattern extends across centuries and disciplines. Serious investigators examine the evidence, find it more compelling than expected, and either retreat from the topic or suffer marginalization. The evidence accumulates and the dismissal intensifies in proportion to the quality of the data, because good data is more threatening than noise. The taboo strengthens not despite the findings but because of them.
Swedenborg perceived a fire three hundred miles distant. Kant investigated, found the evidence beyond doubt, and published ridicule. Rhine accumulated decades of statistically significant results and the academy ignored him. Honorton and Hyman agreed that the ganzfeld effect was real and could not be explained as artifact; the agreement was forgotten. The U.S. government spent twenty million dollars and twenty-three years investigating remote viewing, found effects that exceeded conventional explanation, and terminated the program rather than confront the implications. Turing acknowledged that the evidence for telepathy was overwhelming, and his acknowledgment is excised from editions of the paper that created artificial intelligence.
The question is no longer whether the phenomena are real. The data are what they are; they have been examined by statisticians of the highest competence and found to meet standards applied in every other domain of empirical inquiry. The question is now what it would mean to take them seriously—what revisions to our understanding of mind and matter would be required, what institutional commitments would be threatened, what reputations would be damaged by the admission that the evidence was compelling all along and was suppressed not because it was weak but because it was strong.
The cost of such an admission would be substantial. The cost of refusing it may be higher. A science that cannot engage with evidence that contradicts its assumptions is not a science but a priesthood, guarding orthodoxy against heresy. The evidence that won't disappear will continue to accumulate, as it has accumulated for three centuries, waiting for an intellectual culture willing to look at what it shows.
[1] The guest count of fifteen appears in multiple sources derived from contemporary accounts, including William White's 1868 biography of Swedenborg.
[2] Great Stockholm Fire of 1759, Wikipedia; Swedenborg Studies 2002: "On the Shoulders of Giants."
[3] Ibid. The timing and content of Swedenborg's announcements were recorded by multiple witnesses.
[4] Great Stockholm Fire of 1759, Wikipedia. Historical records confirm approximately 300 houses destroyed.
[5] A contemporary biographer noted Kant "never penned a sentence in his Critique of Pure Reason without reading it to Green."
[6] Kant's letter to Charlotte von Knobloch, variously dated 1763 or 1768.
[7] Immanuel Kant, Träume eines Geistersehers, erläutert durch Träume der Metaphysik, 1766.
[8] Kant's letter to Moses Mendelssohn, quoted in multiple analyses of the Kant-Swedenborg relationship.
[9] Letter from Kant to Mendelssohn, April 8, 1766.
[10] Society for Psychical Research founding documented in multiple sources.
[11] Eleanor Sidgwick et al., "Report on the Census of Hallucinations," Proceedings of the SPR 10 (1894).
[12] Rhine's work at Duke is documented extensively; the Zener card methodology became standard.
[13] Psi Encyclopedia entry on the Pearce-Pratt Experiment (1933-1934).
[14] The methodological evolution of Rhine's protocols is documented in multiple historical treatments.
[15] Charles Honorton's development of the ganzfeld protocol is documented in his original publications.
[16] Ray Hyman, "The Ganzfeld Psi Experiment: A Critical Appraisal," Journal of Parapsychology 49 (1985).
[17] Charles Honorton, "Meta-Analysis of Psi Ganzfeld Research," Journal of Parapsychology 49 (1985).
[18] Hyman-Honorton Joint Communiqué (1986), Journal of Parapsychology.
[19] Tressoldi and Storm, "Registered Report Meta-Analysis of Ganzfeld Studies," analyzing 78 studies (1974-2020).
[20] Bayes Factor interpretation standards: BF > 10 typically "strong evidence," BF > 100 "decisive."
[21] Publication bias tests detailed in the registered report methodology section.
[22] The 26.5% significance rate resembles what an unbiased literature looks like.
[23] Cumulative meta-analysis in Storm, Tressoldi, and Di Risio (2010) and subsequent updates.
[24] Jacob Cohen, Statistical Power Analysis for the Behavioral Sciences, 2nd ed. (1988).
[25] Sullivan and Feinn, "Using Effect Size," Journal of Graduate Medical Education.
[26] Jessica Utts, "An Assessment of the Evidence for Psychic Functioning," JSE 10, no. 1 (1996).
[27] Reber and Alcock, response to Cardeña, American Psychologist 73, no. 5 (2018).
[28] STARGATE program history documented in declassified CIA documents and academic analyses.
[29] Soviet psychotronics estimates from U.S. intelligence assessments.
[30] Coordinate Remote Viewing methodology documented in SRI and SAIC program reports.
[31] Utts, "Assessment of the Evidence for Psychic Functioning"; statistical summary of findings.
[32] Ray Hyman, American Institutes for Research evaluation report, 1995.
[33] CIA termination rationale documented in program closure documents.
[34] Pat Price Semipalatinsk session documented in Psi Encyclopedia.
[35] Joe McMoneagle Legion of Merit citation; documented in multiple sources.
[36] Martin Johnson's 1975 pre-registration requirements in the European Journal of Parapsychology.
[37] Hyman-Honorton Joint Communiqué (1986), establishing methodological standards.
[38] Daryl Bem, "Feeling the Future," JPSP 100, no. 3 (2011): 407-425.
[39] Mossbridge, Tressoldi, and Utts, presentiment meta-analysis, Frontiers in Psychology (2012).
[40] Transparent Psi Project, Royal Society Open Science (2023).
[41] Alan Turing, "Computing Machinery and Intelligence," Mind 59 (1950): 433-460.
[42] Ibid. Turing's proposed solution: "To put the competitors into a 'telepathy-proof room.'"
[43] "Turing and the Paranormal," in The Turing Guide (Oxford Academic, 2017).