Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Weekly LW Meetups: Pittsburgh, Sao Paulo, Seattle, Sydney

0 points · FrankAdamek · 09 March 2012 11:17PM

There are upcoming irregularly scheduled Less Wrong meetups in:

  • Pittsburgh Meetup: Big Gaming Fun 4!: 11 March 2012 12:00PM
  • [Seattle] Queueing and More: 11 March 2012 03:00PM
  • Sydney - Core sequences: 13 March 2012 06:00PM
  • São Paulo Meetup: 14 March 2012 07:00PM
  • Brussels meetup: 17 March 2012 11:15AM
  • Atlanta: 17 March 2012 05:30PM
  • Fort Collins Meetup Saturday 17th: 17 March 2012 05:00PM
  • [Ohio/Washington DC] Interest in Reason Rally meetup?: 24 March 2012 04:14PM

The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

  • Ohio Monthly: 17 March 2012 03:00PM

Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Seattle, Toronto, Waterloo, and West Los Angeles.

    I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias

    43 points · Kaj_Sotala · 08 March 2012 05:39AM

    Abstract: "Close-call counterfactuals", claims of what could have happened but didn't, can be used either to defend a belief or to attack it. People have a tendency to reject counterfactuals as improbable when those counterfactuals threaten a belief (the "I was not almost wrong" defense), but to embrace counterfactuals that support a belief (the "I was almost right" defense). This behavior is strongest in people who score high on a test for need for closure and simplicity. Exploring counterfactual worlds can be used to reduce overconfidence, but it can also lead to logically incoherent answers, especially in people who score low on a test for need for closure and simplicity.

    “I was not almost wrong”

    Dr. Zany, the Nefarious Scientist, has a theory which he intends to use to achieve his goal of world domination. “As you know, I have long been a student of human nature”, he tells his assistant, AS-01. (Dr. Zany has always wanted to have an intelligent robot as his assistant. Unfortunately, for some reason all the robots he has built have only been interested in eradicating the color blue from the universe. And blue is his favorite color. So for now, he has resorted to just hiring a human assistant and referring to her with a robot-like name.)

    “During my studies, I have discovered the following. Whenever my archnemesis, Captain Anvil, shows up at a scene, the media will very quickly show up to make a report about it, and they prefer to send the report live. While this is going on, the whole city – including the police forces! – will be captivated by the report about Captain Anvil, and neglect to pay attention to anything else. This happened once, and a bank was robbed on the other side of the city while nobody was paying any attention. Thus, I know how to commit the perfect crime – I simply need to create a diversion that attracts Captain Anvil, and then nobody will notice me. History tells us that this is the inevitable outcome of Captain Anvil showing up!”

    But to Dr. Zany's annoyance, AS-01 is always doubting him. Dr. Zany has often considered turning her into a brain-in-a-vat as punishment, but she makes the best tuna sandwiches Dr. Zany has ever tasted. He's forced to tolerate her impudence, or he'll lose that culinary pleasure.

    “But Dr. Zany”, AS-01 says. “Suppose that some TV reporter had happened to be on her way to where Captain Anvil was, and on her route she saw the bank robbery. Then part of the media attention would have been diverted, and the police would have heard about the robbery. That might happen to you, too!”

    Dr. Zany's favorite belief is now being threatened. It might not be inevitable that Captain Anvil showing up will actually let criminals elsewhere act unhindered! AS-01 has presented a plausible-sounding counterfactual, “if a TV reporter had seen the robbery, then the city's attention would have been diverted to the other crime scene”. Although the historical record does not show that Dr. Zany's theory is wrong, the counterfactual suggests that he might be almost wrong.

    There are now three tactics that Dr. Zany can use to defend his belief (warrantedly or not):

    1. Challenge the mutability of the antecedent. Since AS-01's counterfactual is of the form “if A, then B”, Dr. Zany could question the plausibility of A.

    “Baloney!” exclaims Dr. Zany. “No TV reporter could ever have wandered past, let alone seen the robbery!”

    That seems a little hard to believe, however.

    2. Challenge the causal principles linking the antecedent to the consequent. Dr. Zany is not logically required to accept the “then” in “if A, then B”. There are always unstated background assumptions that he can question.

    “Humbug!” shouts Dr. Zany. “Yes, a reporter could have seen the robbery and alerted the media, but given the choice between covering such a minor incident and continuing to report on Captain Anvil, they would not have cared about the bank robbery!”

    3. Concede the counterfactual, but insist that it does not matter for the overall theory.

    “Inconceivable!” yelps Dr. Zany. “Even if the city's attention had been diverted to the robbery, the robbers would have escaped by then! So Captain Anvil's presence would have allowed them to succeed regardless!”


    Empirical work suggests that it's not only Dr. Zany who wants to stick to his beliefs. Let us for a moment turn our attention away from supervillains, and look at professional historians and analysts of world politics. In order to make sense of something as complicated as world history, experts resort to various simplifying strategies. For instance, one explanatory schema is called neorealist balancing. Neorealist balancing claims that “when one state threatens to become too powerful, other states coalesce against it, thereby preserving the balance of power”. Among other things, it implies that Hitler's failure was predetermined by a fundamental law of world politics.

      Using degrees of freedom to change the past for fun and profit

      38 points · CarlShulman · 07 March 2012 02:51AM

      Follow-up to: Follow-up on ESP study: "We don't publish replications", Feed the Spinoff Heuristic!

      Related to: Parapsychology: the control group for science, Dealing with the high quantity of scientific error in medicine

      Using the same method as in Study 1, we asked 20 University of Pennsylvania undergraduates to listen to either “When I’m Sixty-Four” by The Beatles or “Kalimba.” Then, in an ostensibly unrelated task, they indicated their birth date (mm/dd/yyyy) and their father’s age. We used father’s age to control for variation in baseline age across participants. An ANCOVA revealed the predicted effect: According to their birth dates, people were nearly a year-and-a-half younger after listening to “When I’m Sixty-Four” (adjusted M = 20.1 years) rather than to “Kalimba” (adjusted M = 21.5 years), F(1, 17) = 4.92, p = .040

      That's from "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant," which runs simulations of a version of Shalizi's "neutral model of inquiry," with random (null) experimental results, augmented with a handful of choices in the setup and analysis of an experiment. Even before accounting for publication bias, these few choices produced a desired result "significant at the 5% level" 60.7% of the time, and at the 1% level 21.5% of the time.
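
      The mechanism is easy to reproduce. Below is a minimal Monte Carlo sketch of just one such degree of freedom: measuring two correlated dependent variables and reporting whichever t-test comes out significant, under a true null. This is an illustrative sketch rather than the paper's actual code, and the parameters (20 participants per group, r = 0.5 between the two measures) are assumed; even this single choice roughly doubles the nominal 5% false-positive rate, and stacking several such choices is what yields the 60.7% figure quoted above.

```python
# Illustrative sketch (not the paper's code): under a true null hypothesis,
# measuring two correlated outcomes and reporting whichever t-test "works"
# inflates the nominal 5% false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, alpha = 10_000, 20, 0.05
cov = [[1.0, 0.5], [0.5, 1.0]]          # two dependent variables, correlated at r = 0.5
false_positives = 0

for _ in range(n_sims):
    group_a = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    group_b = rng.multivariate_normal([0, 0], cov, size=n_per_group)
    p1 = stats.ttest_ind(group_a[:, 0], group_b[:, 0]).pvalue
    p2 = stats.ttest_ind(group_a[:, 1], group_b[:, 1]).pvalue
    if min(p1, p2) < alpha:             # report whichever measure came out "significant"
        false_positives += 1

print(f"false-positive rate: {false_positives / n_sims:.3f}")  # roughly 0.09, not 0.05
```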

      I found it because of another paper claiming time-defying effects, during a search through all of the papers on Google Scholar citing Daryl Bem's precognition paper, which I discussed in a past post about the problems of publication bias and selection over the course of a study. For Bem, Richard Wiseman established a registry for the methods, and tests of the registered studies could be set prior to seeing the data (in addition to avoiding the file drawer).

      Now a number of purported replications have been completed, with several available as preprints online, including a large "straight replication" carefully following the methods in Bem's paper, with some interesting findings discussed below. The picture does not look good for psi, and is a good reminder of the sheer cumulative power of applying a biased filter to many small choices.

        How to Fix Science

        37 points · lukeprog · 07 March 2012 02:51AM

        Like The Cognitive Science of Rationality, this is a post for beginners. Send the link to your friends!


        Science is broken. We know why, and we know how to fix it. What we lack is the will to change things.

         

        In 2005, several analyses suggested that most published results in medicine are false. A 2008 review showed that perhaps 80% of academic journal articles mistake "statistical significance" for "significance" in the colloquial meaning of the word, an elementary error every introductory statistics textbook warns against. This year, a detailed investigation showed that half of published neuroscience papers contain one particular simple statistical mistake.
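
        To see how "statistical significance" can come apart from significance in the colloquial sense, consider a small illustrative simulation (not taken from any of the reviews cited above; the effect size and sample size are assumed for illustration): with a large enough sample, an effect far too small to matter in practice still clears p < .05.

```python
# Illustration only: a practically negligible effect becomes
# "statistically significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000                                          # participants per group
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)    # true effect: 0.02 SD, trivially small

result = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p = {result.pvalue:.2g}, Cohen's d = {cohens_d:.3f}")
# Typically p is far below .05 even though d is about 0.02:
# "statistically significant", but not significant in the everyday sense.
```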

        Also this year, a respected senior psychologist published in a leading journal a study claiming to show evidence of precognition. The editors explained that the paper was accepted because it was written clearly and followed the usual standards for experimental design and statistical methods.

        Science writer Jonah Lehrer asks: "Is there something wrong with the scientific method?"

        Yes, there is.

        This shouldn't be a surprise. What we currently call "science" isn't the best method for uncovering nature's secrets; it's just the first set of methods we've collected that wasn't totally useless like personal anecdote and authority generally are.

        As time passes we learn new things about how to do science better. The Ancient Greeks practiced some science, but few scientists tested hypotheses against mathematical models before Ibn al-Haytham's 11th-century Book of Optics (which also contained hints of Occam's razor and positivism). Around the same time, Al-Biruni emphasized the importance of repeated trials for reducing the effect of accidents and errors. Galileo brought mathematics to greater prominence in scientific method, Bacon described eliminative induction, Newton demonstrated the power of consilience (unification), Peirce clarified the roles of deduction, induction, and abduction, and Popper emphasized the importance of falsification. We've also discovered the usefulness of peer review, control groups, blind and double-blind studies, plus a variety of statistical methods, and added these to "the" scientific method.

        In many ways, the best science done today is better than ever — but it still has problems, and most science is done poorly. The good news is that we know what these problems are and we know multiple ways to fix them. What we lack is the will to change things.

        This post won't list all the problems with science, nor will it list all the promising solutions for any of these problems. (Here's one I left out.) Below, I only describe a few of the basics.

          Rationality Quotes March 2012

          3 points · Thomas · 03 March 2012 08:04AM

          Here's the new thread for posting quotes, with the usual rules:

          • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
          • Do not quote yourself
          • Do not quote comments/posts on LW/OB
          • No more than 5 quotes per person per monthly thread, please.

           

            Get Curious

            47 points · lukeprog · 24 February 2012 05:10AM

            Being levels above in [rationality] means doing rationalist practice 101 much better than others [just like] being a few levels above in fighting means executing a basic front-kick much better than others.

            - lessdazed

            I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.

            - Bruce Lee

            Recently, when Eliezer wanted to explain why he thought a certain person was among the best rationalists he knew, he picked out one feature of their behavior in particular:

            I see you start to answer a question, and then you stop, and I see you get curious.

            For me, the ability to reliably get curious is the basic front-kick of epistemic rationality. The best rationalists I know are not necessarily those who know the finer points of cognitive psychology, Bayesian statistics, and Solomonoff Induction. The best rationalists I know are those who can reliably get curious.

            Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

            A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

            If you haven't seen this question before and you're like most people, your brain screams "10 cents!" But elementary algebra shows that can't be right. The correct answer is 5 cents. To get the right answer, I explained, you need to interrupt your intuitive judgment and think "No! Algebra."
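
            For completeness, here is the algebra spelled out as a small sympy check (an added illustration, working in cents so the numbers stay exact):

```python
# Quick check of the bat-and-ball arithmetic, in cents (illustrative).
from sympy import Eq, solve, symbols

bat, ball = symbols("bat ball")
answer = solve(
    [Eq(bat + ball, 110),    # together they cost $1.10
     Eq(bat, ball + 100)],   # the bat costs $1.00 more than the ball
    [bat, ball],
)
print(answer)  # {bat: 105, ball: 5} -> the ball costs 5 cents, not 10
```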

            A lot of rationalist practice is like that. Whether thinking about physics or sociology or relationships, you need to catch your intuitive judgment and think "No! Curiosity."

            Most of us know how to do algebra. How does one "do" curiosity?

            Below, I propose a process for how to "get curious." I think we are only just beginning to learn how to create curious people, so please don't take this method as Science or Gospel but instead as an attempt to Just Try It.

            As with my algorithm for beating procrastination, you'll want to practice each step of the process in advance so that when you want to get curious, you're well-practiced on each step already. With enough practice, these steps may even become habits.

              Quantified Health Prize results announced

              39 points · Zvi · 19 February 2012 08:10AM

               

              Follow-up to: Announcing the Quantified Health Prize

              I am happy to announce that Scott Siskind, better known on Less Wrong as Yvain, has won the first Quantified Health Prize, and Kevin Fischer has been awarded second place. There were exactly five entries, so the remaining prizes will go to Steven Kaas, Kevin Keith and Michael Buck Shlegeris.

              The full announcement can be found here until the second contest is announced, and is reproduced below the fold. 

              While we had hoped to receive more than five entries, I feel strongly that we still got our money’s worth and more. Scott Siskind and Kevin Fischer in particular put in a lot of work, and provided largely distinct sets of insight into the question. In general, it is clear that much time was usefully spent, and all five entries had something unique to contribute to the problem.

              We consider the first contest a success, and intend to announce a second contest in the next few weeks that will feature multiple questions and a much larger prize pool.

              Discussion of all five entries follows:

              5th Place ($500): Full report

              Steven Kaas makes a well-reasoned argument for selenium supplementation. That obviously wasn't a complete entry. It's very possible this was a strategic decision in the hopes there would be fewer than five other entries, and if so it was a smart gamble that paid off. I sympathize with his statements on the difficulty of making good decisions in this space.

              4th Place ($500): Full report

              Kevin Keeth’s Recommendation List is as follows: “No quantified recommendations were developed. See ‘Report Body’ for excruciating confession of abject failure.” A failure that is admitted to and analyzed with such honesty is valuable, and I’m glad that Kevin submitted an entry rather than giving up, even though he considered his entry invalid and failure is still failure. Many of the concerns raised in his explanation are no doubt valid concerns. I do think it is worth noting that a Bayesian approach is not at a loss when the data is threadbare and the probabilistic consequences of actions are highly uncertain. Indeed, this is where a Bayesian approach is most vital, as other methods are forced to throw up their hands. Despite the protests, Kevin does provide strong cases against supplementation of a number of trace minerals that were left unconsidered by other entries, which is good confirmation to have.
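
              As a toy illustration of that point (not part of the contest write-up; the counts and utilities below are hypothetical): even a handful of observations yields a posterior, and hence an expected-value comparison, where a significance test would simply refuse to answer.

```python
# Toy sketch (hypothetical numbers): a Bayesian update on threadbare data.
# Suppose a supplement appeared to help in 2 of 3 informal trials. A Beta-Binomial
# posterior still gives a usable probability and an expected-value decision.
from scipy import stats

successes, failures = 2, 1                            # only three observations
posterior = stats.beta(1 + successes, 1 + failures)   # uniform Beta(1, 1) prior plus the data

p_helps = posterior.mean()                 # (1 + 2) / (2 + 3) = 0.6
benefit_if_helps, cost = 10.0, 2.0         # hypothetical utilities
expected_value = p_helps * benefit_if_helps - cost
print(f"P(helps) = {p_helps:.2f}, expected value of supplementing = {expected_value:.1f}")
```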

              3rd Place ($500): Full report

              Michael Buck Shlegeris chose to consider only five minerals, but made reasonable choices of which five to include. None of the recommendations made on those five seem unreasonable, but the reasoning leading to them is unsound. This starts with the decision to exclude studies with fewer than a thousand participants. While larger sample sizes are obviously better (all else being equal), larger studies also tend to be retrospective, longitudinal monitoring studies and meta-analyses. The conclusions in each section are not justified by the sources cited, and the RDI (while a fine starting point) is leaned on too heavily. There is no cost/benefit analysis, nor are the recommendations quantified. This is a serious entry, but one that falls short.

              2nd Place ($1000): Full report

              Kevin Fischer provides a treasure trove of information, teasing out many fine details that the other entries missed, and advocates an alternate approach that treats supplementation as a last resort far inferior to dietary modifications. Many concerns were raised about method of intake, ratios of minerals, absorption, and the various forms of each mineral. This is impressive work. There is much here that we will need to address seriously in the future, and we’re proud to have added Kevin Fischer to our research team; he has already been of great help, and we will no doubt revisit these questions.

              Unfortunately, this entry falls short in several important ways. An important quote from the paper:

              "“Eat food high in nutrients” represents something like the null hypothesis on nutrition - human beings were eating food for millions of years before extracting individual constituents was even possible. “Take supplements” is the alternative hypothesis.

              This is an explicitly frequentist, and also Romantic, approach to the issue. Supplementation can go wrong, but so can whole foods, and there’s no reason to presume that what we did, or are currently doing with them, is ideal. Supplementation is considered only as a last resort, after radical dietary interventions have “failed,” and no numbers or targets for it are given. No cost-benefit analysis is done on either supplementation or on the main recommendations.

              Winner: Scott Siskind / Yvain ($5000): Full report

              Scott Siskind’s entry was not perfect, but it did a lot of things right. An explicit cost/benefit analysis was given, which was very important. The explanations of the origins of the RDAs were excellent, and overall the analysis of various minerals was strong, although some factors found by Kevin were missed. Two of the analyses raised concerns worth noting: potassium and sodium.

              On sodium, the concern was that the analysis treated the case as clear cut when it was not; there have been challenges to salt
