Part II: Speed
The Ultimate Rock, Paper, Scissors Strategy
60 percent of the time, it works every time
By Colin Schultz
[Time 2]
In China, a team of researchers tapped 360 students to try to crack an ever-important nut: how do people play Rock, Paper, Scissors? And what's the best strategy? Based on their study, says the Washington Post, at the population level Rock, Paper, Scissors strategies follow a relatively simple pattern: people start by picking each option (rock, paper, or scissors) about one-third of the time. You can’t really game this stage. But after the first round:
• If a player wins, he will usually stick with the same play.
• If a player loses, he will usually switch actions in “a clockwise direction”: rock changes to paper, paper to scissors, scissors to rock.
So that's it. If you know what someone will play next, it's easy to counter and achieve a grand victory. But wait, what if they know the strategy, too? And they try to predict and outsmart your next move? But then you, knowing that they know, try to preempt their prediction? Then they, knowing you know they know... Actually, if this all sounds too simple, that's because it probably is. People won't just keep riding a Paper victory streak into the sunset. Instead, says Graham Walker of the World Rock Paper Scissors Society (via this old Mental Floss post), people playing Rock, Paper, Scissors like to think they're being random. They aren't. “People hate being predictable and the perceived hallmark of predictability is to come out with the same throw three times in a row,” he says. [246 words]
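The win-stay, lose-shift-clockwise pattern described above can be turned into a simple counter-strategy: predict the opponent's next throw from their last throw and last outcome, then play whatever beats the prediction. This is only a sketch of the pattern as the article summarizes it; the function names are illustrative, not from the study.

```python
# Counter-strategy sketch based on the population-level pattern:
# winners tend to repeat their throw, losers tend to shift clockwise
# (rock -> paper -> scissors -> rock).

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value
COUNTER = {loser: winner for winner, loser in BEATS.items()}        # throw that beats key
CLOCKWISE = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def predict_next(opponent_last, opponent_won):
    """Predict the opponent's next throw from their last round."""
    if opponent_won:
        return opponent_last           # winners usually stick with the same play
    return CLOCKWISE[opponent_last]    # losers usually shift clockwise

def best_response(opponent_last, opponent_won):
    """Play the throw that beats the predicted next throw."""
    return COUNTER[predict_next(opponent_last, opponent_won)]
```

For example, if the opponent just won with rock, the pattern predicts rock again, so paper is the response; if they just lost with rock, the pattern predicts a shift to paper, so scissors is the response.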
[Time 3]
When playing with someone who is not experienced at RPS, look out for double runs, that is, the same throw twice in a row. When this happens you can safely eliminate that throw and guarantee yourself at worst a stalemate in the next game. So, when you see a two-Scissors run, you know their next move will be Rock or Paper, so Paper is your best move. The researchers in China weren't just trying to work out the strategy to a schoolyard game, though. They were using Rock, Paper, Scissors as a way to study people's behavior when making decisions in “non-cooperative strategic interactions.” They were testing which of two broad strategies people use: either trying to play truly randomly, or playing in an evolutionary way, with strategies shifting depending on the outcome. (It was the latter.) Still, as good as your strategy may be, you're never going to beat this Rock, Paper, Scissors-playing robot. Sorry. [158 words]
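The double-run rule above generalizes neatly: if a throw is off the table, the move that cannot lose is the one the repeated throw itself beats (after two Scissors, that is Paper, which beats Rock and ties Paper). A minimal sketch of that rule, with names of my own choosing:

```python
# Walker's double-run rule as a lookup: after an opponent throws the
# same thing twice, assume they won't throw it a third time. Playing
# the throw that the repeated one beats then ties or wins, never loses.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def safe_move(repeated_throw):
    """After a double run of `repeated_throw`, this move cannot lose."""
    return BEATS[repeated_throw]

def never_loses(move, possible_opponent_throws):
    """Check the guarantee: `move` is never beaten by any remaining throw."""
    return all(BEATS[throw] != move for throw in possible_opponent_throws)
```

After a two-Scissors run the remaining throws are Rock and Paper, and `safe_move("scissors")` returns `"paper"`, matching the article's advice.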
Source: Smithsonian
http://www.smithsonianmag.com/smart-news/ultimate-rock-paper-scissor-strategy-180951322/
Top 3 Ways the Media Screws Up Reporting Science
Avoid these common traps the next time you see a scientific headline
By David Rettew, M.D. | April 29, 2014
[Time 4]
Another day, another headline about the latest tantalizing research study. Could Tylenol cause ADHD? Could TV result in behavioral problems? Could parental involvement cause poorer school achievement? As someone who gets asked to review articles submitted to scientific journals and decide whether or not to publish them, I’ve learned how to dig a little deeper. If you think, however, that the “peer review” process means that scientists can’t publish a study that turns out to be completely misleading or wrong in its conclusions, think again. Furthermore, even when an article goes to great lengths to point out its own flaws and qualify its results, these subtleties are often the first to go when a media outlet is trying to make a splashy story. To be fair, many television, radio, and internet reporters are doing their best to walk a fine line between creating interest and not overselling a study. Nevertheless, here is my view on the three most common ways the media can get it wrong when covering medical and psychological studies. [196 words]
[Time 5]
#3. Inflating risk. Studies looking for disease risk factors often report their results something like this: “People who consume green M&Ms are at twice the risk of developing green skin disorder (GSD).” While that may be true (although obviously not in this case, since I made it up), it is critical to know first what the baseline incidence of the disorder is. If GSD occurs in one in a million people, a doubling of the risk (which sounds pretty bad) means that those who eat green M&Ms have GSD at a rate of 2 in a million: hardly a smoking gun in terms of a risk factor determining a particular illness.
#2. Misleading titles. Everyone in the media knows that a good title is critical to get people to read content (which is why I spent so long on mine). Titles need to grab you and they need to be short, which means that more nuanced ideas aren’t going to fit. Author Mark Bittman wrote a short piece for the New York Times discussing a recent study showing that the link between saturated fat consumption and heart disease may not be as strong as we thought. The title of the original study was a riveting “Association of Dietary, Circulating, and Supplement Fatty Acids With (are you yawning yet?) Coronary Risk: A Systematic Review and Meta-analysis.” The Times piece that discussed the article was called “Butter Is Back.” Indeed, the actual review was quite reasonable and measured, but the title gives a very different impression of what was actually contained in the article, let alone the original research study. [270 words]
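The arithmetic behind the risk-inflation point in #3 is worth making concrete: a doubled relative risk on a tiny baseline is still a tiny absolute risk. A quick sketch using the article's made-up GSD numbers (the variable names are mine):

```python
# Relative vs. absolute risk, using the article's invented example:
# baseline incidence of 1 in a million, and a "twice the risk" headline.

baseline_risk = 1 / 1_000_000   # 1 in a million baseline incidence of GSD
relative_risk = 2.0             # the scary-sounding "doubled risk"

absolute_risk = baseline_risk * relative_risk           # 2 in a million
extra_cases_per_million = (absolute_risk - baseline_risk) * 1_000_000
# About one extra case per million green-M&M eaters: the headline
# "twice the risk" and the reality "one extra case in a million"
# describe the same finding.
```

The same relative-risk figure would mean something very different if the baseline were, say, 1 in 10, which is why the baseline incidence is the first thing to check.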
[Time 6]
#1. Concluding causation from association. This one happens all the time. These provocative studies generally admit that they can’t prove causation, but that usually gets lost in the proverbial fine print when it comes to reporting. And while this flaw may sound like methodological minutiae, the problem really has the potential to render an entire study invalid. The issue is usually one of two things: a) not being able to answer the timeless chicken-or-the-egg dilemma, or b) a lurking, unmeasured variable that was the real driver of the association. A great example of the chicken/egg problem can be found in the many studies that show a link between television viewing and attention problems. While it may be true that excessive screen time causes attention problems (which is the way the stories are generally spun), it may also be true that those with existing attention problems are more drawn to the stimulation of television and video games. An example of the unmeasured variable is the recent stir over a study linking taking Tylenol in pregnancy to child ADHD. While Tylenol might be the cause of increased ADHD, higher rates of ADHD might also be related to the reason a mom took Tylenol in the first place. The authors tried to measure some of those things but admitted there could have been some that were missed. This distinction is critical, because if the cause was really the underlying reason someone took acetaminophen rather than the medication itself, then the message for pregnant moms to stop taking it could actually be exactly the wrong thing to do. Science, and behavioral science in particular, is really complicated, with many moving parts that need to be controlled or at least accounted for.
Those of us who have the privilege of trying to synthesize this information for people less familiar with how research works need to be very careful with those interpretations, even at the risk of sounding a bit more boring and wishy-washy. [337 words]
Source: Psychology Today
http://www.psychologytoday.com/collections/201404/april-26-may-2/how-the-media-screws-science?tr=HomeColItem