Fiedler on the Replicability Project

This was originally posted to the ISCON Facebook page; I repost it here in its entirety:


Klaus Fiedler has granted me permission to share a letter that he wrote to a reporter (Bruce Bowers) in response to the replication project. This letter contains Klaus’s words only; the only part I edited was to remove his phone number. I thought this would be of interest to the group.

These are his words on the 2015 “Estimating the Reproducibility of Psychological Science” article.


Dear Bruce:

Thanks for your email. You can call me tomorrow, but I guess what I have to say is summarized in this email.

Before I try to tell it like it is, I ask you to please attend to my arguments, not just the final evaluations, which may appear unbalanced. So if you want to include my statement in your article, maybe along with my name, I would ask you not to detach my evaluative judgment from the arguments that in my opinion inevitably lead to my critical evaluation.

First of all I want to make it clear that I have been a big fan of properly conducted replication and validation studies for many years – long before the current hype of what one might call a shallow replication research program. Please note also that one of my own studies has been included in the present replication project; the original findings have been borne out more clearly than in the original study. So there is no self-referent motive for me to be overly critical.

However, I have to say that I am more than disappointed by the present report. In my view, such an expensive, time-consuming, and resource-intensive replication study, which can be expected to receive so much attention and to have such a strong impact on the field and on its public image, should live up (at least) to the same standards of scientific scrutiny as the studies that it evaluates. I’m afraid this is not the case, for the following reasons …

The rationale is to plot the effect size of replication results as a function of original results. Such a plot is necessarily subject to regression toward the mean. On a priori grounds, to the extent that the reliability of the original results is less than perfect, it can be expected that replication studies regress toward weaker effect sizes. This is very common knowledge. In a scholarly article one would try to compare the obtained effects to what can be expected from regression alone. The rule is simple and straightforward. Multiply the effect size of the original study (as a deviation score) by the reliability of the original test, and you get the expected replication result (in deviation scores) – as expected from regression alone. The informative question is to what extent the obtained results are weaker than the to-be-expected regressive results.
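The regression rule Fiedler describes can be sketched in a few lines of code (a hypothetical illustration: the function name and all the numbers are my own, not from the letter):

```python
# Fiedler's rule: the expected replication effect, expressed as a deviation
# from the grand mean effect size, equals the original study's deviation
# multiplied by the reliability of the original test.

def expected_replication_effect(original_d, grand_mean_d, reliability):
    """Expected replication effect size under regression toward the mean alone."""
    deviation = original_d - grand_mean_d
    return grand_mean_d + reliability * deviation

# Made-up example: an original effect of d = 0.60, a field-wide mean effect
# of d = 0.40, and an original-measure reliability of .50. Regression alone
# predicts the replication lands halfway back toward the mean.
print(round(expected_replication_effect(0.60, 0.40, 0.50), 2))  # 0.5
```

The informative comparison the letter asks for is then between this regression-only prediction and the actually obtained replication effect: only a shortfall beyond the regression-predicted value would signal non-replicability rather than mere unreliability.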

To be sure, the article’s muteness regarding regression is related to the fact that the reliability was not assessed. This is a huge source of weakness. It has been shown (in a nice recent article by Stanley & Spence, 2014, in PPS) that measurement error and sampling error alone will greatly reduce the replicability of empirical results, even when the hypothesis is completely correct. In order not to be fooled by statistical data, it is therefore of utmost importance to control for measurement error and sampling error. This is the lesson we took from Frank Schmidt (2010). It is also very common wisdom.

The failure to assess the reliability of the dependent measures greatly limits the interpretability of the results. Some studies may use single measures to assess an effect whereas others may use multiple measures and thereby enhance the reliability, according to a principle well known since Spearman & Brown. Thus, some of the replication failures may simply reflect naïve reliance on single-item dependent measures. This is of course a weakness of the original studies, but a weakness different from non-replicability of the theoretically important effect. Indeed, contrary to the notion that researchers perfectly exploit their degrees of freedom and always come up with results that overestimate their true effect size, they often make naïve mistakes.
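The Spearman–Brown principle alluded to here can likewise be sketched (again a hypothetical illustration; the function and numbers are mine, not from the letter):

```python
# Spearman-Brown prophecy formula: the reliability of a score averaged over
# k parallel items, given the reliability r of a single item.

def spearman_brown(r, k):
    """Reliability of the mean of k parallel measures."""
    return k * r / (1 + (k - 1) * r)

# Made-up numbers: a single-item measure with reliability .40 versus an
# aggregate of 10 such items.
print(round(spearman_brown(0.40, 1), 2))   # 0.4
print(round(spearman_brown(0.40, 10), 2))  # 0.87
```

On this argument, a replication of a 10-item study starts from far higher reliability than a single-item study, so comparing their “replicability” without controlling for this conflates measurement error with fragility of the underlying effect.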

By the way, this failure to control for reliability might explain the apparent replication advantage of cognitive over social psychology. Social psychologists may often rely on a single measure, whereas cognitive psychologists use multi-trial designs resulting in much higher reliability.

The failure to consider reliability refers to the dependent measure. A similar failure to systematically include manipulation checks renders the independent variables equivocal. The so-called Duhem-Quine problem refers to the unwarranted assumption that some experimental manipulation can be equated with the theoretical variable. An independent variable can be operationalized in multiple ways. A manipulation that worked a few years ago need not work now, simply because no manipulation provides a pure manipulation of the theoretical variable proper. It is therefore essential to include a manipulation check, to make sure that the very premise of a study is met, namely a successful manipulation of the theoretical variable. Simply running the same operational procedure as years before is not sufficient, logically.

Last but not least, the sampling rule that underlies the selection of the 100 studies strikes me as hard to tolerate. Replication teams could select their studies from the first 20 articles published in a journal in a year (if I correctly understand this sentence). What might have motivated the replication teams’ choices? Could this procedure be sensitive to their attitude towards particular authors or their research? Could they simply have selected studies with a single dependent measure (implying low reliability)? – I do not want to be too suspicious here but, given the costs of the replication project and the human resources, does this sampling procedure represent the kind of high-quality science the whole project is striving for?

Across all replication studies, power is presupposed to be a pure function of the size of participant samples. The notion of a truly representative design in which tasks and stimuli and context conditions and a number of other boundary conditions are taken into account is not even mentioned (cf. Westfall & Judd).


What do you think about this?


I 100% agree with his concern about the expense. Speaking with some of the replicators, we estimated the endeavor cost over 1 million euros all told. This paid for the time of 300 psychologists, who ‘donated’ their time to the endeavor. The taxpayer paid for this… Is it the best use of their tax dollars? I guess not.

I also definitely agree with his assessment about regression to the mean.



Top ten (actual) psychology books

Books are some of my favorite things. A book contains the most polished ideas a person can put together, which is really great on its own, but it has the added benefit that you can close it when it stops making sense.  😀

Especially if one reads the ones that last through time, there is really an extraordinary amount to learn. Koffka’s Principles of Gestalt Psychology is essentially an outline of all the work that is actually being done now, except that it was done in the 1930s.

The allure of the psychology book is not only in the text itself, but in the people who might have owned the book before you. For instance, my copies of the books below have been owned by several very good psychologists whom I respect greatly (and whose signatures and notes greatly enhance both the books’ informativeness and, I believe, their market value!).

Without further ado, we get to the actual books:

  1. Walden Two – B. F. Skinner. This book essentially covers Skinner’s ideas about psychology and what it could do for humanity, creating a more efficient and enjoyable life. Above all this, it is written within the context of a story and is thus accessible to anyone. In my opinion it could be given to anyone and should be required reading for basically every psychology major.
  2. The collected writings of Friedrich Nietzsche – F. W. Nietzsche. This is a man who will still be talked about in 1000 or 2000 years. He questions everything, and does a great job doing it. He gets a bad rap, but he is one of the most original and thoughtful men I have ever read. Check out my favorite page of Nietzsche.
  3. Crime and Punishment – F. M. Dostoyevsky. Anything by him is great, but some of it is really quite long. Crime and Punishment, though, is not. The story is of a man who struggles with many ethical questions revolving around the right to kill another person for the greater good. It is filled with ethical questions and dilemmas from different angles, for instance, the drunk who drinks because he knows he has let down his family. A thoroughly entertaining book. Also check out his The Idiot, which is essentially my ideal.
  4. Being and Time – M. Heidegger. This is potentially the toughest, but also the most rewarding, book on the list. Heidegger explores the question of what it means to be human, and in the process of doing so, outlines essentially everything in modern psychology. One of my favorite ideas of his is ‘the they,’ which is essentially the inauthentic life: one that does things because that is the way they are done, rather than because that is the way one wants to do them. People just say it is tough because he essentially creates a new language to talk about these things.
  5. Principles of Gestalt Psychology – K. Koffka. This is just a great reference book and something to look back on. It is a general textbook that traces a small set of basic principles (e.g., the gestalt) from the most basic of psychological questions (how do we tell objects from the background?) to emotions, memory, and personality. This is a great book just to browse sometimes (make sure to read the first and last chapters for an overview!), and you will very often go to a conference and see a poster or talk that basically redoes research they did back in the early 1900s. It is a really great book to just have around.
  6. General Systems Theory – L. von Bertalanffy. This is really a great one, that is not just about psychology, but about all of the systems in the universe, and what they have in common. Bertalanffy puts together a large number of consistencies across the entire range, from the single cell, to the human, to the society. It is a great read for those who see that psychological principles can be applied across many levels.
  7. The Death of Ivan Ilyich – L. Tolstoy. This is a bit similar to Heidegger in that it challenges the individual to live the authentic life. It is short and sweet and so awesome. It follows a ‘successful’ and well-respected judge through the final days of his dying from something that might today be called lung cancer. He writhes in agony and looks back on his life as having done all the ‘right’ things, but not enjoying them and never really living for himself. If you want someone to realize their potential in life, just pass them this book. 
  8. A Theory of Cognitive Dissonance – L. Festinger. Leon Festinger put forward probably the most important theory in the most unimportant way possible. He articulates the essence of Heidegger in a way normal people can understand. We have a desire for cognitive consistency, and when the world presents us with information that contradicts it, we experience dissonance, which then motivates us to reduce it. He shows how this applies across a large number of situations.
  9. Civilization and its Discontents – S. Freud. This is another short one, where Freud essentially talks about the tension between the individual and society. Tracing from the influence of the alpha male in the tribe to the governments and religions that decide our way of life, he suggests that as control becomes better in general, the person is less and less free to do as they choose. It is just packed and tight, which I really like.
  10. Collected writings of William James – W. James. This is another general, textbook-like reference from one of the most influential psychologists of his time. One of the first American psychologists, his ideas and writings were the basis of what most psychology students would learn for a long time to come. He was among the first to suggest the stream of consciousness, and his writing style is second to none (he wrote something like an average of 20 pages per day over his entire life). He would later become a philosopher and then even a religious thinker toward the end of his career.


So that’s ten, but honestly there are wayyyy too many to go on this list. Other great ones include Games People Play by E. Berne, a great little book for anyone you know, only about 100 pages. Also the Tao Te Ching by Laozi is a really nice introduction to Eastern philosophy and how to get on, and exist on, the path. Finally, The Naked Ape by D. Morris is a zoologist’s look at psychology and human behavior; there is so much interesting material here, and you can read more about it here.

One book that really surprised me by how bad it was is Freud’s Interpretation of Dreams (it is in my library though). It sold like a million copies and was all the rage when it first came out, but I read about the first third (so about the first 200 pages) and did not feel that I had really learned anything at all. Up to that point it was mostly just anecdotal stories about people he knew, their dreams, and how they connected. Maybe I should have skipped toward the end, but there was a major lack of actual evidence (and thus science), or even just ideas, here for me (unlike Civilization and its Discontents).


What are your favorites? There are surely ones I missed and probably don’t even know about! 😀


This is something that will be elaborated as I have more time, though check out the other articles in the meantime.

Basically, it is all about meaning and dissonance. 😀 We need meaning (e.g., knowledge, paradigms, mental structures) to achieve our goals of, for instance, staying alive.

The gathering and maintenance of our meaning structures is essentially all of the learning that has ever happened in the world, including science and religion. Meaning maintenance is also basically the major driver behind every conflict that has ever occurred, including every war ever (a war is essentially two groups with conflicting cognitions). In the end I guess it is just Memetics, but we will see.

Most of modern psychology can be said to be an investigation of the ways the world surprises us and how we react to it. For instance, emotions are generally associated with meaning violations, we feel great after we do unexpectedly well on something, and we feel angry or frustrated when someone (unexpectedly) does something mean or we cannot achieve our goal as easily as we thought.

More generally than that, I would like to remind you of a few things that Psychology has essentially shown to be true and that I just like to live my life by (this is more for non professionals, though it is based on solid data and reasoning).

number 1. You make the world more like you, simply by being you.

Everyone has someone who looks up to them (unless you are someone nobody wants to be like), and that person is literally trying to be like you, so make sure they are becoming a good person. This means living life in a ‘good’ way (a way you can be proud of), so that person can have a nice life. Assuming you care about them at all, it is important to help show them a good way.

number 2. The crowd is not always right. 

They killed Jesus. They put Galileo in prison. The crowd is Not Always Right. This can definitely be a burden, especially combined with number 1, because you have to do it the right way. I haven’t found a way out of it (besides becoming someone that nobody would want to be like, but trust me, you can’t do it any worse than me!). This is one of the reasons we look unflinchingly for the truth here; you are an example. They also killed Gandhi and put Nelson Mandela in prison. And Socrates. Socrates!

number 3. Those who believe they can, and those who believe they can’t, are usually right. 

I’m pretty sure I can! (some people even say too much so, but I don’t think so) and I definitely think you can. Really. Whatever it is. WHATEVER! 😀 And if we work together, we definitely can. So I’m taking action toward my goals, and I am not afraid of failure.

number 4. The first step at being good at something, is sucking at something. 

Being awesome takes time. About 10,000 hours, if you listen to some. Most of that is failing, and trying again. And again, and even when you fail again (at whatever it is), doing it still one more time. If you are setting your goals correctly, you will fail sometimes; trust me, I have! Don’t give up. I hear it is comfortable, but blegghh. To quote K, sometimes giving up is way harder than trying.

number 5. Not all practice is equal. 

10,000 hours, not of bullshit, but of constantly pushing the boundary, of constantly pushing harder and soft failing (as opposed to hard failing, which is basically suicide or death, which is the end of hours). This is no joke, but the amount of work that I do in an hour is equal to two hours of my colleagues’ work. And I’m putting in more hours than they are. I’m working half days on my day off. Not always at high intensity; some of it for fun.

number 6. Enjoy life

Work is not work when you enjoy it, and I spend my life doing what I like. This means a lot of Psychology (because I really like Psychology and they even pay me for it!), but also hanging with friends, video games, working out, all sorts of wasteful adventures as far as careers go. Do a few projects and don’t rush with any single one. After making significant progress on one project, I like to move to another, letting the excitement of getting back to a fun project build, wondering what step I will be able to do next or allowing myself to work through problems or difficulties.

That’s basically it. Psychology in 600 words. 😀 No, I joke, but that is most of it, I would say (especially the meaning part I wrote about at the top).

We need metrics(!), so let’s make them as good as possible

This is a post that is forthcoming at the London School of Economics Impact blog, I only post it here so I can receive feedback and link to it somewhere else. 🙂




Recently at the LSE Impact blog, several individuals have argued against metrics, at least journal-level metrics. Jane Tinkler put it well when, in a recent post here, she said, ‘One of the most common concerns that colleagues discussed with us is that impact metrics focus on what is measurable at the expense of what is important.’ One ill-informed individual went so far as to say that any attempt to measure science is ill-fated.

With such crazy comments, I feel obliged to stress three small things, the first of which is:

Metrics are good (but can be bad).

While it is true that ‘systems based on counting can be gamed,’ science has always been about taking the complexity of the world and making it measurable and predictable. Our evaluation of science should be no different. ‘What is good science?’ is a complex and many-faceted problem, but is it any more complicated than the well-being or happiness of an individual? I would suggest not. It is true that metrics often miss some nuance, but the goal of a metric is to maximally explain cases on average, rather than individually. It is psychologically interesting that scientists are ready to apply metrics to everything in the world except how good or bad their own work is.

More than ‘metrics’ simply being the way science works, their value and necessity is evidenced most clearly by their increasing usage and popularity. Who does good science is an important variable to predict within a number of contexts, and if we spent the time to read every article a potential hire has put out (or even just their top 5, or top 1), there would be little time for anything else.

Even while metrics are useful and necessary, they are still flawed. People are still people, and they will search for ways to increase their score on any widely accepted metric. This is why we clean up before we have someone over to our house and why students study for (and cheat on) exams. There may be fundamental problems with metrics, but they are due more to the humans who use them than to the metrics themselves.

It is because metrics are so necessary and widespread that bibliometrics is such an essential field of study (contrary to those who suggest it is a fake field at the fringe of science). People make real decisions, about real people’s lives, using these tools, and it is important to make sure that they are as excellent as possible.

Creating metrics (!) that matter

Rather than expecting people to stop using metrics altogether (which is unreasonable given the value they offer), we should focus on making sure the metrics are effective and accurate. Blind use of bibliometrics is as inadvisable as blind use of any other data source. Importantly, this does not mean the data are bad. The metrics we use to measure our own worth will never be problem free, but we can work to make them as useful and problem-free as possible; this is exactly what we discussed at the recent workshop on Quantifying and Analyzing Scholarly Communication on the Web at WebSci15.

On one of the large topics at this meeting, Whalen et al. discussed the potential to better understand impact by looking at how ‘topically distant’ the articles citing a target article are. My response focused more on the potential utility of social media discussions between scientists, using either their keywords or the sentiment in their discussion to learn more about the target article.

This desire for more metrics is similar to several calls at the LSE blog, which suggest looking across metrics to better understand the facets of research impact, rather than striving for any single definition or measure of quality. Such a goal is easily achievable by integrating APIs into one hub by which to collect and analyze data about scientists (help us build it! 😀 ). Such a data source would be helpful not only in understanding research impact, but also for ‘more fundamental’ questions (e.g., what team or personal characteristics lead to optimal knowledge exchange?).

Having many metrics and understanding the facets of research will allow users to form a fuller understanding of a particular researcher’s work. Such a system might also make it harder to cheat, as it becomes more difficult to manipulate all metrics (assuming they are measuring different things) than a single one alone.

More than simply a plethora of metrics, we need metrics that will help, rather than hurt, the scientific endeavor. This implies not only an ongoing effort to monitor the effects of metrics on the research enterprise, but also:

The inclusion of theory and the scientist in metric creation

It is possible to build many metrics, but the best metrics will probably incorporate accumulated scientific knowledge into their development. The metrics we use now are mostly simple counts of things, but there is much to be gained by using keywords, review ratings, and other data to their full advantage.

Scientists are humans (even if we pretend not to be) and this implies a large number of predictable behavioral patterns and biases. Many fields (e.g., Psychology, Philosophy of Science, Sociology, Marketing, Computer Science, Communication) can be utilized fruitfully to better understand scientific communication and impact.

Some examples of things science already knows about include how biases affect: the types of experiments scientists run, how they conceptualize their experiments, whom they cite, how they search for information, what happens when something unexpected happens, how a group responds to controversy, and how we can conceptualize conflict in science, among many other questions. These hypotheses could probably even be tested and confirmed among scientists, if people saw that it would be worth the time and effort.

More than just using science to make the most informative metrics, we can utilize it to try and understand the long term consequences of these particular metrics. It is generally accepted within psychology that living things (scientists probably included) look to maximize their reward while minimizing their input. Keeping this in mind in a small way while developing and implementing novel metrics can go a long way toward avoiding later problems for the field.

Ultimately, it is an empirical question how science works most efficiently, and I most definitely think we should use science to improve science, not just to build better metrics, but to build a better functioning science and society, generally speaking.

In summary

Metrics are a necessary and valuable aspect of the scientific endeavor, and thus are good in general, even if they sometimes miss nuance or are harmful in small ways.

Because metrics are so necessary and widespread, bibliometrics is an essential field of study: people make real decisions about real people’s lives using these tools, so we must make them as excellent as possible.

The study of bibliometrics can also benefit from bringing in the understanding that the rest of science has built up about: humans, groups, systems, knowledge exchange, knowledge creation, and literally over one million other topics of study.

Most generally, we should be using science to improve science, not just to build better metrics, but to build a better functioning science, generally speaking.

What does it mean to preregister a study?

Science is going through a period of change right now, and one thing being discussed a lot is ‘preregistration’ (especially in Brent Roberts’s great post on the new rules of research). While the word itself just sounds like extra work and bureaucracy, there has not been a lot of discussion of, or a firm standard for, what it means, so I’m going to suggest some things to do and avoid, and hopefully we can discuss.

Most generally, in my opinion, preregistration is ‘formally’ logging (writing down) one’s hypotheses and expectations about some experiment. This normally also implies some words about the methods of the experiment, and maybe even some indication of the planned analyses for testing those hypotheses.

And I am hoping that the idea of registration stays this way, as simple as possible. In my opinion, there is little reason that a Word file or PDF with some notes, timestamped before the experiment was conducted, could not work.

Of course someone could just ‘preregister’ every outcome and say they got it right, but this would still mean that at least they had thought of it first (and it would be especially hard to get away with if they had to write multiple versions!).

Or, if this is not formalized enough, I could see much value in using the Institutional Review Board (IRB) application as a preregistration. It already contains some information about the study hypotheses and how the researchers plan to test them. At a minimum, those interested in seeing preregistration implemented might stress that the IRB application could serve as one, or at least could (easily) be elaborated into one.

My nightmare

is that preregistration will become as time-consuming, frustrating, and seemingly worthless as getting IRB approval in the first place: just another thing slowing science and hurting everyone because of the transgressions of a few (not that I think preregistration itself is bad).

Last week I saw a presentation about ‘preregistration’ that really sort of scared me. A journal was setting up a section where authors could submit an experiment idea, method, plan of analyses, and expected results to peer review before running them.

The reviewers would suggest changes to the proposed study, and based on this the journal would ‘conditionally accept’ the paper or not. After the study had been run and analyzed, the results would again go through peer review, in order to ensure that rigor was upheld.

Who would submit themselves to such torture?! 😀 Two rounds of lengthy review just to run a study you potentially didn’t even want to do!

General remarks about preregistration (or any other change initiative)

It should be easy. It should be valuable. Ideally it even makes my job easier. At the current moment, it is difficult to see how going through two rounds of peer review is going to be attractive to researchers (even though this might allow them to get a publication with just one study). I think these things are some of the reasons why preregistration polls so poorly among psychologists when they are asked what should be done (see Table 2; more results can be seen here: osf.io/xwnrm).


Most generally, let’s use science to improve science, and talk more about what preregistration should mean! 😀

Are we watching a paradigm shift? 7 hot trends in cognitive neuroscience according to me

Some interesting ideas here. 😀

Dr. Micah Allen


In the spirit of procrastination, here is a random list I made up of things that seem to be trending in cognitive neuroscience right now, with a quick description of each. These are purely pulled from the depths of speculation, so please do feel free to disagree. Most of these are not actually new concepts; it’s more about the way they are being used that makes them trendy areas.

7 hot trends in cognitive neuroscience according to me


Obviously oscillations have been around for a long time, but the rapid increase of technological sophistication for direct recordings (see for example high density cortical arrays and deep brain stimulation + recording) coupled with greater availability of MEG (plus rapid advance in MEG source reconstruction and analysis techniques) have placed large-scale neural oscillations at the forefront of cognitive neuroscience. Understanding how different frequency bands interact (e.g. phase coupling) has become a…

View original post 805 more words

My first academic paper

Check it out yall, my first academic paper. 😀

Using science and psychology to improve the dissemination and evaluation of scientific work

Here I outline some of what science can tell us about the problems in psychological publishing and how to best address those problems. First, the motivation behind questionable research practices is examined (the desire to get ahead or, at least, not fall behind). Next, behavior modification strategies are discussed, pointing out that reward works better than punishment. Humans are utility seekers and the implementation of current change initiatives is hindered by high initial buy-in costs and insufficient expected utility. Open science tools interested in improving science should team up, to increase utility while lowering the cost and risk associated with engagement. The best way to realign individual and group motives will probably be to create one, centralized, easy to use, platform, with a profile, a feed of targeted science stories based upon previous system interaction, a sophisticated (public) discussion section, and impact metrics which use the associated data. These measures encourage high quality review and other prosocial activities while inhibiting self-serving behavior. Some advantages of centrally digitizing communications are outlined, including ways the data could be used to improve the peer review process. Most generally, it seems that decisions about change design and implementation should be theory and data driven.




Learning not to give a fack

It is the thing that holds us back the most: Other people’s opinions.

If we know that everybody is terrible at something, and has done stupid or bad things in the past, then why do we worry about what people will think of us if we fail? 

We shouldn’t. 

Become free, become secure in yourself. You ARE a worthwhile person, and seriously, there is no one in the room you aren't better than at something, even if that thing is naming the characters of your favorite book.

There is no ultimate value other than the value you place on the things around you. Instead of letting other people tell you what is valuable (this job, this style of clothing, these dance moves), decide for yourself what is valuable. Who has the right to tell you that your dance moves are worse than someone else's? They are lying if they say so.

You chase YOUR dreams. You live YOUR life. You only get one, and you'd best cash it in for all it's worth (a LOT).

When you stop living by the chains that others place on you, you can go up up up, and see more than you ever dreamed possible.

What would you do if what others thought didn't matter? Tell us below. 🙂

Story Excerpt: Birthday Memories

An excerpt from the novel a friend of mine is working on. Check it out and provide feedback! 😀

Mind of Malm

So, it’s election day (and more importantly, the release date of Halo 4). While I have my opinions, I don’t think that voicing them here is particularly appropriate, so…let’s look at Social Phobia! Below is an excerpt from Chapter 15: Birthday Memories. After a night of enjoying herself, Elise has a sobering dream, inadvertently dragging Matt along for the ride…


The sleeping girl next to me made a noise. At first, I thought she’d heard me and I had woken her up. But she looked like she was still asleep. She made the noise again. This time, I managed to identify the sound: a sniffle. And then, she released a quiet whimper. I bent forward for a closer look, squinting to look through the darkness. I could just barely make out tears rolling down the side of her face. Elise was sobbing in her sleep.

I reached over and…


Effectively preventing PTSD (with dogs)

It is estimated that the number of people suffering from post-traumatic stress disorder (PTSD) is greater than the number of people who live in Texas, costing the government (i.e., the taxpayer) billions.

Many solutions have been proposed, mostly looking at how to cure individuals once they already have PTSD. Far less frequent are solutions aimed at keeping individuals from developing PTSD in the first place. This is where we are going to focus.

One way to avoid PTSD is to avoid getting into wars, but this seems infeasible for our government. :p Another way is to reduce the number of individuals who develop PTSD after experiencing the horrors of war. For instance, research is beginning to examine the efficacy of distracting trauma victims, which keeps them from consolidating memories of the events and developing PTSD. But the President’s Council on Bioethics decided that changing people’s memories is ethically unsound, so other solutions need to be found.

And this is where dogs come in. Kind, loving, sweet dogs. All one needs to do is a simple YouTube or Google search for ‘dog soldier reunite’ for evidence of this special bond. If we can better incorporate dogs into the armed forces, they can provide the distraction and positive affect needed to keep soldiers from consolidating those traumatic memories, without the need for medication.

Now, dogs are already used for a variety of tasks, for instance to sniff out landmines and find people, but these dogs can also provide an important source of strength and love for soldiers while they are far from home, especially after they have witnessed something traumatic.

It is actually relatively simple. Dogs, especially shelter dogs, can be transported to, raised on, and maintained on base, where part of their job is to play with soldiers and cheer them up (like therapy dogs with children in hospitals): to distract them and make them smile, especially just after they return from traumatic situations.

In this way, soldiers are kept from consolidating memories of the terrible things that happen in war (again, let’s not get into wars in the first place! 😀 ). This solution can stem the tide of PTSD while avoiding the ethically unsound practice of biologically changing the way an individual’s brain works AND helping shelter dogs avoid being put down (just ship them off to war instead!).

But really, it seems to be good on both ends. Also, as soldiers develop relationships with these dogs, the dogs are likely to find good homes after they have served their tour of duty (perhaps soldiers will even help pay for the dogs?).

Obviously, research would have to be done concerning how effective the program is, but if it could prevent even 10% of soldiers from developing PTSD, it would save the government and taxpayers millions, as PTSD is currently estimated to cost society $42.3 billion each year.
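To get a rough sense of scale, here is a back-of-the-envelope calculation. The $42.3 billion annual cost comes from the estimate above; the share of that cost attributable to new military cases is a placeholder assumption I made up for illustration, not a figure from any source:

```python
# Back-of-the-envelope savings estimate.
ANNUAL_COST_USD = 42.3e9      # estimated yearly societal cost of PTSD (cited above)
MILITARY_SHARE = 0.10         # ASSUMPTION: fraction of that cost tied to new military cases
PREVENTED_FRACTION = 0.10     # the "even 10%" prevention rate from the post

savings = ANNUAL_COST_USD * MILITARY_SHARE * PREVENTED_FRACTION
print(f"Rough annual savings: ${savings:,.0f}")
```

Even under these conservative placeholder numbers, the savings land in the hundreds of millions per year, which is consistent with the "millions" claimed above.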


What do you think? Is it feasible? I need some business people to go over it.



Find me on Facebook and Twitter for less ‘serious’ content, and always remember, friends, that your opinion matters.