Sunday, March 12, 2017

Opportunism in research

When I was in seventh grade I joined the youth group of a political party. Approximately two years later we had the opportunity to meet with the leader of that party to “discuss”(1) the further directions said party could or should take.
The leader of that party was of the view that the party should take up some of the green party’s positions, not for reasons related to the content of those positions, but because the green party had dropped them.

While I could understand that getting votes is important for a political party, I found that too opportunistic and left: I felt that you should argue for things based on the arguments (and counterarguments) you have, not based on popularity.

I have changed a lot of my views since then, but I still believe that. However, maybe I now understand better that it’s not always easy not to be opportunistic, because opportunism seems to be what brings people forward. The question is whether it brings you where you originally wanted to go.
I hope that research is primarily about learning new things about the world, but it’s also politics: Should you paint your results just a little brighter than they really are? How much, if any, opposing literature should you cite? Change that effect size estimate a little to increase the chances of funding? Present only your positive sides in a CV?

Some people may say that none of these things are ok, ever, under any circumstances. I tend to agree, but then, how long do you search for counterevidence? For how long do you check whether things aren’t worse than you think? Living, breathing, consuming is inherently opportunistic, isn’t it? Working is. If you can’t work in research, you have to work elsewhere, and I don’t know if selling burgers at McDonald’s would be any less opportunistic.

So the question may be where to draw the line. That, however, is a difficult question, especially when a lot of current problems in research may stem from people taking every chance (to get money, to publish, to stay in research) with no regard for the long-term consequences… (2)




(1) Of course I know that such “discussions” are just for publicity/popularity as well.
(2) I don’t know if I managed to say what I want to say. Basically my question is: What is ok and what isn’t, and how should we decide? It would be easy to say that anything but perfect honesty is wrong, but a) perfect honesty is difficult to achieve, because you can never tell everything that *might possibly* be relevant, and b) I think it would put most people out of research pretty quickly. Some people may say that perfectionism shouldn’t be the goal, but then the question remains where to draw the line. What is ok and what isn’t?

Friday, March 10, 2017

A (very) critical "review" of CBT: individual psychotherapy

Now, to say at least something nice about the CBT program I did: the psychotherapists were nice, kind and, I think, understanding. However, in the time I was there I had four different therapists, and that’s not because I didn’t get along with one or the other, but because their contracts suck as well: their contracts ran out, so they switched to different positions.
Therefore I obviously couldn’t really discuss anything with them in depth, due to the time constraints. However, that wasn’t too bad, because I wouldn’t have known what to discuss anyway. I told them about the struggle with my PhD, but I feel like Twitter is the better place to discuss that, because the details are of course difficult to understand for people who don’t work in research.

So, it was nice that they were sympathetic and at least pretended to understand how I was feeling, but I’m not sure that did anything to help me with depression, though I do think it possibly made staying at the hospital (which is not the nicest place to be) a bit easier. But then, on the other hand, they try to figure out why you are depressed, i.e. which life events made you depressed. Maybe I’m just not good at arriving at hypotheses (regarding that) in which I feel some sense of certainty (though of course you fundamentally can’t know), but I don’t know why this happened to me. My working environment might be a candidate (for a cause), but I really don’t know; I liked my work and I liked what I was doing (mostly), and I feel it would be unfair to blame that entirely.
But then, psychologists come up with their own hypotheses, like maybe your parents avoided and ignored you as a child. Of course they state it as a hypothesis, not a fact. They barely know you, let alone your parents, so how could they? But still, a hypothesis that is spoken out loud is something you think about: is it true, is it not true, how can I know whether it’s true or not? I only have my memories and not those of other children, and no one did an experiment on my family (well, I hope ;)) to figure out if I was treated worse than other children. So how could I possibly know? And, I dunno, but I think this can be a side effect of psychotherapy: that you search for a cause, because there has to be one (hasn’t there?), and start to reinterpret (normal or maybe not so normal) events in the light of that hypothesis. (Depression certainly can help with that…). And I just don’t know if much is gained by that; if I thought of my parents as bad parents (I don’t!), how would that help me (other than avoiding the awkward moments when I have to explain that I don’t know how it happened)?

Wednesday, March 08, 2017

A (very) critical "review" of CBT: music therapy

Even though the psychotherapy program I did for depression was called “cognitive behavioral”, it had some elements which I’d classify as psychodynamic, namely the “art therapy” and the “music therapy”. Luckily I got rid of both of them pretty fast.

In the art therapy we had to do stuff like drawing our feelings. Surely you can make up a visual metaphor for everything, but to me it didn’t make a lot of sense, nor did I want to provoke the therapist with blank papers. The music therapy, however, was a lot stranger than that; it didn’t really have anything to do with music, even though that’s what it was called. Well, to be fair, I could have played an instrument, the therapist had asked if I wanted to, but I didn’t want to. So she thought up her own therapy for me, which consisted of figuring out how many parts you (I) have and naming them. Since I told her that I am, fortunately or not, only one person, who – like everyone else – of course has different personality traits or aspects but no different parts, she did the work for me and defined and named different parts. I don’t remember all of them, but one was the self and a supposed other one was a waiting person.
She wrote these “parts” on individual pieces of paper and asked me to lay them down on the floor. So I did; I placed the stack of papers on the floor. Of course my stupid part didn’t get *how* she wanted the pieces to be laid down; I had to put them beside each other, kind of like a mind map. But even when I did that, it wasn’t right. The therapist didn’t feel that I’d placed them the right distances apart, so she asked me to step on one piece of paper while she stood on another and asked me if that was a comfortable distance. It was not; I don’t like standing face to face with another person without any distance. But I’m sure this didn’t have anything to do with the pieces of paper on the floor. So I told her about my suspicions about cause and effect here. And that’s how I got rid of that therapy :).

To be honest it is quite mysterious to me why the health insurers even pay for a therapy like that. They don’t if you are not an inpatient, but if you are in a psychiatric hospital they pay for all kinds of stuff. However, there were other patients (though not many!) who said that it helped them. But then, if it’s not evidence-based (and it isn’t by the insurers’ own criteria; otherwise they would have to pay for it on an outpatient basis as well), you don’t know what else might have helped them…

Monday, February 27, 2017

A (very) critical "review" of CBT: mindfulness

In previous posts I told you that I did an inpatient CBT program for depression. I described the activity planning group here and the relaxation classes here. Today I’d like to share my experiences with the mindfulness group.

I doubt that these classes followed any protocol, so I don’t want to extend my opinion of them to everything that’s called mindfulness. At the beginning of (almost) every class we were told the mindfulness or pleasure rules: pleasure needs time, needs to be allowed, is individual… and the rest I don’t recall despite all the repetition. After that we had to focus on one sense: seeing, hearing, tasting, touching, or smelling. For example, we were given an item to hold with closed eyes and had to touch it to find out how it felt, or we had to walk through the room and focus on the different colors we could see. I hope you get the point.

Now, I think this *may* be *some* distraction, but it doesn’t really help. The mechanism by which it should help is not clear to me. When I’m feeling well, this would probably be an interesting task, but when I’m not, it’s not. When I’m fine I like a lot of little things: an accidental splash of color on my water bottle, packaging foil (the sounds and looks of it), trees (they just look interesting and come in so many different forms), and lots, lots more. But part of depression is loss of pleasure, and it doesn’t come back just from seeing things you might otherwise find beautiful or interesting. I can describe the sensations well (the roughness or softness of a material, the temperature of it, etc.), but it doesn’t mean anything. Chocolate tastes different from cardboard, but it is the same feeling. It’s just not nice. It’s effort. In some way I think the focus on what should be nice but isn’t is quite depressing in itself. It is like they try to tell you “but look, the world is nice, you just have to focus on that”, but it doesn’t feel, look, taste, or sound nice.

Sunday, February 26, 2017

Having a story to tell...

In a TV documentary I once saw, there was a prosecutor who basically said (smiling): “Yeah, I know that he fit the criteria for insanity, everyone knew, it was so obvious. I didn’t think we’d succeed with our strategy, but we had to build this *narrative* that he was sane.” And she seemed to be totally fine with it; she did her job and won, so that’s something to celebrate, isn’t it? Well, I don’t know. While she succeeded in her job, shouldn’t trials be about evidence and justice instead?

In science, I think this is even more complicated, because there isn’t necessarily a representative for every side of an argument. While there of course is peer review (which serves that purpose), peer reviewers (most of the time) can’t know what is not reported, what is not part of the narrative that the authors tell. They don’t go and dig failed preliminary studies or replications out of the authors’ file drawers; they don’t search for evidence against the authors’ claims.

Therefore I think that in research it is even more important that the full story, not just its prettiest element, is told. However, that is not what we are taught. In 2015 I attended a summer school for PhD students. It was very interesting and I learned a lot; however, we were also told that we have to tell a story, to report the aspects that are most convincing and interesting to tell. While I understand and agree with some parts of that (data can be shown in confusing or less confusing ways), I think that this advice is mostly understood (and probably meant) as “out of the many things you could tell about your study/your data, you have to pick the most interesting part and choose the aspects of the data that support it, while ignoring the rest”. I think that this is wrong.

Tuesday, February 21, 2017

Who's responsible for flawed studies?

In his blog post about a case of data fabrication in which the (alleged) fabricator was not included as an author, Neuroskeptic notes that “Authorship means responsibility”. I agree!

In this case, the grad student who collected and manipulated the data has (so far) remained unnamed, because he is, rightly or not, not listed as an author. But while it should be clear that authors have to take responsibility for what is published under their name, I think that responsibility reaches further.

In any complex study a lot of people are naturally involved. Of course it’s not the fault of the person at Siemens who wrote the scanning sequence if we then go and use it incorrectly, and – in my opinion – neither is it the fault of a supervisor if their PhD student “cleverly” manipulates the data.(1) After all, it’s not kindergarten; not everything can be checked.

But it should be a “surprise” (though I know that it not always is) if an aspect of a study turns out to be problematic after publication: you shouldn’t be able to know prior to publication that, say, the statistics are somewhat dubious; papers with fishy statistics shouldn’t be published. However, it might turn out later on that “consistent signals” may just be artifacts produced by the imaging techniques – and the fact that brains need a constant supply of blood. In such cases it would be useful to know which sequence was used and how – exactly – the resulting data were analyzed. To figure that out, the people who chose the sequence or did the analyses should be known – which, in the current mess of unregistered studies, is much more likely if they are (co-)authors. Of course they also deserve to be on the paper if they did (substantial) work on the study, but for them to become important at some point it doesn’t really matter how “scientific” their work was. The purely technical aspects can be just as important: How long did it take before the blood was frozen? How (exactly) were the patients immobilized in the scanner? Which arm was the blood pressure cuff on? Did the optic cables reliably transfer the signal, and how was that ensured? One might say that all this kind of information should be part of the publication, but often it isn’t.

Another point is which responsibilities people involved in research have beyond their specific duties. While I generally think that this responsibility doesn’t extend (much) beyond one’s area of expertise, I do think that everyone has the responsibility not to knowingly do or support unethical stuff – and any unjustified or flawed study is unethical in my opinion. In this sense funding agencies share a great deal of responsibility, and, once it becomes known, the media are also responsible for reporting scientific misconduct. However, of course it can be difficult to judge which studies are unethical or flawed.


(1) If the data are manipulated “uncleverly”, I do, however, think that the supervisor has some responsibility, because he or she should be able to “see” that.

(Unfortunately English is still not my native language. I'm sorry about that!)

Sunday, February 05, 2017

A (very) critical "review" of CBT: activity planning

I have been doing a cognitive behavioral therapy (CBT) program for depression. In yesterday’s post I described the relaxation classes we had to do as part of the program. Today I’d like to talk about the “activity planning” group.

The group was based on the notion that actions, feelings and thoughts influence each other. Therefore – we were told – we just had to get back to doing (pleasurable!) stuff in order to feel better and have more pleasant thoughts. We were handed a list of over 200 potentially pleasurable activities and had to do one such activity alone and one together with other people for each class. Each activity furthermore had to be rated on a 1-to-10 scale on the dimensions “mood”, “drive” and “ease of mind” before, during and after the respective activity.

While I understand that it is important to get doing stuff – because otherwise, with absent or low drive, you’d just lie in bed and eventually die of thirst – I find it is asking a bit much that those activities should be perceived as pleasurable. For me, it just made me feel even more like a loser, because I couldn’t get any pleasure out of going for a walk, even though I tried. (“But it’s nice out here… I just have to tell myself… It’s nice… Sun and wind and… But it sucks so much, why do I have to do this… I don’t have the energy, I’ll never arrive at the other end of the street… And how do I know if it’s nice, the world is f*cked pretty much. It’s NOT nice. Climate change, and plastic in water, air pollution… Why do I have to walk here, it’s so exhausting… And the food we get here is wrapped all in plastic… Which pollutes the seas… But the river is NICE… I have to tell myself… But I don’t want to lie, not to others and not to me… it’s not nice… But maybe liars are the better people, at least they make others happy and aren’t such failures… I hate this.” And so on…).

But what’s more important than that potentially nice activities didn’t seem to work for me is how the group was led: it was run by whichever nurse was on duty that day, and they seemed to have very different opinions about what constitutes a pleasurable activity (independent of the list). While some accepted everything, others had very specific ideas and would even get mad at you if you did an activity which they didn’t find pleasurable, for example tidying up or learning something (as an activity alone), or skyping, chatting or tweeting (as an activity with others). In their mind you had to get a massage or take a bath or color something in as a nice activity alone, and an activity with others had to involve more than virtual contact (going to the cinema or a cafe would do).
Now, I don’t find tidying up that pleasurable either, but at least something’s done afterwards, i.e. there’s less mess. If I got a massage I wouldn’t enjoy it and my place would be just as messy afterwards, so nothing would be gained. Probably I’d feel guilty for spending the money on something I didn’t even enjoy, and like a failure for not enjoying something which should be enjoyed. Therefore I prefer tidying up – it has the better consequences. However, as I said, not everybody shares that opinion, and while of course everybody is entitled to theirs, I find it unhelpful to decry the activities of people who tried their best to do (and enjoy...) them anyway. But of course not every nurse was like that; some didn’t really give a shit (so that the group ended within 5 minutes, after everyone had recited their ratings) and others were trying to be helpful and encouraging (e.g. by suggesting in which other situations the said activity could be helpful, or how the activity itself could be improved, or what could be done instead).

Overall, is it helpful to have to do a “pleasant” activity? Personally, I don’t think so, because it’s just not pleasurable, and not feeling anything nice made me feel bad. *Just* doing some stuff, without any aspiration to finding it pleasurable, *might* be helpful: not eating or not drinking (water ;)) doesn’t help either, nor does not showering or not brushing your teeth, etc. But, in my opinion, it can be unhelpful to hope for any pleasure in doing these things. Because if that’s the point and it’s too far away, you might as well just not do it.