14/03/2012
22/02/2012
I will be discussing a paper by Richard P. Hastings on the support grandparents provide to children with disabilities and their families. He also covers the consequences for the family of negative reactions from grandparents, and how clinical and educational professionals should work with grandparents to facilitate their role as a resource for families of children with disabilities.
Firstly, there is little research into the relationship between grandparents and grandchildren with a disability, or into the importance of grandparents as a source of support. However, it is known that the right amount and type of support provided by grandparents relates to better parental adjustment and, ultimately, improved quality of care for the child (Lavers & Sonuga-Barke, 1997). From this we can gather that a good relationship between grandparent and grandchild has positive effects: it improves the lives not only of the parent and child, but of the grandparents themselves. Many grandparents report deriving meaning from their roles (Kivnick, 1983), and there is a relationship between a lack of meaning in life for older people and depression (McCulloch, 1985). This suggests that we should encourage grandparents to engage with their families and grandchildren, whether the grandchildren are disabled or not.
The issue with having a grandchild with a disability is the increased burden compared to traditional, ‘nuclear’ families. What is interesting when investigating grandparents is that their reaction on discovering their grandchild’s condition is similar to the parents’ (Mosla and Ikonen-Mosla, 1985). This means that we cannot ignore the possibility of grandparents reacting negatively to the news, as this can only harm the family’s well-being. Grandparents are an important source of support to the parents and grandchildren for a number of reasons, which can be condensed to emotional, financial and practical. If they are unable to cope with the emotional distress of learning about a grandchild’s disability, the family loses a vital source of support and will require more help from public services – which are already under huge pressure and have limited funding. Despite the stress involved, grandparents have been shown to adapt more quickly than parents to the child’s disability after the initial shock. This suggests that grandparents are an essential resource for families adapting to caring for a child with a disability, and that helping grandparents help their families will reduce the burden on public services. Hastings reminds us that we need to be aware of the needs of grandparents before recruiting them, and that research into the roles associated with grandparenting a child with disabilities needs to be instigated.
My research project is a qualitative study into the roles of grandparents in a family with a child who has an intellectual disability (ID). The aim of the study is to gain an understanding of grandparents’ experiences of this issue. We feel this is the most appropriate method because there is very little research on grandparenting a grandchild with an ID, and in order to generate a hypothesis we need to identify common themes across accounts, which we will do using Interpretative Phenomenological Analysis. Hopefully this will be a small step towards understanding the relationship between grandparents and grandchildren, and the influence grandparents have on their families, when a child has an intellectual disability.
Hastings, R. P. (1997). Grandparents of children with disabilities: A review.
Kivnick, H. Q. (1983). Dimensions of grandparenthood meaning: Deductive conceptualization and empirical derivation.
Lavers, C. A., & Sonuga-Barke, E. J. S. (1997). The grandmother’s role in the adjustment of grandchildren.
McCulloch, A. W. (1985). Adjustment to old age in a changing society.
Last Monday I went to a talk given by Professor Richard Hastings, who discussed the evidence surrounding Early Intensive Behavioural Intervention (EIBI). He talked through some of the latest research demonstrating the effectiveness of this intervention for children with autism. I will briefly discuss some of his findings and will hopefully get you questioning why this is not being brought into schools as soon as possible, and how much research it takes before an intervention is brought into practice.
Firstly, an intervention is a focused teaching experience used to teach individuals a new skill or reduce unwanted behaviour. EIBI requires a comprehensive schedule of 35+ hours a week, 52 weeks per year, usually continuing for two years or more. Lovaas (1987) was the first psychologist to test this against the typical programme for children with autism, and found that 47% of children in the EIBI group achieved “normal intellectual and educational functioning”, compared to 2% of children in the control groups. Although Lovaas produced significant effects, common critiques of the study are the method used to assign subjects to control groups, the criteria for subject selection and the intellectual level of the subjects, and the choice of outcome measures (Schopler, 1989). In response to this critique the study was replicated a number of times, and each replication found a significant effect, which suggests that EIBI is effective (McEachin, Smith, & Lovaas, 1993; Smith, Green, & Wynn, 2000; Sallows & Graupner, 2005; Cohen et al., 2006). More recently, a meta-analysis reviewed all studies on EIBI and recovered the individual data from the children who participated. This meant the authors were able to analyse the effectiveness of the behavioural intervention at an individual level, rather than only looking at group means. They found that the EIBI group had significantly better outcomes than the control and comparison groups (Eldevik, Hastings, Hughes, Jahr, Eikeseth, & Cross, 2010).
The most interesting finding from this meta-analysis was the improvement in both IQ and what are called ABC gains, a measure of personal and social skills that can be assessed using the Vineland-II Adaptive Behavior Scales (Sparrow, Cicchetti, & Balla, 2005). What this means is that children with autism who undergo this behavioural intervention are more likely to make these important gains and therefore function within society with greater ease. It also means a better quality of life for the family, as they will be able to communicate more easily with their child, and the child will be more capable than before the intervention of performing everyday tasks independently. So, why is it not being implemented on a large scale?
Well, a large concern is the cost of funding this kind of intervention. The figures behind current specialist education are already very large, and as there have been huge cuts to the education system, spending more money is, unfortunately, impossible. EIBI involves paid tutors, applied behaviour analysts (ABA), ABA supervisors and 35 hours or more of intervention per week (Eldevik et al., 2010) – a lot of time and money. However, the main cost to the government of the current specialist programme for children with intellectual and developmental disabilities arises from particularly difficult children being sent to residential facilities. One argument for EIBI is that it will reduce the number of children sent into residential homes, allowing them to stay with their families. This means the cost could end up equivalent to the current amount already being spent, or even lower (Jacobson, Mulick, & Green, 1998).
Ultimately, I hope you can see from the research I’ve discussed that EIBI is an incredible intervention that allows children with autism to develop to the best of their abilities, which should in itself outweigh the overall cost to the government. However, it is unclear whether cost would even be an issue until the intervention is implemented nationwide.
In this blog I will focus on the information we provide to social-networking sites, such as Facebook and Twitter, and discuss whether researchers have the right to trawl through it without our consent.
Firstly, I would like to discuss some interesting research I found on whether your choice of social networking site (Twitter or Facebook) says anything about you. David Hughes (2011) found that there are some meaningful personality differences between users of the two sites, and some links between personality and the way the sites are used. Using the Big Five personality measures, he found that Facebook users scored higher in sociability, neuroticism and extraversion, which suggests it is the more social site. Twitter users scored higher in “need for cognition”, which suggests Twitter is more about sharing and exchanging information – interesting, isn’t it! What this says to me is that Facebook users are more at risk of having their personal information used, and quite possibly abused. This is because Facebook requests more personal details than Twitter, which is possibly why it is the more sociable site: you can discover more about another person, much as you would meeting them face-to-face. These networking sites revolve around telling people who you are, where you are, and what you have been doing. Although Facebook provides privacy settings that let you limit how much strangers can view your profile, many people do not use them, and their information is free for all. And to me, this is where the issue lies…
If you do not use the privacy settings available to you, does that amount to consenting to anyone using the information you provide?
Privacy is a huge area of concern, as research has shown that while online, people self-disclose or act out more frequently or intensely than they would in person (Suler, 2004) – this is known as the online disinhibition effect. Nosko et al. (2010) examined how much Facebook users disclose and what types of people are likely to disclose the most. They highlighted the importance of establishing the potential risks to individuals, and potentially to groups, of disclosing personal or stigmatising information. As a result of such findings, learning about the effects and potential consequences of using social networking sites has become an area of psychology in itself. Professor B. J. Fogg developed the psychology of Facebook and teaches students how to dissect different aspects of the site. They look into the use of profile pictures, status updates, comments… everything!
Getting back to my initial point about how these findings on the psychology of social networking sites are obtained: can researchers ethically use the information we provide without our informed consent? I definitely do not think they can, but the problem is that we cannot know for sure whether our information is being used or not.
I will now go over the five ethical principles and describe how they relate to this issue:
1. Beneficence and non-maleficence concerns the long-term effects on the individual of their information being used for research.
Does it contribute to societal good?
Does this contribution outweigh the costs to the individual?
Social networking sites have become an important aspect of human social interaction, so it is not surprising that so many researchers want to investigate their uses and the impact they have on us. In this case, I would say the findings could be beneficial enough to justify using people’s information. However, this would only be acceptable under certain conditions: the individual’s identity would have to remain anonymous, and the use of their information must have no direct negative consequences for them.
2. Fidelity and responsibility concerns the responsibility researchers have towards the individuals whose information they are using.
But is this the responsibility of the researchers or of the social networking site? Different laws and codes of conduct restrict each of them, so it is difficult to say whose responsibility it is.
3. Integrity requires the researcher to provide honest, accurate, reliable and truthful data.
This principle raises the issue of deception. If individuals do not know their information is being used for research, is that a form of deception? Well, they haven’t been deceived… have they? I’m not really sure, as they haven’t been told one thing while the researchers measured another. However, it breaks the bond of trust between researchers and individuals, which would be detrimental to all research fields, as participants would no longer trust researchers.
4. Justice means that everyone has a right to access and benefit from research, and that researchers should use objective methods that prevent false information from being released.
The problem with using information from these sites is that it may not be valid, as individuals may create false profiles that are not true to themselves. However, contrary to what you might think, Facebook profiles have been found generally to reflect their owners’ actual rather than idealised selves (Back et al., 2010).
5. Respect for others is the most important principle, in my opinion, when discussing this topic.
When information from these sites is used without informed consent, individuals lose the ability to determine for themselves whether it may be used. Because of this, they cannot withdraw any of the data collected about them, as they do not know it has been collected!
Overall, I believe that the privacy issues surrounding social networking sites such as Facebook are a highly relevant example of where ethical principles could be violated. I fundamentally think it is the company’s responsibility to protect users from having their data used by researchers without consent, but people should also be aware of the disinhibition effect, which could have detrimental consequences. Thankfully, Boyd (2010) found that more young people are using the privacy settings than a year previously, which hopefully means people are starting to be more sensible when using these sites.
I have chosen to discuss the debate over the use of correlational research within social neuroscience, as it is currently a very hot topic. In this blog I will talk about the study that brought this discussion to light and will discuss the pros and cons of using correlational methods in social neuroscience. As I have already described what a correlation is in my previous blog (“Can accurate predictions be made when using students’ previous exam results to predict future results?”), I will only state the key points of correlational research here.
But to start with, what is social neuroscience?
It is an area that developed out of social cognitive neuroscience (SCN) in 2003. SCN (which became an independent area of research in 2001) is a research field devoted to understanding behaviour and experience by looking at the social level (the phenomena of interest), the cognitive level (the information-processing mechanisms that give rise to social-level phenomena), and the neural level (the brain mechanisms that underlie cognitive-level processes). Basically, this means it uses physiological testing and brain imaging techniques to test social psychology phenomena; the cognitive part comes from trying to explain the processes between the neural and the social. Social neuroscience (SN) differs in that it focuses on how the brain influences social processes and vice versa. From this definition you can see why correlational research appears to be the ideal way to investigate this area – the field almost seems to have developed out of the method. As you may also have noticed, it is still a very new area within psychology. Despite this, it has already produced some groundbreaking results, many of which claim extraordinarily high correlations between localised areas of brain activity and specific behavioural and personality measures. A popular example is Naomi Eisenberger’s finding that “social pain is analogous to physical pain, alerting us when we have sustained injury to social connections”. This means that rejection actually hurts!
She reported that participants’ levels of self-reported rejection correlated at r = .88 with levels of activity in the anterior cingulate cortex, which is an incredibly high correlation (1.0 would be perfect, and 0 means there is no correlation). The correlation coefficient (r) measures how strong the relationship between two measures is, and its sign tells you whether the relationship is positive or negative. A positive relationship means that as one variable increases so does the other; negative correlations go in the opposite direction. In my previous blog there is an example of a positive correlation.
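To make the correlation coefficient concrete, here is a minimal sketch in Python of how r is computed: the covariance of the two measures divided by the product of their standard deviations. The rejection and activity numbers below are invented purely for illustration; they are not Eisenberger’s data.

```python
# Pearson's r from first principles. All data below are hypothetical.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)   # sum of squared deviations in x
    vy = sum((y - my) ** 2 for y in ys)   # sum of squared deviations in y
    return cov / (vx * vy) ** 0.5

rejection = [2.1, 3.4, 4.0, 5.2, 6.1]   # self-reported distress (made up)
activity = [0.4, 0.3, 0.7, 0.6, 0.9]    # brain activation (made up)
print(round(pearson_r(rejection, activity), 2))  # a strong positive r
```

A perfectly linear relationship gives r = 1.0 (or −1.0 if one variable falls as the other rises), while shuffled, unrelated data gives an r near 0.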
All of these high correlations aroused suspicion in some researchers, among them Vul et al. (2009), who suggested that the results are due to methodological mistakes and errors. They stated that social neuroscientists have fallen prey to the “non-independence error”. This occurs when a researcher selects the highest-scoring values from a set of data and then computes a statistic from only those values: the result is inevitably inflated, but a researcher who does not realise they have done this may go on to treat the inflated figure as meaningful. Social neuroscientists predominantly use brain-scanning techniques such as fMRI, which was the method Eisenberger used, and this is where Vul believed the fundamental issue lay. Researchers search the whole brain for any part showing a statistically significant correlation between activity and the behaviour or personality measure, and then compute a correlation coefficient in only those parts. By picking out only the best-correlated areas, the resulting coefficient will tend to be higher. This does not mean that searching for brain areas whose activation correlates significantly with a behavioural or personality measure is invalid, but it can be misleading. Vul et al. (2009) argued that another, more restrictive, type of analysis should be conducted instead.
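A small simulation makes the non-independence error tangible. The subject and voxel counts below are invented for illustration, and the data are pure random noise on both sides, yet cherry-picking the best-correlated “voxel” after the fact still produces an impressively large r:

```python
# Illustrative simulation of the non-independence error: even when no
# voxel is truly related to behaviour, the best of thousands of noise
# voxels will show a large correlation in a small sample.
import random

random.seed(42)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_subjects, n_voxels = 15, 5000          # hypothetical study size
behaviour = [random.gauss(0, 1) for _ in range(n_subjects)]

# Correlate the behaviour score with each noise "voxel", keep the best.
best = max(
    pearson_r(behaviour, [random.gauss(0, 1) for _ in range(n_subjects)])
    for _ in range(n_voxels)
)
print(f"best 'voxel' correlation from pure noise: {best:.2f}")
```

Because the same data are used both to select the voxel and to estimate its correlation, the reported r is biased upwards, which is exactly the inflation Vul et al. described.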
But is this really the way to solve this issue?
Going back to Eisenberger’s (2003) study, she based her research on the question ‘Does rejection hurt?’, which is an incredibly creative way of thinking, and she used the technology available to researchers to its full potential. Vul rightly criticises the methodological errors associated with her research (as well as a number of other studies), but is wrong to suggest that harsher standards should be placed on social neuroscience research using fMRI. As social neuroscience is a new and complex area there is room for improvement, but we should not get in the way of its development. Restricting the methods too much would leave researchers unable to investigate such innovative theories. It would also increase the risk of Type 2 errors, because the significance threshold would become too stringent. A Type 2 error occurs when the researcher fails to reject the null hypothesis when it is actually false.
The coalition government is hoping to change the current university admissions process so that students can apply once they know their actual grades, rather than using predictions made by their teachers in the January before they take their exams. Before continuing, I will explain the current admissions process (based on A-levels, which is what I did), as some of you may not know about it. Teachers use students’ AS results, which are taken in the lower sixth and contribute half of the overall A-level result, to predict how students will perform later that year when sitting their A2 exams. This is necessary because university admissions need to be finished by January, a whole eight months before you receive your real A-level results. Is this a fair process?
The argument made so far for using predicted grades in university admissions is that there is meant to be a strong correlation between the two. What I mean is that the grades teachers predict based on previous results tend to reflect students’ final grades accurately – but what about the few whose don’t? Are they at a disadvantage? Well, yes. In a paper recently published by UCAS, it turns out that 45% of predicted grades are inaccurate, which is an injustice. Students may be predicted grades too high for them to achieve, or too low, so that when they receive their real grades they realise they could have pushed for better universities. UCAS also found that only 39% of students from a low socio-economic background received accurate predicted grades. These statistics are full of correlations and relationships… but how are grades predicted, and can predictions ever be truly accurate?
In order to compute a correlation, the two variables (predicted grades and actual grades) need to be compared. Correlation is a statistical technique used to measure and describe the relationship between two variables, which can be observed on a scattergraph. Scattergraphs are used to look for any patterns or trends that exist in the data. Here is an example scattergraph (I spent ages looking for one online and, somehow, this is the best I could find), which looks at the predictive validity of the Undergraduate Medicine and Health Sciences Admission Test (UMAT) for academic performance at university:
This graph demonstrates a positive correlation: as the UMAT score (variable X) increases, so does the undergraduate GPA (variable Y). This means there is a relationship between X and Y, so, in theory, one can be used to predict the other. But are correlations accurate enough to make accurate predictions?
This is where regression comes in, as it is used to see whether we can actually make predictions using these correlations. The above scattergraph can be summarised by a linear equation, which allows us to predict how well UMAT scores correspond to an individual’s undergraduate grade point average (GPA). A similar statistical procedure could be used to see how well predicted A-level grades correspond to individuals’ actual grades. This relationship can be expressed using the equation: Y = a + bX
X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept, which is the value of Y when X = 0.
This equation represents the line of best fit, which you can see in the above scattergraph. The line of best fit identifies the central tendency of the relationship, and can therefore make a more accurate prediction than a simple correlation. This statistical procedure is regression, and the line of best fit is then known as the regression line. Although the line can be used to make predictions, it is not perfect, because a residual portion remains: the residuals are the differences between the predicted Y values on the line and the actual data points. The statistic R² (which can be seen on the above scattergraph) measures the proportion of the variation in Y that the line explains, so it indicates how accurate the predictions will be. In the above scattergraph R² = 0.02, which means the data points lie far from the regression line and the standard error of estimate is large. Overall, the findings from the study showed that UMAT scores had limited predictive validity for academic performance (Wilkinson, Zhang and Parker, 2011).
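As a sketch of the same procedure, here is how the regression line Y = a + bX and R² could be computed in Python for a handful of made-up predicted-versus-achieved UCAS-style points (the numbers are purely illustrative, not real admissions data):

```python
# Least-squares regression and R^2 from first principles, on
# hypothetical predicted-vs-achieved grade points.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope b minimises the squared residuals; a is the intercept
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def r_squared(xs, ys, a, b):
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot  # proportion of variation explained

predicted = [200, 240, 260, 280, 300, 320]  # hypothetical predicted points
actual = [190, 250, 250, 290, 310, 300]     # hypothetical achieved points

a, b = fit_line(predicted, actual)
r2 = r_squared(predicted, actual, a, b)
print(f"actual ≈ {a:.1f} + {b:.2f} × predicted, R² = {r2:.2f}")
```

With these invented numbers R² comes out high, so the line predicts well; a low R², like the 0.02 in the UMAT scattergraph, would mean the line explains almost none of the variation and predictions from it would be poor.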
So, what was the point of going into so much detail about correlation and regression, and what do they have to do with the possibility of changing to a post-exam university admissions process? Well, correlations between predicted grades or previous academic performance and later academic performance have been re-analysed, and the system has been shown to be unfair (Thorndike and Hagen, 1959; Holland and Richards, 1965; Elton and Shevel, 1969; Leonard, 1985). Professor Steven Schwartz has led many inquiries into the possibility of changing the university admissions system, and while the government agrees it would be an important change, it is the universities that are refusing to let it happen. Changing the system so that admissions were based on real grades would mean altering term times, which is apparently not worth giving everyone a fair chance of getting into university.
One last little thought: what effect would changing the system have on success rates? And by success, I mean in a number of areas, including music, art, science, maths and psychology. I ask this because of my own personal experience: I received predicted grades lower than I thought I could achieve, which had a huge impact on my confidence and my motivation to try hard. I ended up where I wanted to be, but to me the predicted grades were my final grades, as I could never get them out of my head. Did any of you have a similar issue?